Advances in Artificial Intelligence, Software and Systems Engineering: Proceedings of the AHFE 2021 Virtual Conferences on Human Factors in Software ... (Lecture Notes in Networks and Systems, 271) [1st ed. 2021] 3030806235, 9783030806231

This book addresses emerging issues concerning the integration of artificial intelligence systems in our daily lives.


English · 551 pages [548] · 2021



Table of contents:
Advances in Human Factors and Ergonomics 2021
Preface
Contents
Human Factors in Artificial Intelligence and Social Computing
Human-AI-Collaboration in the Context of Information Asymmetry – A Behavioral Analysis of Demand Forecasting
1 Introduction
2 Literature Review
3 Methodology
4 Results
5 Conclusion
References
A Minimally Supervised Event Detection Method
1 Introduction
2 Method
2.1 Novelty Filtering
2.2 SME Tagging and Bayesian Network Classification
2.3 Iterative Workflow
3 Example Results
3.1 Choosing the Novelty Filter Threshold
3.2 SME Tagging and TAN BN Classification
4 Conclusion and Future Work
References
LDAvis, BERT: Comparison of Method Application Under Corona Conditions
1 Introduction
2 Data Preparation: Nominalization, Toponyms, Topic Count
3 Material Exploration: LDAvis, BERT
3.1 LDAvis
3.2 BERT
4 Conclusion
References
Artificial Intelligence (AI) Coupled with the Internet of Things (IoT) for the Enhancement of Occupational Health and Safety in the Construction Industry
1 Introduction
2 Legislations and Government Policies
3 Machine Learning Models
4 Artificial Intelligence
4.1 Principles and Methodology
4.2 Sub-systems
5 Limitations and Challenges
6 Conclusion
References
Graph-Based Modeling for Adaptive Control in Assistance Systems
1 Introduction
2 Adaptive Control and Pathways Modeling
3 Data-Driven Modeling for Flexible Adaption
4 Application Example
5 Conclusion
References
Chatbot User Experience: Speed and Content Are King
1 Introduction
2 Methodology
3 Participant Demographics
4 Research Findings
4.1 Individual Scenarios
4.2 Chatbot Overall Satisfaction Ratings and Predictors of Net Promoter Score
5 Best Practices in Designing Chatbots
6 Conclusions
References
Is Artificial Intelligence Digital?
1 Introduction
2 Signal Characteristic
3 Crosstalk and Noise
4 Impact on Higher Level
References
Interpreting Pilot Behavior Using Long Short-Term Memory (LSTM) Models
1 Introduction
2 Methods
3 Results
4 Discussion
4.1 Interpretation of Window Length and Lag Length Decay Metrics
4.2 Applications and Next Steps
5 Conclusion
References
Felix: A Model for Affective Multi-agent Simulations
1 Introduction
2 Background
2.1 Emotions
2.2 Affective Computing
3 Project Laetitia
4 The Felix Model
4.1 Internal Components
4.2 Genetics
4.3 Fuzzy Affective System
4.4 Preliminary Data
5 Conclusions and Future Work
References
The Old Moral Dilemma of “Me or You”
1 Introduction
2 Research Background
2.1 Autonomous Vehicles (AVs) in Marketing Literature
2.2 The Ethical Issue
3 Research Design
4 Findings
5 Conclusion
References
Analysis of a Bankruptcy Prediction Model for Companies in Chile
1 Introduction
2 Methodology
3 Conclusions
References
Computational Intelligence Applied to Business and Services: A Sustainable Future for the Marketplace with a Service Intelligence Model
1 Introduction
2 Approaches to the Use of Computational Intelligence in Business
3 Incorporation of Computational Intelligence in the Design and Delivery of Services
4 A Sustainable Future for the Marketplace: Service Intelligence Model
5 Conclusions
References
Modeling Cognitive Load in Mobile Human Computer Interaction Using Eye Tracking Metrics
1 Introduction
2 Related Work
3 Proposed Approach
4 Results
5 Discussion
6 Conclusion
References
Theoretical Aspects of the Local Government’s Decision-Making Process
1 Introduction
2 Decision-Making Process and Related Theories
3 The Algebraic System and the Theory of Local Government’s Management
4 The Decision Making as Inference
5 Conclusion
References
Application of AI in Diagnosing and Drug Repurposing in COVID 19
1 Introduction
1.1 Diagnosis
1.2 Prognosis
1.3 Dashboards and Projections
2 Research and Development of Vaccines and Drugs
3 Prevention/Communication
4 Challenges
5 Conclusion
References
Using Eye-Tracking to Check Candidates for the Stated Criteria
1 Introduction
2 Technique and Research Methods
3 Search for Criteria for Decoding a Pupillogram in the “Gray Zone” and Results
4 Discussion
5 Main Results
References
Towards Understanding How Emojis Express Solidarity in Crisis Events
1 Introduction and Related Work
2 Data
3 Understanding the Emojis of Solidarity
3.1 RQ1: How Useful Are Emojis as Features in Classifying Expressions of Solidarity?
3.2 RQ2: What Sentiments Are Conveyed by Emojis in Solidarity Expressions During Crisis Events?
3.3 RQ3: How Can Emojis Be Used to Understand the Diffusion of Solidarity Expressions Over Time?
4 Conclusion and Discussion
References
Deep Neural Networks as Interpretable Cognitive Models for the Quine’s Uncertainty Thesis
1 Introduction: Quine's Uncertainty Thesis Meets the Black Box
2 Gestalt Cognition Uncertainty as an Enhanced Version of Quine's Uncertainty Thesis
3 Approaches for Tackling Gestalt Cognition in Deep Neural Networks
4 Conclusions: Interpretability as a Mixture of Computer Vision and Human Cognition
References
Artificial Intelligence: Creating More Possibilities for Programmatic Advertising
1 Challenge: Programmatic Advertising is Confronted with Increasingly Complicated
2 Solution: AI Drives the Evolution of the Whole Chain of Programmatic Advertising
2.1 Deep Marketing Insight
2.2 Intelligent Advertising
2.3 Data Feedback and Optimization
3 Empowerment: Specific Application of AI Technology in Programmatic Advertising
3.1 Understanding Diverse Unstructured Data
3.2 Building Up Deep Users’ Portrait
3.3 Achieving Creative Contents
4 Case: Programmatic Marketing Solution for Full Stack AI Decision-Making Platform
4.1 MIP, the AI Decision-Making Platform of Deep Zero|Pinyou [4]
4.2 The Digital and Intelligent Marketing Practice of Friso [4]
5 Reflection: Do not Fall into the Misunderstanding That Marketing is Only Technology-Based
References
A CRISP-DM Approach for Predicting Liver Failure Cases: An Indian Case Study
1 Introduction
2 Methodology
2.1 Business Understanding
2.2 Data Understanding
2.3 Data Preparation
2.4 Modeling
2.5 Evaluation
3 Results and Discussion
4 Conclusion
References
Towards a Most Suitable Way for AGI from Cognitive Architecture, Modeling and Simulation
1 Introduction
2 Artificial Intelligence, AGI and Cognitive Architecture
2.1 Artificial Intelligence and AGI
2.2 AI/AGI and Cognitive Architecture Share Same Ambition
2.3 Modeling, Cognitive Architecture and Simulation Aims at AGI
3 Cognitive Architecture: A Framework for Achieving AGI
3.1 AGI-Oriented and Model-Based Human Cognitive Characteristics
3.2 AGI-Oriented Cognitive Architecture and Its Mechanism
3.3 Implementation of Cognitive-Model-Based AGI Applications
4 Parameterization of Cognitive Characteristics in Cognitive Architecture and Modeling with PVT
4.1 Studies Motivation, Task and Cognitive Process
4.2 Parameterization, Modeling and Simulation
4.3 The Results
5 An AGI Case for Autonomous Maintenance/Assembly Task
5.1 Autonomous Control Task and Cognitive Process
5.2 Cognitive Modeling and Simulation
5.3 The Results
6 Conclusion
References
Promoting Economic Development and Solving Societal Issues Within Connected Industries Ecosystems in Society 5.0
1 Introduction
2 Key Areas Needing Change
3 Addressing These Challenges
3.1 Business Resiliency
3.2 Food Supply Chain
3.3 Digital Payments
4 Conclusions
References
Artificial Intelligence and Tomorrow’s Education
1 Introduction
2 Applications of AI in Education
2.1 The Automation of Administrative Tasks
2.2 Collection and Analysis of Information to Create Smart Content
2.3 The Implementation of Virtual Assistants in the Teaching-Learning Process
2.4 The Potential Delivery of Lectures by Humanoid Robots with AI
3 Conclusions
References
Towards Chinese Terminology Application of TERMONLINE
1 Introduction
2 The Establishment of TERMONLINE
3 The Advancement of TERMONLINE
4 The Service Functions of TERMONLINE
5 A Case-Based Term Query and Retrieval
6 Conclusion
References
AsiRo-μ: A Multi-purpose Robotic Assistant for Educational Inclusion of Children with Multiple Disabilities
1 Introduction
2 Related Work
3 Proposed Method
3.1 Robotic Assistant
3.2 Gesture Recognition System
3.3 Systems Integration
4 Pilot Experiment and Preliminary Results
5 Conclusions
References
Intelligent Agent Proposal in a Building Electricity Monitoring System for Anomalies’ Detection Using Reinforcement Learning
1 Introduction
2 Anomalies Detection in Building Electricity Consumption
3 Proposal
4 Conclusions
References
Voltammetric Electronic Tongues Applied to Classify Sucrose Samples Through Multivariate Analysis
1 Introduction
2 Methodology
2.1 Data Acquisition
2.2 Classifier Design
3 Results
3.1 Training
3.2 Validation
4 Conclusions
References
Proposal for a Platform Based on Artificial Vision for the Identification and Classification of Ceramic Tiles
1 Introduction
2 Bibliographic Review
3 Proposal
4 Conclusions
References
Tourism Recommender System Based on Natural Language Classifier
1 Introduction
2 Method
3 Architecture Proposal
4 Evaluation
4.1 Evaluation Using Google Analytics
4.2 Evaluation by Survey
5 Conclusions
References
Real-Time Emotion Recognition for EEG Signals Recollected from Online Poker Game Participants
1 Introduction
2 Related Work
3 Methodology and Materials
3.1 EEG Data Collection
3.2 Training Data, Test Data, and Feature Extraction
4 Results
5 Conclusions
References
Knowledge Discovery About Cancer Based on Fuzzy Predicates
1 Introduction
2 Materials and Methods
2.1 Materials
2.2 Methods
3 Results
3.1 Analysis of Relationship Between Variables
3.2 Correlation Monoplot
3.3 Similarities Between Observations
4 Discussions
5 Conclusion
References
Teaching Brooks Law Based on Fuzzy Cognitive Maps and Chatbots
1 Introduction
2 Materials and Methods
2.1 Materials
2.2 Methods
2.3 Results
3 Discussion
4 Conclusion
References
Algorithm for the Signal Validation in the Emergency Situation Using Unsupervised Learning Methods
1 Introduction
2 Review of Previous Studies and VAE and LSTM
2.1 Review of Data-Driven Approaches for Signal Validation
2.2 Methods
3 Development of Signal Validation Algorithm
3.1 Signal Detection Algorithm
3.2 Optimization
4 Result of Validation for Signal Validation Algorithm
5 Conclusion
References
AI Evolution Trend Analysis Based on Semantic Network Analysis
1 Introduction
1.1 Discussion on News Frames
1.2 Discussion on Semantic Network Analysis and Centrality Research
2 Research Questions and Methods
2.1 Research Issues
2.2 Method
3 Results
3.1 Keyword Comparison Analysis by Report Period
3.2 Analysis of Semantic Network and Centrality
4 Conclusion
References
Decision Support System and Human Factors
Agile Circular Design
1 Introduction
2 Objective
3 Process and Collaboration Models
3.1 Circular Design Approaches
3.2 Agile Workflow
4 Agile Circular Design
4.1 Circular Economy Requirements
4.2 Circularity Measurement
4.3 Automation Management
4.4 Integrated ACD Process Model
5 Further Research
References
Object Detection Using Artificial Intelligence: Predicting Traffic Congestion to Improve Emergency Response to Mass Casualty Incidents
1 Introduction
2 Methodology
2.1 Data
2.2 Vehicle Detection
2.3 Traffic Parameters and Congestion Classification
2.4 Traffic Status
3 Analysis
4 Conclusions and Future Work
References
Modelling Key Performance Indicators for Improved Performance Assessment in Persistent Maritime Surveillance Projects
1 Introduction
2 Related Work
3 Performance Assessment Model
4 Results and Discussion
5 Conclusions
References
Compositional Sonification of Cybersecurity Data in a Baroque Style
1 Introduction
2 Review of Sonification Methods
3 “Baroque Style” Manual Sonification from Algorithms
4 Sonification with a Music Sequencer in Baroque Style
5 Performing Sonification on Classical Instruments
6 Comparison of Different Methods
7 Conclusions
References
Demand Forecasting Using Ensemble Learning for Effective Scheduling of Logistic Orders
1 Introduction
2 Real-World Use Case on Demand Forecasting for Logistic Orders
2.1 Advanced Feature Engineering and Unsupervised Time Series Clustering
2.2 Long-Term Demand Forecasting on Group Level
2.3 Short-Term Demand Forecasting on Product Level
3 Results and Discussion
4 Conclusion
References
A Test Method of Inverter Performance Parameters Based on Virtual Instrument
1 Introduction
2 Research Scheme
3 The Hardware Design
4 The Software Design
4.1 The Login Modules
4.2 Main Menu Module
4.3 Three-Phase Unbalance Measurement
4.4 Voltage and Voltage Deviation Measurement
4.5 Frequency and Frequency Deviation Measurement
4.6 Harmonic Measurement
4.7 Data Store View Subroutine
5 Program Debugging
5.1 Debugging Results of Three-Phase Unbalance Calculation Module
5.2 Harmonic Analysis Subroutine Debugging Results
6 Conclusion
References
New Findings on the Calculation of the Catenary
1 Foreword and Summary of Known Findings
1.1 Catenary and Rope Line
1.2 The Chain in a Homogeneous Gravity Field - and in Weightlessness
2 Chain Mathematics (State of the Art)
2.1 Chain Shape and Scaling
2.2 The Scaling Factor “a”
3 Mathematics: Asymmetrical Suspension and Tangent Angle
3.1 Scaling Factor: Any Measuring Point Positions
3.2 Asymmetrical Chain Pitch and Tangent Angle
3.3 General Valid Method to Determine the Scaling Factor and Chain’s Length
4 Summary
References
Design and Creation of Educational Video Games Using Assistive Software Instruments
1 Introduction
2 Background Works
2.1 Software Instruments for Game Creation in the APOGEE Platform
2.2 Content Management Systems
3 The Workflow of Design and Creation of Educational Video Games Using Assistive Software Instruments
4 Online System for the Management of Learning and Gaming Content
5 Practical Platform Validation Through Creation of an Educational Video Game
6 Conclusion
References
Study and Implementation of Cross-Platform Real-Time Message Middleware in ATC Systems
1 Introduction
2 The Structure of Middleware
3 The Implementation of Middleware
3.1 Data Structure
3.2 Models of Transmission
3.3 MQM
3.4 CDCS State Management
3.5 Data Compression and Encryption
4 Application Experiment
5 Conclusion and Prospect
References
Building an Educational Product: Constructive Alignment and Requirements Engineering
1 Introduction
2 Related Work
2.1 Engineering Concepts
2.2 Educational Concepts
2.3 Previous Research on Requirements in Educational Software
3 Methodology
3.1 Combining Requirements Engineering and Constructive Alignment
3.2 Step-By-Step Description of the Approach Used in the Case Study Project
4 Results and Discussion
4.1 The Student Profile and User Persona
4.2 Intended Learning Outcomes (ILOs)
4.3 Product’s Functionality, Learning Activities and Content
4.4 Evaluation and Assessment
5 Conclusion
References
Analysis of the Application of Steganography Applied in the Field of Cybersecurity
1 Introduction
2 Related Projects: A Brief Review
3 Design of an Algorithm to Hide Information in a Digital Image Using the LSB Technique by Fortran
3.1 Design of an Algorithm for Information Concealment in a PGM Image
4 Conclusions
References
Multiple LDPC Code Combined with OVCDM to Improve Signal Coding Efficiency and Signal Transmission Effects
1 Introduction
2 Multiple LDPC Coding and Decoding System
2.1 Features of LDPC
2.2 Encoding and Decoding Characteristics of Multiple LDPC Codes
3 Overlapped Code Division Multiplexing System
3.1 OVCDM Systems
3.2 Operating Principle of OVCDM
4 Research on Improvement of WSN Data Coding and Decoding by Using Multiple LDPC and OVCDM System
5 The Conclusion
References
An Intelligent Systems Development for Multi COVID-19 Related Symptoms Detection: A Framework
1 Introduction
2 Background
3 Methodology
4 Design
4.1 Fever Detection Subsystem
4.2 Dry Cough Detection Subsystem
4.3 Shortness of Breath Detection Subsystem
4.4 Symptoms Integration Subsystem
5 Preliminary Results
6 Conclusion
References
Analysis of Influencing Factors of Depth Perception
1 Introduction
2 Method
3 Results
3.1 Depth Perception and Gender Factors
3.2 Depth Perception and Hobby (Sports) Factors
3.3 Depth Perception and Binocular Factors
3.4 Depth Perception and Interaction Factors
4 Conclusion
Software Instruments for Analysis and Visualization of Game-Based Learning Data
1 Introduction
2 Background Works
3 Online System for Learning and Gaming Analytics of Data About Playing Enriched Educational Maze Games
3.1 Software Architecture
3.2 The Organization of Its Data Model
3.3 Organization of the System User Interface
4 Conclusion
References
Work Accident Investigation Software According to the Legal Requirements for Ecuadorian Companies
1 Introduction
2 Materials and Methods
3 Results
3.1 System Architecture and Functionalities
3.2 Opening a New Case
3.3 Safety Inspection and Witnesses Interview
3.4 Root Cause Analysis and Investigation Report
4 Conclusions
References
Systemic Analysis of the Territorial and Urban Planning of Guayaquil
1 Introduction
2 Systemic Analysis of the Planning of Guayaquil
2.1 Rural Territory
2.2 Endogenous Resources
3 Conclusions
References
Spatial Model of Community Health Huts Based on the Behavior Logic of the Elderly
1 Introduction
2 Analysis of User Behaviors
2.1 User Requirements Analysis
2.2 Building Task Flow
3 Spatial Design Model of Health Hut
3.1 Environmental Psychology and Scene Theory
3.2 Design Model of the Health Hut
4 Facility Layout of the Health Hut
5 Man-Machine Engineering Simulation by Jack Software
5.1 Establishment of Virtual Person and Device Model
5.2 Task Flow and Simulation Evaluation
6 Conclusion
References
Analysis of the Proposal for the SOLCA Portoviejo Hospital Data Network Based on QoS Parameters
1 Introduction
2 Related Works
3 Methodology
4 Analysis of the Current Situation
5 Proposals for Improvements
6 Simulation and Validation of Results
7 Conclusions
References
Design and Optimization of Information Architecture of NGC Cloud Platform
1 Introduction
2 Research Methods
2.1 Cloud Platform Function Points Combing
2.2 Invite Participant
3 Research Results and Analysis
3.1 Cards Correlation Matrix
3.2 Cards Correlation Matrix
3.3 Information Architecture Dendrogram
3.4 The Results of Analysis
4 Conclusions
References
Design of Self Rescue Equipment for High Rise Building Fire Based on Integrated Innovation Theory
1 Introduction
2 Investigation and Analysis of High Rise Building Fire Self Rescue Products
2.1 Analysis of Advantages and Disadvantages of Fire Self Rescue Products
2.2 Characteristic Requirements of Fire Self Rescue Products
3 Demand Analysis of Victims in Fire
4 Analysis of Design Principles of Fire Self Rescue Products
5 Design Process of Self Rescue Products for High Rise Buildings
5.1 User Demand Survey
5.2 Design Concept
5.3 Concept Generation and Sketch Divergence
5.4 Design Presentation
5.5 Product Size
6 Conclusion
References
Adoption, Implementation Information and Communication Technology Platform Application in the Built Environment Professional Practice
1 Introduction
2 Research Design and the Study Population
3 Research Design and the Study Population
4 Data Collection Instrument, Data Presentation and Discussion
5 Recommendations
References
The Influence of E-commerce Web Page Format on Information Area Under Attention Mechanism
1 Introduction
2 Background
2.1 Web Page Format Design
2.2 Visual Attention Theory
2.3 Eye Movement Measurement Methods
3 Research Method
3.1 Experimental Design
3.2 Materials
3.3 Apparatus and Participants
3.4 Data Measurement and AOI Definitions
4 Results
4.1 Heat Map
4.2 AOI Data Analysis
5 Discussion and Conclusion
5.1 Browse Path
5.2 Attention
References
Usage of Cloud Storage for Data Management in the Built Environment
1 Introduction
2 Data Management in the Construction Industry
2.1 Cloud Storage
3 Lessons Learnt
4 Conclusion and Recommendation
References
Digital Model for Monitoring SME Architecture Design Projects
1 Introduction
2 State of Art
2.1 Innovation
2.2 Monitoring and Evaluation
3 Proposal
4 Validation
4.1 Understanding the Need
4.2 Objectives of the Project
4.3 Digital Prototype Model for Monitoring
5 Conclusions
References
Charging and Changing Service Design of New Energy Vehicles Under the Concept of Sustainable development—A Case Study of NIO
1 NIO Power
2 Sustainable Theory and 3R Principle
3 Charging and Changing Service Design
3.1 Stakeholders
3.2 Service Pain Points
3.3 Service Situation
3.4 Service Blueprint
4 Service Design Strategies for Sustainable Development
4.1 User-Centric
4.2 Full-Service Process Tracking
4.3 Setting up a Service Loop
5 Conclusion
References
Human Factors in Energy
What Niche Design Can Learn from Acceptance Mining
1 Introduction
2 Theoretical Background
2.1 Niche Design and Development
2.2 Technology Acceptance Research
2.3 Niche Development Using Technology Acceptance Approaches
3 Acceptance-Mining Approach
4 Results
4.1 Technology-Related Aspects and Technology Evaluation
4.2 Niche-Related Aspects and Evaluations
4.3 Requirements for Niche Development
5 Discussion
6 Conclusion
References
When Dullscreen is Too Dull
1 Introduction
2 System Overviews
3 Operator Study on System Overviews (OSSO)
4 OSSO Findings
5 Discussion
References
Examining the Use of the Technology Acceptance Model for Adoption of Advanced Digital Technologies in Nuclear Power Plants
1 Introduction
2 Common Barriers to NPP Modernization
2.1 Perceived Value and ROI of Digital Technology
2.2 Perceived Risk: Licensing, Regulatory, and Cybersecurity
3 Addressing Attitudinal Factors for Technology Adoption
4 Using TAM as a Framework for Technology Acceptance
4.1 Introduction to the Technology Acceptance Model
4.2 Applications and Extensions of the TAM
5 Applying TAM to Technology Acceptance in NPPs
6 Final Remarks and Future Directions
References
Developments in the Application of Nano Materials for Photovoltaic Solar Cell Design, Based on Industry 4.0 Integration Scheme
1 Introduction
2 Overview of Nanomaterial for Solar Cells
3 Latest Research and Development
3.1 Photoelectric Effect and the P-N Junction
4 Deposition Methods
4.1 Spin Coating
4.2 Dip Coating
4.3 Spray Pyrolysis
4.4 Chemical Vapour Deposition (CVD)
4.5 Other Deposition Methods
5 Modelling and Simulation
5.1 Description of the Model
5.2 Modeling Under the Matlab/Simulink/SCAPS
5.3 Simulation
6 Conclusion
References
Autonomous Emergency Operation of Nuclear Power Plant Using Deep Reinforcement Learning
1 Introduction
2 Emergency Operation Analysis
2.1 Emergency Operation Analysis Based on FRP
2.2 Work Domain Analysis by Using Abstraction Decomposition Space
3 Development of an Algorithm for Emergency Operation
4 Training and Experiment
4.1 Training Environment
4.2 Training and Stability
4.3 Experiment Results
5 Conclusion
References
Author Index



Lecture Notes in Networks and Systems 271

Tareq Z. Ahram Waldemar Karwowski Jay Kalra   Editors

Advances in Artificial Intelligence, Software and Systems Engineering Proceedings of the AHFE 2021 Virtual Conferences on  Human Factors in Software and Systems Engineering, Artificial Intelligence and Social Computing, and Energy, July 25–29, 2021, USA

Lecture Notes in Networks and Systems Volume 271

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/15179

Tareq Z. Ahram · Waldemar Karwowski · Jay Kalra



Editors

Advances in Artificial Intelligence, Software and Systems Engineering Proceedings of the AHFE 2021 Virtual Conferences on Human Factors in Software and Systems Engineering, Artificial Intelligence and Social Computing, and Energy, July 25–29, 2021, USA


Editors Tareq Z. Ahram Institute for Advanced Systems Engineering University of Central Florida Orlando, FL, USA

Waldemar Karwowski University of Central Florida Orlando, FL, USA

Jay Kalra Department of Pathology, College of Medicine University of Saskatchewan Royal University Hospital Saskatoon, SK, Canada

ISSN 2367-3370 ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-3-030-80623-1 ISBN 978-3-030-80624-8 (eBook)
https://doi.org/10.1007/978-3-030-80624-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Advances in Human Factors and Ergonomics 2021

AHFE 2021 Series Editors Tareq Z. Ahram, Florida, USA Waldemar Karwowski, Florida, USA

12th International Conference on Applied Human Factors and Ergonomics and the Affiliated Conferences (AHFE 2021) Proceedings of the AHFE 2021 Virtual Conferences on Human Factors in Software and Systems Engineering, Artificial Intelligence and Social Computing, and Energy, July 25–29, 2021, USA.

Advances in Neuroergonomics and Cognitive Engineering: Hasan Ayaz, Umer Asgher and Lucas Paletta
Advances in Industrial Design: Cliff Sungsoo Shin, Giuseppe Di Bucchianico, Shuichi Fukuda, Yong-Gyun Ghim, Gianni Montagna and Cristina Carvalho
Advances in Ergonomics in Design: Francisco Rebelo
Advances in Safety Management and Human Performance: Pedro M. Arezes and Ronald L. Boring
Advances in Human Factors and Ergonomics in Healthcare and Medical Devices: Jay Kalra, Nancy J. Lightner and Redha Taiar
Advances in Simulation and Digital Human Modeling: Julia L. Wright, Daniel Barber, Sofia Scataglin and Sudhakar L. Rajulu
Advances in Human Factors and System Interactions: Isabel L. Nunes
Advances in the Human Side of Service Engineering: Christine Leitner, Walter Ganz, Debra Satterfield and Clara Bassano
Advances in Human Factors, Business Management and Leadership: Jussi Ilari Kantola, Salman Nazir and Vesa Salminen
Advances in Human Factors in Robots, Unmanned Systems and Cybersecurity: Matteo Zallio, Carlos Raymundo Ibañez and Jesus Hechavarria Hernandez
Advances in Human Factors in Training, Education, and Learning Sciences: Salman Nazir, Tareq Z. Ahram and Waldemar Karwowski
Advances in Human Aspects of Transportation: Neville Stanton
Advances in Artificial Intelligence, Software and Systems Engineering: Tareq Z. Ahram, Waldemar Karwowski and Jay Kalra
Advances in Human Factors in Architecture, Sustainable Urban Planning and Infrastructure: Jerzy Charytonowicz, Alicja Maciejko and Christianne S. Falcão
Advances in Physical, Social & Occupational Ergonomics: Ravindra S. Goonetilleke, Shuping Xiong, Henrijs Kalkis, Zenija Roja, Waldemar Karwowski and Atsuo Murata
Advances in Manufacturing, Production Management and Process Control: Stefan Trzcielinski, Beata Mrugalska, Waldemar Karwowski, Emilio Rossi and Massimo Di Nicolantonio
Advances in Usability, User Experience, Wearable and Assistive Technology: Tareq Z. Ahram and Christianne S. Falcão
Advances in Creativity, Innovation, Entrepreneurship and Communication of Design: Evangelos Markopoulos, Ravindra S. Goonetilleke, Amic G. Ho and Yan Luximon
Advances in Human Dynamics for the Development of Contemporary Societies: Daniel Raposo, Nuno Martins and Daniel Brandão

Preface

Researchers and business leaders are called to address important challenges caused by the increasing presence of artificial intelligence and social computing in the workplace environment and daily lives. Roles that have traditionally required a high level of cognitive abilities, decision making and training (human intelligence) are now being automated. The AHFE International Conference on Human Factors in Artificial Intelligence and Social Computing promotes the exchange of ideas and technology enabling humans to communicate and interact with machines in almost every area and for different purposes. The recent increase in machine and systems intelligence has led to a shift from the classical human–computer interaction to a much more complex, cooperative human system work environment requiring a multidisciplinary approach. The first part of this book deals with those new challenges and presents contributions on different aspects of artificial intelligence, social computing and social network modeling taking into account those modern, multifaceted challenges. The AHFE International Conference on Human Factors, Software and Systems Engineering provides a platform for addressing challenges in human factors, software and systems engineering pushing the boundaries of current research. In the second part of the book, researchers, professional software and systems engineers, human factors and human systems integration experts from around the world discuss next-generation systems to address societal challenges. The book covers cutting-edge software and systems engineering applications, systems and service design and user-centered design. Topics span from analysis of evolutionary and complex systems, to issues in human systems integration, and applications in smart grid, infrastructure, training, education, defense and aerospace. The last part of the book reports on the AHFE International Conference of Human Factors in energy, addressing oil, gas, nuclear and electric power industries. It covers human factors/systems engineering research for process control and discusses new energy business models.


In keeping with a system that is vast in its scope and reach, the chapters in this book cover a wide range of topics. The chapters are organized into three main sections as follows:

Human Factors in Artificial Intelligence and Social Computing
1. Human Factors in Artificial Intelligence and Social Computing

Human Factors in Software and Systems Engineering
2. Decision Support System and Human Factors

Human Factors in Energy
3. Human Factors in Energy

The research papers included here have been reviewed by members of the International Editorial Board, to whom our sincere thanks and appreciation go. They are listed below:

Software and Systems Engineering
A. Al-Rawas, Oman
T. Alexander, Germany
S. Belov, Russia
O. Bouhali, Qatar
H. Broodney, Israel
A. Cauvin, France
S. Cetiner, USA
P. Fechtelkotter, USA
F. Fischer, Brazil
S. Fukuzumi, Japan
C. Grecco, Brazil
N. Jochems, Germany
G. Lim, USA
D. Long, USA
M. Mochimaru, Japan
C. O’Connor, USA
C. Orłowski, Poland
H. Parsaei, Qatar
S. Pickl, Germany
S. Ramakrishnan, USA
J. San Martin Lopez, Spain
K. Santarek, Poland
M. Shahir Liew, Malaysia
D. Speight, UK
M. Stenkilde, Sweden
T. Winkler, Poland
H. Woodcock, UK

Artificial Intelligence and Social Computing
A. Al-Rawas, Oman
T. Alexander, Germany
S. Belov, Russia
O. Bouhali, Qatar
H. Broodney, Israel
A. Cauvin, France
S. Cetiner, USA
P. Fechtelkotter, USA
F. Fischer, Brazil
S. Fukuzumi, Japan
R. Goonetilleke, Hong Kong
C. Grecco, Brazil
N. Jochems, Germany
S. Koley, India
G. Lim, USA
D. Long, USA
M. Mochimaru, Japan
C. O’Connor, USA
C. Orłowski, Poland
H. Parsaei, Qatar
S. Pickl, Germany
S. Ramakrishnan, USA
J. San Martin Lopez, Spain
K. Santarek, Poland
M. Shahir Liew, Malaysia
J. Sheikh, Pakistan
D. Speight, UK
M. Stenkilde, Sweden
T. Winkler, Poland
H. Woodcock, UK
B. Xue, China

Human Factors in Energy: Oil, Gas, Nuclear and Electric Power Industries
S. Al Rawahi, Oman
R. Boring, USA
P. Carvalho, Brazil
S. Cetiner, USA
D. Desaulniers, USA
G. Lim, USA
P. Liu, China
E. Perez, USA
L. Reinerman-Jones, USA
K. Söderholm, Finland

We hope that this book, which reports on the international state of the art in human factors research and applications in artificial intelligence and systems engineering, will be a valuable source of knowledge enabling human-centered design of a variety of products, services and systems for global markets.

July 2021

Tareq Z. Ahram Waldemar Karwowski Jay Kalra

Contents

Human Factors in Artificial Intelligence and Social Computing

Human-AI-Collaboration in the Context of Information Asymmetry – A Behavioral Analysis of Demand Forecasting (Tim Lauer and Sophia Wieland) 3
A Minimally Supervised Event Detection Method (Matthew Hoffman, Sam Bussell, and Nathanael Brown) 14
LDAvis, BERT: Comparison of Method Application Under Corona Conditions (Heike Walterscheid, Veton Matoshi, Katarzyna Wiśniewiecka-Brückner, Klaus Rothenhäusler, and Frank Eckardt) 23
Artificial Intelligence (AI) Coupled with the Internet of Things (IoT) for the Enhancement of Occupational Health and Safety in the Construction Industry (Kavitha Palaniappan, Chiang Liang Kok, and Kenichi Kato) 31
Graph-Based Modeling for Adaptive Control in Assistance Systems (Alexander Streicher, Rainer Schönbein, and Stefan Pickl) 39
Chatbot User Experience: Speed and Content Are King (Jason Telner) 47
Is Artificial Intelligence Digital? (Vaclav Jirovsky and Vaclav Jirovsky Jr.) 55
Interpreting Pilot Behavior Using Long Short-Term Memory (LSTM) Models (Ben Barone, David Coar, Ashley Shafer, Jinhong K. Guo, Brad Galego, and James Allen) 60
Felix: A Model for Affective Multi-agent Simulations (Giovanni Vincenti and James Braman) 67
The Old Moral Dilemma of “Me or You” (Maria Colurcio and Ambra Altimari) 75
Analysis of a Bankruptcy Prediction Model for Companies in Chile (Benito Umaña-Hermosilla, Hanns de la Fuente-Mella, Claudio Elórtegui-Gómez, Jorge Ferrada-Rodríguez, and Mauricio Arce-Rojas) 83
Computational Intelligence Applied to Business and Services: A Sustainable Future for the Marketplace with a Service Intelligence Model (Mariana Alfaro Cendejas) 91
Modeling Cognitive Load in Mobile Human Computer Interaction Using Eye Tracking Metrics (Antony William Joseph, J. Sharmila Vaiz, and Ramaswami Murugesh) 99
Theoretical Aspects of the Local Government’s Decision-Making Process (Maryna Averkyna) 107
Application of AI in Diagnosing and Drug Repurposing in COVID 19 (G. K. Ravikumar, Skanda Bharadwaj, N. M. Niveditha, and B. K. Narendra) 116
Using Eye-Tracking to Check Candidates for the Stated Criteria (Oksana Isaeva, Marina Boronenko, Yuri Boronenko, and Vladimir Zelensky) 125
Towards Understanding How Emojis Express Solidarity in Crisis Events (Sashank Santhanam, Vidhushini Srinivasan, Khyati Mahajan, and Samira Shaikh) 133
Deep Neural Networks as Interpretable Cognitive Models for the Quine’s Uncertainty Thesis (Dingzhou Fei) 141
Artificial Intelligence: Creating More Possibilities for Programmatic Advertising (Yunbo Chen and Shuzhen Feng) 148
A CRISP-DM Approach for Predicting Liver Failure Cases: An Indian Case Study (António F. Cunha, Diana Ferreira, Cristiana Neto, António Abelha, and José Machado) 156
Towards a Most Suitable Way for AGI from Cognitive Architecture, Modeling and Simulation (Yanfei Liu, Xin Wang, Zhiqiang Tian, Liang Zhang, Yuzhou Liu, Junsong Li, and Feng Fu) 165
Promoting Economic Development and Solving Societal Issues Within Connected Industries Ecosystems in Society 5.0 (Elizabeth Koumpan and Anna W. Topol) 174
Artificial Intelligence and Tomorrow’s Education (Omar Cóndor-Herrera, Hugo Arias-Flores, Janio Jadán-Guerrero, and Carlos Ramos-Galarza) 184
Towards Chinese Terminology Application of TERMONLINE (Jiali Du, Christina Alexantris, and Pingfang Yu) 190
AsiRo-μ: A Multi-purpose Robotic Assistant for Educational Inclusion of Children with Multiple Disabilities (Jonnathan Andrade-Altamirano, Ana Parra-Astudillo, Vladimir Robles-Bykbaev, Nidia Almeida-Solíz, Sofía Bravo-Buri, and Efren Lema-Condo) 199
Intelligent Agent Proposal in a Building Electricity Monitoring System for Anomalies’ Detection Using Reinforcement Learning (Santiago Felipe Luna Romero and Luis Serpa-Andrade) 207
Voltammetric Electronic Tongues Applied to Classify Sucrose Samples Through Multivariate Analysis (Esteban M. Fuentes, José Varela-Aldás, Samuel Verdú, Raúl Grau Meló, and Miguel Alcañiz) 216
Proposal for a Platform Based on Artificial Vision for the Identification and Classification of Ceramic Tiles (Edisson Pugo-Mendez and Luis Serpa-Andrade) 223
Tourism Recommender System Based on Natural Language Classifier (Maritzol Tenemaza, José Limaico, and Sergio Luján-Mora) 230
Real-Time Emotion Recognition for EEG Signals Recollected from Online Poker Game Participants (Edgar P. Torres, Edgar A. Torres, Myriam Hernández-Álvarez, and Sang Guun Yoo) 236
Knowledge Discovery About Cancer Based on Fuzzy Predicates (Miguel Angel Quiroz Martinez, Christian Rene Vargas Alava, Monica Daniela Gomez Rios, and Maikel Yelandi Leyva Vazquez) 242
Teaching Brooks Law Based on Fuzzy Cognitive Maps and Chatbots (Miguel Angel Quiroz Martinez, Andres Fabian Arteaga Ramírez, Santiago Teodoro Castro Arias, and Maikel Yelandi Leyva Vazquez) 251
Algorithm for the Signal Validation in the Emergency Situation Using Unsupervised Learning Methods (Younhee Choi, Gyeongmin Yoon, and Jonghyun Kim) 259
AI Evolution Trend Analysis Based on Semantic Network Analysis (Hyeon-Ju Cha) 269

Decision Support System and Human Factors

Agile Circular Design (Leonhard Glomann) 279
Object Detection Using Artificial Intelligence: Predicting Traffic Congestion to Improve Emergency Response to Mass Casualty Incidents (Rye Julson, Miranda Ahlers, Alexander Hamilton, Michael Kolesar, Gonzalo Barbeito, Jacob Ehrlich, Johnathon Dulin, Gregory Steeger, Justin Wilson, Kevin Cardenas, Marian Sorin Nistor, Stefan Pickl, and Dieter Budde) 287
Modelling Key Performance Indicators for Improved Performance Assessment in Persistent Maritime Surveillance Projects (Francesca de Rosa, Thomas Mansfield, Anne-Laure Jousselme, and Alberto Tremori) 295
Compositional Sonification of Cybersecurity Data in a Baroque Style (Jakub Polaczyk, Katelyn Croft, and Yang Cai) 304
Demand Forecasting Using Ensemble Learning for Effective Scheduling of Logistic Orders (Katharina Lingelbach, Yannick Lingelbach, Sebastian Otte, Michael Bui, Tobias Künzell, and Matthias Peissner) 313
A Test Method of Inverter Performance Parameters Based on Virtual Instrument (Zhang Fan and Zhang Hong) 322
New Findings on the Calculation of the Catenary (Norbert L. Brodtmann and Daniel Schilberg) 333
Design and Creation of Educational Video Games Using Assistive Software Instruments (Yavor Dankov, Boyan Bontchev, and Valentina Terzieva) 341
Study and Implementation of Cross-Platform Real-Time Message Middleware in ATC Systems (Desheng Xu, Hong Liu, and Junyi Zhai) 350
Building an Educational Product: Constructive Alignment and Requirements Engineering (Nursultan Askarbekuly, Alexandr Solovyov, Elena Lukyanchikova, Denis Pimenov, and Manuel Mazzara) 358
Analysis of the Application of Steganography Applied in the Field of Cybersecurity (Luis Serpa-Andrade, Roberto Garcia-Velez, Eduardo Pinos-Velez, and Cristhian Flores-Urgilez) 366
Multiple LDPC Code Combined with OVCDM to Improve Signal Coding Efficiency and Signal Transmission Effects (Zhang Fan and Zhang Hong) 372
An Intelligent Systems Development for Multi COVID-19 Related Symptoms Detection: A Framework (Mohammed I. Thanoon) 380
Analysis of Influencing Factors of Depth Perception (Yu Gu, Wei Su, and Minxia Liu) 387
Software Instruments for Analysis and Visualization of Game-Based Learning Data (Boyan Bontchev, Yavor Dankov, Dessislava Vassileva, and Martin Kovachev) 395
Work Accident Investigation Software According to the Legal Requirements for Ecuadorian Companies (Raúl Gutiérrez and Karla Guerra) 403
Systemic Analysis of the Territorial and Urban Planning of Guayaquil (María Lorena Sánchez Padilla, Jesús Rafael Hechavarría Hernández, and Yoenia Portilla Castell) 411
Spatial Model of Community Health Huts Based on the Behavior Logic of the Elderly (Zhang Ping, Wang Chaofan, and Zhang Yuejiao) 418
Analysis of the Proposal for the SOLCA Portoviejo Hospital Data Network Based on QoS Parameters (José Antonio Giler Villavicencio, Marely del Rosario Cruz Felipe, and Dioen Biosca Rojas) 425
Design and Optimization of Information Architecture of NGC Cloud Platform (Jinbo Hu, Zhisheng Zhang, and Zhijie Xia) 433
Design of Self Rescue Equipment for High Rise Building Fire Based on Integrated Innovation Theory (Chang Zong and Zhang Zhang) 439
Adoption, Implementation Information and Communication Technology Platform Application in the Built Environment Professional Practice (Lekan Amusan, David Adewumi, Adekunle Mayowa Ajao, and Kunle Elizah Ogundipe) 446
The Influence of E-commerce Web Page Format on Information Area Under Attention Mechanism (Fan Zhang, Yi Su, Jie Liu, Nan Zhang, and Feng Gao) 456
Usage of Cloud Storage for Data Management in the Built Environment (Ornella Tanga, Opeoluwa Akinradewo, Clinton Aigbavboa, and Didibhuku Thwala) 465
Digital Model for Monitoring SME Architecture Design Projects (Luis Cardenas, Gianpierre Zapata, and Diego Zavala) 472
Charging and Changing Service Design of New Energy Vehicles Under the Concept of Sustainable development—A Case Study of NIO (Yan Wang, Zhixiang Xia, Xiuhua Zhang, and Yanmei Qiu) 477

Human Factors in Energy

What Niche Design Can Learn from Acceptance Mining (Claas Digmayer and Eva-Maria Jakobs) 485
When Dullscreen is Too Dull (Ronald Laurids Boring) 493
Examining the Use of the Technology Acceptance Model for Adoption of Advanced Digital Technologies in Nuclear Power Plants (Casey Kovesdi) 502
Developments in the Application of Nano Materials for Photovoltaic Solar Cell Design, Based on Industry 4.0 Integration Scheme (Rosine Mouchou, Timothy Laseinde, Tien-Chien Jen, and Kingsley Ukoba) 510
Autonomous Emergency Operation of Nuclear Power Plant Using Deep Reinforcement Learning (Daeil Lee and Jonghyun Kim) 522

Author Index 533

Human Factors in Artificial Intelligence and Social Computing

Human-AI-Collaboration in the Context of Information Asymmetry – A Behavioral Analysis of Demand Forecasting

Tim Lauer (1,2) and Sophia Wieland (1)

1 Infineon Technologies AG, Am Campeon 1-15, 85579 Neubiberg, Germany
{Tim.Lauer,Sophia.Wieland}@infineon.com
2 Technical University Dortmund, Emil-Figge-Straße 50, 44227 Dortmund, Germany

Abstract. Digitalization enables the full potential of Artificial Intelligence for the first time. This study deals with demand forecasting as a representation of supply chain planning. Statistical and judgmental approaches constitute the state-of-the-art in methodology, but also present drawbacks such as human mental capacity constraints or data biases. Teaming of humans and AI promises synergies and better solutions, but challenging questions on how to organize collaborative tasks remain. Information asymmetry remains an unsolved issue, as digitalization will take more time to be established holistically. Deploying a behavioral analysis of an industrial case study, this paper investigates the impact of two different forms of interaction on forecasting performance and on the ability of human planners to compensate for the lack of contextual information in an AI-based prediction. The results indicate that information asymmetry limits the magnitude of the decision-making anchor provided by the algorithm and affects accuracy depending on the specific interaction form. Overall, an asymmetric sequential interaction set-up outperforms the other forecasts. Finally, this study states implications and limitations for human-AI collaboration.

Keywords: Behavioral analysis · Human-AI collaboration · Demand forecasting · Digitalization · Supply chain planning · Teaming

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. T. Z. Ahram et al. (Eds.): AHFE 2021, LNNS 271, pp. 3–13, 2021. https://doi.org/10.1007/978-3-030-80624-8_1

1 Introduction

Digitalization fosters the accumulation of vast data sets in new fields of application. This provides the basis to use Artificial Intelligence (AI) to its full potential for the first time. In particular, the application of machine learning algorithms makes it possible for companies to gain valuable information by identifying nonlinear patterns in vast datasets. In this age of digitalization, the roles of both humans and AI-agents must be redefined to leverage their individual strengths and gain the best overall results. Instead of entirely substituting humans or opposing AI, both agents could contribute to a joint decision-making process, referred to as human-AI collaboration or teaming.


This research stream targets creating synergies by combining complementary capabilities: On one side, humans are able to access and interpret non-digitalized information, and to apply intuitive heuristics consolidated from a moderate dataset of experience and knowledge. Their decisions are limited in rationality, however, and subject to biases and emotions, which may lead to non-optimal and even harmful decisions for their businesses [1]. On the other side, AI is capable of analyzing a tremendous amount of quantitative information and identifying nonlinear patterns in reasonable time, but is limited in its interpretation of unfamiliar changes and in the identification of causalities [2].

Supply chain planning, especially demand planning, poses a promising area of application for human-AI collaboration: In many cases, demand data and internal and external reference data are available, but not all information is digitalized and/or can be processed automatically. Furthermore, causal interdependencies are complex and manifold, whereas fast and precise forecasts are necessary to counteract phenomena like the bullwhip effect [3]. A disadvantageous aversion of humans to algorithms consistently characterizes their relationship with AI. This mistrust stems from the lack of transparency of the algorithm, referred to as black-box behavior. Hence, humans often retain the final prediction in a forecasting process. State-of-the-art research provides methods for combining the contribution of humans with statistical approaches, which vary in their results and recommendations: Some studies report an improvement in performance when humans conduct a collaborative prediction supported by statistical forecasts [4, 5]. Other researchers argue that humans usually make inefficient use of contextual information [6–8]. A separation of tasks between humans and AI has not yet gained attention in the literature.

While teaming promises new opportunities, challenging questions arise regarding how to distribute collaborative tasks between humans and AI in an efficient and effective manner. For this reason, the objective of the present study is to generate insights on beneficial human-AI collaboration set-ups in the field of demand planning. A behavioral analysis compares two problem-solving concepts for human-AI teams, one characterized as a simultaneous and one as a sequential type of interaction. We then examine the contributions of both humans and AI-agents to a final forecast with regard to the available information, focusing on the cases of information symmetry and asymmetry. A further analysis takes their joint influence into account.

The paper is structured as follows: After the introduction, Sect. 2 reviews current research in the field of behavioral analysis of judgmental forecasting in combination with quantitative forecasts. Based on the derived findings, Sect. 3 describes the design, implementation, and conduct of our behavioral experiment. In Sect. 4, the results are analyzed and discussed. A conclusion in Sect. 5 summarizes the outcome of this study and states managerial and scientific implications as well as future research directions.

2 Literature Review

This chapter consists of two sections. The first one deals with collaborative integrating forecasting methods and the second with behavioral studies of such methods.

Integrating forecasting methods combining quantitative and judgmental approaches are common in supply chain planning [9]. [10] recommend adapting the forecasting process to the problem situation and define three opportunities to integrate judgment in a forecast: First, the forecaster decides on representative data. Second, the forecaster chooses an appropriate statistical method. Third, the possibility to apply judgment is optional.


in a forecast: First, the forecaster decides on representative data. Second, the forecaster chooses an appropriate statistical method. Third, the possibility to apply judgment is optional. Based on that there are various integration approaches: Within judgmental decomposition, the forecaster first evaluates component factors of demand and the contextual information before applying a mathematical forecast. This method avoids information overload but bears higher risks of errors by neglecting interdependencies. [5] propose assigning higher weights to judgmental forecasts and lower weights to statistical forecasts with increasing uncertainty. Within judgmental adjustment [9] a mathematical forecast serves as an anchor and the planners adjust it in case of additional contextual information [6]. Here, forecasters reduce accuracy by consistently performing small adjustments, but increase accuracy by making large negative alterations [7]. Rule-based forecasting uses rules devised from literature to structure judgments in case of high domain knowledge and econometric forecasting supports model selection [10]. [11] develop a factor-based approach for demand influenced by events using iterative collaborative discussions. The divide-and-conquer method by [9] suggests that statistical forecasts should consider all necessary information of a time series. The planners receive the statistical forecasts with an explanation but not the historical data itself. This asymmetric distribution of information keeps the forecasters focused on non-digital information. This method outperforms combined forecasts in case of high domain knowledge, but the challenge to collect and interpret relevant contextual information remains [9]. Overall, the performance of integrating methods depends on the context and on the available domain knowledge. The assessment on behavior research focuses on information asymmetry between quantitative and qualitative forecasts. [12] and [13] provide comprehensive literature reviews on the interaction between statistical and judgmental forecasts. All studies identify contextual information, characteristics of a task, and the planner as impact factors on the prediction performance. Based on that, [5] create a guideline for the right timing for and strategy of manual adjustments. [4] investigate the impact of reliability and causal information on performance by using judgmental decomposition. The joint performance of humans and statistical forecasts is inferior to the accuracy of statistical forecasts due to algorithm aversion. The literature review by [9] discusses the experts’ trust and concludes that subjects tend to be more algorithm averse if task complexity is higher and if it errs frequently. [6] examine the influence of motivation and contextual statements on the uplift of a demand forecast during promotional effects. Forecasters ignore information on promotions and anchor on the last observed demand value [7]. To avoid this drawback, [8] restrict the option of intervention to promotional periods leading to worse adjustments. For forecasters, anchoring and the availability heuristic are the important behavior heuristics [1]. Forecasters use external or self-generated anchors. The more complex a forecast task is, the less important anchoring becomes [14]. Besides quantitative forecasts, contextual information plays an important role in sporadically occurring events [15]. 
[12] define contextual information as "the information, other than the time series and general experience, which helps in the explanation, interpretation, and anticipation of time series behavior". According to [16], events are the main indicators for forecasting experts to incorporate judgment. AI methods and statistical forecasts are not always able to access and correctly interpret contextual information. This creates information asymmetry between humans and quantitative methods.

3 Methodology

Based on the literature, a behavioral experiment investigates two designs of human-AI interaction (simultaneous and sequential) in scenarios of information symmetry and asymmetry. The experimental set-up (see Fig. 1) is split into the integrating methods, the time series forecasting, and the hypothesis testing.

Fig. 1. Experimental framework

To enable collaboration, we replace the traditional linear forecasting model with a Machine Learning (ML) algorithm and utilize judgmental decomposition and adjustment: First, forecasters filter relevant information from several sources, such as quantitative forecasts, time series data, and contextual information [7]. They then decide between accepting the forecast produced by the ML algorithm and intervening [5]. The resulting interactions are defined as simultaneous Human-Algorithm-Human (HAH) and sequential Algorithm-Human (AH).

In HAH, both agents predict independently and simultaneously, and the human follows up with a manual revision; judgmental decomposition is the underlying concept. Two potential anchors, the expert's own forecast and the one provided by the AI, influence the expert's final decision. Trust in the second anchor is constrained by algorithm aversion, bias, and the relevance of the contextual information. The sequential design is based on judgmental adjustment: the AI provides a forecast and the planner revises it to come to a decision. This sets a simple benchmark with a statistical anchor.

Unlike AI, humans are able to incorporate contextual, non-digital information. To investigate the benefit of this ability, two information levels are defined: symmetric and asymmetric. In the first case, both the planner and the algorithm receive the

Human-AI-Collaboration in the Context of Information Asymmetry

7

same time series data. In the second, only the subjects receive additional contextual information on upcoming sales promotions, following the approaches of [7] and [15].

These promotions are artificially modelled to reduce experimental noise, in line with research standards [7]. The transient factor is therefore set to influence demand in the same period and for one period only [11]. Adapting [7], the promotion response is drawn from a uniform distribution between 50 and 100%. The experiment lasts 40 periods, and promotions have to occur often enough, yet in a manner that is difficult to predict, so that the behavior of the subjects can be observed sufficiently during both normal and promotion periods. Therefore, the time between promotions varies from 15 to 25 time units. Information asymmetry is generated by providing only the planners with the timing of promotions, while neither agent receives such an indication under symmetry.

The time series are the common source for both the subjects and the ML algorithm, using a public data set obtained from [17]. The original data of item 15 in store 8, with 1826 observations and a high standard deviation, is suitable for the experiment. The data is manipulated to reflect the promotion strategy. The ML algorithm is an XGBoost model, which achieves an average MAPE of 8.87 (SD = 9.66) on the original data. Hyperparameter tuning is conducted on the original data to serve the concept that the ML algorithm performs well in normal periods and relatively worse in promotion instances. The algorithm is trained on the manipulated data, which is split into a training and a test set at a ratio of 80:20. On the manipulated data, the ML algorithm achieves a MAPE of 11.09, while a naïve forecast achieves a MAPE of 21.62, a moving average 16.69, and a rolling average 21.45.

The experiment uses a 2 × 2 factorial design. The first independent variable is the interaction design, with the levels sequential and simultaneous; the second is the information level, with the levels information symmetry and asymmetry. The forecast accuracy measured by MAPE serves as one dependent variable. Additionally, algorithm acceptance is the second dependent variable, measuring the force of the provided anchor using the "weight of advice" (WOA) [18]. The WOA compares the difference between the initial and final forecast to the difference between the initial forecast and the forecast provided by the ML algorithm. The WOA is one if the forecaster completely accepts the ML algorithm's prediction and zero if she completely disregards it and keeps her own. The WOA is only applicable to the HAH interaction, because the AH interaction has no initial forecast. Therefore, the adjusted WOA (WOA*) is introduced, which calculates the percentage error between the final forecast and the ML algorithm's forecast in relation to the latter. The higher the WOA*, the weaker the anchor provided by the ML algorithm.

In total, 71 employees in the field of supply chain at Infineon Technologies participated in the experiment and were randomly and approximately evenly assigned to the four treatments. Ten sessions were conducted in the company under comparable laboratory conditions within three weeks. The experiment included an introduction, an explanation of the forecast task, and a questionnaire. The top five percent of subjects, as measured by prediction accuracy, received a prize.
Based on feedback, we conclude that this prize was perceived as a sufficient incentive for participating and wanting to win.
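To make the two acceptance measures concrete, the following is a minimal sketch of how WOA and WOA* can be computed per forecast; the function names and example values are our own illustration, not the authors' code.

```python
def woa(initial, final, ml):
    """Weight of advice for the HAH design: 1 = forecaster fully adopts
    the ML prediction, 0 = forecaster keeps the initial own forecast."""
    if ml == initial:          # advice identical to the own forecast:
        return None            # WOA is undefined in this case
    return (final - initial) / (ml - initial)

def woa_star(final, ml):
    """Adjusted weight of advice for the AH design: percentage deviation
    of the final forecast from the ML forecast, relative to the latter.
    Larger values indicate a weaker anchoring effect of the ML forecast."""
    return abs(final - ml) / ml

# Hypothetical example: ML forecast 120 units, own initial forecast 100,
# final submitted forecast 115 -> the expert moved 75% of the way to the AI.
print(woa(initial=100, final=115, ml=120))   # 0.75
print(woa_star(final=115, ml=120))           # ~0.042
```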


4 Results

Before evaluating the results, a z-score analysis identified one outlier, which was treated as a typo. The results are split into three sets of hypotheses: the overall influence of collaboration, the influence of the interaction designs measured by MAPE, and the influence of the anchoring effect measured by the WOA*. The results are not entirely normally distributed; therefore, the non-parametric hypothesis tests Wilcoxon signed-rank and Mann-Whitney U are calculated. The confidence level is set to 0.95 with a rejection level of α = 0.05 for all hypotheses. In addition, the effect size is measured to obtain a second statistical evaluation of the effect magnitude, utilizing Cohen's d and Hedges' g. Table 1 displays the results.

Table 1. Results of experimental hypothesis tests

Test: one-sample Wilcoxon signed-rank (one-tailed)

H01: The collaborative forecast of the subject and AI achieves the same or lower forecast accuracy (MAPE) than the forecast conducted by the ML algorithm.
  Symmetric HAH  | U = 129.0 | p = 0.9993 | Fail to reject H0 | d = 0.8707
  Symmetric AH   | U = 184.0 | p = 0.9998 | Fail to reject H0 | d = 0.7054
  Asymmetric HAH | U = 74.0  | p = 0.4633 | Fail to reject H0 | d = −0.026
  Asymmetric AH  | U = 46.0  | p = 0.0449 | Reject H0         | d = 0.5915

Test: two-sample Mann-Whitney U (two-tailed)

Interaction effect on performance:

H02: The simultaneous interaction (HAH) of the subject and AI has the same MAPE distribution as the sequential interaction (AH).
  Symmetric (HAH vs AH)  | U = 95.0  | p = 0.0182 | Reject H0         | g = −0.797
  Asymmetric (HAH vs AH) | U = 123.0 | p = 0.1651 | Fail to reject H0 | g = 0.346

H03: During normal periods, the simultaneous interaction (HAH) has the same MAPE distribution as the sequential interaction (AH).
  Symmetric (HAH vs AH)  | U = 93.0  | p = 0.0160 | Reject H0         | g = −0.795
  Asymmetric (HAH vs AH) | U = 122.5 | p = 0.1610 | Fail to reject H0 | g = 0.417

H04: During promotional periods, the simultaneous interaction (HAH) has the same MAPE distribution as the sequential interaction (AH).
  Symmetric (HAH vs AH)  | U = 142.5 | p = 0.2789 | Fail to reject H0 | g = 0.032
  Asymmetric (HAH vs AH) | U = 153.0 | p = 0.4934 | Fail to reject H0 | g = 0.032

Effect of information level on performance:

H05: The symmetric group (information symmetry) has the same MAPE distribution as the asymmetric group (information asymmetry).
  HAH (Symmetric vs. Asymmetric) | U = 94.0 | p = 0.0425    | Reject H0 | g = 0.602
  AH (Symmetric vs. Asymmetric)  | U = 34.5 | p = 0.000018  | Reject H0 | g = 1.435

H06: During normal periods, the symmetric group has the same MAPE distribution as the asymmetric group.
  HAH (Symmetric vs. Asymmetric) | U = 105.0 | p = 0.0896 | Fail to reject H0 | g = −0.501
  AH (Symmetric vs. Asymmetric)  | U = 104.0 | p = 0.0217 | Reject H0         | g = 0.752

H07: During promotional periods, the symmetric group has the same MAPE distribution as the asymmetric group.
  HAH (Symmetric vs. Asymmetric) | U = 16.0 | p = 0.000005 | Reject H0 | g = 3.04
  AH (Symmetric vs. Asymmetric)  | U = 1.0  | p = 0.00     | Reject H0 | g = 3.45

Anchoring effect of ML algorithm:

H08: The distribution of the WOA* of the simultaneous interaction (HAH) is the same as that of the sequential interaction (AH).
  Symmetric (HAH vs AH)  | U = 106.5 | p = 0.0421 | Reject H0 | g = −0.616
  Asymmetric (HAH vs AH) | U = 88.0  | p = 0.0166 | Reject H0 | g = 0.619

H09: During normal periods, the distribution of the WOA* of the simultaneous interaction (HAH) is the same as that of the sequential interaction (AH).
  Symmetric (HAH vs AH)  | U = 109.0 | p = 0.0497 | Reject H0 | g = −0.604
  Asymmetric (HAH vs AH) | U = 84.5  | p = 0.0124 | Reject H0 | g = 0.706

H010: During promotional periods, the distribution of the WOA* of the simultaneous interaction (HAH) is the same as that of the sequential interaction (AH).
  Asymmetric (HAH vs AH) | U = 151.0 | p = 0.3699 | Fail to reject H0 | g = 0.004

H011: During normal periods, the distribution of the WOA* of the symmetric group (information symmetry) is the same as that of the asymmetric group (information asymmetry).
  HAH (Symmetric vs. Asymmetric) | U = 91.0  | p = 0.0340 | Reject H0 | g = −0.766
  AH (Symmetric vs. Asymmetric)  | U = 114.0 | p = 0.0430 | Reject H0 | g = 0.585
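As a hedged illustration of the test procedure behind Table 1, the following SciPy sketch uses placeholder data; the group values, sample sizes, benchmark, and function names are our own assumptions, not the study's data or code.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
mape_hah = rng.normal(14.0, 3.0, 18)   # placeholder per-subject MAPEs, HAH
mape_ah = rng.normal(12.0, 3.0, 17)    # placeholder per-subject MAPEs, AH

# H01-style test: is the collaborative MAPE lower than the ML benchmark
# of 11.09 (one-tailed, one-sample Wilcoxon signed-rank on differences)?
w_stat, w_p = wilcoxon(mape_ah - 11.09, alternative='less')

# H02-H011-style test: do two treatment groups share the same MAPE
# distribution (two-tailed Mann-Whitney U)?
u_stat, u_p = mannwhitneyu(mape_hah, mape_ah, alternative='two-sided')

def hedges_g(a, b):
    """Hedges' g: pooled-SD mean difference with small-sample correction."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled        # Cohen's d
    return d * (1 - 3 / (4 * (na + nb) - 9))  # bias-correction factor J

print(w_p, u_p, hedges_g(mape_hah, mape_ah))
```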


Concerning performance, human-AI collaboration is only beneficial for the asymmetric AH group. In the other three treatments, the subjects introduce, on average, inefficient adjustments to the ML algorithm forecast, in line with [7]. Therefore, no general recommendation is derived. However, collaboration affects forecast accuracy under information symmetry: the symmetric HAH group performs especially well during normal rounds in which large gaps in demand occur. They are better at pattern recognition than the symmetric AH group. An explanation could be that information is presented stepwise to the HAH group, supporting structured judgments and the verification of decisions [10].

Regarding the information level, the experiment's results support the importance of contextual information in demand forecasting in the case of volatile demand and impactful events [5]. The subjects with promotion notifications are able to efficiently incorporate this information into their forecasts and achieve a higher forecast accuracy during promotional periods, as well as an overall higher performance than the symmetric groups. However, a critical analysis shows that the human forecasts underestimate the demand increase caused by promotions at the beginning of the task.

An explanation for the difference in performance among interaction designs could be the anchoring effect: the WOA* distribution of the asymmetric AH group differs significantly from that of the asymmetric HAH group. In line with this, a Hedges' g of 0.62 indicates a medium effect size and a higher algorithm aversion in the asymmetric HAH group. Nevertheless, only a tendency can be observed. The information level significantly affects performance, with the exception of the HAH interaction during normal periods. This can be explained by how much weight the participants assign to the ML algorithm, as measured by the WOA*. During normal periods, there is a significantly stronger anchoring effect of the ML algorithm in the asymmetric AH group compared to the symmetric group. This greater algorithm acceptance could have resulted in higher forecast accuracy through fewer unnecessary adjustments. For the HAH interaction, significance is not obtained for the effect of the information level on the integration of the ML algorithm during normal periods. However, the p-value of p = 0.055 is close to the rejection level, and the medium effect size of −0.77 has the same direction as the effect size of the information level on the MAPE. The research of [15] confirms these findings and attributes this to the cognitive effort of processing both anchors.

5 Conclusion

Humans and AI complement each other's strengths. Prior research shows that behavioral aspects counteract the desired synergies of collaboration. This study contributes to research by conducting a behavioral experiment on sequential AH and simultaneous HAH interaction designs for human-AI teaming in demand forecasting in the context of different information levels. Promotion effects known only to the human subjects create information asymmetry. The results show that the interaction design affects performance significantly under information symmetry, but not under asymmetry. However, only the asymmetric AH interaction outperformed the AI alone. Due to the contextual information, human contributions are beneficial and lead to an overall higher accuracy of the collaborative human-AI forecast. The subjects act risk-aversely and underestimate the effect of promotions.


Repetition of such promotion periods leads to better adjustments by the asymmetric AH group, and a learning effect is observed. Promising directions for further research are to exploit this learning effect in the design of the interaction and to increase the validity of the findings through additional experiments with more subjects. In summary, the behavioral analysis highlights specific conditions of interaction and information asymmetry under which human-AI teaming shows the potential to improve the efficacy and accuracy of collaborative human-AI forecasts in demand forecasting.

References
1. Tversky, A., Kahneman, D.: Judgment under uncertainty: heuristics and biases. Science 185, 1124–1131 (1974)
2. Singh, L., Challa, R.: Integrated forecasting using the discrete wavelet theory and artificial intelligence. J. Flex. Syst. Manag. 17(2), 157–169 (2016)
3. Li, S., Ragu-Nathan, B., Ragu-Nathan, T., Subba Rao, S.: The impact of SCM practices on competitive advantage and organizational performance. Omega 34(2), 107–124 (2006)
4. Lim, J., O'Connor, M.: Judgemental adjustment of initial forecasts: its effectiveness and biases. J. Behav. Decis. Making 8, 149–168 (1995)
5. Sanders, N., Ritzman, L.: Judgmental adjustment of statistical forecasts. In: Armstrong, J. (ed.) Principles of Forecasting. A Handbook for Researchers and Practitioners, pp. 405–416. Kluwer Academic, Boston (2001)
6. Fildes, R., Petropoulos, F.: Improving forecast quality in practice. Foresight: Int. J. Appl. Forecast. 36, 5–12 (2015)
7. Fildes, R., Goodwin, P., Önkal, D.: Use and misuse of information in supply chain forecasting of promotion effects. Int. J. Forecast. 35(1), 144–156 (2019)
8. Goodwin, P., Fildes, R., Lawrence, M., Stephens, G.: Restrictiveness and guidance in support systems. Omega 39(3), 242–253 (2011)
9. Alvarado-Valencia, G., Barrero, L., Önkal, D., Dennerlein, J.: Expertise, credibility of system forecasts and integration methods in judgmental demand forecasting. Int. J. Forecast. 33(1), 298–313 (2017)
10. Armstrong, J., Collopy, F.: Integration of statistical methods and judgment for time series forecasting: principles from empirical research. In: Wright, G., Goodwin, P. (eds.) Forecasting with Judgment, pp. 269–293. Wiley (1998)
11. Marmier, F., Cheikhrouhou, N.: Structuring and integrating human knowledge in demand forecasting. Prod. Plann. Control 21(4), 399–412 (2010)
12. Goodwin, P., Wright, G.: Improving judgmental time series forecasting: a review of the guidance provided by research. Int. J. Forecast. 9(2), 147–161 (1993)
13. Webby, R., O'Connor, M.: Judgemental and statistical time series forecasting: a review of the literature. Int. J. Forecast. 12(1), 91–118 (1996)
14. Epley, N., Gilovich, T.: When effortful thinking influences judgmental anchoring. J. Behav. Decis. Making 18(3), 199–212 (2005)
15. Arvan, M., Fahimnia, B., Reisi, M., Siemsen, E.: Integrating human judgement into quantitative forecasting methods: a review. Omega 86, 237–252 (2019)
16. Fildes, R., Goodwin, P.: Against your better judgment? How organizations can improve their use of management judgment in forecasting. Interfaces 37(6), 570–576 (2007)
17. Kaggle Inc. https://www.kaggle.com/c/demand-forecasting-kernels-only/data
18. See, K., Morrison, E., Rothman, N.B., Soll, J.: The detrimental effects of power on confidence, advice taking, and accuracy. Org. Behav. Hum. Decis. Process. 116(2), 272–285 (2011)

A Minimally Supervised Event Detection Method

Matthew Hoffman, Sam Bussell, and Nathanael Brown(B)

Sandia National Laboratories, Complex Systems for National Security, P.O. Box 5800, Albuquerque, NM 87185, USA
{mjhoffm,sbussel,njbrown}@sandia.gov

Abstract. Solving classification problems with machine learning often entails laborious manual labeling of test data, requiring valuable time from a subject matter expert (SME). This process can be even more challenging when each sample is multidimensional. In the case of an anomaly detection system, a standard two-class problem, the dataset is likely imbalanced, with few anomalous observations and many "normal" observations (e.g., credit card fraud detection). We propose a unique methodology that quickly identifies individual samples for SME tagging while automatically classifying commonly occurring samples as normal. In order to facilitate such a process, the relationships among the dimensions (or features) must be easily understood by both the SME and system architects such that tuning of the system can be readily achieved. The resulting system demonstrates how combining human knowledge with machine learning can create an interpretable classification system with robust performance. Keywords: Human-systems integration · Bayesian networks · Rare events · Supervised classification · Data fusion · Machine learning

1 Introduction

Due to their infrequent nature, rare events are difficult to model and detect: there are few positive ("event") cases relative to the number of negative ("nonevent") cases [1]. Detection is thus regarded as an imbalanced classification problem which attempts to detect events with high impact but low probability. Rare event detection has many applications, such as network intrusion detection and credit fraud detection [2]. We are concerned with rare events of interest, a subset of rare events that must also meet some "importance" criteria. That is, we are focused on problems where all interesting events are rare but not all rare events are interesting.

We describe a method for human-in-the-loop automated filtering and classification for more efficient labeling of data that contains an abundance of uninteresting observations. Our approach consists of a three-step method: (1) a modified ensemble technique acting as a novelty filter which labels uninteresting data, (2) SME tagging of the remaining unlabeled data, and (3) classification of the further reduced unlabeled data using a Bayesian Network (BN). We are specifically interested in problems with many event detectors that output nonnegative values as our features, where zero means nothing happened and larger values indicate alerts of greater interest or concern.


The novelty filter prioritizes the most interesting observations for SME review by assuming that the presence of more alerts (non-zero features) and/or rare alerts in a single observation makes it more interesting, and it provides interpretable reasons for the novelty score. Using a BN for the final classification has several benefits. As probabilistic graphical models of a set of variables (corresponding to our detector features) and their conditional dependencies, BNs are more natively interpretable than most other machine learning methods. BNs have also been shown to perform well in the area of rare event detection. Previous applications have included intrusion detection, as demonstrated by Benferhat et al. [3], who used naive Bayes and Tree Augmented Naive Bayes classifiers; Cheon et al. [4], who used a BN in ozone level modeling to automatically alert a forecaster when abnormal signals (ozone levels) are detected; and Wong et al. [5], who used BNs for detecting disease outbreaks. BNs have additional advantages over other machine learning techniques [6]: they provide a natural way to handle missing data, facilitate learning about causal relationships between variables, are robust to overfitting, and can deliver good prediction accuracy even with small sample sizes.

To demonstrate extensibility of our approach to a variety of domains and problem sets, we evaluated against a generalized synthetic data set that is not tied to a particular use case. Although our data set is binary, we anticipate that the method can be equally applied to ordinal, continuous, or categorical (via binary encoding) data, or mixes thereof, insofar as larger values are of greater interest. Additionally, our method provides several benefits and differentiators relative to general classification problems where there is no initial labeling of data:

• Reduces the SME's data review burden and assists the SME with data labeling
• Using a BN for classification allows detecting multivariate patterns in the positively tagged ("interesting") cases, tolerates and infers missing data, and natively provides classification likelihood and model fit estimates
• Both the novelty filter and the BN are relatively interpretable and explainable.

2 Method

The method consists of progressive data classification steps, as depicted in Fig. 1. Step 0 shows the training-test partition normally used for a classification problem. In step 1, a novelty filter as described in Sect. 2.1 is used to automatically label most of the data as uninteresting (identified by the highlighted blue region around a negative sign ['−']). The labels in the test data are treated as final system classifications (i.e., the BN will not override them in step 3). In step 2, SME(s) review the remaining unfiltered samples in the training set and label them (interesting data is identified by a positive sign ['+']). Finally, a BN is built and trained based on all labels from previous steps and used to classify the remainder of the test set, as highlighted in step 3.

Fig. 1. Method steps with updated labels/classifications highlighted in blue

2.1 Novelty Filtering

We propose a weighted voting model s_j = Σ_{i=1}^{N} w_i · a_{ij}, where a_{ij} denotes the value of feature i in sample j and w_i = W_i · (c_i)^(−k), where c_i is the sum over all samples of the i-th feature, W_i is an a priori feature weight, and (positive) k determines the relative importance of rarer values. This model is appropriate for directed nonnegative features where larger values are more interesting, such as binary alarms, ordinal alert levels, or continuous meter readings. It assumes that features with more frequent positives are less informative about abnormal conditions, and that samples with more unusual features are more abnormal. Using k = 1 encodes that more active features are proportionately less interesting, and W_i = 1 ∀i gives equal feature importance otherwise. Samples with score s_j falling below a threshold value are automatically labeled as uninteresting. This enables SME review of a smaller data set and training of the Bayesian network on the filter-labeled uninteresting examples in the test set. While the choice of threshold is subjective, a low initial choice can filter out many observations in an unbalanced set and can be updated iteratively. In Sect. 3.1 we discuss (by way of example) the process of informing the choice of threshold with domain information. Key benefits of the filter are its interpretability and adaptability: the reason for a sample's score is clear from its contributors, and a priori feature weights can be adjusted if score rankings or feature contributions conflict with domain knowledge. Scores and contributions may also assist the SME in tagging the remaining data.

2.2 SME Tagging and Bayesian Network Classification

SME tagging is accomplished by reviewing the multivariate samples in the reduced training set and manually assigning a class (e.g., True, False, or Red, Yellow, Green). Upon completion of SME tagging, a BN is built and trained from all previously labeled data (including those in the test set classified by the filter) and used to classify the remaining samples. We use a Tree-Augmented Naive (TAN) Bayesian network, a restricted BN class which combines the simplicity of Naive Bayes with the ability to express dependence among attributes in a Bayesian network. It embodies a good tradeoff between the quality of the approximation of correlation among attributes and the computational complexity of the learning stage. TAN relaxes the naive Bayes attribute independence assumption by employing a tree structure, in which each attribute depends only on the class and one other attribute. A maximum weighted spanning tree that maximizes the likelihood of the training data is used to perform classification [7, 8]. Figure 2 shows a BN representation of the type that TAN creates [7].

Fig. 2. TAN Bayesian network

Key BN benefits for human-in-the-loop domain-informed classification include:

• Native interpretability – Traditional machine learning (especially deep learning) still suffers from its black-box nature, and many challenges remain to enhancing its interpretability and explainability [9]. One mitigation has been to use BNs to aid humans in interpreting the results of complex deep learning models [10]. As pointed out in [6] and [11], the visual nature of a BN as a directed acyclic graph can be used to communicate the underpinnings of a model via the causal relationships among the real-world features. For example, in Fig. 2 we can immediately see that the probability distribution of Xi is dependent on its parents X1 and Y.
• Explainability – BNs enable explainable classification via mutual information (describing how strongly features relate to the class variable), and provide metrics of classification confidence (e.g., probability of the assigned class) and per-sample model fit (e.g., log likelihood).
• Tolerance for missing data – BNs natively tolerate and infer missing data features within a sample.

2.3 Iterative Workflow

Although we present and describe the novelty filtering and SME tagging processes sequentially, in practice (and especially on large data sets) they can be thought of as an iterative workflow with no defined entry or exit points, as shown in Fig. 3.

Fig. 3. Envisioned iterative method on larger data

The data reduction process prior to BN training and classification can begin and/or end with either novelty filtering or SME tagging, based on SME/analyst objectives and assessment of the results of each iteration.
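As a minimal illustration of the Sect. 2.1 scoring rule, the sketch below implements the weighted voting score in NumPy; the demo data and function name are our own illustration, and only the 0.017 threshold is taken from the example in Sect. 3.

```python
import numpy as np

def novelty_scores(X, W=None, k=1.0):
    """Weighted voting score s_j = sum_i w_i * a_ij with w_i = W_i * c_i**(-k),
    where c_i is the sum of feature i over all samples. Larger scores mark
    more unusual, hence more interesting, samples."""
    X = np.asarray(X, dtype=float)            # samples x nonnegative features
    c = X.sum(axis=0)                          # overall activity per feature
    W = np.ones(X.shape[1]) if W is None else np.asarray(W, dtype=float)
    w = np.zeros_like(c)
    active = c > 0                             # avoid division by zero
    w[active] = W[active] * c[active] ** (-k)
    return X @ w

# Demo: sparse binary alerts; scores below the threshold are auto-labeled
# uninteresting (0.017 is the threshold used in the paper's example)
X = (np.random.default_rng(1).random((1000, 10)) < 0.1).astype(int)
uninteresting = novelty_scores(X) < 0.017
```

With roughly 10% positives per feature over 1000 samples, a single alert scores near 0.01 and two alerts near 0.02, so a threshold of 0.017 separates one-alert samples from multi-alert samples, consistent with the lower bound discussed in Sect. 3.1.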


3 Example Results

To provide a challenging example, we engineered a small data set with weakly correlated and sparsely positive binary features and highly unbalanced classes. We sampled 1000 observations with 10 features, where the first 250 serve as the training set. The first three features are independent random binaries with positive rates of 20%, 10% and 5%, followed by seven ordered features, each with a 10% chance of being positive if any of the prior three features in the same sample were positive and another 10% chance of being independently positive otherwise. Ground truth classes are synthetic and rule-based, where an observation belongs to the "interesting" class only if any of the following criteria are true (a generation sketch follows the list):

1. Features 2 and 5 are both positive
2. At least two of the remaining first six features are positive
3. At least two of the last four features are positive.

While this data generation scheme does not mimic any specific known data set, it provides the complexity we desired for our example in terms of feature sparsity, weak multivariate correlation, and unbalanced classes based on nontrivial multivariate patterns. The random draw used for this example had 13 "interesting" training set cases.
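The following is a sketch of the generation scheme as we read it; the handling of "remaining" in rule 2 and of the two conditional 10% chances is our interpretation, and a different seed will not reproduce the paper's draw of 13 interesting training cases.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed; the paper's draw differed
n = 1000
X = np.zeros((n, 10), dtype=int)

# First three features: independent binaries at 20%, 10%, 5%
for i, p in enumerate([0.20, 0.10, 0.05]):
    X[:, i] = rng.random(n) < p

# Seven ordered features: 10% chance if any of the prior three features
# is positive, plus an independent 10% chance otherwise
for i in range(3, 10):
    prior = X[:, i - 3:i].any(axis=1)
    X[:, i] = (prior & (rng.random(n) < 0.10)) | (rng.random(n) < 0.10)

# Rule-based ground truth ("remaining first six" read as the first six
# features excluding the rule-1 pair, i.e., features 1, 3, 4, 6)
rule1 = (X[:, 1] == 1) & (X[:, 4] == 1)          # features 2 and 5
rule2 = X[:, [0, 2, 3, 5]].sum(axis=1) >= 2
rule3 = X[:, 6:].sum(axis=1) >= 2                 # last four features
interesting = rule1 | rule2 | rule3

X_train, y_train = X[:250], interesting[:250]     # first 250 = training set
```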

3.1 Choosing the Novelty Filter Threshold

Upon calculating novelty scores, one must determine the threshold below which observations are filtered out as uninteresting. Figure 4 depicts how the threshold score can be bounded from above and below.

Fig. 4. Example of using domain information to bound novelty threshold.

Our lower bound in this example is found by examining the training set, which reveals that all interesting samples have at least two positive features. The upper bound is established by iteratively applying the novelty filter using various thresholds. On real data, domain-informed rules based on SME knowledge should be used and may be more sophisticated. While ultimately a matter of choice, picking a threshold value above the lower bound reduces the amount of data for SME tagging, and picking a value below the upper bound reduces the likelihood of false negatives. The threshold score chosen for this example was 0.017, resulting in labels of uninteresting for approximately 75% of the data and leaving 57 candidate samples in the training set for SME review. Note that we are using an empirical CDF based on the entire population, so the fraction of novelty scores less than X and P(x < X) are identical.

3.2 SME Tagging and TAN BN Classification

In practice, the next step would be SME classification of the remaining candidates in the larger training set (i.e., the 57 samples not filtered). For this illustrative example, SME training set classifications are simply assumed to match ground truth. The TAN BN was learned from the fully labeled training set and the filter-labeled portion of the test set. The BN provides a probability for each classification. We define a "prediction level" as the minimum value of the probability P(class = "interesting") required to tag a sample as interesting. Results of accepting the classifications as-is (i.e., using a 50% prediction level) are summarized in Tables 1 and 2.

Table 1. Confusion matrix (50% prediction level)

           | Predicted 0 | Predicted 1
Actual 0   | 707         | 11
Actual 1   | 6           | 26

Table 2. Classification statistics at 50% prediction level

Accuracy  | 97.7% | True Positive (TP) Rate  | 81.3%
Precision | 70.3% | False Positive (FP) Rate | 1.5%
Recall    | 81.3% | True Negative (TN) Rate  | 98.5%
F1        | 75.4% | False Negative (FN) Rate | 18.8%

In Table 3 we summarize the results by P(class = predicted class) given by the BN.

Table 3. Count of in/correct classifications by P(class = predicted)

Probability | TN  | FN | TP | FP
>90%        | 677 | 3  | 24 | 1
(80, 90]    | 10  | 3  | 2  | 0
(70, 80]    | 4   | 0  | 0  | 4
(60, 70]    | 8   | 0  | 0  | 2
(50, 60]    | 8   | 0  | 0  | 4

These results show that most incorrect classifications occur at lower probabilities. This suggests that the workflow could be adapted to include a third "ambiguous" class for lower-probability classifications to be flagged for further manual review.

Figure 5 shows results obtained by varying the prediction level between 0 and 100%. The ROC (receiver operating characteristic) curve shows minimal tradeoff between true and false positive rates. Given the unbalanced nature of the data set, the plot of precision vs. recall (P/R) is more meaningful than ROC for this example. The P/R curve shows that this model has skill (precision > 0.5) for most prediction levels, and that including the filter-tagged uninteresting (negative) cases from the test set when training the BN results in considerably better precision and recall than training the BN on the original training partition alone. Model performance depends strongly on prediction level (denoted by the labeled percentages in Fig. 5), suggesting that tuning may be appropriate in practice.

Fig. 5. ROC and Precision vs. Recall curves (varying prediction levels)
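As a quick consistency sketch (ours, not the authors' code), the statistics in Table 2 follow directly from the Table 1 confusion matrix:

```python
# Confusion matrix from Table 1 (rows: actual, cols: predicted)
tn, fp = 707, 11
fn, tp = 6, 26
total = tn + fp + fn + tp            # 750 test samples

accuracy = (tp + tn) / total         # 0.977
precision = tp / (tp + fp)           # 0.703
recall = tp / (tp + fn)              # 0.813 (= TP rate)
f1 = 2 * precision * recall / (precision + recall)  # 0.754
fp_rate = fp / (fp + tn)             # 0.015
fn_rate = fn / (fn + tp)             # 0.188
```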

4 Conclusion and Future Work

We describe a parsimonious model for detecting rare events of interest from sparse, imbalanced data. The novelty filter allows fine control over the amount of data the SME must review. Using a Bayesian Network for classification allows detecting multivariate pattern differences between classes, enables partial learning from missing and untagged data, and natively provides probability estimates for classifications. Both the novelty filter and the Bayesian Network are explainable "glass box" methods whose results can readily be examined to understand why certain scores or classifications were provided – which we expect to be invaluable for human-in-the-loop interactive analytics. We show promising model performance on a synthetic data set designed to represent some of the challenges specific to detecting rare events of interest from small, sparse multivariate data.


With proof of concept demonstrated, performance comparison against other methods on a diverse range of datasets is prudent. While our method is intended to work only on nonnegative features with positive directionality, such features should be attainable from other data sets via appropriate transformation and feature extraction methods. Further study is warranted into the use of classification probabilities in analysis (e.g., classifying samples as ambiguous). Investigation of other BN structures and filtering techniques may be appropriate for some data. Expansion of this method to data with temporal patterns is of interest and should be feasible via Dynamic BNs, in combination with augmenting the novelty filter analysis with features that encode state change detections and other temporal patterns.

Acknowledgments. This research was funded by the NNSA Office of Defense Nuclear Nonproliferation Research and Development, Office of Proliferation Detection (NA-221). Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND No. SAND2021-0992 C.

References
1. Harrison, D.C., Seah, W.K.G., Rayudu, R.: Rare event detection and propagation in wireless sensor networks. ACM Comput. Surv. 48(4), Article 58 (2016). https://doi.org/10.1145/2885508
2. Zhao, J.H., Li, X., Dong, Z.Y.: Online rare events detection. In: Zhou, Z.H., Li, H., Yang, Q. (eds.) PAKDD 2007. LNCS, vol. 4426, pp. 1114–1121. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-71701-0_126
3. Benferhat, S., Tabia, K.: On the detection of novel attacks using behavioral approaches. In: Proceedings of the Third International Conference on Software and Data Technologies, Volume PL/DPS/KE, ICSOFT 2008, Porto, Portugal, pp. 265–272 (2008)
4. Cheon, S.-P., Kim, S., Lee, S.-Y., Lee, C.-B.: Bayesian networks based rare event prediction with sensor data. Knowl.-Based Syst. 22(5), 336–343 (2009). https://doi.org/10.1016/j.knosys.2009.02.004
5. Wong, W., Moore, A., Cooper, G., Wagner, M.: Bayesian network anomaly pattern detection for disease outbreaks. In: Fawcett, T., Mishra, N. (eds.) Proceedings of the Twentieth International Conference on Machine Learning, Menlo Park, California, August 2003, pp. 808–815. AAAI Press (2003)
6. Uusitalo, L.: Advantages and challenges of Bayesian networks in environmental modelling. Ecol. Model. 203(3–4), 312–318 (2007). https://doi.org/10.1016/j.ecolmodel.2006.11.033
7. Zheng, F., Webb, G.I.: Tree augmented naive Bayes. In: Sammut, C., Webb, G.I. (eds.) Encyclopedia of Machine Learning. Springer, Boston (2011). https://doi.org/10.1007/978-0-387-30164-8
8. Shi, H.-B., Huang, H.-K.: Learning tree-augmented naive Bayesian network by reduced space requirements. In: Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, vol. 3, pp. 1232–1236 (2002)
9. Tjoa, E., Guan, C.: A survey on explainable artificial intelligence (XAI): towards medical XAI. arXiv:1907.07374v5 [cs.LG] (2020)
10. Vishnu, T.V., Gugulothu, N., Malhotra, P., Vig, L., Agarwal, P., Shroff, G.: Bayesian networks for interpretable health monitoring of complex systems. In: AI4IOT Workshop at International Joint Conference on Artificial Intelligence (IJCAI) (2017)
11. Wiegerinck, W., Burgers, W., Kappen, B.: Bayesian networks, introduction and practical applications. In: Bianchini, M., Maggini, M., Jain, L. (eds.) Handbook on Neural Information Processing. ISRL, vol. 49, pp. 401–431. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36657-4_12

LDAvis, BERT: Comparison of Method Application Under Corona Conditions

Heike Walterscheid(B), Veton Matoshi, Katarzyna Wiśniewiecka-Brückner, Klaus Rothenhäusler, and Frank Eckardt

Baden-Wuerttemberg State University Loerrach, Hangstraße 46-50, 79539 Loerrach, Germany
{walterscheid,matoshi,brueckner,rothenhaeusler}@dhbw-loerrach.de, [email protected]

Abstract. In a previous modeling study, the automatic topic modeling method LDAvis was used to identify the thematic structure of a local discourse. Through this analysis, it became possible to determine which issues were relevant to the community in the period 2001–2020. The modeling specifications were adapted, whereby the nominalization of the medial linguistic data played a decisive role. The digital form of local discourse analysis developed for southern Germany was then applied to another local corpus and a nationwide corpus. The existing method is the basis for the configuration and validation (correlations and causalities) of a neural network, with the aim of automating the process of identifying and evaluating local discourses. The results of the LDAvis modeling were validated using BERT calculations and found to be reliable. In this paper, the identified procedures are applied to new digitally accessible datasets and tested again. Keywords: LDAvis · BERT · Discourse analysis · Geolocality · Corona · Local discourses · Nationwide discourses

1 Introduction

Based on the results of the pilot study on capturing the thematic structure of a local mass media corpus, the determined procedures and approaches are extended to two corpora (local and nationwide) within the framework of a comparative-contrastive approach and subjected to further corpus-guided experiments using additional instruments (BERT). Following the pilot project, local media from the southern German region were again selected to obtain the local data (see footnote 1). The local origin and generation of the texts, published between 18 October 2019 and 18 December 2020, was a central aggregation criterion for all selected sources and all data obtained from them. No thematic predefinition took place. In each case, entire texts accessible in electronic form were considered. The data volume of this local corpus amounts to approx. 1.02 million tokens.

1 Corpus content: local media of the southern German regions Freiburg and South Baden: Badische Zeitung (www.badische-zeitung.de/), Wochenblatt (www.wzo.de/home), Baden FM Das Radio für Freiburg und Südbaden (www.baden.fm), Radio Regenbogen (www.regenbogen.de).


This corpus was contrasted with a nationwide corpus containing texts of national character. In this case, the selection was limited to major nationwide media with no access restrictions (see footnote 2). The publication period was retained. To ensure comparability, there was again no thematic predefinition, and only electronically accessible texts were used. The size of the nationwide corpus amounts to approx. 270 million tokens.

2 Corpus content: Die Zeit (www.zeit.de), Frankfurter Allgemeine Zeitung (www.faz.net), Frankfurter Rundschau (www.fr.de), Süddeutsche Zeitung (www.sueddeutsche.de), Der Spiegel (www.spiegel.de), Bild (www.bild.de), Focus (www.focus.de), Die Welt (www.welt.de) and Stern (www.stern.de).

2 Data Preparation: Nominalization, Toponyms, Topic Count

In the pilot study, nominalization of the corpus was identified as a purposeful step that expands the possibilities of interpretation and speeds up the interpretation process. Nominalization is therefore adopted in the current study. The explicit retention of toponyms proved helpful for identifying topics and georeferencing them, which is why the toponyms are retained in masked form. The number of topics for the local and the nationwide corpus was determined using an automatic calculation of the coherence score. The determined topic counts were used to analyze the local and nationwide data under examination, and their quality was inspected during the study. The number was validated both manually and automatically (BERT). The measures developed for evaluating a topic breakdown [1] give some clues, but these are not sufficient to dispense with manual control of the calculated topics, especially when a text collection does not have the usual mass media character but is characterized by its local specificity.
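A minimal sketch of coherence-based topic count selection of the kind described above, using gensim; the candidate range, hyperparameters, and function name are illustrative assumptions, not the study's configuration.

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

def best_topic_count(texts, candidates=range(20, 141, 20)):
    """Fit LDA for several topic counts and return the count with the
    highest coherence (c_v measure). texts: list of token lists, e.g.,
    the nominalized corpus documents."""
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(doc) for doc in texts]
    scores = {}
    for k in candidates:
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=k, random_state=0, passes=5)
        cm = CoherenceModel(model=lda, texts=texts,
                            dictionary=dictionary, coherence='c_v')
        scores[k] = cm.get_coherence()
    return max(scores, key=scores.get), scores
```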

3 Material Exploration: LDAvis, BERT

While the processing pipeline provides a variety of categories that can be evaluated in a detailed analysis of the discourse [2–4], the first step is to determine which discourse topics are relevant at all. Since unknown discourses are the subject of the study, no a priori themes are predetermined; instead, the themes that constitute the text corpus are identified inductively, in a data-driven way, with the help of the topic modeling procedure LDAvis. Text production is modeled as a statistical generative process. If a sufficiently large number of results of this generative process is available (a text corpus), then, with a sophisticated mathematical choice of random variables, their parameters can be estimated. This means that the topics on which a text collection is based can be represented as probability distributions over words. For the "back-calculation" of the topics, common statistical methods such as Monte Carlo simulation or Gibbs sampling are available.

In a second step, the previously used topic modeling method LDAvis is compared to another topic modeling method using the pre-trained language model BERT [5] developed by Google. BERT uses an attention-based learning method [6]. Because of its context-sensitive word disambiguation, BERT has been found to be highly accurate and versatile in a variety of applications, including topic modeling [7].

3.1 LDAvis

The coherence score was calculated for the examined corpora. For the local corpus, the score was approx. 0.50 with 60 topics for a size of approx. 1 million tokens; for the nationwide corpus, it was approx. 0.41 for a size of approx. 270 million tokens and resulted in 120 topics. For both the local and the nationwide corpus, a particular salience of the theme Covid-19 could be identified, which, due to the corpus size, shows a different vocabulary spectrum. Based on the data, 51 lemmas could be identified as theme-specific in the local corpus. In the nationwide corpus, 183 units could be assigned to the theme Corona. These numbers count units without duplicates (see footnote 3). Term extraction was based on the interplay of automatic and manual evaluation processes, in which the unique evidence of the compounds was considered. In addition to the unique compounds, and based on topic assignment and world knowledge, types that did not contain the evident components could also be assigned to the theme. These were, to name just a few, Geisterspiele, Kurzarbeit, and Verschwörungstheoretiker, established as the pandemic unfolded. Accordingly, the specificity of the theme brings, in addition to the formal feature of the compound noun, special theme-related everyday lemmas (Absage, Regel, Welle) as well as theme-related expert lexis (Pandemie, Virus, Remdesivir), which were easily identified and assigned to the topic. The total number of lemmas extracted from the nationwide corpus exceeds that of the local corpus by more than three times.

The results were evaluated according to the coherence calculation, which for the local corpus was about 0.50 for 60 topics, whereby their quality shows material-related specifics. Among these, the overrepresentation of toponyms and anthroponyms is the most important; it can be regarded as an integral feature of local corpora and thus as a constant characteristic. The masked entities were partially revealed multiple times in 53 out of 60 topics of the local corpus. On the one hand, this overrepresentation represents an automatically recorded bundling criterion, in which the geographic distribution of the respective theme plays a central role; on the other hand, it can cause a diffusing effect. Recording the actual theme strands, which are independent of geolocality, causes theme splitting and theme dispersion. However, since the present study evaluates a local corpus in comparison to a nationwide corpus, the specificity stemming from the locality of the data must be explicitly retained in order to uncover potential dynamics at the microlocal level. This is particularly important for identifying socially relevant themes and their salience in the corpus. No similar overrepresentation of toponyms or anthroponyms was found in the analysis of the nationwide corpus.

3 Formally and semantically identical units are classified as doubles in the article. In the case of semantic identity and formal (orthographic, grammatical) dissimilarity, variants are considered and listed separately. Cf. Coronakrise: Corona-Krise.


In the local corpus [=loc], whose topic sizes range between 6.9% and 0.9%, the theme Coronavirus was found to occur to varying degrees in 24 of 60 topics. This corresponds to an occurrence of ≈40% in the middle size range (2.1%) and in the final size range (1%). The 30 most frequent terms of each topic at relevance value λ = 1 were evaluated. Among the totality of the most frequent terms (total number: 1800), Corona-related terms appear 54 times (duplicates included) and account for 3% of the term inventory determined for the 60 topics. For the nationwide corpus [=nat], whose topic sizes range between 2.4% and 0.4%, Corona signals could be detected to varying degrees in 79 of 120 topics. This corresponds to an occurrence of ≈65% across almost the entire topic size spectrum between 1.9% and 0.4%. Analogous to the local corpus, the 30 most frequent terms of each topic at relevance value λ = 1 were also evaluated in the nationwide corpus. Among the entire set of the most frequent terms (total number: 3,600), Corona-related terms appear 201 times (duplicates included) and account for 5.5% of the term inventory determined for the 120 topics.

Comparing the results makes clear that the distribution of the theme at the topic level is about 25 percentage points higher in the nationwide corpus than in the local corpus. At the level of the term inventory, a more than threefold (183: without doubles) and almost fourfold (201: with doubles) salience can be observed in the nationwide corpus compared to the local corpus (51: without doubles; 54: with doubles).

In both corpora there are topics that reveal a certain accumulation of the identified Corona terms, and these play a decisive role in theme assignment. Depending on the corpus, the accumulation is identified proportionally within the corpus according to the extraction results. While in the local corpus the pandemic theme is considered an accumulation at as few as five terms per topic, this being its most frequent occurrence (see footnote 4), in the nationwide corpus accumulations of 19 terms per 30 most frequent terms could be determined after internal proportioning (see footnote 5). Likewise, 18 (cf. [T36A-nat (1%): Test, Arzt, Covid-19, Patient, Marke, Wissenschaftler, Welle, Ausbruch, Behandlung, Ansteckung, Entwarnung, Studie, Corona-Patienten, Corona-Hotspot, Lungenkrankheit, Furcht, Erreger, Antikörper]) and 14 (cf. [T9A-nat (1,4%): Pandemie, Angst, Neuinfektion, Corona-Pandemie, Anstieg, Infektionszahl, Atem, Fallzahl, Schutzmaske, Coronavirus, Todesfall, Hoch, Rekordzahl]) terms per 30 most frequent terms occur.

The corpus-dependent accumulation of terms plays an important role in determining the diffusion of the theme within the theme spectrum, and if the number of Corona-affine terms is high (above 15), it is an important indicator for assigning the theme to the given topic. However, if the accumulation is not so large, the accumulation value must be read in conjunction with the placement of the Corona lexis in the ranking of the most frequent terms. If a Corona-affine term occupies the first or one of the higher ranked positions, the Corona situation is explicitly considered in the theme assignment: for Topic [T27A-loc (2.7%)], the term Pandemie-Lage occupies first place in the ranking list of the most frequent terms, and the number of Corona-affine lexemes appearing in its inventory amounts to five in total. Such a constellation allows the whole theme of this topic to be interpreted as the pandemic situation. The situation is similar for Topic [T19A-loc (1,6%): Coronavirus, Ausbreitung], in which the term Coronavirus plays the central role, even though the total number of Corona-affine terms amounts to two. Analogous phenomena can be observed in the nationwide corpus. For topics where the accumulations of Corona-affine terms are comparatively smaller, the ranking of the Corona-affine terms has to be examined. Thus, due to the constellation of the terms Bewohner, Einrichtung and Pflegeheim, which prominently occupy the ranking, the theme Situation in Pflegeheimen und Pflegeeinrichtungen can be attributed to Topic [T28A-loc (1.5%)]. Similarly, Topic [T6A-nat (1.6%)], for example, could be attributed the theme Corona-Auswirkungen auf die Börse after considering the first and second ranks of Börse and Grund, due to the constellation of the terms Coronakrise, Lockdown, Corona-Fälle, Gesundheitsamt, Corona-Auflagen, in which Coronakrise occupies the third rank.

4 Cf.: [T7A-loc (2,7%): Absage, Corona-Zahlen, Schutzmaske, Krankheit]; [T27A-loc (1,5%): Pandemie-Lage, Pandemie, Corona-Regeln, Corona-Infektionen, Hygieneregel]; [T28A-loc (1,5%): Quarantäne, Corona-Krise, Korona-Fälle, Fall, Welle]; [T41A-loc (1,2%): Krise, Lockerung, Covid-19, Fall, Beschränkung].
5 Cf.: [T8A-nat (1,4%): Coronavirus, Virus, Kampf, Krankheit, Ausbreitung, Mensch, Schuld, Medikament, Covid-19, Mediziner, Verlierer, Corona, Verbreitung, Corona-Pandemie, Infektion, China, Bundesgesundheitsminister, Nachbarland, Besserung].

In the identified topics per corpus in which Corona terms appear, the following relative clusters can be identified (Table 1):

Table 1. Count of topics with Corona terms according to the relative frequency of Corona terms.

                  | Topic count with Corona terms | Ratio topic count : Corona terms per topic
Local corpus      | 24                            | 4: 5; 2: 4; 7: 2; 11: 1
Nationwide corpus | 79                            | 37: 1; 23: 2; 7: 3; 4: 4; 2: 5; 1: 6; 1: 7; 1: 8; 1: 14; 1: 18; 1: 19
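Since the term rankings above are taken at relevance value λ = 1, a brief sketch of the relevance measure on which LDAvis term rankings are based may be helpful; the helper names are ours, not part of LDAvis or the study's code.

```python
import numpy as np

def relevance(phi_k, p, lam=1.0):
    """LDAvis term relevance: lam * log p(t|k) + (1 - lam) * log(p(t|k)/p(t)).
    With lam = 1, terms are ranked purely by their in-topic probability,
    which is the setting used in the analysis above."""
    return lam * np.log(phi_k) + (1 - lam) * np.log(phi_k / p)

def top_terms(phi, p, topic, vocab, lam=1.0, n=30):
    """Return the n highest-relevance terms of one topic.
    phi: topic-term distribution (topics x vocab), p: marginal term probs."""
    r = relevance(phi[topic], p, lam)
    return [vocab[i] for i in np.argsort(r)[::-1][:n]]
```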

The interpretation of a topic's theme thus depends not only on the number of Corona-affine terms, but to a high degree on the placement of these terms within the list of the 30 most frequent terms. It should be noted that, in accordance with real events and recent developments, the theme Corona plays a major role in the media material studied (Table 2):

Table 2. Corona topic distribution and Corona term distribution in percent.

                  | LDAvis (topic level) | LDAvis (term level)
Local corpus      | ≈40%                 | ≈3%
Nationwide corpus | ≈65%                 | ≈5.5%

Of the 40% Corona-affine topics in the local corpus, two separate topics can be characterized as Corona main themes. In the other topics, the Corona theme is embedded, to gradually differing degrees, in the social events it is able to influence. Of the 65% in the nationwide corpus, six separate topics can be characterized as Corona main themes. In the remaining 73 topics, the Corona theme is integrated into the topic structure to different degrees and can be determined analogously to the local corpus.

3.2 BERT

Following the existing topic modeling method BERTopic [8] and the approach of [7], a custom Python application was written to sort the scraped documents by theme. The pre-trained BERT model "bert-base-german-cased" [9] was used. The procedure consisted of transforming the available articles by means of BERT into mathematically computable word embeddings and clustering these according to their semantic similarity. This reveals a crucial difference from LDA: while LDA assigns each article to a theme and allows multiple assignments, the BERT method assigns only one topic to an article. The topic assignment is done by the clustering method HDBSCAN [10], which provides the advantage that articles that are difficult to classify thematically are simply not classified, meaning they can initially be considered outliers. Although this results in a certain percentage of articles remaining unclassified, it also makes the topic assignment more accurate. To make the topics interpretable, the most characteristic words had to be extracted to serve as labels. Again, only nouns were used, to achieve the best possible interpretability. The extraction of these terms was done using the Tf-idf measure [11].

The individual topics were then classified and categorized as Corona-relevant or Corona-irrelevant. Topics were categorized as Corona-relevant if they contained the key terms identified in the LDAvis process. It must be emphasized that not every document assigned to a Corona-relevant theme necessarily had actual points of reference to the Corona topic. Regardless of these irregularities, the results show that topic modeling achieves a relatively good abstraction of the temporal development of the discourse.

The BERT procedure was applied to the local and nationwide media data. For the local media, a total of 6123 topics were determined, whereby 2000 articles (approx. 6%) were not assigned to a topic; these were not considered in the subsequent analysis. On the basis of the defined set of key terms signifying Corona relevance, a total of 313 topics (approx. 5%) were classified as Corona-relevant, with 4 being the highest number of Corona-relevant terms determined for a topic. The categorization by Corona-relevant topics allows a visualization of the development of the Corona discourse over time. Figure 1 shows this development for the local media: the percentage of Corona-relevant articles per month is indicated. For the local media, Corona jumps in relevance from February 2020 and peaks in April. While Corona then increasingly loses relevance as a topic, a new increase in Corona-relevant topics can be seen from July and, after a brief decline, again from September onwards:

Fig. 1. Share of Corona-relevant contributions in percent, local media.

A similar picture is found for the nationwide media. A total of 89582 topics were identified, whereby 22343 articles (approx. 4%) were not assigned to a topic; these were not considered in the following analysis. Based on the set of key terms signaling Corona relevance, a total of 13251 topics (approx. 15%) were classified as Corona-relevant, with 9 being the highest number of Corona-relevant terms determined for a topic. Figure 2 shows the temporal course of the Corona discourse for the nationwide media: the percentage of Corona-relevant articles per month is given. For the nationwide media, Corona jumps in relevance starting in February 2020 and peaks in March and April. While Corona increasingly loses relevance as a topic over the summer months, a sharp increase in Corona-relevant topics can be seen from July and, after a brief decline, again from September 2020:

Fig. 2. Share of Corona-relevant contributions in percent, nationwide media.

If one compares the timeline of the Corona discourse in both the local and nationwide media, one can clearly see that the development of the Corona discourse correlates with the number of new infections [12] and [13].
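The pipeline described in Sect. 3.2 can be outlined in a few lines of code. The following is a minimal sketch, not the authors' actual application: it assumes the sentence-transformers, umap-learn, hdbscan and scikit-learn packages, a hypothetical corpus file, and illustrative parameter values.

```python
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
import umap
import hdbscan

# Hypothetical corpus file: one scraped article per paragraph.
articles = open("korpus.txt", encoding="utf-8").read().split("\n\n")

# 1. Turn articles into BERT-based embeddings.
embedder = SentenceTransformer("bert-base-german-cased")
embeddings = embedder.encode(articles)

# 2. Reduce dimensionality, then cluster by semantic similarity.
reduced = umap.UMAP(n_components=5, metric="cosine").fit_transform(embeddings)
labels = hdbscan.HDBSCAN(min_cluster_size=15).fit_predict(reduced)
# HDBSCAN labels hard-to-classify articles -1: they stay unclassified (outliers).

# 3. Label each topic with its most characteristic terms via Tf-idf.
# (The paper additionally restricts labels to nouns; that filter is omitted here.)
docs_per_topic = {}
for text, label in zip(articles, labels):
    if label != -1:
        docs_per_topic[label] = docs_per_topic.get(label, "") + " " + text

vectorizer = TfidfVectorizer(max_features=10_000)
tfidf = vectorizer.fit_transform(docs_per_topic.values())
terms = vectorizer.get_feature_names_out()
for row, topic in zip(tfidf.toarray(), docs_per_topic):
    top_terms = [terms[i] for i in row.argsort()[-5:][::-1]]
    print(topic, top_terms)
```

A topic is then classified as Corona-relevant by checking its top terms against the key-term set obtained from the LDAvis step.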

4 Conclusion

The application of both methods to the same data sets was carried out here to reveal their respective strengths and weaknesses in comparison. While LDAvis can be characterized as a corpus-guided method, the BERT model proved to be a tool suitable for corpus-based studies: many parameters must be predefined, but it does not require such extensive manual rework. While the results of modeling with BERT seem to be unaffected by corpus size, the optimal amount of data for LDAvis remains an open question. The results of the LDAvis modeling presented in this paper require further readjustments in the handling of data preparation. The BERT procedure outlined here is expandable and will be continuously improved. The presentation of the results of the BERT procedure serves several purposes: on the one hand, the BERT approach should be checked for its accuracy by means of the comparison to the older LDA method; on the other hand, new perspectives for topic modeling should be demonstrated. The salience of the Corona theme can be rated as very good for LDAvis and good for BERT. Furthermore, LDAvis offers a possibility to include spatial and temporal components in the evaluation, so that, especially in local corpora, certain toponyms can be localized in space and time. The use of toponyms and anthroponyms will therefore be examined in future studies because, given their local specificity, they allow a micro-perspective on the dynamics of regional events and local discourses. This research area has been little explored so far.

Acknowledgment. This research work is kindly supported by the Dr. K.H. Eberle Foundation, Germany.

References

1. Röder, M., Both, A., Hinneburg, A.: Exploring the space of topic coherence measures. In: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pp. 399–408 (2015)
2. Bubenhofer, N., Rothenhäusler, K., Affolter, K., Pajovic, D.: The linguistic construction of world: an example of visual analysis and methodological challenges. In: Scholz, R. (ed.) Quantifying Approaches to Discourse for Social Scientists. PSDS, pp. 251–284. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-97370-8_9
3. Bubenhofer, N., Rothenhäusler, K.: "Die Aussicht ist grandios!" – korpuslinguistische Analyse narrativer Muster in Bergtourenberichten. In: Eller-Wildfeuer, N., Rössler, P., Wildfeuer, A. (eds.) Alpindeutsch. Einfluss und Verwendung des Deutschen im alpinen Raum, pp. 39–60. Edition Vulpes, Regensburg (2018)
4. Scharloth, J., Bubenhofer, N., Rothenhäusler, K.: Andersschreiben aus korpuslinguistischer Perspektive: datengeleitete Zugänge zum Stil. In: Schuster, B.-M., Tophinke, D. (eds.) Andersschreiben: Formen, Funktionen, Traditionen, pp. 157–178. Erich Schmidt Verlag, Berlin (2012)
5. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding (2019)
6. Vaswani, A., et al.: Attention is all you need (2017)
7. Angelov, D.: Top2Vec: distributed representations of topics (2020)
8. https://pypi.org/project/bertopic/
9. https://huggingface.co/bert-base-german-cased
10. Campello, R.J.G.B., Moulavi, D., Sander, J.: Density-based clustering based on hierarchical density estimates. In: Pei, J., Tseng, V.S., Cao, L., Motoda, H., Xu, G. (eds.) PAKDD 2013. LNCS (LNAI), vol. 7819, pp. 160–172. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37456-2_14
11. Qaiser, S., Ali, R.: Text mining: use of TF-IDF to examine the relevance of words to documents. Int. J. Comput. Appl. 181, 25–29 (2018)
12. https://www.bing.com/search?q=deutschland+corona+zahlen&cvid=0819028b8f3e419498f81916ea550ab9&pglt=171&FORM=ANNTA1&PC=U531
13. https://experience.arcgis.com/experience/478220a4c454480e823b17327b2bf1d4/page/page_1/

Artificial Intelligence (AI) Coupled with the Internet of Things (IoT) for the Enhancement of Occupational Health and Safety in the Construction Industry

Kavitha Palaniappan1(B), Chiang Liang Kok1, and Kenichi Kato2

1 The University of Newcastle (UoN) Singapore, Singapore, Singapore
{kavitha.palaniappan,chiangliang.kok}@newcastle.edu.au
2 Vital Sign Alert Pte. Ltd., Singapore, Singapore
[email protected]

Abstract. The Singapore WSH (Workplace Safety and Health) Council states that 6 out of 17 fatal injuries in the first half of the year 2019 came from the construction industry and that the top 3 causes of fatal and major injuries are falling from heights, machinery-related injuries, and slips, trips or falls. The government has come up with various frameworks to reduce such accidents and incidents, incorporating 4 main factors: policy, personnel, process and incentive factors. However, it has been found that almost 25% of safety hazards still go unnoticed by workers. Hazard identification at a site is a visual search and analysis process that can be hindered by various human factors such as the workers' physiological and psychological state, attention span, bias and risk tolerance levels. This paper explores the possibilities of coupling AI with IoT to overcome those challenges.

Keywords: Artificial Intelligence · Internet of Things · Construction industry

1 Introduction

Right from its inception, the construction industry has been regarded as one of the most dangerous industries to work in due to the high fatality rates recorded across the globe. Studies have shown that the fatality rate in the construction industry is about three times the national average of all industries put together in the USA and Europe [1]. The scenario is no different in Singapore. As per the Singapore Workplace Safety and Health (WSH) Council, 6 out of 17 fatal injuries recorded in the first half of the year 2019 were from the construction industry [2]. The Council also reports the top 3 causes of fatal and major injuries as falling from heights, machinery-related injuries and slips, trips or falls. As this industry is the hardest hit in terms of occupational fatalities and injuries, there is a constant search for ways of predicting accidents before they can happen. Furthermore, the situation is made more complicated by the technological advancements that are being rapidly introduced into the field in order to keep pace with changing environments and market trends.


2 Legislations and Government Policies

The government does take action and keeps reviewing and amending its safety legislation and policies concerning construction industry safety in order to curb injury rates. However, it also indicates that almost 73% of fatalities are due to systemic lapses in the planning and execution of the various activities in the industry [2]. One study has suggested a revised framework, the 3P + I model, to analyse construction industry accidents and develop systemic approaches to safety-related practices in order to prevent future accidents [3]. In the 3P + I model, the 3P stand for three factors, namely policy factors, personnel factors and process factors, and the I represents incentive factors.

The policy factors are at the top of the hierarchy and depend on the government's legislation and the policy framework put in place to ensure safety in the construction industry. The mere presence of such legislation and policies is not going to bring about safety in each and every company; it is essential to understand the reasons behind them and ensure compliance at the individual company level. Next come the personnel factors at the middle management level, which revolve around the initiatives taken by a construction company's management staff to ensure safety at the workplace. This includes a variety of aspects such as knowledge and experience, commitment and dedication, influence and training of supervisors and workers, and attitude towards the safety culture at the workplace. The process factors come in at the lowest level and depend on the implementation of the above two factors by the workers at the actual worksite. This requires personal commitment and awareness from each and every worker at the construction site. The final factor is the incentive factor, which relates to rewarding a person who has stuck by the safety regulations and policies and punishing another who has not followed the stipulations. There are quite a lot of studies pertaining to this in the literature, and whether incentives should be provided for good practices remains a controversial topic [4, 5]. However, the literature does clearly indicate that there must be some kind of punishment to deter workers from not following the regulations [4].

3 Machine Learning Models

Just identifying the causes of fatal injuries may not be sufficient to curb them. One should also be able to predict potential accidents by observing sites prior to the occurrence of any accident at all. Trials have been done to automatically extract the fundamental, context-free attributes of the work environment from raw injury reports with Natural Language Processing (NLP) systems [6] in order to enable such predictions at construction sites. Initially, the authors predicted outcomes using machine learning models [7] from the same documents that had been used to extract the attributes, which led to certain limitations; these were overcome later by using independent outcome measures [8]. Thus, these authors claim that the attributes are able to predict with precision and accuracy the incident types, injury types, body parts affected and the severity of injuries.

So the industry now has certain ways to predict potential injuries, and workers can be trained to identify hazards in the workplace and prevent them. However, many other human factors, such as the psychological states of the workers, stress levels, attention span, perceptions or biases towards accident types and nature, risk tolerance levels and physiological conditions, can hinder their ability to recognize potential hazards, and studies have indicated that in spite of all the training that workers go through, about 25% of safety hazards still go unrecognized [9]. This is where Artificial Intelligence (AI) and the Internet of Things (IoT) come in handy.
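As an illustration of the kind of text-based prediction described in [6-8], and not the cited authors' actual system, the following minimal sketch assumes scikit-learn and a hypothetical file of labeled injury reports, and trains a simple classifier to predict an injury outcome from report text:

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per injury report, with the report text
# and an independently recorded outcome label (e.g., injured body part).
df = pd.read_csv("injury_reports.csv")  # columns: "report_text", "outcome"

X_train, X_test, y_train, y_test = train_test_split(
    df["report_text"], df["outcome"], test_size=0.2, random_state=0)

# Bag-of-words features stand in for the hand-engineered precursors of [6].
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```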

4 Artificial Intelligence

In simple terms, when the machine learning explained above is fed into computers and the computers are made to do tasks that generally require human intelligence, the result is what is commonly known as Artificial Intelligence (AI). The construction industry has already been using AI for a variety of purposes such as welding, demolition activities and laying bricks [10]. Likewise, AI can be used for safety in the construction industry, enabling us to overcome the human factors that would otherwise be limiting, as explained earlier.

4.1 Principles and Methodology

In this paper, we suggest the use of unmanned hybrid systems, such as an AI predictive algorithm, to improve overall occupational safety on construction sites. AI technology can be utilized to monitor and perform predictive analysis in real time through active monitoring of every human movement with high-definition (HD), high-accuracy cameras. Furthermore, more salient features could be coupled to such a highly predictive analytical system by including a network of wearable Internet of Things (IoT) devices worn by each construction worker. A highly probable scenario is that when the AI device predicts a high probability of a dangerous fall from height, it triggers the devices (in vibration mode) worn by workers in the vicinity of the potential victim. The target's own device, on the other hand, gives out a loud, audible warning to alert others that he is at high risk of falling from a dangerous height. This alerts potential helpers or bystanders in the vicinity to quickly assist and prevent a catastrophic event.

4.2 Sub-systems

We shall now look at the different sub-systems that constitute this process to monitor and prevent the occurrence of accidents at the workplace. The sub-systems include general motion sensors, vision systems, transmitters, receivers and mobile robots.


4.2.1 Motion Sensors

Motion sensors that detect acceleration signals are usually placed at the lower back, which is close to the center of gravity in a human [11]. Usually, a mechanical polysilicon sensor senses motion along the X, Y and Z axes. A ±3g measurement range would normally suffice. An example of such a polysilicon micromechanical (MEMS) sensor is the ADXL330 from Analog Devices [12], which produces small and sensitive accelerometer signals for wearables. Gyroscopic sensors are another type of polysilicon MEMS sensor, specialized in measuring Coriolis forces.

4.2.2 Vision Systems

The key element in the vision system is the ability of an autonomous robot to navigate a targeted surrounding so as to optimize the capturing of all subjects of interest, which in this case are the workers wearing the devices. Since each wearable device has a unique identification registered to its wearer, every worker on the construction site is tracked and monitored. To ensure optimal visual analysis, a high-definition camera system is to be used; an example is a camera that uses the complementary metal-oxide-semiconductor (CMOS) image sensor AR0237 RGB-IR [13] in a stereo vision design. The benefit of using a stereo camera is that it provides three-dimensional depth perception, which is required for estimating the distance of a subject-of-interest (SOI), especially in a crowded environment like a construction site (Fig. 1).

Fig. 1. Illustration of the setup of a two-camera system with projective optics arranged side by side such that the fields of view overlap at the desired object distance.
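For such a side-by-side stereo pair, depth follows from the standard pinhole triangulation relation Z = f · B / d, where f is the focal length in pixels, B the baseline between the two cameras and d the pixel disparity of the SOI between the two images. A minimal sketch with purely illustrative parameter values:

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Distance to the subject-of-interest from pixel disparity (pinhole stereo model)."""
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1400 px focal length, 12 cm baseline, 35 px disparity.
print(round(stereo_depth(35, 1400.0, 0.12), 2), "m")  # -> 4.8 m
```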


4.2.3 Transmitters

We suggest using radio-frequency microcontroller systems as transmitters. With the advent of wireless communication on wearable devices, merging the wireless integrated circuit with the central processing unit has become so common that IC manufacturers produce what is called a system-on-chip (SoC), which puts the Bluetooth function on a microcontroller IC. One example is NXP's QN902X Ultra-Low Power BLE System-on-Chip (SoC) [14], which integrates a Bluetooth v4.0 Low Energy radio with an ARM Cortex-M0 microcontroller unit (MCU). These low-powered microcontrollers provide the best of both key components on a single integrated circuit and allow continuous transmission of data packets to receivers stationed within the permissible range.

The transmitters are units attached to the wearable system that each worker wears. All sensors, in this case the accelerometer and gyroscope, are connected to the transmitter unit (Fig. 2), together with the power unit, which comprises rechargeable batteries, a battery charging unit and a power management unit; an Audible Alert System, which contains a waterproof speaker in case it rains; and a Vibration Unit that is activated in the event that the individual is near the falling victim.

Fig. 2. Block diagram for the transmitter unit

Usually, the environment on a construction site is mostly open and unobstructed. This creates a good environment for RF devices, as line-of-sight conditions give a longer communication range than an urban location with many walls and buildings that block the signals.
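To make the alerting behaviour concrete, the following is a minimal sketch of the decision logic a sensor node could run on its accelerometer stream. The near-free-fall and impact thresholds are illustrative assumptions, not values taken from the cited datasheets:

```python
import math

FREE_FALL_G = 0.4   # magnitude well below 1 g suggests the wearer is falling
IMPACT_G = 2.5      # a subsequent spike suggests an impact

def accel_magnitude(ax: float, ay: float, az: float) -> float:
    """Combined acceleration of the 3-axis MEMS sensor, in units of g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify(samples):
    """Scan a short window of (ax, ay, az) readings for a fall signature."""
    falling = False
    for ax, ay, az in samples:
        g = accel_magnitude(ax, ay, az)
        if g < FREE_FALL_G:
            falling = True      # candidate free-fall phase
        elif falling and g > IMPACT_G:
            return "ALERT"      # trigger audible alarm and notify the vicinity
    return "OK"

# Simulated window: standing still, brief free fall, impact.
window = [(0.0, 0.0, 1.0), (0.1, 0.0, 0.2), (1.8, 1.2, 2.0)]
print(classify(window))  # -> ALERT
```

On an ALERT, the transmitter unit would activate its own Audible Alert System and send a BLE notification so that nearby workers' Vibration Units are triggered.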


4.2.4 Receivers

The receivers are units that include multi-RF modules; they not only scan and collect data from the workers' transmitter units but also transmit to the main server located at a remote location of the organization. These units are usually called edge computing or IoT edge processing devices [15], as they move compute power closer to the source rather than performing all processing in a remote data centre, which reduces latency and bandwidth consumption. The wearable transmitter unit on the worker, on the other hand, is called a sensor node. Edge computing devices usually have more processing power than sensor nodes, as they need to perform a certain amount of computation and/or data mining before relevant data are forwarded to the main server, which is normally off-site in a remote location, for further processing and/or access by stakeholders. These stakeholders are usually the users of such a system, who are either accountable or reporting to other stakeholders directly accountable for the safety and security of the workers on-site. Since the transmitters are on BLE, the receiver unit will have a BLE, WiFi and LTE/cellular module for connecting to backend/remote servers, and a GPS module for localization, tracking and navigation of the platform the edge compute unit is mounted on.

4.2.5 Mobile Robots

In the final sub-system, the edge computing device is mounted on an autonomous robot, either on a mobile platform, which could be on land or in the air, or in a fixed and controlled environment for self-navigation. The autonomous or semi-autonomous robot is required to constantly monitor its environment, with obstacle avoidance as its top priority (Fig. 3).

Fig. 3. An autonomous robot monitoring the movement of vehicles and humans.
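As a toy illustration of the edge processing role described in Sect. 4.2.4, and only one reasonable design rather than a prescribed implementation, an edge unit might aggregate raw sensor-node readings and forward only summaries and anomalies upstream:

```python
from statistics import mean

def edge_filter(readings, alert_g=2.5):
    """Summarize a window of per-worker acceleration magnitudes (in g).

    Returns a compact payload for the remote server: one average per worker,
    plus the individual readings that exceed the alert threshold.
    """
    summary = {w: round(mean(vals), 2) for w, vals in readings.items()}
    anomalies = {w: [v for v in vals if v > alert_g] for w, vals in readings.items()}
    return {"summary": summary, "anomalies": {w: v for w, v in anomalies.items() if v}}

# Hypothetical window of readings collected over BLE from two sensor nodes.
window = {"worker-01": [0.98, 1.02, 1.01], "worker-02": [1.0, 0.22, 2.95]}
print(edge_filter(window))
```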

5 Limitations and Challenges

As the environment on a construction site is usually disruptive, noisy and in high motion, sound and vibration levels would be higher than for normal IoT wearables. One of the key challenges in implementing such technology would be the precise localisation of the workers, since the identification of each IoT device is part of the system. A feasible and viable solution would be a hybrid utilisation of technologies, including indoor and outdoor Global Positioning System (GPS) and Radio Frequency (RF).

6 Conclusion

Thus, by integrating the motion sensors, vision systems, radio-frequency microcontroller systems as transmitters and multi-radio-frequency intelligent controllers as receivers with the mobile robots, an interdependent architecture is created that results in a unified system. The backend or remote server will not only serve as a database consolidation centre for all the deployed edge devices; it should also have an interface to the corporate enterprise system, usually the planning and scheduling system, which oversees not only the deployed units but also the health status of the edge devices, the transmitter units and, most importantly, the SOIs: the workers. Thus, our proposed use of unmanned hybrid systems such as an AI predictive algorithm could improve overall occupational safety on construction sites.

References

1. William, J.A.W., Janocha, J.: Comparing fatal work injuries in the United States and the European Union. Monthly Labor Review, U.S. Bureau of Labor Statistics (2014). https://www.bls.gov/opub/mlr/2014/article/comparing-fatal-work-injuries-us-eu.htm. Accessed 27 Jan 2021
2. Ministry of Manpower: Workplace safety and health reports and statistics. Ministry of Manpower Singapore. https://www.mom.gov.sg/workplace-safety-and-health/wsh-reports-and-statistics. Accessed 27 Jan 2021
3. Teo, E.A.L., Ling, F.Y.Y., Chong, A.F.W.: Framework for project managers to manage construction safety. Int. J. Project Manag. 23(4), 329–341 (2005)
4. Hinze, J.: Safety incentives: do they reduce injuries? Pract. Period. Struct. Des. Constr. 7(2), 81–84 (2002)
5. Lai, D.N.C., Liu, M., Ling, F.Y.Y.: A comparative study on adopting human resource practices for safety management on construction projects in the United States and Singapore. Int. J. Project Manag. 29(8), 1018–1032 (2011)
6. Tixier, A.J.-P., Hallowell, M.R., Rajagopalan, B., Bowman, D.: Automated content analysis for construction safety: a natural language processing system to extract precursors and outcomes from unstructured injury reports. Autom. Constr. 62, 45–56 (2016)
7. Tixier, A.J.-P., Hallowell, M.R., Rajagopalan, B., Bowman, D.: Application of machine learning to construction injury prediction. Autom. Constr. 69, 102–114 (2016)
8. Baker, H., Hallowell, M.R., Tixier, A.J.-P.: AI-based prediction of independent construction safety outcomes from universal attributes. Autom. Constr. 118 (2020). http://ezproxy.newcastle.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=edselp&AN=S0926580519310386&site=eds-live. Accessed 1 Oct 2020


9. Jeelani, I., Asadi, K., Ramshankar, H., Han, K., Albert, A.: Real-time vision-based worker localization & hazard detection for construction. Autom. Constr. 121 (2021). http://ezproxy.newcastle.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=edselp&AN=S0926580520310281&site=eds-live. Accessed 1 Jan 2021
10. Chakkravarthy, R.: Artificial intelligence for construction safety. Prof. Saf. 64(1), 45 (2019)
11. Tamura, T.: Wearable units. In: Tamura, T., Chen, W. (eds.) Seamless Healthcare Monitoring, pp. 212–217. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-69362-0_8
12. ADXL330 Datasheet. Analog Devices, Inc. (2007)
13. 3D-Data with Stereo Vision White Paper. IDS Imaging Development Systems GmbH (2020)
14. NXP QN902X Datasheet. NXP Semiconductors N.V. (2018)
15. Hazelcast – The Leading In-Memory Computing Platform. Hazelcast. https://hazelcast.com/. Accessed 31 Jan 2021

Graph-Based Modeling for Adaptive Control in Assistance Systems

Alexander Streicher1(B), Rainer Schönbein1, and Stefan Pickl2

1 Fraunhofer IOSB, Karlsruhe, Germany
{alexander.streicher,rainer.schoenbein}@iosb.fraunhofer.de
2 Universität der Bundeswehr, Munich, Germany
[email protected]

Abstract. The topic of this contribution is the characterization and analysis of assistance systems in order to enable adaptivity, i.e., personalized adaptive systems. The research question of this article is how to facilitate the modeling effort in adaptive e-learning assistance systems. Adaptivity here means personalizing the usage experience to the individual needs of the users and their current working context. For that, adaptive systems need usage models and user models. The problem statement is that expert knowledge and recurrent effort are needed to create and update these types of models. Data-driven and graph analytics approaches can help here, in particular when looking at standardized interaction data and models which encode sequences such as interaction paths or learning paths. This article studies how to make use of interaction usage data to create sequence-typed domain and user models for use in adaptive assistance systems. The main contribution of this work is an innovative concept and implementation framework to dynamically create Ideal Paths Models (IPM) as reference models for adaptive control in adaptive assistance systems.

Keywords: Adaptivity · Adaptive control · Graph analytics · Modeling

1 Introduction

Assistance systems can support users in achieving their tasks [1, 2]. An intelligent assistance system observes the users' interactions and automatically adapts to the users' needs and their working context [2]. That is, it could change the way the users interact with the system, or provide context-sensitive support, e.g., context-related recommendations such as learning help in intelligent tutoring systems [1, 3]. This paper presents how to characterize and analyze assistance systems to enable adaptivity, hence forming personalized adaptive systems. The research question of this work is how to facilitate the modeling effort in adaptive e-learning assistance systems. Adaptivity in this article means personalizing the usage experience to the individual needs of a user and his current working context. For example, in adaptive e-learning the system can provide the users with adaptive guidance or make dynamic difficulty adjustments, i.e., make the content easier or more challenging. Adaptive guidance could be recommendations on the next best activity following (individual) learning paths [2]. However, adaptivity components in personalized assistance systems need user and domain models to determine the next best course of action. User models typically contain information on how, when and with what a user has interacted, or, for a cognitive user model, information on the cognitive state or load (e.g., stress level). The domain models can be interaction paths or, in e-learning, assessment questions or learning paths [2]. The problem statement is that expert knowledge and recurrent effort are needed to create and update such models. Data-driven and graph analytics approaches may help here, especially when looking at interaction data and models which encode sequences such as usage paths. This article studies how to make use of interaction usage data to create sequence-typed domain and user models for use in adaptive assistance systems. These models contain information on the general usage of the attached systems (domain model) as well as the individual interactions of single users (user model). The concrete research question studies how to create dynamically adjustable and flexible user and usage path models, and how to analyze these models for their application in adaptive control. This contributes to one of the main questions in adaptive assistance: when to actually adapt. The contribution of this work is a concept and implementation framework to dynamically create Ideal Paths Models (Fig. 1) [4] as reference models for adaptive control in adaptive assistance systems. Our graph-based modeling approach links the IPM to domain and user models. The concept shows how to define graph-based IPMs and how to apply them to quantify user performance. The framework presents a software architecture which uses standards-based activity stream tracking data to generate IPMs.

Fig. 1. Example for usage pathways (top) and general Ideal Paths Model concept [4].


Our field of application is assistance systems for education and training, in particular e-learning for image interpretation [4]. Our adaptivity approach tries to keep the users immersed in a gamified interactive learning environment (a so-called serious game [5]) by keeping them in the so-called Flow channel [6], balanced between perceived skills and challenges. This article concentrates on the modeling aspects of usage pathways, which encode the sequence of actions users undertook [7]. An additional challenge is to align the modeling with established standard models to provide a solid basis for the usage paths, learning paths [7], learning goals [2], or learning performance [8]. For example, it must be possible to model and compute learning progress. The literature review points to established domain and user modeling approaches from the field of Intelligent Tutoring Systems (ITS), as described by Woolf [2] or, with a focus on user modeling, by Kurup et al. [9]. Common to all is the separation of the usage pathways into atomic or logically coherent elements. In the context of this article we consider modeling approaches such as the Knowledge Tracing (KT) model with Competence-based Knowledge Space Theory (CbKST) [10]. In comparison, for KT the usage pathways need to be predefined into states or knowledge components and the transitions between them, plus additional models on learning and the alignment to competencies. We adopt this modeling approach, but our concept uses the observed interactions to construct the models' base usage pathways layer. To quantify performance and determine the estimated level of needed assistance we adopt the Performance Factors Analysis (PFA) logistic regression model [11]. The PFA builds upon the KT modeling approach and uses observed success and failure states to compute performance scores.

2 Adaptive Control and Pathways Modeling

Various adaptive assistance systems exist [1, 3, 8], and their underlying principle follows the concept of control systems theory, here adaptive control [12]. General (linear) closed-loop feedback control systems react to observations or measurements to modify the controlled system or plant. With human behavior in the loop, we typically deal with complex, nonlinear processes, which motivates the use of adaptive control theory [12]. Central to this, and central to this work, is the concept of Model Reference Adaptive Control (MRAC) [12, 13], i.e., using reference models to react (or adapt) to parameter changes in a broader and more informed way. For instance, these reference models can provide information on the correct usage pathways which the users should follow, or quantify their performance by computing a metric as the deviation from a targeted pathway. The quantification aspect is of high importance for adaptive systems since they need to know when to actually adapt and in which direction. However, building such reference models is typically done manually, and expert or domain knowledge is needed [2].

Our view on adaptive systems follows the 4-phased adaptivity cycle by Shute and Zapata-Rivera [3]. It structures an adaptivity process in four consecutive phases or stages where each new run depends upon the previous run, hence forming a cycle. Its main components are the four phases (1) capture, (2) analysis, (3) select and (4) present, plus an additional user or learner model after the analysis phase. However, we incorporate the user models into the analysis phase (2) [14]. The argument is that the select phase (3) not only builds upon and uses the user models but also incorporates additional analysis results, such as usage pathway models.

We define a usage pathway (Fig. 1) as the sequence of user interactions with a system. For e-learning, a special form is a learning pathways model. Typically, these models are pre-defined sequences of usage patterns within an e-learning system [7]. Learning pathways are a crucial element for adaptive e-learning systems since they provide information on how learning courses are structured, how to determine if the users are on track, and how to estimate the learning progress (cf. previous section). Without loss of generality this also holds for assistance systems in general. Modeling of these pathways typically follows a standard directed graph model G = (V, E) with vertices or nodes V and edges E. Since our modeling is based on observations, our graph is directed with a linear ordering in the sequence of user interactions. We can differentiate between predefined (offline) usage pathways and effective usage (online) pathways. The former are typically defined in the design and implementation phase of an assistance system [1, 2]; the latter, effective pathways, are dynamically built at runtime.

Two cases of usage pathway adaption can be distinguished: macro and micro pathway adaption. Macro adaption looks at whole pathways: an adaptive system would offer the users recommendations on suitable learning paths, or it would modify the navigation in such a way that suitable pathways are selected for the continuing system usage. Micro adaptation works on individual elements inside pathways, i.e., on the nodes; the recommendations or adaptations are therefore more immediate. In this work the adaptation model uses the micro level. For our modeling approach there is no restriction on the granularity; rather, it follows the level of detail of the observations. If the observations are at a very high, abstract level, then the resulting graph contains only a few nodes, and vice versa. Typically, assistance systems do not record all possible smallest events in a fine-granular way (e.g., all mouse movements), but follow the systems' logical structure (e.g., windows, scenes) and the events on the user-interaction elements (e.g., buttons). Basically, the filtering of which events to observe reflects the decision of what type of adaptivity should occur. For example, if adaptivity is to guide the user only at the macro level, then only the beginning and end of a user session might be required.

In our e-learning application context the important aspect of didactic modeling is not yet made explicit. While the data-driven usage pathway detection hints at how users navigate through the content, it does not directly reveal the didactic model of the e-learning system. Model knowledge of learning pathway levels and the sequencing of content would help in recommending the next learning objects. Learning path levels could be: (1) sequence of (learning) courses; (2) sequence of chapters or missions; (3) sequence of subchapters; (4) sequence of knowledge units (individual scenes, web pages, etc.); (5) interaction sequence within a knowledge unit, e.g., factual knowledge before action knowledge before source knowledge.
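To illustrate the directed graph model G = (V, E) introduced above, the following minimal sketch, assuming the networkx package and invented activity names, builds a weighted usage pathway graph from observed (actor, verb, object) statements:

```python
import networkx as nx

# Hypothetical xAPI-style statements: (actor, verb, object).
statements = [
    ("john", "launched", "briefing"),
    ("john", "completed", "mission-1"),
    ("john", "completed", "mission-2"),
    ("mary", "launched", "briefing"),
    ("mary", "completed", "mission-1"),
]

G = nx.DiGraph()
last_activity = {}  # last observed activity per actor

for actor, verb, obj in statements:
    G.add_node(obj)
    prev = last_activity.get(actor)
    if prev is not None:
        # Edge = observed transition; weight counts how often it occurred.
        w = G.edges[prev, obj]["weight"] + 1 if G.has_edge(prev, obj) else 1
        G.add_edge(prev, obj, weight=w, verb=verb)
    last_activity[actor] = obj

print(list(G.edges(data=True)))  # briefing->mission-1 (weight 2), mission-1->mission-2 (weight 1)
```

The granularity of the resulting graph follows directly from which events are tracked, as discussed above.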

3 Data-Driven Modeling for Flexible Adaption

The idea is to use the observations from standardized tracking data to create graph-based common path models. These common paths are needed for our concept of the Ideal Paths Model (IPM) (Fig. 1) [4] as a reference model for adaptivity. The IPM describes all necessary steps to achieve the objective without unnecessary detours. Essentially, it is a sequence of episodes and interactions that leads most directly to the next goal. For instance, in a computer simulation for reconnaissance training, users first select those interaction elements which lead them to a virtual command center where they are briefed on their first mission (Fig. 1). A scene can have multiple manifestations for each possible user interaction. These interactions are observed or tracked in an assistance system, typically in the form of activity streams, e.g., "John has completed reconnaissance mission 1". To allow general applicability we propose to use the W3C Activity Streams standard, which encodes usage interaction events in triple form as (actor, verb, object). In the e-learning domain this has been adopted as the Experience API (xAPI) standard, following the same triple principle. This observation data can be recorded in a graph database to make use of graph or social network algorithms [14, 15], for instance to find the most or least active users or learning objects, or to compute the shortest path between nodes or subsets of nodes. Graph pattern analysis [15] can find individual learning paths or, over several users, common learning path models. The results of these analysis processes are, for example, learning path models or ideal path models, which in turn can serve as inputs to further downstream processes. Thus, the observation data (capture phase) can also be used for data-driven model building, which allows a much more flexible application, because the application-specific data models can "grow naturally", schema-free.

One example using graph pattern analysis and a shortest path algorithm is depicted in Fig. 2. Given is a possible interaction sequence P = {A, B, C, D, E, F} with the individual interaction elements A, ..., F. Different interaction paths are observed for different users 1–3. As stated before, the observations are captured using xAPI, where the xAPI objects reflect the interaction elements (Fig. 1), and they are stored as nodes in the graph. Using graph pattern matching, e.g., string similarity, a common subset of nodes C ⊆ P between all user paths can be determined. The result is a common path, e.g., C = {A, C, F} (Fig. 2). One approach to find the common path is to encode the users' pathway graphs Pj (for user j) as binary adjacency matrices. Multiplication of these matrices yields the nodes which are common to all graphs, i.e., the nodes on the common path.

The next step is to quantify the individual user paths to determine the individual user performance. In our graph-based usage pathway model the transitions (or relationships) are assigned non-negative weights, forming a weighted graph. This makes it suitable for applying path-finding algorithms such as Yen's k-shortest path [15]. The concept is depicted in Fig. 2 on the right: for each user, compute the shortest path and select that path or sub-graph Pj* with the lowest cost cmin. These costs are then normalized using the cost ċ of the common path as reference (the common path must always have the lowest cost), i.e., pj = ċ/cmin (Fig. 2). In the context of the IPM the found common path is the basis for the "ideal paths". Since ideality is subjective to the individual user and his personal usage (or learning) goal, the common path is only the basis for the user's own pathways. This performance score tells how near a user is to an ideal path. In an adaptive system this score could be used to determine the point in time when adaptivity should be enabled, e.g., by thresholding on a window-based aggregation of scores.


Fig. 2. Concept to find common path and compute performance scores based on possible interaction sequences, individual user paths and application of shortest path graph algorithm.
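The following minimal sketch, assuming the networkx package and toy paths modeled on Fig. 2, computes a common path from observed user sequences and the normalized performance score pj = ċ/cmin; with identical edge weights the path cost reduces to the number of transitions:

```python
import networkx as nx

# Observed interaction sequences for three users (cf. Fig. 2).
observed = {
    "user1": ["A", "B", "C", "F"],
    "user2": ["A", "C", "D", "E", "F"],
    "user3": ["A", "C", "F"],
}

graphs = {}
for user, seq in observed.items():
    g = nx.DiGraph()
    nx.add_path(g, seq)  # unit edge weights (cost = 1 per transition)
    graphs[user] = g

# Nodes present in every user's pathway graph are the common-path candidates.
common_nodes = set.intersection(*(set(g.nodes) for g in graphs.values()))
# Keep them in the order of one observed sequence, here C = {A, C, F}.
common_path = [n for n in observed["user1"] if n in common_nodes]

c_ref = len(common_path) - 1  # cost ċ of the common (reference) path
for user, seq in observed.items():
    c_min = len(seq) - 1      # each observed sequence is the user's shortest path here
    print(user, "p_j =", round(c_ref / c_min, 2))  # 1.0 = on the ideal path
```

Here user3 follows the common path exactly (p_j = 1.0), while the detours of users 1 and 2 lower their scores.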

4 Application Example

We have conducted experiments with xAPI and graphs, implemented for adaptive assistance in serious games. The results indicate that storing usage data as graph structures indeed brings advantages for modeling (flexibility) and analysis (existing algorithms). That is, multiple assistance systems have been equipped easily with xAPI trackers, and for the modeling process a schema-free, NoSQL graph database (e.g., Neo4j) helped to stay flexible, e.g., for new application domains. These kinds of databases also allow direct application of graph algorithms such as Social Network Analysis, e.g., to determine frequently occurring (learning) paths, shortest paths, or frequent or rare activities. A real-life example of such graphs is depicted in Fig. 3.

Fig. 3. Real usage graphs from xAPI observations. Nodes are actors and activities; edges are the verbs. Left: interaction sequence for one user. Right: interaction graph for multiple users.

The selection of tracked events directly corresponds to the targeted adaptivity level, i.e., macro- or micro-adaptivity (cf. Sect. 2). For micro-adaptivity in our serious game application scenario we observe each interactive element in a scene. The outcome is that the adaptivity system can pinpoint the current state of a user within a session by comparing it to the common path reference model. Because of the cold-start problem the adaptive assistance system needs data from multiple user sessions to build valid pathway models. Neo4j's graph algorithms can be applied directly to the xAPI-based graphs. In our current implementation the graphs have identical edge weights (cost = 1); varying weights and stochastic graphs are planned for the future. After collecting the xAPI data from multiple user sessions we get all nodes and transitions on the possible interaction sequence (PIS) sub-graph (Fig. 2, top). The minimal PIS from start to end can be found by using path-finding algorithms such as Yen's k-shortest path [15] (k = 1). The final step is to find those nodes which are on the common path but not on the user's currently observed usage path (e.g., via string edit distances). The adaptive system can use the information from the next estimated node (e.g., metadata such as the activity name or the name of the transition) to issue a hint to the user or to modify the system's navigational path, i.e., allowing interaction only with the next estimated activity. In our application scenario we choose the adaptivity strategy based on the performance score and an additional assistance level (based on other features, e.g., cognitive load [16]).
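As an illustration of recording xAPI statements in a graph database, the following sketch assumes the official neo4j Python driver and a locally running instance; the connection details and node labels are illustrative, not the project's actual schema:

```python
from neo4j import GraphDatabase

# Illustrative connection settings for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

CYPHER = (
    "MERGE (a:Actor {name: $actor}) "
    "MERGE (o:Activity {id: $object}) "
    "MERGE (a)-[r:PERFORMED {verb: $verb}]->(o)"
)

def record_statement(actor: str, verb: str, obj: str) -> None:
    """Store one (actor, verb, object) xAPI statement as a graph edge."""
    with driver.session() as session:
        session.run(CYPHER, actor=actor, verb=verb, object=obj)

record_statement("John", "completed", "reconnaissance-mission-1")
driver.close()
```

Once statements are stored this way, Neo4j's path-finding and centrality algorithms can be run directly on the accumulated graph.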

5 Conclusion

Adaptive systems need reference models to determine the correct timing and direction of automatic adaptation. The underlying principle follows adaptive control from systems theory, i.e., closed-loop feedback systems [12]. A key aspect in technical control systems is to measure the current state and derive feedback. Assistance systems that follow this principle can quantify deviations from ideal paths by computing a distance metric between the current interaction sequence and (pre-defined) usage pathways. However, the construction of these usage pathway models requires domain and expert knowledge. This contribution addresses the data-driven, graph-based generation of such models. The presented approach makes use of standardized, triple-structured tracking data as the input to the model generation process. In our application of e-learning assistance systems this is Experience API (xAPI) data. However, as xAPI is related to the more general W3C Activity Streams standard, the approach is not restricted to the e-learning domain. By applying graph algorithms, we extract common paths and Ideal Paths Models which can act as reference models for adaptivity systems. Future work will further deepen the transfer to the specific e-learning domain as well as to assistance systems in general. For the former, the modeling must take didactic models and user models more into account.

Acknowledgments. The underlying project to this article is funded by the Federal Office of Bundeswehr Equipment, Information Technology and In-Service Support under promotional references.


References

1. Gorecky, D., Schmitt, M., Loskyll, M., Zühlke, D.: Human-machine-interaction in the industry 4.0 era. In: 2014 12th IEEE International Conference on Industrial Informatics (INDIN), pp. 289–294 (2014). https://doi.org/10.1109/INDIN.2014.6945523
2. Woolf, B.P.: Building Intelligent Interactive Tutors. Morgan Kaufmann, Burlington (2009)
3. Shute, V., Zapata-Rivera, D.: Adaptive educational systems. In: Adaptive Technologies for Training and Education, vol. 7, pp. 1–35 (2012). https://doi.org/10.1017/CBO9781139049580.004
4. Streicher, A., Leidig, S., Roller, W.: Eye-tracking for user attention evaluation in adaptive serious games. In: Pammer-Schindler, V., Pérez-Sanagustín, M., Drachsler, H., Elferink, R., Scheffel, M. (eds.) 13th European Conference on Technology Enhanced Learning, EC-TEL 2018, pp. 583–586. Springer, Leeds (2018). https://doi.org/10.1007/978-3-319-98572-5_50
5. Dörner, R., Göbel, S., Effelsberg, W., Wiemeyer, J. (eds.): Serious Games - Foundations, Concepts and Practice. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-40612-1
6. Nakamura, J., Csikszentmihalyi, M.: The concept of flow. In: Csikszentmihalyi, M. (ed.) Flow and the Foundations of Positive Psychology: The Collected Works of Mihaly Csikszentmihalyi, pp. 239–263. Springer, Dordrecht (2014). https://doi.org/10.1007/978-94-017-9088-8_16
7. Streicher, A., Heberle, F.: Learning progress and learning pathways. In: Fuchs, K., Henning, P.A. (eds.) Computer-Driven Instructional Design with INTUITEL, pp. 37–55. River Publishers Series (2017)
8. Conati, C., Manske, M.: Evaluating adaptive feedback in an educational computer game. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H.H. (eds.) Intelligent Virtual Agents, IVA 2009. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 146–158 (2009). https://doi.org/10.1007/978-3-642-04380-2_18
9. Kurup, L.D., Joshi, A., Shekhokar, N.: A review on student modeling approaches in ITS. In: Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016 (2016)
10. Melero, J., El-Kechaï, N., Labat, J.-M.: Comparing two CbKST approaches for adapting learning paths in serious games. In: Conole, G., Klobučar, T., Rensing, C., Konert, J., Lavoué, E. (eds.) Design for Teaching and Learning in a Networked World, pp. 211–224. Springer International Publishing, Cham (2015). https://doi.org/10.1007/978-3-319-24258-3_16
11. Pavlik, P.I., Cen, H., Koedinger, K.R.: Performance factors analysis – a new alternative to knowledge tracing. In: 14th International Conference on Artificial Intelligence in Education 2009 (2009)
12. Åström, K.J., Wittenmark, B.: Adaptive Control, 2nd edn. Dover Publications, Mineola (2013)
13. Nguyen, N.T.: Model-Reference Adaptive Control: A Primer. Springer, Heidelberg (2018). https://doi.org/10.1007/978-3-319-56393-0
14. Streicher, A., Pickl, S.W.: Characterization and analysis with xAPI based graphs for adaptive interactive learning environments. In: WMSCI, Orlando, FL, USA (2020)
15. Needham, M., Hodler, A.E.: Graph Algorithms: Practical Examples in Apache Spark and Neo4j. O'Reilly Media, Sebastopol (2019)
16. Aydinbas, M.: Realizing Cognitive User Models for Adaptive Serious Games (2019)

Chatbot User Experience: Speed and Content Are King

Jason Telner(B)

IBM, 1 Orchard Road, Armonk, NY 10504, USA
[email protected]

Abstract. Chatbots provide some key advantages for businesses, such as increased cost savings and efficiencies, but often fail to meet customer expectations due to unsuitable, inaccurate and difficult-to-understand responses to questions. This can lead to customer skepticism and resistance to using the technology. There are, however, many best practices that can be followed when designing and implementing chatbots that may increase the chance of customers being satisfied with their experience. In this paper, we discuss the findings of a user study examining attitudes, preferences, and performance with using chatbots for customer technical support. We discuss the research findings, review lessons learned and best design practices, and discuss which attributes are most predictive of a successful chatbot experience based on a regression model.

Keywords: Chatbot design · Chatbot user experience · Best practices

1 Introduction

Chances are that if you have ever used online customer service, you have had the experience of interacting with a chatbot. At first, your engagement with customer service appears to sound like person-to-person interaction. After a few chat exchanges that consist of some awkward, slightly off-topic responses, you begin to suspect, to your disappointment, that what is on the other end of the conversation is not a real person but is in fact a chatbot. Chatbots, whether you like them or hate them, are becoming increasingly popular for obtaining product information and technical support as technologies such as natural language processing and machine learning continue to advance [1]. A chatbot or conversational agent is a software program that simulates a conversation with humans by mimicking either text or voice based on a set of predefined rules or conditions [2]. Although using a chatbot provides some key advantages for businesses, such as increased cost savings and efficiencies [3], chatbots often fail to meet customer expectations due to unsuitable, inaccurate, and difficult-to-understand responses to questions. This can often lead to customer skepticism and resistance to using this technology. There are, however, many best practices that can be followed when designing and implementing chatbots that may increase the chance of your customers being satisfied with their experience.


In this paper, we discuss the findings of a user study we conducted examining attitudes, preferences, and performance when using chatbots for customer technical support across a variety of applications. We discuss the research findings, review lessons learned and best design practices, and discuss which attributes are most predictive of a successful chatbot experience, based on a regression model we conducted.

2 Methodology

Ten participants took part in one-on-one remote user research interview sessions lasting approximately 45 minutes with an experienced user researcher. Given that the chatbot was still in development, participants were told that the chatbot was limited in the questions it could answer at the time and to take that into consideration. Participants first engaged in some exploratory interaction in which they could ask the system any questions they liked related to the applications Webex and Slack. Participants then interacted with the system in four different scenarios, assigned in randomized order, in which they asked it questions about specific problems with four commonly used employee applications: Slack, Webex, Outlook email, and Cisco AnyConnect VPN. Participants shared their screen with the moderator while they interacted with the system and were encouraged to "think aloud" during the session, describing what they were typing and clicking on as well as their thoughts and impressions of the system. Participants then completed rating surveys on the system based on the scenario they completed. The moderator then reviewed each scenario with them and asked some additional questions. For each scenario, the moderator recorded completion rates, number of errors, completion times in seconds, number of questions asked, and preferences on using guided links versus unguided answers and on the length and format of the chatbot's responses. At the end of the session, participants completed a post-test survey evaluating different attributes of the chatbot.

3 Participant Demographics

Participants were company employees primarily based in the United States who held job roles ranging from more technical, such as design and software development, to less technical, including sales and human resources. Participant tenure ranged from one to three years to more than 25 years. Participants used chatbots outside of IBM on a weekly to monthly basis and at IBM from several times a month to a few times per year. The areas in which participants used chatbots outside of IBM included customer support, insurance, marketing and competitive information, retail, and health benefits. Within IBM, participants used chatbots for asking human resource questions, for technical support, and for help with internal applications.

Participants rated their sentiment toward using chatbots from 1 (dislike using them) to 10 (like using them). The mean sentiment rating was 6.40 with a standard deviation of 2.41. Participants tended to rate their affinity for using chatbots based on their experiences with obtaining relevant and accurate answers to the questions they had asked chatbots in the past. Those who reported getting accurate answers to their questions were more likely to give higher ratings compared to those who previously got irrelevant or poor results.

4 Research Findings

4.1 Individual Scenarios

As shown in Table 1, the Webex plugin scenario and customizing Slack notifications had the highest completion rates without errors. The majority of participants who completed these scenarios got relevant answers by asking only a single question. The answers provided by the chatbot were also reported to be of appropriate length and not overly verbose. For the other two scenarios, a problem hearing the sound of a Webex meeting and finding out how to renew a VPN certificate, several participants had to retype their questions multiple times because they did not get an answer they liked or the chatbot did not understand their question. Also, because the chatbot was still in development, some errors occurred when users typed in their questions, which also resulted in lower completion rates without errors and in users having to ask the chatbot multiple questions.

Figure 1 further shows the ease of completion ratings for the four different scenarios on a 5-point scale from very easy to very difficult. As can be seen from the figure, the scenarios that had the highest completion rates without errors (creating a Webex plugin in Outlook and customizing notifications in Slack) also had the highest ease of completion ratings. The scenarios of having a problem hearing the sound on Webex and renewing a VPN certificate were noted to be more difficult to complete due to having to ask multiple questions, having to navigate multiple links to get to the answer, the answer not appearing directly in the chatbot window, and answers being too long and containing too much technical detail.

Figure 2 further shows the overall satisfaction and ease of use satisfaction ratings on a 5-point scale from very satisfied to very dissatisfied. For the most part, the scenarios that had the highest completion rates and highest ease of completion ratings also had the highest overall satisfaction and ease of use satisfaction ratings. The exception was the scenario involving customizing notifications in Slack, which received lower overall satisfaction ratings. Although the chatbot provided a relevant answer to the question, the answer was too technical and not helpful in terms of providing specific details on how to customize the notification, and the formatting of the answer could be improved. For the other two scenarios (fixing Webex audio and renewing the VPN certificate) that received lower ratings for overall satisfaction and ease of use satisfaction, the main reasons were a lack of relevant or accurate answers, such as asking for help with Webex audio but getting only general information about Webex. Having to ask multiple questions to get a relevant answer, answers that were too verbose, disorganized, or poorly formatted, and having to search through multiple links for the relevant answer were the main reasons for lower overall satisfaction and satisfaction with ease of use for these scenarios (Fig. 2).


Table 1. Objective measures for the four user scenarios

| Scenario | Task completion | Mean completion time | Number of questions |
|---|---|---|---|
| How to customize Slack notifications? | 80% without errors | 53 s | 8 users got an answer with 1 question; 1 user got an answer with 2 questions; 1 user never got an answer |
| Problem hearing sound on Webex meeting | 30% without errors | 51 s | 3 users got an answer with 1 question; 3 users got an answer with 2 questions; 1 user got an answer with 4 questions; 3 users never got an answer |
| Create Webex plugin in Outlook | 90% without errors | 68 s | 9 users got an answer with 1 question; 1 user never got an answer |
| How to renew your VPN certificate? | 30% without errors | 86 s | 3 users got an answer with 1 question; 5 users used the guided links; 1 user never got an answer |

4.2 Chatbot Overall Satisfaction Ratings and Predictors of Net Promoter Score

After completing all the scenarios, participants completed an overall satisfaction survey for various attributes of the chatbot, as well as a Net Promoter Score (NPS) rating. The NPS rating asked participants to rate how likely they would be to recommend the chatbot to a friend or colleague, from 0 (not at all likely) to 10 (extremely likely). The chatbot received an NPS of 20 from 10 participants, which is considered to be in the favorable range. The main reasons participants cited for giving a lower NPS rating were primarily the chatbot's limited content and knowledge. This is also reflected in the overall satisfaction ratings shown in Fig. 3, in which the chatbot's knowledge received the second lowest number of participants who were either very satisfied or satisfied with the attribute. The amount of time it took to get an answer from the chatbot and the chatbot's navigation received the highest numbers of participants who were either very satisfied or satisfied.
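For reference, an NPS is computed as the percentage of promoters (ratings of 9–10) minus the percentage of detractors (ratings of 0–6). The sketch below shows one hypothetical way ten respondents could produce the reported score of 20; the individual ratings are invented for illustration.

```python
# Hypothetical illustration of how an NPS of 20 can arise from ten respondents;
# the individual ratings below are invented for the example.
ratings = [9, 10, 9, 8, 8, 7, 7, 6, 9, 5]           # 0-10 likelihood to recommend
promoters = sum(r >= 9 for r in ratings)             # 9s and 10s
detractors = sum(r <= 6 for r in ratings)            # 0 through 6
nps = 100 * (promoters - detractors) / len(ratings)
print(nps)                                           # 40% - 20% = 20
```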


[Bar chart: for each of the four scenarios (creating a Webex plugin in Outlook, customizing Slack notifications, fixing Webex meeting sound, renewing a VPN certificate), the number of participants (0–10) selecting each rating from very easy to very difficult.]

Fig. 1. Ease of completion ratings for the four different scenarios

A stepwise linear regression was conducted with the overall NPS rating as the outcome variable and the ten overall attribute satisfaction ratings, along with user sentiment toward using chatbots, as predictors in the model. Satisfaction with the amount of time it took to get an answer to a question significantly predicted NPS ratings, β = 1.17, p < 0.05: the more satisfied participants were with the time to get an answer (generally, faster chatbot response times), the higher the NPS they gave the chatbot. Satisfaction with the chatbot's knowledge also showed a trend toward significantly predicting NPS ratings, β = 0.49, p < 0.1: those who were more satisfied with the chatbot's knowledge tended to give the chatbot higher NPS ratings.
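For readers who want to reproduce this kind of analysis, the following is a minimal sketch of a forward stepwise regression, assuming a hypothetical per-participant data frame; the column names and the forward_stepwise helper are illustrative, not from the study.

```python
# Hypothetical sketch of a forward stepwise regression predicting NPS from
# attribute-satisfaction ratings; column names are illustrative placeholders.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, alpha=0.05):
    """Greedily add the predictor with the lowest p-value until none pass alpha."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            pvals[c] = sm.OLS(df[outcome], X).fit().pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] < alpha:
            selected.append(best)
        else:
            break
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

# Usage (illustrative column names on a 10-row ratings frame):
# model = forward_stepwise(ratings, "nps",
#                          ["time_to_answer", "knowledge", "navigation"])
# print(model.summary())
```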

5 Best Practices in Designing Chatbots

There are a variety of best practices for designing chatbots, and many of them appeared to have a substantial impact in the current user study. Participants were generally satisfied with the chatbot's introductory message, as it was clear from the start of the interaction what the chatbot could assist with and how they could interact with it, through either links or text responses. This led to more positive feedback about the chatbot and followed the best practice guidelines of ensuring users know upfront what the chatbot can do by setting expectations, and of ensuring users understand how to interact, through some upfront instructions, when responses include links as well as typed text [4, 5, 7].


[Bar chart: for each of the four scenarios, the number of participants (0–10) selecting each overall satisfaction and ease of use satisfaction rating, from very satisfied to very dissatisfied.]

Fig. 2. Overall satisfaction ratings and ease of use satisfaction ratings for the four different scenarios

There were other instances in which the chatbot's design did not follow best practice guidelines, leading to lower completion rates without errors and lower satisfaction and ease of use ratings. The first involved providing relevant and accurate answers to user questions. There were some instances, mostly in the Webex audio and Slack scenarios, in which participants had to type in their question multiple times to get a relevant answer. This was generally due to the natural language processing not understanding their question or keywords, leaving users to reword their question repeatedly. It is important that a chatbot account for multiple word choices, phrasings, and start options to allow flexible interaction with users. Chatbots should be designed to allow for possible misunderstandings at every step: if the chatbot detects an ambiguity, it should clarify the user's question through a related follow-up question before offering an answer [4, 6, 7].


[Bar chart: number of participants (0–10) selecting each satisfaction rating, from very satisfied to very dissatisfied, for ten chatbot attributes: amount of time to get an answer, navigation, visual appearance, vocabulary, guided conversation buttons, amount of information provided, ease of finding the chatbot's icon, quality of content, knowledge, and ability to overcome errors.]

Fig. 3. Overall satisfaction ratings for various chatbot attributes.

The answers provided by the chatbot should also be specific to the question asked, instead of pushing more generic links. For example, the scenario of fixing the audio issue in Webex received lower satisfaction and ease of use ratings because, when participants asked the chatbot for help with Webex audio, it often gave only general information about Webex rather than a specific solution for fixing the audio problem. This issue was also present in the Slack notification scenario, in which the chatbot easily provided an answer; however, the answer was not specific enough about the steps required to customize a notification, leading to lower overall satisfaction ratings.

In addition to being specific, relevant, and accurate, chatbot responses should use appropriate language, be formatted properly, and be concise. Although participants were for the most part satisfied with the time it took to get an answer and with navigation through the chatbot, a few participants in most scenarios noted that the language used in the chatbot responses was too technical, that the responses contained too many links requiring further navigation and searching, that the answer did not follow a sequence of steps to resolve the issue, and that the answer was too long. When designing chatbot responses, it is recommended to make the messages sound as human and conversational as possible. This can be achieved by using multiple short messages rather than a single long message or large block of text, by pushing direct links to specific solutions, and by properly formatting the solutions with plenty of


whitespace, formatting each solution to reflect the task's sequence of operations, with step numbers labeled in the solution [4, 5].

6 Conclusions

It is apparent from the results of the study, as well as from the best practice guidelines, that the quality of chatbot responses, and the speed with which users can access those responses, has a significant impact on users' satisfaction with the chatbot and on their success in getting answers to their questions. This can be achieved by ensuring chatbot responses are specific and direct, relevant and accurate, well formatted, concise, and expressed in understandable language and terminology.

References
1. Soufyane, A., Abelhakim, B.A., Ahmed, M.B.: An intelligent chatbot using NLP and TF-IDF algorithm for text understanding applied to the medical field. In: Ben Ahmed, M., Mellouli, S., Braganca, L., Anouar Abdelhakim, B., Bernadetta, K.A. (eds.) Emerging Trends in ICT for Sustainable Development, pp. 3–10. Springer, New York (2021). https://doi.org/10.1007/978-3-030-53440-0_1
2. Ashfaq, M., Yun, J., Yu, S., Maria Correia Loureiro, S.: I, chatbot: modeling the determinants of users' satisfaction and continuance intention of AI-powered service agents. Telematics Inform. 54 (2020)
3. Adam, M., Kessel, M., Benlian, A.: AI-based chatbots in customer service and their effects on user compliance. Electron. Mark. 9, 204 (2020)
4. Steele, I.: Chatbot Do's and Don'ts – These Are the Best and Worst Chatbot Practices. Comm100 (2018). https://www.comm100.com/blog/chatbot-best-worst-practices.html#be
5. Verani, E.: Chatbot Best Practices: 8 Tips & Tricks you Can Benefit from Today. Inbenta (2020). https://www.inbenta.com/en/blog/chatbot-best-practices/
6. Tezer, T.: Chatbot User Experience 101: Vital Tips to Improve Chatbot UX. UX Collective (2017). https://uxdesign.cc/chatbot-user-experience-101-vital-tips-to-improve-chatbot-ux-ca02bc36587c
7. Budiu, R.: The User Experience of Chatbots. Nielsen Norman Group (2018). https://www.nngroup.com/articles/chatbots/

Is Artificial Intelligence Digital?

Vaclav Jirovsky(B) and Vaclav Jirovsky Jr.

Faculty of Transportation Sciences, Czech Technical University in Prague, 110 00 Prague, Czech Republic
{jirovsky,x1.jirovsky}@fd.cvut.cz

Abstract. Models of the thinking and behavior of the human population, using regular meta-descriptors of IT procedures, show that increasing computing power will not be the shortest way to an artificial intelligence able to solve even simple management tasks. Human nature and human mental ability, together with intuition, abstraction, and association skills, have to be described more in analog than in digital terms. From the analog point of view, we are dealing with connections carrying many analog signal levels, encumbered by distortion, interference, or crosstalk. The main role is played by the signal-to-noise ratio at the input to the destination neuron. Furthermore, we argue that interference caused by crosstalk may create part of human association abilities and imagination. Some views of these approaches are addressed in the paper.

Keywords: Inter-neuron communication · Artificial intelligence

1 Introduction

The glory of artificial intelligence, spread by mass communication, leads to the expectation of miracles supported by digital technologies in the near future. The tendency to use digital technology for modeling the processes of thinking, defined by specific programming languages running on digital computers, has a long history [1]. For a long time, no one even thought of using analog computers (already forgotten at that time) to support the processes ongoing in the human brain. But, from the elementary point of view, it is and has been known that the basis of these processes is naturally analog1. The first movement back to the analog way of thinking can be observed especially in the last decade. Since the 1970s, mothballed analog technology has been perceived as inaccurate and hard to program. Even though digital was the major stream of the 1970s, we can also find analyses comparing the analog and digital worlds, such as David Lewis's article comparing analog

1 It seems necessary, at this point, to recall the basic operation of a digital electronic circuit, which works with physically analog phenomena; only the visible outputs and inputs demonstrate a digital character.


and digital representations of numbers [2]. The comeback of analog computing can be observed in the solution of partial differential equations2, where analog computers are several orders of magnitude faster than digital ones [3], and in the philosophical contemplations of Hagar [4]. Full coverage of the current scientific question can be found in the pages of Freeman Dyson3 [5].

2 Signal Characteristic

Historically, all data measured from our nervous system, e.g. encephalographic data or the electrocardiogram, have been presented as analog values. When digging deeper into the signal, scientists observed pulses in the neural system and came to the conclusion that neural communication is digital, which shifted all consideration away from the convenience of continuous representations. The conclusion that neurons communicate with one another "almost exclusively through discrete action potentials" [9] threw away the mathematical elegance and analytical convenience of continuous representations and, at best, limited continuous representation to a statistical representation of signals. The glorification of discrete attractor models for their superior stability [6] advocates the noise tolerance of the model while suppressing the important role of noise in the creative process.

The actual character of the signal is usually described in terms of "spikes" [7], and theoretical investigations are very often presented as analyses of inter-spike interval (ISI) distributions [8]. But two important characteristics of the neural system are overlooked: the reduced susceptibility4 of a cell when excited too often, and the integrating character of the system's elements. The spikes are pseudo-Dirac impulses (see Fig. 1) that follow one another at varying intervals. Because of reduced susceptibility, the reaction of the receiving cell can decrease even when the excitation is large. Similarly, a train of pulses, even non-uniformly distributed, is "integrated" at the destination cell, so the actual reaction can be quite different from the expected one. Confirmation of such behavior can be seen in the Weber-Fechner law, which states that the amount of change needed for sensory detection to occur increases with the initial intensity of the stimulus and is proportional to it; that is, the change in stimulus that will be noticeable depends on the original stimulus. Because of these two phenomena, we cannot easily say that the brain is based on digital information processing.

2 In 2016, researchers at MIT's Computer Science and Artificial Intelligence Laboratory and Dartmouth College presented a paper on a new analog compiler that could help enable the simulation of whole organs and even organisms. The compiler takes differential equations as input and translates them into voltages and current flows across an analog chip. The researchers used the chip to test their compiler on five sets of differential equations commonly used in biological research.
3 Freeman Dyson (1923–2020), professor of physics at the Institute for Advanced Study in Princeton.
4 The reduced susceptibility of the cell, sometimes called refractoriness, represents its status change during the interval when it is undergoing excitation. The most significant change in the cell occurs after the first pulse; with subsequent pulses the change in the internal state of the cell decreases, until the cell becomes completely insensitive to incoming excitation pulses.


Fig. 1. Neuron spikes

Maybe the information transmission has some marks of digital communication, but if we do not insist on precision of expression, we can say that the transmission of information across the connections between neurons is pulse frequency modulation (PFM). The integration of the signal at the receiving side is a very natural way of demodulating such a signal. Moreover, the information processing on the destination side is evidently analog.
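To make the PFM analogy concrete, the sketch below encodes a slow analog excitation as a spike train with an integrate-and-fire emitter and recovers it with a leaky integrator at the receiving side. All parameters (time step, gain, time constant) are assumptions for illustration, not values from the paper.

```python
# Illustrative PFM encode/decode under assumed parameters (not from the paper).
import numpy as np

dt = 1e-4                                             # time step [s]
t = np.arange(0.0, 1.0, dt)
excitation = 0.5 + 0.4 * np.sin(2 * np.pi * 2 * t)    # slow analog input

# Encoding: an integrate-and-fire emitter turns amplitude into spike frequency
rate_gain = 100.0                                     # spikes/s at full excitation
acc, spikes = 0.0, np.zeros_like(t)
for i, s in enumerate(excitation):
    acc += rate_gain * s * dt
    if acc >= 1.0:                                    # threshold crossing emits a spike
        spikes[i], acc = 1.0, acc - 1.0

# Decoding: a leaky integrator at the "destination cell" recovers the amplitude
tau = 0.05                                            # integration time constant [s]
y = np.zeros_like(t)
for i in range(1, len(t)):
    y[i] = y[i - 1] * (1.0 - dt / tau) + spikes[i]
rate_estimate = y / tau                               # ~ rate_gain * excitation
```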

3 Crosstalk and Noise

The density of neurons in the human brain, together with the connections created by axons and dendrites, is incredibly high. This means that the distance between individual axons and dendrites is very small, and a pulse represented by an electric voltage can be crosstalked from one conducting edge to another. Let us consider each information pathway as an individual communication channel, with K channels passing side by side. Then the power used for information transmission at the "input" of every receiving neuron will be

p(t) = P_0 \, m_0(t) + P_0 \, s_0(t) \sum_{j=1}^{K} m_j(t) \, e^{i\varphi_j} + n(t)    (1)

where
  t       – time,
  P_0     – intensity of the excitation on the transmitting neuron side,
  K       – number of interfering channels (axons, dendrites)5,
  m_j(t)  – excitation information transferred in the j-th channel,
  s_0(t)  – power suppression of the unwanted channel,
  φ_j     – phase of the excitation in the j-th channel,
  n(t)    – noise in the neural system (assumed white Gaussian noise).

5 We may even consider two pieces of the pathway connected in series, so that one general channel would be a piecewise approximation using many single channels.


Because the excitation information is transmitted as a series of spikes, we can write

m_j(t) = \sum_n q_n^{j} \, g_n^{j}\!\left(t - nT - \tau_j\right)    (2)

where
  q_n^j  – the spike transmitted in the j-th channel in the n-th interval,
  g_n^j  – a coefficient derived from the shape of the spike,
  T      – the time interval in which a spike is expected to occur,
  τ_j    – the relative position of the spike within the respective time interval (the jitter).

The jitter τ_j and the phase φ_j are assumed to be independent, with uniform distributions in the intervals [0, T] and [0, 2π], respectively. The case in which there is no spike in the respective interval must also be considered, because of the probabilistic distribution of τ_j. If we assume a Gaussian noise distribution over the channel, then Eq. (1) becomes

p(t) = P_0 \, m_0(t) + P_0 \, s_0(t) \sqrt{K} \, c(t) + n(t)    (3)

where c(t) is Gaussian correlated noise with unitary power and spectral density D(ν). The analysis above shows two important lemmas:
• even though communication over the neural system seems to be digital (spikes), the actual result of the communication is analog;
• the influence of crosstalk and noise is not negligible.
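As a sanity check on the aggregation step from Eq. (1) to Eq. (3), the short simulation below (an independent sketch, not from the paper) sums K unit-power channels with random phases and confirms that the aggregate power grows as K, i.e. that the combined interference behaves like a unit-power Gaussian term scaled by √K.

```python
# Independent numerical check of the sqrt(K) interference scaling in Eq. (3).
import numpy as np

rng = np.random.default_rng(0)
K, N = 64, 200_000
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(K, 1)))
channels = rng.standard_normal((K, N)) * phases     # K unit-power channels
aggregate = channels.sum(axis=0)

print(np.mean(np.abs(aggregate) ** 2) / K)          # ~1.0: power K, amplitude sqrt(K)
```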

4 Impact on Higher Level

With the previous lemmas in mind, we can extend our contemplation to the processes of human thinking, intuition, abstraction and association skills, and creativity. First of all, as shown above, information coming from one point in the brain, or in the neural system generally, can be disturbed on its way. In Eq. (1), apart from the noise element n(t), we find the element m_j(t)e^{iφ_j}, which describes two limiting states of transmission:

• an amplification effect, when two or more signals in adjoining channels are "in phase";
• an attenuation effect, when two or more signals are in opposite phase, or correlated in such a way that the sum of the "in-phase" signals in adjoining channels creates a new signal that can attenuate or disturb the original signal.

What does this mean for overall behavior? As a result of signal disturbance, the final display of activity will differ from what was expected, depending on the signal power in the adjacent neural pathway. Going a little further, the distortion of the signal can generate a new, previously unknown signal; we may call it creativity or an unusual way of thinking at a very high level, or, on the other hand, hallucination, which can also funnel into indisputable creativity (e.g., the work of Salvador Dalí). Similarly, the effect of


noise or crosstalk on the signals in the brain can generate new signals, later amplified by similar signals in a neighboring axon or dendrite, and so generate entirely new information. Thus the main difference observed between artificial and natural intelligence, the missing intuition, abstraction, and association skills that lead to creativity, may actually be the result of neural signal distortion and noise. Such a situation cannot happen in a fully digital, noise-immune system without bringing random variables and uncertainty into the process. This necessarily leads to an understanding that an analog character of artificial intelligence is preferable to a fully digital design.

References
1. Bundy, A.: Catalogue of Artificial Intelligence Tools. Springer, New York (1984). https://doi.org/10.1007/978-3-642-96964-5
2. Lewis, D.: Analog and digital. Nous 5, 321–327 (1971)
3. Crane, L.: Back to analog computing: Columbia researchers merge analog and digital computing on a single chip. Computer Science - Columbia Engineering (2016)
4. Hagar, A.: Discrete or Continuous? The Quest for Fundamental Length in Modern Physics. Cambridge University Press, Cambridge (2014)
5. Dyson, F.: Is Life Analog or Digital? https://www.edge.org/conversation/freeman_dyson-is-life-analog-or-digital. Accessed 22 Jan 2020
6. Chaudhuri, R., Fiete, I.: Computational principles of memory. Nat. Neurosci. 19(3), 394–403 (2016)
7. Kittnar, O.: Lékařská fyziologie (Medical Physiology), vol. 2. Grada Publishing, Prague (2020)
8. Kumar, S.K.: Characterizing ISI and sub-threshold membrane potential distributions: ensemble of IF neurons with random squared-noise intensity. Biosystems 43–49 (2018). https://doi.org/10.1016/j.biosystems.2018.02.005
9. Abbott, L.F., DePasquale, B., Memmesheimer, R.M.: Building functional networks of spiking model neurons. Nat. Neurosci. 19(3), 350–355 (2016)

Interpreting Pilot Behavior Using Long Short-Term Memory (LSTM) Models

Ben Barone(B), David Coar, Ashley Shafer, Jinhong K. Guo, Brad Galego, and James Allen

Lockheed Martin Advanced Technology Laboratories, Cherry Hill, USA
{ben.a.barone,david.coar,ashley.y.shafer,jinhong.guo,brad.j.galego,james.p.allen}@lmco.com

Abstract. Computational models of fighter pilot decision-making provide insight into a pilot’s behavior, facilitating pilot performance assessment. This paper describes our application of Long Short-Term Memory (LSTM) neural networks to a dataset of pilot actions during simulated missile avoidance to generate metrics of pilot behavior under pressure. Lockheed Martin collected and curated the data from multiple human pilots, with varying experience, executing simulated missile avoidance scenarios. By evaluating model performance across sweeps of data characteristics, then fitting exponential functions to the performance trends, we identify unique pilot behavior metrics that correlate with pilot performance. We discuss how these metrics could provide insight into the complexity of pilot behavior, as well as provide a mechanism to evaluate pilot performance and enhance human-machine symbiosis. Keywords: Deep learning · Behavior and decision making · Performance assessment · Simulation and training

1 Introduction

Psychologists and neuroscientists have long studied the human behavior associated with human senses, decisions, and actions. Human decision modeling is a multi-disciplinary research field that uses computational models to quantify that behavior. With the emergence of human-machine intelligent systems, as well as advances in computing power, sensor technologies, and machine learning, the prevalence and utility of human decision modeling have increased. Cognitive architectures, such as [1], describe the complex interactions within and between cognitive functions such as perception, memory, and categorization. Further, there is a breadth of research applying AI technology to capture the accumulation to threshold of information associated with decision making [2]. These mechanisms attempt to explain the decision-making process and rely on the cognitive and neural mechanisms involved in decision making. The goal of this effort was to quantitatively understand the complexity of a pilot's decision behavior given information about an incoming missile threat as well as information about the pilot's environment. We did not need to explain the decision-making process or directly account for the cognitive and


neural processes which influence decision making, and as such, opted to use a low-complexity model to capture pilot decision behavior. Data was collected in a high-fidelity fighter jet simulator at Lockheed Martin Aeronautics in Fort Worth, Texas. Evaluating the predictivity of Long Short-Term Memory (LSTM) networks across varying Window Lengths (defined as the number of time-ordered data points included as input to the model) and Lag Lengths (defined as the number of time-ordered data points between the input window and the predicted time point) allows us to assess model performance given varying input data size and length of time to prediction. We quantify the effect of these characteristics on model performance by fitting an exponential function to the trend in performance, then assessing the decay of each function from pilot to pilot. We associate these decay metrics with the complexity of pilot behavior, correlate the metrics with pilot performance, and examine the variance in these metrics from pilot to pilot.

2 Methods

We are attempting to understand the complexity of a pilot's behavior as they perform nonlinear maneuvers through time. We evaluated LSTMs because they excel at capturing nonlinear temporal relationships and can represent arbitrarily complex temporal distributions given enough data [3]. We evaluated a many-to-one LSTM model with multiple recurrent layers and a dense layer. The number of hidden units in each recurrent layer and the dropout rate for each dropout layer are configurable and were optimized to minimize root-mean-squared error (RMSE) and overfitting through cross-validation. Mean squared error was chosen as the loss function to penalize larger deviations from ground truth more than smaller ones. Adaptive moment estimation (Adam) [4] was chosen as the optimizer because it is effective across different problem spaces and use cases.

The collected dataset contained logs of pilot actions against a single missile for six unique human pilots across 27 scenarios. Each scenario reflects a single engagement for a pilot against a medium-range missile, and each pilot completed the same set of 27 scenarios twice, totaling approximately 54 runs per pilot. Due to data collection constraints, not every scenario for every pilot has valid data points, and therefore some scenarios were discarded from analysis. For each scenario, the pilot had real-time azimuth (deg) and elevation (ft) information for the incoming missile. The pilot was not aware of the missile slant range; however, they were aware that the missile azimuth and elevation information would appear only if the missile was within 30 km of their aircraft.

We performed an analysis to determine the value of the missile information to the pilot's decision-making behavior. The RMSE of models with missile information included as input was only marginally better than that of models omitting it. We discussed these results with Subject Matter Experts involved with the data collection and concluded that they were not unexpected: many pilots approached the missile avoidance scenario by entering into a maneuver based on the initial missile information, then continuing with that maneuver throughout the scenario, rather than consistently updating their maneuver based on new missile information. Given the marginal improvement, we opted to include missile information as input.
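As a rough illustration of the model family described here (not the authors' exact architecture: layer widths, dropout rate, and feature count are assumptions), a many-to-one LSTM with stacked recurrent layers, dropout, and a dense output might look like this in Keras:

```python
# Hypothetical sketch of the many-to-one LSTM described in the text;
# all sizes below are illustrative assumptions, not the study's values.
import tensorflow as tf

WINDOW_LEN = 24   # input window length in samples (swept 2..48 in the study)
N_FEATURES = 8    # missile azimuth/elevation + six aircraft state variables
N_OUTPUTS = 2     # future pitch and roll orientation

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # recurrent layer 1
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.LSTM(32),                          # recurrent layer 2 (many-to-one)
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(N_OUTPUTS),                  # predicted pitch/roll
])

# MSE penalizes large deviations more heavily; Adam as optimizer (see text)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```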


Our team had access to hit/miss data indicating whether each run was successful (avoided the missile) or unsuccessful (failed to avoid the missile).

The pilot also had awareness of the aircraft's altitude (ft), commanded airspeed (kts), heading (deg), g load (G), and orientation (pitch and roll) (deg). In addition to the missile's azimuth and elevation, each of the listed aircraft data points was considered valuable information for decision making and was included as a model input. We performed an additional analysis to determine whether aircraft orientation, the actual pitch and roll of the aircraft in degrees, or the pilot's action on the stick, the amount of pressure the pilot applied to the stick in lbs, would be the more appropriate variable to predict. We determined that using aircraft orientation was better for this study, as the pilot's ultimate goal is to move the aircraft to a particular position in space. Additionally, given the data, we determined that there was little predictivity between stick action and aircraft orientation, indicating that aircraft orientation was most likely affected by latent variables related to the simulated environment (e.g., wind speed). To summarize, we considered missile azimuth (deg) and elevation (ft), aircraft altitude (ft), commanded airspeed (kts), heading (deg), g load (G), and pitch/roll orientation (deg) as the model input, and future pitch/roll orientation as the model output. Our dataset was standardized for each feature.

With the established LSTM architecture, we measured behavior metrics by performing sweeps on characteristics of the data, including Window Length and Lag Length, then evaluating models under each condition. This method enables the assessment of model performance given varying input data size and length of time to prediction. We performed a sweep from 2 samples (~0.4 s) to 48 samples (~9.6 s) for both Window Length and Lag Length for each of the six pilots, capturing the RMSE of the model at each iteration. To account for the varying number of observations associated with changing Window Length and Lag Length, we limited the number of observations per model to the number found in the model with the fewest observations. Using the RMSE at each point in the two sweeps, we fit exponential functions to the trends in both the Window Length and Lag Length sweeps. We identified the decay of each function as a metric that could be used to assess each pilot.
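A compact sketch of the sweep-and-fit procedure follows; train_and_eval is a hypothetical stand-in that trains the LSTM above for a given (Window Length, Lag Length) configuration and returns its test RMSE, and the initial-guess values for the fit are illustrative.

```python
# Sketch of the sweep-and-fit procedure; train_and_eval is a hypothetical
# helper standing in for model training/evaluation at one configuration.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, a, b, c):
    """Exponential trend; b is the decay constant used as a behavior metric."""
    return a * np.exp(b * x) + c

sweep = np.arange(2, 49)                               # 2..48 samples (~0.4-9.6 s)
rmse = np.array([train_and_eval(window=w, lag=8)       # hypothetical helper
                 for w in sweep])

(a, b, c), _ = curve_fit(exp_model, sweep, rmse, p0=(1.0, -0.05, 0.1))
print(f"window-sweep decay constant: {b:.3f}")         # cf. Pilot 2: mu = -0.05
```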

3 Results

Figure 1 shows a parameter sweep between 2 samples (~0.4 s) and 48 samples (~9.6 s) for both Window Length and Lag Length for all six pilots. We observed that for any particular Lag Length value, RMSE typically started high and then decreased as Window Length increased. For any particular Window Length value, RMSE started low and then increased as Lag Length increased. The Window Length and Lag Length phenomena subjectively described above are least visible in the Pilot 3 parameter sweeps. It should be noted that the data collection for Pilot 3 was especially limited.


Fig. 1. Pilot by pilot window length and lag length sweeps.

We fit a curve to the RMSE values along both the Window Length and Lag Length sweeps for each pilot. Figure 2 (left) shows RMSE data as we sweep Window Length, for each possible Lag Length, with a curve fit in red for each sweep. Figure 2 (right) shows RMSE data as we sweep Lag Length, for each possible Window Length. Each of these visualizations is based on Pilot 2 data. For Pilot 2, RMSE along a Window Length sweep can be explained by an exponential function with a decay of μ = −0.05, σ = 0.04, and RMSE along a Lag Length sweep by an exponential function with a decay of μ = −0.12, σ = 0.015. We extended this analysis to each pilot to demonstrate the difference in decay for the Window Length and Lag Length functions from pilot to pilot. The negatively decaying function for Window Length and the positively decaying function for Lag Length persisted across pilots; however, each pilot exhibited unique decay constants. Figure 3 shows box plots constructed using the decay of the exponential function calculated across the Window Length and Lag Length sweeps for each pilot.

We performed a correlation between each of the sweep metrics and the hit/miss ratio for each pilot to determine whether these metrics could be related to pilot performance. Due to the data collection issues, Pilot 3 was withheld from the correlation analysis. The Pearson's correlation coefficient is 0.3 with a p value of 0.6 between the Window Sweep mean decay and the hit/miss ratio, and 0.4 with a p value of 0.5 between the Lag Sweep mean decay and the hit/miss ratio.

64

B. Barone et al.

Fig. 2. Curves fit for window length (left) and lag length (right) sweeps for a single pilot.

Fig. 3. Box plot of exponential decay constant of window length (left) and lag length (right) functions for each pilot.

4 Discussion

4.1 Interpretation of Window Length and Lag Length Decay Metrics

The consistent negatively decaying function for Window Length indicates that the RMSE of the evaluated models decreased as the Window Length of the data increased. We propose several explanations of this phenomenon that could be tied to aspects of a pilot's behavior, including the complexity of the maneuvers they are performing, as well as attributes of their cognitive processes. As the window of data included as input to a model increases in size, the ability of the model to predict more complex kinematics increases. Pilots' performed maneuvers vary in complexity and are related to the function of RMSE across the Window Length sweep, where a steeper decay could correspond to a tendency to perform more complex maneuvers. Window Length may also be related to pilot working memory, where a certain amount of prior maneuver information may be used to plan maneuvers at some point in the future.

The consistent positively decaying function for Lag Length indicates that the RMSE of the evaluated models increased as the Lag Length increased. The time-variability of a pilot's performed maneuvers explains this phenomenon. If a particular maneuver is consistent over time, a model may effectively predict future actions even with considerable lags. If maneuvers are inconsistent, short-lived, or relatively random, the RMSE of the evaluated models should increase as Lag Length increases, where a steeper decay could correspond to a tendency to perform maneuvers with less temporal structure.


We present the decay functions for Window Length and Lag Length as a method of measuring pilot behavior by way of the complexity and time-variability of the maneuvers pilots perform. Additionally, these metrics could relate to pilot cognitive processes. Validating the relationship between the sweep decay functions and maneuver complexity might involve identifying the unique maneuvers a pilot may perform, assigning subject-matter-expertise-derived complexity measures to each maneuver, and then correlating them with our decay metric. Validating the relationship between the sweep decay functions and aspects of cognitive processes might be more complex and could involve additional assessment of each pilot's working memory.

4.2 Applications and Next Steps

These measurements can be thought of as metrics for pilot behavior and could be of interest when assessing pilot performance. Analyzing the decay functions for each of these attributes revealed notable variation from pilot to pilot. Assessing the correlation between the decay functions and the hit/miss ratio across runs for each pilot revealed a low correlation for both the Window Length and Lag Length decay functions, suggesting that the metrics discussed in this paper may hold some value when assessing pilot performance. It is important to note that these correlations were performed across only five pilots. It would be of interest to evaluate these metrics with more pilots, across different types of scenarios, and repeatedly throughout the course of a training program, to observe how the metrics might change with individual changes in experience.

In addition to pilot performance assessment, the models and metrics developed as part of this effort have a broad range of applications in the field of Human-Machine Symbiosis [5]. Models of pilot decision making can be useful in the design and evaluation of autonomous mission aides. For example, models developed under this effort could be used to verify and validate autonomous teammates, providing human-like actions across a number of runs that would not be realistic for a human pilot to perform. Additionally, these models could help direct the actions of an autonomous teammate, potentially engendering trust by enabling human-like actions in manned-unmanned teams.

5 Conclusion

Rather than training and evaluating a single model to analyze pilot behavior, we evaluated many different iterations of a model across a range of parameters. By evaluating model performance across sweeps of Window Length and Lag Length, then fitting exponential functions to the performance trends, we identified unique pilot behavior metrics that correlate with pilot performance. These metrics can provide insight into the complexity of pilot behavior, as well as a mechanism to assess pilot performance and enhance human-machine symbiosis.

Acknowledgments. We would like to thank Danielle Clement and Sarah Mottino at Lockheed Martin Aero for providing us with the dataset used for this analysis. We would also like to thank Raquel Galvan-Garza and Joshua Pletz at Lockheed Martin Advanced Technology Laboratories for their contributions to this effort. Funding for this effort was provided by Lockheed Martin.


References
1. Gray, W.D.: Integrated models of cognitive systems. In: AFOSR Cognitive Modeling Workshop, Saratoga Springs, NY, USA (2005)
2. Busemeyer, J.R., et al.: Cognitive and neural bases of multi-attribute, multi-alternative, value-based decisions. Trends Cogn. Sci. 23(3) (2019). https://doi.org/10.1016/j.tics.2018.12.003
3. Sherstinsky, A.: Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network (2018). arXiv:1808.03314
4. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations, San Diego (2015)
5. Grigsby, S.S.: Artificial intelligence for advanced human-machine symbiosis. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91470-1_22

Felix: A Model for Affective Multi-agent Simulations

Giovanni Vincenti(B) and James Braman

1 University of Baltimore, Baltimore, MD, USA
[email protected]
2 Community College of Baltimore County, Essex, MD, USA
[email protected]

Abstract. This article introduces the Felix model, which utilizes concepts based on human physiology to simulate the influence of emotions in decision-making, the performance of actions, and memory mechanics. The model implements a fuzzy affective system, a genetically inspired emotive arousal modulation component that influences the agent’s performance. The goal of this project is to create multi-agent environments that simulate human behavior in emotionally charged situations. In addition to the model introduction, preliminary data for a sample situation is presented as well as a brief discussion of future work. Keywords: Affective computing · Kansei engineering · Agent-based models

1 Introduction

The field of Artificial Intelligence has made significant leaps in the last two decades, bringing these algorithms to the core of most modern computer-driven applications. It is important to remember that AI implements abstractions of human reasoning rather than the more complex physiological mechanics. Although this approach has been extremely effective in addressing large-scale machine learning problems, where the solution must be based on analytical and perhaps merciless foundations, it is far from representative of human nature. When we apply these algorithms to the simulation of populations, we quickly realize that a key component is missing. If we talk about a simulation to evaluate the traffic patterns of a city, our minds may wander to the last time we were stuck in traffic in the middle of a thunderstorm. Some of us may even relive feelings of resignation, impatience, or perhaps even a spark of anger. At this point, comparing the model that we are recreating, cold and calculating, to what the human experience is really like, we find that emotions are often not part of the picture. This paper introduces Project Laetitia, which aims at creating social simulations that blend AI and a basic emotive behavioral response. In particular, we focus on Felix, a model that utilizes affective biomimicry to modulate reactive and rational behaviors, and memory formation and retrieval, by using fuzzy sets.


2 Background

Before we introduce the Felix model, it is important to review some basic concepts of human physiology as well as other projects in affective computing, including agency.

2.1 Emotions

The concept of emotions is often regarded as arbitrary, since each one of us feels differently about the same events or subjects. If we were to explore this topic across all the different populations and cultures worldwide, the concept might seem even more arbitrary. It is essential to recognize, however, that all of humanity can unequivocally identify some basic emotions in their peers [1]. Once we agree that some basic emotions exist, we must classify them appropriately. The most well-known set of basic emotions is associated with Ekman, but in fact there are many [2–4]. This project is based on Ekman's basic emotions: anger, disgust, fear, joy, sadness, and surprise.

The cerebral center typically associated with emotions is the amygdala, which consists of two small bodies located in the medial temporal lobe. The amygdala is closely associated with the hippocampus, which also consists of two elements located next to the amygdala. The hippocampus is primarily responsible for memory, as well as the processing of emotions [5]. Consequently, emotions and memory are closely linked [6]. It is important to point out that although the limbic system is often associated with emotions, the neocortical system has taken up some of these functions through evolution [7]. This means that the processing of emotions is not as simple as recreating an artificial model of the amygdala. However, it is a mechanism that needs to be abstracted while still keeping in mind its physiological foundations.

2.2 Affective Computing

The field of affective computing is relatively new and focuses on including an emotive component in digital products [8]. This significant change follows the inclusion of the quantification of emotions in the field of industrial design through Kansei engineering [9]. Affective computing can be divided into two major areas: the detection and recognition of emotions in humans, and the emulation of emotions in machines. While most of the work in affective computing focuses on the first area [10, 11], our research is primarily focused on the second.

The advent of affective computing has also brought psychology, sociology, and agency together [12]. The majority of affective agent architectures are designed for general-purpose situations [13, 14]. We can also find an array of specialized systems that focus on specific tasks [15, 16]. Although the approach described in this paper is not novel, some of its foundations and implementations are quite different from existing models in the variety and flexibility of emotion regulation. Most approaches focus on dichotomous and discrete states, where an agent can be either angry or not, and on exclusive models, where the agent is meant to emulate only anxiety, for example.


3 Project Laetitia

The main goal of this project is to abstract the individual by looking at the combined behavior of the group. This approach, associated with computational sociology, allows us to explore emergent group dynamics even though each individual's behavior may not be extremely accurate [12]. It also allows us to keep the processing power to a relative minimum, so as not to over-burden the system. If we were to implement more complex individuals, scaling to thousands of agents might not be feasible without significant computing infrastructure.

4 The Felix Model

The core element of Project Laetitia is the model of the intelligent affective agent: Felix. The choice of the name (Latin for happy) does not reflect each agent's drive to achieve its own happiness; it was chosen to highlight the importance of the affective component. The mechanics of this model are significantly inspired by biological systems and, in particular, the way in which humans perceive and process emotions. This does not mean that Felix is designed to simulate a single person's behavior, as that task would be excessively difficult given the current state of research. As stated earlier, the main goal of the project is to approximate group behaviors based on a simulation of simplified individuals. The architecture of the agent is an advancement of an early prototype [17] and is based on the three-layered architecture proposed by Sloman [18]. The first layer refers to reactive processes, which guard the agent's self-preservation. The second layer is associated with deliberative processes, such as planning and deciding. The last layer focuses on meta-management processes, associated with aspects such as goal setting. These elements inspire the Felix architecture, as shown in Fig. 1.

Fig. 1. Basic model of the agent’s architecture, showing each component and their direct interactions and flow of information.

4.1 Internal Components

Felix is composed of seven modules. Arrows represent one-directional data flows, and shared boundaries represent a two-way interaction between modules. For example, the


encoder presents data to the rational and reactive modules, but there is no data flow from either module back to the encoder. In contrast, there are two-way data flows between the rational module and memory.

Encoder. The Encoder allows the agent to interpret the world in which it operates. Each input may be associated with an affective value. Each value may refer to a basic emotion (another agent communicates a message happily) or to a composite one (an agent appears angry and surprised). The encoded input is then transferred to the Rational and Reactive modules. The Encoder is also responsible for translating clues (such as sounds or colors) into a format that the agent can understand.

Reactive. Once an input is received from the Encoder, the Reactive module assesses whether to respond or not. In the case of no response, the system processes the input only through the Rational module. The Reactive module is activated either by a situation of danger or by a highly affective situation. In that case, the system activates the fight-or-flight response and interacts directly with the Decoder.

Rational. The operations of the Rational module revolve around the agent's drives. Each drive, or goal, may be in the nature of the agent, such as looking for food when hungry, or may be dictated by the environment, such as looking for resources. The Rational module is entrusted with operations under normal circumstances, when the agent is not responding to a fight-or-flight situation.

Decoder. The Decoder receives inputs from the Rational and Reactive modules. The two inputs are evaluated, and the system chooses which one to act on. If the Reactive module fires, the system ignores the input derived from the Rational module. The Decoder is also responsible for managing the agent's verbal and non-verbal communications. For example, an agent may communicate a message with concern or may look angry.

Metacognitive. The Metacognitive module will not be implemented at first, since we are primarily concerned with how to incorporate emotions into an agent's normal performance. Later on, this module will be responsible for the generation of new goals, especially when simulating societies that contain complex structures and self-regulation mechanisms and processes.

Memory. As memory is an essential component of the human brain, and its function is closely tied to performance as well as affective states, it is impossible not to incorporate it into the model. Having a separate memory module allows us to test different mechanisms, such as the formation of new memories in emotionally charged situations, or the transfer of information from one agent to another. Although this process could become overly complex if we implemented the module as close to physiology as possible, we will simplify and abstract its architecture in different ways, depending on the situation.

Affective. The main component of this project is the Affective module, which interacts directly with most modules and is the focus of Sects. 4.2 and 4.3.


4.2 Genetics

The most important component of this model revolves around the way in which emotions influence the agent's behavior through the Affective module. In order to simulate this aspect, we have chosen to create an artificial DNA-like string that controls the agent's underlying modulatory system, which is based on a cubic Bézier curve. This curve is governed by four control points, and its range on each axis is [0, 1]. In this model, control point P0 will always be at coordinate (0, 0), and P3 will always be at coordinate (1, 1). The two other control points, P1 and P2, depend on the genetic makeup of the agent. Figure 2 summarizes this process.

Fig. 2. DNA-like structure that governs the affective modulation system.
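Since the paper does not specify the exact encoding, the following is one hypothetical way such a DNA-like string could be decoded into the two agent-specific control points, using fixed-width binary genes mapped to [0, 1]:

```python
# Hypothetical decoding scheme (the paper does not specify the encoding):
# fixed-width integer genes mapped to [0, 1] coordinates for P1 and P2.
def decode_genome(genome: str, bits_per_gene: int = 8):
    """Split a binary string into four genes -> (P1x, P1y), (P2x, P2y)."""
    scale = 2 ** bits_per_gene - 1
    genes = [int(genome[i * bits_per_gene:(i + 1) * bits_per_gene], 2) / scale
             for i in range(4)]
    return (genes[0], genes[1]), (genes[2], genes[3])

p1, p2 = decode_genome("11001100" "00011010" "11100110" "01100110")
print(p1, p2)   # agent-specific control points in [0, 1] x [0, 1]
```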

As we process the string and find the coordinates of P1 and P2, we can construct the curve shown in Fig. 3. The x-axis represents the Objective Arousal State (OAS), and the y-axis represents the agent's Subjective Arousal State (SAS). OAS refers to a standardized form of emotive states, whereas SAS reports the agent's own emotive state. This distinction is quite important, because any two humans have different reactions to the same situation, even though the same event may influence more than one person.
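A minimal sketch of the OAS-to-SAS mapping follows, reusing the control points decoded above; the nearest-t inversion of the curve's x-coordinate is an implementation convenience, not a detail taken from the paper.

```python
# Hedged sketch (not the authors' code): mapping OAS to SAS via a cubic
# Bezier curve with P0=(0,0), P3=(1,1) and agent-specific P1, P2.
import numpy as np

def bezier_point(t, p0, p1, p2, p3):
    """Standard cubic Bezier evaluation for parameter t in [0, 1]."""
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

def oas_to_sas(oas, p1, p2, steps=1000):
    """Solve x(t) = OAS numerically, then return y(t) = SAS."""
    ts = np.linspace(0.0, 1.0, steps)
    xs = bezier_point(ts, 0.0, p1[0], p2[0], 1.0)
    t = ts[np.argmin(np.abs(xs - oas))]          # nearest-t inversion
    return bezier_point(t, 0.0, p1[1], p2[1], 1.0)

# Example: P1=(0.8, 0.1), P2=(0.9, 0.4) yields a blunted subjective
# response (~0.12) at a mid-level objective arousal of 0.5.
print(oas_to_sas(0.5, p1=(0.8, 0.1), p2=(0.9, 0.4)))
```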

Fig. 3. Bézier curve representing the affective modulation system.
Fig. 4. Fluctuations in the affective modulation system.

Another important aspect is how dynamic emotions are. This is also accounted for in the model by allowing P1 and P2 to change throughout the simulation. This leads


the curve to shift, letting the agent experience different reactions to the same event, depending on the experiences that have affected it throughout its existence. Figure 4 shows an example of this fluctuation: the same OAS of 0.5 may lead to a greater or lower SAS, depending on how the agent's affective state has been influenced. The range of movement of the control points is also regulated by the agent's digital DNA.

4.3 Fuzzy Affective System

Once the system is able to compute the SAS value, it is essential to map this value to actions, or limitations, that depend on the affective state. To do so, we utilize a Fuzzy Affective System (FAS), which is based on fuzzy sets. The nature of emotions is strictly tied to non-exclusive classification; thus, discrete calculations would create more problems than they would solve [19]. As shown in Fig. 5, we classify the membership of the SAS value in each particular set. In this example, FS1 corresponds to the lowest range of arousal levels, while FS5 corresponds to the greatest.

Fig. 5. Classifiers of the SAS value through fuzzy sets.

Fig. 6. Yerkes-Dodson curve.

Each fuzzy set can be associated with a group of actions relevant to that particular SAS. Consider, for example, the emotion of anger: someone who is not angry may not think about yelling, whereas someone who is already angry may be more prone to yelling at someone who is not helping. In that case, the "Yell" action may not be associated with FS1, while it may be related to FS5. This system can be accessed by both the Rational and the Reactive modules when making decisions. The range from low to high arousal states is essential in psychology, as the level of performance typically depends on this factor. The Yerkes-Dodson curve [20], shown in Fig. 6, describes the relationship between arousal and performance. Part of the variability of an agent's mood over time will also be modeled using a damped sine wave, to simulate the hedonic treadmill effect as well as desensitization to repeated stimuli [21].
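As a minimal sketch of this idea, assuming five evenly spaced triangular membership functions and an invented action table, SAS classification and action gating could look like this:

```python
# Minimal sketch under assumptions: five triangular membership functions on
# [0, 1] classify a SAS value into fuzzy sets FS1..FS5, and an invented
# action table gates which actions each set permits.
import numpy as np

CENTERS = np.linspace(0.0, 1.0, 5)       # FS1..FS5 peak locations
WIDTH = 0.25                             # half-width of each triangle

def memberships(sas):
    """Degree of membership of a SAS value in each fuzzy set (non-exclusive)."""
    return np.clip(1.0 - np.abs(sas - CENTERS) / WIDTH, 0.0, 1.0)

# Hypothetical action table: "yell" only becomes available at high arousal
ACTIONS = {0: {"wait"}, 1: {"wait", "ask"}, 2: {"ask", "argue"},
           3: {"argue"}, 4: {"argue", "yell"}}

def available_actions(sas, threshold=0.3):
    allowed = set()
    for i, deg in enumerate(memberships(sas)):
        if deg >= threshold:
            allowed |= ACTIONS[i]
    return allowed

print(available_actions(0.9))   # high arousal -> {'argue', 'yell'}
```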


Segregation Model. This model simplifies agent behavior as a factor of its propensity to segregate with other similar agent types within a N × N grid world based on preset levels of homophily and simple rules. The agent’s drive (or happiness is this example case) is based on its desire for its surroundings to include a fraction of its “neighborhood” to consist of like agents. This model is of interest to examine population dynamics. Agents in an area will relocate to a random unoccupied location in the grid world until the fraction of like agents is within range. Although overly simplified in this case, we seek to examine the difference in agent behaviors in our experiment. Table 1 below illustrates three configurations for three model types. The reactive model is the original Schelling Segregation Model. The Annoyance model 1 and 2 are sample models where once agents reach a threshold of annoyance, they give up. In this case, some agents stay annoyed but do not move locations compared to the reactive model. Iterations continue since agents are not all happy. Iterations marked with an asterisk stabilize at the number, but the simulation continues to run, unlike the other combinations where it would stop once all agents are happy. Table 1. Sample data for three models Model

Table 1. Sample data for three models

| Model | Density | % Minority | Homophily | Iterations | Happy Agents |
|---|---|---|---|---|---|
| Reactive | 80% | 50% | 2 | 6.7 | 322.8 |
| Reactive | 60% | 30% | 3 | 39.4 | 236.3 |
| Reactive | 40% | 20% | 3 | 122.4 | 155.6 |
| Annoyance 1 | 80% | 50% | 2 | 7.2 | 322.7 |
| Annoyance 1 | 60% | 30% | 3 | 11.2* | 213.2 |
| Annoyance 1 | 40% | 20% | 3 | 19* | 131.4 |
| Annoyance 2 | 80% | 50% | 2 | 7.6 | 309.1 |
| Annoyance 2 | 60% | 30% | 3 | 10.5* | 214.7 |
| Annoyance 2 | 40% | 20% | 3 | 12.2* | 120.2 |
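The following framework-agnostic sketch (plain NumPy rather than the authors' Mesa implementation) illustrates the annoyance mechanism: unhappy agents accumulate annoyance each time they try to relocate, and past an assumed threshold they give up and stay put.

```python
# Framework-agnostic sketch (not the authors' Mesa code) of the annoyance
# variant: agents past the GIVE_UP threshold stay put, still annoyed.
import numpy as np

rng = np.random.default_rng(42)
N, DENSITY, MINORITY, HOMOPHILY, GIVE_UP = 20, 0.8, 0.5, 2, 5

grid = np.zeros((N, N), dtype=int)       # 0 empty, 1 majority, 2 minority
occupied = rng.random((N, N)) < DENSITY
grid[occupied] = np.where(rng.random((N, N))[occupied] < MINORITY, 2, 1)
annoyance = np.zeros((N, N), dtype=int)

def happy(r, c):
    """True if at least HOMOPHILY of the 8 (toroidal) neighbors are like agents."""
    like = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and grid[(r + dr) % N, (c + dc) % N] == grid[r, c]:
                like += 1
    return like >= HOMOPHILY

for step in range(500):
    moved = False
    for r, c in list(zip(*np.nonzero(grid))):
        if grid[r, c] == 0 or happy(r, c) or annoyance[r, c] >= GIVE_UP:
            continue                     # empty cell, satisfied, or gave up
        annoyance[r, c] += 1
        empties = list(zip(*np.nonzero(grid == 0)))
        nr, nc = empties[rng.integers(len(empties))]
        grid[nr, nc], grid[r, c] = grid[r, c], 0
        annoyance[nr, nc], annoyance[r, c] = annoyance[r, c], 0
        moved = True
    if not moved:
        break

print("happy agents:", sum(happy(r, c) for r, c in zip(*np.nonzero(grid))))
```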

5 Conclusions and Future Work The next step of this research involves creating simple simulations that will test the agent’s affective model in more detail. These simulations will provide more robust foundational results that will then be utilized to test more complex scenarios. We believe that this agent has excellent potential, given the architecture’s modularity and the simplicity of the operations to be performed. From a technical standpoint, the modularity will allow the model to scale to large or distributed systems without significant changes. The many elements that compose the agent’s internal architecture are very pliable, lending themselves to simulations that mimic human physiology.


References
1. Ekman, P.: An argument for basic emotions. Cogn. Emot. 6(3–4), 169–200 (1992)
2. Izard, C.E.: Basic emotions, relations among emotions, and emotion-cognition relations (1992)
3. Ortony, A., Turner, T.J.: What's basic about basic emotions? Psychol. Rev. 97(3), 315 (1990)
4. Plutchik, R., Kellerman, H.: Emotion: Theory, Research and Experience. Volume 1, Theories of Emotion. Academic Press, Cambridge (1980)
5. Marieb, E.N., Hoehn, K.: Human Anatomy and Physiology. Pearson Education, London (2007)
6. Christianson, S.A.: The Handbook of Emotion and Memory: Research and Theory. Psychology Press, London (2014)
7. Everly, G.S., Jr., Lating, J.M.: A Clinical Guide to the Treatment of the Human Stress Response. Springer, Heidelberg (2012). https://doi.org/10.1007/978-1-4614-5538-7
8. Picard, R.W.: Affective Computing, vol. 252. MIT Press, Cambridge (1997)
9. Nagamachi, M.: Kansei engineering: a new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon. 15(1), 3–11 (1995)
10. Anderson, K., McOwan, P.W.: A real-time automated system for the recognition of human facial expressions. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 36(1), 96–105 (2006)
11. Camurri, A., Lagerlöf, I., Volpe, G.: Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques. Int. J. Hum. Comput. Stud. 59(1), 213–225 (2003)
12. Macy, M.W., Willer, R.: From factors to actors: computational sociology and agent-based modeling. Ann. Rev. Sociol. 28(1), 143–166 (2002)
13. Hudlicka, E.: Modeling effects of behavior moderators on performance: evaluation of the MAMID methodology and architecture. In: Proceedings of BRIMS 12 (2003)
14. Dias, J., Mascarenhas, S., Paiva, A.: Fatima modular: towards an agent architecture with a generic appraisal framework. In: Bosse, T., Broekens, J., Dias, J., van der Zwaan, J. (eds.) Emotion Modeling. LNCS, vol. 8750, pp. 44–56. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-319-12973-0_3
15. Oker, A., et al.: How and why affective and reactive virtual agents will bring new insights on social cognitive disorders in schizophrenia? An illustration with a virtual card game paradigm. Front. Hum. Neurosci. 9, 133 (2015)
16. Aydt, H., Lees, M., Luo, L., Cai, W., Low, M.Y.H., Kadirvelen, S.K.: A computational model of emotions for agent-based crowds in serious games. In: Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, vol. 02, pp. 72–80. IEEE Computer Society (2011)
17. Vincenti, G., Braman, J., Trajkovski, G.: Emotion-based framework for multi-agent coordination and individual performance in a goal-directed environment. In: 2007 Fall AAAI Symposium, Arlington, VA, USA (2007)
18. Sloman, A.: Beyond shallow models of emotion. Cogn. Process. 2(1), 177–198 (2001)
19. Bakhtiyari, K., Husain, H.: Fuzzy model of dominance emotions in affective computing. Neural Comput. Appl. 25(6), 1467–1477 (2014). https://doi.org/10.1007/s00521-014-1637-6
20. Diamond, D.M., Campbell, A.M., Park, C.R., Halonen, J., Zoladz, P.R.: The temporal dynamics model of emotional memory processing: a synthesis on the neurobiological basis of stress-induced amnesia, flashbulb and traumatic memories, and the Yerkes-Dodson law. Neural Plast. 2007 (2007)
21. Diener, E., Lucas, R.E., Scollon, C.N.: Beyond the hedonic treadmill: revising the adaptation theory of well-being. In: Diener, E. (ed.) The Science of Well-Being, vol. 37, pp. 103–118. Springer, Dordrecht (2009). https://doi.org/10.1007/978-90-481-2350-6_5
22. Masad, D., Kazil, J.: MESA: an agent-based modeling framework. In: 14th PYTHON in Science Conference, pp. 53–60 (2015)

The Old Moral Dilemma of “Me or You”

Maria Colurcio(B) and Ambra Altimari

DIGES Department, Campus Salvatore Venuta, University Magna Graecia of Catanzaro, 88100 Catanzaro, Italy
{mariacolurcio,ambra.altimari}@unicz.it

Abstract. The paper addresses the complex implications of applying AI to the automotive industry, where responsibility for decisions is transferred from humans to machines. The study aims to understand behavioral intentions to use autonomous vehicles (AVs) by exploring how ethical issues influence customers’ decisions. Based on an online survey and in-depth interviews with key opinion leaders (KOLs), the study sheds light on the factors that influence customer acceptance of AI in personal transportation and shows that individuals mainly focus on concerns about AI and fail to perceive its potential benefits. The study contributes to the emerging debate on AI and marketing with reference to ethical issues and offers new insights into Italian consumers’ awareness and usage intention for both policy makers and AV producers.

Keywords: AI · Ethics · AVs · Customer acceptance

1 Introduction¹

The relationship between ethics and innovation is controversial. Specifically, responsible innovation is a relatively new issue in the academic debate that envisions future scenarios for human beings [6], and scholars stress the relevance of moral awareness and ethics in innovation [5]. “We are entering an age in which machines are tasked not only to promote wellbeing and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate” [4, p. 59]. Artificial intelligence (AI) is no longer a futuristic scenario drawn from imaginative filmography, but a reality that is already impacting different sectors and that in the near future will strongly influence the configuration of business and human ecosystems [8]. Recently the Journal of the Academy of Marketing Science published a study that discusses, through the analysis of concrete examples, how artificial intelligence will change the future of marketing, from the automotive industry to healthcare to the fashion sector [8]. Crossing the borders of knowledge has always fascinated, but it has raised many questions too. AI is part of our everyday life: we manage our appliances via mobile apps, and we plan holidays and shopping according to suggestions made by an algorithm, and so on.

¹ Paragraph allocation. Maria Colurcio: Introduction, Research Background, Conclusions. Ambra Altimari: Research Design, Findings.

The point is: can software really understand our needs, think, and make decisions as if it were us? Naturally, when the choices made by an artificial intelligence concern safety and human life, profound ethical implications arise, which open the debate onto a broad and interdisciplinary field that also involves the moral, cultural, and legal spheres [7]. Autonomous vehicles (AVs), also called self-driving cars (SDCs), are certainly one of the most disruptive innovations in services, as they are a cutting-edge AI application in the B2C market that implies “an immediate sociotechnical concern” [16]. Specifically, while the widespread adoption of AVs would bring undoubted benefits in terms of environmental and energy improvement, safety, and mobility for the disabled and for those who cannot drive conventional vehicles [2], it also raises crucial issues in terms of liability, information protection and compliance [17]. Nevertheless, a recent survey shows that in Europe, especially among the youngest, there is little understanding of the meaning of autonomous vehicles [3]. The present study aims at understanding behavioral intention to use autonomous vehicles [11] by investigating how ethical considerations affect customers’ decisions. The paper contributes to the emerging debate on AI and marketing with reference to ethical issues and provides fresh insight for both policy makers and AV producers about the awareness and use intention of Italian consumers.

2 Research Background

2.1 Autonomous Vehicles (AVs) in Marketing Literature

The definition of autonomous driving vehicles is preliminary to our study. AVs are vehicles that combine connectivity, a popular feature in latest-generation cars, with autonomy, and that therefore benefit from each technology and from their combination. The Society of Automotive Engineers (SAE) has identified six levels of automation in autonomous vehicles: level 0, No Automation; level 1, Driver Assistance; level 2, Partial Automation; level 3, Conditional Automation; level 4, High Automation; level 5, Full Automation. Level 5 means that all individuals inside the vehicle are passengers and there is no way to control the car [16]. Of course, level 5 is the most advanced and futuristic level, and it has important implications for the wider transport system. Currently, most modern vehicles belong to level 1 or 2; Tesla models reach level 4. The driverless car has a revolutionary innovative impact not only in terms of product concept, but also in terms of its contribution to the sustainability and inclusiveness of the socio-economic system [7, 9]. The driverless car is an application of AI that makes it possible to reduce accidents, to provide mobility to those who for different reasons are not able to drive, and to redesign routes and urban spaces [14]. According to recent contributions [18], AVs are a key transportation technology of the future that will play a pivotal role in the development of smart cities and the redesign of urban spaces [1]. Despite such strategic implications, marketing literature focusing on customer acceptance of and attitudes toward AVs is scant so far. A search performed through WOS Clarivate Analytics (January 2021) returned only 22 articles published on the topic in the Management and Business field. Of course, this depends on the novelty of the topic (77% of the scientific production is concentrated in the last three years, with a peak of 50% in 2020), but it is also the sign of a gap in the literature about consumer behavior.

Only two articles have been published in a marketing journal (Journal of Consumer Research). Both articles², published in 2020, focus on the moral debate and on the implications in terms of liability and development of the “new product”, referring to level 5 AVs. Specifically, Novak proposes “a generalized framework that organizes key research to date on moral decision and AVs” and draws four trajectories and research agenda points.

2.2 The Ethical Issue

In light of the above, knowing the perception of the moral implications of AVs is a critical issue for predicting adoption levels and thus for developing marketing policies for AVs (mainly product and communication policies). The framework proposed by Novak [13] highlights that the topic is truly interdisciplinary [10] and privileges different perspectives depending on the specific goal or area of study (e.g., computer science, psychology, philosophy). Significant benefits are expected from the adoption of AVs, but consumers have concerns about self-driving machines [15]. Specifically, Bonnefon [7] argues that there is a discrepancy between what people expect from AVs for themselves and what they expect for others and, according to Gill [10], “The latter discrepancy highlights the moral tension between self-interest versus prosociality when one’s own life is at stake and is to be traded against that of a pedestrian”. Currently, there is a lack of both theoretical and empirical research on the moral implications of artificial intelligence. However, given the importance of the topic to both major automakers and policymakers, who are intensely debating the future of transportation and the means for driverless vehicles, some important empirical studies are emerging that address the questions of liability and ethical issues for AVs. Specifically, the MIT Media Lab at the Massachusetts Institute of Technology has developed a crowdsourcing platform called Moral Machine “for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars”. The Moral Machine collects, for AVs, a series of ethical-dilemma situations in which the individual is called upon to judge from the outside and choose the lesser of two evils in a situation of brake failure. On the theoretical side, a recent article published in the International Journal of Technoethics [12] offers practical recommendations for the automotive sector to deal with ethical issues related to human agency and to the oversight of underlying fundamental rights (human dignity; the rights to self-determination and liberty, to life and safety, and to privacy), affirming the role of institutions in setting boundaries for the acceptance and applications of AVs.

3 Research Design

The present study is part of a wider research project comprising two studies: the first study (hereafter “study 1”) was a quantitative survey; the second (“study 2”) is the present study. In study 1 we administered an online questionnaire and conducted an explorative factor analysis to identify the main issues related to customer acceptance of AVs.²

² Gill, T.: “Blame it on the self-driving car: how autonomous vehicles can alter consumer morality”, 47(2), 272–291, and Novak, T.P.: “A Generalized Framework for Moral Dilemmas Involving Autonomous Vehicles: A Commentary on Gill”, 47(2), 292–300.

Specifically, we launched an explorative online survey using Google Forms to get some preliminary suggestions about such a new and unexplored topic. The survey remained open for almost one month between April and May 2020 and collected 212 responses. To set a preliminary group of questions, we started from the results of several surveys on AVs, such as the Ericsson survey on the future of self-driving vehicles (Ericsson Consumer Lab, 2017) and the Innovation Group survey on driving the next generation of cars (The Innovation Group, 2019). We then ran an explorative factor analysis and a regression analysis to answer the question: what are the factors affecting customer acceptance of AVs? From the survey we obtained a matrix of 81 columns (variables) and 212 rows (responses). We analysed the data using XLStat. The explorative factor analysis allowed us to identify the latent aspects that influence customers’ behavior. We used six different sets of questions (Table 1).

Table 1. Areas of investigation (sets of questions) – Study 1.

1. Personal details: gender, age, education, geographical dimension
2. Confidence in ICTs (device use): online shopping, payments, telework, gaming
3. Private car use and preference versus public transport: travel, work, spare time, shopping
4. Propensity for distraction: light (listening to music, talking on the phone, ...); severe (wearing make-up, playing video games, falling asleep, ...)
5. Interest in car automation: preferences about current and future options, such as park assistance, autonomous emergency braking, remote control
6. Advantages/disadvantages: agreement/disagreement about potential advantages and problems

In study 2, a qualitative research design was employed to generate rich, in-depth descriptions of the concept of the autonomous vehicle and of its acceptance in relation to ethical issues. Specifically, we chose an abductive research logic to create synergy and simultaneous development of theoretical and empirical material. We conducted two in-depth interviews with KOLs (Key Opinion Leaders) to discover more about ethics and artificial intelligence, to gain key insight into the area, and to get suggestions on key questions for conducting the focus groups. The KOLs were two academic experts, one in ethics (Professor of Ethics and Philosophy at the University of Catanzaro) and one in technology (Professor of Information and Management Engineering, Director of the Design Thinking for Business Observatory). Based on the results of study 1 and the KOL evidence, we conducted two online focus groups.


The focus groups involved young adults (18 to 30 years old) with driving licenses, living in small and medium-sized urban areas. We included undergraduate and graduate students from the University of Magna Graecia. Given the broadness of the topic, we used the two focus groups to explore two different aspects of ethics and AI: in one group we talked about self-driving cars, and in the other we talked about the emotional aspects of the relationship between ethics and AI.

4 Findings

Study 1. Respondents do not perceive the potential advantages of AVs, such as being able to work during the trip. Ease of mobility and emission reduction are the only advantages that are widely recognized. Instead, individuals focus mainly on concerns (Fig. 1). Study 1 was an explorative factor analysis: our aim was to identify latent factors that affect respondents’ behaviors (measured through the manifest variables). We then used a simple linear OLS regression to test the significance of the factors on a response variable that measures the willingness to buy an AV (preliminary results, to be considered only in terms of sign and significance). Disinterest in automation (factor 1) shows a significant negative sign: those who do not have automated vehicles, nor are interested in having automated services, are less prone to buy an AV in the future. Those who recognize the advantages of AVs, and ICT lovers, are more prone to buy an AV. Conversely, the higher the concerns about this technology, the lower the willingness to buy an AV. Two other factors act as complements to disinterest (also with a negative sign) and to distraction (positive sign): the latter is a characteristic that will increase the demand for AVs.

Fig. 1. More than 70% of respondents fear that the car will make mistakes; over 60% want to keep control of the car; and more than half would not trust the car, especially in traffic.
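Study 1’s pipeline (exploratory factor analysis on the manifest survey variables, followed by an OLS regression of willingness to buy on the factor scores) was run in XLStat. The sketch below reproduces the same two steps in Python; the file name, column names, and the choice of five factors are hypothetical assumptions, not details taken from the paper.

```python
# Illustrative sketch of the Study 1 pipeline (the authors used XLStat, not Python).
# The CSV file, the column names, and the choice of 5 factors are assumptions.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

survey = pd.read_csv("av_survey.csv")                  # hypothetical: 212 rows x 81 columns
items = survey.drop(columns=["willingness_to_buy"])    # manifest survey variables

# Exploratory factor analysis: extract latent factors from the manifest variables.
fa = FactorAnalysis(n_components=5, random_state=0)
scores = pd.DataFrame(fa.fit_transform(items),
                      columns=[f"factor_{i}" for i in range(1, 6)])

# Simple OLS of willingness-to-buy on the factor scores; per the paper,
# results are read only in terms of sign and significance.
ols = sm.OLS(survey["willingness_to_buy"], sm.add_constant(scores)).fit()
print(ols.summary())
```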

The results show that ethical concerns are relevant to both the young and the less young. But what is the ethical matter? It refers to the human perspective on moral decisions made by machine intelligence, by the algorithm that drives AVs. The moral dilemma, in its one-to-one form, could be: a driverless car must choose the lesser of two evils in the event of a sudden brake failure, such as driving ahead and crashing into the guardrail, killing the passenger, or swerving into the other lane and running over a pedestrian. The majority of interviewees fear that the car will make mistakes and do not trust it; they want to remain in control. Our results suggest that such fears may depend on individuals’ technical knowledge of ICTs and on their philosophical and cultural background, as the correlation matrix showed.

Study 2. The KOL interviews highlighted some interesting points to elaborate on: the level of knowledge and awareness about AI/AVs among Italian individuals; the social implications (unemployment, alienation, social diseases); and the emotional side of the issue (car lovers, the love of driving, the symbolic role of the car). Based on these findings, we found in the focus groups that awareness of AI/AVs is low, as shown in Fig. 2.

Fig. 2. The most recurring words in defining AVs are “future” and “innovation”, generic terms applicable to any AI application; the main feelings are captured by two negative words: fear and uncertainty.

The word clouds that emerged during the focus groups confirm the fears that emerged from Study 1. The sense of anxiety and concern about loss of control that occurred in both Study 1 and Study 2 arises even more strongly from the discussion of a “me or you” situation. In relation to the moral dilemma presented in Fig. 3³, the focus group participants chose to save the “you”. No one wants to take responsibility for the life-or-death decision, and everyone made clear their own reservations about this “futuristic” scenario. Gender appears to be irrelevant in this decision (despite the high proportion of women in the groups), just as other types of discrimination (social, economic, physical) appear to be irrelevant. Indeed, when asked about situations (moral dilemmas) involving homeless people, thieves, or fat men/women, participants showed no discrimination in judgment: in all cases, they chose the solution causing less harm (fewer deaths or injuries). Although the participants were prompted to consider the social and economic consequences of the introduction of AVs, such as unemployment among taxi drivers and the confusion of AVs and traditional cars driving together on the streets, they insisted on the moral issue, and anxiety and fear of uncertainty emerged as the main feelings.

³ The scenario shown in the figure was designed by the authors on the Moral Machine platform, which allows anyone to create their own scenario to share and discuss: https://www.moralmachine.net.

Fig. 3. The moral dilemma: the AV must decide whether to kill the passenger (me) or the pedestrian (you). Respondents preferred to crash into a barrier and die rather than drive on and kill a pedestrian (female).

At the conclusion of one focus group, a participant affirmed that “AI in general must be used with moderation and control, because people must always be in control: we must not abuse it”. This position was shared by most, but we observed that, with the exception of one or two people, the participants, although young and studying at university, are not aware of the potential of AI and show no knowledge of AVs; moreover, more than half of all participants had never heard of AVs.

5 Conclusion

Our studies show that young Italians do not have sufficient awareness of the applications and possibilities of artificial intelligence; in particular, the concept of the self-driving car is still quite new and vague to them. Sensitivity to moral issues is high, and negative sentiments outweigh positive ones. We argue that this empirical evidence depends on several factors, which suggest some considerations. First, the low level of knowledge and awareness about AVs, and more generally about AI, depends on people’s digital mastery and on how AVs are communicated. Indeed, study 1 showed that people who have higher levels of expertise and knowledge about technology have greater confidence in AVs, both for work and for pleasure. The Digital Economy and Society Index (DESI) for Italy is below the European average. Therefore, on the one hand, institutions are called upon to develop actions to improve people’s digitalization and to run communication campaigns about the advantages and disadvantages of using AI in daily life. On the other hand, the need arises to regulate the diverse applications of AI; specifically for AVs, this means defining a set of rules on liability and limits. Second, AVs still appear as a distant frontier of technology; moreover, the KOL interview with the AI expert highlighted that AVs are not yet as mature as other AI devices. In conclusion, we believe that the application of AI to cars can have a great impact on the overall quality of life in terms of a more inclusive and sustainable logic (by improving traffic conditions, and by enabling the redesign of urban spaces and the mobility of disabled and fragile people), but an extensive program of education and information is needed. On the other hand, there is still enough time before AVs become a concrete transport modality.

References
1. Altimari, A., Colurcio, M.: Ethics and artificial intelligence: new and old challenges. Focus on self-driving cars. In: SIM Conference 2020 Proceedings (2020). ISBN 978-88-943918-4-8
2. Anderson, J., Nidhi, K., Stanley, K., Sorensen, P., Samaras, C., Oluwatola, O.: Autonomous Vehicle Technology: A Guide for Policymakers. Rand Corporation, Santa Monica (2014)
3. ANSA: Veicoli connessi e autonomi mandano in tilt consumatori (2018). https://www.ansa.it/canale_motori/notizie/analisi_commenti/2018/09/20/veicoli-connessi-e-autonomi-mandano-in-tilt-consumatori_969aa8f8-6f6f-41f0-9fbe-6f2c26baba8e.html
4. Awad, E., et al.: The moral machine experiment. Nature 563, 59–64 (2018)
5. Bennink, H.: Understanding and managing responsible innovation. Philos. Manag. 19(3), 317–348 (2020)
6. Blok, V.: Philosophy of innovation: a research agenda. Philos. Manag. 17(1), 1–5 (2017). https://doi.org/10.1007/s40926-017-0080-z
7. Bonnefon, J., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science 352(6293), 1573–1576 (2016)
8. Davenport, T., Guha, A., Grewal, D., Bressgott, T.: How artificial intelligence will change the future of marketing. J. Acad. Mark. Sci. 48(1), 24–42 (2019). https://doi.org/10.1007/s11747-019-00696-0
9. Fernandez-Rojas, R., et al.: Contextual awareness in human-advanced-vehicle systems: a survey. IEEE Access 7, 33304–33328 (2019)
10. Gill, T.: Blame it on the self-driving car: how autonomous vehicles can alter consumer morality. J. Consum. Res. 47(2), 272–291 (2020)
11. Keszey, T.: Behavioural intention to use autonomous vehicles: systematic review and empirical extension. Transp. Res. Part C Emerg. Technol. 119, 102732 (2020)
12. Lütge, C., et al.: AI4People: ethical guidelines for the automotive sector - fundamental requirements and practical recommendations. Int. J. Technoethics (IJT) 12(1), 101–125 (2021)
13. Novak, T.P.: A generalized framework for moral dilemmas involving autonomous vehicles: a commentary on Gill. J. Consum. Res. 47(2), 292–300 (2020)
14. Saeed, T., Burris, M., Labi, S., Sinha, K.: An empirical discourse on forecasting the use of autonomous vehicles using consumers’ preferences. Technol. Forecast. Soc. Change 158, 120130 (2020)
15. Seuwou, P., Banissi, E., Ubakanma, G., Sharif, M.S., Healey, A.: Actor-network theory as a framework to analyse technology acceptance model’s external variables: the case of autonomous vehicles. In: Jahankhani, H., et al. (eds.) Global Security, Safety and Sustainability - The Security Challenges of the Connected World. CCIS, vol. 630, pp. 305–320. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-51064-4_24
16. Skeete, J.: Level 5 autonomy: the new face of disruption in road transport. Technol. Forecast. Soc. Change 134, 22–34 (2018)
17. Wu, S.S.: Autonomous vehicles, trolley problems, and the law. Ethics Inf. Technol. 22(1), 1–13 (2019). https://doi.org/10.1007/s10676-019-09506-1
18. Yuen, K.F., Cai, L., Qi, G., Wang, X.: Factors influencing autonomous vehicle adoption: an application of the technology acceptance model and innovation diffusion theory. Technol. Anal. Strateg. Manag. 1–15 (2020)

Analysis of a Bankruptcy Prediction Model for Companies in Chile

Benito Umaña-Hermosilla1, Hanns de la Fuente-Mella2(B), Claudio Elórtegui-Gómez2, Jorge Ferrada-Rodríguez1, and Mauricio Arce-Rojas1

1 Departamento de Gestión Empresarial, Universidad del Bío-Bío, Avenida Andrés Bello 720, Chillán, Chile
[email protected], {joferrad,mauarce}@alumnos.ubiobio.cl
2 Facultad de Ciencias Económicas y Administrativas, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2830, Valparaíso, Chile
{hanns.delafuente,claudio.elortegui}@pucv.cl

Abstract. The objective of this research is to examine bankruptcy prediction models and their application to companies in Chile. To this end, bankruptcy regulations in Chile and internationally, specifically in the United States and Colombia, are analyzed. In turn, the procedures that companies can use today are examined, and the existing prediction models are compared. The model chosen for this research is the Altman Z-score, whose validity we seek to confirm by applying it to Chilean companies. For this research, companies were classified into two groups: healthy companies and companies with financial problems. According to the results of applying the Altman Z-score model, five companies were identified that may need to take advantage of bankruptcy regulations.

Keywords: Bankruptcy · Liquidation and reorganization · Insolvency · Econometric modeling

1 Introduction

Business failure is a phenomenon that affects a large number of companies every year; it also has an impoverishing, chain effect on related parties when its symptoms are not treated in time. Based on this, this research analyzes business failure and related concepts such as insolvency: how it has been treated in Chile under the regulations in force in different periods of history, how those regulations changed in response to the need for a rapid release of resources tied up in bankruptcy, and how some international regulations helped in this evolution toward the most efficient bankruptcy system. Given the great need to avoid this situation, bankruptcy prediction models were created that reveal information about the solvency of the companies analyzed based on their financial indicators, which helps in decision-making. In this investigation, a random sample of companies was collected to evaluate the solvency of each one of them and, according to their situation, to assess how convenient it would be to enter a bankruptcy reorganization or liquidation procedure, as the case may be.

Considering that the term bankruptcy belongs to a more legal field, it is necessary to define the types of bankruptcy that bankruptcy regulations distinguish:

Fortuitous bankruptcy: this type of bankruptcy occurs due to variables that are uncontrollable for the company or natural person, since it happens even when all precautions and good administrative decisions have been taken. Examples are the bankruptcies that occur in economic crises, like the one that affected Chile in 1982 and gave rise to bankruptcy Law 18,175.

Guilty bankruptcy: guilty bankruptcy results from poor planning and management by a company’s administration; the administrator did not intend to fail, whether through lack of knowledge or through poor key decisions for the permanence of the company. An example of this situation could be a purchase in the financial market of stocks that offer high profitability but high risk: by not diversifying the portfolio, the investor puts all the capital in risky shares and, therefore, loses it all.

Fraudulent bankruptcy: fraudulent bankruptcy occurs when managers or administrators intend to bankrupt the company; in legal terms, having “acted with intent” constitutes a criminal offense, in addition to being linked to other types of crime, such as fraud, theft, embezzlement, criminal organization, corruption, and the like. Here, the merchant is aware that he is acting badly and that the company is going to fail, but does so in order to defraud investors, partners, and third parties involved in the business. In other words, the merchant not only wants bankruptcy, but plans and executes it in order to become rich with the money of bona fide third parties. It should be noted that this type of crime has recently increased greatly in Chile.

As the business environment evolves, the authors define several concepts associated with the origin of bankruptcy itself, since bankruptcy is considered “the death of the company”. Business insolvency is understood as the lack of solvency, the inability to pay a debt. But it is much more than that: “insolvency is a multidimensional phenomenon and not only economic” [1], because detecting it is not only a matter of liabilities being greater than assets; it is also necessary to consider the amount of overdue debt within the company’s total liabilities, “the personal quality of the debtor, their access to credit, their assets, their liabilities in the short, medium and long term, market conditions, their productive capacity, etc.” [2]. Hence the importance of detecting this phenomenon in time through bankruptcy procedures, since insolvency has an impoverishing effect on the bankrupt’s resources. Consider the following: selling a property out of a need for cash is not the same as selling a property to generate more wealth. In addition, the fines arising from late payment of obligations increase, and the cost of operations rises because the risk to creditors increases, so the insolvent debtor negotiates with cheaper, lower-quality suppliers, losing customers and, ultimately, the sense of reality, in what is known as the mirage of recovery [3].


Companies can fall into insolvency due to various situations: the administrators lacked the knowledge or expertise to manage the business, the level of indebtedness was too high, a risky investment (e.g., investment in shares), disputes between partners, an economic crisis, or the nature of the line of business [4]. According to Altman [5], insolvency is a multidimensional phenomenon that can affect multiple entities related to the organization and can originate from various internal or external causes; the most common are detailed below:

I. High indebtedness: over-indebtedness can put the company in danger of insolvency, since requesting financing for amounts greater than the company can repay can lead to bankruptcy.

II. Organizational, administrative, financial and business deficiency: the absence of order or structure in areas such as finance or administration can put the company at risk of bankruptcy even if it has a good level of sales, since poor resource management results in chaos and weakens the company relative to the competition.

III. Competition: this factor can be decisive in an industrial sector where competition is very high; because not all companies have the same level of resources, some end up going bankrupt because their costs are higher than the competition’s.

IV. Revaluation: the appreciation of the peso against the dollar has affected several exporters, causing a loss in the value of their products; although there are mechanisms to mitigate this effect, some attribute their bankruptcy to this situation.

V. Problems in the company: this factor is more common in family companies; according to studies, most third-generation family companies fail, for example when young successors have extensive knowledge from studying abroad but little experience, which in some cases can lead the company to disappear.

VI. Natural phenomena: floods, earthquakes and other natural disasters can cause enormous damage to a company, especially one without insurance, savings, or a plan to deal with these situations.

VII. Problems with suppliers: the purchase of raw materials and of everything necessary for developing the product or service is vital for the company, so problems in this area can be one of its greatest weaknesses.

VIII. Low portfolio turnover: this occurs when clients do not pay, causing a lack of liquidity; it is important to evaluate the quality of the portfolio to avoid future problems.

Within bankruptcy systems, efficiency is evaluated in terms of an improvement in the ability to pay out of the debtor’s assets, together with a reduction in creditors’ credit needs within the bankruptcy. The first criterion, “improving the ability to pay starting from the debtor’s equity”, is based on the efficiency criteria of Kaldor-Hicks and Carrasco Delgado; as for reducing credit needs, the idea that the procedure must involve all creditors has been discussed since the early 1980s by Michelle White and by James Ang and Jess Chua [6]. These were the beginnings of the reorganization procedure worldwide [7].


Worldwide, there are two approaches to dealing with corporate insolvency, pro-debtor and pro-creditor, meaning that a country’s regulations are more flexible toward, or tend to benefit, either the debtor or the creditor. For example, the United Kingdom takes a pro-creditor position and is usually severe with debtors, protecting investment in the country: with measures of this kind, investors see that their capital is protected and are therefore motivated to invest their resources where the regulations favor them. The problem is that the first to detect the insolvency of the company is the debtor; therefore, if the mechanisms are stigmatizing or the penalties very high, as was the case in Chile under Law 18,175, the debtor will tend to hide his situation, thus devaluing the company and the money of the creditors who financed the project. From this eventuality, pro-debtor mechanisms began to emerge, the pioneer being the Bankruptcy Code (1978) of the United States, which gave the debtor the possibility of reorganizing and negotiating with his creditors before the court ruling, and which granted some authority to the judge within a never-before-seen bankruptcy procedure: if the reorganization proposal generated a higher value in the bankrupt company but the creditors insisted on liquidating it, the judge could force the reorganization; however, if this resource was used and the plan failed, the judge ran the risk of losing his position. The other country to emulate a regulation of these characteristics, at the Latin American level, was Colombia, through Law 1116 (2006), which aims to protect credit and recover the company as an economic unit and source of employment; the regulations were designed on the basis of the regulatory framework prepared by UNCITRAL (CNUDMI) from US studies and regulation [8]. The first Chilean bankruptcy law was contained in Book IV of the Commercial Code of 1865, which was repealed by Law No. 4,558, promulgated at the beginning of 1929, whose final text was established in Decree No. 1297 less than two years later [9]. These changes were driven by the strong economic crisis Chile suffered in 1929–1939: the president of the time, Carlos Ibáñez del Campo, had considerably increased public spending to modernize the country’s productive infrastructure, leading to indebtedness that was meant to be paid with the profits from saltpeter and copper exports, a plan that did not work and ended with the resignation and exile of the president. Curiously, the next bankruptcy law in Chile would also be born of an economic crisis: in 1982, due to the massive increase in bankruptcies occurring in the country, bankruptcy administration was handed over to private parties with the intention of speeding up procedures and thus achieving a faster recovery, based on the logic that the bankrupt company deserved its situation and therefore should not be saved. The “National Bankruptcy Prosecutor’s Office” was created to monitor the performance of trustees and the responsibilities derived from bankruptcy; it also presumed malicious conduct on the part of debtors. Note the creation of the Bankruptcy Superintendency in 2002, replacing the National Bankruptcy Prosecutor’s Office.
Insolvency proceedings are understood as those intended to reorganize the liabilities and/or liquidate the assets of a debtor company (ED). In other words, they aim to balance the rights of debtors with those of creditors and to offer both parties guarantees for resolving conflicts. The law mainly distinguishes two types of taxpayers, the ED and the individual debtor; here we focus only on the procedures related to the ED, which is defined as any legal person under private law, with or without profit motive, any natural person who is a first-category taxpayer, and any natural person who practices a liberal profession. The ED may be subject to bankruptcy procedures that, according to the LRLAEP, are classified as: judicial reorganization proceedings, simplified or extrajudicial reorganization proceedings, and voluntary or forced liquidation proceedings [10].

2 Methodology

Statistical models are a very helpful tool for companies because they provide an explanation of a problem, with the purpose of predicting an event or the value of a magnitude from the values of certain explanatory variables, and of determining which variables are potentially explanatory of a phenomenon [11]. Regarding business failure, studies focus on whether the company will go bankrupt or remain healthy, based on information from financial ratios, and at the same time try to establish which factors or variables are most closely related, in order to explain the reasons for the business failure [12]. Financial analysis is a procedure used to evaluate the situation and the economic-financial performance of a company, in order to detect difficulties and apply appropriate corrections to solve them. This type of analysis is often based on the calculation of financial indicators that show the liquidity, solvency, indebtedness, operational efficiency, performance and profitability of the company, with the purpose of diagnosing the current situation, predicting events that could occur in the long term, and thus acting in time in the face of any adversity that may arise. Because companies were often affected financially, a series of models emerged to predict bankruptcies, whose main objective was to identify which companies were going to fail shortly before the event occurred, so that they could change their management, restructure, and survive [13, 14]. Edward Altman, applying the Multiple Discriminant Analysis method, made an important contribution to predictive capacity prior to financial failure. From that study, in the 1960s, Altman developed the bankruptcy predictor known as the Z-score, which combines several of the most significant financial indicators. He later adapted the original Z-score model to emerging economies, proposing a new global predictive indicator exclusive to these markets, which he called the Emerging Market Scoring Model. The model is based on the iterative statistical analysis of multiple discrimination, in which companies are classified as solvent or insolvent [15].

The proposed research is longitudinal and exploratory, and seeks to analyze the implementation of Law 20,720 and the effectiveness of the Altman model for predicting bankruptcies. The research will be carried out on 13 companies, classified as limited companies of different scope, present in Chile, according to the data reported by the SVS and the Economatica program. Their annual reports will be analyzed, specifically data from the consolidated financial statements, in order to measure liquidity, solvency and the value that the market assigns to the company [16, 17]. The companies will be classified into two groups: companies that no longer carry out economic activities, classified as companies with financial problems, a total of 5 organizations, evaluated between 2005 and 2016 (taking into account that only 5 consecutive years are evaluated per company); and companies that still carry out economic activities, classified as healthy, a total of 8 companies, evaluated over the 5 years from 2012 to 2016. The method used to carry out the research is the Altman Z-score 2 model, which is applicable to all types of companies (whether or not they are listed on the stock market). The information collected for the sample does not correspond to companies in a specific sector. The variables in the study are the following: X1, working capital / total assets; X2, retained earnings / total assets; X3, earnings before interest and taxes / total assets; X4, book value of equity / total liabilities. The equation of the Altman Z-score 2 model is:

Z2 = 6.56(X1) + 3.26(X2) + 6.72(X3) + 1.05(X4)

The analysis carried out indicates that, of the 8 companies considered healthy, four are in the safe zone and three are in the gray or uncertainty zone, of which one, AES Gener S.A., is close to the limits of the danger zone; one company is in the danger zone. Of the 5 companies considered to have financial problems, none is in the safe zone; one, Indalum S.A., is in the gray or uncertainty zone, close to the limits of the danger zone; and four companies are in the danger zone, i.e., with a probability of bankruptcy.
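The Z2 equation above translates directly into code. The following is a minimal sketch; note that the zone cutoffs (above 2.6 for the safe zone, below 1.1 for the danger zone) are the commonly cited thresholds for Altman's Z''-score rather than values stated in the paper, and the sample ratios are hypothetical.

```python
# Minimal sketch of the Altman Z-score 2 model described above.
# The zone cutoffs (2.6 and 1.1) are commonly cited Z''-score thresholds,
# not values stated in the paper; the sample ratios are hypothetical.

def altman_z2(x1: float, x2: float, x3: float, x4: float) -> float:
    """Z2 = 6.56*X1 + 3.26*X2 + 6.72*X3 + 1.05*X4.

    x1: working capital / total assets
    x2: retained earnings / total assets
    x3: earnings before interest and taxes / total assets
    x4: book value of equity / total liabilities
    """
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

def zone(z2: float) -> str:
    """Map a Z2 score to the safe, gray, or danger zone."""
    if z2 > 2.6:
        return "safe zone"
    if z2 >= 1.1:
        return "gray (uncertainty) zone"
    return "danger zone (probable bankruptcy)"

z = altman_z2(x1=0.10, x2=0.18, x3=0.04, x4=0.55)  # hypothetical company ratios
print(f"Z2 = {z:.2f} -> {zone(z)}")
```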

3 Conclusions

This research provides a source of knowledge about Law 20,720 and its composition, contextualizing how regulations in Chile have evolved over the years, always with the objective of a rapid release of resources. This evolution was influenced by the bankruptcy systems of the United States, at the global level, and of Colombia, at the Latin American level, both with a pro-debtor tendency. To find the optimal bankruptcy system, it was evaluated from different ex ante, intermediate and ex post perspectives, with the main objective of avoiding type 1 errors, i.e., reorganizing companies whose liquidation value is higher than their going-concern value, and type 2 errors, i.e., liquidating companies that could generate greater value if allowed to continue. To achieve a certain homogeneity among bankruptcy systems, UNCITRAL (CNUDMI) created a regulatory framework that helps countries draft their respective rules based on bankruptcy principles, giving each law a certain identity; the main principle is the conservation of the company, which generated the pro-debtor current and helped create mechanisms that allow the negotiation of the debt between both parties, something innovative considering that Law 18,175 first issued a sentence and only later allowed “negotiation” of the debt owed. In addition, the change in the names of the actors participating in a reorganization or liquidation system helped reduce the stigma attached to the insolvency situation, leading to a considerable increase in the participation of companies in the system, as well as to a higher asset recovery rate, thereby halting the impoverishing effect of insolvency.

Regarding the analysis, the financial solvency of different companies was evaluated on a random sample, because the digitization of financial information in Chile is not common practice and, in addition, there are no databases that link accounting information and bankruptcy as such; therefore, there may be a risk of bias or of non-generalizable conclusions. According to the results of the solvency assessment that the Altman model produced for the companies, five companies were found that may need to adhere to Law 20,720, depending on their financial situation. A judgment was issued as to which bankruptcy alternative is more convenient according to the financial situation of each of them. Therefore, the application of the Altman model as a tool in situations of uncertainty about the continuity of the company, together with the bankruptcy alternatives proposed by the law, helps companies to restart and generate value even in a situation of insolvency, based on a correct evaluation.

References
1. Altman, E.I.: Corporate Financial Distress and Bankruptcy: A Complete Guide to Predicting and Avoiding Distress and Profiting from Bankruptcy. Wiley, New York (1993)
2. Forero, L.: Propuesta de modelo para la evaluación y predicción del riesgo de insolvencia financiera de pequeñas y medianas empresas manufactureras en Colombia (2015)
3. Puga, E.: Estudios de derecho concursal. La ley 20.720 a un año de su vigencia: “Mirada crítica de la ley N°20.720” (2016)
4. Vargas, J.: Modelos de Beaver, Ohlson y Altman: ¿Son realmente capaces de predecir la bancarrota en el sector empresarial costarricense? Tec Empresarial 8(3), 29–40 (2014)
5. Altman, E.: Predicting financial distress of companies: revisiting the Z-score and ZETA models (2000)
6. Arellano, P., Carrasco, C.: Insolvencia y quiebra en Chile: “Principales estadísticas desde 1982 a la fecha”, pp. 1–17 (2015)
7. Arquero, J., Abad, M., Jiménez, S.: Procesos de fracaso empresarial en PYMES: identificación y contrastación empírica. Revista Internacional de la Pequeña y Mediana Empresa 1(2), 64–77 (2008)
8. Jequier, E.: Responsabilidad por insolvencia en los grupos empresariales. Una aproximación a la teoría del administrador de hecho en el derecho chileno. Revista Chilena de Derecho 42(2), 567–594 (2015)
9. Hernández Ramírez, M.: Modelo financiero para la detección de quiebras con el uso de análisis discriminante múltiple. InterSedes: Revista de las Sedes Regionales XV(32), 4–19 (2014)
10. Mora, A.: Los modelos de predicción del fracaso empresarial: una aplicación empírica del logit. Revista Española de Financiación y Contabilidad XXIV(78), 203–233 (1994)
11. Pérez, J., González, K., Lopera, M.: Modelos de predicción de la fragilidad empresarial: aplicación al caso colombiano para el año 2011, pp. 205–228 (2011)
12. Pérez, A., Martínez, P.: Del sobreendeudamiento a la insolvencia: fases de crisis del deudor desde el derecho comparado europeo. Revista Chilena de Derecho 42(1), 93–121 (2015)
13. Paz, A., de la Fuente-Mella, H., Singh, A., Conover, R., Monteiro, H.: Highway expenditures and associated customer satisfaction: a case study. Math. Probl. Eng. 2016, 1–9 (2016). Article ID 4630492
14. Coughenour, C., Paz, A., de la Fuente-Mella, H., Singh, A.: Multinomial logistic regression to estimate and predict perceptions of bicycle and transportation infrastructure in a sprawling metropolitan area. J. Public Health 38(4), 401–408 (2016)
15. Altman, E.I.: Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. J. Finan. 23, 589–609 (1968)


16. Clark, S., Coughenour, C., Bumgarner, K., de la Fuente-Mella, H., Reynolds, C., Abelar, J.: The impact of pedestrian crossing flags on driver yielding behavior in Las Vegas, NV. Sustainability 11, 4741 (2019). https://doi.org/10.3390/su11174741
17. de la Fuente-Mella, H., Fuentes, J.L.R., Leiva, V.: Econometric modeling of productivity and technical efficiency in the Chilean manufacturing industry. Comput. Ind. Eng. (2019). https://doi.org/10.1016/j.cie.2019.04.006

Computational Intelligence Applied to Business and Services: A Sustainable Future for the Marketplace with a Service Intelligence Model

Mariana Alfaro Cendejas(B)

Monterrey Institute of Technology and Higher Education, Campus Queretaro, Epigmenio Gonzalez 500, 76130 San Pablo, Queretaro, Querétaro, Mexico
[email protected]

Abstract. Today’s world demands faster and more efficient solutions to more complex and far-reaching problems. It is in this context that Computational Intelligence (CI) bursts in with high expectations. In the midst of the Fourth Industrial Revolution, CI cannot be seen only as a platform for solving industrial problems, but also as one for cases with multiple solutions, where the human factor must be given consideration beyond the quantitative and the qualitative. Along these lines, from a business approach and particularly from that of services, we propose the design of a service model conceived to work as an interface between CI and all the elements that configure services in the most representative business environments, across their three key scenarios: design, delivery and evaluation, including key elements related to service improvement. In this way, Service Intelligence emerges as a new paradigm to consider, enhancing the value proposition of CI.

Keywords: Service intelligence · Service model · Business technology · Service recovery

1 Introduction

The era of globalization, which has led to changes in the business environment, is transforming rapidly. Competition has become more intense and has an impact on all organizations, in both the industrial and service sectors. Competitiveness in the service industry, beyond that in the manufacturing sectors, will boost the world economy in the 21st century. As a result, many service companies are adjusting their business models and plans to improve their performance and maintain service quality in order to survive. They are looking for technological solutions that allow them to support their service management models. Rapid changes in the market and in technology are challenging companies’ ability to modify their services so as to respond to customers’ needs and desires under conditions of increasing uncertainty. Year after year, technology has exceeded the public’s expectations, creating new solutions that support different professions and, from a business perspective, boost new business models. Among these new technological solutions is CI, in which elements of learning, evolution, adaptation and fuzzy logic are combined to create programs that are “intelligent” or have a certain degree of intelligence. To make decisions in today’s digital economy, business leaders are increasingly turning to computer algorithms, which perform step-by-step analytical operations at incredible speed and scale. Since the invention of computing, its primary function within companies has been as a support tool for their operational or transactional functions. In this way, over the years, a certain idea became established among users that computer programs or systems only served to reduce or optimize costs, an incorrect notion that persisted practically throughout the second half of the last century (including unfounded fears of the problems that would theoretically arise at the turn of the century or millennium). CI came to replace classical Artificial Intelligence (AI). It is based on fuzzy systems, neural networks, and evolutionary algorithms that, when joined in a program, somehow create “intelligent programs”. This does not mean that the programs that generate current statistical analyses, and that are the basis of Business Intelligence, are going to be rejected, but rather that a fusion between the qualitative and the quantitative is made in order to provide a complementary point of view [1]. Companies that implement this type of intelligence in their systems can benefit in the services they offer, since the data obtained will be more accurate, verified by a program that avoids handling errors. Businesses and services can develop several advantages now that learning can be automatic. It is important to recognize that, regardless of a company’s size or success, no one is immune to technological disruption. Since 2000, 52% of Fortune 500 companies have disappeared. A decade ago, there was only one tech company among the top 10 most valuable companies by market capitalization; today, it is seven out of 10, all of which are data-driven companies that operate like software companies based on CI solutions. The World Economic Forum (WEF) indicates that only 15% of businesses are ready for the Fourth Industrial Revolution, which is the merging of the physical and digital worlds [2]. Over time, software applications have been developed that offer better conversion rates for e-commerce providers. CI still has some way to go in the fields of marketing and finance; what is certain is that these two fields will be among the largest domains of focus for CI firms and will likely generate many scientific innovations. Entering the third decade of this century, it is not known exactly in which labor fields the application of computational intelligence will take place. CI providers are looking for young minds to determine future trends and the path they should follow as companies. Companies like Microsoft and Google have managed, in their CEOs, to find the mastermind able to land the main idea and make it a reality, which is what CI seeks, so that in the future it can be present in various businesses and services. CI can be the door to new companies, programs, or services that in the future will solve more than one problem for different companies or services, implementing new technologies in the development of jobs in companies with different activities.


2 Approaches to the Use of Computational Intelligence in Business

CI has a long history of applications to business: expert systems have been used for decision support in management, neural networks and fuzzy logic have been used in process control, a variety of techniques have been used in forecasting, and data mining has become a core component of customer relationship management in marketing [3]. Despite the advances already made in different fields, the big challenge now is to make CI applications become “tangible value” for organizations, people, and society in general. CI has evolved significantly in recent years, and all the technologies derived from it have become a priority for the top management of companies. Data for decision-making, and CI, are at the heart of the digital transformation of many companies. Looking ahead to the next few years, the focus of CI will be on solving regulatory and ethical challenges. The second factor that really helps companies is talent and cultural transformation: it is not enough to have a single team dedicated to these technologies; the rest of the company must also be transformed, because it will have to make decisions related to multiple business areas. Companies must undertake major structural changes; otherwise, they will not move beyond anecdotal exploration. Sharing knowledge within the organization itself is another major challenge in ensuring that collaboration works among the different areas of the company responsible for the development of CI. The term “intelligent enterprise” was coined in the early 1990s by James Brian Quinn. It was a business theory holding that technology and computer infrastructure were key to improving business performance. That was before the internet was a familiar word, before smartphones were in every hand, before smart computing became a reality. It is estimated that about 2.5 billion gigabytes of data are generated per day; the systems and processes used so far have become obsolete in the face of this volume. The impact of SMAC technologies (Social, Mobile, Data Analysis and Cloud) has favored organizational and structural change in most large companies and has imposed new business models, new types of consumers and new types of demand. The application of CI in business contexts goes back to the definition of innovation adopted in the academic business literature following Schumpeter, where invention is an idea made manifest, while innovation is an idea successfully applied in practice. A recent study suggests that a hybrid profile, combining a solid academic and technical background with business skills and market insight, is the most effective way to drive innovation in knowledge-intensive medium enterprises. Having one or more business leaders with such a profile was a common trend across the population of firms studied in which innovation was found to be nurtured and to have a substantial positive impact on the business. This hybrid set of skills, assuming CI is understood and not just used as a black box, therefore creates the proper scenario for strengthening CI innovation and producing new CI business applications.

3 Incorporation of Computational Intelligence in the Design and Delivery of Services

The current applications of CI, capable of describing human behaviors, are beginning a broad journey to transform businesses and services, offering a reach that was unthinkable until recently.


It was in 1956, at a conference at Dartmouth College, USA, that John McCarthy first used the term Artificial Intelligence, defined then as the science of developing "intelligent machines". Over time, and with its evolution towards CI, the main questions for its practical implementation are being answered in the context of the daily work of companies and organizations: How is it applied in business today? Is it really a real and effective solution? How can better services be designed and delivered with the support of CI? How can services be measured with CI technologies? How can new service-focused value propositions be generated with the support of CI?

For Tom Davenport, pioneer of knowledge management in organizations, CI currently offers solutions in three classes: automating business processes (Cognitive Automation), exponentially increasing the capacity to analyze large volumes of data (Cognitive Insight), and amplifying the possibilities of interaction between people and machines (Cognitive Engagement) [4]. Today, most physical and digital processes are regulated with the application of CI, which is gradually being incorporated into the business world.

The first class concerns the implementation of robots, whose ability to work with various sources of information from various systems makes them practical. The second class concerns increasing data analysis and the capacity to perform it. It is based on algorithms designed to detect patterns and help interpret information extracted in large quantities, with intelligent agents that facilitate the integration of multiple sources and formats. Big Data systems are responsible for managing information and keeping only what is useful and necessary. This type of system is more efficient because it can be "trained" (which forms the basis of so-called Machine Learning): through statistical methods, data are predicted and categorized, improving the user experience. For example, when a user browses a shopping website (e-commerce), the probability that they will buy certain items can be predicted from their profile, previous purchases and searches, and those items are then shown on screen with a promotion (a minimal sketch of such a model is given at the end of this section).

Finally, the third class concerns the expansion of interaction between people and machines, said to be the most promising for improving the Customer Experience with brands. It enables work through chatbots and intelligence algorithms, optimizing customer service quickly, efficiently, and around the clock. "Intelligent empathic machines personalize the customer's shopping experience, making him feel comfortable to such an extent that he doesn't need to do much to buy any product". CI is creating a world of intensely personalized and on-demand experiences; Customer Experience Management (CEM) is being promoted and, therefore, companies must reinvent their organizations to find and capture those opportunities as they arise. That means seeing each opportunity as if it were an individual market, a momentary market [5].

The emphasis of data analytics with the support of CI lies in efficiency, volume and execution in decision making. Among the most widespread models in business for reaching this goal are the European Foundation for Quality Management Model, the Integral Table Model, the Intellect Model, the Saint-Onge Model, and the Skandia Navigator.
The first model mentioned and the last are the most suitable for enabling the development of CI in companies, particularly to strengthen service delivery (both internal and external). The European Foundation Model emphasizes the importance of people in the processes of knowledge generation. It bases the development of innovation proposals and improvement mechanisms on knowledge management, thereby improving the quality of services and products while respecting internal processes, consumers, and the impact on society. The Skandia Navigator Model, for its part, is suitable because it does not rely only on counting the company's tangible assets: company value is evaluated on the basis of the human factor, customers, processes and a renewal-and-development approach, in addition to the financial factor. It also suggests emphasizing the evaluation of performance, speed, and quality.
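Returning to the e-commerce example above, the following minimal Python sketch shows how such a purchase-propensity prediction might look in practice. The feature names, data and model choice are hypothetical assumptions made for illustration only, not a description of any particular vendor's system.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, past_purchases, searches_for_item, promotions_viewed]
X = np.array([[34, 5, 3, 1],
              [22, 0, 1, 0],
              [45, 12, 7, 3],
              [29, 2, 0, 1]])
y = np.array([1, 0, 1, 0])  # 1 = bought the item, 0 = did not

model = LogisticRegression().fit(X, y)

# Items with a high predicted probability are shown on screen with a promotion
p_buy = model.predict_proba([[31, 4, 2, 1]])[0, 1]
print(f"Estimated purchase probability: {p_buy:.2f}")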

Fig. 1. An integrated business model around the service with CI support (Source: own elaboration).

Combining both models to configure the value proposition of a company from the service approach, Fig. 1 shows how this proposal is integrated with the business processes and the delivery of customer service, with follow-up of key result indicators; over time, a learning process must be formalized and innovation must continue. Although CI can participate in every connector between these concepts, it is precisely in the process of turning the company into a learning organization that it could have the greatest impact. In this way, if any of these models is combined with any of the classes mentioned above for implementing CI, positive results will be obtained.


4 A Sustainable Future for the Marketplace: Service Intelligence Model

In the past, the combination of the terms "intelligence" and "services" was associated with military intelligence, espionage, secrets, or knowledge of the competition in the business context. More recently, companies opened positions for the administration and analysis of the databases built through CRM (Customer Relationship Management) instruments, following the procedures established by the company in order to keep information up to date and support the continuity of processes and decision-making in the financial and commercial areas. That is, these are operational areas of service [6].

The Service Intelligence Management Model proposed in Fig. 2 starts with a structure divided between the company side and the customer side, separating the different processes that must be carried out in each scenario. From the company's perspective, it is the company's responsibility to design its service proposal, aligned with its Mission and Vision. From it emerge tangible elements, such as infrastructure and (trained and empowered) personnel, and intangible ones, such as its business processes, up to the Value Proposition itself. From the customer's perspective, the customer is identified by the company through the definition of its target market and receives a promise through the demand-attraction efforts exerted by marketing.

Fig. 2. Proposed service intelligence management model (Source: own elaboration).

Both scenarios converge in the "moments of truth", the service encounters typical of delivery to the user or client, who in turn lives the experience and validates it by implicitly comparing it with their perceptions. The customer's acceptance threshold defines the level of satisfaction derived from the service. When the customer is satisfied, a success cycle must be activated to direct them towards loyalty. If they are dissatisfied, it is verified whether there was a failure in the service, and the recovery plan is activated, within which, in accordance with validated hypotheses, actions should be incorporated to address the failure, considering its type and magnitude, the response time, the attitude of the staff (the human factor), the monitoring of processes and the application of the different levels of justice.

CI plays a preponderant role in the model by being incorporated into service agents or tools in each phase of the service: (a) before the service, providing a perspective built from knowledge of the market and of the company itself to create predictive behavior models; (b) during the delivery of the service, supporting the monitoring of customer satisfaction and measuring or identifying the level of service; and (c) after the delivery of the service, with follow-up metrics that support service decisions, activating recovery plans or strengthening the company's CRM and CEM strategies. The global learning generated by the CI agents or tools turns this into a cycle of growth and learning for the company itself, making it a true service-learning company [7].

The human factor is present, now and in the future, in all the management models analyzed throughout this chapter, and it is an implicit part of the proposed service intelligence management model. It stands out as a differentiating element in the conversation that represents the opportunity to deliver a service, and in how a recovery plan rests on conversational interactions between client and company. CI, for its part, will be a key factor in the competitive differentiation of the future: by providing intelligently processed data, it will facilitate the flow of service delivery in organizations and capitalize on better returns and higher levels of value delivered to the customer [8].

5 Conclusions

An initial change of mindset must be cultivated to trigger strategic plans and actions around services with the support of CI. It is crucial to understand that services, because of their strong human component, are vulnerable and may fail at some point, and to recognize that in the event of a failure a recovery effort must be applied, given the value that clients represent for companies. Bringing this effort into the strategic sphere allows the company to fulfill its promise of value and thereby guarantee customer satisfaction [9].

The proposed Service Intelligence Management Model is based, in addition to the investigation of the application of CI in business and in traditional management models, on an analysis of the literature on recovery after service failures, on an understanding of the effect of the promise of value as part of the company's business model, on a study conducted in restaurants in the city of Queretaro, Mexico, and on the review of four service management models widely used in world-class companies, as can be reviewed in detail in the cited literature. Future research related to Service Intelligence Management Systems should focus on two fundamental aspects beyond what has been achieved with the proposed management model: (a) understanding the causes that trigger the occurrence of service failures, and (b) modeling a customer relationship strategy that in turn produces a deeper knowledge of people's buying behavior, provokes continuous innovation and makes it possible to achieve higher levels of loyalty. For both areas, the development of CI-based applications will be key to achieving wise companies that endure successfully in the market for many years [10].

References

1. Turchetti, T., Pádua, A.: Introduction to Computational Intelligence Business Applications. Vetta Group, Nova Lima, Brazil (2010)
2. Harvard Business Review Analytics Services: The Road to Intelligent Automation: Turning Complexity into Profit 2019. The Rise of Intelligent Automation (12): MC211970219
3. Chen, N., Liu, W., Bai, R., Chen, A.: Application of computational intelligence technologies in emergency management: a literature review. Artif. Intell. Rev. 52(3), 2131–2168 (2017). https://doi.org/10.1007/s10462-017-9589-8
4. González, J.: Los 3 tipos de Inteligencia Artificial aplicada a los Negocios. Think & Sell (2018). https://thinkandsell.com/blog/los-3-tipos-de-inteligencia-artificial-aplicada-a-los-negocios/. Accessed 23 September 2020
5. Zeithaml, V., Bitner, M.: Services Marketing: Integrating Customer Focus Across the Firm, 3rd edn. McGraw-Hill/Irwin, New York (2003)
6. Alfaro, M.: Service recovery model based on the fulfillment of the promise of value. Thesis, UPAEP, Puebla, Mexico (2017)
7. Stankovic, M., Gupta, R., Rossert, B., Myers, G., Nicoli, M.: Exploring Legal, Ethical and Policy Implications of Artificial Intelligence. ResearchGate (2017)
8. Daugherty, P., Carrel-Billiard, M.: The post-digital era is upon us: are you ready for what's next? Accenture Technology Vision (2019)
9. Pir, S.: The future of work is here: what is your HR organization working on? Forbes (2019). https://www.forbes.com/sites/sesilpir/2019/09/09/the-future-of-work-is-here-what-is-your-hr-organization-working-on. Accessed 01 November 2020
10. Soni, N., Khular, E., Singh, N., Kapoor, A.: Impact of artificial intelligence on businesses: from research, innovation, market deployment to future shifts in business models, p. 23. Department of Electronic Science, University of Delhi South Campus, Delhi, India (2019)

Modeling Cognitive Load in Mobile Human Computer Interaction Using Eye Tracking Metrics Antony William Joseph1(B) , J. Sharmila Vaiz2 , and Ramaswami Murugesh2 1 IT-Integrated Design, National Institute of Design, Bengaluru, Karnataka, India

[email protected] 2 Department of Computer Application, Madurai Kamaraj University, Madurai, India

Abstract. Modeling the cognitive load of user interaction based on ocular parameters has become a dominant method for exploring the usability of interfaces for systems and applications. The growing importance of Artificial Intelligence in Human Computer Interaction (HCI) has produced many approaches to understanding users' needs and enhancing human-centric methods for interface design. In particular, machine learning-based cognitive modeling using eye tracking parameters has received increasing attention in the context of smart devices and applications. In this context, this paper aims to model the estimated cognitive load values for each user into different levels of cognition (very high, high, moderate, low, very low, etc.) while performing different tasks on a smartphone. The study uses behavioural measures and ocular parameters together with eight traditional machine learning classification algorithms, Decision Tree, Linear Discriminant Analysis, Random Forest, Support Vector Machine, Naïve Bayes, Neural Network, Fuzzy Rules with Weight Factor and K-Nearest Neighbour, to model different levels of estimated cognitive load for each participant. The data set consisted of 250 records; 11 ocular parameters as prediction variables, including age and type of task; and three class schemes (2-class, 3-class, 5-class) for classifying the estimated cognitive load of each participant. We noted that Age, Fixation Count, Saccade Count, Saccade Rate and Average Pupil Dilation are the parameters contributing most to modeling the estimated cognitive load levels. Further, we observed that the Decision Tree algorithm achieved the highest accuracy for classifying estimated cognitive load values into 2-class (86.8%), 3-class (74%) and 5-class (62.8%) schemes. Finally, our study indicates that machine learning is an effective method for predicting 2-class (Low and High) cognitive load levels from ocular parameters. The outcome also shows that ageing affects users' cognitive workload while performing tasks on a smartphone. Keywords: Ocular parameters · Modeling cognitive load · Machine learning · Classification · Eye tracking metrics · Cognitive load levels · Human-computer interaction

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Z. Ahram et al. (Eds.): AHFE 2021, LNNS 271, pp. 99–106, 2021. https://doi.org/10.1007/978-3-030-80624-8_13


1 Introduction

The concept of mental workload relates the demand imposed by tasks to the human's limited mental resources [1]: the demand may be less than the capacity available, or it may exceed that capacity [2]. Workload is influenced not only by demand and resources but also by factors like time pressure, scenario complexity, and individual experience and ability [3]. If a large amount of information is delivered to a person at once, the person is unlikely to retain it for long. It is therefore essential to manage each individual's cognitive load efficiently for effective learning. Modeling the cognitive load of user interaction has become a dominant method for exploring the usability of interfaces of systems and applications; moreover, it can be a useful technique in the development cycle of designing systems, benefiting HCI [4].

The growing importance of Artificial Intelligence (AI) in society has produced many approaches to interpreting inferences from machine learning models [5]. Lately, machine learning-based cognitive modeling using eye tracking data has received more attention in the context of various smart technologies and applications. Machine learning (ML) is an area that focuses on the creation of algorithms which learn on their own, based on data and experience. Since ML models learn to perform tasks by generalizing from instances, they are considered cost effective [6]. In eye tracking research, ML and classification methods have been employed to study and analyze eye movement data in order to perform classification and to discover the right category of information. Thus, this study aims to model the estimated cognitive load values for each user into different levels of cognition using traditional ML classifiers. A set of eye tracking parameters was extracted, along with user interaction behavioural measures [7], for each participant while performing five different tasks on an Android mobile phone.

The next section reviews related work on the different approaches followed in ML modeling with ocular parameters. Section 3 explains the research methodology, particularly the participants, materials and design of the experiment. We present the experimental results in Sect. 4, discuss them in Sect. 5, and draw conclusions and future work in Sect. 6.

2 Related Work

Researchers have applied supervised machine learning to classify cognitive load levels [8] since the early 1990s. However, most work has focused on binary classification; only a limited number of studies classified cognitive load into three levels, predominantly in the areas of emotion recognition [9] and attention. A number of studies have explored ML algorithms and eye metrics to measure mental effort and users' attention. Klami et al. [10] presented preliminary results on inferring the relevance of images from implicit feedback about users' attention, measured using eye tracking features with two-level classification. They collected eye data from 27 participants, used a Linear Discriminant Analysis (LDA) classifier, and achieved a performance accuracy of 91.2%; their results suggest that it is possible to predict relevant images accurately with a simple classifier and ocular parameters. Eivazi and Bednarik [11] explored modeling problem-solving behaviour and performance levels from visual attention data with 14 participants. They employed a Support Vector Machine (SVM) on a set of ocular parameters with three classes and achieved a predictive accuracy of 87%, confirming that eye tracking features carry important information for predicting problem-solving behaviour and performance levels from visual attention. Li et al. [12], on the other hand, used SVM to predict the difficulty level of spatial visualization problems from eye tracking data; they collected data from 88 participants and achieved an accuracy of 87.60%. Behroozi and Parnin [13] conducted a study to predict stressful technical interview settings through eye tracking saccade metrics. They used Naive Bayes, Random Forest, Multi-Layer Perceptron, KNN, Logistic Regression and Decision Tree with two classes, Low and High; they collected data from 11 participants and found the Random Forest (RF) algorithm to be the best binary classification technique for modeling stress and cognitive load, with 92.24% accuracy. Appel et al. [14] explored predicting cognitive load in an emergency simulation based on behavioral and physiological measures with 47 right-handed participants. They modeled cognitive load using the RF algorithm with two classes (low and high) and achieved a performance accuracy of 72%, indicating that an RF prediction model is reliable for predicting cognitive load levels.

In summary, we observed that many studies employed ML algorithms to classify eye tracking data, predominantly with two or three classes. Existing research lacked a correlation between eye parameters and ageing for estimating cognitive load levels, and little attention has been paid to modeling different levels of cognitive load during tasks on mobile phones, with small sample sizes confined to particular age groups. Thus, our study includes a wide range of participants aged between 20 and 60+ years, performing five different tasks on a smartphone. We modeled and analysed the cognitive level classification for 2-class, 3-class and 5-class schemes.

3 Proposed Approach

This study was designed to model different cognitive load levels for participants of different age groups performing different tasks on a mobile phone. The study involved 50 participants aged between 20 and 60+ years. Each participant was asked to perform five different tasks on an Android mobile phone while wearing Tobii Pro Glasses 2 eye-tracking glasses. More details on the participants and their selection criteria, the task descriptions, the experiment design and the experimental flow can be found in the paper by Joseph et al. [15]. We recorded ocular parameters for each participant and extracted relevant features from the raw eye gaze data using an extraction algorithm [15]. Ocular parameters such as the pupil dilation of the left and right eyes were used, with a low pass filter (LPF), to estimate the cognitive load of each participant on each task [15]. In total, we documented 250 records with cognitive load values, which were then classified into levels.

In this study, we used eight traditional machine learning classifiers, Decision Tree (DT) [16], Linear Discriminant Analysis (LDA) [17], Random Forest (RF) [18], Support Vector Machine (SVM) [19], Naïve Bayes (NB) [20], Neural Network (NN) [21], Fuzzy Rules with Weight Factor (FRWF) [22] and K-Nearest Neighbours (KNN) [23], all commonly used in eye tracking research [24–27], to classify the estimated cognitive load values. We modelled the collected cognitive load values with two types of classification, binary and multiclass. Binary classification consisted of two classes (2-class): Low and High. Multiclass classification consisted of a 3-class scheme (Low, Moderate, High) and a 5-class scheme (Very Low, Low, Moderate, High, Very High).

In recent studies [28, 29], behavioural measures were employed for measuring cognitive load levels. Similarly, we considered various behavioural measures, such as time taken on a task, number of steps taken to complete a task, task completion or failure, and gaze interaction behaviour, together with ocular parameters, Fixation Count (FC), Fixation Rate (FR), Saccade Count (SC), Saccade Rate (SR), Average Fixation Duration (AFD), Standard Deviation of Fixation Duration (SDFD), Maximum Fixation Duration (MFD), Average Pupil Dilation (APD) and Standard Deviation of Pupil Dilation (SDPD) [15, 30], captured for each participant, to classify the estimated cognitive load values into the classes described above. A minimal sketch of this modelling step is given below.
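To make the step concrete, the following minimal Python sketch bins synthetic estimated cognitive load values into the 2-class scheme and trains one of the listed classifiers on a few of the ocular parameters named above. The data, the 0.5 binning threshold and the feature ranges are illustrative assumptions, not the study's actual records.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 250  # the study documented 250 records (50 participants x 5 tasks)
features = pd.DataFrame({
    "Age": rng.integers(20, 65, n),
    "FC": rng.integers(10, 200, n),   # Fixation Count
    "SC": rng.integers(5, 120, n),    # Saccade Count
    "SR": rng.uniform(0.5, 4.0, n),   # Saccade Rate
    "APD": rng.uniform(2.5, 5.0, n),  # Average Pupil Dilation
})
cl_values = rng.uniform(0.0, 1.0, n)  # estimated cognitive load per record

# 2-class scheme: Low / High (3- and 5-class schemes simply add more bins)
labels = pd.cut(cl_values, bins=[0.0, 0.5, 1.0],
                labels=["Low", "High"], include_lowest=True)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("2-class accuracy:", accuracy_score(y_te, clf.predict(X_te)))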

4 Results

In this section we present the results obtained for classifying the estimated cognitive load values into 2-class, 3-class and 5-class schemes; the performance [25] of each ML algorithm in classifying the estimated values; the accuracy [14] with which each cognitive load value is classified rather than left unpredicted; and the features contributing to the proposed classification scheme.

It may be noted from Fig. 1 that 2-class classification yields a lower average number of unpredicted values (51.25) than 3-class (82.5) and 5-class (139.38) classification across all ML algorithms. With the 2-class classifier, the average number of cognitive load values classified as Low was 145.75 and as High, 53. From this we can conclude that estimated cognitive load values are best classified simply as High or Low rather than as very low, moderate or very high. Additionally, of the eight ML algorithms used, DT performed best, with an average of 63.67 unpredicted values (2-class = 33; 3-class = 65; 5-class = 93). Age was the root node when constructing the decision tree for the 2-class, 3-class and 5-class classifications.

Further, we investigated the accuracy of the defined binary and multiclass classifiers in classifying each estimated cognitive load value with each ML algorithm. The 2-class classifier achieved the highest accuracy of correctly predicted values, 86.8%, using the DT algorithm (Table 1). The overall average accuracy of correctly predicted values for the 2-class scheme (79.59%) was greater than for the 3-class and 5-class schemes. DT outperformed the other algorithms, with an average accuracy of 74.53% for correct prediction of cognitive load values across the binary and multiclass classifications (Table 1). Finally, we employed the SVM-Recursive Feature Elimination (RFE) method [24] to identify and select the important features contributing to the proposed classification scheme.


Fig. 1. Results for modelling prediction of three classes using ML algorithms.

Table 1. Accuracy of 2-class, 3-class and 5-class using different ML algorithms.

Algorithm                        2-class   3-class   5-class
Decision Tree                    86.8%     74%       62.8%
Linear Discriminant Analysis     84%       72.4%     45.2%
Random Forest                    83.6%     71.6%     45.5%
Support Vector Machine           82.4%     71.2%     43.5%
Naive Bayes                      78.8%     67.6%     43.6%
Neural Network                   77.9%     65.2%     42.3%
Fuzzy Rules with Weight Factor   72%       62%       35.5%
K-Nearest Neighbours             71.2%     58.8%     33.9%

For the 2-class, 3-class and 5-class classifiers, Age, FC, SC and APD were the parameters that contributed most to classification accuracy. Additionally, SR and the type of task also contributed to high accuracy for the 3-class and 5-class schemes, respectively. A minimal sketch of this selection step is given below.
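The following fragment sketches how SVM-based RFE can be applied. It continues the synthetic example above (reusing `features` and `labels`), so the retained feature names are illustrative rather than the study's reported ones.

from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# A linear kernel is required so that per-feature weights are available to RFE
selector = RFE(SVC(kernel="linear"), n_features_to_select=4)
selector.fit(features, labels)

selected = [c for c, keep in zip(features.columns, selector.support_) if keep]
print("Features retained by SVM-RFE:", selected)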

5 Discussion

The study of cognitive workload in the HCI domain has gained greater importance in the recent past and is considered one of the most valuable sources of information about user experience today. The results of cognitive modeling in our study show that we achieved moderately high classification performance using eye tracking metrics. The eight cognitive load classification models in the present eye tracking experiment achieved moderately good classification performance in estimating cognitive load levels, compared with the classification accuracy achieved with psycho-physiological methods in earlier studies. We observed a strong relationship between the estimated labels and the observations gained from task performance. We achieved the highest classification results in all cases using the DT classification technique: 87% for the 2-class, 74% for the 3-class and 63% for the 5-class cognitive load measurement. Further, the classification results suggest that Age, FC, SC, APD and SR are the most important ocular parameters in predicting the cognitive load level of users of different age groups performing different tasks on a mobile phone. Age remains a substantial factor influencing cognitive load alongside the other ocular parameters.

Among the eight classifiers used in our study, four, namely DT, LDA, RF and SVM, achieved prediction accuracy of 81% and above for the 2-class, 71% and above for the 3-class and 43% and above for the 5-class scheme. This supports the view that ocular parameters can determine the level of cognitive load in relation to age and the type of task performed, and shows that irrespective of the type of task (simple or complex), ageing plays an important role in determining cognitive load from ocular parameters [15, 30].

The results of this study represent real-world conditions: different types of tasks performed on a mobile phone, the complexity of each task, the luminance condition for each task, the sequence of task performance and the performance of tasks on different days. As discussed in the literature review, past eye tracking studies estimating cognitive load and predicting performance used only a limited number of samples. Our study, in contrast, used a larger sample of varied age groups ranging from 20 to 60+ years, performing five different tasks in real time on a mobile phone. Using real-time tasks with eye movement features establishes a new direction for eye-tracking-based cognitive load estimation and prediction research.

6 Conclusion

This paper set out to model the estimated cognitive load values for each user into different levels of cognition using 2-class, 3-class and 5-class classifiers. Our study suggests that eye tracking technology, together with ML algorithms, can be employed to predict users' cognitive load levels with regard to age and type of task. The study began with the aim of measuring cognitive load and investigating the relationship between mental workload and eye tracking variables. We selected eight ML algorithms to model different levels of cognition and found that Decision Tree outperformed the others, with an average accuracy of 74.53% across the binary and multiclass classifiers. Additionally, we noted that the 2-class classifier is the most suitable for classifying the estimated cognitive load values, leaving fewer values unpredicted. The outcome of our prediction results shows a strong relationship between estimated cognitive load levels and observations gained from the tasks performed on the mobile phone. The dominant features contributing to prediction accuracy were Age, FC, SC and APD. Age remains an important factor in increased cognitive load alongside the ocular parameters, which shows that irrespective of the type of task performed, age plays a substantial role in cognitive load. This is a new achievement in estimating cognitive load in the field of HCI.


Acknowledgment. The authors are thankful to the Department of Computer Application, Madurai Kamaraj University, Madurai, India and National Institute of Design, Bengaluru Campus, India for their encouragement, motivation, and relentless support in carrying out our study.

References

1. Moray, N.: Mental Workload: Its Theory and Measurement. Plenum, New York (1979)
2. Wickens, C.D., Hollands, J.G.: Engineering Psychology and Human Performance, 3rd edn. Prentice Hall, Upper Saddle River (2000)
3. Galy, E., Cariou, M., Mélan, C.: What is the relationship between mental workload factors and cognitive load types? Int. J. Psychophysiol. 83(3), 269–275 (2012)
4. Salvucci, D.D., Lee, F.J.: Simple cognitive modeling in a complex cognitive architecture. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 265–272 (2003)
5. Abdul, A., von der Weth, C., Kankanhalli, M., Lim, B.Y.: COGAM: measuring and moderating cognitive load in machine learning model explanations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–14 (2020)
6. Domingos, P.: A few useful things to know about machine learning. Commun. ACM 55(10), 78–87 (2012)
7. Brunken, R., Plass, J.L., Leutner, D.: Direct measurement of cognitive load in multimedia learning. Educ. Psychol. 38(1), 53–61 (2003)
8. Gevins, A., et al.: Monitoring working memory load during computer-based tasks with EEG pattern recognition methods. Hum. Factors 40(1), 79–91 (1998)
9. Chen, S.: Cognitive load measurement from eye activity: acquisition, efficacy, and real-time system design. University of New South Wales (2014)
10. Klami, A., Saunders, C., de Campos, T.E., Kaski, S.: Can relevance of images be inferred from eye movements? In: Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval, pp. 134–140 (2008)
11. Eivazi, S., Bednarik, R.: Predicting problem-solving behavior and performance levels from visual attention data. In: Proceedings of Workshop on Eye Gaze in Intelligent Human Machine Interaction at IUI, pp. 9–16 (2011)
12. Li, X., Younes, R., Bairaktarova, D., Guo, Q.: Predicting spatial visualization problems' difficulty level from eye-tracking data. Sensors 20(7), 1949 (2020)
13. Behroozi, M., Parnin, C.: Can we predict stressful technical interview settings through eye-tracking? In: Proceedings of the Workshop on Eye Movements in Programming, pp. 1–5 (2018)
14. Appel, T., et al.: Predicting cognitive load in an emergency simulation based on behavioral and physiological measures. In: 2019 International Conference on Multimodal Interaction, pp. 154–163 (2019)
15. Joseph, A.W., DV, J.S., Saluja, K.P.S., Mukhopadhyay, A., Murugesh, R., Biswas, P.: Eye tracking to understand impact of aging on mobile phone applications (2021)
16. Mao, Y., He, Y., Liu, L., Chen, X.: Disease classification based on eye movement features with decision tree and random forest. Front. Neurosci. 14, 798 (2020)
17. Salojärvi, J., Puolamäki, K., Simola, J., Kovanen, L., Kojo, I., Kaski, S.: Inferring relevance from eye movements: feature extraction. In: Workshop at NIPS 2005, Whistler, BC, Canada, 10 December 2005, p. 45 (2005)
18. Breiman, L.: Random Forests (Leo Breiman and Adele Cutler). Random Forests Classification Description (2015)
19. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
20. Zhang, Z.: Naïve Bayes classification in R. Ann. Transl. Med. 4(12) (2016)
21. Wang, S.C.: Artificial neural network. In: Wang, S.C. (ed.) Interdisciplinary Computing in Java Programming, pp. 81–100. Springer, Boston (2003). https://doi.org/10.1007/978-1-4615-0377-4_5
22. Abadeh, M.S., Habibi, J., Soroush, E.: Induction of fuzzy classification systems via evolutionary ACO-based algorithms. Computer 35, 37 (2008)
23. Guan, F., Shi, J., Ma, X., Cui, W., Wu, J.: A method of false alarm recognition based on k-nearest neighbor. In: 2017 International Conference on Dependable Systems and Their Applications (DSA), pp. 8–12. IEEE (2017)
24. Behroozi, M., Parnin, C.: Can we predict stressful technical interview settings through eye-tracking? In: Proceedings of the Workshop on Eye Movements in Programming, pp. 1–5 (2018)
25. Krol, M., Krol, M.: A novel approach to studying strategic decisions with eye-tracking and machine learning. Judgm. Decis. Mak. 12(6), 596 (2017)
26. Richstone, L., Schwartz, M.J., Seideman, C., Cadeddu, J., Marshall, S., Kavoussi, L.R.: Eye metrics as an objective assessment of surgical skill. Ann. Surg. 252(1), 177–182 (2010)
27. Cai, Y., Huang, H., Cai, H., Qi, Y.: A k-nearest neighbor locally search regression algorithm for short-term traffic flow forecasting. In: 2017 9th International Conference on Modelling, Identification and Control (ICMIC), pp. 624–629. IEEE (2017)
28. Pouw, W.T., Eielts, C., Van Gog, T., Zwaan, R.A., Paas, F.: Does (non-)meaningful sensorimotor engagement promote learning with animated physical systems? Mind Brain Educ. 10(2), 91 (2016)
29. Dubé, A.K., McEwen, R.N.: Do gestures matter? The implications of using touchscreen devices in mathematics instruction. Learn. Instr. 40, 89–98 (2015)
30. Joseph, A.W., Murugesh, R.: Potential eye tracking metrics and indicators to measure cognitive load in human-computer interaction research. J. Sci. Res. 64(1) (2020)

Theoretical Aspects of the Local Government’s Decision-Making Process Maryna Averkyna1,2(B) 1 Estonian Business School, A. Lauteri, 3, Tallinn, Estonia

[email protected] 2 The National University of Ostroh Academy, Seminarska, 2, Ostroh, Ukraine

Abstract. The paper deals with the creation of a theory of local government management. The role of a theory of local government in the decision-making process is comprehensively examined. The author points out that the theory of algebraic systems is important for forming a theory of local government, and assumes that a situation in local government is an algebraic system: an ordered pair in which the first place is occupied by a main set and the second by a signature. The theory of algebraic systems is also crucial for creating artificial intelligence, which helps managers make rational and ethical decisions for problem solving. The paper introduces problems and solutions for public transportation based on interviews with managers in Ostroh (Ukraine) and in Estonian towns. Keywords: Implementation decisions · Descriptive similarity · Structural similarity · Dialog systems

1 Introduction

Local governments face the problem of decision-making under uncertainty. These decisions must be optimal, relevant and made quickly. Decisions in organizations are usually made by humans. Michael Mintrom pointed out that human limits are both cognitive and informational: all relevant information about the future consequences of actions cannot be readily accessed, and even if it could be, cognitive limits would inhibit effective analysis in service of good decision-making; the number of alternatives to be assessed stretches brain power [11]. In this case, artificial intelligence is crucial for the decision-making process in an organization: it helps managers make decisions in order to solve issues. Suppose, for example, that we want to improve the public transportation situation in a small town. We face many questions: What can we do about this issue? How will we make the decisions? Which decisions are more suitable? It is therefore important to have a theory of local government that explains the decision-making process. Mackenzie, W.J.M. emphasized that there is no theory of local government: there is no normative general theory from which we can deduce what local government ought to be, and no positive general theory from which we can derive testable hypotheses about what it is [7]. Creating a theory of local government management requires a clear understanding of the background of the decision-making process and of the related decision theories. In this work, we will look at:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Z. Ahram et al. (Eds.): AHFE 2021, LNNS 271, pp. 107–115, 2021. https://doi.org/10.1007/978-3-030-80624-8_14


– what we mean by the decision-making process;
– which theories are related to the decision-making process;
– what role algebraic systems could play in explaining local government decision-making processes.

2 Decision-Making Process and Related Theories

Ernest Forman pointed out that decision-making is a process of choosing among alternative courses of action in order to attain goals and objectives [4]. A decision usually involves three steps: (1) recognition of a need; (2) a decision to change; and (3) a conscious dedication to implement the decision [2]. It is necessary to point out that administrative theory, game theory and information theory all deal with the decision-making process.

Nobel laureate Herbert Simon pointed out that "decision-making is the heart of administration, and …administrative theory must be derived from the logic and psychology of human choice" [15]. He wrote that the whole process of managerial decision-making is synonymous with the practice of management [14]. Managers' decisions are related to cognitive and rational choice. Simon claimed that individuals are both boundedly rational and intendedly rational, and that appropriate organizational forms can promote more rational decision-making [11]. He pointed out: "It is impossible for the behavior of a single, isolated individual to reach any high degree of rationality. The number of alternatives he must explore is so great, the information he would need to evaluate them so vast that even an approximation to objective rationality is hard to conceive" [14]. He emphasized [14]:

(1) Rationality requires a complete knowledge and anticipation of the consequences that will follow on each choice. In fact, knowledge of consequences is always fragmentary.
(2) Since these consequences lie in the future, imagination must supply the lack of experienced feeling in attaching value to them. But values can be only imperfectly anticipated.
(3) Rationality requires a choice among all possible alternative behaviors. In actual behavior, only a very few of all these possible alternatives ever come to mind.

"The central concern of administrative theory is with the boundary between the rational and the nonrational aspects of human behavior. Administrative theory is peculiarly the theory of intended and bounded rationality – of the behavior of human beings who satisfice because they have not the wits to maximize" [13, 14].

A rational approach to human behavior through the study of mathematical models is presented in game theory, which allows us to conceptualize the strategic choices made by decision-makers when they have different interests [16]. It was initially developed by Émile Borel in France in 1921; by John von Neumann in the United States, in a paper published in 1928 and in the book written with Oskar Morgenstern, "Theory of Games and Economic Behavior" (zero-sum games and new concepts such as cooperative games, coalitions and stable sets), published in 1944; and by John Nash in "Equilibrium points in n-person games" (1950), "Non-cooperative games" (1951) and "Two-person cooperative games" (1953).


The study of group behavior presents some difficulties for game theory, since a group as a whole can turn out to be irrational even if every member of the group is rational. Rational decision-makers have transitive preferences: if a decision-maker prefers alternative A to alternative B and prefers alternative B to alternative C, then he or she must prefer alternative A to alternative C (the symbol ">" means "preferred" [12]). A small mechanical check of this property is sketched after the list below. However, even when all group members are rational, group preferences may not be transitive. That is, for groups, A > B and B > C does not necessarily mean that A > C (the Social Dilemma) [12].

Kenneth Arrow, a researcher in information theory who won the Nobel Prize in 1972, tried to solve the problem of group decision-making. He pointed out: "If we exclude the possibility of interpersonal comparisons of utility, then the only methods of passing from individual tastes to social preferences which will be satisfactory and which will be defined for a wide range of sets of individual orderings are either imposed or dictatorial" [3]. In what he named the General Impossibility Theorem, he theorized that it was impossible to formulate a social preference ordering that satisfies all of the following conditions [14]:

1. Nondictatorship.
2. Individual Sovereignty.
3. Unanimity.
4. Freedom From Irrelevant Alternatives.
5. Uniqueness of Group Rank.
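The transitivity property discussed above is easy to verify mechanically. The following sketch, with illustrative preference pairs, confirms transitivity for an individual ordering and shows how a cyclic group preference violates it.

def is_transitive(prefers):
    """Check that (x, y) and (y, z) in the relation imply (x, z)."""
    return all(
        (x, z) in prefers
        for (x, y1) in prefers
        for (y2, z) in prefers
        if y1 == y2
    )

individual = {("A", "B"), ("B", "C"), ("A", "C")}   # A > B, B > C, A > C
group = {("A", "B"), ("B", "C"), ("C", "A")}        # a cyclic "preference"

print(is_transitive(individual))  # True: a rational individual ordering
print(is_transitive(group))       # False: the group ordering is intransitive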

In groups that are not controlled by a dictator, there will always be a chance that preferences become intransitive, that the choice that could be optimal for everyone is rejected, or that insignificant details affect the choice. These problems cannot be avoided in group decision-making (see Pastine and Pastine [12]). There is a further problem of deciding when no alternative is available: what can we do, and how can we solve the problem? Moreover, Ernest Forman pointed out that there are common mistakes when making crucial decisions [4]:

1. Plunging in: gathering information and reaching conclusions without thinking about the crux of the issue or how decisions like this one should be made.
2. Frame blindness: setting out to solve the wrong problem because your framework causes you to overlook attractive options or lose sight of important objectives.
3. Lack of frame control: failing to define the problem in more ways than one, or being unduly influenced by the frames of others.
4. Overconfidence in your judgment: failing to collect key factual information because of overconfidence in your assumptions and opinions.
5. Shortsighted shortcuts: relying on "rules of thumb" for crucial decisions, or on the most readily available information.
6. Shooting from the hip: trying to keep straight in your head all the information relating to the decision rather than relying on a systematic procedure.
7. Group failure: assuming that a group of smart people will automatically make a good decision even without a good decision process.
8. Fooling yourself about feedback: failing to learn from evidence of past outcomes, either because you are protecting your ego or because you are tricked by hindsight.
9. Not keeping track: assuming that experience will make lessons available automatically.
10. Failure to audit your decision process: failing to create an organized approach to understanding your own decision process.

In this case, we can use the approach proposed by Lorents and Averkyna [6] of applying the method of similarity to the decision-making process. It is crucial to use set theory and the theory of algebraic systems. This helps to create the theory of local government management and, in addition, provides a basis for artificial intelligence that supports the decision-making process in an organization.

3 The Algebraic System and the Theory of Local Government's Management

Many basic concepts, tools and results of the theory of algebraic systems are suitable for forming a theory of local government. In the theory of algebraic systems, a system is an ordered pair A = ⟨M; Σ⟩ in which the first place is taken by M, the set of selected and fixed elements, the main set of the system, and the second place by Σ, the set of selected and fixed properties of the system elements, or relationships between system elements, that make the set of elements a structured set. Following Lorents, we call the main set M of the system the elementor for short, and we call Σ, the set of the structors of the system, the systor for short. If we exchange the properties and relationships necessary for the design of a system from among the elements of the basic set for the names or symbols of these properties and relationships, then we obtain the signature of this system from the system's systor (see Maltsev [8], chapter II, §6, 6.3), the things which create a structure among these elements. Structors can be e-structors (selected elements), p-structors (properties of elements) and r-structors (relations between elements). The set formed by the symbols of the structors selected and fixed to form the system is called the signature of this system: the elements of the signature are the symbols of certain structors, while the elements of the systor are those structors themselves. If necessary, the elementor (set of elements) can be divided into several separate parts M1, M2, M3, …, Mk; in this case, we speak of a system with several elementors (see, e.g., Maltsev 1970, ch. I, §2.2) [8].

Situations can be described using formulas of a suitable algebraic system. In order to get the necessary overview of a situation, a description of the situation is usually provided. As a rule, it mentions (I) those things that it would be reasonable to consider as elements, components that are in some sense indivisible or simplest; (II) if necessary, elements of particular importance, which should be singled out; (III) usually also the properties of the elements that are of interest, in this case the most important ones; and (IV) as a rule, the relationships between the elements. The signs of these things are the signs of the elements and structors of the algebraic system; together with the so-called logic symbols and punctuation symbols, they form the alphabet of the corresponding algebraic system. (V) These symbols forming the alphabet are necessary to write down statements that characterize a situation as an algebraic system: for example, which elements have which properties, which elements are related to each other, what could be inferred from this, and so on.

The algebraic system consists of the elementor and the systor. Each situation, as an algebraic system, can be described by a set of statements, where each statement is a formula of the theory of the corresponding algebraic system. Decisions must be ethical and rational. We assume that a situation in local government is an algebraic system, an ordered pair ⟨M; Σ⟩, in which the first place is taken by the elementor, the set M of the elements of the system, and the second by the systor, the set of the structors of the system. Every situation of the town, as a system (financial, urban transportation, educational), is described by numerous statements.

We use the Lorents coefficient to assess the similarity of situations as systems. To calculate this coefficient, we have to choose which components of one system we want to equalize with components of the other system (and which we do not). Here we see a significant difference between the Lorents coefficient and the Jaccard coefficient: the calculation of the Jaccard coefficient (see Jaccard 1901) is based on identical elements [5], whereas the Lorents coefficient uses the equalized elements. Once the choice has been made, a numerical ratio must be found which expresses the proportion of the things to be equalized to the sum of all the components of both systems. We propose to compare one town's set of statements with a similar set of statements of another town; this helps us to find the best solution for solving an issue. We propose to include the similarity coefficient Sim(P, Q) in the signature. An example of the similarity assessment was presented by Lorents and Averkyna in 2019 and 2020 [1, 6].
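As an illustration of the difference between the two coefficients, the sketch below compares two towns' statement sets. The Lorents-style function reflects our reading of the description above (the ratio of equalized components to all components of both systems) and is an assumption rather than a definitive formula; the statement strings are invented.

def jaccard(p, q):
    """Proportion of identical statements (Jaccard 1901)."""
    return len(p & q) / len(p | q)

def lorents_style(p, q, equalized):
    """`equalized` holds (statement_in_p, statement_in_q) pairs that the
    analyst has chosen to treat as equivalent."""
    return 2 * len(equalized) / (len(p) + len(q))

town_p = {"e-ticket validation", "free public transport", "schedules online"}
town_q = {"e-ticket validation", "timetables at bus stops", "green buses"}

print(jaccard(town_p, town_q))  # 0.2: only one literally identical statement
print(lorents_style(town_p, town_q,
                    {("e-ticket validation", "e-ticket validation"),
                     ("schedules online", "timetables at bus stops")}))  # ~0.67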

4 The Decision Making as Inference

The decision-making process must result in a decision, and it should be possible to formulate the decision as a statement. Such statements must be substantiated. There are two aspects here that will sooner or later require the use of applied artificial intelligence tools: (I) how to turn natural language statements into formulas of the theory of a suitable algebraic system; and (II) how to turn the "movements" that occur during the decision-making process from one statement to another into derivation steps whose correctness, expressed in terms of reliability, could be verified by means of mathematical logic.

In both cases described above, it is in principle not possible to create and use systems that are based on algorithms and operate completely autonomously. Thus, one can only rely on self-learning dialogue systems, which gain more and more independence and reliability during use (which is not equivalent to full automation). Among the many self-learning dialogue systems of this kind, and their theoretical foundations, are those created by Matsak [9, 10]. Very briefly, the DST system works in two modes of transforming sentences. In the first mode, the program is taught: the user transforms a sentence phase after phase and writes the intermediate variants of the text into the text fields. Each sentence is analyzed morphologically, and the outcome is a saved scheme of morphological signs. The change of word order, the addition of new words and the morphological form of the sign (the order of the scheme) in each sentence are memorized. The second mode is the automatic transformation according to the corresponding morphological scheme found earlier; it is based on the principle that equal morphological schemes give equal logical constructions [9]. The dialog system for extracting logical constructions from Estonian text uses the HTML morphological analyzer and synthesizer of the Estonian language located at http://www.filosoft.ee: the dialog system automatically queries the Filosoft software to receive the necessary morphological signs and, vice versa, to create words in the required morphological forms from the basic form. The separation of derivation steps has been discussed by Matsak in works published in 2007 and 2008 [9, 10]; these steps include transforming text into formulas and the inference level.

Example 2. We propose to form a solution that can be essential for improving the situation in Ostroh. In this case, it is necessary to form a set of problems and their solutions (see Table 1). The information in the table is based on interviews with managers

Solutions

Control the number of passengers

Tickets’ validation. E-ticket

Satisfaction citizens’ needs

The buses must be comfortable

Satisfaction citizens’ needs

The public transport should follow by established schedules

The control of quality public transportation

Towns’ residents actively interact with managers responsible for public transportation

Managers evaluate road congestion at a special span time

Carriage validation tracking information system

How towns’ residents understand the time of arrival and departure of public transport?

Time schedule near bus station

How towns’ residents understand the time of arrival and departure of public transport?

The passengers use information system via Internet

How to control quality of public transportation?

It is necessary to have the strict requirements for the quality of buses in a town

How managers can quickly make the decisions They use special information systems about public transport for congested routes? Control for the requests’ passengers

Managers receive the letters and answer them

What encourage using public transportation?

Free of charge public transportation

How to reduce NOx emission?

Use green transport

How to reduce COx emission?

The rejection of private cars

Theoretical Aspects of the Local Government’s Decision-Making Process

113

of small towns (Ostroh, Haapsalu, Rakvere, Viljandi, Valga, Sillamäe, Kuressaare, Keila, Maardu, Võru, Jõhvi). This information is crucial for managers in making decisions to improve the public transportation situation, and the approach helps managers avoid irrational decisions. As the next step, we propose two algorithms for the decision-making process. The first algorithm is essential for forming the database of problems and their solutions (see Fig. 1).

Fig. 1. Flowchart for forming the database of problems and their solutions

The second algorithm is the basis for software that helps managers find a solution to a problem (see Fig. 2).

Fig. 2. Flowchart for finding a problem's solution


These algorithms are the basis for Python software that creates a system which helps managers make decisions; a minimal sketch is given below.
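The following sketch shows one way such a system could work, assuming the similarity-based matching of Sect. 3: a manager's newly phrased problem is matched against the stored problems from Table 1 and the solution of the most similar one is returned. Here difflib's string similarity stands in for the fuller statement-level similarity machinery; the abridged knowledge base and the 0.3 cutoff are illustrative choices.

import difflib

# Knowledge base drawn from Table 1 (abridged)
knowledge_base = {
    "Control the number of passengers": "Tickets' validation; e-ticket.",
    "How to reduce NOx emissions?": "Use green transport.",
    "How to control the quality of public transportation?":
        "Strict requirements for the quality of buses in the town.",
}

def suggest_solution(problem):
    """Return the solution stored for the most similar known problem."""
    match = difflib.get_close_matches(problem, list(knowledge_base),
                                      n=1, cutoff=0.3)
    return knowledge_base[match[0]] if match else "No similar problem stored."

print(suggest_solution("How can we reduce NOx emission in the town?"))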

5 Conclusion

A theory of local government is important for the decision-making process. Managers have difficulty making rational and urgent decisions to solve issues, and artificial intelligence is crucial in this case. The author has pointed out that the theory of algebraic systems is essential for forming a theory of local government that can be implemented in artificial intelligence, and has indicated that such an algebraic system consists of the set of statements describing the situation in local government, together with a signature. The author emphasized that the signature should include a similarity coefficient for comparing situations in local government and for creating artificial intelligence. Creating artificial intelligence tools requires software for transforming natural language. In the next paper, the author will present application software based on Python for problem solving in public transportation.

Acknowledgments. The author expresses profound gratitude to Professor Peeter Lorents for assistance in the writing of this paper.

References

1. Averkyna, M.: Implementation of descriptive similarity for decision making in smart cities. In: Stephanidis, C., et al. (eds.) HCII 2020. LNCS, vol. 12427, pp. 28–39. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-60152-2_2
2. Arsham, H.: Leadership decision making (2010). http://home.ubalt.edu/ntsbarsh/opre640/partXIII.htm. Accessed 23 Feb 2012
3. Arrow, K.: A Difficulty in the Concept of Social Welfare (1950)
4. Forman, E.: Decision by Objectives (How to Convince Others that You are Right), 402 p. (2001)
5. Jaccard, P.: Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bulletin de la Société Vaudoise des Sciences Naturelles 37, 241–272 (1901)
6. Lorents, P., Averkyna, M.: Some mathematical and practical aspects of decision-making based on similarity. In: Stephanidis, C. (ed.) HCII 2019. LNCS, vol. 11786, pp. 168–179. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30033-3_13
7. Mackenzie, W.J.M.: Theories of local government. In: Explorations in Government. Palgrave Macmillan, London (1975)
8. Maltsev, A.I.: Algebraic Systems. Science, Moscow (1970)
9. Matsak, E.: The prototype of system for discovering of inference rules. In: Proceedings of the International Conference on Artificial Intelligence, IC-AI 2007, Las Vegas, Nevada, USA, vol. II, pp. 489–492 (2007)
10. Matsak, E.: Improved version of the natural language dialog system DST and its application for discovery of logical constructions in children's speech. In: Proceedings of the International Conference on Artificial Intelligence, IC-AI 2008, Las Vegas, Nevada, USA, vol. II, pp. 332–338 (2008)
11. Mintrom, M., Simon, H.A.: Oxford Handbooks (2016). https://doi.org/10.1093/oxfordhb/9780199646135.013.22
12. Pastine, I., Pastine, T.: Introducing Game Theory: A Graphic Guide (2017)
13. Simon, H.A.: A behavioral model of rational choice. Q. J. Econ. 69(1), 99 (1955). https://doi.org/10.2307/1884852
14. Simon, H.A.: The New Science of Management Decision. Harper & Row, New York (1960)
15. Simon, H.A.: Administrative Behavior. A Study of Decision-Making Processes in Administrative Organizations, 4th edn. Macmillan, New York (1997)
16. Tirole, J.: Economics for the Common Good, 551 p. Princeton University Press, Princeton (2016)

Application of AI in Diagnosing and Drug Repurposing in COVID 19

G. K. Ravikumar1(B), Skanda Bharadwaj2, N. M. Niveditha1, and B. K. Narendra1

1 Adichunchanagiri University-BGSIT, BG Nagar, Mandya 571448, India
[email protected]
2 Lambert High School, 805 Nichols Road, Suwanee, GA 30024, USA

Abstract. Coronaviruses are a family of viruses. Among the diverse kinds of coronaviruses, only a few cause disease in humans, and when they do it is usually a cold or another mild respiratory illness. However, a couple of coronaviruses cause severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). Scientists identified a new virus of this family as the cause of a disease outbreak in Wuhan, China in December 2019. The virus is designated Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), and the disease it causes is known as COVID-19. The outbreak was declared a pandemic by the World Health Organization (WHO) on 11 March 2020. Through biomedical exploration, clinical science, precision medicine and medical diagnostics/devices, Artificial Intelligence (AI) is quickly becoming an important approach. These tools open new ways for researchers, clinicians and patients, helping them make educated choices and produce better results. Applied in healthcare environments, these methods have the potential to increase the efficacy and efficiency of the health research and treatment ecosystem, and potentially improve the quality of patient care. In this work we discuss how this new trend of AI and network medicine is being utilized for drug repurposing during the pandemic.

Keywords: Artificial Intelligence · Covid-19 · Drug repurposing · Diagnosing

1 Introduction

The SARS-CoV-2 virus spreads via respiratory droplets released when someone who has COVID-19 coughs, sneezes, or talks. When a person inhales these droplets, or when they land on the mouth, eyes or nose of a person nearby, the virus can infect that person. It can also spread when a person touches surfaces or other materials that carry the virus and then touches their mouth, eyes or other sensitive areas. Symptoms of COVID-19 include fatigue, nausea or vomiting, cough, fever, shortness of breath, diarrhea, sore throat, chills, muscle or body aches, loss of taste or smell, headache and congestion; sometimes it can lead to respiratory problems, kidney failure or even death. Symptoms appear 2–14 days after exposure to the coronavirus.


Fig. 1. Application of Artificial Intelligence in the fight against COVID-19.

The SARS-CoV-2 virus spreads faster than other coronaviruses, and the fatality rate is also very high. With no vaccines available, the only way to contain the spread is to practice social distancing. Even with public health/CDC guidelines in place, the number of COVID-19 cases across the globe is very high: as of Sep 06, 2020, there were 27,030,209 confirmed cases and 882,895 deaths worldwide. Modern technologies are playing an important role in the response to the COVID-19 pandemic. Big Data, Artificial Intelligence and Machine Learning are among the technologies that can be used effectively in detecting, diagnosing, monitoring, and contact tracing. They can also be used for the development of vaccines and drugs, and for projecting the number of cases and death rates.

1.1 Diagnosis

Early detection of COVID-19 infection makes it possible to treat an infected person before the disease becomes severe and helps minimize its spread. Artificial Intelligence can analyze regular and irregular symptoms and provide effective and efficient results, which helps in making faster decisions. For diagnosing an infected person, AI uses Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans of human body parts (Fig. 1). A group of Chinese doctors came up with a new model for the exact detection of COVID-19 based on chest CTs; they maintained that AI and ML can be used for accurate detection of COVID-19 and to differentiate it from other lung diseases. An AI algorithm has also been developed that analyzes audio and can detect whether a person is infected with COVID-19: the COVID Voice Detector app was developed by researchers from Carnegie Mellon to detect COVID-19 by analyzing a user's voice.

1.2 Prognosis

High-tech AI-enabled instruments have been built to protect healthcare workers and to support patients. Physicians may use AI-driven voice assistant software such as Suk, Kara, and EPIC to capture and auto-complete clinical notes.


Fig. 2. Daily confirmed cases of COVID 19 till Nov 30, 2020 [2]

EKO, an AI-driven stethoscope, distinguishes abnormal from normal sounds produced by blood flowing through the heart. This allows physicians to listen to patients from a distance while wearing large quantities of protective equipment. The Care AI video surveillance app offers a platform for facial recognition and monitoring; it also has thermal scanning sensors capable of screening for fever and sweating. Deep learning techniques and neural network models are used to examine Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans and to perform image segmentation and analysis, and thus provide recommendations.

Fig. 3 describes the general protocol for Artificial Intelligence and non-Artificial Intelligence applications [1], which helps general practitioners understand the symptoms of COVID-19 (Fig. 2). The process map compares and contrasts the flow of limited non-Artificial-Intelligence care with Artificial-Intelligence-based therapy. It shows the role of Artificial Intelligence in the important steps of high-precision treatment, reducing both the time taken and the complexity involved. With AI applications, the expert works not only on managing the patient's pain but also on controlling the illness. Vital signs and test interpretation are handled with the utmost accuracy with the assistance of AI. The diagram also indicates that the overall number of actions performed in the process is limited, keeping it more economical in nature.

In this section, we review the applications of AI for COVID-19 detection and diagnosis, tracking and identification of the outbreak, infodemiology and infoveillance, biomedicine and pharmacotherapy. We observe that AI-based frameworks are highly suitable for mitigating the impact of the COVID-19 pandemic, as a vast amount of COVID-19 data is becoming available thanks to various technologies and efforts. Though the AI studies are not implemented at a large scale and/or tested clinically, they are still helpful, as they can provide fast responses and hidden, meaningful information to medical staff and policy makers. However, we face many challenges in designing AI algorithms, as the quality and quantity of COVID-19 datasets need to be further improved; this calls for constant effort from the research communities and for help from official


Fig. 3. Applications focused on Artificial Intelligence and non-Artificial Intelligence that help general physicians recognize the signs of COVID-19 [1].

organizations with more reliable and high-quality data. State-of-the-art studies on AI techniques for COVID-19 are summarized in Table 1.

1.3 Dashboards and Projections

AI technology can predict the distribution of the virus and the likelihood of spread from the existing evidence, media networks and social media. The number of cases and the mortality in any area can also be estimated, and AI makes it possible to find the most vulnerable groups. Researchers at the Chan Zuckerberg Biohub in California have developed a model to quantify the number of undetected COVID-19 infections and the public health implications.

Table 1. Applications of AI for COVID-19

– COVNet, a 3D convolutional ResNet-50 [4]: AUC of 0.96 for detecting COVID-19; 4,356 chest CT scans of 3,322 patients from 6 medical centers (1,296 COVID-19, 1,735 CAP, and 1,325 non-pneumonia scans) [5].
– Location-attention network with ResNet-18 [4]: accuracy of 86.7%; 618 CT samples (219 scans from 110 COVID-19 patients, 224 scans from 224 patients with influenza viral pneumonia, and 175 scans from people in good health) [6].
– Drop-weights based Bayesian CNNs: accuracy of 89.92%; 5,941 posterior-anterior chest radiographs from 4 classes (normal: 1,583, bacterial pneumonia: 2,786, non-COVID-19 viral pneumonia: 1,504, COVID-19: 68) [7].
– Modified Inception transfer-learning model: accuracy of 79.3%, with specificity of 0.83 and sensitivity of 0.67; 1,065 CT images (325 COVID-19 and 740 viral pneumonia) [8].
– Multilayer perceptron and LSTM [20]: AUC of 0.954; clinical data and a series of chest CT scans collected at different times from 133 patients, of whom 54 progressed to critical illness while the remaining did not [9].
– 2D deep CNN: accuracy of 94.98% and AUC of 97.91%; 970 CT scans from 496 patients with confirmed COVID-19 and 1,385 negative cases [10].
– Combination of 3D U-Net++ and ResNet-50 [4]: sensitivity of 0.974 and specificity of 0.922; CT images of 1,136 training cases from 5 hospitals [11].
– Pre-trained ResNet-50: accuracy of 98%; CT scans of 50 healthy people and 50 COVID-19 affected people [12].
– COVID-Net, a deep CNN: accuracy of 92.4%; 16,756 chest radiography images of 13,645 patients from 2 open-access data repositories [13].
– ResNet-50: AUC of 0.996; CT images of 157 international patients [14].
– AlexNet [17], ResNet-18 [4], DenseNet201 [18], SqueezeNet [19]: accuracy of 98.3%; 190 COVID-19 chest X-rays, 1,345 viral pneumonia scans, and 1,341 normal scans [15].
– A new CNN and pre-trained AlexNet [17] with transfer learning: accuracy of 98% on X-ray images and 94.1% on CT images; 170 X-ray scans and 361 CT scans from COVID-19 patients, drawn from 5 different sources [16].
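Several of the studies in Table 1 fine-tune a pre-trained ResNet-50 [4] on chest images. The sketch below shows that general transfer-learning pattern in Keras; it is an illustration only, not the code of any cited study, and the dataset directory, image size, epochs and batch size are hypothetical.

# Sketch of the transfer-learning pattern used by several Table 1 studies:
# a ResNet-50 backbone pre-trained on ImageNet with a new binary head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; train only the new head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Hypothetical directory of scans arranged one folder per class.
train = tf.keras.utils.image_dataset_from_directory(
    "ct_scans/train", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train, epochs=5)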

One of the first to use AI in predicting disease outbreaks in China was Blue Dot. Blue Dot goes through news stories in 65 languages along with airline information, and then applies AI algorithms to diagnose outbreaks and predict disease dispersal. AI models have also been developed to interpret big data gathered in a number of ways and to offer dashboards for it. Up Coding, Nextstrain, Johns Hopkins, BBC, the New York Times, and Health Chart are among the top dashboards.

2 Research and Development of Vaccines and Drugs

AI models for drug research are built by analyzing the existing data on COVID-19, and they are used to expedite drug testing in real time. The same testing would take a long time with conventional methods, and at times such testing is not possible at all. AI is thus a powerful tool for diagnostic test design and vaccine development. AI can be used not only in the research and development of vaccines but also for clinical trials with simulations. A UK AI firm, Benevolent AI, has enhanced its website to understand the body's reaction to the coronavirus. They introduced a campaign using their AI tool to identify licensed drugs that could theoretically prevent the progression of the SARS-CoV-2 virus. To help infer qualitative interactions between genes, diseases and medicines, they


used machine learning, leading to the proposal of a limited number of drug compounds. Using this, a drug that subsequently entered clinical trials was identified. The pace at which the drug was developed illustrates the value of AI in tackling the global pandemic.

For drug repurposing, AI algorithms can be used (Fig. 4) [3]; this is a faster and more cost-effective way to discover new therapy options for emerging diseases. By incorporating biological knowledge (e.g., the human interactome, organelles, tissues, and organs), AI approaches can greatly accelerate drug repurposing [3]. In Fig. 4, reproduced with permission from the Center for Medical Art and Photography of the Cleveland Clinic, the cogs indicate computer programmes and algorithms. In the deep neural networks, red and black circles portray neurons; red indicates that the neuron carries important information from the biological systems. Green and blue individuals indicate different subgroups that would have different treatment responses. The downward arrow represents that AI algorithms are able to analyze information from multi-level biological systems and drug development pipelines in order to create more powerful models. The left panel shows the biological systems and the right panel shows the AI drug development pipeline. Abbreviations: AI = Artificial Intelligence; PARP1 = poly-ADP ribose polymerase 1; NR3C1 = nuclear receptor subfamily 3 group C member 1; AAK1 = AP2-associated protein kinase 1; MTNR1A = melatonin receptor 1A; TMPRSS2 = transmembrane serine protease 2; ACE2 = angiotensin I converting enzyme 2; NRP1 = neuropilin 1; NSP14 = non-structural protein 14.
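One common ingredient of such pipelines, used by Zhou et al. [3] among others, is a network-proximity score between a drug's protein targets and virus-associated host proteins in the human interactome. A minimal sketch of that idea follows; the toy graph, the drug target set and the score definition here are illustrative assumptions, not data or code from the cited study.

# Sketch: average shortest-path distance from a drug's targets to
# virus-associated host proteins in a (toy) protein-interaction network.
import networkx as nx

interactome = nx.Graph()
interactome.add_edges_from([
    ("ACE2", "NRP1"), ("NRP1", "AAK1"), ("AAK1", "PARP1"),
    ("ACE2", "TMPRSS2"), ("TMPRSS2", "NR3C1"),
])

virus_proteins = {"ACE2", "TMPRSS2"}   # host proteins the virus engages
drug_targets = {"PARP1", "NR3C1"}      # hypothetical drug target set

def proximity(graph, targets, disease_proteins):
    """Mean shortest-path length from each target to its nearest disease protein."""
    dists = [min(nx.shortest_path_length(graph, t, d) for d in disease_proteins)
             for t in targets]
    return sum(dists) / len(dists)

print(proximity(interactome, drug_targets, virus_proteins))  # lower = closer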

3 Prevention/Communication

Based on the images and data collected, AI produces real-time dashboards that help in the prevention of the disease. AI can be used to generate health maps that indicate virus clusters and hot spots, so that people can avoid exposing themselves to these areas. This information is also helpful for figuring out the need for beds and healthcare professionals.

Fig. 4. AI-assisted drug repurposing for COVID-19


Closed Loop, an Artificial Intelligence start-up, has developed and open-sourced the COVID-19 Vulnerability Index, an Artificial-Intelligence-based predictive model that identifies people who are most at risk of severe complications from COVID-19. Healthcare systems and care management organizations use this 'C-19 Index' to identify such individuals. Chatbots like Clara or Providence educate individuals with additional information and help them perform self-diagnosis; they also help individuals connect to providers.

4 Challenges

The quality of a system developed using AI depends on the quality of the data and the effectiveness of the algorithms. Algorithms will always have room for improvement based on the data collected, so ultimately quality data is key. When it comes to COVID-19, there are only a few sources of truth for data, such as the CDC and WHO, and hence developing a robust system is a challenge; moreover, one cannot rely on data collected from social media. The lack of quality FAIR (findable, accessible, interoperable, reusable) and ethically reusable data affects the growth of AI systems in health, as the National Academies pointed out repeatedly in 2018 and 2019. Therefore, building AI applications for COVID-19 may produce interesting tools, but they may not be useful.

5 Conclusion

By developing useful algorithms, Artificial Intelligence can drastically improve treatment accuracy and decision-making strength. Artificial Intelligence is useful not only for the treatment of COVID-19 infected patients but also for the proper maintenance of their health conditions, and it is useful for the future prevention of viruses and diseases. It will become a relevant technology for combating other pandemics and epidemics and will play an important role in providing healthcare that is more preventive and predictive.

References

1. Vaishya, R., Javaid, M., Haleem Khan, I., Haleem, A.: Artificial intelligence (AI) applications for COVID-19 pandemic. Diabet. Metabol. Syndr. Clin. Res. Rev. 14 (2020). https://doi.org/10.1016/j.dsx.2020.04.012
2. https://ourworldindata.org
3. Zhou, Y., Wang, F., Tang, J., Nussinov, R., Cheng, F.: Artificial intelligence in COVID-19 drug repurposing. Lancet Digit. Health (2020). https://doi.org/10.1016/S2589-7500(20)30192-8
4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
5. Li, L., et al.: Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 200905 (2020)


6. Xu, X., et al.: Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprint arXiv:2002.09334 (2020)
7. Ghoshal, B., Tucker, A.: Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection. arXiv preprint arXiv:2003.10769 (2020)
8. Wang, S., et al.: A deep learning algorithm using CT images to screen for corona virus disease (COVID-19). medRxiv (2020). https://doi.org/10.1101/2020.02.14.20023028
9. Bai, X., et al.: Predicting COVID-19 malignant progression with AI techniques. medRxiv (2020). https://doi.org/10.1101/2020.03.20.20037325
10. Jin, C., et al.: Development and evaluation of an AI system for COVID-19. medRxiv (2020). https://doi.org/10.1101/2020.03.20.20039834
11. Jin, S., et al.: AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system in four weeks. medRxiv (2020). https://doi.org/10.1101/2020.03.19.20039354
12. Narin, A., Kaya, C., Pamuk, Z.: Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849 (2020)
13. Wang, L., Wong, A.: COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images. arXiv preprint arXiv:2003.09871 (2020)
14. Gozes, O., et al.: Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection and patient monitoring using deep learning CT image analysis. arXiv preprint arXiv:2003.05037 (2020)
15. Chowdhury, M.E., et al.: Can AI help in screening viral and COVID-19 pneumonia? arXiv preprint arXiv:2003.13145 (2020)
16. Maghdid, H.S., Asaad, A.T., Ghafoor, K.Z., Sadiq, A.S., Khan, M.K.: Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. arXiv preprint arXiv:2004.00038 (2020)
17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
18. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
19. Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K.: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016)

Greater than (upper-tailed): t critical value (0.1%) = 3.137; p-value = 0.006; H1 (0.1%): rejected.

3 Search for Criteria for Decoding a Pupillogram in the "Gray Zone" and Results

As in the case of exposure to a light pulse, under an information pulse the pupillograms describe both the contraction and the expansion stage of the pupils. A preliminary assessment of the rate of change in pupil size was carried out from the tangent of the angle of inclination of the pupillograms. The rates of pupil expansion and contraction can be considered the same (Fig. 3), and the graph also shows the presence of other speeds. We assumed that the rate of change in pupil size would also be influenced by processes proportional to the degree of arousal of the respondents, and that the frequency of gaze-position changes would differ from case to case, depending on the significance of the information contained in the image. To test this hypothesis, we used the interframe difference of the pupil images and obtained the dependence of the rate of change in pupil size on time (Figs. 4 and 5). Fourier analysis of the data showed that both when viewing the calibration frames and when viewing the test objects, the spectrum contains the same fundamental frequencies, with the rest at the noise level.
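A minimal numpy sketch of the processing chain just described: take a sampled pupillogram (pupil size over time), estimate the rate of change from frame-to-frame differences, and inspect its frequency content with a fast Fourier transform. The frame rate and the signal here are illustrative placeholders, not the authors' data.

# Sketch: rate of change of pupil size from interframe differences,
# followed by an FFT to look for characteristic frequencies (Figs. 4, 5).
import numpy as np

fs = 60.0                                 # hypothetical camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)
pupil = 4.0 + 0.3 * np.sin(2 * np.pi * 0.8 * t)   # placeholder pupillogram, mm

rate = np.diff(pupil) * fs                # interframe difference -> mm/s

spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
freqs = np.fft.rfftfreq(rate.size, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.2f} Hz")   # ~0.8 Hz for this placeholder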


Fig. 3. Typical pupillary response to stimulus material.

Fig. 4. Fast Fourier transform of pupillograms obtained when viewing a calibration slide.

However, the pupillary response to slides №6 and №10 differs from all the others by the presence of new peaks and an increase in the amplitude of the fundamental frequency. It was precisely these test objects that contained information that was not novel and did not evoke emotions, but that personally concerned the respondents. Thus, it becomes possible to detect even a weak reaction described by the pupillogram in the "gray zone".

4 Discussion

The criteria for candidates for a position can be not only competence-related but also personal. Sometimes it is required to identify a set of preferences and values, and sometimes it is necessary to assess the likelihood of certain actions by people. The described criterion, although it requires additional research, is a rather sensitive tool that allows one to recognize an imperceptible, hidden reaction. At the next stage, it is planned to adapt the system for remote testing, check the correlation between the pupillary response and the level of professional skills, and assess


Fig. 5. Fast Fourier transform of pupillograms obtained under the influence of an information pulse.

the likelihood that candidates will, in the future, perform actions designated by a specific employer. The proposed personnel selection technology will improve the reliability of the results of the selection procedure, which, in turn, will help reduce the percentage of staff layoffs and improve the quality of the work performed by employees.

5 Main Results

One of the tools that has recently come into use in various fields is eye tracking. Most of the works devoted to eye-tracking technologies analyze the focus of attention when performing technical actions (including ignoring visual stimuli), strategies for visual search in the process of activity, the pupil diameter (as an indicator of cognitive load), the number of saccadic eye movements, as well as fixations, blinks and other parameters. During the research it was established that:

– the rate of change in pupil size when viewing stimulus images that are directly related to the subject leads to changes in its frequency spectrum;
– it was proposed to use this fact to solve the problem of detecting recognition (identification) and other weak reactions from pupillograms.

Thus, the constant improvement of the element base of solid-state electronics, together with the set of developed techniques, creates the prerequisites for a system of unobtrusive/remote selection of applicants for a position.


Acknowledgments. The study was carried out with the financial support of the Russian Foundation for Basic Research within the framework of the research project 18-47-860018 r_a.


Towards Understanding How Emojis Express Solidarity in Crisis Events

Sashank Santhanam(B), Vidhushini Srinivasan, Khyati Mahajan, and Samira Shaikh

University of North Carolina at Charlotte, Charlotte, NC 28223, USA
[email protected]

Abstract. We study how emojis are used to express solidarity on social media in the context of three major crisis events - a natural disaster, Hurricane Irma in 2017; terrorist attacks that occurred in November 2015 in Paris; and the Charlottesville protests in August 2017. Using annotated corpora, we first train a recurrent neural network model to classify expressions of solidarity in text. Next, we use these expressions of solidarity to characterize human behavior in online social networks, through the temporal diffusion of emojis, and their sentiment scores. Our analysis reveals that emojis are a powerful indicator of sociolinguistic behaviors (solidarity) that are exhibited on social media as the crisis events unfold. The findings from this article could help advance research on the pragmatic dimensions of emojis, which have been understudied in extant literature. Keywords: Solidarity · Emoji diffusion · Human behavior

1 Introduction and Related Work

Research has shown that emoticons and emojis are more likely to be used in socioemotional contexts [2] and that they may serve to clarify message structure or reinforce message content [3, 10]. Riordan [14] found that emojis, especially non-face emojis, can alter a reader's perceived affect of messages. Wood et al. [18] found emoji to be far more extensively used as compared to hashtags, and noted that emoji present a faithful representation of a user's emotional state. While research has investigated the use of emojis over communities and cultures [1, 9] as well as how emoji use mediates close personal relationships [8], the systematic study of emojis as indicators of human behaviors in social movements has not been undertaken. Our work seeks to fill this gap. The collective enactment of certain online behaviors, including pro-social behaviors such as solidarity, has been known to directly affect political mobilization and social movements [4, 16]. Social media, due to its increasingly pervasive nature, permits a sense of immediacy [5] - a notion that produces a high degree of identification among politicized citizens of the web, especially in response to crisis events [4]. Herrera et al. found that individuals were more outspoken on social media after a tragic event [6]. They studied solidarity in tweets spanning geographical areas and several languages relating to a terrorist attack, and found that hashtags evolved over time correlating with


a need of individuals to speak out about the event. However, these prior approaches do not consider the use of emoji in their analysis. We thus seek to understand how emojis are used when people express behaviors online on a global scale and what insights can be gleaned through the use of emojis during crisis events. We make the following salient contributions:

1. Advance the understanding of the pragmatic function of emoji and how they contribute to enrich expression of sentiment.
2. Demonstrate how emojis are used to express pro-social behaviors such as solidarity in the online context, through the study of temporal diffusion.
3. Three large-scale corpora (made available to the research community upon publication), annotated for expressions of solidarity using multiple annotators and containing a large number of emojis, surrounding three distinct crisis events that vary in time-scales and type of crisis event.

2 Data

In this section, we briefly describe the three crisis events we analyze and the annotation procedures used for labeling our social media text data.

Hurricane Irma Corpus: Irma was a catastrophic Category 5 hurricane and was one of the strongest hurricanes ever to form in the Atlantic. The storm caused massive destruction over the Caribbean islands and Cuba before turning north towards the United States. People expressed their thoughts on social media along with tracking the progress of the storm. To create our Irma corpus, we used the Twitter streaming API to collect tweets with mentions of the keyword "irma" starting from the time Irma became an intense storm (September 6th, 2017) and until the storm weakened over Mississippi on September 12th, 2017.

Paris Corpus: Attackers carried out suicide bombings and multiple shootings near cafes and the Bataclan theatre in Paris on November 13th, 2015. More than 400 people were injured and over 100 people died. People all over the world took to social media to express their reactions. To create our Paris corpus, we collected tweets from November 13th, 2015 to November 17th, 2015 containing the word "paris" using the Twitter GNIP service.

Charlottesville Corpus: The Charlottesville protest, also called the Unite the Right rally, took place in Charlottesville, Virginia in August 2017. On August 12th, a protester rammed his car into a crowd of counter-protesters, killing Heather Heyer and injuring 19 others. During the next 2 days, marches and observations of solidarity were held throughout the US in remembrance of Heyer and against white nationalism. We collected tweets between February 2017 and October 2017 using the Twitter GNIP service with a carefully curated set of keywords including cville, antifa, Nazi and neo-Nazi.

Annotation Procedure: We performed distance labeling [11] by asking two trained annotators to assign the most frequent hashtags in each corpus with one of three


labels ("Solidarity" (e.g. #solidaritywithparis, #westandwithparis, #prayersforpuertorico), "Not Solidarity" (e.g. #breakingnews, #facebook) and "Unrelated/Cannot Determine" (e.g. #rebootliberty, #syrianrefugees)). Using the hashtags that both annotators agreed upon (κ > 0.65, an acceptable agreement level) [15], we filtered tweets that were annotated with conflicting hashtags from both corpora, as well as retweets and duplicate tweets. Table 1 provides the totals of the original (not retweets), non-duplicate tweets that were annotated as expressing solidarity and not solidarity based on their hashtags.

Table 1. Descriptive statistics for crisis event corpora (# of tweets)

Corpus | Solidarity | Not Solidarity | Total
Irma | 12000 (13%) | 81697 (87%) | 93697
Paris | 20465 (41%) | 29874 (59%) | 50339
Charlottesville | 25240 (30%) | 59588 (70%) | 84828

3 Understanding the Emojis of Solidarity

We outline our analyses in the form of research questions (RQs) and present our findings in the sections below.

3.1 RQ1: How Useful Are Emojis as Features in Classifying Expressions of Solidarity?

After performing manual annotation of the three corpora, we trained classifiers for detecting solidarity in text from all three corpora described in the Data section. We applied standard NLP pre-processing techniques of tokenization, removing stopwords and lowercasing the tweets. We also removed the annotated hashtags from the tweets. Class balancing was used in all models to address the issue of majority class imbalance.

Baseline Models: We used a Support Vector Machine (SVM) with a linear kernel and 10-fold cross validation to classify tweets containing solidarity expressions. For the baseline models, we experimented with three variants of features: (a) word bigrams, (b) TF-IDF, (c) TF-IDF + bigrams.

RNN + LSTM Model: We built a Recurrent Neural Network (RNN) model with Long Short-Term Memory (LSTM) [7] to classify social media posts into Solidarity and Not Solidarity categories. The embedding layer of the RNN is initialized with pre-trained GloVe embeddings [13] and the network consists of a single LSTM layer. All inputs to the network are padded to a uniform length of 100. Table 2 shows the hyperparameters of the RNN model.
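The baseline configuration described above can be sketched in scikit-learn as follows: a linear-kernel SVM over TF-IDF features with 10-fold cross-validation and class balancing. The toy tweets are placeholders, and the exact vectorizer settings are assumptions since the paper does not specify them.

# Sketch of the TF-IDF + linear SVM baseline with 10-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

tweets = ["we stand with paris", "breaking news traffic update"] * 25
labels = [1, 0] * 25   # 1 = solidarity, 0 = not solidarity (placeholder data)

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    SVC(kernel="linear", class_weight="balanced"),  # class balancing
)
scores = cross_val_score(clf, tweets, labels, cv=10)
print(f"mean accuracy: {scores.mean():.3f}")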

Table 2. RNN + LSTM model hyperparameters

Hyper-parameter | Value
Batch size | 25
Learning rate | 0.001
Epochs | 20
Dropout | 0.5
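Read together with Table 2, the model can be reproduced roughly as the Keras sketch below. The vocabulary size, the GloVe matrix stand-in, and the number of LSTM units are assumptions; the paper specifies only a single LSTM layer, GloVe-initialized embeddings, inputs padded to length 100, and the hyperparameters in Table 2.

# Sketch of the RNN + LSTM classifier (layer sizes assumed).
import numpy as np
import tensorflow as tf

vocab_size, embed_dim = 20000, 100                    # assumptions
glove_matrix = np.random.rand(vocab_size, embed_dim)  # stand-in for GloVe

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(glove_matrix)),
    tf.keras.layers.LSTM(64),                         # single LSTM layer
    tf.keras.layers.Dropout(0.5),                     # dropout from Table 2
    tf.keras.layers.Dense(1, activation="sigmoid"),   # solidarity vs. not
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(x_train, y_train, batch_size=25, epochs=20)  # per Table 2;
# x_train holds token-id sequences padded to length 100.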

Table 3. Performance (accuracy) of baseline SVM models and LSTM models in classifying messages of solidarity

Model | Irma | Paris | Cville
RNN + LSTM (w/ emojis) | 93.5% | 86.7% | 69.3%
RNN + LSTM (w/o emojis) | 89.8% | 86.1% | 68.9%
TF-IDF | 85.71% | 75.72% | 68.56%
TF-IDF + Bigrams | 82.62% | 76.98% | 59.16%
Bigrams only | 79.86% | 75.24% | 63.12%

Table 3 shows the accuracy of the baseline and RNN + LSTM models in classifying expressions of solidarity from text, where the RNN + LSTM model with emojis outperforms the linear SVM models in both the Irma and Paris corpora. However, the performance of the RNN + LSTM model for the Charlottesville (Cville in the table) corpus does not show significant improvement over the baseline, likely due to overlap of terms in tweets in the two categories.

Finding 1: For all three corpora, we find that the addition of emojis as a feature improves the model performance in classifying solidarity messages.

3.2 RQ2: What Sentiments Are Conveyed by Emojis in Solidarity Expressions During Crisis Events?

We created emoji sentiment maps for our three crisis events: Irma, Paris, and Charlottesville. We extracted sentiment and neutrality scores following the method described by Novak et al. [12] from the emoji sentiment website (http://kt.ijs.si/data/Emoji_sentiment_ranking/emojimap.html). We used the ggplot package in R (https://tinyurl.com/y9ekcutf) to create the emoji sentiment maps in Fig. 1. The x-axis represents sentiment scores from most negative (−1.0) on the left side towards most positive (+1.0) on the right. As described by Novak et al. [12], the position of an emoji is determined by its sentiment score S and its neutrality p0 (shown on the y-axis, representing the probability of the neutral class).
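In Novak et al.'s scheme [12], an emoji's sentiment score is computed from the shares of negative, neutral and positive tweets it appears in (S = p+ − p−). A small sketch of those map coordinates follows; the occurrence counts and emoji names are made up for illustration.

# Sketch: emoji map coordinates per Novak et al. [12]; sentiment score
# S = p(+) - p(-) on the x-axis, neutrality p(0) on the y-axis.
def emoji_coordinates(neg: int, neu: int, pos: int):
    total = neg + neu + pos
    p_minus, p_zero, p_plus = neg / total, neu / total, pos / total
    return p_plus - p_minus, p_zero   # (sentiment score S, neutrality p0)

# Hypothetical occurrence counts in (negative, neutral, positive) tweets.
for emoji, counts in {"folded_hands": (50, 30, 120), "siren": (140, 40, 20)}.items():
    s, p0 = emoji_coordinates(*counts)
    print(f"{emoji}: S = {s:+.2f}, p0 = {p0:.2f}")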


Fig. 1. Emoji sentiment maps (left: Irma, middle: Paris, right: Charlottesville)

The maps provide an overall view of the sentiments conveyed by the emojis to express solidarity across the three different crisis events. We observe that negative emojis (left side of each of the three charts in Fig. 1) are much more frequently used across all the events than positive sentiment emojis. The prevalence of negative emojis indicates the sadness, worry and negative emotions that are common across all three crisis events. We observe that is consistent across all three events, symbolizing danger. is consistently present across all the three events and most frequent in Charlottesville, expressing the sorrow and concern towards the people affected by these events. In addition, is present in all three events expressing sorrow, concern and anger or frustration. Another emoji that is consistent across the events is , expressing concern and love.

Finding 2: We observe that while the positive sentiment emojis are less frequent (as seen in the relative size compared to negative emoji), they have more variety. Put another way, there are many more different positive emojis present in each of the three corpora, while negative emojis are more frequent.

3.3 RQ3: How Can Emojis Be Used to Understand the Diffusion of Solidarity Expressions Over Time?

For addressing this RQ, we plot the diffusion of emojis across time. For the Hurricane Irma solidarity corpus, we filtered emojis that occur …

… >> cpu.txt
top -n 1 | grep cdcs | cut -d':' -f1 | awk '{print $10}' >> mem.txt

The results are shown in Figs. 6, 7 and 8:

Fig. 6. Diagram of data delay rate

Fig. 7. Diagram of CPU usage


Fig. 8. Diagram of memory consumption

5 Conclusion and Prospect

On the basis of general real-time message middleware (MM) and the operational features of the ATC system's distributed structure, this article develops LAN message middleware based on the P2P and publish/subscribe (P/S) modes and demonstrates its effectiveness through application. The MM provides a unified cross-platform interface and simplifies the development of ATC systems. By minimizing the coupling between data exchange and applications, it strengthens the stability of the system. The separation of data exchange from the application processes balances the use of resources and improves the efficiency of resource usage. It also facilitates maintenance and reuse, lowering the development cost. Next, cluster technology will be used to enhance the viability of the system, and load balancing technology will be used to raise the working efficiency of the middleware. Owing to the attributes of ATC system applications, more suitable encryption algorithms should also be studied to ensure data security.
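The publish/subscribe decoupling that the middleware provides can be pictured with the minimal in-process sketch below. Python is used here only for illustration (the actual middleware is a cross-platform LAN component), and the topic and message fields are hypothetical.

# Minimal in-process illustration of the publish/subscribe (P/S) pattern:
# publishers and subscribers exchange data by topic, never directly.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
# Hypothetical ATC topic: a display process subscribes to track updates.
broker.subscribe("radar/tracks", lambda m: print("display got:", m))
broker.publish("radar/tracks", {"flight": "CA1234", "alt_m": 9800})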


Building an Educational Product: Constructive Alignment and Requirements Engineering

Nursultan Askarbekuly(B), Alexandr Solovyov, Elena Lukyanchikova, Denis Pimenov, and Manuel Mazzara

Software Engineering Laboratory, Innopolis University, Universitetskaya st. 1, 420500 Innopolis, Tatarstan, Russian Federation
[email protected], {a.solovyov,e.lukyanchikova,d.pimenov,m.mazzara}@innopolis.ru

Abstract. Building an educational software product has two facets: software engineering and educational design. This paper proposes an approach that combines commonly used requirements engineering techniques with the educational concept of constructive alignment to develop an educational software product. The approach is novel in that it shows the direct correspondence of the engineering techniques with the constructive alignment, demonstrates its practical application through the case study project, and outlines a step-by-step procedure for using it. Keywords: Educational technology · Requirements engineering · Constructive alignment · Student profile · User personas · Goal-modelling

1 Introduction

An educational software product includes both technological and educational aspects [1]. As a technological endeavor, it involves software engineering activities such as elicitation of requirements, product design, and development. As a pedagogical endeavor, it views the end-user as a learner, and includes the design of educational content, learning activities, and assessment. This paper is a case study exploring how educational theory, in particular constructive alignment [2], can be combined with techniques commonly used in software engineering such as user-centered design [3], goal-modeling [4] and goal-question-metric [5]. This paper presents a practical step-by-step approach demonstrated through a case study project: a mobile application that educates its end-users on the importance and practicalities of staying in touch with family and friends [6]. Thus, the app includes aspects of both educational content and a persuasive software tool [7]. In the following sections, we review the related research, describe the methodology employed, and discuss the results.


2 Related Work

This section describes several educational and software engineering methods relevant to the approach suggested in this paper. It then reviews the existing research on requirements engineering for educational software products.

2.1 Engineering Concepts

The first concept of interest is User-Centered Design (UCD), which focuses on the user's needs and comfort and views them as the determinant of system requirements and design [3]. Various user research techniques can be applied within UCD, such as interviews, survey-polls, and focus groups [3]. User Personas are another common tool used to document and refer to user needs [3]. A persona, in this case, is the description of a fictional cumulative character that represents an average end-user or a category of end-users. When the user needs and goals are established, goal-oriented requirements engineering can be applied, in particular the KAOS model [4]. KAOS can take high-level goals and convert them into functional requirements assigned to specific system components. Another modeling approach, called Goal-Question-Metric (GQM), can take business- and system-level goals and derive system metrics for them [5], thus making them measurable.

2.2 Educational Concepts

Constructive alignment (CA) is an idea proposed by Biggs [2], who argues that any educational course should be designed based on two things: student profiling and intended learning outcomes. This simply means that one must have a clear understanding of who the learner is and what the learner is supposed to achieve. Thus, all learning activities within the course, as well as assessments, feedback, and medium of instruction, should be aligned together and based on understanding the learner and the intended learning outcomes. The benefit of having such an alignment is that the learner is more likely to conceptualize and apply the knowledge, as opposed to simply acquiring the information [8]. A related concept is student-focused teaching [9], which argues that learning is a function of what the student does, rather than what the teacher does. Thus, the teacher should focus on the activities that will allow the student to engage with the course material in a proactive, hands-on manner. A correspondence can be drawn between the aforementioned educational and engineering concepts. The following subsection explores how such correspondence relates to existing research, and outlines the novel aspects of the combination.

2.3 Previous Research on Requirements in Educational Software

A substantial amount of research has been done on developing e-learning software [10, 11] and on evaluating its effectiveness [12] and appeal [13] to learners. Wong [14] suggested an object-oriented approach to prototyping educational software, while Squires and Preece [15] look


at predicting quality of educational software from a socio-constructivist point of view. Costa [16] proposed using User-Centered Design (UCD) for developing educational software, and Dantin [17] demonstrated how User Personas can allow one to design more user-friendly educational products. Hadjerrouit [18] emphasized the role of integrating learning theories into the requirements of educational software, Exter [19] analyzed the skills a software engineer requires when developing educational software, and Sarrab [20] proposed a requirements model for developing M-learning software to be used as part of the curriculum in higher educational institutions. This paper combines some aspects of these works: it suggests a novel systematic approach to eliciting requirements and developing a minimum viable product (MVP) version of an educational software application from the point of view of a software engineer.

3 Methodology

The novelty of the approach is that it demonstrates how user-centered design (UCD) and a goal-oriented approach to requirements can be combined with the constructive alignment educational theory to form a systematic and intuitive method for developing educational software. In particular, UCD allows the engineer to establish user needs and goals and connect them to the concept of student profiling from constructive alignment. The goal-oriented KAOS and GQM models can be used together [21] to turn the high-level user goals into functional requirements and into evaluation metrics used to assess the effectiveness of the product in regards to achieving the user goals.

3.1 Combining Requirements Engineering and Constructive Alignment

As the first step to combining requirements engineering with constructive alignment, one should recognize that the end-user is the learner in the context of an educational software product. One can then draw a parallel between student-focused teaching and user-centered design, in the sense that both view the user-learner's needs as the basis for educational and system design. The concept of student profiling naturally fits into the pipeline of developing a user-centered educational software product, and can be implemented as user personas. Furthermore, the user-research techniques (e.g., interviews or surveys) can help the product creator establish the intended learning outcomes (ILOs) and compose user personas, which can serve as the student profiles. The requirements engineering techniques provide a practical way to implement constructive alignment within a software product. On the other hand, constructive alignment ensures that the learner and the intended outcomes inform the product design and, through the GQM-derived metrics, provide an evaluation and assessment tool. Table 1 summarizes this correspondence. Thus, we propose a method that combines the two concepts to form a fertile ground for building an educational software product. The case project serves as a demonstration of the combination in practice. The following subsection describes the case project and how the techniques were combined and applied within it.

Table 1. Correspondence between constructive alignment and requirements engineering.

Requirements concepts | Constructive alignment
End-user | Learner/Student
User-centered design | Student-focused teaching
User personas | Student profiles
User needs and goals | Intended Learning Outcomes (ILOs)
Functional requirements through goal modelling | Learning activities & content based on ILOs
Evaluation metrics through the goal-question-metric approach | Assessment & evaluation based on ILOs

3.2 Step-By-Step Description of the Approach Used in the Case Study Project

The case project is a mobile application that educates its end-users on the importance and practicalities of staying in touch with family and friends, described in a previous study by Askarbekuly et al. [21]. Figure 1 shows the flow and steps of the approach. As a starting point, we began with user interviews to understand the end-users and examine their needs, and to establish whether the problem of keeping in touch exists. We conducted six user interviews and one focus group with five participants. Based on that, one user persona was derived to describe our target learner. In educational terms, this user persona can be referred to as the student profile, and it is considered one of the cornerstones of constructive alignment [2].

Fig. 1. The suggested approach to combining requirements techniques and constructive alignment.

Based on the user persona/student profile and the newly-found understanding of the target end-user, we composed Intended Learning Outcomes (ILOs) for the learner/end-user. Since the project's domain mainly had to do with behavioral change, the Affective Learning Taxonomy [22] was used to formulate the specific ILOs. Establishing ILOs is of critical importance to the approach, as they serve as the input to the KAOS and GQM goal-modelling approaches and allow the engineer to arrive at functional requirements for the product and at metrics for evaluating user engagement and the effectiveness of the overall approach. The process of using KAOS and GQM to arrive at functional requirements and product metrics was described by Askarbekuly in [21].
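As an illustration of the GQM step, the goal-question-metric chain for such a product might look like the sketch below. The specific questions and metric names are hypothetical examples, not the project's actual model, which is detailed in [21].

# Hypothetical GQM chain: a high-level user goal refined into questions,
# each made measurable by a concrete product metric.
gqm = {
    "goal": "Maximize staying in touch with family and friends",
    "questions": [
        {"question": "Do users act on their contact plan?",
         "metric": "tracker entries logged per user per week"},
        {"question": "Does the educational content get read?",
         "metric": "share of content screens completed per user"},
    ],
}

for q in gqm["questions"]:
    print(f"{q['question']} -> measure: {q['metric']}")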


Lastly, to empirically validate the correspondence described above, event analytics was integrated into the product to measure the previously established metrics. It allowed us to arrive at product metrics and events, which were tracked through a third-party analytics tool [23].
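For instance, event tracking of this kind can be done by posting events to Amplitude's HTTP API [23]. The sketch below assumes that endpoint as documented publicly by Amplitude; the event name and properties are hypothetical and only illustrate the metric-to-event mapping.

# Sketch: logging a product-metric event to Amplitude's HTTP API.
# Endpoint per Amplitude's public docs; event names here are hypothetical.
import requests

def track_event(api_key: str, user_id: str, event_type: str, props: dict):
    payload = {
        "api_key": api_key,
        "events": [{
            "user_id": user_id,
            "event_type": event_type,          # a metric-backed event name
            "event_properties": props,
        }],
    }
    return requests.post("https://api2.amplitude.com/2/httpapi", json=payload)

# Hypothetical usage: a user logs a staying-in-touch tracker entry.
# track_event("API_KEY", "user-42", "tracker_entry_logged", {"contact": "family"})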

4 Results and Discussion

In this section, the results of applying each step of the suggested approach are listed and discussed.

4.1 The Student Profile and User Persona

As a result of conducting the user interviews and focus group, we established the following recurring traits for the persona of our target learner, whom we gave the nickname Jamal: Jamal is male and 25 years old. He comes from a historically Muslim culture with traditionally strong family ties. Currently, he is staying away from his family and relatives because of his studies and job. He is an active user of educational mobile apps, such as Duolingo, and watches YouTube habitually. Jamal wants to keep in touch with his family and friends and feels that it is important. However, various reasons prevent him from doing so, such as being busy, being forgetful, and having other priorities such as work and studies.

model the current situation in relation to staying in touch list and prioritize people they would like to stay in touch with compose a gradual action plan report on the fulfilment of the plan over a period of a month

A great advantage of having formulated Intended Learning Outcomes is that they can serve as the guide and criteria for creating the content.

Building an Educational Product

363

4.3 Product’s Functionality, Learning Activities and Content The resulting product (Fig. 2) consisted of educational content and a persuasive tool [7] for the users to track their staying-in-touch activities. Importantly, the learning activities are the learner’s engagement with the content and tool. In other words, the user’s interaction with the app functionality is the learning activity, in the context of an educational product. In our case, the app teaches the users through educational content, and provides them with a tool to put the knowledge to action.

Fig. 2. The educational content element of the product is on the left and the software tool is in the middle and on the right (a tracker tool with notification and reminders).

Ensuring that both the content and tool were derived from the student profile and intended learning outcomes is the key aspect that allows to argue that the product was built according to the principles of constructive alignment. 4.4 Evaluation and Assessment An important question is how to assess the effectiveness of the approach. Ultimately, if the app solves the users’ problem and they should keep using over time, which then becomes a direct testimony to the app’s effectiveness in reaching the objective. At the moment of writing the paper, we already had the first version of the app available in the App Store [6] with the product analytics integrated using Amplitude [23]. However, several steps still needed to be performed in regards to evaluating the overall approach. To reason about the effectiveness of using constructive alignment within a software product, we will need to employ alternative product configurations, such use of the tracker tool without the content and with some unaligned content from the web. The performance of various configurations can then be analyzed and compared. If the results demonstrate higher levels of the engagement with the tool and content, when

364

N. Askarbekuly et al.

the specifically produced aligned content was used, then this will be an indicator that the approach is valid.

5 Conclusion In this paper, we have outlined how constructive alignment can be used in conjunction with requirements engineering when developing an educational software product. The use of both concepts and the step-by-step description of the approach were demonstrated in the development of an actual educational application that helps users to keep in touch with their family and friends. The approach is novel in that it shows the correspondence and interplay of the engineering techniques with the constructive alignment, demonstrates its practical application through the case study project. and suggests a step-by-step procedure for using it. Among the limitations of this study is that the case study has a specific educational domain related to soft and social skills, and the approach needs to be validated in other domains, such as technical and hard skills. Another important limitation is that the authors were still in the process of gathering the usage data to evaluate the approach in regards to user engagement at the moment of writing the paper. Thus, quantitative evaluation of the approach was left outside of the scope of this paper. Additional studies can be conducted to further examine the correlation between product engagement and use of constructive alignment in combination with the aforementioned engineering practices. The central hypothesis, which needed to be examined, is that the use of constructive alignment results in higher user engagement with the product’s functionality and content. Such assessment can be done in two ways. On one hand product analytics can be used to observe the users’ interaction with both the content and tool. On another hand, one can also survey the users to understand whether the actual outcomes align with the objectives. Lastly, case studies in other educational domains, such as acquisition of technical and engineering skills, should be conducted to further validate the approach. Both the requirements engineering techniques used and constructive alignment are domain agnostic methods, and potentially their combination should also be applicable to various different domains.

References

1. Tchounikine, P.: Computer Science and Educational Software Design: A Resource for Multidisciplinary Work in Technology Enhanced Learning. Springer, Grenoble (2011). https://doi.org/10.1007/978-3-642-20003-8
2. Biggs, J.: What the student does: teaching for enhanced learning. High. Educ. Res. Dev. 18(1), 57–75 (1999)
3. Miaskiewicz, T., Kozar, K.A.: Personas and user-centered design: how can personas benefit product design processes? Des. Stud. 32(5), 417–430 (2011)
4. Van Lamsweerde, A.: Goal-oriented requirements engineering: a guided tour. In: Proceedings Fifth IEEE International Symposium on Requirements Engineering, pp. 249–262. IEEE Press (2001)


5. Caldiera, V.R.B.G., Rombach, H.D.: The goal question metric approach. In: Encyclopedia of Software Engineering, pp. 528–532 (1994)
6. My People: Stay in Touch iOS Application, Apple App Store. https://apps.apple.com/us/app/my-people-stay-in-touch/id1482512115
7. Fogg, B.J.: Persuasive technology: using computers to change what we think and do. Ubiquity 2002(December), 2 (2002)
8. Prosser, M., Trigwell, K.: Teaching for Learning in Higher Education. Open University Press, Buckingham (1998)
9. Marton, F., Säljö, R.: On qualitative differences in learning: outcome and process. Br. J. Educ. Psychol. 46(1), 4–11 (1976)
10. Hinostroza, E., Rehbein, L.E., Mellar, H., Preston, C.: Developing educational software: a professional tool perspective. Educ. Inf. Technol. 5(2), 103–117 (2000)
11. Roschelle, J., et al.: Developing educational software components. Computer 32(9), 50–58 (1999)
12. Escudeiro, P., Bidarra, J., Escudeiro, N.: Evaluating educational software (2006)
13. MacFarlane, S., Sim, G., Horton, M.: Assessing usability and fun in educational software. In: Proceedings of the 2005 Conference on Interaction Design and Children, pp. 103–109 (2005)
14. Wong, S.C.: Quick prototyping of educational software: an object-oriented approach. J. Educ. Technol. Syst. 22(2), 155–172 (1993)
15. Squires, D., Preece, J.: Predicting quality in educational software. Interact. Comput. 11(5), 467–483 (1999)
16. Costa, A.P., Reis, L.P., Loureiro, M.J.: Hybrid user centered development methodology: an application to educational software development. In: Cao, Y., Väljataga, T., Tang, J.K.T., Leung, H., Laanpere, M. (eds.) ICWL 2014. LNCS, vol. 8699, pp. 243–253. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-13296-9_27
17. Dantin, U.: Application of personas in user interface design for educational software. In: Proceedings of the 7th Australasian Conference on Computing Education, vol. 42, pp. 239–247 (2005)
18. Hadjerrouit, S.: Applying a system development approach to translate educational requirements into e-learning. Interdisc. J. E-Learn. Learn. Objects 3(1), 107–134 (2007)
19. Exter, M.: Comparing educational experiences and on-the-job needs of educational software designers. In: Proceedings of the 45th ACM Technical Symposium on Computer Science Education, pp. 355–360 (2014)
20. Sarrab, M., Al-Shihi, H., Al-Manthari, B., Bourdoucen, H.: Toward educational requirements model for mobile learning development and adoption in higher education. TechTrends 62(6), 635–646 (2018)
21. Askarbekuly, N., Sadovykh, A., Mazzara, M.: Combining two modelling approaches: GQM and KAOS in an open source project. In: Ivanov, V., Kruglov, A., Masyagin, S., Sillitti, A., Succi, G. (eds.) OSS 2020. IAICT, vol. 582, pp. 106–119. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-47240-5_11
22. Allen, K.N., Friedman, B.D.: Affective learning: a taxonomy for teaching social work values. J. Soc. Work Values Ethics 7(2), 1–12 (2010)
23. Amplitude Analytics. https://amplitude.com

Analysis of the Application of Steganography Applied in the Field of Cybersecurity Luis Serpa-Andrade1(B) , Roberto Garcia-Velez1 , Eduardo Pinos-Velez1 , and Cristhian Flores-Urgilez2 1 Research Group on Artificial Intelligence and Assistive Technologies GIIATa, UNESCO

Member for Inclusion, Universidad Politecnica Salesiana, Cuenca, Ecuador {lserpa,rgarciav,epinos}@ups.edu.ec 2 Universidad Catolica de Cuenca, Cuenca, Ecuador [email protected]

Abstract. Image files have data areas that are not very important for the visualization of the corresponding image; if these areas are changed or altered, the image shows no perceptible visual change, so they can be used to hide information with near certainty that the alteration will go unnoticed. The least significant bit (LSB) method consists in replacing the last bit of each byte; this can be done in several bytes at the same time and the image will not show a noticeable change. Each substituted bit is part of a letter of the message that one wants to hide in the given image. In general, images with the PGM extension are very useful for applying the LSB technique, since their content is stored without any compression algorithm; this means the content of the file is more readable and easier to work with, since it is in ASCII format (American Standard Code for Information Interchange). The content of an image with the PGM extension can be visualized using a simple text editor, so the ASCII information can be seen as a representation of whole numbers whose values represent the intensity of color: 255 is white and 0 is black. If each row of the matrix represents image data and every last digit, which is the least significant bit, is altered, then the eight bits corresponding to each altered byte would represent a letter belonging to a word of the hidden message. If this sequence is repeated several times, more information, such as phrases or sentences, can be hidden. Steganography is a useful tool in the field of computer security, where it is required to verify that private information is correctly used by the determined user through encryption with a personal key. There are already several software products that carry out this process; one of these is OpenStego, which encrypts the information using steganography and, through passwords, ensures that only the bearer of the password will be able to visualize said information. We will show that the use of the Fortran and MatLab programming languages presents great advantages for steganography, since access to memory spaces through pointers allows freeing up memory at runtime, which provides a high degree of efficiency in the execution of a program. Fortran, being a programming language oriented to mathematical calculation in general, has tools that facilitate its application, expressly the matrix calculation used in steganography.


Keywords: Steganography · Cybersecurity · LSB · PGM · Information · Encryption · Key

1 Introduction

Techniques to hide information were already in use long ago. Herodotus, 400 years before Christ, described how a man took a wooden tablet covered with wax, wrote a message on the wood, and then placed more wax over the message to hide it, writing another message on top [3]. Another technique described by Herodotus was to tattoo a message on the shaved head of a person, wait for that person's hair to grow, and send him to a recipient with orders to shave the individual's head to read the corresponding message [3]. In recent years, with the handling of digital information, the use of security in digital files has become very important. This can be achieved by means of data encryption, obfuscation of information, and also steganography, the latter being the art of hiding information in digital images such that the information is not noticeable to the human eye; that is to say, steganography conceals a message and in this way secret communication can be created [1]. The required information can be hidden in different media such as documents, audio, or digital images, often called carriers or container objects, which hold the hidden information so that only a user with a certain decryption algorithm can access it [2, 3]. In Ecuador, algorithms are used to hide information within the identity card in order to determine its legality and validity by applying a verification algorithm that is known only to technical persons accredited for verification.

2 Related Projects: A Brief Review

The exchange of cybersecurity information is improving the detection and prevention of cyber incidents by reducing the losses caused by attacks and eliminating the costs of duplicated cyber-defense efforts. However, privacy is one of the main concerns of organizations when collecting security information to share it externally [4]. As our cyber society develops and expands, the importance of cybersecurity operations grows in response to cybersecurity threats that come from beyond national borders [5]. Therefore, several industry specifications are emerging that define information schemes for such exchanges. However, these specifications each define their own schemes, since their objectives and the types of information they handle differ, and the desirable scheme differs according to the purpose; here, a way of coding the information through steganography is presented. Below are some projects in which steganography is used.

A. Steganography using the TCP/IP protocol
The TCP/IP protocol is used for communication between several people making use of computers interconnected by the Internet. This protocol is appropriate for creating covert channels of information, because through the headers used by this protocol data can be sent that can be seen or discovered only by two entities that agree on an algorithm for decrypting the information [6, 7].

B. Malware control
Nowadays malware is used to establish a cyber-attack control point on a software product installed on a computer, concealing channels of malicious communication from the end user. Steganography is used, for example, in the Waledac worm, which reviews information about software downloads in modules installed on a computer [3].

C. Watermarks
This is a technique used to hide information on certain objects. For example, currency in the form of banknotes carries a watermark used to verify whether a note is legal or counterfeit, since printers and photocopiers add information to the paper in said watermark and, if required, this information is extracted and analyzed visually to validate the currency. When the same procedure is applied to digital images it is known as a digital watermark, used to add copyright information to the content [8, 9].

D. Steganography in operating systems
There are tools for operating systems that allow hiding virtual drives and hiding information in images and audio, such as the Steganos Security Suite used on Windows operating systems [9]. Another tool, the open-source OpenStego, allows the concealment of messages encrypted with passwords so that only the holder of the password can decrypt the message [9].

3 Design of an Algorithm to Hide Information in a Digital Image Using the LSB Technique in Fortran

Image files have data areas that are not very important for the visualization of the corresponding image; if these areas are changed or altered, the image shows no perceptible visual change, so they can be used to hide information with near certainty that the alteration will go unnoticed [10, 11]. The least significant bit (LSB) method consists in replacing the last bit of each byte; this can be done in several bytes at the same time and the image will not show a noticeable change. Each substituted bit is part of a letter of the message to be hidden in the given image [12]. In general, images with the PGM extension are very useful for applying the LSB technique, since their content is stored without any compression algorithm; this means the content of the file is more readable and easier to work with, since it is in ASCII format (American Standard Code for Information Interchange) (Fig. 1). The content of an image with the PGM extension can be visualized using a simple text editor, so the ASCII information can be seen as a representation of whole numbers whose values represent the intensity of color: 255 is white and 0 is black.


Fig. 1. Example image with PGM extension

Fig. 2. Example image with PGM extension

If each row of the matrix in Fig. 2 represents image data and every last digit, which is the least significant bit, is altered, then the eight bits corresponding to each altered byte would represent a letter belonging to a word of the hidden message. If this sequence is repeated several times, more information, such as phrases or sentences, can be hidden (Fig. 3).

Fig. 3. a) Byte alteration to hide a letter using the least significant bits; b) obtaining several bytes of LSBs to represent a word.
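The bit manipulation sketched in Fig. 3 can be made concrete. The following Python fragment is an illustration by analogy, not the authors' Fortran implementation; it embeds the eight bits of one ASCII character into the least significant bits of eight pixel values from a PGM row and reads them back.

```python
def hide_char(pixels, ch):
    """Embed the 8 bits of character `ch` (MSB first) into the LSBs of 8 pixel values."""
    bits = [(ord(ch) >> (7 - i)) & 1 for i in range(8)]
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[8:]

def reveal_char(pixels):
    """Read the LSBs of 8 pixel values back into one ASCII character."""
    code = 0
    for p in pixels[:8]:
        code = (code << 1) | (p & 1)
    return chr(code)

row = [200, 201, 202, 203, 204, 205, 206, 207]  # one row of a PGM pixel matrix
stego = hide_char(row, 'A')
print(stego)               # each pixel value changes by at most 1
print(reveal_char(stego))  # -> 'A'
```

Since each pixel value changes by at most one gray level out of 255, the alteration remains imperceptible to the human eye, as described above.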

3.1 Design of an Algorithm for Information Concealment in a PGM Image

Below is a flow chart indicating an algorithm model for reading the hidden information in PGM images (Fig. 4). The different subroutines and functions for carrying out this process were written in the Fortran programming language and showed a quick response in execution: the reading of the image content, the conversion of whole numbers to binary, the search for the least significant bits, and the conversion of the whole-number representation to ASCII letters together had an execution time of approximately 0.002 s.


Fig. 4. Algorithm model for reading the hidden information in PGM images
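As a complement to the flow chart, the following short Python sketch follows the same reading workflow under two stated assumptions: the carrier is a plain-text (P2) PGM file without comment lines, and a NUL byte marks the end of the hidden message. It is an illustrative reimplementation, not the authors' Fortran code.

```python
def read_hidden_message(path):
    """Extract a message hidden in the LSBs of a plain-text (P2) PGM image."""
    with open(path) as f:
        tokens = f.read().split()
    assert tokens[0] == "P2", "plain-text PGM expected"
    pixels = [int(t) for t in tokens[4:]]  # skip magic, width, height, max gray
    message = []
    for i in range(0, len(pixels) - 7, 8):
        code = 0
        for p in pixels[i:i + 8]:      # gather 8 least significant bits
            code = (code << 1) | (p & 1)
        if code == 0:                  # assumed NUL terminator: message ends
            break
        message.append(chr(code))
    return "".join(message)
```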

4 Conclusions

Steganography is a useful tool in the field of computer security, where it is required to verify that private information is correctly used by the determined user through encryption with a personal key. There are already several software products that carry out this process; one of these is OpenStego, which encrypts the information using steganography and, through passwords, ensures that only the bearer of the password will be able to visualize said information. The use of the Fortran programming language has great advantages for steganography, since access to memory spaces by means of pointers allows the release of memory at runtime, which provides a high degree of efficiency in the execution of a program. Fortran, being a programming language oriented to mathematical calculation in general, has tools that facilitate its application, expressly the matrix calculation used in steganography. The exchange of cybersecurity information is improving the detection and prevention of cyber incidents; by implementing the proposed method in text images, apart from the protocol used for information privacy, an integrated coding system without high computational cost is obtained using Fortran.


References

1. Paz Menvielle, M.A., et al.: Metodología para usar la esteganografía como medio de acreditar la validez de la documentación publicada electrónicamente. In: XIII Workshop de Investigadores en Ciencias de la Computación (2011)
2. Zaynalov, N.R., Qilichev, D., Rahmatullaev, I.: Classification and ways of development of text steganography methods. Theor. Appl. Sci. 228–232 (2019)
3. Febryan, A., Purboyo, T.W., Saputra, R.E.: Steganography methods on text, audio, image and video: a survey. Int. J. Appl. Eng. Res. 12(21), 10485–10490 (2017)
4. Vakilinia, I., Tosh, D.K., Sengupta, S.: Privacy-preserving cybersecurity information exchange mechanism. In: 2017 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS), pp. 1–7. IEEE, July 2017
5. Takahashi, T., Panta, B., Kadobayashi, Y., Nakao, K.: Web of cybersecurity: linking, locating, and discovering structured cybersecurity information. Int. J. Commun. Syst. 31(3), e3470 (2018)
6. Kadhim, J.M., Abed, A.E.: Steganography using TCP/IP's sequence number. Al-Nahrain J. Sci. 20(4), 102–108 (2017)
7. Sanchez Mamani, J.W., Huirse Cruz, S.A.: Prototipo de software para el control de las vulnerabilidades esteganográficas del protocolo HTTP de la capa aplicación en la oficina de tecnología informática de la Universidad Nacional del Altiplano 2015 (2017)
8. AlKhamese, A.Y., Shabana, W.R., Hanafy, I.M.: Data security in cloud computing using steganography: a review. In: 2019 International Conference on Innovative Trends in Computer Engineering (ITCE), pp. 549–558. IEEE, February 2019
9. Cuzco Naranjo, R.H.: Propuesta de un método esteganográfico como soporte al proceso de seguridad de transferencia de imágenes (2017)
10. Sharikov, P.I., Krasov, A.V., Gelfand, A.M., Kosov, N.A.: Research of the possibility of hidden embedding of a digital watermark using practical methods of channel steganography. In: Kotenko, I., Badica, C., Desnitsky, V., El Baz, D., Ivanovic, M. (eds.) IDC 2019. SCI, vol. 868, pp. 203–209. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-32258-8_24
11. Navarro Clemente, A.: Esteganografía. Universidad Rey Juan Carlos, España, October 2019
12. Rehman, A., Saba, T., Mahmood, T., Mehmood, Z., Shah, M., Anjum, A.: Data hiding technique in steganography for information security using number theory. J. Inf. Sci. 45(6), 767–778 (2019)

Multiple LDPC Code Combined with OVCDM to Improve Signal Coding Efficiency and Signal Transmission Effects Zhang Fan1 and Zhang Hong2(B) 1 Electronic Communication Engineering School, Anhui Xinhua

University, Hefei, Anhui, China 2 Electrical Engineering School, Anhui Polytechnic University, Wuhu, Anhui, China

Abstract. In the field of artificial intelligence-related computing, the wireless sensor network (WSN), as an important part of the Internet of Things, is a data-centered network. As WSNs generally adopt digital communication systems, there are problems such as a limited sampling rate, low data encoding efficiency and low spectrum efficiency. The project team plans to combine multiple LDPC codes with an OVCDM system to conduct an in-depth study of WSN node data coding. The research idea is to use multiple LDPC codes combined with OVCDM, on the premise of source coding and channel coding, to improve the error-correction performance and greatly improve the coding efficiency and spectral efficiency. The specific method is to establish a WSN data encoding and decoding system model and to study the bit error rate and signal-to-noise ratio of data encoding under different fading channels when different processing methods are used, in order to study the improvement of the data encoding indices.

Keywords: Multiple LDPC code · Overlapping code division multiplexing · Wireless sensor network · Compressed sensing · Data encoding · Spectrum efficiency

1 Introduction

In order to reduce redundancy and workload, research on systematic and efficient source coding and channel coding of the massive data collected by WSN nodes has become an issue of increasing concern and considerable challenge. At present, source coding focuses on compressing the rate of information output from the source in order to improve the effectiveness of the system. Channel coding adds specific redundant elements according to certain rules when information is sent; at the receiving end, special algorithms and rules are used to find and correct symbol errors, aiming to enhance reliability at the cost of a minimal number of supervision symbols, thus effectively overcoming noise and interference in the channel and ensuring that the communication transmission is reliable and secure.


However, the above coding methods are based on the traditional Nyquist sampling criteria; their data encoding efficiency and spectral efficiency are not very high, although some of them have approached the Shannon limit. Digital communication systems are generally adopted in WSNs, especially in the stage of rapid development of 5G, which also brings problems such as a limited sampling rate and low data coding efficiency. It is urgent to study how to improve data encoding performance and effectively enhance spectrum efficiency and signal transmission effects.

2 Multiple LDPC Coding and Decoding System

In recent years, digital compression coding has been widely used in communication. In order to ensure the stability and security of signals, the coding of signals is subject to strict requirements. Common digital signal codes include linear block codes, convolutional codes, LDPC codes, Turbo codes and so on. The LDPC code (Low-Density Parity-Check code) is a linear block code with a sparse parity-check matrix. With the rapid development of coding technology, it has become one of the research hotspots in this field. Its outstanding feature is that it is applicable to almost all channels, and its decoding complexity is very low, so it is easy to analyze and study theoretically.

2.1 Features of LDPC

Through the generator matrix G, the information sequence to be sent by an LDPC code is mapped one-to-one onto the transmitted sequence, i.e., the codeword sequence. For each specific generator matrix G, matrix theory guarantees the existence of a mathematically equivalent parity-check matrix H, and all codeword sequences C constitute the null space of H. Because the construction rules differ, the sparsity of the check matrix H also differs, and the bipartite graphs of different LDPC codes have different closed-loop (cycle) distributions; these closed loops are an important factor affecting the performance of LDPC codes. In addition, the belief-propagation decoding algorithm for LDPC codes is essentially a parallel algorithm whose decoding speed is very high, so it is very easy to implement in parallel on hardware.

2.2 Encoding and Decoding Characteristics of Multiple LDPC Codes

The definition of the LDPC code is based on the binary field. Scholars have generalized the binary-field LDPC code, forming the multiple (non-binary) LDPC code. By contrast, the multiple LDPC code has better error-correction performance than binary LDPC codes and a stronger ability to resist burst errors. Its error-correction performance can be further improved through the choice of the non-zero elements of the check matrix, which guarantees superiority over a binary-field LDPC code of equivalent length while the structure of the bipartite graph remains unchanged.
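The null-space property above can be illustrated with a toy example. The sketch below, in Python with NumPy, checks that the syndrome H·c is zero over GF(2) for a codeword; the matrix is a small dense illustration, not a realistic sparse LDPC check matrix.

```python
import numpy as np

# Toy binary parity-check matrix H (a real LDPC H is large and sparse).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, c):
    """All-zero syndrome over GF(2) means c lies in the null space of H."""
    return H.dot(c) % 2

c = np.array([1, 0, 1, 1, 1, 0])
print(syndrome(H, c))  # [0 0 0] -> c is a valid codeword
```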


3 Overlapped Code Division Multiplexing System

3.1 OVCDM Systems

OVCDM (Overlapped Code Division Multiplexing) is an overlapped code division multiplexing technology invented by Professor Li Daoben of Beijing University of Posts and Telecommunications in China. Over the past ten years, together with OVTDM, OVSDM, OVFDM, OVHDM and other technologies, it has formed a series of coding multiplexing technologies with high spectrum efficiency [1–3]. Through the overlapping principle (between symbols within a user) and the superposition principle (between symbols of different users), the coding adopts the principle and technology of combined coding and modulation to form a specific coding constraint relationship, and thereby effectively achieves ultra-high spectral efficiency.

3.2 Operating Principle of OVCDM

In essence, the overlapped code division multiplexing system is a generalized convolutional coding system, and its system model can be expressed as

$V_n^T = F\left( \sum_{l=0}^{\min(n,\,L-1)} U_{n-l}^T B_l \right)$   (1)

where F(*) represents a monotone nonlinear transformation in one-to-one correspondence with its argument, $U_n^T = [\tilde{u}_{n,0}, \tilde{u}_{n,1}, \ldots, \tilde{u}_{n,K-1}]^T$ is the K-dimensional parallel complex data vector, and the encoding matrix of order $K \times LN$ is expressed as $B = [B_0, B_1, \ldots, B_{L-1}]$. See Fig. 1 for the working principle diagram of the corresponding system [4].
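To make Eq. (1) concrete, the following hedged Python sketch evaluates the generalized convolution for randomly chosen coding matrices, taking F(*) as the identity purely for illustration; the dimensions K, N and L are arbitrary example values.

```python
import numpy as np

K, N, L = 2, 3, 2                                    # example dimensions
rng = np.random.default_rng(0)
B = [rng.standard_normal((K, N)) for _ in range(L)]  # coding matrices B_0..B_{L-1}
U = [rng.standard_normal(K) + 1j * rng.standard_normal(K) for _ in range(5)]

def ovcdm_output(n):
    """V_n^T = F(sum_{l=0}^{min(n, L-1)} U_{n-l}^T B_l), with F = identity here."""
    acc = np.zeros(N, dtype=complex)
    for l in range(min(n, L - 1) + 1):   # overlap over the last L input vectors
        acc += U[n - l] @ B[l]
    return acc

print(ovcdm_output(3))
```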

Fig. 1. OVCDM system model

The essence of channel coding is to add various constraint conditions artificially between symbols to effectively ensure the reliability and security of symbol transmission.


According to Professor Li Daoben's theory, the overlapping of weighted symbols can, to some extent, also be regarded as a special constraint relationship, so the reliable transmission of symbols can be guaranteed without reducing the spectral efficiency. OVCDM is a parallel convolutional coding model in the complex domain. Through overlapped code division multiplexing in the complex domain, the coded output of the system approximates a Gaussian distribution, and the spectral efficiency of the system is greatly improved. If the coding matrix is chosen properly, the system still has an obvious coding gain compared with high-order modulation at the same spectral efficiency.

4 Research on Improvement of WSN Data Coding and Decoding by Using Multiple LDPC and an OVCDM System

In the wireless sensor network communication architecture, the large amount of data collected by sensor nodes can be compressed by overlapped time-domain multiplexing and compressed sensing [5]. The small amount of compressed data is transmitted by the cluster head node to the fusion center through multi-hop transmission, and the collected data can be recovered with a corresponding algorithm. Through this design, only a small amount of data needs to be transmitted and analyzed with a specific algorithm to obtain a detailed picture of the whole monitored area. In terms of theoretical research, it is proposed to adopt LDPC coding to improve the error-correction performance and then combine it with OVCDM to greatly improve the coding efficiency and spectral efficiency. In terms of experimental research, LDPC coding is used in simulation studies of data coding under different fading channels for one-dimensional signals (taking voice as an example) and two-dimensional signals (taking images as an example); the convolutional coding model of OVCDM is then established on this basis to verify the degree of correlation with the theory. The system model of the specific study is shown in Fig. 2. In this case, the encoding part is the same as in Fig. 1, but the decoding part uses the MAP algorithm [6].

[Fig. 2 block diagram: transmitter — GF(q) LDPC encoder → interleaver → digital modulation → S/P → OVCDM encoder → F(*) → cyclic-prefix insertion → digital up-converter → channel; receiver — digital down-converter → cyclic-prefix removal → S/P → OVCDM decoder → P/S → digital demodulation → deinterleaver → GF(q) LDPC decoder.]

Fig. 2. System model of multi-LDPC code combined with OVCDM

Specifically, a typical WSN data coding and decoding system model is established, including PCM (source) coding, FM modulation, FM demodulation, PCM (source) decoding and other basic parts. On this basis, speech and images are processed and simulated respectively. By using different processing methods in turn to study the bit error rate and signal-to-noise ratio of data encoding in different fading channels, we explore whether LDPC and OVCDM can greatly improve the data encoding efficiency and spectral efficiency. Among them, a speech coding/decoding scheme based on the wavelet transform and CS is adopted for speech, while images are block-partitioned, transformed by DCT and DWT using CS compression coding, and reconstructed with OMP. The proposed research steps are detailed as follows.

1. Firstly, the typical WSN data coding and decoding system is established, and speech and images are processed and simulated respectively. The bit error rate and signal-to-noise ratio are obtained in the Rayleigh channel, Gaussian channel, Rice channel (Rice factor k = 2) and other fading channels, and the system performance is then analyzed, as shown in Fig. 3.

[Fig. 3 block diagram: traditional sampling → PCM encoder → FM modulation → FM demodulation → PCM decoder → follow-up data processing.]

Fig. 3. Typical WSN data coding and decoding system, processing and simulating speech and images under different fading channels

2. The system is encoded and decoded by TPC/LDPC codes that approach the Shannon limit, and speech and images are processed and simulated respectively. The bit error rate and signal-to-noise ratio are obtained in the Rayleigh channel, Gaussian channel, Rice channel (Rice factor k = 2) and other fading channels, and the system performance is then analyzed, as shown in Fig. 4.

[Fig. 4 block diagram: traditional sampling → PCM encoder → TPC/LDPC encoder → FM modulation → FM demodulation → TPC/LDPC decoder → PCM decoder → follow-up data processing.]

Fig. 4. Typical WSN data coding and decoding system with TPC/LDPC coding and decoding, processing and simulating speech and images under different fading channels

3. In this system, the traditional sampling method is replaced with compressed sensing sampling, and speech and images are processed and simulated respectively. The bit error rate and signal-to-noise ratio are obtained in the Rayleigh channel, Gaussian channel, Rice channel (Rice factor k = 2) and other fading channels, and the system performance is then analyzed, as shown in Fig. 5.

[Fig. 5 block diagram: compressed sensing sampling → PCM encoder → FM modulation → FM demodulation → PCM decoder → compressed sensing reconstruction.]

Fig. 5. Typical WSN data coding and decoding system with traditional sampling replaced by compressed sensing sampling, processing and simulating speech and images under different fading channels

In the compressed sensing sampling of Fig. 5, a voice encoding/decoding scheme based on the wavelet transform and CS is adopted for voice. For images, a two-dimensional CS block reconstruction method based on the discrete cosine transform (DCT) is adopted, with image blocking and the DCT carried out in the CS compression subroutine and the OMP reconstruction subroutine.

4. The system adopts compressed sensing sampling and TPC/LDPC coding and decoding to process and simulate speech and images respectively. The bit error rate and signal-to-noise ratio are obtained in the Rayleigh channel, Gaussian channel, Rice channel (Rice factor k = 2) and other fading channels, and the system performance is then analyzed, as shown in Fig. 6.

[Fig. 6 block diagram: compressed sensing sampling → PCM encoder → TPC/LDPC encoder → FM modulation → FM demodulation → TPC/LDPC decoder → PCM decoder → compressed sensing reconstruction.]

Fig. 6. Typical WSN data coding and decoding system combining compressed sensing sampling with TPC/LDPC coding and decoding, processing and simulating speech and images under different fading channels

Here, the voice and image methods used for compressed sensing sampling are consistent with Step 3.

[Fig. 7 block diagram: compressed sensing sampling → PCM encoder → OVCDM encoder → FM modulation → FM demodulation → OVCDM decoder → PCM decoder → compressed sensing reconstruction.]

Fig. 7. Typical WSN data coding and decoding system combining compressed sensing sampling with the OVCDM system, processing and simulating speech and images under different fading channels

5. In this system, compressed sensing sampling and OVCDM coding and decoding are used to process and simulate speech and images respectively, and the bit error rate and signal-to-noise ratio are obtained in the Rayleigh channel, Gaussian channel, Rice channel (Rice factor k = 2) and other fading channels; the system performance is then analyzed, as shown in Fig. 7.

6. In this system, compressed sensing sampling together with LDPC and OVCDM coding and decoding is used to process and simulate speech and images respectively, and the bit error rate and signal-to-noise ratio are obtained in the Rayleigh channel, Gaussian channel, Rice channel (Rice factor k = 2) and other fading channels; the performance of the system is then analyzed, as shown in Fig. 8 (a sketch of the common BER/SNR measurement follows Fig. 8).

[Fig. 8 block diagram: compressed sensing sampling → PCM encoder → TPC/LDPC encoder → OVCDM encoder → FM modulation → FM demodulation → OVCDM decoder → TPC/LDPC decoder → PCM decoder → compressed sensing reconstruction.]

Fig. 8. Typical WSN data coding and decoding system combining compressed sensing sampling and TPC/LDPC coding and decoding with the OVCDM system, processing and simulating speech and images under different fading channels
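The sketch below illustrates the common core of Steps 1–6: estimating the bit error rate against the signal-to-noise ratio over different fading channels. It uses uncoded BPSK over AWGN (Gaussian) and flat Rayleigh channels as a hedged baseline; it is not the project's full PCM/TPC-LDPC/OVCDM chain, and the Rice channel and coding stages would be added analogously.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber(snr_db, n_bits=200_000, rayleigh=False):
    """Monte Carlo BER of uncoded BPSK at a given SNR in dB."""
    bits = rng.integers(0, 2, n_bits)
    s = 1.0 - 2.0 * bits                        # BPSK: 0 -> +1, 1 -> -1
    h = np.ones(n_bits)
    if rayleigh:                                # flat Rayleigh fading, E[h^2] = 1
        h = np.abs(rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))  # noise std for the given SNR
    r = h * s + sigma * rng.normal(size=n_bits)
    return np.mean((r < 0).astype(int) != bits)  # coherent sign decision

for snr in (0, 5, 10):
    print(snr, "dB  AWGN:", ber(snr), " Rayleigh:", ber(snr, rayleigh=True))
```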

Due to the variety of speech and image coding schemes, representative signals should be properly processed. The key scientific problems to be solved include: establishing the model and scheme for processing speech signals with CS, the sparsification of speech signals, the reconstruction of the sparse matrix of image signals in the transform domain, and the conditions under which the multiple LDPC code and OVCDM system can be applied to speech or image signals. Therefore, designing a reasonable and effective experimental scheme is a strong guarantee of obtaining reliable results. At the same time, sensor networks have characteristics such as signal instability and poor link states; to address this, network coding technology should be introduced, such as a flooding protocol based on network coding. Restricted by length, the above content is not expanded further here; only a research framework is provided.

5 Conclusion

The project team plans to improve wireless sensor network node data coding by combining multiple LDPC codes and OVCDM. By establishing a typical WSN data encoding and decoding system model, the bit error rate and signal-to-noise ratio of data encoding under different fading channels with different processing methods are studied. Based on the excellent performance of multi-field LDPC codes and the characteristics of OVCDM, the project team is building a system model combining multi-field LDPC codes with OVCDM technology, in an effort to conduct in-depth research on data transmission, coding and decoding, which will provide reference ideas for further enriching the theoretical framework of data coding and decoding.


Acknowledgments. This work was supported by the Special Funding Project of China Postdoctoral Science Foundation (Grants No. 2014T70967); Natural Science Research Key Project of Anhui Province Higher School (Grants No. KJ2017A630); Key Construction Discipline Project at College Level of Anhui Xinhua University (Grants No. zdxk201702); Institute Project at College Level of Anhui Xinhua University (Grants No. yjs201706); the Ninth Batch of Young and Middle-aged “Academic Leaders” Training Objects Project of Anhui Xinhua University (Grants No. 2018xxk14).

References

1. Chen, X.-N.: OVTDM Communication System Implementation and Related Issues. Beijing University of Posts and Telecommunications, Beijing (2019)
2. Li, D.: Theory of Statistical Monitoring and Estimation of Signals, 2nd edn. Science Press, Beijing (2005)
3. Li, D.: Waveform Coding Theory with High Spectral Efficiency. Science Press, Beijing (2013)
4. Gong, X., Cai, X., Chen, Q.: On the performance of the OVCDM technologies. In: Proceedings of 2008 Western China Youth Communication Academic Conference, Chengdu, pp. 403–407 (2009)
5. Bajwa, W., Haupt, J., Sayeed, A., et al.: Compressive wireless sensing. In: Proceedings of the Fifth International Conference on Information Processing in Sensor Networks, IPSN 2006, pp. 134–142. Association for Computing Machinery, New York (2006)
6. Ping, H.: Coding and Modulation Technology. Peking University Press, Beijing (2012)

An Intelligent Systems Development for Multi COVID-19 Related Symptoms Detection: A Framework Mohammed I. Thanoon(B) College of Computers in Al-Leith, Umm Al-Qura University, 28434 Al-Leith, Kingdom of Saudi Arabia [email protected]

Abstract. Since COVID-19 was declared a pandemic, most authorities have been using thermal sensors to detect people with fevers in order to isolate them. Although a wide range of symptoms has been reported in patients confirmed to have COVID-19, fever is the symptom most commonly used for detection in this setting. However, this symptom is common to many other diseases, such as influenza and the common cold. As many studies have suggested, COVID-19 can be detected more accurately if other symptoms are considered. Therefore, the proposed system concatenates multiple COVID-19-related symptoms for a more accurate classification system. The proposed system considers the detection of three symptoms: fever, dry cough, and shortness of breath. It uses a Raspberry Pi and connected sensors to detect these symptoms and then, by using artificial intelligence, classifies people as suspected COVID-19 patients or not. The sensors used to detect the symptoms are a thermal camera, a microphone, an infrared-based camera, and a depth-sensing device. The proposed system functioned well, with accuracies around 97%, 85%, and 96% for fever, cough class, and respiration rate, respectively.

Keywords: COVID-19 · Raspberry Pi · Fever · Dry cough · Shortness of breath · Artificial intelligence

1 Introduction

In 2019, in Wuhan City, China, a new infectious disease was identified and called COVID-19. Soon after, COVID-19 spread around China, and it has since been recognized in several countries around the world [1, 2]. Subsequently, rapid person-to-person transmission caused the number of patients to rise [3]. At the beginning of the outbreak, specifically on January 30, 2020, the Emergency Committee of the World Health Organization (WHO) declared COVID-19 a public health emergency because of its fast-spreading nature; in particular, most infected patients have low immunity to it. On March 11, 2020, WHO affirmed that COVID-19 is a pandemic due to the many cases. At the time, there were 118,000 cases globally [3].


In December of 2019, WHO acknowledged that SARS-CoV-2, the virus that causes COVID-19, can cause respiratory disease. According to WHO, the most frequent symptoms of COVID-19 are fever, dry cough, and tiredness. Other less common symptoms include body pain, aches, nasal congestion, headache, conjunctivitis, sore throat, diarrhea, loss of taste or smell, rashes, and discoloration of fingers or toes. Moreover, the most promising techniques for diagnosing COVID-19 are laboratory testing and medical imaging, specifically computed tomography (CT) scanning. Furthermore, a CT scan is a more precise method for diagnosing COVID-19 because of its ability to detect the severity of pneumonia [4]. Since COVID-19 is a newly emergent disease, few state-of-the-art resources are available to diagnose it. Most available resources utilize deep learning techniques to classify COVID-19 and other respiratory diseases based upon CT scan images [1, 5–8]. Administration of CT scans could increase cross-infection since patients are required to wait in hospitals for hours to be examined. However, a study has suggested that if the number of symptoms considered increases, the reliability of COVID-19 detection increases as well [5]. Also, WHO stated that “people of all ages who experience fever and dry cough associated with difficulty breathing/shortness of breath, chest pain/pressure, or loss of speech or movement, should seek medical attention immediately.” Therefore, the proposed system will detect multiple symptoms. The system will utilize sensors connected to a Raspberry Pi to detect symptoms. Then by using artificial intelligence, the system will classify individuals as suspected COVID-19 patients or not.

2 Background

COVID-19 can be diagnosed using many techniques. According to the American Society for Microbiology, COVID-19 can be diagnosed by performing a viral test, specifically a nucleic acid amplification test [9]. Samples for nucleic acid amplification testing can be collected in many ways, such as nasal swabs. The nucleic acid amplification test indicates whether or not an individual is infected with SARS-CoV-2; it usually takes around two days to obtain results. Another method for the diagnosis of COVID-19, according to [4], is a CT scan of the chest. A CT scan is an examination that produces cross-sectional images of the body using special X-ray equipment. For COVID-19 patients, a CT scan is used to find the degree and severity of lung inflammation [4]. Most confirmed COVID-19 patients have respiratory symptoms (dry cough, shortness of breath, and pneumonia) according to the publication of the Ministry of Health of Saudi Arabia entitled "Coronavirus Disease 19 (COVID-19) Guidelines V1.1" [10]. Without including preparation time, a CT scan takes approximately twenty minutes, and the scans need to be analyzed by radiologists, which consumes time and resources. Finally, detecting a high body temperature will most likely indicate whether an individual is infected with COVID-19, because a published study documented that fever was present in 98.6% of confirmed COVID-19 patients [4]. These symptoms (fever, dry cough, and shortness of breath) can be measured by several methods, and regardless of the method, the measurements are ready in seconds. Based on the above findings, the symptoms considered in the proposed system are fever, dry cough, and shortness of breath.


3 Methodology

The workflow of the detection system creates a low-cost ambulatory system from a Raspberry Pi, algorithms, and sensors. The system and its corresponding algorithms use sensor readings to test for COVID-19 symptoms and then integrate the preliminary results into a single indicator of whether the person is a suspected patient or not, using an artificial intelligence method. Although there are several techniques to test for COVID-19 symptoms, the proposed system considers only these frequent symptoms of COVID-19: fever, dry cough, and shortness of breath. As a result, the platform provides a user-friendly, easy-to-use, and fast solution. Such a system could become mandatory in crowded places such as airports and hospitals for isolating suspected COVID-19 patients from others. Figure 1 details the workflow of the proposed system.

Fig. 1. A top-level view of the system workflow.

As shown in Fig. 1, the first phase of the workflow is collecting the data. Several different sensors, such as thermal cameras and a DS18B20 board, can be used to collect data for the fever symptom. In addition, microphone sensors are capable of recording an individual's cough [13]. Shortness of breath can be detected by watching air temperature changes, measuring the CO2 level, monitoring air pressure changes, or listening to or counting the air inspired and expired by the lungs [14]. The data pre-processing phase, the second phase of the system workflow, is where the data is transformed so that the machines and algorithms can quickly parse it; in other words, so that they can smoothly interpret the features of the data. The third phase of the system workflow is symptom classification. Since each of the previously discussed symptoms has its own level or class, the proposed architecture is intended to determine the level or class of each symptom from the system's sensor readings, in order to decide whether a patient is ill or not. Detecting the symptoms requires the development of a separate subsystem for each symptom. Lastly, the integration of the level or class of each symptom is the main functionality of phase four of the proposed design workflow. This integration is developed based on an artificial intelligence technique, chosen according to the nature of the measured information. The final result of this phase should predict whether a person is suspected of being infected with COVID-19 or not.

4 Design

This section describes the detailed framework of the proposed system, which uses some of the sensors mentioned above to detect symptoms. The proposed system is divided into four subsystems: the fever detection subsystem, the dry cough detection subsystem, the shortness of breath detection subsystem, and the symptoms integration subsystem. The development of the first three subsystems follows the first three phases mentioned above. The symptoms integration subsystem then uses the output data of the first three subsystems as input to classify an individual as a suspected COVID-19 patient or not. Figure 2 shows the detailed framework of the proposed system.

Fig. 2. The detailed framework of the proposed system.

4.1 Fever Detection Subsystem

This subsystem detects the human body temperature level. The sensor used to do so is a thermal (thermographic) camera, which exploits the contrast between a high body temperature and other objects: the camera measures high intensity levels in the infrared spectrum when it detects a high body temperature. The body temperature level (BTL) is then extracted. The BTL value is H if the detected temperature is above a threshold value, and L otherwise.
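A minimal sketch of the BTL decision follows, assuming the thermal camera frame is already available as a two-dimensional array of temperatures in degrees Celsius; the 38.0 °C threshold is an illustrative choice, not a value specified in the paper.

```python
import numpy as np

FEVER_THRESHOLD_C = 38.0   # assumed threshold, for illustration only

def body_temperature_level(frame):
    """Return 'H' if the hottest pixel exceeds the threshold, else 'L'."""
    return 'H' if float(np.max(frame)) > FEVER_THRESHOLD_C else 'L'

frame = np.full((60, 80), 32.0)   # simulated background temperatures
frame[20:30, 30:40] = 38.6        # simulated feverish face region
print(body_temperature_level(frame))  # -> 'H'
```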


4.2 Dry Cough Detection Subsystem

Classifying an individual's cough is the objective of this subsystem. In this research, a microphone is used to record the cough, and the recorded signal is then classified to determine the cough class (CC). The CC value is either W, for wet cough, or D, for dry cough. This was achieved by implementing an artificial intelligence module developed in five stages: recording, labeling, pre-processing, feature extraction, and classification. A published dataset was used to cover the first three stages [15]. Then, using the openSMILE toolkit, features were generated from the cough signals [16]. After that, the random forest classification algorithm was utilized to classify coughs (an illustrative sketch of this stage is given after Sect. 4.4).

4.3 Shortness of Breath Detection Subsystem

This subsystem is structured to determine the patient's level of breath. As mentioned above, there are several methods to achieve this [14]. However, this research uses the respiratory rate (RR) to classify the patient's level of breath. The value of RR is either N, for a normal respiratory rate, or S, otherwise. The RR is calculated as follows: first, infrared light is emitted by a depth-sensing device, such as a Microsoft Kinect or an Intel RealSense camera, into the view of the infrared camera [17]. Then, a sequence of images of the patient is captured by the IR camera [17]. Lastly, by encoding and analyzing these frames, the RR is obtained [17].

4.4 Symptoms Integration Subsystem

The fourth and last subsystem integrates the COVID-19 symptom test results to determine the patient's status (suspected patient, "iSP", or not, "nSP"). This status determination is accomplished using an artificial intelligence technique. The selected technique is the adaptive neuro-fuzzy inference system (ANFIS), due to its simplicity and accuracy. ANFIS is a combination of two well-known artificial intelligence techniques: fuzzy logic and artificial neural networks. ANFIS transforms its input into an output using fuzzy logic, whose parameters are tuned by an artificial neural network [18]. For now, our ANFIS approach adjusts the patient status based on the majority of the COVID-19 symptom test levels or classes to create the training output dataset. Specifically, if more than one-third of the symptom levels or classes are true, the associated training output is iSP.
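The sketch announced in Sect. 4.2 is given here. It shows the shape of the random forest classification stage, assuming fixed-length acoustic feature vectors (for example, openSMILE functionals) have already been extracted per recording; the random arrays below merely stand in for real labeled cough features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 40))   # placeholder: 200 recordings x 40 features
y = rng.integers(0, 2, 200)          # placeholder labels: 0 = wet, 1 = dry

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:160], y[:160])            # simple train / test split
print("held-out accuracy:", (clf.predict(X[160:]) == y[160:]).mean())

def cough_class(features):
    """Map one feature vector to the CC value used by the integration stage."""
    return 'D' if clf.predict(features.reshape(1, -1))[0] == 1 else 'W'
```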

5 Preliminary Results

Here we discuss the results obtained from the developed subsystems. For verification of the fever detection subsystem, the BTL extracted from different volunteers was compared with readings from a commercial medical device. Both readings were within an acceptable range, giving confidence in the fever detection subsystem; the achieved accuracy was 97%. Verifying the dry cough detection subsystem required dividing the above-mentioned dataset into two groups: training and testing. Using the testing group, the accuracy obtained was around 85%, which makes the algorithm trustworthy enough to be used. The verification of the shortness of breath detection subsystem was based on video recordings of users. Each recorded video was replayed and the user's RR was determined manually; the RR determined in this way and the RR detected by the subsystem matched for many of the video clips, with an accuracy of 96%. Finally, the symptoms integration subsystem was tested and performed successfully during many trial runs.

6 Conclusion

In this research, the methodology and development of a novel COVID-19 classification system were discussed in detail. Most recent solutions focus on either analyzing CT scan images using artificial intelligence or machine learning engines, or monitoring patients in isolation using thermal sensors. CT scan solutions suffer from the time and resources required, while thermal sensor-based solutions suffer from a lack of accuracy. The proposed system is intended to sidestep these problems. Moreover, the proposal is based on a Raspberry Pi with sensors attached to it, which gives the system mobility and simplicity. Furthermore, the design considers multiple COVID-19-related symptoms to increase the accuracy of the system, making it more reliable compared with other recent solutions. In addition, if the proposed design supports saving and exchanging the data in the cloud, a dataset can be established; by using multiple systems to gather data from users, the framework dataset will grow continuously. This approach will enable a transfer learning process across multiple devices, and the proposed system could accordingly be enhanced by using transfer learning methodology.

References

1. Chen, J., et al.: Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 10(1), 1–1 (2020)
2. Holshue, M.L., et al.: First case of 2019 novel coronavirus in the United States. New Engl. J. Med. (2020)
3. World Health Organization. http://www.who.int/emergencies/diseases/novel-coronavirus-2019
4. Wang, D., et al.: Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in Wuhan, China. JAMA 323(11), 1061–1069 (2020)
5. BioWorld. https://www.bioworld.com/articles/433530-china-uses-ai-in-medical-imaging-to-speed-up-covid-19-diagnosis
6. Li, L., et al.: Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology (2020)
7. Gozes, O., et al.: Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv preprint arXiv:2003.05037, 10 March 2020
8. Yang, Z., et al.: Modified SEIR and AI prediction of the epidemics trend of COVID-19 in China under public health interventions. J. Thorac. Dis. 12(3), 165 (2020)
9. Patel, R., et al.: Report from the American Society for Microbiology COVID-19 International Summit, 23 March 2020: value of diagnostic testing for SARS-CoV-2/COVID-19


10. The Ministry of Health of Saudi Arabia. https://www.moh.gov.sa/CCC/healthp/regulations/Documents/Coronavirus%20Disease%202019%20Guidelines%20v1.1.pdf
11. Mohammed, M.N., Syamsudin, H., Al-Zubaidi, S., AKS, R.R., Yusuf, E.: Novel COVID-19 detection and diagnosis system using IoT based smart helmet. Int. J. Psychosoc. Rehabil. 24(7), 2296–2303 (2020)
12. Sollu, T.S., Bachtiar, M., Bontong, B.: Monitoring system heartbeat and body temperature using Raspberry Pi. In: E3S Web of Conferences 2018, vol. 73, p. 12003. EDP Sciences (2018)
13. Nemati, E., Rahman, M.M., Nathan, V., Vatanparvar, K., Kuang, J.: A comprehensive approach for cough type detection. In: 2019 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), 25 September 2019, pp. 15–16. IEEE (2019)
14. Woollard, M., Greaves, I.: 4 Shortness of breath. Emerg. Med. J. 21(3), 341–350 (2004)
15. Orlandic, L., Teijeiro, T., Atienza, D.: The COUGHVID crowdsourcing dataset: a corpus for the study of large-scale cough analysis algorithms. arXiv preprint arXiv:2009.11644, 24 September 2020
16. Eyben, F., Wöllmer, M., Schuller, B.: Opensmile: the Munich versatile and fast open-source audio feature extractor. In: Proceedings of the 18th ACM International Conference on Multimedia, 25 October 2010, pp. 1459–1462 (2010)
17. Non-contact real-time monitoring of heart and respiration rates using Artificial Light Texture. https://www.linkedin.com/pulse/use-artificial-light-texture-non-contact-realtime-heart-misharin
18. Thanoon, M.I., McCurry, C.D., Zein-Sabatto, M.S.: A multi-modular sensor fusion and decision-making approach for human-machine teaming. In: NAECON 2018-IEEE National Aerospace and Electronics Conference, 23 July 2018, pp. 203–207. IEEE (2018)

Analysis of Influencing Factors of Depth Perception Yu Gu1,2 , Wei Su3(B) , and Minxia Liu3 1 Springer-Verlag, Computer Science Editorial, Tiergartenstr. 17, 69121 Heidelberg, Germany 2 Beijing Railway Signal Company Limited, No. 456 Sicun Langfa Huangcun, Daxing District,

Beijing 102613, China 3 Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing

100081, China

Abstract. In this experiment, the accuracy of depth perception of 80 undergraduates was measured using a depth perception measuring instrument, and factors such as gender, age, vision level, personality characteristics, hobbies and interaction characteristics were analyzed. The results show that the accuracy of depth perception with both eyes is significantly higher than with one eye, and that the accuracy of depth perception of subjects who love ball games is significantly higher than that of subjects who love literature and art activities. The influence of acquired factors on individual depth perception is higher than that of genetic factors.

Keywords: Depth perception · Binocular effect · Personality · Gender · Hobby · Vision

1 Introduction

Depth perception refers to the ability of the human visual organs to perceive the three-dimensional spatial position of objects at far and near distances. Human beings live in three-dimensional space, yet what the human retina receives is a two-dimensional image; how we nevertheless correctly understand the world has long been a focus of psychologists and physiologists, and the mechanism of depth perception is a frontier topic in the field of cognitive science. Depth perception comes from two sources: one is the monocular mechanism, the other is binocular parallax, and together they constitute the system of depth judgment. Monocular judgment of spatial depth is mainly based on perspective effects, texture and the familiar scale of the object, while binocular judgment of spatial depth comes from the different visual images received by the two eyes for the same object, with the stereo image obtained after processing in the brain. Because their principles differ, the effectiveness of the resulting depth judgments also differs.

Wei Su—The project number supporting this study is 2017YFB1102802.


There are many factors affecting depth perception, including objective factors, such as the test method, lighting conditions and test time, as well as subjective factors, such as age, gender, vision and hobbies, together with one's own physiological and psychological state. Depth perception has special assessment significance for certain special tasks, such as driving a car, piloting an aircraft and refereeing sports competitions. The purpose of this experiment is to discuss the causes and effects of depth perception by exploring the relationship between depth perception and subjective and objective factors.

2 Method

The sample consisted of 80 undergraduates from Beijing Institute of Technology, aged 18–23, with an average age of 20.95. There were 47 males and 33 females; 15 had normal vision and 65 were myopic; 30 reported a love of literature and art and 23 a love of ball sports. An EP503 depth perception instrument was used.

3 Results

Through mathematical statistics on the 80 samples, we analyze the correlation of depth perception with gender, vision, dominant eye, personality, hobbies, age, etc., in order to find the main factors related to depth perception and to study the relationships between these factors and depth perception and their causes.

3.1 Depth Perception and Gender Factors

The disparity judgment threshold under each experimental condition was used as the test variable, and gender was used as the grouping variable. The results of the statistical analysis are shown in Tables 1 and 2. At the 95% confidence level, the experimental data of each group showed no significant difference in the mean value of depth perception between male and female students (significance greater than 0.05), indicating that the depth perception of male and female students follows similar rules. Levene's test likewise showed no significant difference in the variance of depth perception between male and female students at the 95% confidence level (significance greater than 0.05), which indicates that the degree of fluctuation of binocular parallax is similar for male and female students. According to research on human evolution, there is no necessary relationship between the structure of the human eye and gender, so no significant difference between men and women should be expected; the experimental results reflect this rule. Applying the same comparative analysis to the three factors of vision, dominant eye and personality shows that none of them has a significant effect on depth perception.
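The comparison reported in Tables 1 and 2 follows the standard procedure of a Levene test for equality of variances followed by an independent-samples t-test. As an illustration only — the arrays below are placeholders, not the study's data — the same analysis can be reproduced in Python with SciPy:

```python
import numpy as np
from scipy import stats

# Placeholder samples standing in for the disparity judgment thresholds
# of the two gender groups (47 males, 33 females in the study).
male = np.random.default_rng(0).normal(1.97, 0.77, 47)
female = np.random.default_rng(1).normal(2.04, 0.67, 33)

# Levene's test for equality of variances (left half of Table 2).
lev_stat, lev_p = stats.levene(male, female)

# Independent-samples t-test: pooled-variance form when Levene's test
# is non-significant, Welch's form otherwise.
t_stat, p_value = stats.ttest_ind(male, female, equal_var=lev_p > 0.05)

print(f"Levene: F={lev_stat:.3f}, p={lev_p:.3f}")
print(f"t-test: t={t_stat:.3f}, p={p_value:.3f}")  # p > 0.05 -> no gender effect
```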


Table 1. Group statistics

                          Gender   N    Mean value   Standard deviation   Standard error of mean
Total mean                Male     47   1.9668       .77194               .11260
                          Female   33   2.0445       .66725               .11615
Average single increase   Male     47   2.7911       1.32769              .19366
                          Female   33   3.0264       1.23484              .21496
Average single minus      Male     47   2.7719       1.39444              .20340
                          Female   33   2.6812       1.02529              .17848
Both increased            Male     47   1.1338       .57568               .08397
                          Female   33   1.2973       .51814               .09020
Double subtraction        Male     47   1.1651       .58254               .08497
                          Female   33   1.1733       .72752               .12664

Table 2. Independent sample test

(Levene test: F, Sig.; t-test of mean value equation: t, df, bilateral Sig., mean difference MD, standard error SE, 95% confidence interval lower/upper limits)

Total mean
  Equal variance:    F = 1.362, Sig. = .247; t = −.468, df = 78, Sig. = .641, MD = −.07774, SE = .16597, CI [−.40817, .25269]
  Unequal variance:  t = −.481, df = 74.583, Sig. = .632, MD = −.07774, SE = .16177, CI [−.40003, .24456]
Average single increase
  Equal variance:    F = .185, Sig. = .668; t = −.803, df = 78, Sig. = .424, MD = −.23530, SE = .29307, CI [−.81875, .34815]
  Unequal variance:  t = −.813, df = 72.022, Sig. = .419, MD = −.23530, SE = .28933, CI [−.81207, .34147]
Average single minus
  Equal variance:    F = 3.168, Sig. = .079; t = .318, df = 78, Sig. = .751, MD = .09070, SE = .28529, CI [−.47728, .65868]
  Unequal variance:  t = .335, df = 77.803, Sig. = .738, MD = .09070, SE = .27061, CI [−.44805, .62946]
Both increased
  Equal variance:    F = .344, Sig. = .559; t = −1.302, df = 78, Sig. = .197, MD = −.16344, SE = .12555, CI [−.41339, .08650]
  Unequal variance:  t = −1.326, df = 73.237, Sig. = .189, MD = −.16344, SE = .12323, CI [−.40903, .08215]
Double subtraction
  Equal variance:    F = .016, Sig. = .899; t = −.056, df = 78, Sig. = .955, MD = −.00823, SE = .14671, CI [−.30030, .28384]
  Unequal variance:  t = −.054, df = 58.981, Sig. = .957, MD = −.00823, SE = .15251, CI [−.31340, .29694]

3.2 Depth Perception and Hobby (Sports) Factors

The disparity judgment threshold under each experimental condition was used as the test variable, and hobby as the grouping variable. The analysis results are shown in Table 3 and Table 4.

Table 3. Group statistics

                          Hobby                N    Mean value   Standard deviation   Standard error of mean
Total mean                Literature and art   30   2.3850       .81497               .14879
                          Ball games           23   1.6500       .53354               .11125
Average single increase   Literature and art   30   3.5023       1.29389              .23623
                          Ball games           23   2.3591       1.04310              .21750
Average single minus      Literature and art   30   3.2547       1.50526              .27482
                          Ball games           23   2.3035       1.03786              .21641
Both increased            Literature and art   30   1.4223       .58104               .10608
                          Ball games           23   .9409        .45558               .09500
Double subtraction        Literature and art   30   1.3567       .73933               .13498
                          Ball games           23   .9930        .47415               .09887

At the 95% confidence level, the experimental data showed significant differences in the mean depth judgment threshold between subjects with different hobbies (significance less than 0.05). In every group of statistics, the depth judgment error of the subjects who like ball games is far smaller than that of the subjects who like literature and art activities (means 1.6500 < 2.3850). This shows that hobby is one of the factors that affect the minimum disparity judgment threshold. Analysis of the variances further shows that, at the 90% confidence level, the variance of depth perception differs between the hobby groups in several experimental conditions. In each group of statistics, the fluctuation range of the depth error of the subjects who like ball games is far smaller than that of the subjects who like literature and art activities (standard deviations 0.53354 < 0.81497). The results show that the fluctuation of the minimum binocular-parallax discrimination threshold is small for subjects who like ball games.

Specifically, the factor explored in this group of experiments is whether regular participation in sports (especially ball games such as football, basketball, volleyball, tennis, badminton and table tennis) leads to differences in depth perception. Since the 19th century, many studies have held that perception is controlled by genes and is hereditary. In the 1970s–1980s, a series of studies showed that perception (especially depth perception) can change through the accumulation of experience and gradually forms in the process of individual development. The discussion about the origin of perception continues to this day, without a unified conclusion. According to the results of this experiment, we can find that depth perception does have a certain relationship with whether one engages in ball games.


Table 4. Independent sample test

(Levene test: F, Sig.; t-test of mean value equation: t, df, bilateral Sig., mean difference MD, standard error SE, 95% confidence interval lower/upper limits)

Total mean
  Equal variance:    F = 3.636, Sig. = .062; t = 3.749, df = 51, Sig. = .000, MD = .73500, SE = .19606, CI [.34138, 1.12862]
  Unequal variance:  t = 3.956, df = 49.922, Sig. = .000, MD = .73500, SE = .18578, CI [.36183, 1.10817]
Average single increase
  Equal variance:    F = 2.753, Sig. = .103; t = 3.460, df = 51, Sig. = .001, MD = 1.14320, SE = .33042, CI [.47987, 1.80654]
  Unequal variance:  t = 3.560, df = 50.844, Sig. = .001, MD = 1.14320, SE = .32111, CI [.49850, 1.78791]
Average single minus
  Equal variance:    F = 3.622, Sig. = .063; t = 2.592, df = 51, Sig. = .012, MD = .95119, SE = .36695, CI [.21450, 1.68788]
  Unequal variance:  t = 2.719, df = 50.513, Sig. = .009, MD = .95119, SE = .34980, CI [.24877, 1.65360]
Both increased
  Equal variance:    F = 4.415, Sig. = .041; t = 3.274, df = 51, Sig. = .002, MD = .48146, SE = .14705, CI [.18625, .77667]
  Unequal variance:  t = 3.381, df = 50.961, Sig. = .001, MD = .48146, SE = .14240, CI [.19558, .76735]
Double subtraction
  Equal variance:    F = 2.449, Sig. = .124; t = 2.055, df = 51, Sig. = .045, MD = .36362, SE = .17699, CI [.00831, .71894]
  Unequal variance:  t = 2.173, df = 49.632, Sig. = .035, MD = .36362, SE = .16732, CI [.02749, .69975]

3.3 Depth Perception and Binocular Factors

The parallax judgment thresholds under each experimental condition were used as test variables, with the monocular and binocular conditions forming the test pairs, and a paired-sample t-test was performed (significance level 0.05). The analysis results are shown in Table 5, Table 6 and Table 7. At the 95% confidence level, the experimental data show significant differences in the depth judgment threshold between the monocular and binocular conditions (significance less than 0.05). With binocular viewing, the depth perception error decreased significantly (2.8881 > 1.2013, 2.7345 > 1.1685). This suggests that monocular versus binocular viewing is the main factor affecting the minimum disparity judgment threshold. At the 95% confidence level, the significance values of the paired-sample correlation coefficients were less than 0.05, showing that the results are strongly correlated within individuals: individuals with good/poor monocular test results usually have good/poor binocular test results.
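As with the gender comparison, the paired analysis in Tables 5, 6 and 7 can be sketched with SciPy; again, the arrays below are placeholders rather than the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder paired measurements for the 80 subjects:
# monocular vs. binocular disparity judgment thresholds.
rng = np.random.default_rng(42)
binocular = rng.normal(1.20, 0.56, 80)
monocular = binocular + rng.normal(1.69, 1.24, 80)  # monocular errors are larger

# Pearson correlation within subjects (cf. Table 6).
r, r_p = stats.pearsonr(monocular, binocular)

# Paired-sample t-test on the monocular-binocular difference (cf. Table 7).
t_stat, p_value = stats.ttest_rel(monocular, binocular)

print(f"r={r:.3f} (p={r_p:.3f}), t={t_stat:.3f} (p={p_value:.3f})")
```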

Table 5. Paired sample statistics

                                      Mean value   N    Standard deviation   Standard error of mean
Paired 1   Average single increase    2.8881       80   1.28750              .14395
           Both increased             1.2013       80   .55522               .06208
Paired 2   Average single minus       2.7345       80   1.24902              .13964
           Double subtraction         1.1685       80   .64188               .07176

Table 6. Paired sample correlation coefficient

                                                        N    Correlation coefficient   Sig.
Paired 1   Average single increase & Both increased     80   .300                      .007
Paired 2   Average single minus & Double subtraction    80   .341                      .002

Table 7. Paired sample test

(Pairwise difference: mean value, standard deviation, standard error of mean, 95% confidence interval; then t, df, Sig. (bilateral))

Paired 1   Average single increase & Both increased:    Mean = 1.68687, SD = 1.24000, SE = .13864, CI [1.41093, 1.96282], t = 12.168, df = 79, Sig. = .000
Paired 2   Average single minus & Double subtraction:   Mean = 1.56600, SD = 1.19393, SE = .13349, CI [1.30030, 1.83170], t = 11.732, df = 79, Sig. = .000

3.4 Depth Perception and Interaction Factors

The same method was used to analyze the interactions between the various factors; the analysis results are shown in Table 8 and Table 9. Comparing the data for the different influencing factors (see Table 8), the correlation between hobby and depth perception is the highest. Among the interactions between the influencing factors, the interaction between gender and personality is the most significant (significance less than 0.05).


In the correlation analysis matrix, the correlation between gender and hobby was also significant (significance less than 0.05).

Table 8. Tests of between-subjects effects (dependent variable: total mean)

Table 9. Correlation matrix


If the interactions between factors are not considered, it is easy to obtain spurious correlation results (for example, if A relates to B and B relates to C, one may wrongly conclude that A relates to C). In this study, gender and hobby, and gender and personality, are closely related: most female subjects are introverted and love literature and art, while most male subjects are extroverted and love ball games. However, because of the large sample, only part of the subjects filled in the personality and hobby items, so the analysis of the gender factor yields no conclusion that gender relates to depth perception.
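Interaction effects of this kind are what a two-way between-subjects ANOVA tests. A hedged sketch follows — the column names and data are invented for illustration, and statsmodels is used here rather than the SPSS procedure behind Tables 8 and 9:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented example data: total-mean threshold with gender and personality labels.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "threshold": rng.normal(2.0, 0.7, 80),
    "gender": rng.choice(["male", "female"], 80),
    "personality": rng.choice(["introvert", "extrovert"], 80),
})

# 'C(gender) * C(personality)' expands to both main effects plus their
# interaction term, mirroring a tests-of-between-subjects-effects table.
model = ols("threshold ~ C(gender) * C(personality)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```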

4 Conclusion

From the analysis of the experimental data, we can draw the following conclusions: (1) the accuracy of depth perception in the binocular test is higher than in the monocular test; (2) the accuracy of depth perception of people who like ball games is higher than that of people who like literature and art activities; (3) the other factors showed no significant relationship with depth perception.

Software Instruments for Analysis and Visualization of Game-Based Learning Data

Boyan Bontchev1(B), Yavor Dankov1, Dessislava Vassileva2, and Martin Kovachev1

1 Faculty of Mathematics and Informatics, Sofia University "St Kl. Ohridski", Sofia, Bulgaria
{bbontchev,yavor.dankov}@fmi.uni-sofia.bg
2 Scientific Research Department, Sofia University "St Kl. Ohridski", Sofia, Bulgaria

Abstract. With the development of technology, the requirements of users of educational video games are constantly changing, creating the need to integrate and use tools for monitoring the information, data and activity related to the processes of creating, designing, managing and playing educational video games. This paper builds on the authors' research and publications on the development of specialized software tools and their implementation in the educational software platform APOGEE (smArt adaPtive videO GamEs for Education). We present an initial prototype of an online system for learning and gaming analytics of playing enriched educational maze games and discuss issues of its system design such as the software architecture, the organization of its data model, and the user interface. The integration of these tools in the software platform will provide the necessary functionalities for data monitoring, data processing, and evaluating the individual results of the learners.

Keywords: Analytics · Educational games · Software instruments · APOGEE

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Z. Ahram et al. (Eds.): AHFE 2021, LNNS 271, pp. 395–402, 2021.
https://doi.org/10.1007/978-3-030-80624-8_49

1 Introduction

Educational video games are a popular topic, gaining interest daily from practitioners, researchers and educators, as well as from the growing number of users of educational video games. The benefits of applying educational video games in teaching students (so-called game-based learning) have been proven in many scientific publications and developments [1–4]. With the development of technology, the requirements of users of educational video games are constantly changing. This is one of the reasons that determine the need for integration and use of tools for monitoring the information, data and activity related to the processes of creating, designing, managing and playing educational video games. This paper is based on the authors' research and publications on the development of specialized software tools for the application of the Taxonomy of Instruments for Facilitated Design and Evaluation of Video Games for Education (TIMED-VGE) [5], the design of the basic functionality of these tools [6, 7], and their implementation in the educational


software platform APOGEE (smArt adaPtive videO GamEs for Education) [8, 9]. Educational video games need data to meet the rapidly changing and dynamic user requirements of different stakeholders: for a specific design and architectural environment [10], for educational games with defined and personalized educational goals and rich educational resources [11] – learning content – and for captivating educational games and learning user experience through game-based learning (playing user experience). The focus of this paper is on developing such tools to support educational video gameplay and design.

In this paper, we present a collection of learning and gaming analytics software instruments and their progressive, partial integration into the APOGEE software platform. The paper presents the initial prototype of an online system for learning and gaming analytics of playing enriched educational maze games, which forms part of the assessment and visualization tools of the APOGEE platform. It discusses issues of its system design such as the software architecture, the organization of its data model, and the user interface. Special attention is paid to tracking the learning and playing progress of individuals conforming to the APOGEE student model. Next, the paper reveals how this system is integrated into the APOGEE platform for the design and creation of educational video games. The integration of these tools in the software platform will provide the necessary functionalities for data monitoring, data processing, setting various metrics on the data, and defining criteria for evaluating both the individual data/results of each learner and the overall process of designing, creating, managing and evaluating educational video games. The processed data from the analytics tools and their visualization for the users, through various methods of data and insight visualization, will serve as a basis for further analyses and improvements of the educational maze games designed by applying specific game-based learning scenarios. These improvements will enhance both the playability and learnability of educational video mazes enriched by various puzzle mini-games embedded into the halls of the maze, together with the personalization of the learning contents based on specific characteristics of the player model.

The paper is organized as follows: it proceeds with an introduction of the analytics instruments of the APOGEE platform developed by applying the TIMED-VGE taxonomy. Section 3 presents the development of the online system for learning and gaming analytics of playing enriched educational maze games, including descriptions of the software architecture, the organization of its data model, and the user interface. The paper ends with a brief discussion and a conclusion.

2 Background Works

As stated in the introduction, we base our research on previous publications [5–7] presenting the development of the APOGEE software platform instruments and their main capabilities. The APOGEE software platform for educational video games consists of two main categories of software instruments – assistive and analytics instruments. As presented in Fig. 1 [5, 7], the analytics instruments are divided into three groups: Learning Analytics, Gaming Analytics, and Analytics for Users.


Fig. 1. Application of the TIMED-VGE for the development of the analytics instruments in the APOGEE software platform [7] based on [5]

The process of developing and designing the functionalities of the analytics instruments is described in [7]. The analytics instruments in the APOGEE platform will serve as useful tools to bridge the gap between the abundance of data and the user's need for the relevant data [12]. These tools provide the necessary functionalities for data monitoring, data processing, data visualization, and evaluation of both the individual data and learners' results, as well as of the overall process of designing, creating, managing and evaluating educational video games. For the purposes of this study, this paper focuses primarily on the analytics instruments.

3 Online System for Learning and Gaming Analytics of Data About Playing Enriched Educational Maze Games

3.1 Software Architecture

The online system created for learning and gaming analytics of data about playing enriched educational maze games provides all the analytics instruments presented in Fig. 1. In order to obtain learning and gaming analytics, teachers have to create and deploy on the server their educational mazes (enriched with various mini-games representing appropriate didactic tasks), and students have to provide self-reports (by filling in quizzes about basic characteristics, playing/learning style, and game playability/learnability).


Fig. 2. Software architecture of the system

Figure 2 provides a simplified view of the system architecture. Teachers, students and administrators log in to the system through the User Manager module. Teachers use the Quiz Builder to construct their quizzes and the Game Manager to deploy their games, while students can play each game one or more times. For each game session, a log file is created. The Log File Processor applies the detailed log-file data to calculate and store in the database (DB) learning and gaming features such as outcomes, effectiveness, efficiency, time, and others. The administrators apply a Statistic Editor to save into the DB various statistics and the formulas for their calculation, such as correlations, p values, etc. The Analytics Viewer presents playing data together with selected statistics for these data and the results from self-reports obtained by the Quiz Player.

3.2 The Organization of Its Data Model

In this subsection, we present the organization of the data model of the instruments of the APOGEE software platform, illustrated by the diagram in Fig. 3. The diagram presents the main entities that store data and information in the APOGEE database. The model includes 26 entities that are interconnected and specifically related to each other with the respective relationships described in the model. At the centre of the model is the Users entity, which stores all data and information about the users of the APOGEE platform. System users can have different roles (described by the Role entity in the model) such as Admin, Content User, Game Creator, or Student. The User entity is also directly interconnected with the entities Games, Game Assets, Learning Contents, Maze Game Results, Puzzle Games Results, Other Student


Data, Playing Styles, Quizzes, Quiz Question Responses. Each of these entities stores specific and defined data and information.

Fig. 3. The data model of the instruments of the APOGEE software platform
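To make the relationships concrete, the sketch below models a small subset of these entities as Python dataclasses. The field names are illustrative assumptions — the paper names the entities but not their attributes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Role:
    name: str  # e.g. "Admin", "Content User", "Game Creator", "Student"

@dataclass
class MazeGameResult:
    game_id: int
    # Features the Log File Processor derives per session (assumed fields):
    outcome: float
    effectiveness: float
    efficiency: float
    time_spent_s: float

@dataclass
class User:
    user_id: int
    role: Role
    results: List[MazeGameResult] = field(default_factory=list)
```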

We have presented the data model of the instruments of the APOGEE software platform as simply as possible for the purpose of this paper, and we use this model as a base for further development of the initial prototype of the online system for learning and gaming analytics of playing enriched educational maze games.

3.3 Organization of the System User Interface

The organization of the system user interface realizes the functionalities of the online system for learning and gaming analytics of data about playing enriched educational maze games. Figure 4 provides a simplified view of the organization of the main user interface pages of the system used by students and teachers (pages for administering the system are not shown here). All users should first register with the system (with e-mail and password) in order to be able to log in. After a successful login, teachers can upload their educational maze games (not shown in Fig. 4). Students can edit their profile (including names, nickname, email, and password) and answer some questions:

• about their basic student properties (such as age, gender, school grade, experience in fun gaming and in educational game playing, learning goals, playing goals, and initial knowledge of the learning subject);
• about their learning style;
• about their playing style.


Fig. 4. Organization of the main user interface pages of the system

Students can answer the questions about basic characteristics and learning and playing styles as many times as they like. Next, they are allowed to play gaming sessions with any of the maze games deployed and registered by the teachers. In order to see their results for a given maze game (like Game001) – together with the results of the other players – students have to answer at least once the questions of the post-game quiz regarding game learnability and playability. Next, they are allowed to see the results for playing the maze Game001 and, if they like, to check their results for all the mini-games contained in the halls of the Game001 maze. To view these results, students select the name of the mini-game they are interested in. Their results are shown together with the results of the other students who played the same mini-game. Figure 5 presents the students' results for the mini-game "Word Soup" available in the maze hall named "Epoch". Teachers are able to see the same results from the game playing sessions as the students do. Moreover, teachers see the analytics menu shown at the bottom of Fig. 4, where they can select two data columns and a statistic to be applied to them, such as the Pearson correlation r, Student's t-test (p value), Cohen's effect size d, etc. For this purpose, the formulas for calculating these statistics should be saved to the DB (through the Statistic Editor).


Fig. 5. A view of a web page showing results about playing a Mini-Game
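The statistics offered in the analytics menu are standard measures; as a minimal sketch (not the platform's own implementation, and with hypothetical inputs), the three mentioned above can be computed as follows:

```python
import math
from statistics import mean, variance
from scipy import stats

def pearson_r(xs, ys):
    """Pearson correlation between two selected data columns."""
    r, _p = stats.pearsonr(xs, ys)
    return r

def ttest_p(xs, ys):
    """Two-sided p value of an independent-samples Student t-test."""
    _t, p = stats.ttest_ind(xs, ys)
    return p

def cohens_d(xs, ys):
    """Cohen's effect size d using the pooled standard deviation."""
    nx, ny = len(xs), len(ys)
    pooled_var = ((nx - 1) * variance(xs) + (ny - 1) * variance(ys)) / (nx + ny - 2)
    return (mean(xs) - mean(ys)) / math.sqrt(pooled_var)
```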

4 Conclusion

The paper presented the initial prototype of an online system for both learning and gaming analytics of playing enriched educational maze games, as part of the assessment and visualization tools of the APOGEE platform. The data model of the instruments of the APOGEE software platform presents the main entities that store all data and information in the APOGEE database and describes the relationships between those entities. The user interface is presented with a description of the UI organization and of the web pages. We argue that the integration of the analytics instruments in the APOGEE software platform will provide the necessary functionality for data manipulation and the extraction of valuable data insights, and will serve as a basis for further analysis and improvement of educational maze games and of the APOGEE software platform itself.

Acknowledgments. The research leading to these results has received funding from the APOGEE project, funded by the Bulgarian National Science Fund, Grant Agreement No. DN12/7/2017, and from the National Scientific Program "Information and Communication Technologies in Science, Education and Security" (ICT in SES) financed by the Ministry of Education and Science of Bulgaria.


References

1. Plass, J., Mayer, R., Homer, B. (eds.): Handbook of Game-Based Learning. MIT Press, Cambridge, MA (2020). ISBN 9780262043380
2. O'Connor, E.: Virtual reality: bringing education to life. In: Bradley, E. (ed.) Games and Simulations in Teacher Education. Advances in Game-Based Learning, pp. 155–167. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-44526-3_11
3. Song, K., Lockee, B., Burton, J.: Theories for gamification in learning and education. In: Gamification in Learning and Education. Advances in Game-Based Learning. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-47283-6_5
4. Jabbar, A., Felicia, P.: How game-based learning works and what it means for pupils, teachers, and classroom learning. In: Tan, W.H. (ed.) Design, Motivation, and Frameworks in Game-Based Learning, pp. 1–29. IGI Global (2019). https://doi.org/10.4018/978-1-5225-6026-5.ch001
5. Dankov, Y., Bontchev, B.: Towards a taxonomy of instruments for facilitated design and evaluation of video games for education. In: Proceedings of the 21st International Conference on Computer Systems and Technologies 2020 (CompSysTech 2020), pp. 285–292. ACM (2020). https://doi.org/10.1145/3407982.3408010
6. Dankov, Y., Bontchev, B.: Software instruments for management of the design of educational video games. In: Ahram, T., Taiar, R., Groff, F. (eds.) Human Interaction, Emerging Technologies and Future Applications IV. IHIET-AI 2021. Advances in Intelligent Systems and Computing, vol. 1378, pp. 414–421. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-74009-2_53
7. Dankov, Y., Bontchev, B.: Designing software instruments for analysis and visualization of data relevant to playing educational video games. In: Ahram, T., Taiar, R., Groff, F. (eds.) Human Interaction, Emerging Technologies and Future Applications IV. IHIET-AI 2021. Advances in Intelligent Systems and Computing, vol. 1378, pp. 422–429. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-74009-2_54
8. APOGEE Official Website. http://www.apogee.online/index-en.html. Accessed 22 Nov 2020
9. Bontchev, B., Vassileva, D., Dankov, Y.: The APOGEE software platform for construction of rich maze video games for education. In: Proceedings of the 14th International Conference on Software Technologies - Volume 1: ICSOFT, pp. 491–498. SciTePress (2019). https://doi.org/10.5220/0007930404910498. ISBN 978-989-758-379-7
10. Andreeva, A.: Colorful and general-artistic aspects of architecture and design. View-points. In: Aesthetic Achievements of the Exhibition Activities of Technical University-Sofia 2009_2019, vol. 1(1), pp. 78–96. Technical University-Sofia (2019). ISSN 2682-9797
11. Terzieva, V., Paunova-Hubenova, E., Todorova, K., Kademova-Katzarova, P.: Learning analytics - need of centralized portal for access to e-learning resources. In: Big Data, Knowledge and Control Systems Engineering (BdKCSE), Sofia, Bulgaria, pp. 1–8 (2019). https://doi.org/10.1109/BdKCSE48644.2019.9010600
12. Dankov, Y., Birov, D.: General architectural framework for business visual analytics. In: Shishkov, B. (ed.) BMSD 2018. LNBIP, vol. 319, pp. 280–288. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94214-8_19

Work Accident Investigation Software According to the Legal Requirements for Ecuadorian Companies

Raúl Gutiérrez1(B) and Karla Guerra2

1 Universidad de Las Américas Quito, Avenida de Los Granados, E12-41 y Colimes, Quito, Ecuador
[email protected]
2 Universidad de Sevilla, Camino de los Descubrimientos s/n., 41092 Sevilla, España
[email protected]

Abstract. In Ecuador, workplace accidents are rarely reported to the control entities. This under-registration limits the investigation of such events, which prevents the identification and control of their generating causes. This study proposes a methodological framework for investigating occupational accidents that encompasses the technical and legal parameters governing the process in Ecuador. A computer tool called "INVAC Software" has been developed to facilitate its application. It is a web-type application with a modular structure, built by applying the life-cycle stages of the waterfall model. INVAC could help Ecuadorian companies of all kinds to systematically outline and manage the activities and documentation related to the work accident investigation process. In this way, it favours compliance with the current legal framework and strengthens internal preventive management.

Keywords: Occupational accidents · Incident investigation · Safety and health · Safety management · Software · Root cause analysis

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Z. Ahram et al. (Eds.): AHFE 2021, LNNS 271, pp. 403–410, 2021.
https://doi.org/10.1007/978-3-030-80624-8_50

1 Introduction

The harm to workers derived from the materialization of occupational risks is very varied. In work accidents, however, the consequences appear violently and immediately, and the potential for both human and material loss is greater, sometimes leading to the death of the affected person. According to data compiled by the International Labour Organization (ILO, 2005), around 270 million work accidents occur globally per year, with more than 2 million deaths [1]. Estimates made from state agencies' data put the annual number of occupational accidents in Ecuador at 400,000, with 534 deceased workers and an under-registration with the control entity of about 80.7% of the total [2]. The underreporting of accidents brings an even greater problem with it, since such events have not been subject to an investigation process; therefore, their causes have not been identified, much less controlled. This problem poses new challenges for organizations, which find it


necessary to innovate their processes related to the treatment of accidents to ensure compliance with legal obligations.

A wide variety of reference frameworks are used in workplace accident investigation. Among the most widely used are the guides developed by the International Labour Organization (ILO) [3], the Occupational Safety and Health Administration (OSHA) of the United States [4], the Health and Safety Executive (HSE) of the United Kingdom [5] and the National Institute of Occupational Safety and Health (INSSL) of Spain [6]. Likewise, there is an important diversity of causal analysis methods used in investigating occupational accidents, among which the fault tree method, the Symptoms, Causes, Remedies, Actions (SCRA) method and the 5-why method stand out for their widespread use [7, 8].

In Ecuador, the reporting and investigation of work-related accidents constitute an obligation for companies of all kinds, enshrined in the current legal framework through the Regulations of the General Insurance of Work Risks, Resolution CD 513 [9], of the Ecuadorian Institute of Social Security (IESS). This regulation establishes, among other things, the typology, characteristics, qualification criteria, levels of incapacity and coverage of benefits associated with occupational diseases and accidents at work. It also includes employers' obligations concerning the reporting and investigation of the causes generating these events and the implementation of control activities aimed at reducing their incidence. However, this document does not establish a clear methodological framework for developing an effective accident investigation process, beyond compliance with a single reporting format. Prevention professionals are thus free to develop their own investigation protocols and formats, into which a wide variety of technical and methodological recommendations are adapted and incorporated. This can make it difficult to compare companies in similar sectors, limiting the generation of collective knowledge about the main causes of these events.

This study integrates the technical and legal requirements that govern the investigation of work accidents in Ecuador through a logical and systematic scheme of activities whose central point is to analyse the causality of the events. To facilitate compliance and application, a software tool has been developed to systematise the information regarding all cases in a company, favouring the reduction of accident rates, the monitoring of control actions aimed at avoiding repetition, and the improvement of working conditions.

2 Materials and Methods

As shown in the literature referenced in previous sections, there is no single, universally validated methodological framework for executing the accident investigation process; therefore, any methodology that allows the objectives to be achieved could be valid. In recognition of the complexity of the process, however, it is advisable to have a method that follows this logical sequence of activities [10]: i) information gathering; ii) information integration; iii) detection of causes; iv) arrangement of causes; v) proposal and application of corrective activities.

Figure 1 shows the logical scheme proposed in this study for developing the process, which describes the systematic sequence of activities to investigate workplace accidents


within organizations. This scheme integrates all the technical and legal requirements established in Resolution CD 513 (IESS) for Ecuadorian companies, in addition to the guidelines established in international guides. The central point of this methodological scheme is the multidisciplinary application of a causal analysis method. The fault tree and SCRA have been selected as root cause analysis methods because of their widespread use nationally and internationally. These methods have in common the breakdown of the problem to find a root cause that must be controlled.

As shown in Fig. 1, after the occurrence of an accident, efforts are focused on providing the health care required by the affected worker(s), either through internal means (the company's medical clinic) or through external means (public hospitals). Once the situation is controlled, the preliminary information related to the worker and the event is recorded. Subsequently, an initial assessment of the event's characteristics is carried out to define the need to start the investigation process.

The investigation process begins with a survey of the primary information related to the case; for this, on-site inspections are carried out, video recordings are collected (if any), and the testimony of witnesses and involved personnel is taken. Subsequently, a summary of the event's characteristics is developed to complete the report format to be delivered to the control entity (IESS). Next, a multidisciplinary investigation team is formed, which applies the causal analysis methods (fault tree and SCRA method) in a participatory way to establish the chain of events that triggered the accident. Once the causes of the accident have been raised and organized, the investigating team proceeds to establish preventive and corrective actions to avoid the repetition of events with similar characteristics. Finally, based on all the information collected, the extended investigation report is prepared, which is delivered to the control entity for validation after being approved within the company.

Following the proposed framework and to facilitate its application, a computer tool called "INVAC Software" was developed. In determining the software life cycle, the waterfall model was followed, due to its maturity and widespread use in the development of computer tools [11]. The life cycle stages are shown in Fig. 2.
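Both selected methods decompose an accident into successively deeper causes until root causes are reached. A minimal sketch of that shared structure follows — it is not INVAC's actual implementation, and the node labels are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CauseNode:
    """One node in a cause tree; leaf nodes are the root causes to control."""
    description: str
    children: List["CauseNode"] = field(default_factory=list)

def root_causes(node: CauseNode) -> List[str]:
    """Collect the leaves, i.e. the causes that corrective actions target."""
    if not node.children:
        return [node.description]
    causes = []
    for child in node.children:
        causes.extend(root_causes(child))
    return causes

# Invented example: a fall from a ladder traced back through two branches.
accident = CauseNode("Fall from ladder", [
    CauseNode("Ladder slipped",
              [CauseNode("No anti-slip feet (maintenance not planned)")]),
    CauseNode("Worker carried load while climbing",
              [CauseNode("No procedure for manual handling on ladders")]),
])
print(root_causes(accident))
```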

3 Results

3.1 System Architecture and Functionalities

The INVAC Software is a web-type application, developed in the Java® programming language, which can serve several users through an internal data network or a wider universe of users through the Internet. The tool has been designed with a modular structure, which allows the different actors to manage it according to the activities that correspond to them, following their responsibilities as specified in the process's methodological framework. These modules are: i) Administration module; ii) Health care module; iii) Investigations module. Figure 3 shows the interface corresponding to the software login screen (all system elements have been programmed in Spanish, Ecuador's official language).


Fig. 1. Accident investigation process framework for INVAC software.


Fig. 2. Life cycle development model for INVAC software, based on [11].

Before the software is applied, the information related to the company and the users, i.e., intervention roles, permissions and access credentials, is set through the Administration module.

Fig. 3. INVAC login interface.

3.2 Opening a New Case

According to the methodological framework, an investigation case only begins when control of the situation has been assured and the affected person stabilized. At that time,


the person in charge of the Company Medical Service proceeds to collect the information regarding health care and the event's general details. A new case is opened in the INVAC medical care module, in which the information pertinent to the affected worker and the necessary details of the event are entered, as shown in Fig. 4.

Fig. 4. Box to enter the information related to a new case.

3.3 Safety Inspection and Witness Interviews

Within 24 h after the accident, the prevention technician conducts a safety inspection at the accident site. The technician collects all the information related to material and organisational elements to determine possible unsafe conditions that could have intervened in the event's materialisation. This information is entered in the corresponding fields of the software, as shown in Fig. 5. Subsequently, interviews are carried out with eyewitnesses, non-eyewitnesses or technical witnesses of the event, whose testimonies and signatures of responsibility are included in the spaces provided within the software.

Fig. 5. On-site safety inspection interface.


3.4 Root Cause Analysis and Investigation Report

At this stage of the process, a multidisciplinary team is formed whose members are selected based on their experience and knowledge of the process, machinery, materials and personnel involved in the event. Once the team is assembled, all the information gathered regarding the case is shared, and a participatory application of the causal analysis methodology is carried out, with the prevention technician acting both as part of the team and as the moderator of the meeting. As a result of this participatory assessment, the sequence of events is obtained, from the accident's occurrence back to the detection of root causes. Figure 6 shows an example of the scheme resulting from the application of the cause tree through INVAC.

The detected causes are ordered and prioritized, and then related to the reference causes included in Resolution CD 513, which may be: i) direct causes (related to dangerous conditions and actions); ii) indirect causes (related to work and worker factors); iii) basic or management causes. Subsequently, the planning of corrective and preventive actions aimed at controlling the causes is included. Finally, based on all the information entered in the different stages, the tool produces a comprehensive report of the investigation, which constitutes the central document and evidence of the case's treatment.

Fig. 6. Scheme example of the fault tree methodology result in INVAC.
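The mapping of detected causes onto the CD 513 reference categories is essentially a lookup. A hedged illustration follows — the cause-type keys are invented labels, not the regulation's wording:

```python
# Reference categories from Resolution CD 513: direct causes (dangerous
# conditions and actions), indirect causes (work and worker factors),
# and basic or management causes.
CD513_CATEGORY = {
    "dangerous_condition": "direct",
    "dangerous_action": "direct",
    "work_factor": "indirect",
    "worker_factor": "indirect",
    "management_factor": "basic",
}

def classify_cause(cause_type: str) -> str:
    """Return the CD 513 category for a detected root cause type."""
    return CD513_CATEGORY.get(cause_type, "unclassified")

print(classify_cause("worker_factor"))  # -> "indirect"
```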

4 Conclusions This study proposes a reference framework for investigating workplace accidents, applicable to organizations of all kinds, but specially designed for Ecuadorian organizations since it integrates the technical and legal requirements established in resolution C.D 513


(IESS). This methodological framework includes the systematic sequence of activities, resources and responsibilities needed to identify, evaluate and control the causes that generate occupational accidents and incidents. The tool called "INVAC Software" has been developed to facilitate and extend its application.

INVAC is a tool that facilitates the investigation and comprehensive management of accidents and incidents related to work. It is a web-type application with a modular structure, which allows its differentiated use according to the assignment of permissions and user roles matching their specific responsibilities in the process's internal management. The tool integrates two of the most widely used cause analysis methods, the fault tree and the SCRA method. An intuitive and friendly environment favours its application for the entry of information, and the graphical rendering of the diagrams corresponding to these methods highlights the identified root causes.

Among the characteristics of INVAC, the generation and management of a general accident database for an organization stands out, together with a specific database of information and documentary evidence for the treatment of each case. This facilitates the reporting of accidents to the control entities and the fulfilment of the applicable legal requirements.

Acknowledgments. The authors want to thank Universidad de las Américas Quito – Ecuador for its support in the presentation of this research.

References

1. Organización Internacional del Trabajo: Información Sobre Seguridad en el Trabajo (2005)
2. Valenzuela, R., Bravo, M.E., Gómez, A.R.: Subregistro de accidentes de trabajo en Ecuador: nuevas evidencias, limitaciones y prioridades. Universidad, Cienc. y Tecnol. 24, 33–40 (2020)
3. International Labour Organization: Investigation of Occupational Accidents and Diseases: A Practical Guide for Labour Inspectors. 11(2). Geneva (2015)
4. Occupational Safety and Health Administration: Incident (Accident) Investigations: A Guide for Employers (2015)
5. Health and Safety Executive: Investigating Accidents and Incidents: A Workbook for Employers, Unions, Safety Representatives and Safety Professionals (2004)
6. Ardanuy, T.P.: NTP 442: Investigación de accidentes-incidentes: procedimiento (1995)
7. Aurisicchio, M., Bracewell, R., Hooey, B.L.: Rationale mapping and functional modelling enhanced root cause analysis. Saf. Sci. 85, 241–257 (2016)
8. Chi, C.F., Lin, S.Z., Dewi, R.S.: Graphical fault tree analysis for fatal falls in the construction industry. Accid. Anal. Prev. 72, 359–369 (2014)
9. Instituto Ecuatoriano de Seguridad Social: Resolución C.D 513: Reglamento del Seguro General de Riesgos del Trabajo (2016)
10. Bestraten, M., Gil, A.: NTP 592: La gestión integral de los accidentes de trabajo (I): tratamiento documental e investigación de accidentes (2001)
11. Ruparelia, N.B.: Software development lifecycle models. ACM SIGSOFT Softw. Eng. Notes 35(3), 8–13 (2010)

Systemic Analysis of the Territorial and Urban Planning of Guayaquil

María Lorena Sánchez Padilla1(B), Jesús Rafael Hechavarría Hernández1, and Yoenia Portilla Castell2

1 Faculty of Architecture and Urbanism, University of Guayaquil, Guayas, Ecuador
{maria.sanchezpa,jesus.hechavarriah}@ug.edu.ec
2 Higher University Training Institute, Guayaquil, Ecuador
[email protected]

Abstract. Urban planning in Latin America has historically affected rural areas, giving way to uncontrolled urban concentration. The metropolis of Guayaquil registers the highest urban growth rate in Ecuador, with 97.46% of its population in the urban area and only 2.54% in rural areas. This study analyzes the main problems detected in the urban planning of Guayaquil, identifying the social actors, endogenous resources and other components fundamental to local development programs, with a focus on the five rural parishes of the Guayaquil canton. It concludes that local development historically has not obeyed models or strategies conceived with the participation of academia, although in recent years a group of researchers has shown special interest in bringing their scientific results to society. The systemic analysis carried out on the planning of Guayaquil constitutes an opportunity to position the territory as an option for rural and community tourism, also considering the urban health regulations that the COVID-19 pandemic has imposed.

Keywords: Systemic approach · Urban planning · Local development · Guayaquil

María Lorena Sánchez Padilla, Architect graduated from the Catholic University of Santiago de Guayaquil, with 25 years of experience in the supervision of Specific Projects, Directorate of Urban Planning, Appraisals and Land Use Planning. She holds a master's degree in Landscape Architecture from the University of Cuenca, Ecuador. She is currently a professor at the Faculty of Architecture and Urbanism of the University of Guayaquil.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Z. Ahram et al. (Eds.): AHFE 2021, LNNS 271, pp. 411–417, 2021.
https://doi.org/10.1007/978-3-030-80624-8_51

1 Introduction

In 2010, the population of Ecuador was 14,483,499 inhabitants, placing Guayaquil, port city and main economic center of the country, among the ten most populated cities with a population of 2,350,915, representing 16.23% of the national total [1]. Guayaquil also had the highest rate of urban growth, concentrating 97.46% of the canton's population in the urban area, while the remaining 2.54% comprises the population of its


five rural parishes. According to National Institute of Statistics and Censuses projections, by 2020 the population of the Guayaquil canton had increased to 2,723,665 inhabitants.

Local development historically has not obeyed models or strategies conceived with citizen participation and academia, nor has urban legislation oriented to the Right to the City been considered. Practical experience has predominated, and on occasion similar cases have been replicated without considering that each locality must adjust to the characteristics and singularities of its territory and community, no matter how close or similar they are to each other. In recent years, however, territorial and urban planning has aroused scientific interest due to the growing housing deficit experienced in Guayaquil. Special attention has been given to a very disadvantaged sector of the population in terms of proposals for housing spaces, as is the case of low-income dwellings [2–4], for which adequate construction systems are analyzed [5] and sustainable bioclimatic alternatives are sought under a systemic approach [6–8].

Taking into account the fulfillment of the Sustainable Development Goals in the urban planning of Guayaquil, ecological proposals are considered in urban planning processes [9–11], together with noise pollution [12] and the inclusion and increase of green areas through proposals for native vegetation resistant to environmental challenges [13]. New methods are also used to support decision-making [14], in addition to considering the special conditions of disability [15, 16] and universal access in the design stages [17].

Urban planning must take into account aspects such as natural resources, production systems and the diversification of the local economy, as well as the social and cultural sphere, where the diversity and heterogeneity of human groups, tangible and intangible assets, social capital and ethnic diversity have a significant influence.

2 Systemic Analysis of the Planning of Guayaquil Ecuador is made up of 24 provinces, the province of Guayas is in the coastal region of the country (see Fig. 1), politically divided into 25 cantons, from which 50 urban parishes and 29 rural parishes are derived.

Fig. 1. Political map of Ecuador, location of the Guayas province. Source: [18]


The capital of this province is Guayaquil, located between the Guayas River and the Salado estuary, with easy access to the Pacific Ocean through the Gulf of Guayaquil, and with an approximate area of 6,020 km2. The National Secretariat for Planning and Development establishes 9 administrative planning zones [19] with the aim of equitably distributing the services that the Ecuadorian state must provide to its inhabitants (see Fig. 2).

Fig. 2. Ecuador Planning Zones. Source: [19]

Planning zone 8 (see Fig. 3) is made up of Guayaquil together with the neighboring cantons of Durán and Samborondón. This territory brings together important industrial, commercial and tourist productive activities with important state and public infrastructures, among which the port terminals, through which 85% of the country's non-oil cargo passes [20], and the airport terminal stand out.

Fig. 3. Planning Zone 8, Guayaquil, Durán and Samborondón cantons. Source: [21]


In addition, financial, educational and real estate services consolidate zone 8 as an economic power [21].

2.1 Rural Territory

The planning of rural territories proposes a differential treatment of development with respect to the urban environment, in which all entities bring their synergies together under a comprehensive territorial systemic approach. As a rural development policy, it reaffirms that local development does not respond only to an economic development model, but also to the dynamic interaction of rural society with society in general.

In 2016, four years before the COVID-19 pandemic, the challenges and opportunities of rural areas were debated in the city of Cork, Ireland [22]. It remains valid that these territories are attractive places to live and work, but they must adapt to the challenges of interrelation and collaboration between the countryside, the city and the services their inhabitants require, in order to avoid migration and to develop a rural economy in accordance with the challenges of globalization.

The rural territory has historically been linked to agriculture, which provides us with food and raw materials; other goods and services, such as the conservation of biodiversity, the regulation of the water cycle, the capture of CO2 and the prevention of erosion, are not directly perceptible or measurable.

Guayaquil, within its territorial circumscription, has five rural parishes (see Fig. 4), each with its parochial head; the cantonal and parochial governments form part of the stable administrative and political organization of the different levels of decentralized autonomous governments [23]. The purpose of the Territorial Ordinance of the Guayaquil canton is to apply a set of democratic and participatory policies that allows its appropriate

Fig. 4. Circumscription of the territory of the Guayaquil. Source: Guayaquil Municipality [24]


territorial development, as well as the conception of planning and autonomy for territorial management, complementing economic, social and environmental planning with a territorial dimension, rationalizing interventions on the territory and guiding its development and sustainable use [24].

2.2 Endogenous Resources

A principle intrinsically linked to development is the use of the endogenous resources that a territory engenders, which may be capable of generating and sustaining its development and improving the quality of life of its population; those resources related to the environment must be exploited rationally to guarantee long-term sustainability. However, the world faces not only economic, social and environmental challenges (the three pillars currently recognized as components of sustainable development), but also cultural challenges, culture being one of the essential components of development.

The actions and challenges implemented in the territory under a systemic approach must be integrated within a geographic and economic territorial unit, because endogenous resources are not limited to material goods: the population itself, as an independent subject, is the most important resource, and local development initiatives are developed on the basis of it and for it.

3 Conclusions

Meeting the objectives of sustainable development under current economic and health conditions is a challenge for developing countries, especially considering the new challenges imposed by the COVID-19 pandemic and its mutations. However, the urban and territorial planning of Guayaquil is managing to integrate society, the productive sector and academia in the search for sustainable alternatives under a systemic approach in which economic, social and environmental criteria are considered, including citizen participation in decision-making processes as a practice of the Right to the City.

References

1. INEC, Instituto Nacional de Estadística y Censos: Resultados del Censo (2010). http://redatam.inec.gob.ec/cgibin/RpWebEngine.exe/PortalAction?&MODE=MAIN&BASE=CPV2010&MAIN=WebServerMain.inl
2. Hechavarría, J., Forero, B., Bermeo, P., Portilla, Y.: Social inclusion: a proposal from the University of Guayaquil to design popular housing with citizen participation. https://weefgedc2018.org/wp-content/uploads/2018/11/58_Social-Inclusion-A-proposal-from-the-University-of-Guayaquil-to-design-popular-housing-with-citizen-participation.pdf
3. Ricaurte, V., Almeida Chicaiza, B.S., Hechavarría Hernández, J.R., Forero, B.: Effects of the urban form on the external thermal comfort in low-income settlements of Guayaquil, Ecuador. In: Charytonowicz, J., Falcão, C. (eds.) AHFE 2019. AISC, vol. 966, pp. 447–457. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-20151-7_42
4. Hechavarría Hernández, J.R., et al.: Low-income dwelling bioclimatic design with CAD technologies. A case study in Monte Sinahí, Ecuador. In: Ahram, T., Karwowski, W., Vergnano, A., Leali, F., Taiar, R. (eds.) IHSI 2020. AISC, vol. 1131, pp. 546–551. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-39512-4_85


5. Almeida Chicaiza, B.S., Díaz, J.A., Rodríguez, P.B., Hechavarria Hernández, J., Forero, B.: An alternative graphic and mathematical method of dimensional analysis: its application on 71 constructive models of social housing in Guayaquil. In: Ahram, T. (ed.) AHFE 2019. AISC, vol. 965, pp. 598–607. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-20454-9_59
6. Dick, S., Hechavarría Hernández, J.R., Forero, B.: Systemic analysis of bioclimatic design of low-income state-led housing program "socio vivienda" at Guayaquil, Ecuador. In: Ahram, T., Karwowski, W., Taiar, R. (eds.) IHSED 2018. AISC, vol. 876, pp. 647–651. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-02053-8_99
7. Hernández, J.R.H., Jaramillo, R.V., Fuentes, B.F.: Multi-objective optimization applied to the bioclimatic design of dwellings with ecomaterials. In: Ahram, T., Karwowski, W., Taiar, R. (eds.) IHSED 2018. AISC, vol. 876, pp. 506–511. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-02053-8_77
8. Forero, B., Hechavarría, J., Vega, R.: Bioclimatic design approach for low-income dwelling at Monte Sinahí, Guayaquil. In: Di Bucchianico, G. (ed.) AHFE 2019. AISC, vol. 954, pp. 176–185. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-20444-0_17
9. Avila Beneras, J., Fois Lugo, M., Hechavarría Hernández, J.R.: Strategies for accessibility to the Teodoro Maldonado hospital in Guayaquil. A design proposal focused on the human being. In: Ahram, T., Karwowski, W., Vergnano, A., Leali, F., Taiar, R. (eds.) IHSI 2020. AISC, vol. 1131, pp. 1256–1262. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-39512-4_192
10. Tisalema, S., Hechavarría, J., Vega, G., Calero, M.: Ecological waste planning. Case study: comprehensive waste management plan at the Simón Bolívar Air Base, Guayaquil, Ecuador. In: Karwowski, W., Ahram, T., Etinger, D., Tanković, N., Taiar, R. (eds.) IHSED 2020. AISC, vol. 1269, pp. 245–250. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-58282-1_39
11. Salvador Minuche, A., Pin Guerrero, R.M., Hechavarría Hernández, J.R., Leyva Vázquez, M.: Ecological border implementation: proposal to urban-natural transition in Nigeria, Guayaquil. In: Ahram, T., Taiar, R., Langlois, K., Choplin, A. (eds.) IHIET 2020. AISC, vol. 1253, pp. 134–139. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-55307-4_21
12. Calero, L., Calero, M., Pazmiño, C., Hechavarría, J., Vélez, E.: Validation of an acoustic model for four prototypes sectors in Guayaquil City. In: Charytonowicz, J. (ed.) AHFE 2020. AISC, vol. 1214, pp. 49–55. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51566-9_8
13. Salinas, J., Vega, G., Hechavarría Hernández, J.R.: Green areas plan with native and/or endemic plants for mitigation of heat island in Letamendi Parish, Guayaquil. In: Charytonowicz, J. (ed.) AHFE 2020. AISC, vol. 1214, pp. 56–62. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51566-9_9
14. Lopez, A.P., Vazquez, M.L., Hernández, J.R.H.: Pedestrian traffic planning with TOPSIS: case study Urdesa Norte, Guayaquil, Ecuador. In: Ahram, T., Taiar, R., Langlois, K., Choplin, A. (eds.) IHIET 2020. AISC, vol. 1253, pp. 69–76. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-55307-4_11
15. Colorado Pástor, B.A., Fois Lugo, M.M., Leyva Vázquez, M., Hechavarría Hernández, J.R.: Proposal of a technological ergonomic model for people with disabilities in the public transport system in Guayaquil. In: Ahram, T., Falcão, C. (eds.) AHFE 2019. AISC, vol. 972, pp. 831–843. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-19135-1_81
16. Forero, B., Hernández, J.R.H., Alcivar, S., Ricaurte, V.: Systemic approach for inclusive design of low-income dwellings in popular settlements at Guayaquil, Ecuador. In: Ahram, T., Karwowski, W., Taiar, R. (eds.) IHSED 2018. AISC, vol. 876, pp. 606–610. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-02053-8_93

Systemic Analysis of the Territorial and Urban Planning of Guayaquil

417

17. Hechavarría Hernández, J.R., Forero, B., Vega Jaramillo, R.: Universal access and inclusive dwelling design for a family in Monte Sinahí, Guayaquil, Ecuador. In: Ahram, T., Karwowski, W., Vergnano, A., Leali, F., Taiar, R. (eds.) IHSI 2020. AISC, vol. 1131, pp. 1094–1100. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-39512-4_166 18. Provinces of Ecuador and their capitals. https://provinciasecuador.com/ 19. SENPLADES. Informative brochure. Published (2012). www.planificacion.gob.ec 20. Economic X-ray of the Guayas province | El Comercio. Published (2018). https://www.elc omercio.com/pages/economia-provincia-guayas.html 21. Planning zone 8 – Technical Secretariat plans Ecuador. https://www.planificacion.gob.ec/ zona-de-planificacion-8/ 22. Cork City: Top 9 Attractions | Ireland.com. https://www.ireland.com/en-us/destinations/rep ublic-of-ireland/cork/cork-city/articles/cork-city-top-nine-attractions/ 23. Organic code of territorial organization, autonomy and decentralization of Ecuador | Regional Observatory of Planning for Development. https://observatorioplanificacion.cepal.org/es/ marcos-regulatorios/codigo-organico-de-organizacion-territorial-autonomia-y-descentraliz acion-de 24. Land use ordinance of the Guayaquil canton. https://es.scribd.com/document/431999634/Ord enanza-de-Ordenamiento-Territorial-Del-Canton-Guayaquil-17

Spatial Model of Community Health Huts Based on the Behavior Logic of the Elderly

Zhang Ping(B), Wang Chaofan, and Zhang Yuejiao

Hefei University of Technology, Hefei, Anhui, China

Abstract. Health Huts are being promoted in Chinese society, and as demand grows they face the challenge of improving efficiency and comprehensiveness of service. Traditional research is mainly based on spatial physical logic, that is, it emphasizes the rational allocation of the properties of "things". This paper puts forward a design model for Health Huts based on the logic of user behavior. Through user analysis, the types and needs of users and the spatial characteristics of community Health Huts are obtained; functional and equipment requirements for Health Huts are identified; the spatial planning of the equipment is carried out by combining environmental psychology, scene theory and the user's behavioral logic; digital human and device models are built, and Siemens Tecnomatix Jack is used for simulation; finally, the Health Hut simulation results are comprehensively evaluated and relevant conclusions are drawn.

Keywords: Health Hut · Spatial model · Logic of behavior · Environmental psychology · Human-machine interaction · Man-machine engineering simulation

1 Introduction

With the deepening of population aging in China, it is crucial to diagnose and monitor chronic diseases in the elderly in time, lest they become major threats to a healthy and high-quality later life. A Health Hut is a comprehensive activity center set up by the Chinese government in which community residents can carry out disease detection, publicity and education. By using health game devices and organizing social leisure activities, a Health Hut mainly monitors the body indexes of the aged, who often suffer from physiological sensory degeneration and psychological loneliness. Chen Liqun compared three main factors of behavior change before and after people experienced community services in a health center, concluding that a self-help health management mode is beneficial for improving residents' health awareness and forming a healthy lifestyle [1]. Current research focuses on the role and effectiveness of Health Huts, but there is a lack of research on their use, functional combinations, and spatial layout design. This paper focuses on the health care and activity hardware for the elderly, and structures a design model of Health Huts with software data sharing.

In the traditional sense, the design of Health Huts is often understood as similar to interior design, obeying the logic of things and the order of functional areas between devices. However, in the actual space there is a series of interaction behaviors between the user and the objects; the core is the creation of behavior, and the object is only the tool to realize the behavior. Professor Xiangyang Xin expounded in his writing that "physical logic" emphasizes the rational allocation of the function of things, whereas "behavior logic" emphasizes the rational organization of human behavior [2]. Users form specific behavior habits and operation modes; when the user's behavior matches the product's preset behavior, the gap between the user and the product can be eliminated [3]. This paper focuses on the behavior logic of the elderly and puts forward a spatial design model for Health Huts by combining the human, human actions, detection equipment and the environment, through the behavior process and efficacy results of the elderly.

2 Analysis of User Behaviors

2.1 User Requirements Analysis

The generation of behavior originates from instinctive demand, which is the fundamental cause of motivation and, in turn, of behavior. According to the survey results, the main users of the Health Hut can be divided into elderly with chronic diseases and healthy elderly. Because of the difference in physical condition between these two types, they have different needs for the use of the Health Hut space. The physical and mental health needs of both are shown in Table 1.

Table 1. User types and requirements

User type                     | Health needs
Elderly with chronic diseases | Illness tracking; living advice; recommendations for improvement; peer communication; psychological counseling; leisure and entertainment
Healthy elderly               | Physical monitoring; health education; scientific exercise; social entertainment; study

At the same time, there is a need outside the Health Hut from families of elderly people who want to know the health status of their elders. Given the particularities of the elderly population, their usage psychology should be considered as much as possible:


1) Dependency: The elderly are willing to gather and chat with their peers, so a rest area where the elderly can communicate should be provided.
2) Curiosity: New things can be added to the space (such as VR games and conversational robots) to provide learning and entertainment.
3) Fear of difficulty: Professional guidance can increase the elderly's interest in new things.

Human needs are the basis and internal driving force of behavior; they lead to the formation of motives and guide the generation of behavior, while motives are the internal forces and psychological thoughts that drive behavior to satisfy specific needs [4]. Comparative analysis of the user needs obtained above shows many similarities in what the user groups require of the Health Hut space. An inductive summary can determine the motivation, or purpose, of user behavior. The behavioral purpose is divided into a primary purpose and a secondary purpose.

Fig. 1. User motivation

As shown in Fig. 1, the primary purpose refers to the main reason for the elderly to go to the Health Hut, and the secondary purpose refers to the elderly's other purposes in the Health Hut, such as satisfying physiological needs. The primary and secondary purposes can be further subdivided according to different users.

2.2 Building the Task Flow

According to the acquired user behavior motivations, the user behavior logic was analyzed and the main interactive task graph was drawn. The behavior process can be divided into basic behavior and subsidiary behavior according to the user's behavioral goal; the basic behavior acts as the mainline task that supports the subsidiary behavior. Health testing and recreation behavior were distinguished by two goals, "testing" and "leisure". The relevant behavior logic diagrams and interaction task diagrams were obtained as follows (Figs. 2, 3, 4).

Fig. 2. Subsidiary behavior

Fig. 3. General health checkup behavior

Fig. 4. General leisure behavior

The user behavior is divided into two general behavior processes according to the main goal: the general testing process and the general leisure process. Because testing in the Health Hut and leisure activities do not interfere with each other, the user can choose either behavior separately or perform both types at different times. The sequential execution of behaviors then produces changes and combinations, yielding a complex task flow formed by the general testing process and the general leisure process.


3 Spatial Design Model of Health Hut

3.1 Environmental Psychology and Scene Theory

Environmental psychology is a discipline that studies the relationship between people and their surrounding spiritual and material environment. It advocates the use of scientific means to solve problems between the spiritual and the material, explores the relationship between psychology and the environment, and analyzes the interaction between human thought and behavior. In short, it is a field that systematically explains the relationship between humans and the environment [5]. "Scene theory" is an academic paradigm for the study of the post-industrial city proposed by the new Chicago school. It is applicable not only to the analysis of large or complex areas, but also to the analysis of small community spaces [6].

3.2 Design Model of the Health Hut

The spatial design model of the Health Hut is based on the logic of behavior, the types of users and the related behavioral motivations. The user task flow was then analyzed. Ultimately, the device types and locations were determined according to the behavior nodes, and the spatial layout was planned according to the behavior process, combined with environmental psychology and scene theory (as shown in Fig. 5).

Fig. 5. Design model of the Health Hut space.

4 Facility Layout of the Health Hut

The equipment in the Health Hut comprises the tools and recording methods used by users to achieve their behavioral purposes in the space. According to the user task process obtained above, the related behavior nodes can be divided into detection with result-data sharing, learning, and entertainment.

According to the behavioral logic diagram of the elderly, the diversity of behavior combinations leads to a richer spatial composition of the Health Hut and longer circulation routes. Because of the many possible routes, the longest task flow of general detection behavior and general leisure behavior is used as the design reference in the spatial layout planning, and the spatial layout of the equipment is then carried out according to the task flow. Given the different user types, leisure behavior and detection behavior are the main behaviors in the space, and leisure behavior may generate considerable noise; therefore, the leisure area must be separated from the detection area.

5 Man-Machine Engineering Simulation with the Jack Software

5.1 Establishment of the Virtual Person and Device Models

According to the spatial layout planned above, we built a 3D spatial model in the Jack software. Meanwhile, in order to compare and verify the feasibility of the design in this paper, a model of an existing Health Hut based on the logic of things was also established in Jack (as shown in Fig. 6).

Fig. 6. Layout planning of Health Hut space based on logic of behavior (left) and logic of things (right).

5.2 Task Flow and Simulation Evaluation

The 3D models of the spatial layout designed above and of the existing layout were placed in the Jack software for simulation evaluation. In the simulation process, given the randomness and diversity of human movement, the longest behavior flow was used as the evaluation path. The Metabolic Energy Expenditure tool in Jack's TAT module was used to analyze the user's energy consumption in the space; after the relevant parameters were input, the user's energy consumption rate when using the space was obtained. Finally, the user circulation in the two spaces was simulated, with the results shown in Table 2. It can be seen that the spatial layout designed according to the logic of behavior is superior to the spatial layout designed according to the logic of things in terms of convenience of circulation.

Table 2. Jack simulation output result

Layout         | Waypoints | Path length (cm) | Path energy consumption (kcal) | Total task energy consumption (kcal) | Energy consumption rate (kcal/min)
Behavior logic | 26        | 7048.13          | 324.088                        | 245.443                              | 2.701
Physical logic | 34        | 12675.55         | 733.597                        | 654.952                              | 6.114
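For a rough sense of what these figures imply, the energy consumption rate can be combined with the total task energy to recover the task duration and to compare the two layouts directly. The short sketch below is illustrative only, assuming the rate is total task energy divided by task time:

```python
# Derive task duration from Table 2 and compare the two layouts.
rows = [("Behavior logic", 7048.13, 245.443, 2.701),
        ("Physical logic", 12675.55, 654.952, 6.114)]

for layout, path_cm, total_kcal, rate_kcal_min in rows:
    minutes = total_kcal / rate_kcal_min  # assumes rate = energy / time
    print(f"{layout}: ~{minutes:.0f} min of task time, {path_cm} cm walked")

# Relative savings of the behavior-logic layout over the physical-logic one
print(f"path reduced by {1 - 7048.13 / 12675.55:.0%}")   # ~44%
print(f"energy reduced by {1 - 245.443 / 654.952:.0%}")  # ~63%
```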

6 Conclusion

This paper proposes a spatial design model of the Health Hut based on the logic of behavior. It aims to determine the interaction behavior between the elderly and the equipment from their perspective, and then to design the equipment layout of the space, so that the layout becomes more user-friendly and efficient. In the design process, the behavioral motivations of users should be fully considered, and the circulation of the space should be consistent with the users' movement and psychology, so that the final design is more efficient and enables the elderly to obtain a better physical and mental experience. Based on the theory of the logic of behavior and combining knowledge from environmental psychology and other disciplines, this paper designed a spatial model of the Health Hut, simulated it in the ergonomics software Jack, and compared the results with the relevant data of the existing spatial design model. This design model offers better design ideas at the behavioral level.

Acknowledgements. This paper was supported by the 2020 Humanities and Social Science Research Foundation of the Ministry of Education of China, under the title "Research on the Design Strategy of Elderly Products Based on Sensory Compensation" (20YJA760101). In addition, P. Zhang thanks Hefei University of Technology for the funding and research conditions provided.

References

1. Liqun, C., Rong, W., Xiaoling, C., et al.: Impact of self-service health management model on health behaviors of community residents. J. Nurs. Sci. 26(17), 1–3 (2011)
2. Xiangyang, X.: Interaction design: from physical logic to behavioral logic. Decoration 1, 58–62 (2015)
3. Liu, S.: Psychology of Design Art. Tsinghua University Press, Beijing (2006)
4. Qiu, Z.: The Research of Intelligent TV Interactive Design Based on User Behavior Oriented. Jiangnan University (2017)
5. Lin, Y., Hu, Z.: Environmental Psychology. China Architecture & Building Press, Beijing (2000)
6. Gai, Q.: Construction of urban youth public cultural space from the perspective of scene theory: a case study of Beijing 706 youth space. Dong Yue Tribune, pp. 72–80 (2017)

Analysis of the Proposal for the SOLCA Portoviejo Hospital Data Network Based on QoS Parameters

José Antonio Giler Villavicencio1(B), Marely del Rosario Cruz Felipe2, and Dioen Biosca Rojas3

1 Instituto de Postgrado, Universidad Técnica de Manabí, Portoviejo, Ecuador
[email protected]
2 Facultad de Ciencias Informáticas, Universidad Técnica de Manabí, Portoviejo, Ecuador
[email protected]
3 Facultad de Telecomunicaciones, Universidad Tecnológica de la Habana, Havana, Cuba

Abstract. The SOLCA Portoviejo hospital network was initially designed for a limited set of services; over time, however, more demanding services and more workstations have been added, affecting its performance. In this research, a proposal for the SOLCA Portoviejo hospital data network is developed and assessed through the evaluation of QoS parameters. The study is descriptive and experimental: several designs are configured applying redundancy and QoS mechanisms, and the proposals are evaluated by simulation in the Opnet software using latency, packet loss and effective transfer rate as parameters. The simulation results show that applying redundancy and QoS mechanisms considerably improves the performance of the current network, with the WFQ mechanism standing out as the best option to optimize the parameters of the hospital data network.

Keywords: Network performance · Network redundancy · QoS mechanisms · QoS parameters · Opnet

1 Introduction

Nowadays, data networks have revolutionized the way we communicate, interact, learn, entertain and do business, among other activities. Globally, these networks are growing at an accelerated pace and require ever more capabilities to provide all current services. The business field is no exception: data networks have become an indispensable tool for sharing information, connecting office devices, and supporting communication and control, making them a fundamental pillar that helps companies achieve their goals.

The term QoS, or quality of service, in data networks can be defined as a set of technologies that allow network administrators to manage the effects of traffic congestion by optimally using the different network resources, rather than by increasing capacity [1].


The SOLCA "Dr. Julio Villacreses Colmont" oncology hospital in the city of Portoviejo is one of the hospitals that rely on a data network for their operation. However, the current network design has no traffic differentiation policies for the optimal flow of packets through the network. This affects the performance of real-time services, which is often reflected in the discomfort of patients and hospital staff. These limitations motivated this research, which analyzes different network proposals incorporating QoS and redundancy, evaluating network performance based on QoS parameters in order to select the most appropriate proposal for the hospital's data network services.

2 Related Works

Regarding topologies, [2] indicates that a hierarchical topology yielded an optimal, functional, manageable and scalable redesign, adapted to the needs for achieving QoS satisfactorily. However, other investigations, such as [3], show that a ring topology can be a solution for networks with higher bandwidth consumption and real-time traffic. It is therefore important to study the context of the network, the services offered, the distribution of equipment, and the number of connected users and devices when selecting the appropriate topology.

According to [4], another way to increase the performance of a network is to classify packets at the edges of the network into different classes, so that differentiated services can be provided without having to examine each packet in detail at each hop. After packets are marked once by IP precedence or DSCP, congestion management and avoidance mechanisms can act on them as they circulate through the network; this is the essence of the DiffServ model. In [5], this model provided QoS guarantees for different types of traffic such as VoIP, streaming and data.

To simulate QoS network environments and determine the best mechanism for them, the Opnet software is used, as demonstrated in [6], where the simulator showed that the end-to-end QoS requirements reflect values below those offered by the MPLS solution. According to [7], metrics such as delay, jitter, packet loss, throughput and bandwidth make it possible to validate the QoS application. The selection of the ideal QoS mechanism depends on each network; nevertheless, [1] concludes that service-differentiation QoS mechanisms are the most outstanding, emphasizing the PQ, WFQ, LLQ and CBWFQ mechanisms. In [8], the advantages and disadvantages of the FIFO, PQ, CQ and WFQ mechanisms were verified, as well as their best conditions of use, by means of the parameters packet loss, average delay and jitter.

3 Methodology

Summarizing the reviewed papers, the hierarchical topology proposed by Cisco [9] is defined as optimal for the redesign of the SOLCA hospital network in order to achieve the quality of service (QoS) objectives.


To perform the redesign of the network, the top-down methodology is used. The phases presented in the research are: analysis of the current situation, improvement proposals, simulation, and validation of results. To facilitate the study, scenarios with several improvement proposals are simulated. The Opnet simulator was used for this purpose, given its success in producing results that validate similar research.

4 Analysis of the Current Situation

The current network infrastructure of the SOLCA Manabí hospital is based on a star topology, with several switches from different vendors and models. It is a data network of approximately six hundred computers, deployed entirely under the best-effort mode of information distribution. The links between network devices are Cat5 and Cat6 UTP copper cable; there are also multimode fiber optic sections and a single-mode fiber section, as shown in Fig. 1.

Fig. 1. Current topology of the SOLCA Manabí hospital network.

The SOLCA hospital has several essential services, such as the medical management system, Internet browsing, web applications, mail service and IP telephony, for which it needs a network that can provide redundancy and quality of service. In the SOLCA Manabí network there are up to four network elements in cascade, which means that the elements highest in the topology hierarchy receive and aggregate all the traffic coming from the equipment of the lower layers, causing bottlenecks in the higher-level links.

5 Proposals for Improvements

In this phase a redesign of the data network is proposed, with a 10 Gbps network backbone in a redundant ring topology supported by the OSPF protocol. The main network nodes are proposed to be deployed in a high-availability (HA) hardware configuration. The main ring would consist of six nodes: five distribution nodes and the data center as the core node, identified in Fig. 2 as the DC node, interconnected with fiber optic links.

Fig. 2. SOLCA data network redesign proposal.

It is proposed to apply three QoS mechanisms: PQ, CQ and WFQ. The priority queuing (PQ) mechanism gives strict priority to traffic; one limitation is that it handles only four traffic priority classes (high, medium, normal and low). The custom queuing (CQ) mechanism allows applications to share the network, distributing bandwidth proportionally among them; it uses 8 to 16 queues and serves them in round-robin fashion. The weighted fair queuing (WFQ) mechanism, unlike the previous ones, adapts to changes in the network; it performs two simultaneous tasks, organizing traffic and fairly sharing the remaining bandwidth across 8 queues. The proposed solution includes an IP telephony service, which does not currently exist in SOLCA's network. The priorities for parametrizing the services, with their respective DSCP values, are delimited as shown in Table 1.

Table 1. SOLCA hospital hierarchy of services.

DSCP | Behavior             | Equivalency          | Traffic
CS0  | Best effort          | Best effort          | Internet
CS1  | AF1                  | Priority             | Institutional Mail
CS2  | AF2                  | Immediate            | Web systems
CS3  | AF3                  | Flash                | Medic System
CS4  | AF4                  | Flash override       |
CS5  | EF                   | Critical             |
CS6  | Internetwork control | Internetwork control |
CS7  | Network control      | Network control      | VoIP
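To make the contrast between these queuing disciplines concrete, the toy sketch below implements the core idea behind WFQ: each packet receives a virtual finish time that grows with its size and shrinks with its flow's weight, and the scheduler always transmits the packet with the smallest finish time. This is a simplification of real WFQ (which tracks a system-wide virtual clock), and the flow names and weights are illustrative assumptions, not values from the SOLCA design:

```python
import heapq

class ToyWFQ:
    """Simplified weighted fair queuing: smallest virtual finish time first."""
    def __init__(self):
        self.heap = []          # entries: (finish_time, seq, (flow, size))
        self.last_finish = {}   # per-flow last virtual finish time
        self.seq = 0            # tie-breaker keeping heap ordering stable

    def enqueue(self, flow, size, weight):
        start = self.last_finish.get(flow, 0.0)
        finish = start + size / weight  # bigger weight => earlier finish
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, (flow, size)))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

wfq = ToyWFQ()
wfq.enqueue("voip", size=200, weight=5)    # small, high-weight packets
wfq.enqueue("mail", size=1500, weight=1)   # bulk, best-effort traffic
wfq.enqueue("voip", size=200, weight=5)
while (pkt := wfq.dequeue()) is not None:
    print(pkt)  # both voip packets drain ahead of the large mail packet
```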


6 Simulation and Validation of Results

The scenarios simulated in the Opnet software are: the current network, the network with PQ, the network with CQ, and the network with WFQ. To obtain more representative results, all simulation scenarios were configured with a higher traffic load than the existing one; in this way it can be observed whether the scenarios with the proposed QoS mechanisms offer guarantees for future network operation.

The configurations applied in the simulation scenarios are as follows. The medical management system service performs a 32 Mb transaction every 10 s. The institutional mail service is configured with transactions of approximately 8 Mb, sent and received every 20 s. The HTTP (web applications) service loads, every 30 s, a page containing a medium-sized image and a short video. The video service simulates a video stream loaded by default in the simulator, while the VoIP service is configured with the G.729 codec, with a compression and decompression delay of 0.02 s.

The simulations lasted around 16 h in total, distributed in four simulations of 4 h, one per proposal. One hundred samples were collected for each metric in order to observe and compare the performance of the network in the different scenarios. The metrics contemplated in this research for testing the network improvements are latency, throughput (effective transfer rate) and packet loss.

For the validation of the results, the samples are treated as related, since the same network is evaluated before and after applying improvements. Using a significance level of 0.05, the normality tests showed that the four samples obtained in the simulation of each parameter do not follow a normal distribution; therefore, the validation of each parameter was performed using the Friedman test, which determined whether or not there are significant differences in network performance across the simulated proposals.

Network latency determines the time it takes a data packet to travel through the network from a source node to an end node; its unit of measurement is seconds. In the statistical summary of the samples (Fig. 3), it can be seen that the network redesign applying QoS considerably decreases the latency: in the current network it is almost 18 s, while the delay in the proposals with QoS mechanisms is almost null. The three QoS mechanisms applied reflect values ranging from 0.09 ms to 0.1 ms; therefore, there is no significant difference among them in this parameter.

Fig. 3. Statistical summary of latency samples in the SOLCA network.

The effective transfer rate is defined as the amount of net information flowing through a data network, represented by the average number of packets per second delivered. As the topologies are different, a similar link was used in both topologies in order to compare this parameter meaningfully. In the distributions obtained from the samples (Fig. 4), the three QoS mechanisms present improvements in the effective transfer rate. The current network averages approximately 0.0006 packets per second, the WFQ and PQ mechanisms average 1.89 packets per second, and CQ averages 2.01 packets per second, the latter being the best in terms of throughput.

Fig. 4. Summary statistics of the effective transfer rate samples of the SOLCA network.
Packet loss is the percentage of transmitted packets that are discarded in the network. It is caused by several factors, such as a high link error rate or exceeding the operational capacity of an interface in times of congestion, among others; it is expressed in packets lost per second. In the representation of the samples obtained (Fig. 5), it can be observed that both the CQ and PQ mechanisms show a small packet loss, while the network with the WFQ mechanism always remains at zero, demonstrating a significant difference in relation to the other mechanisms.


Fig. 5. Statistical summary of packet loss samples from the SOLCA network.
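As a sketch of the validation procedure described above, the following snippet applies a normality check and the Friedman test to four related samples. The generated values are illustrative placeholders, not the measurements from the Opnet simulations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical latency samples (seconds), one per scenario, 100 each
samples = {
    "current": rng.lognormal(mean=2.8, sigma=0.3, size=100),
    "PQ":      rng.lognormal(mean=-9.2, sigma=0.2, size=100),
    "CQ":      rng.lognormal(mean=-9.2, sigma=0.2, size=100),
    "WFQ":     rng.lognormal(mean=-9.3, sigma=0.2, size=100),
}

# Normality check at the 0.05 significance level
for name, data in samples.items():
    _, p = stats.shapiro(data)
    print(f"{name}: normally distributed = {p > 0.05}")

# Friedman test for related (repeated-measures) samples
stat, p_value = stats.friedmanchisquare(*samples.values())
print(f"Friedman statistic = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05 indicates a significant difference among the scenarios
```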

7 Conclusions

The analysis of the proposed network was based on the behavior of QoS parameters, simulating different scenarios in which redundancy and QoS mechanisms, namely PQ, CQ and WFQ, are applied.

Latency is one of the parameters optimized by including QoS mechanisms and redundancy in a data network design; in this case latency was reduced from roughly 20 s in the current network to values below 0.1 ms. When comparing latency results among the QoS mechanisms, the values were very similar; therefore, it is concluded that the choice among these mechanisms does not make a significant difference for the hospital's data network in this parameter.

With respect to the effective transfer rate, the application of QoS mechanisms shows significant improvements over the current network. Among the QoS mechanisms, CQ showed slightly better values than the others; however, the differences were not significant.

The CQ and PQ mechanisms reflected packet losses of 0.31% and 1.03%, respectively, whereas the WFQ mechanism reflected zero losses in the simulation performed, which provides guarantees for the IP telephony service projected for the hospital.

The simulations, carried out with 100 samples per metric over 16 h of simulated operation across the network designs, have provided the data to conclude that the WFQ mechanism is the most appropriate to implement in the SOLCA hospital data network.

References

1. Jara, J., Santiago, P.: Estudio de la implementación de Calidad de Servicio (QoS) para el mejoramiento de la red de datos que optimice el acceso a los servicios en la Planta de Producción de la Compañía Yanbal Ecuador S.A. (2016). http://repositorio.puce.edu.ec:80/xmlui/handle/22000/12463. Accessed 28 January 2021
2. Aguaiza Tenelema, D.: Propuesta de rediseño de la infraestructura de red de la Universidad Laica Eloy Alfaro de Manabí, para ofrecer un modelo de servicios con calidad de servicios (QoS) (2016). http://repositorio.puce.edu.ec/bitstream/handle/22000/12638/TESIS_DANNY_AGUAIZA.pdf?sequence=1&isAllowed=y
3. Friedrich, G.R., Ardenghi, J.R.: Un modelo para el análisis de la confiabilidad de Ethernet Industrial en topología de anillo. Revista Iberoamericana Autom. Inf. Ind. RIAI 6(3), 101–109 (2009). https://doi.org/10.1016/S1697-7912(09)70269-9
4. Álvarez Moraga, S.A., González Valenzuela, A.J.: Estudio y configuración de calidad de servicio para protocolos IPV4 e IPV6 en una red de fibra óptica WDM. Rev. Fac. Ing. Univ. Tarapacá 13(3), 104–113 (2005). https://doi.org/10.4067/S0718-13372005000300015
5. Miroslava, Z.: Evaluación de parámetros de calidad de servicio (QOS) para el diseño de una red VPN con MPLS (2016). http://repositorio.puce.edu.ec:80/xmlui/handle/22000/12327. Accessed 28 January 2021
6. Álvarez, O., Mayoral, M., Moliner, C.: Contribución para QoS en Redes Metropolitanas Ethernet. Revista de Ingeniería 26, 7–13 (2007). https://doi.org/10.16924/revinge.26.1
7. Rojas, M., Mercedes, A.: Diseño de una red LAN y WLAN que brinde calidad de servicio, caso de estudio Unidad Educativa "San Rafael", p. 99 (2017)
8. Attar, H., Khosravi, M.R., Igorovich, S.S., Georgievan, K.N., Alhihi, M.: Review and performance evaluation of FIFO, PQ, CQ, FQ, and WFQ algorithms in multimedia wireless sensor networks. Int. J. Distrib. Sens. Netw. 16(6), 1550147720913233 (2020). https://doi.org/10.1177/1550147720913233
9. Cisco: Resumen del diseño de tecnología de red LAN inalámbrica de campus, p. 19 (2014)

Design and Optimization of Information Architecture of NGC Cloud Platform

Jinbo Hu, Zhisheng Zhang(B), and Zhijie Xia

Faculty of Mechanical Engineering, Southeast University, Nanjing, China
[email protected], [email protected]

Abstract. Information architecture is particularly important for cloud platforms. This article first identifies the conventional functions of the cloud platform through research, then turns the function points into cards, recruits users to participate in an open card sort, and obtains grouping data. Through cluster analysis, the function points are grouped and named with the help of a dendrogram, yielding the information architecture of the cloud platform. To evaluate the information architecture, this article first explores the probability of function points appearing in each group through a closed card sort, obtaining a probability table of function point classification. Then, guided by the completion rate of scenario tasks, the design is iterated continuously until a satisfactory result is obtained. The research method of this paper provides a design mode for subsequent product iteration.

Keywords: Equipment operation and maintenance system · Information architecture · Mental model · Card sorting

1 Introduction

NGC's industrial cloud platform aims to help small and medium-sized manufacturing enterprises achieve smart upgrades. The platform has been iteratively updated with the main purpose of creating a universal, configurable equipment management system, covering equipment assets, maintenance statistics, work order management, and so on. The design of the product has hardly considered information architecture: the required functions were passed on verbally by engineers and the leaders of the user companies, and the engineers then turned the engineering code directly into the real product. As a result, iteration has been slow and the original product framework could not be broken through. Common problems encountered while using the product include the following: users cannot understand the meaning of label names; users look for the required information along the wrong path; and user companies often need the product developers to provide training and guidance. Clearly, these issues belong to the category of product information architecture, indicating that the current information architecture is not clear. In addition, current research on information architecture mostly targets e-commerce and education websites; in more specialized industrial fields there is relatively little research, so there is more to investigate, and of course it is also challenging.


Card sorting is a low-tech and low-cost method for understanding how users organize and structure content that is meaningful to them [1]. The purpose of card sorting is to understand the user's mental model. Through card sorting, one can learn how users expect to see content grouped on the website and how they perceive these labeled groups. From the data relationships among these cards, the information architecture can then be established.

2 Research Methods

2.1 Combing the Cloud Platform Function Points

According to the equipment life cycle, the author divides equipment management into early-stage and later-stage management. Early-stage equipment management covers the whole process of equipment planning, equipment selection, procurement tracking, and installation and commissioning; its purpose is to introduce equipment that meets production tasks into the enterprise. Later-stage management is equipment operation and maintenance management, including equipment operating status, equipment maintenance, equipment spot inspection, inspection visualization, equipment shutdown, equipment accidents, data sharing, and so on; its purpose is to maximize the benefits of equipment utilization. Through this analysis, the author selected 53 function points, as shown in Fig. 1.

Fig. 1. The forms of 53 function points.

2.2 Inviting Participants

Tullis and Wood recommend testing 20–30 users for card sorting [2]. Jakob Nielsen argues that the more participants in a card sorting experiment, the lower the marginal benefit, but that the number should still be about three times that of a traditional usability test, i.e., at least 15 people: the correlation between 5 users' results and the final result is only 0.75, while 15 users reach a correlation of 0.9, and adding more users yields little further gain. It is more valuable to use the remaining users in other qualitative usability tests [3]. Spencer proposed that the ideal participant is an actual user of the product [4].


The author strongly agrees with this point, especially for products such as cloud platforms that are tied to actual work and involve professional skills. Therefore, the author invited 15 users who are actually engaged in manufacturing work and have accumulated a certain amount of industry experience to participate in this experiment.

3 Research Results and Analysis

3.1 Card Correlation Matrix

From the card sorting results, we can see that some cards are spread evenly across different groups, while others are more concentrated. To further analyze the participants' grouping data, it is necessary to explore the relationship between the cards, that is, the probability of two different cards being placed in the same group. Two cards that are not in the same group are assigned a correlation of 0, and two cards in the same group a correlation of 1; if there are multiple classification levels, cards that fall together in both the first and second classifications receive a correlation of 2, and so on, generating the card correlation matrix. For a multi-user experiment, the matrices of the individual participants are summed element-wise. Part of the data is shown in Fig. 2.

Fig. 2. The forms of cards correlation matrix.

3.2 Card Distance Matrix

The correlation matrix is converted into a card distance matrix. Here, "distance" is used to measure the true similarity between two cards: the larger the value, the farther apart the cards are, and the less they belong in the same group. The specific conversion formula is

Distance = 1 − preValue/(2 × N)    (1)

Part of the data is shown in Fig. 3.


Fig. 3. The forms of cards distance matrix.
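A minimal sketch of this pipeline, building the co-occurrence (correlation) matrix from individual sorts, converting it to distances with Eq. (1), and clustering, might look as follows. The card sort data here is an illustrative placeholder, and single-level groupings are assumed:

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

# Each participant's sort: a list of groups, each group a list of card ids
sorts = [
    [[0, 1, 2], [3, 4]],
    [[0, 1], [2, 3, 4]],
]
n_cards, n_participants = 5, len(sorts)

# Correlation matrix: count how often two cards land in the same group
corr = np.zeros((n_cards, n_cards))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                if i != j:
                    corr[i, j] += 1

# Distance matrix per Eq. (1): distance = 1 - preValue / (2 * N)
dist = 1 - corr / (2 * n_participants)
np.fill_diagonal(dist, 0)

# Hierarchical clustering on the condensed upper-triangle distances
condensed = dist[np.triu_indices(n_cards, k=1)]
Z = linkage(condensed, method="average")
tree = dendrogram(Z, no_plot=True)
print(tree["ivl"])  # leaf order of the cards along the dendrogram
```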

3.3 Information Architecture Dendrogram

To visually express the hierarchical relationships among the function points, a function point distance dendrogram is introduced. The dendrogram clearly shows the groups of function points, and the scale at the bottom indicates the distance between function points, i.e., their correlation: the greater the distance, the smaller the correlation (Fig. 4).

Fig. 4. Dendrogram.

3.4 Results of the Analysis

After obtaining the dendrogram, information-level dividing lines are generally drawn where the distance ratio is 0.3 and 0.7. The dividing line at a distance ratio of 1 is the summary of all cards; the dividing line at a distance ratio of 0.7 gives the primary classification of all function points; and the dividing line at a distance ratio of 0.3 gives the secondary classification of all function points [5].
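Continuing the earlier sketch (reusing its `Z` linkage matrix), cutting the tree at these two dividing lines could be expressed with `fcluster`, where the 0.7 and 0.3 ratios are scaled by the maximum merge distance:

```python
from scipy.cluster.hierarchy import fcluster

max_d = Z[:, 2].max()  # largest merge distance in the linkage matrix
primary = fcluster(Z, t=0.7 * max_d, criterion="distance")    # level-1 groups
secondary = fcluster(Z, t=0.3 * max_d, criterion="distance")  # level-2 groups
print(primary, secondary)  # group label per card at each level
```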


Based on the analysis, all function points are divided into seven groups, giving the design of the information architecture shown in Fig. 5. The groups are, respectively: asset and work management, spare parts management, spot inspection management, service management, maintenance management, equipment-related management, and others.

Fig. 5. Information architecture.

4 Conclusions

This paper reports an exploration of the establishment of the information architecture of the NGC Cloud Platform. It provides a design pattern for constructing cloud platform information architectures in the future; that is, user involvement in defining the information architecture is significant for achieving website usability. However, this research is only exploratory: whether this information architecture can meet actual needs requires further experimental verification. Future research includes convening users to participate in experiments and iteratively adjusting the current information architecture until most users can easily find the information they need.

Acknowledgement. Project supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51775108).


References

1. Morville, P., Rosenfeld, L.: Information Architecture for the World Wide Web: Designing Large-Scale Web Sites. China (2008)
2. Tullis, C., Wood, L.: A Modified Delphi Approach to a New Card Sorting Methodology, vol. 3, no. 6, pp. 4–5 (2008)
3. Card Sorting: How Many Users to Test. https://www.nngroup.com/articles/card-sorting-how-many-users-to-test/
4. Tang, G.-M., Hu, H.-Y., Chen, S.-Y., Jeng, W.: A cross-cultural study on information architecture: culture differences on attention allocation to web components. In: Sundqvist, A., Berget, G., Nolin, J., Skjerdingstad, K.I. (eds.) iConference 2020. LNCS, vol. 12051, pp. 391–408. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43687-2_31
5. Wei, Y.L.: Research on Mobile Tablet Information Architecture Based on Card Classification. Packaging Engineering, pp. 45–47 (2013)

Design of Self Rescue Equipment for High Rise Building Fire Based on Integrated Innovation Theory

Chang Zong and Zhang Zhang(B)

East China University of Science and Technology, Shanghai 200237, People's Republic of China
[email protected]

Abstract. Research purpose: This paper focuses on a key difficulty of fire rescue, the height problem, aiming to design an intelligent self-rescue device. Research process: (1) desk research is used to analyze competing fire self-rescue products and summarize their characteristics; (2) a questionnaire survey and user interviews investigate the psychological state, behavioral responses and real needs of disaster-affected users; (3) based on the integrated innovation method, the innovative elements are optimized and integrated to design an intelligent self-rescue device. Conclusion: This paper presents an intelligent smoke-blocking and medicine-storage robot, which provides a reference for designing fire rescue products deployed inside buildings. It is committed to realizing intelligent fire prevention and rescue and to building a sustainable direction of development.

Keywords: Integrated innovation · Fire rescue · High rise building · Product design · Intellectualization

1 Introduction

Research reports show that most existing high-rise buildings are not equipped with automatic sprinkler systems, fire elevators, smoke exhaust fans or similar equipment, and that the penetration rate of home escape and self-rescue equipment is less than 1% [1]. The government should focus on this problem and improve the design of rescue products to meet consumers' needs. The following sections analyze existing fire self-rescue products and summarize their advantages, disadvantages and characteristics.

2 Investigation and Analysis of High Rise Building Fire Self Rescue Products

2.1 Advantages and Disadvantages of Fire Self Rescue Products

Self-rescue products on the market fall mainly into three categories: alarm, fire fighting and escape. Alarm products include fire alarms and household gas alarms; fire extinguishing products include fire extinguishers, which are generally placed in the corridor of each floor [2]. Escape products include self-rescue respirators, emergency lighting, descent devices, fire blankets, etc. The research found that existing self-rescue products have limitations: the descent device is only suitable for buildings below 30 m, so it does not help in high-rise self-rescue, and only one person can use it at a time; in addition, it is not universal, so it is difficult for the elderly and children to use. Therefore, from a practical point of view, the generally applicable items of fire self-rescue equipment are the fire alarm (alerting), the fire extinguisher (fighting fire and smoke), the self-rescue breathing apparatus and emergency lighting.

2.2 Characteristic Requirements of Fire Self Rescue Products

Fire hazards are severe, so fire self-rescue products need special properties such as easy operation, time savings, high efficiency, safety and universality. Easy operation means the product matches the user's operating habits. Time-saving and efficient means that self-rescue products can play an effective role in a short time, and that they should be placed in a conspicuous location. Safety is reflected in the fact that when there is no fire, the product does not affect the residents' daily life, while in case of fire it can meet the physiological needs of the victims [3]. At the same time, the material of self-rescue products should be high-temperature resistant, burn resistant and non-combustion-supporting; otherwise it may cause secondary fires. The principle of universality is also important: the age span of the people in a building is large, including children, the elderly and adults, so self-rescue products should be as universal as possible.

3 Demand Analysis of Victims in Fire

First, maintaining normal breathing is the most basic physiological requirement in a fire. Smoke is the main reason people cannot breathe in a fire, and excessive smoke inhalation leads to suffocation within a short time; statistics show that the main causes of death in fires are smoke inhalation and oxygen deprivation [4]. Therefore, in the product design below, the most important function is to isolate the victims from the smoke.

Second, high temperature must be avoided. Research shows that although the death rate from burns in fires is far lower than that from smoke, burns still harm the human body: if the ambient temperature exceeds 150 degrees, people lose consciousness and may suffer shock or disability. A high-temperature environment also produces a large amount of smoke. Therefore, in the product design below, the self-rescue product should also have the function of cooling or isolating the flame.

4 Analysis of Design Principles of Fire Self Rescue Products

First, the principle of functional integration. During an escape, too many functions waste time. Therefore, the functions of existing self-rescue products should be integrated to form a composite self-rescue product; common combinations include fire extinguisher + lighting, respirator + lighting, and flashlight + alarm [5]. In this process, these elements should be integrated safely and effectively, taking into account the needs of the trapped person.

Second, the principle of technological innovation. Desk research found that future self-rescue products tend toward intelligent processing and will incorporate relevant technologies. Air curtain technology blocks dust and is widely used in shopping malls, hotels, hospitals and other places; when a fire occurs, it can also effectively block smoke. Studies have shown that a smoke-blocking air curtain is a curtain-like airflow, ejected by an air curtain machine with a certain jet angle and speed, that blocks the spread of high-temperature smoke, preventing the smoke from entering the evacuation passage and prolonging the safe evacuation time without obstructing people's passage [6]. This technology has not yet been used in fire self-rescue products, and it will be introduced in the product design below.

5 Design Process of Self Rescue Products for High Rise Buildings

5.1 User Demand Survey

A questionnaire survey was conducted among residents of high-rise buildings, with a total sample of 50 people. The survey mainly covered the self-rescue methods and routes chosen when disaster occurs. The results are as follows. (1) 47.73% of the residents live on the 7th floor or above; 81.82% did not regularly check their electrical lines, reflecting weak fire awareness; in addition, most residents are not satisfied with the fire prevention and firefighting equipment in their community, as shown in Fig. 3 and 4. (2) Among fire-fighting products, flashlights are purchased at the highest rate, while highly effective smoke masks are rarely purchased, as shown in Fig. 5. The best-known fire self-rescue methods are covering the mouth and nose with a wet towel, wearing a smoke mask, and moving along the stairs, as shown in Fig. 6; these are the instinctive actions residents take, and the methods are correct, so the inertial behavior of users should be considered in product design. (3) When a fire has just broken out, 20% of users choose to fight the fire. For self-rescue, users tend to leave the building through the internal stairs, and few choose the descent device, considering its safety factor too low. (4) When the fire grows, users choose fire and smoke blocking and wait for rescue at the stairway safety exit. In addition, most people expect rescue products to be communal; see Fig. 7 and 8. (5) The most desired fire escape product is the self-rescue breathing apparatus, followed by the fire alarm, fire blanket, lighting tools, etc. According to users who have experienced a fire, the most common injuries are burns and smoke inhalation, which are also what users most want protection against (Figs. 1, 2).

Through the user survey, it can be concluded that the key functions should first focus on monitoring and alarm; the next key function is to block the smoke as much as possible; finally, necessary self-rescue respirators, fire blankets, scald medicines, etc., should be provided to the affected users.


Fig. 1. User survey (1)

Fig. 2. User survey (2)

Fig. 3. User survey (3)

Fig. 4. User survey (4)

Fig. 5. User survey (5)

Fig. 6. User survey (6)

5.2 Design Concept

First, a fire alarm should be integrated into the product. Second, air curtain technology should be incorporated into the product design. The smoke-blocking effect of air curtains has been tested: in a test at an open subway station stairway, it was verified that an air curtain is a good way to prevent smoke from spreading in the early stage of a fire, and it is suggested that the jet flow rate at the outlet be 5–7 m/s with an inclination angle of 15–30° [7]. Applying this smoke-blocking technology to self-rescue products in residential buildings is therefore very desirable; the core component of the air curtain is the load-bearing wind wheel, which needs to be fixed within the product form. Third, after creating a temporary smoke-free environment for the affected users through the above means, areas for placing fire blankets and scald medicines are added to the product.


5.3 Concept Generation and Sketch Divergence

The first scheme is shown in Fig. 7. The product adopts an upper-and-lower form fused with a fire hydrant structure: the top structure houses the air curtain machine, while the lower structure is divided into two compartments, storing smoke masks above and fire hydrants below. However, the whole is relatively bulky and cannot meet the principle of flexible mobility for self-rescue equipment.

The second scheme is shown in Fig. 8. The product is designed as a movable cart to suit the flexibility of people's escape locations. An air curtain machine is used to block the smoke, with outlets in three directions: up, left and right. There is also a storage area for self-rescue breathing masks and medicines. However, corridors are narrow, so the form still needs further lightweighting.

The third scheme is shown in Fig. 9. It refines and slims down scheme 2 and adds a fireproof-board structure. The product consists of three parts: the front is a fireproof board that can block a small area of open flame; an air curtain machine is installed in the middle, with three air outlets (up, left and right) that can block smoke in all directions; and a smoke sensor at the top senses the fire immediately and starts the machine. At the back there is a storage compartment for smoke masks and first aid medicines for burns and scalds. There are four wheels at the bottom: two spherical universal wheels at the front and two main driving wheels at the rear (Figs. 10, 11).

Fig. 7. Sketch (1)

Fig. 8. Sketch (2)

Fig. 9. Sketch (3)


5.4 Design Presentation

Fig. 10. Details display (1)

Fig. 11. Details display (2)

5.5 Product Size

The product's fireproof board is 180 cm high, 80 cm wide and 55 cm thick. Its height exceeds that of an average person, so people can effectively take shelter behind it. This size both conforms to ergonomics and keeps a balanced proportion of length, width and height.

6 Conclusion

This paper analyzes the status quo, categories and functions of existing self-rescue products and summarizes the characteristics fire self-rescue products should have: easy operation, time savings, high efficiency, sufficient safety and a certain durability. Judging from the development of rescue products, self-rescue products will become increasingly intelligent and systematic. This paper also analyzes the needs of disaster victims, such as smoke protection, cooling, urgently needed medicine and calling for outside help. Finally, based on integrated innovation theory and the integration of the above functions, this intelligent smoke-blocking and medicine-storage machine is designed. The machine is to be placed in the corridor of a large office building and fitted against the wall, achieving the dual functions of attractive decoration in daily life and timely rescue in a fire.

Acknowledgement. This work was sponsored by the Shanghai Pujiang Program.

References

1. Luyan, T.: Research on design of emergency self rescue and escape products for family fire. Shenyang University of Aeronautics and Astronautics (2013)
2. Jie, H.: Analysis and research on the design of self rescue emergency rescue products. Zhejiang University of Technology (2016)
3. Xingmin, C., Qiang, G.: On the psychological basis of individual behavior response to disasters. Nandu Xuetan 01, 64–67 (2000)
4. Qingsong, S.: Fire fighting and safe escape strategies for urban high-rise buildings. Fire Ind. (Electron. Vers.) 5(24), 53 (2019)
5. Jun, Y., Jinyong, Q., Dihu, X.: Industrial design and research of construction machinery. Packag. Eng. 40(18), 1–11 (2019)
6. Yibing, C., Hao, Z., Dongjiu, Y.: Design of urban main fire engine based on integrated innovation theory. Packag. Eng. 40(18), 118–122 (2019)
7. Zhenkun, W., et al.: Smoke blocking effect test of air curtain at subway station open stairway. Fire Sci. Technol. 32(03), 257–326 (2013)

Adoption, Implementation Information and Communication Technology Platform Application in the Built Environment Professional Practice

Lekan Amusan(B), David Adewumi, Adekunle Mayowa Ajao, and Kunle Elizah Ogundipe

Building Technology Department, College of Science and Technology, Covenant University, PMB 1023, Canaanland, Ota, Ogun State, Nigeria
[email protected]

Abstract. The impact of ICT on professional practice has mainly been in making jobs easier for the professions, facilitating decision-making, and reducing operating costs, among other benefits. The inefficient national electric power supply and the high cost of computer hardware and software, set against the dwindling fortunes of the professions in Nigeria's depressed economy, are the key obstacles to increased investment in ICT. The aim of this study is to understand the extent of ICT application by professionals in built-environment-related vocations, with a view to improving the level of ICT application and adoption in Nigeria. A sample of 82 respondents was used, with questionnaires distributed to construction professionals, and three methods of data analysis were employed. The respondents comprised Architects, Builders, Engineers, Surveyors, and Quantity Surveyors, and the study examined the current status of ICT use in the built environment. The study found that the most commonly used software packages are Microsoft Excel (100.0%), Microsoft Word (98.8%), and Microsoft PowerPoint (93.8%), while AutoCAD is the most popular for architectural/engineering design and drawing (87.7%), QSCAD for quantity surveying (21.0%), BIM 360 for project management (32.1%), and CoConstruct for building management (19.8%). The top three benefits of ICT as perceived by the respondents are time saving, making the job easier, and enhanced productivity. The three major challenges are erratic power supply; the high cost of purchasing ICT-related software and/or hardware; and job size and fees. Based on these results, the study recommends that the government should ensure a steady power supply, and that each organization should provide backup power options in case of power failure. Keywords: Integration · Adoption · Information · Construction



1 Introduction
Information and Communication Technology (ICT) can be broadly defined as any technology that provides an enabling environment for developing physical infrastructure and services for generating, transmitting, processing, storing, and disseminating information in all forms. Sustaining both high economic growth and operational efficiency in private and public institutions depends on the effective use of ICT and its various tools. Information is a stimulus that has meaning in some context for its receiver, while communication is a process whereby information is enclosed in a discrete package and transferred by a sender to a receiver through a channel or medium; communication requires that all parties involved share some common communicative ground. ICT thus covers any product that stores, retrieves, manipulates, transmits, or receives information electronically. Virtually every construction project involves clients, consultants, contractors, local authorities, residents, workers, and suppliers, all of whom have differing interests in the project, which demands heavy exchange of data and information [1]. The built environment is therefore one of the most information-intensive environments and requires close coordination of a large number of specialized, interdependent organizations and individuals to achieve the cost, time, and quality goals of a project [1, 2].

2 Research Design and the Study Population
Research is a structured inquiry that uses accepted scientific methodology to solve problems and create new, generally applicable knowledge. This study was designed to examine the integration of ICT application in built environment professional practice in Lagos State. It is a survey research study in which questionnaires were used as the tool to collect data for answering the postulated research questions. The study involved the major stakeholders and practitioners in the construction industry within Lagos State, Nigeria: Contractors, Quantity Surveyors, Surveyors, Architects, Engineers, and Builders within the study area [2, 3].

3 Sampling and Sample Size
Samples are normally used in response to time and financial constraints, and the adequacy of a sample is judged by how well it represents the population from which it is drawn. The population for this study comprised all Quantity Surveyors, Architects, Surveyors, and Builders registered in Lagos State as at December 2018/January 2019, as obtained from the respective professional bodies. The sample size for each category of respondents was derived from the formula used by [4, 5] as cited in [6]. To rank the factors identified by the respondents as affecting the various aspects of the study, it was necessary to determine the relative importance of those factors [7]. Owing to the difficulty of obtaining accurate and reliable data on the population of built environment professionals within the specified classes, a total sample size of 100 was adopted for the study [8]; purposive sampling was applied, with respondents then taken at random. A sample size of 100 also agrees with the Kish (1965) formula [7]:

n = n1 / (1 + n1/N), with n1 = S² / V²

where n is the sample size, N the population size, V = 0.05 the standard error of the sampling distribution, and S² = P(1 - P) = (0.5)(0.5) = 0.25, P being the proportion of the standard deviation in the population elements (total error 0.1 at the 95% confidence level). Under these parameters, a sample size of 100 is adequate even for large populations, because as the population size N increases, the computed sample size n approaches one hundred.
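As a minimal sketch (not part of the study's materials), the Kish calculation above can be reproduced as follows; the population sizes below are hypothetical, while V = 0.05 and P = 0.5 are the parameters given in the text:

```python
def kish_sample_size(population: int, p: float = 0.5, v: float = 0.05) -> float:
    """Kish (1965) sample size: n = n1 / (1 + n1 / N), with n1 = S^2 / V^2."""
    s_squared = p * (1 - p)        # S^2 = P(1 - P) = 0.25 for P = 0.5
    n1 = s_squared / (v ** 2)      # first approximation: 0.25 / 0.0025 = 100
    return n1 / (1 + n1 / population)

# As the population N grows, n approaches n1 = 100, matching the paper's argument.
for n_pop in (200, 1000, 10000):
    print(n_pop, round(kish_sample_size(n_pop), 1))
```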

4 Data Collection Instrument, Data Presentation and Discussion
Data for the study were generated through an opinion-based questionnaire survey, an approach suited to the surveying nature of the research. The questionnaire was structured so that respondents chose from the options provided, and it was designed to reflect the researcher's main areas of interest, thereby supplying information relevant to the research questions and objectives. From Table 1, obtained through the field survey, it can be deduced that the use of ICT tools is widely accepted by clients, ranking first on the index: clients encourage built environment professionals to use these tools because they find them effective and easy to understand. Second, professionals in the industry have taken these ICT tools on board, finding them effective for the various activities of their professions. Professionals also find the tools easy to use (third on the index ranking), which has in turn improved productivity and work effectiveness. With acceptance by management ranking fourth, those in authority are beginning to embrace ICT tools for construction-related activities. The rankings also suggest that current government policies are not well suited to the use of ICT tools in the built environment. In addition, professionals find these tools hard to access, owing either to scarcity or to their high cost, which limits their use; with 'Cheap to Procure' ranked last, professionals agree that these tools are costly to procure.
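The paper does not state the formula behind these index values; a plausible reading, sketched below purely as an assumption, is a normalized agreement index of the form ΣW/(5N) on a 5-point Likert scale with 1 = strongly agree, so that lower index values indicate stronger agreement and rank higher. The response lists here are hypothetical:

```python
def agreement_index(responses):
    """Normalized index on a 5-point scale (1 = strongly agree ... 5 = strongly disagree).
    Lower values indicate stronger agreement, so items are ranked in ascending order."""
    return sum(responses) / (5 * len(responses))

# Hypothetical raw Likert responses for two survey items
items = {
    "Accepted by The Client": [1, 2, 2, 1, 3, 2],
    "Cheap to Procure": [4, 3, 3, 4, 2, 3],
}
ranked = sorted(items, key=lambda name: agreement_index(items[name]))
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(agreement_index(items[name]), 4))
```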


Table 1. Evaluation of the extent to which ICT applications are being deployed in construction operations by built environment professionals

Section A: Status                    Index    Rank
Accepted by The Client               0.4123   1st
Accepted by Professionals            0.4370   2nd
Easily Usable                        0.4444   3rd
Accepted by Management               0.4716   4th
Hindered by Government Policies      0.5284   5th
Easily Accessible                    0.5457   6th
Aided by Government Policies         0.5728   7th
Cheap to Procure                     0.6099   8th

Section B: Hardware                  Index    Rank
Laptops                              0.3358   1st
Printers                             0.3630   2nd
Desktops                             0.3753   3rd
Projectors                           0.3852   4th
Scanner                              0.3951   5th
Notebooks                            0.5605   6th
Fax Machine                          0.5654   7th
Mainframe                            0.6247   8th

Source: Field Survey (2019)

As for the hardware in use (Section B of Table 1), all professions appear to be moving with the times: the majority cite laptops as their major work tool, followed by printers for producing hard copies of their work. The mainframe, an older generation of computer that is more or less out of use today, accordingly comes last in the hardware ranking [2, 4, 5, 7, 9] (Table 3).
This section ranks the factors that influence the implementation of ICT in the built environment by professionals; Table 2 shows the ranking. The results show that Technical Know-How is the most significant factor, ranking 1st on the index: professionals consider their ability to operate these ICT tools central to using them. Management Influence ranks 2nd, meaning that employers' choices determine which ICT tools can be used and when. The availability of the tools is also a major factor, covering whether they are cheap or readily available to procure. The type of profession is a further factor: some professions feel they need fewer of these tools because the tools make little to no difference to their work.


Table 2. Evaluation of factors that influence the implementation of ICT by professionals

Factors                          Index    Rank
Technical Know-How               0.3778   1st
Management Influence             0.4049   2nd
Availability of Tools            0.4148   3rd
Profession                       0.4222   4th
Ease of Use                      0.4247   5th
Job Size                         0.4346   6th
Client Preference                0.4420   7th
Changing Trends in Technology    0.4469   8th
Technical Certifications         0.4469   9th
Construction Industry Demands    0.4519   10th
Government Policies              0.4790   11th
Lack of Awareness                0.4840   12th
Language Barrier                 0.5753   13th
Virus Attack                     0.5852   14th

Source: Field Survey (2019)

Others, however, see the tools as a more effective means of carrying out the activities of their profession. Ease of Use rounds off the top five factors professionals consider when applying ICT tools in the built environment: they want tools they can use without spending their working hours figuring out how the tools work instead of actually working. The lowest-ranked factors include Virus Attack, which professionals do not regard as important, perhaps because such attacks are easily prevented or removed. Language Barrier also ranks low, since the tools are available in languages the professionals understand. Rounding off the three least-considered factors is Lack of Awareness, indicating that professionals are well aware of the tools applicable to their professions.
Based on the survey, most of the professionals involved in the research (upwards of 93%) agreed that they use presentation software, whereas software made specifically for design, project management, building management, and quantity surveying is used by an average of about 25% of respondents. Building management software is used mainly by builders and managers to plan and coordinate activities related to the built environment. Project management software helps coordinate the various project works in the built environment. Quantity surveying software assists quantity surveyors in taking off quantities and calculating material prices. Design software is used to produce designs for different aspects of construction, from site plans to buildings, structures, electrical grids, and pipelines, among others.


Table 3. Evaluation of uniqueness in ICT tools used by different professions in the built environment

Category              Software                           % of Users
Building Management   CoConstruct                        19.8
                      Procore                            16.0
                      Buildertrend                       18.5
                      PlanGrid                           14.8
                      BuildTools                         17.3
                      e-Builder                          16.0
                      Eclipse                            12.3
Project Management    BIM 360                            32.1
                      BIM Track                          18.5
                      Stack                              12.3
                      Sage 300 Construction              11.1
                      Oracle Primavera                   19.8
Quantity Surveying    Workmate                           16.0
                      QSCAD                              21.0
                      On-Screen Takeoff and Quick Bid    14.8
                      QSPlus                             18.5
                      Electrical Bid Manager             16.0
                      MATLAB                             13.6
                      CATIA                              12.3
Design                Cogo                               9.9
                      GenieBelt                          9.9
                      XCircuit                           16.0
                      STAAD                              21.0
                      Archdesk                           33.3
                      Revit                              53.1
                      AutoCAD                            87.7
                      Lumion                             34.6
Presentation          Microsoft PowerPoint               98.8
                      Microsoft Excel                    100.0
                      Microsoft Word                     93.8

Source: Field Survey (2019)

Presentation software is used to give a more comprehensive representation of the data and information garnered from the other tools or from general knowledge [9, 10].

Table 4. Evaluation of challenges and opportunities relevant to ICT tools deployment by professionals

Challenges                                          Index    Rank
Erratic Power Supply                                0.3778   1st
High Cost of Hardware/Software                      0.3951   2nd
Job Fees                                            0.4370   3rd
Job Sizes                                           0.4395   4th
Inadequate ICT Content in Construction Education    0.4790   5th
Scarcity of Professional Software                   0.5111   6th
Lack of Management Desire                           0.5111   7th
Security                                            0.5235   8th
Low Return on Investment                            0.5235   9th
Personnel Abuse                                     0.5556   10th
Makes Professionals Redundant                       0.5778   11th

This section evaluates the challenges and opportunities relevant to the deployment of ICT tools in the built environment professions. Table 4 ranks the challenges faced. Erratic power supply is the most significant challenge to deploying ICT tools in the built environment: virtually all the tools are power-intensive, and the erratic power situation in the country affects their deployment. The high cost of hardware and software is another stumbling block, as these tools are not cheap to procure. Job fees and job sizes are also important challenges, where the fees paid do not justify the cost and effort of using the tools, or the job is too small to warrant them. Inadequate ICT content in construction education is a further challenge, as most professionals are not exposed early to the various tools available for use in the built environment. Table 5, in turn, ranks the opportunities arising from the deployment of ICT tools in the built environment professions.


Table 5. The challenges and opportunities relevant in deployment of ICT tools in the built environment professions by professionals

Opportunities             Index    Rank
Time Efficiency           0.3259   1st
Increased Productivity    0.3358   2nd
Ease of Work              0.3506   3rd
Reduction in Errors       0.3827   4th
Competitive Advantage     0.4049   5th
Cost Reduction            0.4222   6th

Source: Field Survey (2019)

Table 5 shows that these tools are seen above all as time efficient: less time is spent carrying out the respective tasks with them than without them. Increased productivity is also seen as a major opportunity, with more work covered in a relatively shorter period, and the work itself becomes easier to carry out thanks to the deployment of these ICT tools [7, 8, 11].

Table 6. Evaluation of professionals' opinions on effectiveness of ICT tools deployment in built environment professions

Opinion                           Index    Rank
Saves Time                        0.2889   1st
Makes Job Easier                  0.3111   2nd
Enhances Productivity             0.3333   3rd
Improves Document Presentation    0.3432   4th
Increases Speed of Work           0.3531   5th
Reduces Difficulty of Work        0.3556   6th
Reduces Construction Error        0.3778   7th
Gives Competitive Advantage       0.3975   8th
Facilitates Decision              0.4099   9th
Reduces Operational Cost          0.4296   10th

Source: Field Survey (2019)

Table 6 shows that the professionals consider ICT tools effective above all in saving time when carrying out and completing their various tasks. The tools are also seen to make the job easier, with fewer difficulties encountered in their use. Enhancement of productivity rounds off the top three.


ICT tools are believed to help increase the productivity of work done over time. The deployment of ICT tools is, however, seen as having the least effect on operational cost, which sits at the bottom of the survey with an index score of 0.4296, though some respondents still consider the tools helpful in facilitating decision-making in their fields of operation, while others see no competitive advantage, perhaps because the majority of professionals already use the same tools, leaving a level playing field. In summary, the study assessed the level of ICT application by professionals in built-environment-related vocations, with a view to improving the level of ICT application in Nigeria, via a questionnaire survey of Architects, Builders, Engineers, Surveyors, and Quantity Surveyors, and examined the current status of ICT use in the built environment. The most commonly used software packages are (i) Microsoft Excel (100.0%), (ii) Microsoft Word (98.8%), and (iii) Microsoft PowerPoint (93.8%); AutoCAD is the most popular for architectural/engineering design and drawing at 87.7%, QSCAD (21.0%) for quantity surveying, BIM 360 (32.1%) for project management, and CoConstruct (19.8%) for building management. The top three benefits of ICT as perceived by the respondents are (i) saves time, (ii) makes the job easier, and (iii) enhances productivity. The three major challenges are (i) erratic power supply, (ii) the high cost of purchasing ICT-related software and/or hardware, and (iii) job size and fees [12–15].

5 Recommendations
To address erratic power supply, the government should provide a steady power supply, and each organization should also provide backup power options in case of failure. The cost of procuring hardware and software should be made more affordable. Job sizes and fees should be proportionate, to encourage the use of ICT-related tools that improve and enhance built environment operations. Professionals should also take greater advantage of the various ICT tools available for carrying out their operations.
Acknowledgement. The authors appreciate the support of the Covenant University Centre for Research, Innovation and Discovery (CUCRID) in sponsoring this research.

References
1. Amor, R., Betts, M., Coetzee, G., Sexton, M.: Information technology for construction: recent work and future direction. J. Inf. Technol. Constr. 7, 245–258 (2002). http://www.itcon.org/cgi-bin/works/show2002-16
2. Arif, A.A., Karam, A.H.: Architectural practices and their use of IT in the Western Cape Province, South Africa. J. Inf. Technol. Constr. 6, 17–34 (2001). http://www.itcon.org/2001/2
3. Björk, B.C.: The impact of electronic document management on construction information management. In: Proceedings of the International Council for Research and Innovation in Building and Construction, CIB W78 Conference 2002, Aarhus, 12–14 June 2002
4. Boyd, C., Paulson, J.R.: Computer Applications in Construction. McGraw-Hill, Singapore (1995)


5. Doherty, J.M.: A survey of computer uses in the New Zealand building and construction industry. J. Inf. Technol. Constr. 2, 45–57 (1997)
6. El-Haram, M.A., Horner, M.W.: Factors affecting housing maintenance cost. J. Qual. Maint. Eng. 8(2), 115–123 (2002)
7. Issa, R.R.A., Flood, I., Caglasin, G.: A survey of e-business implementation in the US construction industry. J. Inf. Technol. Constr. 8, 15–28 (2003). http://www.itcon.org/2003/2; ITU: ITU Digital Access Index: World's First Global ICT Ranking (2003). http://www.itu.int/newsarchive/press_releases/2003/30.html
8. Kangwa, J., Olubodun, F.: An investigation into home owner maintenance awareness, management and skill-knowledge enhancing attributes. Struct. Surv. 21(2), 70 (2003)
9. Li, H., Irani, Z., Love, P.E.D.: The IT performance evaluation in the construction industry. In: Proceedings of the 33rd Hawaii International Conference on Systems Science (2000)
10. Knight, P., Boostrom, E.: Increasing Internet Connectivity in Sub-Saharan Africa: Issues, Options, and World Bank Group Role. World Bank Publications (1995)
11. Lim, Y.M., Rashid, A.Z., Ang, C.N., Wong, C.Y., Wong, S.L.: A survey of internet usage in the Malaysian construction industry. J. Inf. Technol. Constr. 7, 259–269 (2002). http://www.itcon.org/2002/17
12. Liston, K.M., Fischer, M.A., Kunz, J.C.: Designing and evaluating visualization techniques for construction planning. In: Proceedings of the 8th International Conference on Computing in Civil and Building Engineering (ICCCBE-VIII), Stanford University, Stanford, CA, pp. 1293–1300 (2000)
13. Marshall, S., Taylor, W.: Using ICT for rural development. Int. J. Educ. Dev. 2(2), 1–7 (2007). http://ijedict.dec.uwi.edu/2007
14. Maqsood, T., Walker, D.H.T., Finegan, A.D.: An investigation of ICT diffusion in an Australian construction contractor company using SSM. In: Proceedings of the Joint CIB-W107 and CIB-TG23 Symposium on Globalisation and Construction, Bangkok, Thailand, pp. 485–495 (2004)
15. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. J. Mol. Biol. 147, 195–197 (1981)

The Influence of E-commerce Web Page Format on Information Area Under Attention Mechanism

Fan Zhang1(B), Yi Su1, Jie Liu2, Nan Zhang1, and Feng Gao3

1 Department of Industrial Design, Beijing Institute of Fashion Technology, Beijing, China
2 China Assistive Devices and Technology Center for Persons with Disabilities, Beijing, China
3 Creative Design Center, Alibaba, Inc., Hangzhou, China

[email protected]

Abstract. With the rapid development of Internet technology and industry, e-commerce has become one of the major shopping channels for Chinese consumers. Unlike traditional web page design, the basic design goal of an e-commerce web page is to promote use of the e-commerce website and ultimately convert that use into purchase decisions. Previous studies show that web page format design can affect users' browsing behavior. This paper studies, through an eye tracking experiment, how e-commerce web page format affects users' attention to information. Eye tracker capture was used to obtain data on differences in user browsing behavior (browsing path, fixations) under different column ratios. On the one hand, the results help designers lay out the information content precisely according to users' browsing rules, i.e., to place content according to the importance of the information; on the other hand, they allow designers to judge in advance the overall information-attention effect of a design scheme. Keywords: E-commerce · Eye tracking experiment · Viewer's attention · Web page format · Column ratio · Evaluation methodology

1 Introduction
With the rapid development of computer technology, e-commerce websites recommend commodities precisely through intelligent technologies such as data mining and machine learning, and release advertisements related to user needs, thus improving the purchase rate. However, the rapid growth of information imposes a greater cognitive load on users, and the scarcity of 'attention' places new demands on the visual design of e-commerce web pages. At present there is little systematic research on visual presentation forms, so winning users' attention resources through precise visual design has become key to improving marketing effect and the effectiveness of communication between enterprises and users. Attention is 'the direction and concentration of psychological activities on a certain object (to the exclusion of other factors)', a psychological feature that accompanies the processes of perception, memory, thinking, and imagination.


Its function is to regulate and control the information input channel and to sample, compress, and select the input information; it is the first link in making people active processors of information. Consumer behavior includes the consumer's attention, understanding, memory, judgment, and choice, and focusing on users' 'attention' when they use products is a key factor in commercial marketing. E-commerce web pages are an important way for consumers to obtain commodity information, and the visual design question is which way of presenting information best attracts consumers' attention and so produces the best marketing effect.
In the study of human-computer interfaces, capturing user browsing behavior through eye movement is one of the important objective evaluation methods, and a large number of studies have demonstrated the influence of user behavior principles and interface layout on browsing behavior. According to an early study by the Nielsen/Norman research team, people's reading patterns follow an F-shape, meaning users are likely to miss information placed away from the left [1]; further eye tracking experiments have shown that the type of search task and the information content affect F-shaped browsing. Noton and Stark put forward the global scanning theory in 1971, modeling the eye movement pattern in browsing and recognizing objects [2]. R. Boha et al. compared the advantages of horizontal and vertical scanning strategies, and a series of experiments by Janiszewski proved that competition for attention affects not only the duration of eye fixation an object receives but also the efficiency of information processing [3]. Eye movement technology therefore provides a reliable method for testing how intensely users watch web pages.
To sum up, the visual design of web pages is related to users' browsing behavior patterns and thus affects the likelihood that information in different areas will be noticed; improving effective communication between the key information on a page and its users is an important goal of web visual design. Therefore, based on users' eye movements and cognitive processing during browsing, this paper uses eye tracking experiments to study how e-commerce web page format affects users' recognition of information, i.e., the degree of attention and the browsing path.

2 Background
2.1 Web Page Format Design
Visual elements are the basic units of the web page interface; their different forms of expression and combination affect users' subjective perception of the page (aesthetics, ease of use, etc.) and their browsing behavior. Through a morphological analysis experiment on web pages, Yang-Cheng Lin identified seven main design elements: the proportion of pictures and text, the proportion of white space, the format, the layout style, the hyperlink style, the number of colors, and the background color [4]. Format is one of the important factors that affect users' browsing. Many studies have compared users' perception of e-commerce pages in list format versus matrix format: the list format, with its top-down browsing path, is better suited to searching for specific information and comparing items, whereas the matrix format, with its evenly distributed browsing path, is better suited to random browsing [5]. Common web pages are divided into two columns, and the ratio of the columns is an important factor affecting users' browsing [6, 7].


In recent years, many researchers have studied the column ratios of web pages. Ngo et al. pointed out that the golden ratio applies to the length and width of pictures and improves aesthetic value [6]. Other research has examined the proportions used to segment web page text, seeking an optimal ratio that helps users work efficiently. The most important result comes from Van Schaik and Ling [8], who investigated five column ratios and found the search time to be shortest with a 23:77 column ratio. Chien-Hsiung further studied the effect of screen resolution on column ratio, showing that a 24:76 ratio was best at a resolution of 800 × 600 and 15:85 at 1024 × 768, and that users preferred the structure with a narrow left column and a wide right column [9].
2.2 Visual Attention Theory
The visual design of web pages affects the likelihood of information in different positions being noticed. Previous researchers have studied the relationship between attention and information recognition extensively, which provides the theoretical basis for this experiment.
Visual Attention Hierarchy Model. According to the visual attention hierarchy model, browsing is guided by two distinct cognitive processes: searching and scanning. The browser attempts to find an entry point on the page and then extracts the information near that entry point [10]. Both processes are affected by the attributes of the page's visual elements; when these elements are effectively designed, their attributes can guide users through the page.
Competition-for-Attention Theory. Attention is considered a limited processing capacity distributed across many positions in the visual field, and its allocation depends on the saliency of the objects in sight and their distance from the visual focus [11]. Research on human visual perception shows that an object's distance from the visual focus determines how much attention it receives, and a series of experiments by Janiszewski proved that competition for attention affects not only the duration of eye fixation an object receives but also information processing efficiency [3].

2.3 Eye Movement Measurement Methods
This paper mainly uses eye tracker capture to test how the column ratio of the page format affects users' attention. Eye movement plays an important role in visual cognition; it consists mainly of alternating fixations and saccades that form a sequence. A 'fixation' means the gaze stays at a position for more than 100 ms, a pause used mainly to acquire information from the interface or to process it internally. A 'saccade' is a rapid jump of the eyes between two fixation points, lasting 30–120 ms. Current eye tracking experiments use analysis indicators including Fixation Duration, Fixation Count, Visit Duration, Visit Count, and Time to First Fixation. Visualization methods include heat maps and fixation trajectory maps.


Some researchers hold that attention can be measured by fixation behavior within an area of interest (AOI): fixation frequency can serve as an indicator of the importance of the AOI, and fixation duration as an indicator of the difficulty and complexity of the visual presentation.
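As a minimal sketch of how fixations can be extracted from raw gaze samples (the 100 ms duration threshold comes from the text; the pixel dispersion threshold and the sample format are assumptions, and commercial software such as Tobii Studio performs this step internally):

```python
def detect_fixations(samples, max_dispersion=30, min_duration=100):
    """Group gaze samples (t_ms, x, y) into fixations with a simple
    dispersion-threshold method: a fixation is a run of samples whose
    spread stays within max_dispersion pixels for at least min_duration ms."""
    fixations, start = [], 0
    while start < len(samples):
        end = start + 1
        while end < len(samples):
            xs = [s[1] for s in samples[start:end + 1]]
            ys = [s[2] for s in samples[start:end + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        duration = samples[end - 1][0] - samples[start][0]
        if duration >= min_duration:
            xs = [s[1] for s in samples[start:end]]
            ys = [s[2] for s in samples[start:end]]
            fixations.append((samples[start][0], duration,
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            start = end
        else:
            start += 1
    return fixations  # list of (onset_ms, duration_ms, mean_x, mean_y)
```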

3 Research Method
The 'column' is the basic element of web page layout design: it controls the presentation and segmentation of information content through column division, thereby affecting users' attention while browsing. E-commerce PC-client pages commonly use one to five columns, divided either in equal or unequal proportions. This experiment focuses on the two-column design because of its high-frequency use in e-commerce operation pages; users' browsing behavior is captured by an eye tracker, and the browsing paths and the attention differences across columns caused by different column ratios in the page layout are studied. The conclusions help designers identify attention-dominant areas under different column ratios and lay out content according to the importance of the information.
3.1 Experimental Design
This is a two-factor 4 × 2 experiment crossing column ratio and block density, studying as a whole the differences in attention distribution caused by different left/right column ratios and block densities. Block density refers to the number of blocks a floor of fixed area is divided into. Following the skeleton-layout proportions common in operation pages, we studied the browsing trajectory and degree of fixation (visit duration, number of fixation points) under four column ratios (equal division 1:1 and unequal divisions 3:2, 1:2, 1:3) crossed with two block densities (4 blocks, 6 blocks).
3.2 Materials
Following the experimental design, we produced samples of eight kinds of two-column skeleton operation pages. In each floor of a sample, the left column holds a large advertisement banner and the right column holds several commodity pictures, the pictures being selected from a material library prepared in advance. Apart from the changes in column ratio and density within the floors, the page header, page margins, floor height, and floor spacing are identical. Three sets of samples, each containing the eight formats, were produced, and each user browsed one randomly assigned set, to eliminate any influence of preference on the results. The pages are still images of 1440 × 1650 pixels that can be scrolled with the mouse, but the commodity images cannot be clicked. After browsing freely, the user clicks to switch to the next page.
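For illustration only, the eight page formats of the 4 × 2 design in Sect. 3.1 and the random assignment of one sample set per participant can be enumerated as follows (the set labels are hypothetical):

```python
import random
from itertools import product

ratios = ["3:2", "1:1", "1:2", "1:3"]   # left:right column ratios
densities = [4, 6]                      # blocks per floor

conditions = list(product(ratios, densities))   # 4 x 2 = 8 page formats
assert len(conditions) == 8

sample_set = random.choice(["A", "B", "C"])     # one of the three sample sets
print(sample_set, conditions)
```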


3.3 Apparatus and Participants
Thirty-one subjects participated in the experiment, all staff or students in Hangzhou aged 18–35. Since the purpose of the experiment is to find common rules of users' browsing behavior and demographic factors are not variables, subjects were recruited from a range of backgrounds. All held a bachelor's degree or above, had more than three years of internet use experience and e-commerce shopping experience, had normal or corrected-to-normal eyesight, and were accustomed to reading characters from left to right; subjects received compensation after completing the experiment. A desktop computer with a commonly used 19-inch display played the experimental materials and connected to the eye tracker. A Tobii X2 non-contact eye tracker collected the eye movement data, at a sampling rate of 120 Hz with eye movement data collected every 20 ms. Its high-resolution camera, equipped with near-infrared light-emitting diodes, places few constraints on the subject and the environment and captures user behavior more naturally. Eye movement trajectories were recorded with Tobii Studio, and the data for each AOI were analyzed. The experiment took place in the Alibaba Taobao eye movement laboratory, which is equipped with one-way glass; the main test was conducted in the observation room.
3.4 Data Measurement and AOI Definitions
The eye tracker analysis software Tobii Studio was used for data analysis. The main purpose of the experiment was to test users' attention performance on the left and right sides of the two-column skeleton pages under the different column ratios and densities, and to find the rules governing attention-dominant areas, so we superimposed the eye movement data of all valid subjects (25 people) in Tobii Studio. First, a heat map of the whole page was derived to analyze the overall browsing and fixation situation intuitively. In addition, the left and right columns, and each block in the right column of every floor, were defined as separate AOIs, and the following indexes were analyzed for each: visit duration, number of fixation points, and time to first fixation.
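A minimal sketch of how the three AOI indexes can be computed from a list of detected fixations such as those produced above (the AOI rectangles and coordinates are hypothetical; Tobii Studio reports these measures directly):

```python
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def aoi_metrics(fixations, aoi):
    """fixations: (onset_ms, duration_ms, x, y) tuples for one recording.
    Returns time to first fixation, total visit duration, and fixation count."""
    hits = [f for f in fixations if aoi.contains(f[2], f[3])]
    if not hits:
        return None
    return {"aoi": aoi.name,
            "time_to_first_fixation_ms": min(f[0] for f in hits),
            "visit_duration_ms": sum(f[1] for f in hits),
            "fixation_count": len(hits)}

# Hypothetical AOIs for a 1440-px-wide floor divided 1:2 between two columns
left_col = AOI("left column", 0, 0, 480, 400)
right_col = AOI("right column", 480, 0, 1440, 400)
```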

4 Results
4.1 Heat Map
First, the left and right columns were compared as a whole. At left/right column ratios of 3:2, 1:1, and 1:2, fixation on the left column was higher regardless of the number of blocks, while at a ratio of 1:3, fixation on the left and right columns was more even. Second, the blocks within the right column were compared. Under low block density, at ratios of 3:2 and 1:1 the left block of the right column received more fixation than the right block, while at ratios of 1:2 and 1:3 fixation on the blocks of the right column was more even. Under high block density, at ratios of 3:2 and 1:1 the middle block of the right column received more fixation than the blocks to its left and right, while at ratios of 1:2 and 1:3 fixation on the blocks was again more even (shown in Table 1).


Table 1. Heat maps for the different column ratios and block densities of the two-column pages (heat map images for high density 3:2, 1:1, 1:2, 1:3 and low density 3:2, 1:1, 1:2, 1:3 are not reproduced here).

4.2 AOI Data Analysis
We further analyzed the detailed data of each defined AOI, including the time to first fixation and the visit duration.
Time to First Fixation. First, the left and right columns were compared as a whole. When the left/right ratio was 1:3, the time to first fixation of the left column was essentially the same as that of the right column, or even later; at all other ratios the subjects' gaze entered the left column first. Second, comparing the blocks within the right column: under low density at column ratios of 3:2, 1:1, and 1:2, the time to first fixation of the right-column blocks ran from left to right in turn, while at 1:3 the browsing order of the blocks was not obvious. Under high density at ratios of 3:2 and 1:1, the time to first fixation of the middle block of the right column was earlier than that of the blocks to its left and right.

Table 2. Time to first fixation (s) for each AOI under the eight conditions (high density 3:2, 1:1, 1:2, 1:3; low density 3:2, 1:1, 1:2, 1:3); the per-condition data grids are not reproduced here.


Under high density at ratios of 1:2 and 1:3, the browsing order of the blocks in the right column followed no obvious rule (shown in Table 2).
Visit Duration. First, comparing all blocks: under low density, the visit duration of the left-column block on the first floor was higher than that of the right-column blocks at every column ratio, but on the second floor the block fixation times gradually converged as the area of the left-column block decreased. Under high density, only at a column ratio of 1:3 was the visit duration of the left-column block essentially the same as that of each block in the right column, with no left-column advantage; in all other cases the visit duration of the left-column block was higher. Second, comparing the blocks within the right column: at left/right ratios of 3:2 and 1:1 the fixation time of the left block was slightly higher than that of the right, while at 1:2 and 1:3 the fixation times of the blocks were more even. Under high density at 3:2 and 1:1, the visit duration of the middle block of the right column was longer, and at 1:2 and 1:3 the visit durations of the right-column blocks were more even (shown in Table 3).

Table 3. Visit duration (s) for each AOI under the eight conditions (high density 3:2, 1:1, 1:2, 1:3; low density 3:2, 1:1, 1:2, 1:3); the per-condition data grids are not reproduced here.

5 Discussion and Conclusion
The results of the eye tracking experiment revealed the attention situation under the different left/right column ratios (3:2, 1:1, 1:2, 1:3). We draw the following conclusions from the heat maps and the AOI data, respectively.
5.1 Browse Path
The Browsing Order of the Left and Right Columns. Except when the left column was markedly smaller than the right column (ratio 1:3), the subjects' gaze entered the left column first.


Under all the other ratios, users' browsing order generally conformed to left-to-right reading habits, and the attention dominance of the left column was evident.
Specific Browsing Sequence. At a column ratio of 1:3 there was no obvious browsing rule for the blocks, the left column losing its attention dominance because of its small area. At 3:2, 1:1, and 1:2, the blocks were browsed essentially from left to right and from top to bottom (shown in Fig. 1).

Fig. 1. Browsing paths under the various column ratios

5.2 Attention
Comparison Between Upper and Lower Floors. The time to first fixation of the first-floor blocks was earlier than that of the second-floor blocks, but the difference in fixation time was not marked. Because the experimental material had only two floors, users could scan all areas smoothly while browsing; for the browsing rules of upper and lower floors, previous research conclusions can be referenced: attention is most concentrated on the first floor and then attenuates floor by floor.
Overall Comparison of Left and Right Columns. Visit duration represents the intensity of the user's attention. First, under low density, when the left column of the first floor was wider than the right column or the two columns were of equal area (3:2, 1:1), the visit duration of the left column was slightly higher; when the left column was narrower than the right (1:2, 1:3), its visit duration was lower than the right column's. On the second floor, however, the fixation time of the left column was lower than that of the right column in all cases. While browsing the first floor, the areas of the left and right columns related directly to the strength of attention, but once users had adapted to the page layout they looked more carefully at the commodity information in the right column's cell blocks, so the right column of the second floor obtained more attention. Second, under high density, the fixation time of the left column was lower than that of the right column in all cases: the densely packed right-column blocks carried more commodity information and drew more attention resources.


Comparison of All Blocks in Left and Right Columns. Except at a left/right ratio of 1:3, i.e., at ratios of 3:2, 1:1, and 1:2, the blocks in the left column showed clear attention dominance. In most cases the attention-deficit area was the right block on the second floor of the right column.
To sum up, based on the theory of users' attention, this paper used eye tracking experiments to capture users' browsing paths and attention data and studied how the format of e-commerce pages affects the degree of attention to information. The experimental data verified the 'rule of attention' principle and, in most cases, the 'left to right, top to bottom' principle, and both column ratio and block density affected the browsing path and the distribution of attention. The experiments showed that the method can effectively judge the concentration of attention in each area of a page and assist designers with information layout.
Acknowledgment. The research is supported by a research project fund of Beijing Institute of Fashion Technology. We also thank Feng Gao and Minneng Lin of Alibaba UED for encouraging us to publish this work.

References
1. Nielsen, J.: F-Shaped Pattern for Reading Web Content (2006). http://www.useit.com/alertbox/reading_pattern.html
2. Noton, D., Stark, L.: Eye movements and visual perception. Sci. Am. 224(6), 35 (1971)
3. Janiszewski, C.: The influence of display characteristics on visual exploratory search behavior. J. Consum. Res. 25(3), 290–301 (1998)
4. Lin, Y.-C., Yeh, C.-H., Wei, C.-C.: How will the use of graphics affect visual aesthetics? A user-centered approach for web page design. Int. J. Hum.-Comput. Stud. 71(3), 217–227 (2013). https://doi.org/10.1016/j.ijhcs.2012.10.013
5. Schmutz, P., Roth, S.P., Seckler, M., et al.: Designing product listing pages – effects on sales and users' cognitive workload. Int. J. Hum.-Comput. Stud. 68(7), 423–431 (2010)
6. Ngo, D.C.L., Ch'ng, E.: Screen design: composing with dynamic symmetry. Displays 22(4), 115–124 (2001). https://doi.org/10.1016/S0141-9382(01)00060-9
7. Ch'ng, E., Ngo, D.C.L.: Screen design: a dynamic symmetry grid based approach. Displays 24(3), 125–135 (2003). https://doi.org/10.1016/j.displa.2003.09.002
8. Van Schaik, P., Ling, J.: The effects of graphical display and screen ratio on information retrieval in web pages. Comput. Hum. Behav. 22(5), 870–884 (2006)
9. Chen, C.H., Chiang, S.Y.: Effects of screen resolution and column ratio on search performance and subjective preferences. Displays 33(1), 28–35 (2012)
10. Faraday, P.: Visually critiquing web pages. In: Correia, N., Chambel, T., Davenport, G. (eds.) Multimedia '89, pp. 155–166. Springer, Vienna (2000). https://doi.org/10.1007/978-3-7091-6771-7_17
11. Hornik, J.: Quantitative analysis of visual perception of printed advertisements. J. Adv. Res. 41–48 (1980)

Usage of Cloud Storage for Data Management in the Built Environment

Ornella Tanga(B), Opeoluwa Akinradewo, Clinton Aigbavboa, and Didibhuku Thwala

cidb Centre of Excellence, Faculty of Engineering and the Built Environment, University of Johannesburg, Johannesburg, South Africa

Abstract. Nowadays all organizations, without exception, depend on data and on its management and storage to ensure project success. Data management involves collecting, storing, sharing, controlling, and retrieving the information contained in a project, coming from different sources. These data need to be stored properly so that they are available for reference whenever needed, enabling smooth project execution. The aim of this study is to discuss the capabilities of cloud storage for data management in the built environment across the various phases of project execution. The study adopted a literature review methodology to draw knowledge on how cloud storage operates and the services it provides. The reviewed literature makes it evident that data storage is highly important for recording and protecting data throughout the project lifecycle, because it gives data owners greater security than the traditional storage facilities available. The study concluded that a good data storage platform helps project parties avoid loss of data and information, scattered data, data incompleteness, project interruption due to data unavailability, and data theft. The study recommends that construction parties acquire knowledge of the different technologies required for cloud data storage in order to improve data management on built environment projects. Keywords: Data management · Cloud storage · Built environment · Construction industry

1 Introduction
Stonier [1, 2] put forward that data refers to a collection of disconnected observations and facts, which can be translated into information by cross-referencing, examining, filtering, sorting, and organizing them in some way. Zins [3] defined data as a class of information objects consisting of units of binary code. In binary code the data units are not immediately understandable or meaningful to human beings; they are designed to be communicated, kept, and managed with the help of digital computers. Soibelman [4] stated that in the construction industry data come in many different forms and structures: videos, binary pictures, images, text documents, spreadsheets, software, surveys, reports, drawings, audio, financial statements, employee records, and contracts, among others.


Ismail [5] asserted that the increased use of modern technologies has caused the volume of data collected in the construction industry to grow. This large accumulation of data, called big data, has led the building sector into a revolutionary phase, and it requires good management for better project success. In every organization, data management is a source of project success and firm growth [6]: it is a powerful lever that enhances project success by facilitating decision-making, saving money and time, supporting quality delivery, enabling the exchange of documents and communication among stakeholders, and driving continuous project improvement and problem-solving, among other benefits [6–8]. Opitzak et al. [9] defined data management as the process of collecting, storing, sharing, controlling, and retrieving the information contained in project documents coming from different sources. Carillo et al. [10] and Cheung et al. [7] observed that the construction sector is a complex field that deals with a large amount of data, originating on-site or off-site, which must be stored, kept safe, and shared among the parties involved in a project. Grover [11] asserted that the construction industry frequently experiences low productivity owing to poor data management during the project lifecycle, which has led to many mistakes on projects and can lead to more mistakes on future projects [11]. Data management enables an organization to handle its data so as to promote project success and gain competitive advantage [10, 12]. DAMA [13] defined data management as the development, execution, and control of regulations, programs, plans, and practices that enable the tracking, protection, delivery, and improvement of the value of data and information assets. Cloud storage is useful for achieving data management because it saves money, as you pay only for what you use, and the data are stored remotely [14].
Given the usefulness of cloud storage, the objective of this study is to review how it can be incorporated into data management for the built environment. The work relies entirely on a review of the usage of cloud storage for data management (considering the cloud storage mechanism and security system as well as its benefits and advantages) for built environment projects, using secondary data to achieve the research objective. To search for related literature, keywords such as 'construction industry', 'built environment', 'cloud storage' and 'data management' were applied on the Emerald, IEEE, ISI Web of Science, SCOPUS, and Taylor & Francis databases, these being the most popular and commonly used databases for scientific research [15]. The search returned more than 250 publications, which were examined carefully to determine their relevance to the subject of this study. Seventy-two articles were found eligible, with the search restricted to journal and conference publications between 1996 and 2018; publications by cloud storage developers were also used, as they are authorities in the field. A total of 24 articles were eventually considered significant, as they discussed the characteristics of cloud computing amid modern-day advances in technology.

2 Data Management in the Construction Industry
The construction industry is complex and involves the exchange of a huge amount of data in executing different projects [7]. These data need to be stored properly so that they are available for reference whenever needed, for smooth project execution.


Data management is the process of collecting, storing, sharing, protecting, and retrieving data [16]. Data is at the heart of every organization, including the construction industry, and its storage matters because it protects and keeps the important information required before project commencement, during execution, and after completion [17]. According to Zhu [18], the lack of a good data storage system leads to risks such as loss of data or information, time wastage, delay, incorrect decisions, and miscommunication. Data storage on construction projects acts like a mirror of the project's evolving condition: good data storage promotes better project outcomes, and bad data storage leads to failure [19]. Gangane et al. [19] further noted that the quality control and work quality of a project are driven by the state of its data storage.
The way data were stored before digitalization differs from the way they are stored today. According to Gray [20], before the second century data were stored on wood strips and clay; during the second century, Goda [21] notes, data were recorded manually on paper. In 1762 punch cards were invented and served as a storage tool, as explained by Hauksson [22]. Kimizuka [23] records that in 1928 Fritz Pfleumer invented magnetic tape for sound recording. Around the 1980s, magneto-optical storage media appeared, bringing CDs, DVDs, and BDs into use [21]. From the 1990s to the 2000s, data were stored in devices such as Network-Attached Storage (NAS) and Storage Area Networks (SANs) [21, 24]. In the early 2000s, storage-class memory such as flash memory and SD cards was used; in the late 2000s, consolidation and virtualization storage were utilized; and sophisticated storage applications followed for advanced, innovative data storage. This storage system involves cloud computing (Windows Azure Storage, Amazon S3, and Google Cloud Storage), whose major services help clients manage and store their purchased content in a remote cloud [21, 25].
2.1 Cloud Storage
Yan [26] stated that with today's heavy use of computers, cloud computing has become a hot topic everywhere in the world, and that there is a direct relationship between big data and cloud computing: the more data grow, the more cloud computing is used. Agrawal et al. [27] defined cloud computing as a form of computing that focuses on exchanging, transferring, and pooling computing resources instead of having personal computers or local servers manage applications. Ji et al. [28] observed that the cloud computing network and platform offer a single system view across all dimensions of processing assets (applications, equipment, and programming). Calheiros et al. [29] likewise asserted that cloud computing refers to on-demand computer systems and resources capable of offering a variety of integrated computing services, unrestricted by local resources, for easy user access; these resources cover task scheduling and software processing as well as the storage, self-synchronization, and backup of data.
Also, cloud computing can be defined as a hybrid development that has arisen from virtualization, utility computing, IaaS (Infrastructure as a Service), SaaS (Software as a Service), PaaS (Platform as a Service) and newer distributed computing innovations such as grid and parallel computing, or the commercial application of such computer science concepts [30]. Ahmed [31], Vacca [30] and Simplilearn [32] explained that IaaS, PaaS and SaaS represent the three traditional cloud computing service models. Firstly, SaaS refers to cloud service providers, such as Oracle and Azure among others, offering users different software applications that can be used without being installed on the computer; this model is adopted by organizations that do not want to maintain information technology (IT) tools. Secondly, the IaaS model is adopted when an organization or company requires a virtual machine. IaaS is based on a pay-as-you-go system, and service providers offer various infrastructure such as computing capacity and storage; in this type of cloud service model, a third-party provider manages servers, hardware, storage and similar resources on behalf of its users. Lastly, PaaS is adopted by organizations that need a platform and tools to create a software product; in this type of cloud service, the provider handles everything, including the middleware and operating system. Terzo et al. [33] opined that, apart from these three cloud computing service models, an alternative model called DaaS (Data as a Service) has been developed for effective big data processing, distribution and management. This model is closely connected to SaaS and can easily be paired with one or more of the models mentioned above [34].

Yan [26] submitted that the main objective of cloud computing is cloud storage. Galloway [14] opined that cloud storage involves using the cloud to store data and files instead of a local system. Cloud storage, in other words, can be defined as a system that enables data storage on the Internet. It is very advantageous because a server/client can access and retrieve information at any time from different devices and locations [28]. Loomis [35] asserted that in times past organizations utilised SANs, which required a lot of money to maintain; moreover, the increasing volume of stored data forced organizations to invest more in infrastructure and to add servers to meet the growing demand. In contrast to traditional data storage, where SANs were utilized to store organizational files, cloud storage is far better in terms of functional and performance requirements, demand for services, cost and portability. Rai et al. [36] submitted that data storage in cloud computing is achieved via a cloud service provider (CSP); through the CSP, the user can communicate with the cloud servers to retrieve and access data in a cloud server environment. Ferraiolo [37] asserted that cloud storage developers also took data management into consideration by ensuring data availability and security. Thus, a range of protection methods and techniques such as Secure Socket Layer (SSL), Erasure Codes (EC) and Access Control Lists (ACL) are used to protect sensitive data. Ferraiolo [37] also emphasized that cloud storage ensures the reliability, flexibility, scalability and other desirable characteristics of data management: the service does not stop, and data integrity and availability are maintained even during unexpected hardware disruptions. Furthermore, Loomis [35] explained the various advantages of using cloud storage instead of traditional storage. For instance, with cloud storage you pay as you go, meaning you pay only for what you use, and no local server space is required to store big data.
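To make this storage model concrete, the following minimal sketch uploads and later retrieves a project document with Amazon S3, one of the cloud storage services named above, using Python's boto3 library. The bucket, key and file names are hypothetical, and valid credentials are assumed to be configured in the environment; this is an illustration, not part of the cited studies.

```python
# Minimal sketch: storing and retrieving a construction project document
# in Amazon S3 with boto3. Bucket, key and file names are hypothetical;
# AWS credentials are assumed to be configured in the environment.
import boto3

s3 = boto3.client("s3")

# Store a site report so that it is available to all project parties.
s3.upload_file(
    Filename="daily_site_report.pdf",
    Bucket="construction-project-records",
    Key="project-a/reports/2021-06-30.pdf",
)

# Any authorised user can later retrieve the same document from any
# device and location with Internet access.
s3.download_file(
    Bucket="construction-project-records",
    Key="project-a/reports/2021-06-30.pdf",
    Filename="retrieved_report.pdf",
)
```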
A pictorial representation of cloud storage connectivity for data storage and management is depicted in Fig. 1. As shown in Fig. 1, information gathered from various devices is communicated via the Internet to the cloud storage, where the data is secured for retrieval by authorised users in any location and via any device connected to the Internet.

Fig. 1. Cloud storage illustration. Source: Author’s compilation (2021)

3 Lessons Learnt

The main objective of data management in the built environment is to secure, store, control, retrieve and exchange project data among the various members involved in a project. Because data management is a factor that indicates whether a project will fail or succeed, construction project parties must pay close attention to it to avoid unpleasant project results. To address the issues of construction project data management, the utilization of cloud storage is inevitable because of the various benefits it provides. Unlike previous types of storage, which involved large maintenance costs, heavy investment in infrastructure and the addition of servers to satisfy increasing demand, cloud storage offers better functionality and performance, portability and lower costs, among other advantages. Cloud storage is also very useful because it ensures data availability and security. Furthermore, it offers reliability, flexibility and scalability, among other important data management characteristics. Additionally, with cloud storage you pay as you go, meaning that you only pay for what you use.

4 Conclusion and Recommendation

In conclusion, the development of technology has led to the generation of a large amount of data in the built environment. The data generated needs good management for smooth project execution and good project outcomes. Good data record keeping will help project parties to avoid data and information loss, scattered data, data incompleteness, project interruption due to data unavailability, and theft of data. Proper data storage enables data availability during the project execution phase, which promotes time-saving; moreover, good data storage helps to avoid disputes among project parties. With cloud storage, data for the built environment can be managed effectively to ensure that proper project decisions are made whenever necessary. It is recommended that built environment professionals consider and apply cloud data storage technologies for better project outcomes and execution. Organisations will also need to train their staff to equip them with the knowledge of how to store data reliably in the cloud. This study was limited to theoretical findings alone due to time and cost constraints. Further studies could collect empirical data to validate the assertions made here.

Acknowledgement. The authors would like to acknowledge the financial support provided by the National Research Foundation, South Africa, to fund this research.

References
1. Stonier, T.: The nature of information. In: Information and Meaning, pp. 11–29. Springer, London (1997). https://doi.org/10.1007/978-1-4939-1249-0_2
2. Stonier, T.: Information and Meaning: An Evolutionary Perspective. Springer (2012)
3. Zins, C.: What is the meaning of "data", "information", and "knowledge"? Dr. Chaim Zins (2009)
4. Soibelman, L., Wu, J., Caldas, C., Brilakis, I., Lin, K.Y.: Management and analysis of unstructured construction data types. Adv. Eng. Inform. 22(1), 15–27 (2008)
5. Ismail, I.E., Ahmad, A.S.H.: International journal of built environment and sustainability: spatial arrangement of coastal sama-bajau houses based on adjacency diagram. Borneo Res. Bull. 46, 401–402 (2015)
6. Renzl, B.: Trust in management and knowledge sharing: the mediating effects of fear and knowledge documentation. Omega 36(2), 206–220 (2008)
7. Cheung, S.O., Yiu, T.W., Lam, M.C.: Interweaving trust and communication with project performance. J. Constr. Eng. Manag. 139(8), 941–950 (2013)
8. Zhang, N., Yuan, Q.: An overview of data governance. Economics Paper, December 2016
9. Opitz, F., Windisch, R., Scherer, R.J.: Integration of document- and model-based building information for project management support. Procedia Eng. 85, 403–411 (2014)
10. Carrillo, P., Robinson, H., Al-Ghassani, A., Anumba, C.: Knowledge management in UK construction: strategies, resources and barriers. Proj. Manag. J. 35(1), 46–56 (2004)
11. Grover, R., Froese, T.M.: Knowledge management in construction using a SocioBIM platform: a case study of AYO smart home project. Procedia Eng. 145, 1283–1290 (2016)
12. Elfar, A., Elsaid, A.M., Elsaid, E.: How knowledge management implementation affects the performance of Egyptian construction companies. J. Appl. Bus. Res. (JABR) 33(3), 409–438 (2017)
13. DAMA International: The DAMA Guide to the Data Management Body of Knowledge (DAMA-DMBOK) (2009)
14. Galloway, J.M.: A cloud architecture for reducing costs in local parallel and distributed virtualized cloud environments. Doctoral dissertation, University of Alabama Libraries (2013)
15. Guz, A.N., Rushchitsky, J.J.: Scopus: a system for the evaluation of scientific journals. Int. Appl. Mech. 45(4), 351 (2009)
16. Intra-governmental Group on Geographic Information: The principles of good data management. Report (2000)
17. Björk, B.C.: Electronic document management in construction – research issues and results (2003)
18. Zhu, Y., Issa, R.R.: Viewer controllable visualization for construction document processing. Autom. Constr. 12(3), 255–269 (2003)
19. Gangane, A.S., Mahatme, P.S., Sabihuddin, S.: Impact of construction documents and records on project management. Architecture 3, 2–91 (2017)
20. Gray, J.: Evolution of data management. Computer 29(10), 38–46 (1996)
21. Goda, K., Kitsuregawa, M.: The history of storage systems. Proc. IEEE 100(Special Centennial Issue), 1433–1440 (2012)
22. Hauksson, A.G., Smundsson, S.: Data storage technologies (2007). http://olafurandri.com/nyti/papers2007/DST.pdf
23. Kimizuka, M.: Historical development of magnetic recording and tape recorder. Surv. Rep. Systemization Technol. 17 (2012)
24. Nagy, P.G., Schultz, T.J.: Storage and enterprise archiving. In: PACS, pp. 319–345. Springer, New York (2006). https://doi.org/10.1007/0-387-31070-3_16
25. Backupify: A history of data storage. https://www.backupify.com/history-of-data-storage/. Accessed 30 June 2020
26. Yan, C.: Cloud Storage Services (2017)
27. Agrawal, D., Das, S., Abbadi, A.E.: Big data and cloud computing: current state and future opportunities. In: Proceedings of the 14th International Conference on Extending Database Technology, pp. 530–533, March 2011
28. Ji, C., Li, Y., Qiu, W., Awada, U., Li, K.: Big data processing in cloud computing environments. In: 2012 12th International Symposium on Pervasive Systems, Algorithms and Networks, pp. 17–23. IEEE, December 2012
29. Calheiros, R.N., Ranjan, R., Beloglazov, A., De Rose, C.A., Buyya, R.: CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exp. 41(1), 23–50 (2011)
30. Vacca, J.R.: Cloud Computing Security: Foundations and Challenges. CRC Press, Boca Raton (2016)
31. Ahmed, F.F.: Comparative analysis for cloud-based e-learning. Procedia Comput. Sci. 65, 368–376 (2015)
32. Simplilearn: Cloud Computing Tutorial for Beginners
33. Terzo, O., Ruiu, P., Bucci, E., Xhafa, F.: Data as a service (DaaS) for sharing and processing of large data collections in the cloud. In: 2013 Seventh International Conference on Complex, Intelligent, and Software Intensive Systems, pp. 475–480. IEEE, July 2013
34. Rajesh, S., Swapna, S., Reddy, P.S.: Data as a service (DaaS) in cloud computing. Glob. J. Comput. Sci. Technol. (2012)
35. Loomis, C., Airaj, M.: Review of the use of cloud and virtualization technologies in grid infrastructures (2010)
36. Rai, R., Sahoo, G., Mehfuz, S.: Securing software as a service model of cloud computing: issues and solutions. arXiv preprint arXiv:1309.2426 (2013)
37. Ferraiolo, D., Kuhn, D.R., Chandramouli, R.: Role-Based Access Control. Artech House (2003)

Digital Model for Monitoring SME Architecture Design Projects Luis Cardenas1 , Gianpierre Zapata1(B) , and Diego Zavala2 1 Universidad Peruana de Ciencias Aplicadas, Lima, Peru

[email protected] 2 Universidad Tecnológica de México, Mexico City, Mexico

[email protected]

Abstract. In Peru, architecture SMEs show a very large gap between technology and information, because companies have often sought only to meet the objectives of each project without thinking about process automation or the reduction of delivery times and personnel costs. In many situations this generates delays in deliveries or requests for time extensions (a traditional practice in the country), which directly affect personnel costs and company profits. The proposal therefore seeks to design a monitoring model that reduces the time between activities and rework, optimizing deliveries to the client and ensuring the quality of the delivered projects. Keywords: Digital model · Architecture design projects · SME

1 Introduction

Innovation currently plays a key role in Peruvian companies, which is why many of them turn to digital models, following success stories in first-world countries; the resulting improvement in productivity and competition leads to higher service quality and, in turn, greater profitability in the market. Currently, although architecture companies monitor the execution of design projects, they lack systematic tools that allow them to appreciate the effects on results, time management, cost management and, above all, the impact on the final product. For this reason, monitoring and follow-up work serves as the basis for making the responsibility of human resources effective within the designated time of each project. In general terms, the logical framework approach, one of the main instruments of project management methodology, allows an integral project design by facilitating its monitoring and evaluation.



2 State of the Art

2.1 Innovation

According to the authors of [1–4], innovation is the introduction of something new or, better, the improvement of a product, service, process, method or internal practice executed in an organization. For innovation to be carried out, however, the scope and objectives of the organization must be well defined, so that limits can be adjusted and processes restructured to achieve those objectives.

2.2 Monitoring and Evaluation

Emphasis has also been placed on strategies for managing and controlling continuous improvement, which seek not only to improve the service but also to identify the company's shortcomings and orient them towards customer satisfaction [5, 6]. In addition, each stage is diagnosed and evaluated, including analysis of the design, the implementation and the results obtained. This is relevant for verifying fulfilment of the planned objectives, checking the effectiveness of the activities carried out and evaluating the performance of personnel. The evaluation provides reliable and useful information that can be incorporated as lessons learned and applied to improve future projects [7].

3 Proposal

The digital methodology is composed of four stages: planning, exploration, ideation and prototyping (Fig. 1). In these stages, the needs of the company and the resources necessary to execute a project are identified. These elements impose a certain rigor on the project and must be controlled at each stage so that future costs and benefits can be distributed correctly.

Fig. 1. Methodical, cost-efficient and effective plan

On the other hand, to achieve adequate monitoring and follow-up of the project, the model is applied through five key components:

1. Identify the objectives of the project, verifying that they are coherent and that performance indicators can be defined for each of them.
2. These indicators must follow a structure based on the hierarchy of the project's objectives, referring to the activities, processes, materials or products that are meant to fulfil the project's results.
3. Collect data and control milestones or stages of the project, in order to estimate performance indicators and evaluate possible failures or improvements (a sketch of such an indicator follows this list).
4. With the information collected, implement training and reinforce the weak points of the company, guaranteeing the correct implementation of the activities and ensuring adequate monitoring of the project.
5. Establish a continuous improvement mechanism that makes it possible to analyze flaws and correct them in future projects, thus increasing the benefits and reducing the risks of the project.
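As a rough illustration of component 3, the following sketch derives a simple schedule-performance indicator per project stage from planned versus actual durations. It is a minimal example under assumed data; all stage names and numbers are hypothetical and are not taken from the studied companies.

```python
# Minimal sketch of a stage-level schedule-performance indicator (SPI).
# All stage names and durations are hypothetical.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    planned_days: int
    actual_days: int

def schedule_performance(stage: Stage) -> float:
    """SPI = planned / actual duration; values below 1.0 signal a delay."""
    return stage.planned_days / stage.actual_days

stages = [
    Stage("planning", 10, 9),
    Stage("exploration", 7, 7),
    Stage("ideation", 5, 8),      # this stage overran its planned duration
    Stage("prototyping", 12, 11),
]

for s in stages:
    spi = schedule_performance(s)
    status = "on track" if spi >= 1.0 else "delayed"
    print(f"{s.name}: SPI = {spi:.2f} ({status})")
```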

4 Validation

4.1 Understanding the Need

4.1.1 Needs
• Find low-cost control and monitoring alternatives.
• Design the digital experience applying agile methodologies and user-centered approaches.
• Test the designed service quickly and cost-effectively, to fine-tune it prior to market release.
• Have a business case for its implementation.

4.1.2 Challenges
• Lead change with a personality of its own that differentiates the company from other architecture firms, offering better service and results.
• Generate a strong and lasting differentiation.
• See beyond traditional innovation and take the world as the benchmark, achieving a creative and structured approach to innovation.

4.1.3 Future Situation
• A digital model for follow-up and monitoring for architecture design companies.

4.2 Objectives of the Project
• Contribute to the design of the digital user experience of architecture SMEs, facilitating an agile, collaborative and user-centered methodology.
• Explore the behavior of the main segments, identify real problems and needs, and identify differentiated and innovative solutions.
4.2.1 Explore Real Needs
Connect with users and identify real, unmet or undiscovered needs.

4.2.2 Come up with Solutions
Identify digital solutions for the identified needs, which will allow the future digital journey to be designed.

4.2.3 Prototype and Test 3 Solutions
In an agile and collaborative way, three prototypes will be built and tested, and one will be selected for subsequent implementation.

4.2.4 Build the Business Case
The business case for implementing the most successful prototype will be developed.

4.3 Digital Prototype Model for Monitoring
A preliminary prototype of the digital monitoring model was designed. It consists of a board divided into three important sections: the time per project stage, the person in charge, and nodes whose colors show whether a stage generated a delay. The board also records the name of the project, the start and end dates and the general commitments, with the purpose of fully tracking all the activities of the human resources and the projects of the company. In this way, the workload of each staff member can be controlled and, in turn, the need for support staff, permanent staff and sufficient supplies to develop a project correctly can be verified (Fig. 2).

Fig. 2. Digital prototype model for monitoring
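As a sketch of how such a board could be represented in software, the following data model records, for each stage node, the dates and the person in charge, and derives the node's delay color. The class, the three-day threshold and the sample values are hypothetical illustrations, not part of the implemented prototype.

```python
# Hypothetical data model for the monitoring board: each stage node
# stores its dates and person in charge and reports a delay color.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BoardStage:
    name: str
    person_in_charge: str
    planned_end: date
    actual_end: Optional[date] = None  # None while the stage is ongoing

    def delay_color(self, today: date) -> str:
        reference = self.actual_end or today
        if reference <= self.planned_end:
            return "green"                      # on time
        days_late = (reference - self.planned_end).days
        return "yellow" if days_late <= 3 else "red"

stage = BoardStage("ideation", "project lead", planned_end=date(2021, 3, 1))
print(stage.delay_color(today=date(2021, 3, 6)))  # -> "red" (5 days late)
```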


5 Conclusions
• Rework was reduced by 40% by identifying all performance stages and obtaining prior acceptance from customers.
• Coordination with the end customer was essential for the architecture companies to keep advancing while following the development guidelines.
• Personnel costs were adapted to the performance of operations, reducing the company's losses by 30%; in addition, staff functions were better identified, generating a better work environment.
• High staff turnover, previously driven by excessive workload because each worker's functions were not identified, was reduced.

References
1. Ruggles, W.: Innovative Project Management Systems (2020). https://doi.org/10.4324/9781003004554-6
2. Alam, S.: An innovative project management system, 180–185 (2019). https://doi.org/10.1109/ICIMTech.2019.8843768
3. Kreutzer, R.: Innovative Project Management Tools (2019). https://doi.org/10.1007/978-3-030-13823-3_8
4. Chernova, L.: Key competence as a basis for innovation projects management. Innovative Technologies and Scientific Solutions for Industries, 113–120 (2019). https://doi.org/10.30837/2522-9818.2019.7.113
5. Anbari, F.: Innovation, Project Management, and Six Sigma Method (2018). https://doi.org/10.4324/9780203794043-6
6. Kerzner, H.: From Traditional to Innovation Project Management Thinking (2019). https://doi.org/10.1002/9781119587408.ch5
7. Warren, E., Brian, G., Robinson, S.: Managing Research and Evaluation Projects (2018). https://doi.org/10.4324/9781315163727-26

Charging and Changing Service Design of New Energy Vehicles Under the Concept of Sustainable development—A Case Study of NIO Yan Wang, Zhixiang Xia(B) , Xiuhua Zhang, and Yanmei Qiu East China University of Science and Technology, Shanghai 200237, People’s Republic of China

Abstract. This paper briefly explains the framework of NIO's existing service design system by analyzing its intelligent charging and changing (battery-swapping) system. Combined with the 3R principle advocated by the concept of sustainability, a service design strategy for sustainable development is put forward using the service design methods of key-figure maps, service process tracking and service contact analysis. In addition, a user-centered service mode involving all service contacts is summarized to form a closed-loop charging and changing service mode, ultimately promoting the sound development of the charging and changing industry for new energy vehicles. Keywords: Sustainable development · Charging and changing · Service design · NIO automobile

1 NIO Power

NIO is a global brand of intelligent electric vehicles. As a leader among China's "new forces of car building", NIO creates a pleasant lifestyle for users by providing high-performance intelligent electric vehicles and a superior user experience. Central to this is the combination of online and offline intelligent power services: NIO Power. NIO Power is a charging solution based on the mobile Internet, with a widely deployed charging infrastructure network. Built on NIO's cloud technology, it constitutes an energy service system that is "rechargeable, swappable and upgradable", providing full-scenario charging services for car owners. NIO Power is divided into two service segments based on scenario: the home service segment includes the home charging pile (7 kW AC) and the home quick-charging pile (20 kW DC), while the outside service segment includes the charging map app, supercharging piles, battery-changing stations and the one-button charging service (Fig. 1).

Fig. 1. The system architecture of NIO Power

In the past few years, NIO Power has built a charging and changing service network to ensure the user experience; as the user scale increases, so does its efficiency. Combining these six service modes, NIO has built a preliminary full-scenario charging and changing system and has been committed to the continuous expansion of the charging and changing network to meet users' various travel energy supply needs. At the same time, NIO is also trying to serve the entire EV industry as much as possible. It can not only obtain marginal revenue from these services but also grow the pie of the entire EV industry, which brings higher value at the industry level. All of this is based on thinking in terms of systemic efficiency.

2 Sustainable Theory and the 3R Principle

The theory of sustainability was first proposed by Hans Carl von Carlowitz. In 1987, the Brundtland Report defined sustainability as "economic development that meets the needs of the present without compromising the ability of future generations to meet their own needs" [1]. Promoted by the United Nations, the theory of sustainable development has gradually attracted worldwide attention. Building on research in education and academia, and across different periods and backgrounds, sustainable design has gone through four development stages: green design, ecological design, product and service system design, and inclusive design [2]. This paper starts from the product service system design of new energy vehicles and combines the author's internship experience at NIO to illustrate its sustainable charging and changing service concept through the 3R principle [3] (Fig. 2).

Fig. 2. 3R principle


The sustainable concept of NIO is first reflected in the service mode of battery utilization. At present, recycling is already considered at the battery design stage. For retired batteries, rapid screening based on the detailed battery health status provided by big data determines whether a pack should go to cascade (second-life) utilization or to recycling. Retired EV batteries can be used step by step in energy storage and other industries to improve the life-cycle value of the battery. Waste batteries that can no longer be used in this cascade are recycled, and raw materials for battery production are extracted through various recycling methods. On the basis of ensuring the sustainable use of batteries, NIO also improves the sustainability of its charging and changing services.
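A toy decision rule along these lines is sketched below. The 60% state-of-health threshold is an assumption chosen purely for illustration; it is not taken from NIO's actual screening criteria.

```python
# Illustrative routing rule for retired EV battery packs. The 0.6
# state-of-health threshold is assumed for illustration only.
def route_retired_battery(state_of_health: float) -> str:
    """state_of_health: remaining capacity as a fraction of nominal capacity."""
    if state_of_health >= 0.6:
        return "cascade use (e.g., stationary energy storage)"
    return "recycle (recover raw materials for new batteries)"

for soh in (0.82, 0.45):
    print(f"SoH {soh:.0%} -> {route_retired_battery(soh)}")
```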

3 Charging and Changing Service Design

3.1 Stakeholders

From the perspective of the government, it is necessary to provide basic power grid facilities for car owners, connect charging hardware to the national grid, and provide support through policies and electricity prices. From the perspective of enterprises, batteries are the most expensive component of electric vehicles, accounting for one half to one third of the total cost; how to capture the value of charging has become a new source of competitiveness for automobile companies [4]. From the perspective of investors, the concern is how to operate the charging and changing business model well so that it brings returns and earnings. From the user's point of view, range anxiety is still the core problem troubling users of new energy vehicles. An initial charging and changing service has therefore formed around four core stakeholders: the government, car companies, investors and users (Fig. 3).

Fig. 3. Stakeholders map

3.2 Service Pain Points

Charging has always been a pain point for electric vehicles: charging times are long, charging parking spaces may be occupied, and public charging piles may even be damaged. In this situation, most car companies choose to rely on society, such as third-party charging operators, or expect users to solve these problems by themselves; only a few car companies, such as Tesla and NIO, choose to build some of the solutions themselves. In essence, the pain point of the charging and changing service is the contradiction between supply and demand. Users in different charging and changing situations have completely different demands, so the contradiction must be resolved by subdividing the situations and accurately identifying the user groups.

3.3 Service Situations

Refueling has essentially only one situation: oil is transported to the gas station and every car comes to the gas station to refuel, a very typical centralized service mode. Since the power grid is almost everywhere, situation segmentation can help us find a decentralized, distributed way of adding power that is completely different from refueling [5]. For example, NIO has built an all-around power-up service system of 7 kW charging, 20 kW charging, supercharging, battery changing and one-button power-up to ensure users' charging and changing experience in each subdivided situation. There are three types of charging and changing situations. Situation 1: charging at home or in the workplace, which requires a dedicated charging pile. Situation 2: valet charging for the customer. If there is no charging pile near the owner and there is no time to visit a third-party charging pile or a changing station, the owner can request a power-up through the app at a convenient time; a service attendant drives the car away within the set time window and returns it, charged, to the destination set by the owner. Situation 3: the charging map, which, as the name implies, lets the owner find available charging piles nearby through software, including each pile's power, current usage, and whether it is faulty.

Car use situations can also be subdivided into three types based on the geographical dimension: city commuting, short-distance travel and long-distance travel. Across these three situations, should the user choose a single battery to cover everything, or should multiple battery options be provided for the user to choose flexibly? User experience requirements differ between the situations. If users can choose the required battery at an appropriate rent for each situation, they can cover all car use situations with the least money, which brings the best experience under each subdivided situation. This is the situation-based thinking behind flexible battery rental services.

3.4 Service Blueprint

The service blueprint is the main tool of service design. It starts from the behavioral angle of users, combines the front-stage and back-stage behavior of the service system, and sorts out the steps of the service [6] (Fig. 4).


Fig. 4. New energy vehicle charging and changing service blueprint

4 Service Design Strategies for Sustainable Development

4.1 User-Centric

At present, when users buy electric cars they generally compare them, consciously or not, with petrol cars. At this point, the whole set of experiences around electricity becomes particularly important and directly affects whether users buy the car at all. Do I have to worry about range? Is charging as easy as filling up with gas? Are the batteries safe? These are the questions users care about most concerning batteries and recharging. The car is our third living space and an extension of our legs; it accompanies us through crowded, busy days but also takes us to poetry and faraway places. A sense of freedom and security is the most important and basic user experience. As the core element of the whole service, user experience is the actual feeling of the user, which cannot be separated from the specific situation. Experience needs differ from scene to scene, which requires us to keep subdividing and asking: which situations exist, and what are the user experience requirements in each of them? Ultimately, this allows us to provide users with an experience that exceeds their expectations. The principles for improving user experience are speed, ease of use, peace of mind, understanding and freedom. Improving the user experience requires focusing on the user's whole experience and tracking the entire service process through situation segmentation.

4.2 Full-Service Process Tracking

The service process connects the whole charging and changing service system, so tracking the entire process is essential. Everyone and everything involved, from the owner, the changing stations, the cars, the charging piles and the attendant staff to the charging software and the investors, is included in the service process. Establishing full process tracking therefore serves the interests of the overwhelming majority of participants and keeps the whole commercial service model running well; only in this way can the charging and changing service develop sustainably [7].


4.3 Setting up a Service Loop

In the whole service, the user pays for the experience, and the enterprise earns money through efficiency; both are inseparable from the specific scene. Providing a great experience across a variety of scenarios requires the pursuit of extreme efficiency and continuous innovation. The service loop must be combined with commercial operations. For example, service communities can generate high-quality professional content and attract more users to the charging and changing service. Improving the efficiency of the whole service in turn supports further commercial operations, gradually forming a closed loop of high-quality service experience, commercial innovation, and an even better service experience [8].

5 Conclusion

First of all, a good charging and changing service design pattern needs to improve the utilization efficiency of charging piles and changing stations. Secondly, it needs to focus on owners and users, involving all touchpoints in the charging and changing process, and to track the experience throughout the charging and changing service, committing to a complete service experience. Finally, a virtuous cycle of the charging and changing service is formed.

References
1. Epstein, M.J., Roy, M.-J.: Sustainability in action: identifying and measuring the key performance drivers. Long Range Plan. 34, 585–604 (2001)
2. Ceschin, F., Vezzoli, C.: The role of public policy in stimulating radical environmental impact reduction in the automotive sector: the need to focus on product-service system innovation. Int. J. Automot. Technol. Manage. 10(2/3), 321–341 (2010)
3. Hou, K., Hou, L.: Research on the application and development of sustainable design. Design 4(01), 102–104 (2013)
4. Chen, W., et al.: A fuzzy demand-profit model for the sustainable development of electric vehicles in China from the perspective of a three-level service chain. Sustainability 12(16), 6389 (2020)
5. Abouee-Mehrizi, H., et al.: Adoption of electric vehicles in car sharing market. In: Production and Operations Management (2020)
6. Xu, S.: Research on design of public charging pile for new energy vehicles based on the concept of service design. East China Univ. Sci. Technol. (2018)
7. Xin, L.: Concept, development and practice of sustainable design. Creativity Des. 02, 36–39 (2010)
8. Arias, N.B., et al.: Distribution system services provided by electric vehicles: recent status, challenges, and future prospects. IEEE Trans. Intell. Transp. Syst. 20(12), 4277–4296 (2019)

Human Factors in Energy

What Niche Design Can Learn from Acceptance Mining Claas Digmayer(B) and Eva-Maria Jakobs Department of Textlinguistics and Technical Communication, Human-Computer Interaction Center, RWTH Aachen University, Campus-Boulevard 57, 52074 Aachen, Germany {c.digmayer,e.m.jakobs}@tk.rwth-aachen.de

Abstract. This paper investigates how text-mining approaches and acceptance research can contribute to niche development. Niches are embedded in social contexts. The study assumes that socially acceptable niche development requires a deeper understanding of the niche object’s social perspectives. The study investigates Internet discourses as marketplaces of social opinion-making with an acceptance-mining approach. It focuses on two niche technologies (solid state transformers, charging infrastructures for electric cars) differing in their niche maturity. The findings reveal that Internet discourses offer valuable input for niche development in socially acceptable directions, e.g., expectations (integration of the technology into public life; nurturing), ideas for empowerment (market conditions), and selection processes considered to be relevant for opening the niche to the mass market. Acceptance mining allows access to actor perspectives on novel technologies and embedding contexts. Keywords: Niche design · Niche development · Acceptance mining · Text mining · DC technologies · Acceptance of technology innovations · Innovation management · Communication strategies · Technical communication

1 Introduction

This paper investigates how text-mining approaches and acceptance research can contribute to niche development. The study is part of the large-scale research project 'Flexible Electrical Networks' (FEN), which investigates requirements for establishing a sustainable direct-current (DC) grid and related technologies. In this regard, niche development plays an important role. In the project, niches are conceptualized as complex structures embedded in socio-economic contexts [1]. The project aims to gain a deeper understanding of how niches must be developed to support fast and successful market diffusion. It focuses on socio-economic factors, which include actor-related social perspectives on the niche object.

This paper assumes that new forms of acceptance research that examine acceptance-related evaluations in Internet discourses [2, 3] can provide valuable input for the development of niches. The term used for this approach is 'acceptance mining for niche development'. In highly digitalized societies, the Internet has become a modern marketplace of social opinion-making. In this paper, text-mining approaches are used to examine Internet discussions regarding the public's requirements towards niche development, focusing on two DC technologies that differ in their niche maturity: solid state transformers (SST) and charging infrastructures for electric cars (CIEC). DC SST are based on the 'dual-active bridge dc-dc converter' technology [4] and enable full-range control of active and reactive power flows; SST are in the phase of early niche development. CIEC manage the charging of electric vehicles; they are in the phase of diffusing into the mass market. The following research questions are addressed in the study:

• Which aspects of the technology and the niche development are discussed, and how?
• What can niche development learn from the findings?

2 Theoretical Background

2.1 Niche Design and Development

Niche development of technological innovations is a field researched by various disciplines such as economics, social sciences, and engineering. The methods commonly used in this research area include interviews, qualitative reviews, case studies, and participant observation. This paper focuses on the research strand of 'strategic niche management', as this approach explores intra-niche development processes [1]. In this context, several key factors are considered to facilitate successful niche design in 'protected' environments [5]: (a) shielding innovations against mainstream selection pressures, (b) nurturing innovations to facilitate their development, and (c) empowering innovations to become competitive and thus marketable. The early phases of niche development usually involve a small community of technological and economic actors [6]. Here, the public is given a passive role, if any. However, studies show that empowerment can be hampered if public demands are not considered in the development phase [7].

2.2 Technology Acceptance Research

Decisions in favor of or against technologies depend primarily on how the stakeholders perceive and evaluate the technology. One relevant construct in this context is technology acceptance, described in the literature as a multi-layered construct of interacting variables [8]. The understanding of the term 'acceptance' varies depending on the discipline (e.g., sociology, psychology, communication science) and focus (e.g., research on attitudes of those affected or on technology impacts). Recent approaches conceptualize acceptance as an (individual or collective) evaluation [9]. Standard methods of acceptance research are case studies, questionnaires, and interviews; recent studies also use text mining [e.g., 2]. Few acceptance studies investigate DC technologies and their application areas. They focus on topics such as information strategies [10], comparison with other energy technologies [11], or risk perception [2], using methods such as interviews, questionnaires, focus groups, and text mining.


2.3 Niche Development Using Technology Acceptance Approaches

To our knowledge, there have been no attempts to date to combine research on niche development and technology acceptance. This applies to both sides: neither has niche research used acceptance approaches, nor has acceptance research investigated niche development processes.

3 Acceptance-Mining Approach

Data Collection: The data collection focuses on two objects of our niche research: SST and CIEC. Comments were extracted with automated scripts from German social media, news portals, and discussion fora [see 2]. The data was stored as text files, including title, content, date of creation, and source. The corpus contains 541 comments related to SST and 1738 comments related to CIEC. All comments were posted between 2018 and 2021.

Data Preparation: The comments were automatically POS (part of speech) tagged and transferred into a multilevel scheme as described in [3].

Data Analysis: The investigation comprises four steps. First, a sentiment analysis was carried out on both subcorpora [see 2], with the aim of identifying positive, negative, and mixed evaluations of both technologies. Second, the results of the first step were checked manually to delete false hits (e.g., where discussions shift to other technologies) and to correct false sentiments (e.g., where irony was used as a stylistic means). The cleaned subcorpus 1 (SST corpus) comprises 191 comments; subcorpus 2 (CIEC corpus) comprises 335 comments. Third, the comments of the cleaned subcorpora were categorized qualitatively (niche aspects versus niche-related dimensions). Niche-related aspects were subcategorized into economic, social, legal, infrastructural, and environmental aspects. The frequencies of the thematized aspects and their evaluations were analyzed quantitatively to answer RQ1. Fourth, the reasons stated for the evaluations and the statements towards niche development were annotated. Statements towards niche development were categorized qualitatively as expectations for niche development, information needs, demands on niche development, and preferences for involvement in niche development.
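To make the pipeline tangible, the sketch below shows a strongly simplified version of the first analysis step in Python. It is not the authors' implementation: the spaCy German model name is real, but the four-entry sentiment lexicon is a placeholder standing in for a full resource such as SentiWS, and the scoring rule is deliberately naive.

```python
# Simplified illustration of POS tagging plus lexicon-based sentiment
# scoring (not the authors' implementation). The mini-lexicon is a
# placeholder; a real study would use a full German sentiment resource.
import spacy

nlp = spacy.load("de_core_news_sm")  # small German model

SENTIMENT_LEXICON = {"zuverlässig": 1, "stabil": 1, "teuer": -1, "risiko": -1}

def classify(comment: str) -> str:
    doc = nlp(comment)
    # Score only content words (adjectives, nouns), identified via POS tags.
    score = sum(
        SENTIMENT_LEXICON.get(tok.lemma_.lower(), 0)
        for tok in doc
        if tok.pos_ in {"ADJ", "NOUN"}
    )
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "mixed/neutral"

print(classify("Die Technik ist zuverlässig, aber teuer."))  # -> "mixed/neutral"
```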

4 Results

4.1 Technology-Related Aspects and Technology Evaluation

Both technologies are evaluated positively (SST: 63.2% of the comments; CIEC: 66.2%). Only SST is discussed controversially in some comments (7%). In both cases, SST- and CIEC-related discussions address technology-related aspects, but these seem to be of different importance: in SST-related discussions they form the main topic (63.9% of all comments), whereas in CIEC discussions they are only a secondary topic (21.2% of all comments). SST are particularly advocated for their potential to control power flows flexibly; the technology is perceived as reliable and long-lasting. CIEC are discussed with regard to the technology's intelligent load distribution management and maintainability.


4.2 Niche-Related Aspects and Evaluations

Both SST and CIEC discussions thematize niche-related aspects (see Tables 1 and 2). The three most often discussed topic categories are infrastructural, economic, and social aspects of the technology, followed by environmental and legal aspects. The evaluation of these aspects differs strongly depending on the technology. In the case of SST, niche-related aspects are discussed positively; only legal aspects are evaluated negatively (100%). In contrast, niche aspects of CIEC are perceived critically.

Table 1. Quantitative distribution of niche aspects discussed in relation to SST.

Evaluated topic         | Percentage of the SST subcorpus | Positive evaluations | Negative evaluations | Mixed evaluations
Infrastructural aspects | 36.65%                          | 84.29%               | 14.29%               | 1.43%
Economic aspects        | 14.14%                          | 59.26%               | 37.04%               | 3.71%
Social aspects          | 9.95%                           | 57.90%               | 36.85%               | 5.27%
Environmental aspects   | 4.72%                           | 100.00%              | 0.00%                | 0.00%
Legal aspects           | 0.53%                           | 0.00%                | 100.00%              | 0.00%

Table 2. Quantitative distribution of niche aspects discussed in relation to CIEC.

Evaluated topic         | Percentage of the CIEC subcorpus | Positive evaluations | Negative evaluations | Mixed evaluations
Infrastructural aspects | 50.45%                           | 24.86%               | 72.79%               | 2.37%
Economic aspects        | 25.08%                           | 34.53%               | 65.48%               | 0.00%
Social aspects          | 14.33%                           | 6.25%                | 93.75%               | 0.00%
Legal aspects           | 7.17%                            | 8.34%                | 91.67%               | 0.00%
Environmental aspects   | 6.87%                            | 43.48%               | 56.53%               | 0.00%

4.3 Requirements for Niche Development

The qualitative analysis of the data indicates different requirements regarding niche development. They relate to expectations, information needs, demands, and preferences for involvement. Expectations for niche development indicate what role the technology should play in the future in the eyes of the public and in which direction it should therefore be developed in the niche. Information needs describe knowledge gaps with


regard to the technology and niche development that need to be filled in the public's perception. Demands on niche development comprise requirements for the development of the technology that the public believes must be met for successful market diffusion. Preferences for involvement in niche development describe approaches through which the public wants to be involved in niche development.

4.3.1 Findings for SST

Expectations for Niche Development: The results indicate a generally positive perception of the technology. The commentators attach high expectations to the technology. SST are expected to modernize the current energy grid, leading to massive load reductions and greater overall stability. The technology is described as a 'cutting edge' or 'game changer' in terms of its impact on business and society. However, the exact effects are only assumed or vaguely hinted at. In general, commentators demand that the influence of SST should not be limited to benefits for energy suppliers – especially if public funds should subsidize the development. Rather, SST development should focus on enabling applications that improve public life, such as establishing infrastructures for fast charging of electric cars.

Information Needs: In general, there is a high demand for information on the technology, especially on how it works and whether it has the potential to solve current risks of the energy grid, e.g., cascading failures or the efficient integration of renewable energies. Comments inquire which social impacts SST could entail, e.g., economic growth or the creation of new energy-related services. Commentators state that a comprehensive discussion is only possible after all the facts are available.

Demands on Niche Development: Comments call for a high sense of responsibility from SST developers. Consideration should be given to whether the expected benefits justify the high development costs. Furthermore, it should be thoroughly examined during the development whether the technology may cause risks for the immediate environment (especially for residents). The results indicate that commentators attribute risks known from other technologies to SST. Discussed drawbacks of the technology focus on suspected risks such as sabotage, fire hazards, noise, overheating, failures, or electromagnetic radiation that are known from conventional transformers. Comments demand that developers sufficiently communicate such risks to the public as early in the development as possible. Otherwise, losses of trust are expected. Concepts for limiting potential risks are seen as an essential requirement for successful market diffusion.

Preferences for Involvement in Niche Development: The results suggest that SST are perceived as a component in the larger context of grid expansion. In this respect, the public expects to be involved in the discussion about the desirability and implementation of the technology, especially regarding whether DC should replace AC and what the social consequences might be. The results further indicate a willingness to take legal action against SST and its operators, should SST be risky and the public not be involved in discussions early in the niche development.


4.3.2 Findings for CIEC

Regarding CIEC, discussions mainly focus on expectations and demands. The other aspects are addressed less frequently.

Information Needs: The findings indicate a considerable knowledge gap regarding plans by industry and politics for the market diffusion of CIEC. In this context, commentators want to learn about the general plan for the rollout of CIEC, after initial attempts to establish charging stations were perceived as somewhat disorganized. Furthermore, commentators address the question of whether there are efforts towards standardization, e.g., regarding the negatively perceived diversity of different tariffs, billing systems, and plug types.

Expectations for Niche Development: The results indicate that the technology itself is perceived positively, but current implementations are often evaluated negatively. Despite negative experience with their implementation, CIEC are seen as a necessary component for sustainable mobility. Expectations of the technology are primarily directed at its suitability for everyday use: electromobility should improve public life and should not become an obstacle to daily tasks. Therefore, the design of CIEC is expected to adapt to the needs of public life, for example, that it is well integrated into existing transport infrastructure and does not compete with other forms of mobility. In the view of the commentators, the promotion of electromobility should not lead to an increase in the volume of traffic in inner cities and a cut back on public transport. It is further expected that decision-makers will carefully weigh up individual application options between electromobility and alternative drive types such as hydrogen.

Demands on Niche Development: Commentators are often dissatisfied with the current implementation of CIEC. The findings reveal demands towards policymakers and industry to address specific problems such as the low availability of charging stations and the low support for fast charging. According to commentators, the development of CIEC should consider the public's actual needs, e.g., in the form of mobility profiles. In this regard, requirements are distinguished in three areas (private, professional, public sector). For the private sector, concepts are called for that enable vehicle owners to expand CIEC with wall boxes. Commentators expect support from companies and corresponding legal regulations and funding from the government. In the professional sector, commentators expect concepts from employers that establish CIEC as a mandatory component of company benefits for employees. In this way, employees can decide flexibly whether they want to charge their electric car at home or at work. In the public sector, the business community is expected to develop concepts that adequately integrate CIEC in public areas (e.g., shopping arcades) and events (e.g., large concerts). Commenters view such concepts as the basis for market diffusion of CIEC. Some of the comments include suggestions on how known problems should be addressed in these concepts: for example, commentators suggest solving the blocking of charging stations with approaches such as 'countdown charging stations' that companies already offer.

Preferences for Involvement in Niche Development: Although current problems with CIEC are known, commenters believe they are taking too long to resolve. The findings


indicate significant trust issues: Commentators suspect that industry and government are deliberately neglecting the problems not to damage the combustion car industry. Only by directly involving the public in plans to establish the technology and taking their needs into account can such trust issues be addressed and the successful market entry of CIEC be ensured. For this purpose, the comments suggest approaches for local mobility roundtables between the public, politics, and business.

5 Discussion

The analysis of ongoing Internet discussions shows that, and how, both technologies are perceived and discussed by the public. The discussions are rich in detail and cover a broad range of technology- and niche-related aspects. Focus and evaluation seem to depend on the technology's niche maturity: in an early phase of niche development, interest focuses on the technology itself (SST); in later phases, interest shifts to socio-economic aspects of niche development (CIEC). The results further indicate that even if a technology is perceived positively, the perception can turn into its opposite if the implementation does not meet public expectations. For niche development, the findings have consequences for the three key elements shielding, nurturing, and empowering.

Shielding: Legal and financial measures to shield a technology in protected spaces are viewed critically by the public if alternative technologies are disadvantaged as a result. If decision-makers communicate the reasons for promoting certain technologies inadequately, this can lead to open rejection before the technology even leaves the niche development stage.

Nurturing: An important component of nurturing is learning processes. In this context, the results reveal advantages of involving the public at early stages of niche development: stakeholders from politics and business can align their technology expectations with those of the public and thus develop niches in socially acceptable directions. The results show various information needs, demands, and feared risks that must be addressed through adequate innovation communication.

Empowerment: The comments contain a broad variety of suggestions that could be integrated into niche development using co-creation approaches. In this way, shortcomings of the niche design can be identified during development that would otherwise lead to bottlenecks when a technology is brought to market. Some comments announce a willingness to actively support such co-creation processes. In general, the findings reveal a need for information campaigns and participation approaches that offer the public possibilities to engage in niche development.

Limitations: The study uses an acceptance-mining approach based on sentiment analysis. Larger amounts of data and further niche technologies could be examined to verify the results. Moreover, the data does not allow any conclusions to be drawn about the authors of the comments, which makes it difficult to classify them into actor groups (e.g., technology expert vs. novice). Other forms of opinion expression apart from Internet comments were not considered (e.g., expert articles about technologies).

6 Conclusion

Acceptance mining for niche development allows access to actor perspectives on novel technologies and embedding contexts. It offers valuable input for niche development


in socially acceptable directions, e.g., expectations (integration of the technology into public life; nurturing), ideas for empowerment (market conditions), and selection processes considered relevant for opening the niche to the mass market. Limitations of the proposed approach should be minimized by combining it with qualitative approaches such as expert interviews. Acceptance research should accompany the process of niche development. Depending on the niche dynamics, data must be collected iteratively. The analysis of Internet discourses allows checking whether and how niche technologies and niche development become a topic of public interest. The approach should be contrasted and complemented by text-mining methods that map the technical literature discussions and compare arguments from experts and laypersons. For a better overall picture of the current state of niche development, different methods should be combined that address and compare the perspectives of involved niche actors, e.g., developers, operators (companies and municipalities), and the personnel who will later be responsible for maintenance. Acknowledgments. Funded by the Federal Ministry of Education and Research (BMBF, FKZ 03SF0592), Flexible Electrical Networks (FEN) Research Campus.


When Dullscreen is Too Dull

Ronald Laurids Boring

Human Factors and Reliability Department, Idaho National Laboratory, PO Box 1625, Idaho Falls, ID 83415, USA
[email protected]

Abstract. Dullscreen as an interface design concept suggests minimizing colors to avoid interface confusion and clutter. Dullscreen designs have been applied in several of the author’s efforts to modernize legacy control rooms in nuclear power plants with digital upgrades. A study on system overviews revealed an unexpected finding. One set of screens followed a vendor’s human-computer interface style guide, which featured a series of colorful elements on the screens. Another set of screens followed an in-house style guide that adhered to dullscreen principles. The discovery that the colorful elements were more visually salient, especially when seen across the control room, pointed to a need to reconsider strict adherence to dullscreen design principles. Keywords: Human-computer interface · Control room · Dullscreen design · Color saliency · System overviews

1 Introduction

As part of a control room modernization effort at a U.S. nuclear power plant, two studies were conducted to review the effects of system overviews. In the modernization process, existing analog control boards are replaced with digital systems. Such upgrades have primarily focused on replacing existing instrumentation and controls (I&C) for subsystems like the turbine control system (TCS) or chemical and volume control system (CVCS), which are non-safety systems that are readily replaced within current regulations. A distributed control system (DCS) is introduced to the control room, which includes control logic on the backend and introduces digital displays and input devices (e.g., touchscreen, keyboard, mouse, or trackpad) [1]. The Human System Simulation Laboratory (HSSL) [2] at Idaho National Laboratory (INL) is used as a flexible research environment in which full-scope plant simulator models drive interfaces on glasstop panels aligned with representations of existing analog I&C or prototypes for new digital human-computer interfaces (HCIs) [3]. Using the HSSL, operator performance on a new HCI may be benchmarked against the legacy I&C. The goal of a modernization effort is to ensure that reactor operators perform tasks at least as reliably with the new system as with the predecessor system. While it is desirable to achieve higher reliability with the new system, such design goals belie the already high reliability of existing operations. Efficiency gains such as reduced time or workload to complete plant operations are a more likely outcome [4].


2 System Overviews

Industry best practice prescribes two DCS monitors, side by side or stacked vertically, on the control boards as part of a modernization effort. The two monitors provide redundancy in case one monitor fails. Additionally, a second monitor provides the chance to display supplemental information not found on a single monitor, thereby avoiding the need for navigation between screens or windows. Finally, a second monitor may overcome an interface shortcoming when pop-up windows are implemented, such as when a confirmation dialog is displayed before taking a control action. Confirmation dialog windows may in some designs cover other needed information on the screen. Nonetheless, these windows are especially important for safety-critical process functions or processes that may not be reversed once initiated. They are also essential for touchscreen displays, which may not feature a hover capability and are prone to accidental activation. Four levels of process HCIs have been proposed [5, 6]:

• Level 1 (Plant): This is the overview of the entire process span, sometimes called a groupview, typically at the plant level of detail, highlighting the most important indicators and functions of the plant for at-a-glance monitoring.
• Level 2 (System): This is the more detailed overview of the specific indicators and functions associated with a particular system or subsystem of the plant. This type of overview may take the form of a piping and instrumentation diagram (P&ID) or mimic, to show the interconnections between components.
• Level 3 (Component): At this level, the very detailed aspects of a particular component or set of related components are displayed, typically isolated from their larger context in a system. These component-level HCIs are typically used for controlling a system.
• Level 4 (Diagnostics): These displays should ideally not be regularly invoked. These HCIs are often used for calibration and troubleshooting and are not part of regular operations. They may provide health information about sensors and controls rather than the actual states of the sensors and controls.

A two-monitor configuration on a control board will usually be centered on a particular system. Thus, the screens available to operators will revolve around monitoring and controlling that subsystem and will feature Level 2 and 3 screens. These screens are typically user selectable, and preference data from previous modernization efforts suggest that reactor operators prefer one screen with Level 3 component details for control and one screen with Level 2 details for monitoring process effects (an illustrative encoding of this preference appears at the end of this section). The exact screens will vary according to function or mode of operation. For example, for the TCS, a different set of screens would be used during the various stages of turbine startup vs. electricity generation vs. emergency shutdown. One variation of TCS HCIs prototyped in the HSSL featured over 30 separate screens, including rarely used diagnostic and maintenance screens. Two monitors may not be sufficient to cover all required uses. With the advent of computer-based procedures [7] and digital alarm screens [8], for example, the competition for screen real estate is considerable. Even without these added technologies, it remains a challenge to navigate between two screens and maintain a process overview.


For example, aspects of TCS manipulations may require closely monitoring and controlling two sets of components (e.g., vibration monitoring and governor valve adjustment). The requirement to select two sets of detailed Level 3 screens causes the Level 2 screen not to be visible, potentially omitting crucial information on overall system evolutions as they occur. Some of these limitations can be overcome by providing hybrid screens that include both Level 2 and Level 3 information. However, such designs may not realize the advantages of dedicated screens for Levels 2 and 3. The available screen real estate remains limited, and compressing screens of separate purposes into a single screen may compromise the level of detail or the readability of the information. Newer, higher-resolution monitors such as 4K and 8K monitors, which offer four and sixteen times the number of pixels of industry-standard 1080p monitors, may present the temptation to fit more indicators and controls into each screen, but available standards [9] do not suggest smaller imagery improves the usability or readability of HCIs. Most HCI designs do, however, provide a dedicated billboard area that features a few of the most important monitored parameters. This billboard does not attempt to be as extensive as a standalone overview screen but does allow operators to keep the most important parameters visible at all times, regardless of which screens are selected for the monitors.
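As a small, purely hypothetical illustration (not taken from any cited style guide), the four-level hierarchy and the stated operator preference for a Level 3 control screen paired with a Level 2 monitoring screen could be encoded in a display-navigation configuration along these lines; the screen names and selection logic are invented for demonstration:

```python
from enum import IntEnum

class DisplayLevel(IntEnum):
    """Four-level process HCI hierarchy described above."""
    PLANT = 1        # at-a-glance groupview of the entire process
    SYSTEM = 2       # P&ID/mimic overview of one system or subsystem
    COMPONENT = 3    # detailed control of a component or component set
    DIAGNOSTICS = 4  # calibration/troubleshooting; not routine operation

def default_two_monitor_assignment(screens: list) -> dict:
    """Reflect the stated operator preference: one Level 3 control screen
    plus one Level 2 monitoring screen (hypothetical selection logic)."""
    control = next(name for name, lvl in screens if lvl == DisplayLevel.COMPONENT)
    overview = next(name for name, lvl in screens if lvl == DisplayLevel.SYSTEM)
    return {"monitor_1": control, "monitor_2": overview}

screens = [("TCS governor valves", DisplayLevel.COMPONENT),
           ("TCS system mimic", DisplayLevel.SYSTEM)]
print(default_two_monitor_assignment(screens))
# {'monitor_1': 'TCS governor valves', 'monitor_2': 'TCS system mimic'}
```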

3 Operator Study on System Overviews (OSSO)

To overcome the competition for screen real estate and maintain the advantages of having overview information available at all times, INL staff reviewed the prospect of a dedicated system overview monitor to complement the two other DCS monitors. Essentially, two monitors would serve for Level 3 activities, and a third monitor placed above the others would serve for Level 2 activities. In this regard, it is important to note that the Level 2 system overview screen would not allow control actions and would serve only monitoring or information display purposes. The availability of a third monitor as a dedicated system overview screen addressed another concern in the control room. DCS displays in updated control rooms are localized displays that, to date, largely keep the operator standing at the boards. The upgraded HCIs on the DCS monitors will of course not have reduced readability compared with the individual gauges and controls on their predecessor boards. However, instead of the expansive area of a horizontal board panel, information is consolidated into the smaller area of the monitors. The distributed I&C of the legacy board layout provides a broader field of view that is less likely to be occluded by the operator(s) standing at the panels. The monitors are more likely to be blocked by an operator at the controls, making it more difficult for a second operator at the boards or the control room supervisor (a.k.a. the senior reactor operator) at the center of the room to see the information at the boards. The DCS may open up functionality for the control room supervisor by allowing a mirrored workstation display. Without a dedicated workstation display for the control room supervisor, the smaller region of the boards occupied by the monitors may actually restrict the control room supervisor’s situation awareness of the system or mutual awareness of the actions of the operator at the controls. As such, a dedicated system overview placed above the reactor operators avoids occlusion and affords a shared screen between the operators at the boards and the control room supervisor.


Two studies were conducted in the HSSL to review the value of system overview screens. These studies were referred to as Operator Study on System Overviews (OSSO) 1 and 2. Both studies looked at two systems in the control room: CVCS and TCS. The TCS upgrade represented the first phase of turbine upgrades planned at the plant and had a preferred DCS vendor. As such, the DCS design followed the HCI style guide from that vendor, including the preferred format for onscreen widgets like P&ID symbols and their corresponding color palette. The CVCS prototype included several features not yet approved by the plant for upgrades, such as a prototype computer-based procedure system and a prognostics system to provide early fault warnings [10]. The CVCS prototype conformed to an in-house dullscreen style guide that prescribed minimal use of color [11]. OSSO-1 featured two crews, each comprising an operator at the controls, a balance-of-plant operator, and a control room supervisor. OSSO-2 featured a single crew of identical composition. Both studies looked at CVCS and TCS, while OSSO-2 featured updated HCI prototypes derived from findings in the first study. Each crew performed a series of scenarios covering the range of TCS features, including startup, sync to grid, response to grid disturbances, load shedding, and shutdown. For the CVCS, a series of normal and abnormal operating scenarios was presented, including scenarios specific to the prognostics system. CVCS and TCS scenarios were mirrored between the existing analog board configuration and the upgraded configuration, with the exception of the CVCS prognostic scenarios, which were not possible with the conventional boards.

Fig. 1. Chemical and volume control system prototype with system overviews (a) and turbine control system with system overviews (b).

The TCS featured three board variants: conventional analog, two-monitor HCI, and three-monitor HCI with one monitor as a dedicated system overview. The CVCS only featured two variants: conventional analog vs. multi-monitor configuration featuring overviews. The overview screen variants for CVCS and TCS are found in Fig. 1a and 1b, respectively.

4 OSSO Findings

The findings for OSSO-2 were summarized in [4] and are representative of the overall findings across both studies and all three crews. There was no difference found in error


rates between the overview (three-monitor) and standard DCS (two-monitor) variants. Operators performed tasks more quickly in the digital conditions but did not show differences in workload or situation awareness between the analog, digital, and digital-with-overview conditions, nor did they have a clear preference for any particular system, although they were favorably disposed to the digital variants. The only place where there was a clear difference between conditions was in visual scan patterns as captured by eye trackers. The operators using the analog boards showed a widely distributed scan pattern with no long dwell times on particular areas. In the two-monitor condition, operators tended to dwell on the two monitors for long periods, relatively ignoring the I&C at the boards. Finally, in the condition with the system overview screen, operators again showed a more distributed scan and dwell pattern. Self-reports by the operators suggested that they were using the information in the system overview screens to determine the overall system process and were then using indicator values on the boards or in the two-screen HCIs to verify the information. This finding does not indicate that one scan pattern is preferable to another, just that they differ. If the objective is to retain some legacy I&C for redundancy or cybersecurity hardening, then narrowly focusing on the digital HCIs is not a good operational adaptation. If the legacy I&C is subsumed under the digital HCIs, there is no advantage to scanning the boards broadly.

A finding not explicitly part of the experimental design was the reaction of the crews to the different valve indicators between the CVCS and TCS. In P&ID and mimic views, valves are represented by two connected triangles. In a schematic drawing, the triangles are filled in black to indicate a closed state or left with a white center to indicate an open state (see Fig. 2a). A common color scheme employed by DCS vendors is to use color to indicate the state of the valve. An open valve is indicated by a bright green color, whereas a closed valve is indicated by a bright red color (see Fig. 2b).

Fig. 2. Monochrome or dullscreen rendering of valves (a) and color rendering of valves (b).
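The salience contrast between the two rendering styles can be partially quantified. The study did not report such a computation; as a purely illustrative sketch with hypothetical placeholder colors, the snippet below applies the standard WCAG relative-luminance formula. Notably, a saturated red and a mid-gray symbol can have similar *luminance* contrast against the background while the red remains far more salient at a distance because it also differs in hue — luminance metrics alone understate chromatic salience.

```python
def _linearize(c8: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lo, hi = sorted((relative_luminance(fg), relative_luminance(bg)))
    return (hi + 0.05) / (lo + 0.05)

BACKGROUND = (225, 225, 225)  # hypothetical light-gray display background
RED_VALVE = (200, 30, 30)     # hypothetical DCS-style closed-valve red
GRAY_VALVE = (110, 110, 110)  # hypothetical dullscreen monochrome valve

# Both symbols land near ~4:1 luminance contrast against this background,
# yet the red one also carries a large hue difference the ratio ignores.
for name, color in [("red valve", RED_VALVE), ("gray valve", GRAY_VALVE)]:
    print(f"{name}: contrast {contrast_ratio(color, BACKGROUND):.2f}:1")
```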

Recall that in the OSSO studies, the TCS prototype followed the coloring convention specified in the vendor’s HCI style guide, while the CVCS prototype followed an internal style guide [11]. The vendor’s HCI style guide featured a variety of colors, some of which served contradictory roles. For example, red was used to signify an alarm, to signify a closed valve, or to signify an electrified (i.e., on) pump. Particularly to people outside the nuclear and process control industries, the use of red to signify off for fluid flow yet on for electrical flow seems a glaring contradiction. Operators are fully conditioned to be


aware of these differences through extensive training. To avoid such contradictory color roles and the need for extensive training, INL’s internal HCI style guide followed a dullscreen philosophy. In this case, it meant that the valves were rendered in monochrome, similar to Fig. 2a, to avoid color role confusion. This approach also avoided problems inherent to color recognition for individuals with deuteranopia, i.e., red-green colorblindness.

The concept of dullscreen was first proposed by Veland and colleagues [12]. Simply put, dullscreen is an HCI design concept in which the use of color is minimized except to draw attention to crucial information, usually alarm states. In practice, dullscreen has been widely implemented in process control settings such as overview displays [5, 13]. In such screens, the typical process indicators are presented in a muted color palette. Whereas historically graphical process control displays might have displayed valves in red or green to represent closed or open states, dullscreen valves are presented in monochrome, often mimicking the appearance of paper P&IDs. In dullscreen, the color red, for example, would be reserved only for alarm states. As the only vivid color, a red visual alarm stands out clearly from the rest of the process information. The vivid color catches the operator’s eye and affords an unambiguous and quick response.

Dullscreen has raised awareness of the perils of color in screens. Some process control displays released to market have been notoriously colorific [11], perhaps simply as a reflection of an emerging technology able to support a richer color palette, or as a type of mutiny against the amber monochrome displays of early monitors ubiquitously installed at plants. In theory, the use of color would increase the visual channels available to operators, allowing them to connect color-coded flow paths or decipher complex meanings through extensive color schemes. In reality, color often overloads these screens, diluting semantic associations and minimizing the saliency of distinct pieces of information.

The surprise finding for the dullscreen implementation of the CVCS occurred when the control room supervisor in the first crew of the OSSO-1 study remarked that the color valves of the TCS were much easier to see than the dullscreen valves of the CVCS, despite the supervisor being equidistant from both systems. This finding was confirmed by the operators at the boards in the first study and by both subsequent crews. The valves, which measured 2 cm, subtended a visual angle (i.e., the size projected on the retina) of 1° 38.21′ for operators 70 cm from the onscreen valve (representing an operator at the boards), 0° 57.29′ for operators 120 cm from the onscreen valve (representing the operator serving as a second checker to the first operator), and 0° 18.58′ for operators 370 cm from the onscreen valve (representing the control room supervisor in the middle of the room). The visual angle is calculated according to the following equation:

\[
a = 2\tan^{-1}\!\left(\frac{s}{2d}\right) \qquad (1)
\]

where a is the visual angle, s is the size of the object in cm, and d is the distance in cm. The visual angle for the control room supervisor is below the lower range of discriminability for gauges, which should be a minimum of 0° 20.00′ [9]. The color discriminability of an object occurs at a much lower threshold, depending on hue and brightness [14]. In other words, the color of an object may be recognizable before the actual


object is readily discernable. The implementation of a prototype dullscreen valve compared to a conventional colorful valve actually resulted in degraded recognizability of component states by operators.
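As a quick check of the figures above, Eq. (1) can be evaluated directly. The snippet below is a minimal illustration (not part of the study’s tooling) that reproduces the reported visual angles in degrees and decimal arcminutes for the three operator positions.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle a = 2 * atan(s / (2d)), returned in degrees (Eq. 1)."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def fmt_deg_min(angle_deg: float) -> str:
    """Format an angle as whole degrees and decimal arcminutes."""
    degrees = int(angle_deg)
    minutes = (angle_deg - degrees) * 60
    return f"{degrees}\u00b0 {minutes:.2f}\u2032"

# A 2 cm valve viewed from the three operator positions in the study:
for d in (70, 120, 370):
    print(f"{d} cm: {fmt_deg_min(visual_angle_deg(2, d))}")
# 70 cm:  1° 38.21′  (operator at the boards)
# 120 cm: 0° 57.29′  (second checker)
# 370 cm: 0° 18.58′  (supervisor; below the 0° 20.00′ minimum for gauges)
```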

5 Discussion

The finding of greater salience of the color valves compared to the dullscreen valves has prompted some reflection and redirection in the use of dullscreen as the main driver for our HCI style guide. A review of dullscreen principles suggests three factors that need to be considered when designing for process control.

First, reserving color only for alarms is fallacious, because alarms are not the only important source of information for operations. In fact, alarms are often the last line of defense against failure, and crucial information that supports normal operations and the prevention of abnormal states should also be prioritized. The importance of avoiding redundant color use stands. Additionally, minimizing the use of bright colors helps to preserve the visual and semantic saliency of those colors to draw attention when needed. The optimal color palette likely resides somewhere between the vibrant mosaic of a paint store and the desolate greys of a cold winter day in Scandinavia. Dullscreen, which favors the latter, should not mean an absence of color. It should mean saving color for meaningful purposes.

Second, there is a difference in salience between monochromatic brightness and color contrast. Color contrast is much more salient than monochrome gradients, and the usability of visual information may actually be decreased when minimizing the color palette. There are some implications of this consideration for the newer design trend of neumorphism [15]. Neumorphism shares some qualities with dullscreen. It does not eschew color but instead relies on gradients and shading to convey visual separation on a screen. Neumorphic design may, however, do away with the edge crispness that enhances object saliency at a distance and may be subject to some of the same limitations of muted saliency as dullscreen when implemented in overview screens.

Finally, dullscreen may overlook key design principles such as color grouping and color semantics that afford good visual design. Color grouping occurs when different visual objects are linked together by a common color scheme. This approach has often been used in HCIs to convey process trains and flow paths. Color semantics refers to the meaning associated with specific colors, such as the traffic light convention of red for stop, yellow for caution, and green for go. One must be careful not to overanalyze color symbolism when selecting colors for particular purposes. Few colors are so richly imbued with tradition and symbolism that they cannot be repurposed to represent a novel meaning in process control HCIs. Assigning each color a singular purpose and user-testing those colors for unanticipated interactions will help ensure that color is used meaningfully in a design.

While dullscreen remains a useful design concept to help minimize visual confusion and clutter, it should not be applied indiscriminately. There are possible tradeoffs in minimizing the color palette. Recent experience implementing dullscreen design concepts in nuclear power plant control rooms suggests dullscreen may result in a loss of salience for important indicators, especially when presented on system overviews. Because nuclear


power plants feature control boards that must be seen by a crew of operators across the control room, reactor operators may benefit from the use of color cues to augment lower visual acuity at a distance. Further research is needed to understand the limitations and advantages of dullscreen concepts in control room design. Acknowledgments. The author thanks Drs. Thomas Ulrich and Roger Lew, whose technical talents enabled the control room modernization activities described in this paper. This work of authorship was prepared as an account of work sponsored by Idaho National Laboratory (under Contract DE-AC07-05ID14517), an agency of the U.S. Government. Neither the U.S. Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.

References

1. Ulrich, T.A., Boring, R.L., Lew, R.: Control board digital interface input devices—touchscreen, trackpad, or mouse? In: Proceedings of the International Symposium on Resilient Control Systems (Resilience Week), pp. 168–173 (2015)
2. Boring, R.L.: The first decade of the human systems simulation laboratory: a brief history of human factors research in support of nuclear power plants. In: Ahram, T. (ed.) AHFE 2020. AISC, vol. 1213, pp. 528–535. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-51328-3_72
3. Boring, R., Lew, R., Ulrich, T.: Advanced nuclear interface modeling environment (ANIME): a tool for developing human-computer interfaces for experimental process control systems. In: Nah, F.F.-H., Tan, C.-H. (eds.) HCIBGO 2017. LNCS, vol. 10293, pp. 3–15. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58481-2_1
4. Boring, R.L., Ulrich, T.A., Lew, R., Kovesdi, C., Al Rashdan, A.: A comparison of operator preference and performance for analog versus digital turbine control systems in control room modernization. Nucl. Technol. 205, 507–523 (2019)
5. Hollifield, B., Habibi, E., Nimmo, I., Oliver, D.: The High-Performance HMI Handbook: A Comprehensive Guide to Designing, Implementing and Maintaining Effective HMIs for Industrial Plant Operations. Plant Automation Services (2008)
6. Electric Power Research Institute: Operator Human Machine Interface Case Study: The Evaluation of Existing “Traditional” Operator Graphics Versus High-Performance Graphics in a Coal-Fired Power Plant Simulator, TR-1017637 (2009)
7. Lew, R., Boring, R.L., Ulrich, T.A.: Task engine for job and user notification (TEJUN): a tool for prototyping computerized procedures. In: Proceedings of the 11th Nuclear Plant Instrumentation, Control and Human-Machine Interface Technologies (NPIC&HMIT 2019), pp. 932–940 (2019)
8. Boring, R.L., Persensky, J.J.: Hybrid alarm systems: combining spatial alarms and alarm lists for optimized control room operation. In: 8th International Topical Meeting on Nuclear Power Plant Instrumentation, Control, and Human-Machine Interface Technologies (NPIC&HMIT), pp. 1914–1923 (2012)
9. O’Hara, J.M., Fleger, S.: Human-system interface design review guidelines, NUREG-0700, Rev. 3. U.S. Nuclear Regulatory Commission (2020)
10. Lew, R., Boring, R.L., Ulrich, T.A.: Computerized operator support system for nuclear power plant hybrid main control room. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 63, pp. 1814–1818 (2019)
11. Ulrich, T., Boring, R., Phoenix, W., DeHority, E., Whiting, T., Morrell, J., Backstrom, R.: Applying human factors evaluation and design guidance to a nuclear power plant digital control system, INL/EXT-12-26797. Idaho National Laboratory (2012)
12. Veland, Ø., Eikås, M.: A novel design for an ultra-large screen display for industrial process control. In: Dainoff, M.J. (ed.) EHAWC 2007. LNCS, vol. 4566, pp. 349–358. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73333-1_42
13. Braseth, A., Nihlwing, C., Svengren, H., Veland, Ø., Hurlen, L., Kvalem, J.: Lessons learned from Halden project research on human systems interfaces. Nucl. Eng. Technol. 41(3), 215–224 (2009)
14. Wandell, B.A.: Foundations of Vision. Sinauer (1995)
15. Darrehshourian, S.Z.: Compare visibility and affordance of flat and neumorphic buttons. In: Proceedings of the 19th Student Conference in Interaction Technology and Design, pp. 46–51 (2020)

Examining the Use of the Technology Acceptance Model for Adoption of Advanced Digital Technologies in Nuclear Power Plants

Casey Kovesdi

Idaho National Laboratory, Idaho Falls, ID, USA
[email protected]

Abstract. The United States nuclear industry is being economically challenged and needs wide-scale adoption of enabling technology like advanced automation to remain viable. To enable wide-scale adoption of advanced technology, addressing the human-technology integration challenges and factors that influence technology acceptance (i.e., ‘buy in’) is pertinent. This work examines the Technology Acceptance Model as a framework for characterizing the factors influencing technology adoption in the nuclear industry. An outcome of this work is to support the development of future tools such as a technology acceptance survey that can be used to assess technology acceptance throughout the lifespan of a modernization program to provide a systematic way of ensuring stakeholder and operations ‘buy in’. Keywords: Human factors engineering · Technology acceptance model · Nuclear power plant modernization

1 Introduction

The United States (U.S.) commercial nuclear industry is being challenged economically by other electricity generation sources, including natural gas and renewables, in part due to changes in the market and the use of advanced automation [1]. The U.S. nuclear industry has historically taken a reserved approach to modernizing with enabling technologies due to a combination of barriers such as a risk-averse culture and a lack of clarity for a new state vision [2]. Commonly cited contributors to these barriers include the misperceived return on investment (ROI) of advanced digital technologies, concerns with overcoming licensing and regulatory barriers, and addressing cybersecurity risks with software [3]. For the U.S. nuclear industry to widely adopt enabling technology like advanced automation, a systems approach is needed that includes applying human factors engineering (HFE) to address the human-technology integration challenges associated with these barriers and their contributors. One important direction pertains to characterizing the attitudinal factors that influence the intention to adopt available advanced technologies for nuclear power plants (NPPs). For instance, stakeholder and operations ‘buy in’


is often needed when selecting advanced technologies to be implemented in subsequent modernization efforts [2]. This work explores the use of the Technology Acceptance Model (TAM) to characterize the factors influencing technology acceptance in the U.S. nuclear industry. Common barriers are described next.

2 Common Barriers to NPP Modernization

Without a license renewal, much of the existing U.S. NPP fleet is approaching the end of its licensed operating lifespan [2]. These plants’ existing infrastructures have been left largely unchanged and comprise mostly analog technology that requires a labor-centric approach to operate, maintain, and support [1]. Historically, the nuclear industry has been reluctant to modernize due to a risk-averse culture and a lack of clarity for a transformative new state vision [2, 3]. Common contributors to these barriers include (1) the perceived value and ROI of digital technology and (2) the perceived risk associated with licensing, regulation, and cybersecurity.

2.1 Perceived Value and ROI of Digital Technology

One challenge for utilities has been developing a clear business case regarding the actual cost reductions seen with advanced technology [4]. Without a specific business case that justifies the ROI of implementing advanced technology, the value of a new technology cannot be fully realized. Hence, the added costs associated with implementation compound with any misalignment of perceived value or ROI related to the potential benefits that the technology has for the plant and overall organization.

2.2 Perceived Risk: Licensing, Regulatory, and Cybersecurity

Perceived risk associated with licensing and regulatory considerations poses another challenge to modernizing. The U.S. nuclear industry has two primary paths for regulatory acceptance of digital upgrades: (1) the License Amendment Request (LAR) and (2) following the 10 Code of Federal Regulations (CFR) 50.59 process [5]. While a detailed description of the distinction between these two paths is beyond the scope of this work, it is important to note that the latter process bounds any modification to the existing plant’s design and licensing basis (hence, not requiring an LAR). While modifications made to non-safety systems of the plant may follow 10 CFR 50.59, major plant changes with an added scope for modifications made to safety systems would require an LAR. There have been challenges from a licensing and regulatory standpoint in both paths. The U.S. nuclear industry’s perception of performing upgrades via LAR has generally been less favorable, in part due to the perceived project risks that result in unforeseen cost and schedule creep [5]. Likewise, utilities that follow the 10 CFR 50.59 path are often faced with their own challenges, such as responding to the associated screening and evaluation questions that require specific expertise like HFE.

There are also perceived risks associated with cybersecurity for digital upgrades. Digital technology can enable the distribution of data from non-safety and safety systems


across the plant, which can create new capabilities that support plant-wide decision making. However, a pitfall of this very advantage is the added risk of cyber threats [3]. While cybersecurity is a known barrier in the industry and there are ongoing efforts to minimize the risks, an important consideration pertains to understanding the impact of perceived cybersecurity risk on technology acceptance.

3 Addressing Attitudinal Factors for Technology Adoption

At a glance, it may be reasoned that technology acceptance is less relevant in industries where technology use is mandatory, as in NPPs. Yet it is important to note that the end users of the technology at NPPs (e.g., licensed operators) are strongly encouraged to be involved in the entire modernization process [5]. In early stages, utilities typically select a vendor with available technology that can be configured to develop new capabilities that improve plant performance and efficiency [3]. The identification and selection of new technology for the new state vision ultimately requires ‘buy in’ from the stakeholders and end users. Here, the organization’s attitude toward, and intent of, adopting a new technology are reasoned to be important factors in implementing a given technology to achieve the plant’s new state vision. That is, it is posited that if the end users have a negative attitude about a candidate technology, the technology will likely not be considered in the new state vision and ultimately not be implemented. As a result, the new state vision may be less transformative and fail to leverage technology to the greatest extent in reducing costs and improving performance.

4 Using TAM as a Framework for Technology Acceptance

TAM is a candidate model to support the characterization of technology adoption. TAM is an established framework that characterizes the factors contributing to the attitudes and behaviors of using technology [6]; the underlying basis of TAM is that perceived usefulness (PU) and perceived ease of use (PEOU) contribute to the attitudes and behaviors toward using technology. TAM has been applied across several domains, including information technology, healthcare systems, robotics, autonomous vehicles, and even urban planning (e.g., [7]). Through these applications, there have been several extensions to the original TAM, one of which notably pertains to the adoption of automation [8]. A description of TAM and its extensions relevant to technology adoption in the nuclear industry follows.

4.1 Introduction to the Technology Acceptance Model

The TAM framework theorizes that actual use, herein referred to as technology adoption, is driven by the intent to use or adopt [6]. The intent is in turn influenced by the attitude towards using or adopting, which is further influenced by the internal PU and PEOU variables. PU, PEOU, attitude, and intention are all internal variables of TAM that drive actual use. Further, these internal variables are influenced by external



variables, which may be domain specific [7]. Figure 1 illustrates TAM and the relations between internal and external variables.

Fig. 1. Overview of TAM [6].

PU is the degree to which a user believes the technology will benefit them in their work [9]. TAM suggests that as the PU of a given technology increases, the attitude and intention towards using it also increase, which consequently leads to actual use or technology adoption. PEOU is the degree to which a user believes using the technology will be effortless. TAM theorizes that PEOU positively influences PU, attitude towards using, and intention to use. As PEOU increases, PU, attitude towards using, and intention to use also increase, leading in turn to actual use.

4.2 Applications and Extensions of the TAM

Since TAM’s initial development [6], it has been extensively used and extended to include additional variables dealing with the particulars of the specific domains to which it has been applied [7]. While a detailed literature review of TAM’s extensions goes beyond this paper, it is worth highlighting that Marangunić and Granić [7] referenced over 20 extensions of the original TAM. The authors characterized these extensions into four generalized categories: expansions of the model through (1) external variables, (2) factors from other theories, (3) added contextual factors, and (4) added usage measures.

Modifications to TAM via external variables add specificity to the external variables seen in Fig. 1. Examples of external variables include confidence in technology, as well as prior experience with similar technology. Added factors from other theories refer to the addition of internal variables to TAM in an effort to increase the predictive validity of the model for specific research applications. Examples of added factors include trust [7], task-technology compatibility [8], and perceived risk [10]. Added contextual factors refer to the inclusion of overall moderating variables, such as gender or specific cultural and technology characteristics, that moderate the relations seen in TAM [7]. Usage measures include added measures that influence actual use, such as the attitude towards using and intent to use.

One extension of TAM with particular application to technology acceptance for NPP modernization is the Automation Acceptance Model (AAM), developed by Ghazizadeh and colleagues [8]. AAM was developed to serve as a generalized integrated framework for assessing the adoption of automation. While TAM is a core constituent of AAM, AAM borrows from the cognitive engineering literature to include task-technology compatibility and trust in the model (see Fig. 2).


Fig. 2. Recreation of the automation acceptance model from Ghazizadeh and colleagues [8].

Task-technology compatibility refers to the extent to which automation matches the needs of the task performed by users [8]; it applies to integrating the appropriate level of automation based on the demands of the task, including its degree of complexity, predictability, and criticality. In a simple, highly predictable, and non-critical situation, high compatibility may mean designing the system with high levels of automation, where the transparency of the automation is less important. Conversely, a less predictable, complex, and critical situation may lend itself to a design with lower levels of automation and/or maximized automation transparency to ensure the utmost levels of situation awareness and system resilience.

Trust in automation is considered to mediate the relation between people and technology and is greatly influenced by perceived system reliability and one’s experience with the given system [8]. An important consideration with trust is calibration to ensure appropriate use of automation. All things equal, calibrated trust is characterized by lower trust in less reliable systems and higher trust in more reliable systems.

AAM [8] hypothesizes that perceived task-technology compatibility is influenced by the degree of agreement between the design of the automation and the user’s past experience with similar technology. The relation between compatibility and attitude towards using is mediated by PU and PEOU; thus, high compatibility positively contributes to PU and PEOU, which positively influence the attitude towards use. Further, compatibility directly influences trust. AAM theorizes that trust influences the intent to use through direct and mediating relations with PU and PEOU. Collectively, AAM suggests that high task-technology compatibility will have a positive influence on trust, as well as on PU and PEOU, which will consequently positively influence the attitude to use. Moreover, high compatibility coupled with increased experience with a technology may positively influence trust. Higher trust has a positive influence on PU and PEOU, as well as on the intention to use. Figure 2 illustrates these relations of compatibility and trust to TAM, as theorized in AAM [8].
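To make the hypothesized path structure concrete, the sketch below encodes the AAM-style relations as a simple linear path model. The variable names follow the text, but all weights are hypothetical placeholders, not coefficients estimated in [8]; in practice such coefficients would be fit with structural equation modeling on survey data.

```python
from dataclasses import dataclass

@dataclass
class Antecedents:
    """Exogenous inputs, each assumed to be scaled 0..1."""
    compatibility: float  # perceived task-technology compatibility
    experience: float     # prior experience with similar technology

# Hypothetical path weights -- placeholders for SEM-estimated coefficients.
W = {
    ("compat", "trust"): 0.4, ("exper", "trust"): 0.3,
    ("compat", "pu"): 0.3, ("compat", "peou"): 0.3,
    ("trust", "pu"): 0.2, ("trust", "peou"): 0.2, ("trust", "intent"): 0.2,
    ("peou", "pu"): 0.3,
    ("pu", "attitude"): 0.5, ("peou", "attitude"): 0.3,
    ("attitude", "intent"): 0.4, ("pu", "intent"): 0.3,
}

def aam_paths(x: Antecedents) -> dict:
    """Propagate the AAM relations: compatibility and experience feed trust;
    compatibility and trust feed PU/PEOU; PU/PEOU feed attitude; attitude,
    PU, and trust feed intention to use."""
    trust = W[("compat", "trust")] * x.compatibility + W[("exper", "trust")] * x.experience
    peou = W[("compat", "peou")] * x.compatibility + W[("trust", "peou")] * trust
    pu = (W[("compat", "pu")] * x.compatibility + W[("trust", "pu")] * trust
          + W[("peou", "pu")] * peou)
    attitude = W[("pu", "attitude")] * pu + W[("peou", "attitude")] * peou
    intent = (W[("attitude", "intent")] * attitude + W[("pu", "intent")] * pu
              + W[("trust", "intent")] * trust)
    return {"trust": trust, "PEOU": peou, "PU": pu,
            "attitude": attitude, "intention": intent}

print(aam_paths(Antecedents(compatibility=0.8, experience=0.6)))
```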


A final extension worth noting includes perceived risk. While Ghazizadeh and colleagues [8] indirectly discuss perceived risk and its influence on trust, recent research from Zhang and colleagues [10] explicitly modeled perceived risk in the TAM framework. The authors’ work was developed within the context of public acceptance of automated vehicles. Their work empirically tested the model through structural equation modeling and confirmed an overall good model fit. Notably, their model, herein referred to as the Perceived Risk-TAM (PR-TAM), included trust along with perceived risk to safety and privacy (the latter analogous to the cybersecurity risk discussed earlier). Perceived risk to safety was found to significantly influence trust. Zhang and colleagues’ work [10] addresses technology acceptance for autonomous vehicles, which may be qualitatively different from technology adoption in process control applications like NPPs. Nonetheless, the explicit interrelation of perceived risk with trust warrants consideration of its role in technology acceptance for advanced digital technologies in NPPs, where higher levels of autonomy are generally desired to reduce operating and maintenance costs.

5 Applying TAM to Technology Acceptance in NPPs

Fig. 3. Proposed TAM for Nuclear Power Plant Modernization (TAM-NPP-M).

The proposed framework for characterizing the factors that influence technology acceptance in NPP modernization builds on the TAM frameworks discussed previously (see Fig. 3). That is, the TAM-NPP-M expands on TAM, AAM, and PR-TAM by including three explicit external variables (i.e., familiarity with the new technology, familiarity with existing technology, and the technology’s data integration capabilities) and one moderating variable (i.e., the technology’s track record in the nuclear industry). Here, familiarity with the new technology refers to one’s awareness of and experience with the proposed technology being implemented. Familiarity can be gained through awareness of the technology’s capabilities via operating experience at other plants,


attending demonstrations of the technology, and being involved in the overall modernization process. Experience with existing technology refers to the extent of one’s familiarity with the existing technology and one’s comfort using it. Data integration capability refers to the extent to which the given technology enables data from non-safety and safety plant systems to be distributed to new applications that support plant-wide decision making. Finally, the track record of a given technology in the industry refers to the operating experience of that technology as implemented across the nuclear industry; the track record can have a negative or positive impact on the overall TAM-NPP-M model depending on that operating experience.

TAM-NPP-M hypothesizes that the degree of familiarity with the new technology will directly influence task-technology compatibility, trust, PU, and PEOU (blue paths in Fig. 3). For instance, a high degree of familiarity with the new technology will better inform task-technology compatibility, trust, PU, and PEOU, given that the technology is suited to the task and is reasonably reliable. Experience with the existing technology also influences compatibility (red path in Fig. 3). An operator who has extensive experience with legacy technology and little familiarity with the new technology may be reluctant to accept the technology. Moreover, if the operator is familiar with the new technology but it is radically different from the existing technology, the operator may perceive the new technology as less compatible. In either case, a negative influence on compatibility will also negatively influence trust, PU, PEOU, and consequently the overall attitude and intent to use. TAM-NPP-M also hypothesizes that the technology’s degree of data integration capability has a direct influence on perceived risk (green path in Fig. 3). That is, it is hypothesized that technology with a high data integration capability would be perceived to have higher regulatory and cybersecurity risks, all things equal. Finally, the track record of the given technology in the nuclear industry is hypothesized to moderate the relations in TAM-NPP-M. Technology that is well vetted through positive operating experience would positively influence technology acceptance throughout the overall model; likewise, negative industry-wide operating experience would have the opposite effect. A lack of industry track record may also negatively influence familiarity with the new technology, as well as negatively influence compatibility and trust, directly and indirectly.
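One concrete artifact this framework could feed is the technology-acceptance survey mentioned in the final remarks below. A minimal scoring sketch is given here; the construct names follow TAM-NPP-M, but the item names, 7-point Likert scaling, and reverse-coding scheme are assumptions for illustration only.

```python
LIKERT_MAX = 7  # assumed 7-point Likert scale

# Hypothetical mapping of survey items to TAM-NPP-M constructs.
# True marks a reverse-worded item that must be flipped before averaging.
ITEMS = {
    "PU":             [("pu1", False), ("pu2", False), ("pu3", True)],
    "PEOU":           [("peou1", False), ("peou2", True)],
    "trust":          [("tr1", False), ("tr2", False)],
    "compatibility":  [("cmp1", False), ("cmp2", True)],
    "perceived_risk": [("risk1", False), ("risk2", False)],
}

def construct_scores(responses: dict) -> dict:
    """Average each construct's items, flipping reverse-coded ones."""
    scores = {}
    for construct, items in ITEMS.items():
        vals = [(LIKERT_MAX + 1 - responses[item]) if reverse else responses[item]
                for item, reverse in items]
        scores[construct] = sum(vals) / len(vals)
    return scores

example = {"pu1": 6, "pu2": 7, "pu3": 2, "peou1": 5, "peou2": 3,
           "tr1": 6, "tr2": 5, "cmp1": 4, "cmp2": 4, "risk1": 3, "risk2": 2}
print(construct_scores(example))
```

Administering such a survey at successive modernization milestones would give the iterative acceptance readings the framework calls for.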

6 Final Remarks and Future Directions

This paper introduces an extension of TAM to characterize the factors influencing acceptance of advanced digital technologies in NPP modernization. The use of TAM-NPP-M serves as one piece of the overall effort in transforming the industry. By characterizing the influence of human-technology integration barriers on technology acceptance through TAM-NPP-M, this work helps clarify how addressing these barriers can promote technology acceptance and wide-scale adoption.

While this work is preliminary and warrants future empirical investigation of the overall fit of the proposed model, a potential outcome of this work is to support the development of tools such as a survey to assess technology acceptance throughout the lifespan of a modernization program, providing a systematic way of ensuring stakeholder and operations ‘buy in.’ It is expected that such a survey could be readily integrated into


existing HFE activities, such as the collection of operating experience, formative evaluations during the design phase, and even summative evaluations during integrated system validation [11, 12]. This work is intended to complement existing practices, as well as provide the added value of characterizing the factors that influence the early adoption of advanced technologies crucial to ensuring the continued operation of the existing U.S. NPP fleet.

Acknowledgments. This work of authorship was prepared as an account of work sponsored by Idaho National Laboratory (under Contract DE-AC07-05ID14517), an agency of the U.S. Government. Neither the U.S. Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.

References

1. Thomas, K., et al.: Analysis and planning framework for nuclear power plant transformation. INL/EXT-20-59537, Idaho National Laboratory (2020)
2. Kovesdi, C.R., St Germain, S., Le Blanc, K., Primer, C.: Human factors engineering insights and guidance for implementing innovative technologies from the nuclear innovation workshop: a summary report. INL/EXT-19-55529, Idaho National Laboratory (2019)
3. Hunton, P.J., England, R.T.: Addressing nuclear I&C modernization through application of techniques employed in other industries. INL/EXT-19-55799, Idaho National Laboratory (2019)
4. Thomas, K., Lawrie, S., Niedermuller, J.M.: A business case for nuclear plant control room modernization. INL/EXT-16-39098, Idaho National Laboratory (2016)
5. Joe, J.C., Kovesdi, C.R.: Developing a strategy for full nuclear plant modernization. INL/EXT-18-51366, Idaho National Laboratory (2018)
6. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–339 (1989)
7. Marangunić, N., Granić, A.: Technology acceptance model: a literature review from 1986 to 2013. Univ. Access Inf. Soc. 14(1), 81–95 (2015)
8. Ghazizadeh, M., Lee, J.D., Boyle, L.N.: Extending the technology acceptance model to assess automation. Cogn. Technol. Work 14(1), 39–49 (2012)
9. Sauro, J., Lewis, J.R.: Quantifying the User Experience: Practical Statistics for User Research. Morgan Kaufmann (2016)
10. Zhang, T., Tao, D., Qu, X., Zhang, X., Lin, R., Zhang, W.: The roles of initial trust and perceived risk in public’s acceptance of automated vehicles. Transp. Res. Part C: Emerg. Technol. 98, 207–220 (2019)
11. Boring, R.L., Ulrich, T.A., Joe, J.C., Lew, R.T.: Guideline for operational nuclear usability and knowledge elicitation (GONUKE). Proc. Manuf. 3, 1327–1334 (2015)
12. U.S. Nuclear Regulatory Commission: Human Factors Engineering Program Review Model, NUREG-0711, Rev. 3 (2012)

Developments in the Application of Nano Materials for Photovoltaic Solar Cell Design, Based on Industry 4.0 Integration Scheme

Rosine Mouchou1, Timothy Laseinde1, Tien-Chien Jen1,2, and Kingsley Ukoba2

1 Mechanical and Industrial Engineering Department, University of Johannesburg, Johannesburg, South Africa
{otlaseinde,tjen}@uj.ac.za
2 Mechanical Engineering Science Department, University of Johannesburg, Johannesburg, South Africa

Abstract. Nano materials now have vast relevance and application in product design, including power generation systems. This paper focuses on ongoing research involving nickel oxide deposition on glass substrates for power generation in solar cells. Various technologies are being explored using diverse nano materials that can absorb solar irradiation. Likewise, multiple technologies for impregnating the materials onto indium tin oxide (ITO) substrates yield results, even though these are not yet optimal. In this context, the discussion centers on the latest technologies, materials, and conditions that influence the choice and technological approach adopted, based on Industry 4.0 imperatives. The research is informed by core drivers such as evolving technology, affordability, and the domestication of solar cells to meet changing power generation needs using miniaturized and optimized power systems. The study relied on big data for some design considerations, in line with optimization systems applied in Industry 4.0. The fabrication, characterization, and analysis options explored are discussed herein as a starting point, providing insight for further academic exploration of their latent potential.

Keywords: Photovoltaic · Nano materials · Renewable energy · Solar cell · Industry 4.0

1 Introduction

Man has existed on earth for over two hundred thousand years. Modern man continues to adapt to his environment and take advantage of nature’s many resources to improve his living conditions [1]. Life expectancy is constantly increasing, and the frontiers of knowledge are pushed back every day. Moving from manual farming to intensive industrial agriculture, from hunting to large-scale animal agriculture, and from striking flint to start a fire to flicking a switch to turn on a light, man has progressively modified his environment [2]. The last few centuries have been marked by a surge in energy consumption, mainly from fossil resources [3]. The use of these resources is responsible for most greenhouse gas emissions, causing an extremely rapid and unprecedented rise in the average temperature of the oceans and the earth’s surface [IPCC13]. Awareness of climate change is real, but actions are timidly implemented. In developed countries, energy consumption has become very high and can be reduced without comfort being impacted [4]. The use of energy must be rethought and optimized in manufacturing processes as well as in the use of goods and services. Even so, population growth and the economic and social development of least developed and emerging countries should lead to an increase in global energy demand for several decades to come, despite gains in energy efficiency. To avoid an excessive runaway of global warming, the energy consumed in the coming decades must increasingly be produced from renewable resources [4]. There are many renewable energy sources, each of which has specific characteristics. These energy forms emit fewer greenhouse gases than the transformation of energy from fossil resources [5]. Among these renewable energies, the energy received from the sun each day is considerable and represents 7000 times the energy consumed by humanity during the same period [6]. This solar energy can be exploited as heat or directly as radiation. Photovoltaic systems are odorless, inaudible, and low-polluting devices that can generate electricity from light radiation. In several countries around the world, the cost of this mode of electricity production is already lower than that of electricity produced from fossil resources [7].

Photovoltaic (PV) energy is the conversion of the energy of the sun into electricity. It has a major role to play in supplying energy to the global population. The merits of PV are that it is renewable, abundant, and globally distributed, and that generating energy from the sun has little direct environmental impact. Grid connection capacity is developing quickly. However, a significant drawback is the production cost of photovoltaic systems, particularly the solar cells. Thin-film nano-oxide solar cells are among the most widely used technologies for photovoltaic conversion today, and new thin-film nanomaterials have been introduced. To continue and accelerate reductions in photovoltaic costs, a technological leap is necessary. This paper aims to develop new deposition methods, techniques, and ideas based on Industry 4.0 to reduce these costs.


most greenhouse gas emissions, causing the extremely rapid rise and an unprecedented average temperature of the oceans and the earth’s surface [IPCC13]. Usually, awareness of climate change is real, but actions are timidly implemented. In developed countries, energy consumption has become very important and can be reduced without the Comfort being impacted [4]. The use of energy must be rethought and optimized in manufacturing processes as well as in the use of goods and services. However, the increase in energy efficiency, population growth, and the economic and social development of the countries (least developed and emerging countries) should lead to an increase in global energy demand for several decades to come. To avoid an excessive runaway of global warming, the energy consumed by people in the coming decades must be produced in quantity to rising from renewable resources or energy [4] there are many renewable energy sources, each of which has specific characteristics. These energy forms emit less gas at greenhouse effect than the transformation of energy from fossil resources [5]. Among these renewable energies, the energy received from the sun each day is considerable and represents 7000 times the energy consumed by humanity during the same period [6]. This solar energy can be used for heat or radiation. The photovoltaic systems are odorless, inaudible, and low-polluting devices that can generate electricity from light radiation. In several countries in the world, the cost of this mode of electricity production is already lower than that produced from fossil resources [7]. Photovoltaic energy (PV) is the conversion of the energy of the sun into electricity. It has a major role to play in energy for global population. The merit of PV is that it is renewable, abundant and globally distributed. Also, the energy generated from solar does not affect the environment. The development of the connection capacity is fast. However, a significant drawback is the production costs of the photovoltaic systems vis-a-viz the solar cells. Thin-film Nano oxide for solar cells is among the widely used technology for photovoltaic conversion today. There is new thin-film nanomaterial that has been used.in order to continue and accelerate the reductions in costs of photovoltaic, it is necessary to make a technological leap. This paper aims to develop new methods or techniques deposition and ideas based in industry 4.0 to reduce these costs.

2 Overview of Nanomaterials for Solar Cells

Nanostructured solar cells have been suggested as a viable alternative to traditional fossil fuels in the bid to meet global electricity demand. Solar and wind energy are among the top renewable sources. The technology comprises long-chain nanostructured thin-film photovoltaic solar cells. In South Africa, the Total group (an oil and gas company) was chosen to build and operate one of the largest ground-based solar photovoltaic plants in Africa, at Prieska; commissioned in 2016, it was custom-built to provide clean and sustainable energy to the South African population [8]. According to Green [9], under a merchant model the electricity from the photovoltaic solar power plant was sold to the national power grid at the wholesale market rate (Fig. 1); Fig. 2 shows the proposed wind integration model. This national power grid model capitalizes on the region's distinct characteristics: low construction costs, growing electricity demand, and abundant sunshine.


Fig. 1. Voltage and electrical line diagrams for a general electricity network [9]

Fig. 2. Proposed wind model integration with national power grid [10]

3 Latest Research and Development

3.1 Photoelectric Effect and the P-N Junction

Converting solar energy into electrical energy is based on the photoelectric effect, which solar cells exploit in a solar system. When sunlight falls on a semiconductor material, photons of light are absorbed, freeing electrons [11]. The photon energy must be at least equal to the energy gap of the material. The energy of the absorbed photons allows electronic transitions from the valence band to the conduction band of the semiconductor, creating electron-hole pairs that can contribute to current transport (photoconductivity) through the material when it is polarized [12]. A P-N junction is formed when a semiconducting device is doped (Fig. 3). The electron-hole pairs created in the junction's space-charge zone are immediately separated by the electric field that prevails in this region and are drawn into the neutral zones on either side of the junction [13]. If the device is isolated, a potential difference appears across the junction [14]. Hence, if it is connected to an external electrical load, a current flows without any voltage being applied to the device. This is the principle of the photovoltaic cell.


Fig. 3. P-N junction in a photovoltaic cell [15]

Limitation: If each incident photon injected an electron into the electrical circuit, photovoltaic devices would be very efficient [16]. In practice, several factors limit this photoconversion. First, the wavelength of the incident radiation must be short enough that the photon energy is greater than that of the gap and can be absorbed [14]. Thus, with a bandgap of about 1.12 eV, monocrystalline silicon only absorbs photons of wavelength shorter than roughly 1100 nm, the optimum absorption being around the gap energy. Since the solar emission spectrum ranges from 250 to 2000 nm, with peak emission in the visible between 550 and 700 nm, only part of the radiation is useful for creating charge carriers. Secondly, not all photogenerated carriers are recovered after separation [16].
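As a quick check on these numbers, the cutoff wavelength follows directly from the photon-energy relation (a standard identity, stated here for the reader's convenience):

\lambda_{\max} = \frac{hc}{E_g} \approx \frac{1240\ \text{nm}\cdot\text{eV}}{E_g\,[\text{eV}]}

so a bandgap of 1.12 eV gives \lambda_{\max} \approx 1240/1.12 \approx 1107 nm, consistent with the roughly 1100 nm absorption edge of monocrystalline silicon quoted above.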

4 Deposition Methods

Numerous techniques have been used to deposit thin-film nano oxide materials such as NiO, TiO2, and CuO. These include spin coating [17], dip coating [18], spray pyrolysis (SP) [19], sol-gel [20], pulsed laser deposition [21], chemical vapour deposition (CVD) [22], atomic layer deposition (ALD) [23], and electron beam evaporation (EBE) [24]. Figure 4 shows the classification of thin-film deposition methods. This paper sheds light on the deposition methods used for NiO and CuO thin films.

4.1 Spin Coating

For the fabrication of smooth and uniform thin-film devices on a flat glass substrate, spin coating is one of the most common and practical sol-gel, solution-based deposition methods [26]. It is a type of centrifugal deposition technique in which the substrate rotates at high speed under centrifugal force, as shown in Fig. 5a. It is used to produce photosensitive organic materials with thicknesses from micrometers down to nanometers; spin coating is also a predominant method for producing uniform thin films of photovoltaic materials in this thickness range [27]. According to Emil et al. [28], a thin axisymmetric film of Newtonian fluid spreads on a planar glass substrate rotating at constant angular velocity. A micropipette is used to deposit the solution onto the substrate during the coating of organic material, as shown in Fig. 5b [29]. A vacuum pump acts as a holder, providing the suction that holds the substrate to the plate. The spinning rotation speed


Fig. 4. Thin films deposition method classification [25]

ranges between 1000 and 8000 rpm [30]. Spin-coating analyses date back more than a decade [31]; the spreading of a thin film on a substrate rotating at constant angular velocity is treated as that of a Newtonian fluid [28]. Spin coating is typically applied to polymeric materials in solution, with the solvent evaporating during spinning.

Fig. 5. Spin coating process [17]
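Final thickness in spin coating is commonly estimated from the empirical inverse-square-root dependence on spin speed. The sketch below is a minimal illustration of that scaling; the prefactor k is a hypothetical lumped constant (viscosity, concentration, evaporation rate), not a value measured in this work.

import math

def spin_coat_thickness_nm(spin_speed_rpm: float, k: float = 3.0e4) -> float:
    """Estimate final film thickness from spin speed.

    Uses the widely reported empirical scaling t ~ k * omega**(-1/2).
    The prefactor k lumps together solution viscosity, concentration,
    and evaporation rate; the default here is purely illustrative.
    """
    return k / math.sqrt(spin_speed_rpm)

# Sweep the 1000-8000 rpm range quoted in the text [30].
for rpm in (1000, 2000, 4000, 8000):
    print(f"{rpm:>5} rpm -> ~{spin_coat_thickness_nm(rpm):.0f} nm")

Doubling the spin speed thus thins the film by roughly a factor of 1.4, which is why speed is one of the fabrication parameters varied in this study.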

4.2 Dip Coating

This process aims to produce photovoltaic conductive thin films from a conductive metal oxide. Dip coating was used to fabricate thin-film metal oxides long before the vacuum evaporation method came into intensive use. The method may be used when the coating material does not respond to magnetic or electric fields [18]. It shares a similar deposition procedure with the sol-gel technique. The process involves cleaning the substrate and dipping it into the precursor solution to be deposited [32]. The substrate is then withdrawn vertically at a regulated speed [33]; the film is deposited during this withdrawal [34] and begins to form as the solvent evaporates (Fig. 6). Similarly, changing the duration of the steps controls the thickness of the film. The dip-coating


method is economical and offers flexibility in the doping of metal oxides [35]. The technique is capable of coating wide substrate areas [36]. However, it has the demerit of being a slow process, and the coating can reduce optical transparency, which has a vital consequence for the finished product [37]. The film thickness deposited by this method depends on the number of dips [38]: the more dips, the thicker the film, and vice versa.

Fig. 6. Dip coating technique for thin film [35]
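Since the film is entrained during withdrawal, a first-order estimate of the wet-film thickness can be made with the classical Landau-Levich relation. The sketch below applies it with placeholder fluid properties; none of the numerical values come from this study.

def landau_levich_thickness(eta, U, gamma, rho, g=9.81):
    """Landau-Levich estimate of the entrained wet-film thickness (metres).

    h = 0.94 * (eta*U)**(2/3) / (gamma**(1/6) * (rho*g)**(1/2)),
    valid at low capillary number (Ca = eta*U/gamma << 1).
    """
    return 0.94 * (eta * U) ** (2 / 3) / (gamma ** (1 / 6) * (rho * g) ** 0.5)

# Illustrative sol-gel-like precursor: eta = 5 mPa.s, gamma = 30 mN/m,
# rho = 900 kg/m^3, withdrawn at 2 mm/s (all assumed values).
h = landau_levich_thickness(eta=5e-3, U=2e-3, gamma=30e-3, rho=900.0)
print(f"Estimated wet-film thickness: {h * 1e6:.1f} um")

The wet film then shrinks considerably on drying and annealing, so the final oxide layer is much thinner than this estimate.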

4.3 Spray Pyrolysis

Spray pyrolysis deposition is a simple chemical solution process that is fast, inexpensive, and scalable. It is a compatible way of depositing material to obtain a conductive thin film directly on the substrate at standard room temperature, using a forced-air spraying device (airbrush or spray gun) [39]. The process depends on the temperature of the substrate, the concentration of the solution, and the spray interval fraction [24]. This method is a prime coating method used in industry with any substrate (rigid or flexible) and diverse geometries [40]. The temperature of the substrate and the concentration of the solution are vital to the film production process [41]. When the substrate's temperature is below 300 °C, the precursor undergoes partial heat oxidation, resulting in a foggy film whose conductivity is extremely weak [42]. If the substrate's temperature is above 600 °C, the spray vaporises before making contact with the substrate, yielding a virtually powdery film [43]. Annealing the substrate during spray deposition can accelerate the drying of the deposited materials. Washing the thin film with water or a mild acid treatment is recommended after deposition so that the residual solvent is completely removed. Studies on the spray pyrolysis of NiCl2·2H2O, CuONa·2H2O, and TiO2/HfO2 [41, 43] have shown the workability and feasibility of the deposition method. As shown in Fig. 7, spray pyrolysis is among the most widely adopted deposition techniques for processing thin-film metal oxides [40].

4.4 Chemical Vapour Deposition (CVD)

Fabrication of NiO, CuO, and TiO2/HfO2 using the Chemical Vapour Deposition technique is an industrial procedure used to deposit a conductive, transparent thin oxide on a substrate, applicable to solar cells, low-emissivity windows, and flat panel


Fig. 7. Thin film deposition using spray pyrolysis [40]

display [22]. CVD preceded many other deposition methods, including atomic layer deposition (ALD), and has existed since the 1920s [44]. It is a flexible, gas-phase thin-film deposition method based on a chemical reaction across the substrate [45]. CVD is an example of a solid-vapour reaction: a gaseous precursor decomposes on the heated substrate via a chemical reaction to form the final product, so the solid material forms as a coating or as a single crystal. Varying the experimental conditions, including the substrate material, the temperature, the composition of the reaction gas mixture, the total pressure, and the gas flow, makes it possible to grow materials with different properties [46]. In the CVD process (Fig. 8), the precursor solution is atomized into a gas, then evaporated and deposited on a heated glass substrate [47].

Fig. 8. Diagram of conventional CVD chemical material [47]
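In the surface-reaction-limited regime, CVD growth rate is often modeled with an Arrhenius temperature dependence, which illustrates why substrate temperature is such a sensitive condition. The sketch below uses assumed, purely illustrative values for the pre-exponential factor and activation energy.

import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def cvd_growth_rate(temp_k: float, a: float = 1.0e8, ea_ev: float = 1.2) -> float:
    """Arrhenius growth rate (nm/min) in the reaction-limited regime.

    rate = a * exp(-Ea / (kB * T)); both a and Ea are illustrative
    assumptions, not fitted to any material in this paper.
    """
    return a * math.exp(-ea_ev / (K_B_EV * temp_k))

for t_c in (400, 500, 600):
    t_k = t_c + 273.15
    print(f"{t_c} degC -> {cvd_growth_rate(t_k):.2f} nm/min")

With these numbers, a 200 °C change in substrate temperature shifts the growth rate by two orders of magnitude, underscoring the need for tight temperature control.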

4.5 Other Deposition Methods

Other deposition methods for fabricating solar cells or producing a conductive thin film from conductive metal oxide nanomaterials, such as sol-gel, pulsed laser deposition, atomic layer deposition (ALD), and electron beam evaporation (EBE), are in most cases less used than the methods discussed above, partly due to global and regional availability. ALD is used more in the USA than in other parts of the world; in Africa, countries like South Africa have only recently started implementing and using ALD for renewable energy applications [48]. Moreover, only some of the listed deposition methods are feasible and scalable for industrial and commercial production; more research is therefore needed to increase compatibility with industrial demands.

5 Modelling and Simulation

Alongside the experimental route of fine-tuning the efficiency of solar cells, a theoretical route is also employed to obtain improved solar cells. This involves producing a mathematical


model, or using a modeling tool compatible with the photovoltaic (PV) system [49]. The model is based on the nominal values: the open-circuit voltage, the short-circuit current, and the current and voltage corresponding to the maximum power point [50]. The resulting model offers the opportunity to improve solar cells and to study the influence of various physical quantities, such as temperature and irradiation [51].

5.1 Description of the Model

To infuse renewable energies into the power network, minimize the contamination resulting from the use of fossil fuels, and ensure better efficiency in the generation of green power [52], it is vital to master renewable sources such as solar photovoltaic or wind power, in terms of both scientific and technical knowledge. However, the production of this energy is non-linear and varies with light intensity and temperature. The model described here is built under Matlab/SIMULINK and SCAPS for industrial use, together with the characteristic simulations run under them [53], although other simulation tools, such as AMPS and wxAMPS, have also been used hitherto.

5.2 Modeling Under Matlab/Simulink/SCAPS

The model permits the characterization of photovoltaic module activity for various irradiation values G. Therefore, with a constant temperature of 25 °C, the curves in Fig. 9 were plotted for three different values of G: 600 W/m², 800 W/m², and 1000 W/m².

Fig. 9. Influence of illumination on I (V) & P (V) characteristics @ Tref = 25 °C [53]
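The nominal-value model sketched in Sect. 5.1 is commonly realized as the single-diode equation. The snippet below solves that equation by fixed-point iteration for an illustrative 36-cell module; all parameter values are placeholders, not those of the module simulated in [53].

import math

Q = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23    # Boltzmann constant, J/K

def single_diode_current(v, i_ph, i_0, r_s, r_sh, n, cells, t_k=298.15):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for the current I at terminal voltage V by fixed-point iteration."""
    vt = K_B * t_k / Q            # thermal voltage, ~25.7 mV at 25 degC
    i = i_ph                      # initial guess: the photocurrent
    for _ in range(200):
        i = i_ph - i_0 * (math.exp((v + i * r_s) / (n * cells * vt)) - 1.0) \
              - (v + i * r_s) / r_sh
    return i

# Illustrative 36-cell module near 1000 W/m^2 (all parameters assumed).
for v in (0.0, 12.0, 17.0, 20.0):
    i = single_diode_current(v, i_ph=5.0, i_0=1e-7, r_s=0.2,
                             r_sh=300.0, n=1.3, cells=36)
    print(f"V = {v:5.1f} V -> I = {i:5.2f} A, P = {v * i:6.1f} W")

Sweeping the photocurrent i_ph in proportion to G reproduces the qualitative behavior of the I(V) and P(V) families in Fig. 9.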

5.3 Simulation

In this simulation part, we present the mathematical modeling of a PV system as well as the I(V) and P(V) characteristics as simulated in Matlab (SIMULINK/SCAPS). To improve the efficiency of a PV system, it is necessary to integrate a Maximum Power Point Tracking (MPPT) control ensuring the pursuit of the maximum power supplied by the PV generator [53]. MPPT can be achieved in different ways, using diverse approaches and algorithms, notably fuzzy logic and neural networks. Nevertheless, the Perturbation and Observation (P&O) technique and the


Fig. 10. Graph of Voc, Jsc, Fill factor, performance and J-V characteristics of p-NiO/n-TiO2 solar cells [54]

incremental conductance (INC) technique are the most widely used, due to their simplicity and efficiency (Fig. 10); a sketch of P&O is given after the equations below. The SCAPS simulation software solves the basic semiconductor equations, namely Poisson's equation for electrons and holes (1), the continuity equations for electrons and holes (2a, 2b), and the carrier transport equations for electrons and holes (3a, 3b), to obtain the J-V characteristics of each simulated solar cell device [54]:

\frac{d^2\phi(x)}{dx^2} = \frac{e}{\varepsilon_0\varepsilon_r}\left(p(x) - n(x) + N_D - N_A + \rho_p - \rho_n\right) \qquad (1)

where \phi, e, \varepsilon_0, \varepsilon_r, N_D, N_A, \rho_p and \rho_n are the electrostatic potential, the electrical charge, the vacuum permittivity, the relative permittivity, the charged donor impurities, the charged acceptor impurities, the hole distribution and the electron distribution, respectively [55].

\frac{dJ_n}{dx} = G - R \qquad (2a)

\frac{dJ_p}{dx} = G - R \qquad (2b)

where R is the rate of recombination and G is the rate of generation.

J_n = D_n\,\frac{dn}{dx} + \mu_n\, n\, \frac{d\phi}{dx} \qquad (3a)

J_p = D_p\,\frac{dp}{dx} + \mu_p\, p\, \frac{d\phi}{dx} \qquad (3b)

where J_n and J_p are the electron and hole current densities.
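Returning to the MPPT techniques mentioned above, Perturb and Observe admits a very compact implementation. The sketch below is a generic P&O update rule, not the controller used in [53]; the sample (V, I) pairs are illustrative.

def po_step(v_meas, i_meas, state, step=0.5):
    """One Perturb-and-Observe iteration.

    state = (prev_power, direction). Returns (v_ref_increment, new_state).
    Keep perturbing in the same direction while power rises; reverse
    the perturbation direction when power falls.
    """
    prev_p, direction = state
    p = v_meas * i_meas
    if p < prev_p:
        direction = -direction
    return direction * step, (p, direction)

# Usage with illustrative samples: feed each new (V, I) measurement in
# and apply the returned voltage-reference increment to the converter.
state = (0.0, +1.0)
for v, i in [(15.0, 4.9), (15.5, 4.8), (16.0, 4.6), (15.5, 4.8)]:
    dv, state = po_step(v, i, state)
    print(f"V={v:.1f} V, P={v * i:5.1f} W -> adjust v_ref by {dv:+.1f} V")

The same loop structure underlies INC; only the test that decides the perturbation direction changes.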

6 Conclusion

This paper discusses the application of different deposition methods for nanomaterial solar cells in photovoltaic applications, based on an Industry 4.0 integration scheme.


It revealed that deposited films are characterized and analyzed using SEM, XRD, TEM, FTIR, TGA, EDX, and UV-visible spectrophotometry. Furthermore, the paper gave an overview of nanomaterial thin films, the different types of deposition methods, the latest research in this field of study, developments in photovoltaic cells, and, finally, simulation and modelling. The nanomaterial fabrication in this research focused on parameters such as thickness, annealing, and speed. NiO/CuO/TiO2 films are expected to gain industrial usage and application from the latest developments in material synthesis [56].

Acknowledgment. The authors would like to acknowledge the funding of the NRF and the URC funding mechanism of the University of Johannesburg, South Africa.

References
1. Parida, B., Iniyan, S., Goic, R.: A review of solar photovoltaic technologies. Renew. Sustain. Energy Rev. 15, 1625–1636 (2011)
2. Werner, J.H.: Second and third generation photovoltaics – dreams and reality. In: Kramer, B. (ed.) Advances in Solid State Physics, vol. 44, pp. 51–68. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-39970-4_5
3. Green, M.A.: Prospects for photovoltaic efficiency enhancement using low-dimensional structures. Nanotechnology 11(4), 401–405 (2000). https://doi.org/10.1088/0957-4484/11/4/342
4. Frankl, P., N.S.: Technology Roadmap: Solar Photovoltaic Energy. OECD/IEA, Paris, France (2010)
5. Taylor, R.A., et al.: Applicability of nanofluids in high flux solar collectors. J. Renew. Sustain. Energy 3, 23–104 (2011)
6. Kazmerski, L.L.: Solar photovoltaics R&D at the tipping point: a 2005 technology overview. Electron. Spectroscopy 150, 105–135 (2006)
7. Nozik, A.J., Conibeer, G., Beard, M.C.: Advanced Concepts in Photovoltaics. Royal Society of Chemistry, Oxford (2014)
8. Conibeer, G.: In: Proceedings of the 25th European Photovoltaic Solar Energy Conference, Valencia (2010)
9. Green, M.A.: Third generation photovoltaics: advanced solar energy conversion. Phys. Today 57, 71–74 (2004)
10. Rojas, D., Beermann, J., Klein, S., Reindl, D.: Thermal performance testing of flat plate collectors. Sol. Energy 82, 746–757 (2011)
11. Alferov, Zh.I.: Trends and perspectives of solar photovoltaics. Semiconductors 938–948 (2004)
12. Yamaguchi, M.: Japan programs on novel concepts in PV. Semiconductors 394–399 (2004)
13. Kempa, K., Naughton, M.J., Ren, Z.F., Herczynski, A., Kirkpatrick, T.: Hot electron effect in nanoscopically thin photovoltaic junctions. Appl. Phys. Lett. 95, 1–3 (2009)
14. Gradauskas, J., Širmulis, E., Ašmontas, S., Sužiedėlis, A., Dashevsky, Z., Kasiyan, V.: Peculiarities of high power infrared detection on narrow-gap semiconductor p-n junctions. Acta Phys. Polonica 119, 237–240 (2011)
15. Ukoba, O.K.: Fabrication of affordable and sustainable solar cells using NiO/TiO2 P-N heterojunction. Int. J. Photoenergy 2018, 7 (2018)
16. Ašmontas, S.: Photoelectric properties of nonuniform semiconductors under infrared laser radiation. In: Proceedings of SPIE, pp. 18–27 (2001)


17. Chou, K.S., Huang, K.C., Lee, H.H.: Fabrication and sintering effect on the morphologies and conductivity of nano particle films by spin coating method. Nanotechnology 16, 779–784 (2005)
18. Atwa, Y., Goldthorpe, I.A.: Metal-nanowire coated threads for conductive textiles. In: Proceedings of the IEEE 14th International Conference on Nanotechnology, Toronto, Canada, pp. 18–21 (2014)
19. Shamala, K.: Studies on tin oxide films prepared by electron beam evaporation and spray pyrolysis methods. Bull. Mater. Sci. 27, 295–301 (2004)
20. Thiagarajan, S.: Facile methodology of sol-gel synthesis for metal oxide nanostructures. In: Recent Applications in Sol-Gel Synthesis, pp. 1–17 (2017)
21. Park, J.J., Kim, K., Roy, M., Song, J.K., Park, S.M.: Characterization of SnO2 thin films grown by pulsed laser deposition under transverse magnetic field. Rapid Commun. Photosci. 4(3), 50–53 (2015). https://doi.org/10.5857/RCP.2015.4.3.50
22. van Mol, A.M.B., Chae, Y., McDaniel, A.H., Allendorf, M.D.: Chemical vapor deposition of tin oxide: fundamentals and applications. Thin Solid Films 502(1–2), 72–78 (2006). https://doi.org/10.1016/j.tsf.2005.07.247
23. Ponraj, J.S., Attolini, G., Bosi, M.: Review on atomic layer deposition and applications of oxide thin films. Crit. Rev. Solid State Mater. Sci. 38(3), 203–233 (2013). https://doi.org/10.1080/10408436.2012.736886
24. Shamala, K.S., Murthy, L.C.S., Narasimha Rao, K.: Studies on tin oxide films prepared by electron beam evaporation and spray pyrolysis methods. Bull. Mater. Sci. 27(3), 295–301 (2004). https://doi.org/10.1007/BF02708520
25. Ukoba, K., Eloka-Eboka, A., Inambao, F.: Review of nanostructured NiO thin film deposition using the spray pyrolysis technique. Renew. Sustain. Energy Rev. 82(3), 2900–2915 (2018)
26. Vuong, D.D., Sakai, G., Shimanoe, K., Yamazoe, N.: Preparation of grain size-controlled tin oxide sols by hydrothermal treatment for thin film sensor application. Sens. Actuators B: Chem. 103(1–2), 386–391 (2004). https://doi.org/10.1016/j.snb.2004.04.122
27. Cavicchi, R.E., Walton, R.M., Aquino-Class, M., Allen, J.D., Panchapakesan, B.: Spin-on nanoparticle tin oxide for microhotplate gas sensors. Sens. Actuators B: Chem. 77(1–2), 145–154 (2001). https://doi.org/10.1016/S0925-4005(01)00686-4
28. Adedokun, O., Odebunmi, B.M., Sanusi, Y.K.: Effect of fluorine doping on the structural, optical and electrical properties of spin coated tin oxide thin films for solar cells application. Sci. Focus J. (2018–2019). https://doi.org/10.36293/sfj.2019.0002
29. Yilbas, B.S., Al-Sharafi, A., Ali, H.: Surfaces for self-cleaning. In: Self-Cleaning of Surfaces and Water Droplet Mobility, pp. 45–98 (2019). https://doi.org/10.1016/b978-0-12-814776-4.00003-3
30. Boudrioua, A., Chakaroun, M., Fischer, A.: Introduction. In: Organic Lasers. Elsevier (2017). https://doi.org/10.1016/b978-1-78548-158-1.50009-2
31. Gu, F., et al.: Luminescence of SnO2 thin films prepared by spin-coating method. J. Cryst. Growth 262, 182–185 (2004). https://doi.org/10.1016/j.jcrysgro.2003.10.028
32. Meulendijks, N., Burghoorn, M., Van Ee, R., Mourad, M., et al.: Electrical conductive coatings consisting of Ag-decorated cellulose nanocrystals. Cellulose 24, 2191–2204 (2017)
33. Levy, D., Zayat, M. (eds.): The Sol-Gel Handbook: Synthesis, Characterization, and Applications (2015)
34. Korotcenkov, G., et al.: Structural stability of indium oxide films deposited by spray pyrolysis during thermal annealing. Thin Solid Films 479, 38–51 (2005). https://doi.org/10.1016/j.tsf.2004.11
35. Dislich, H., Hussmann, E.: Amorphous and crystalline dip coatings obtained from organometallic solutions: procedures, chemical processes and products. Thin Solid Films 77(1–3), 129–140 (1981). https://doi.org/10.1016/0040-6090(81)90369-2
36. Chatelon, J.P., Terrier, C., Bernstein, E., Berjoan, R., Roger, J.A.: Morphology of SnO2 thin films obtained by the sol-gel technique. Thin Solid Films 247(2), 162–168 (1994)


37. Kumar, A., Nanda, D.: Methods and fabrication techniques of superhydrophobic surfaces. In: Superhydrophobic Polymer Coatings, pp. 43–75. Elsevier (2019). https://doi.org/10.1016/b978-0-12-816671-0.00004-7
38. Neacșu, I.A., Nicoară, A.I., Vasile, O.R., Vasile, B.Ș.: Inorganic micro- and nanostructured implants for tissue engineering. Nanobiomater. Hard Tissue Eng. 4, 271–295 (2016)
39. Epifani, M., et al.: SnO2 thin films from metalorganic precursors: synthesis, characterization, microelectronic processing and gas-sensing properties. Sens. Actuators B: Chem. 217–226 (2007). https://doi.org/10.1016/j.snb.2006.12.029
40. Akl, A.: Optical properties of crystalline and non-crystalline iron oxide thin films deposited by spray pyrolysis. Appl. Surf. Sci. 233, 307–319 (2004)
41. Yadav, A.A.: SnO2 thin film electrodes deposited by spray pyrolysis for electrochemical supercapacitor applications. J. Mater. Sci.: Mater. Electron. 27(2), 1866–1872 (2015). https://doi.org/10.1007/s10854-015-3965-4
42. Patil, G.E., Kajale, D.D., Gaikwad, V.B., Jain, G.H.: Spray pyrolysis deposition of nanostructured tin oxide thin films. ISRN Nanotechnol. 2012, 1–5 (2012). https://doi.org/10.5402/2012/275872
43. Sears, W.M., Gee, M.A.: Mechanics of film formation during the spray pyrolysis of tin oxide. Thin Solid Films 165(1), 265–277 (1988). https://doi.org/10.1016/0040-6090(88)90698-0
44. Pengyi, L., Junfang, C., Wangdian, S.: Sheet resistance and gas-sensing properties of tin oxide thin films by plasma enhanced chemical vapor deposition. Plasma Sci. Technol. 6, 22–59 (2004). https://doi.org/10.1088/1009-0630/6/2/015
45. Moorthy, S.B.K. (ed.): Thin Film Structures in Energy Applications. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-14774-1
46. Manawi, Y., Ihsanullah, A., Al-Ansari, T., Atieh, M.: A review of carbon nanomaterials' synthesis via the chemical vapor deposition (CVD) method. Materials 11(5), 822–829 (2018). https://doi.org/10.3390/ma11050822
47. Seshan, K., Schepis, D.: Handbook of Thin Film Deposition, p. 103. William Andrew (2018). https://doi.org/10.1016/b978-0-12-812311-9.00030-x
48. Shaeri, M.R., Jen, T.-C., Yuan, C.Y.: Reactor scale simulation of an atomic layer deposition process. Chem. Eng. Res. Des. 94, 584–593 (2015). https://doi.org/10.1016/j.cherd.2014.09.019
49. Ahmed, A., Zahedi, G.: Sustainable energy systems: role of optimization modeling techniques in power generation and supply: a review. Renew. Sustain. Energy Rev. 15(8), 3480–3500 (2011)
50. Jain, M., Ramteke, N.: Modeling and simulation of solar photovoltaic module using Matlab/Simulink. J. Comput. Eng. 15(5), 27–34 (2013)
51. Zeng, Y., et al.: Review on optimization modeling of energy systems planning and GHG emission mitigation under uncertainty. Energies 4(1), 1624–1656 (2011)
52. Singh, K.J., et al.: Artificial neural network approach for more accurate solar cell electrical circuit model. Int. J. Comput. Appl. 4(3), 101–116 (2014)
53. Selmi, T., Abdul-Niby, M., Alameen, M.: Analysis and investigation of a two-diode solar cell using MATLAB/Simulink. Int. J. Renew. Energy Res. 4(1), 99–102 (2014)
54. Atourki, L., Kirou, H., Ihlal, A., Bouabid, K.: Numerical study of thin films CIGS bilayer solar cells using SCAPS. Mater. Today: Proc. 3(7), 2570–2577 (2016)
55. Khoshsirat, N., Md. Yunus, N.A.: Numerical simulation of CIGS thin film solar cells using SCAPS-1D. In: IEEE Conference on Sustainable Utilization and Development in Engineering and Technology (2013)
56. Mouchou, R.T., Jen, T.C., Laseinde, O.T., Ukoba, K.O.: Numerical simulation and optimization of p-NiO/n-TiO2 solar cell system using SCAPS. Mater. Today: Proc. 38, 835–841 (2021)

Autonomous Emergency Operation of Nuclear Power Plant Using Deep Reinforcement Learning

Daeil Lee and Jonghyun Kim(B)

Department of Nuclear Engineering, Chosun University, 309 Pilmun-daero, Dong-gu, Gwangju 501-709, Republic of Korea
[email protected], [email protected]

Abstract. The goal of emergency operation in nuclear power plants (NPPs) is to ensure the integrity of the reactor core as well as the containment building under undesired initiating events. In this operation, operators perform situation awareness, confirm the automatic actuation of safety systems, and carry out manual operations to cool down the reactor according to the operating procedures. This study aims to develop an autonomous operation agent that can reduce the pressure and temperature of the primary system. The agent applies the Soft Actor-Critic (SAC) algorithm, a deep reinforcement learning algorithm for optimizing stochastic actions. With SAC, the agent is trained to find actions that meet the pressure and temperature curve criteria and the cooling rate. In addition, a test using a compact nuclear simulator demonstrates that the agent can cool down the reactor by manipulating the necessary systems in compliance with the constraints.

Keywords: Autonomous operation · Abstraction decomposition space · Emergency operation · Nuclear power plant · Reinforcement learning

1 Introduction

The goal of emergency operation in nuclear power plants (NPPs) is to mitigate abnormal situations and to ensure the integrity of the reactor core as well as the containment building under undesired initiating events. In this operation, operators carry out the necessary actions based on operating procedures. These actions include situation awareness, confirmation of the automatic actuation of safety systems, and manual operations to cool down the reactor. Current NPPs are operated through the cooperation of the operators' manual control and the automatic control of conventional controllers, such as the proportional-integral-derivative (PID) controller. NPPs are highly automated systems designed to increase electricity availability, reduce accident risk, and decrease operating costs [1]. However, they cannot be described as autonomous systems, even though most systems are automated. As the application of artificial intelligence (AI) to autonomous operation increases in many industries [2], the possibility of autonomous operation for NPPs is


being investigated. In particular, some small modular reactors under development are attempting to apply autonomous operation [3, 4]. Deep reinforcement learning, a kind of AI technology, is considered helpful for the decision-making of operating systems in industrial fields, such as managing plants, playing computer games, and car/train traffic control. Deep reinforcement learning combines reinforcement learning with deep neural networks. This learning method, which samples from the target environment (i.e., a plant system, simulator, or computer game), is known to overcome the limitations of traditional controllers (i.e., if-then logic, PIDs, fuzzy controllers). Moreover, in computer games, deep reinforcement learning has sometimes achieved higher scores than humans [5]. Soft Actor-Critic (SAC) is a deep reinforcement learning method that optimizes a stochastic policy in an off-policy way. SAC can find a policy that explores widely while giving up on clearly unpromising avenues; the policy can capture multiple operational paths of near-optimal behavior. In addition, SAC has proven its data efficiency and learning stability, as well as its robustness to hyper-parameters [6]. This study aims to develop an agent that can manage emergency situations by reducing the pressure and temperature of the reactor and cooling systems down to the shutdown cooling entry condition, i.e., the pressure should be reduced from 156.2 kg/cm² to 29.5 kg/cm², and the temperature from 309 °C to 170 °C. The agent applies the Soft Actor-Critic (SAC) algorithm and a deep neural network. In order to define the input/output values of the deep neural network, this study analyzes the functional recovery procedures (FRPs) using the abstraction decomposition space (ADS). With the ADS, this study analyzes the target domain through a step-down decomposition and draws the constraints of the given domain. These identified constraints are used for designing the reward algorithm, which provides training directions for the agent. Test results using a compact nuclear simulator show that the suggested emergency operation agent can manipulate the components in compliance with the identified constraints down to the shutdown cooling entry condition.

2 Emergency Operation Analysis

Current NPP operating strategies for reducing the pressure and temperature during an emergency situation were considered in developing the autonomous operating agent. The FRPs were analyzed by identifying the operational goals and criteria, the required systems and components, and the success paths to mitigate the emergency situation. The identified elements were then mapped into the ADS table. Using the ADS, the tasks of the agent and the reward criteria were defined.

2.1 Emergency Operation Analysis Based on FRP

This study identified the ultimate goal, the criteria of each safety function, and the systems and components required in the emergency operation based on the FRP. The goal of emergency operations is to ensure the integrity of the reactor core as well as the containment building under undesired initiating events. For pressurized water reactors, the goal


of NPP safety can typically be accomplished by nine safety functions. Operators diagnose the current situation and take the necessary actions based on emergency operating procedures. The emergency operating procedures in Korean NPPs can be divided into event-based procedures (optimal recovery procedures) and symptom-based procedures (functional recovery procedures) [6]. The optimal recovery procedure (ORP) is designed to cover design basis accidents (DBAs), such as the loss of coolant accident (LOCA) and the steam generator tube rupture (SGTR). The FRP, on the other hand, focuses on the recovery of safety functions. It provides operator actions for events for which a diagnosis is not possible, or for which no ORP is available. The actions of the FRPs ensure that the plant is placed in a stable, safe condition. Figure 1 shows the flow of the emergency operation strategy.

Fig. 1. Strategy flow chart for emergency operation

This study analyzed the nine safety functions to identify the required systems and components. The nine safety functions and their purposes are shown in Table 1, and Table 2 lists the safety systems and components designed to satisfy each safety function in Korean NPPs [8].

Table 1. Nine safety functions.

No | Safety function | Purpose
1 | Reactivity control | Shut reactor down to reduce heat production
2 | Reactor coolant system (RCS) inventory control | Maintain volume or mass of reactor coolant system
3 | RCS pressure control | Maintain pressure of reactor coolant system
4 | RCS heat removal | Transfer heat out of coolant system medium
5 | Core heat removal | Transfer heat from core to a coolant
6 | Containment isolation | Close valves penetrating containment
7 | Containment pressure and temperature control | Keep from damaging containment
8 | Hydrogen control | Control hydrogen concentration
9 | Maintenance of vital auxiliaries | Maintain operability of systems needed to support safety systems

Table 2. Safety systems and components designed to satisfy each safety function.

Function | System | Component
Reactivity control | Plant protection system (PPS) | Control element drive mechanism
Reactivity control | Digital control rods system (DCRS) | Control element drive mechanism
RCS inventory control | Safety injection system (SIS) | SI pump, SI tank, SI valve
RCS inventory control | Chemical and volume control system (CVCS) | Charging valve, letdown valve, charging pump, orifice valve
RCS pressure control | SIS | SI pump, SI tank, SI valve
RCS pressure control | Safety depressurization and vent system | Power operated relief valve (PORV)
RCS pressure control | Pressurizer (PZR) pressure control system | PZR spray valve, PZR heater
RCS heat removal | Main feedwater system | Main feedwater pump
RCS heat removal | Aux feedwater system | Aux feedwater pump
RCS heat removal | Safety depressurization and vent system | Steam generator PORV
Core heat removal | Reactor coolant system | Reactor coolant pump
Containment isolation | Containment isolation system | Containment isolation valves
Containment pressure and temperature control | Containment spray system | Containment spray
Containment pressure and temperature control | Containment fan cooling system | Fan cooler
Hydrogen control | Hydrogen mitigation system | Hydrogen ignitors
Maintenance of vital auxiliaries | AC and DC power system | Diesel generator, station batteries

2.2 Work Domain Analysis by Using Abstraction Decomposition Space An abstraction decomposition space (ADS) is used to extract the systems and components that the agent is required to manipulate. The operational goal and constraints during the emergency operation were also analyzed to design the agent’s reward algorithm.

Fig. 2. Structure of abstraction decomposition space (ADS)

526

D. Lee and J. Kim

ADS can analyze the given work domain as the abstraction level and decomposition space, as shown in Fig. 2. The abstraction level consists of functional purpose, abstraction function, generalized function, physical function, and physical form as a hierarchical structure. This hierarchical structure describes complex systems in terms of abstract entities, which can be used to represent functions and multiple components in systems. These levels are connected with mean-end links that show how-what-why relationships between levels. The decomposition space is typically divided into a whole system, subsystem, and component. It can represent the entire domain under examination, stepping down through the spaces of detail to a component space.

Fig. 3. An example of ADS to reduce the pressure of reactor

Figure 3 illustrates an example of the ADS for controlling the pressure of the reactor and cooling system. The abstraction level is divided into functional purpose, abstraction function, generalized function, and physical function. Target systems and components to be controlled are identified at the level of the physical function. The functional purpose is considered the objective of the systems and components; thus, it was defined as the reduction of the primary pressure and temperature to prevent core damage. The abstraction function represents the basic principles, such as flow, mass, temperature, and level. These principles should be fully considered as the means to achieve the ends specified at the functional purpose level. Table 3 shows the principles as physical parameters with their success criteria based on the FRP. For instance, the pressure of the pressurizer (PZR) is a success criterion of the RCS pressure control function. To satisfy this criterion, the PZR pressure should be below 29.5 kg/cm², which is the shutdown operation entry condition, and stay within the pressure-temperature curve (P-T curve) boundary, as shown in Fig. 4. The generalized function represents the general process and function of the system and was defined as the systematic process in relation to the physical parameters. For example, the PZR level is affected by depressurizing the PZR and by pumping and supplying


Table 3. Required physical parameters and their success criteria at the abstraction function level.

Physical parameter | Success criteria
PZR pressure | Pressure < 29.5 kg/cm²; pressure within P-T curve boundary
PZR level | 20% < Level < 76%
RCS average temperature | 170 °C < Average temperature; temperature within P-T curve boundary; cooling rate ≤ 55 °C/hour
S/G pressure | Pressure < 88.2 kg/cm²
S/G level | 6% < Narrow-range level < 50%

Fig. 4. P-T curve boundary and trajectory of the change of the pressure and temperature

the coolant. These system processes are related to the purpose of the system in the safety functions. The physical functions are defined as the components that can achieve each system process. Table 4 shows the components required to achieve the generalized functions. This study classified these components into continuous control and discrete control according to the control type. Continuous controls adjust component states to satisfy specified target values of given parameters, and the rules governing the necessary adjustments cannot be described with simple logic. In contrast, a discrete control involves the direct setting of a target state, i.e., on/off. Table 4. Required components and control type

Component

Continuous control

PZR spray valve, SI pump, SI valve, aux feedwater valve, steam dump valve

Discrete control

PZR heater, charging valve, letdown valve, orifice valve, aux feedwater pump, main feedwater pump, reactor coolant pump


3 Development of an Algorithm for Emergency Operation

This study developed an autonomous operation agent that can reduce the primary pressure and temperature. The algorithm employs a rule-based system and Soft Actor-Critic (SAC), a kind of deep reinforcement learning. Figure 5 illustrates the structure of the proposed algorithm, which consists of two modules: 1) a discrete control module and 2) a continuous control module. The discrete control module manipulates the PZR heater, charging valve, letdown valve, orifice valve, aux feedwater pump, main feedwater pump, and reactor coolant pump. The continuous control module adjusts the PZR spray valve, aux feedwater valve, and steam dump valve; it also enables or disables the SI pump and the SI valve. The procedures provide only the target values of the parameters, not the rules for operating the components, i.e., by how much (in %) a valve should be opened.


Fig. 5. Overview of the algorithm to reduce the primary pressure and temperature during emergency operation

Appropriate methods were selected by considering the characteristics of each control type in NPPs. A rule-based system was adopted to implement the discrete control, because specific rules can be developed from the operating procedures. On the other hand, reinforcement learning was applied to implement the continuous control, because it is difficult to define a rule for the extent of control, i.e., how far a valve should be opened or closed. Reinforcement learning resembles how actual operators learn operations and gain experience in real operation or training. For the continuous control, a deep neural network (DNN) and the soft actor-critic (SAC) algorithm were used. SAC was applied as the training algorithm; the SAC agent can find a policy that explores widely while giving up on clearly unpromising avenues, and the policy can capture multiple operational paths of near-optimal behavior [6]. The DNN was used to capture actions that can achieve the operational goal.
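As an illustration of how the two modules could be dispatched in software, the sketch below pairs a few rule-based on/off decisions with a SAC policy call. The thresholds follow the criteria of Table 3, but the specific rules and the sac_policy interface are our assumptions, not the authors' exact logic.

def control_step(obs, sac_policy):
    """One control cycle combining both modules of Fig. 5.

    obs: dict of plant parameters (units as in Table 3).
    sac_policy: callable mapping an observation vector to continuous
    valve positions in [0, 1]. All thresholds below are illustrative.
    """
    # Discrete control module: rule-based on/off decisions.
    discrete = {
        "charging_valve_open": obs["pzr_level"] < 0.20,   # restore inventory
        "letdown_valve_open": obs["pzr_level"] > 0.76,    # shed inventory
        "aux_feedwater_pump_on": obs["sg_level"] < 0.06,  # keep S/G level
    }

    # Continuous control module: SAC policy sets valve positions.
    action = sac_policy([obs["pzr_pressure"], obs["rcs_avg_temp"],
                         obs["pzr_level"], obs["sg_level"]])
    continuous = {
        "pzr_spray_valve": action[0],
        "aux_feedwater_valve": action[1],
        "steam_dump_valve": action[2],
    }
    return discrete, continuous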


In deep reinforcement learning, the reward is the essential element that updates the weights of the SAC agent; learning is associated with updating the network weights to maximize the accumulated reward. This study suggests a reward algorithm to reduce the pressure and temperature of the reactor and cooling system down to the shutdown cooling entry condition. The reward was calculated as shown in Eqs. (1) to (4); a reward close to zero means that the agent satisfies the success criteria.

Total reward = −(Temperature distance + Pressure distance)   (1)

Calculated cooling temperature = Stable temperature after reactor trip (206 °C) − 55 °C × (Current time [s] − Reactor trip time [s]) / 3600 [s]   (2)

Temperature distance = |Current temperature − Calculated cooling temperature|   (3)

Pressure distance = |Current pressure − Pressure of shutdown cooling entry condition|   (4)

The training was terminated when the temperature or pressure moved outside the P-T curve boundary, or when the agent reached the shutdown cooling entry condition.
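Equations (1) to (4) translate directly into code; a minimal sketch follows (the function and variable names are ours, not the authors').

def emergency_reward(t_now, t_trip, temp_now, press_now,
                     temp_stable=206.0, cool_rate=55.0, press_target=29.5):
    """Reward of Eqs. (1)-(4): negative distance to the target cooldown
    trajectory and to the shutdown-cooling entry pressure.

    t_now/t_trip in seconds, temperatures in degC, pressures in kg/cm2.
    """
    # Eq. (2): target temperature decays at 55 degC/hour after the trip.
    target_temp = temp_stable - cool_rate * (t_now - t_trip) / 3600.0
    temp_dist = abs(temp_now - target_temp)      # Eq. (3)
    press_dist = abs(press_now - press_target)   # Eq. (4)
    return -(temp_dist + press_dist)             # Eq. (1)

# One hour after trip the target is 206 - 55 = 151 degC; being at
# 160 degC and 35 kg/cm2 yields a reward of -(9 + 5.5) = -14.5.
print(emergency_reward(t_now=3600.0, t_trip=0.0, temp_now=160.0, press_now=35.0))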

4 Training and Experiment

4.1 Training Environment

A compact nuclear simulator (CNS) was used as a real-time testbed for training and validating the autonomous emergency operation algorithm. The CNS was originally developed by the Korea Atomic Energy Research Institute (KAERI), using a Westinghouse 900 MWe three-loop PWR as the reference plant [9]. Figure 6 shows the display of the reactor coolant system in the CNS.

Fig. 6. Reactor coolant system in the CNS

The SAC agent was developed in the Python programming language with the PyTorch machine learning library. The agent was then trained on a loss of coolant accident (LOCA) scenario caused by a pipe break in the RCS, i.e., a 10 cm² line break in the RCS loop 1 cold-leg. During training, the agent was randomly given a break size between 5 cm² and 15 cm².


4.2 Training and Stability

To complete the emergency operation, the SAC agent was trained for more than 800 episodes. Training was stopped when the average reward saturated stably. Figure 7 shows the trend of the rewards obtained by the SAC agent. In one episode, the theoretical maximum cumulative reward over the entire emergency operation is 0 (the green dashed line in Fig. 7). The practically achievable maximum reward for a successful emergency operation was observed to be above −65.

Fig. 7. Reward obtained by the SAC agent

4.3 Experiment Results

After the proposed algorithm was trained, an experiment was conducted to demonstrate that it can autonomously reduce the pressure and temperature within the P-T curve boundary at the prescribed cooling rate (55 °C/hour). As shown in Fig. 8, the proposed algorithm reduces the pressure and temperature within the operational criteria (P-T curve boundary and cooling rate per hour) for a LOCA with a break size of 5 cm² in the RCS loop 1 cold-leg.

Fig. 8. Simulation results for autonomous emergency operation


5 Conclusion

This study proposed an algorithm for autonomous emergency operation that uses AI techniques. The emergency operation algorithm was developed through a domain analysis of the FRPs using the abstraction decomposition space. The proposed algorithm uses a SAC agent and a DNN for the continuous control, and a rule-based system for the discrete control. A compact nuclear simulator was used to train and test the algorithm. Based on the simulation results, the algorithm was shown to reach the shutdown operation entry condition while respecting the cooling rate (55 °C/hour).

Acknowledgments. This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning under Grant N01190021-06, and in part by the Korean Government, Ministry of Science and ICT, under Grant NRF-2018M2B2B1065651.

References
1. Wood, R.T., Neal, J.S., Ray Brittain, C., Mullens, J.A.: Autonomous control capabilities for space reactor power systems. In: AIP Conference Proceedings, vol. 699, no. 1. American Institute of Physics (2004)
2. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)
3. Lee, D., Seong, P.H., Kim, J.: Autonomous operation algorithm for safety systems of nuclear power plants by using long-short term memory and function-based hierarchical framework. Ann. Nucl. Energy 119, 287–299 (2018)
4. Lee, D., Arigi, A.M., Kim, J.: Algorithm for autonomous power-increase operation using deep reinforcement learning and a rule-based system. IEEE Access 8, 196727–196746 (2020)
5. Arulkumaran, K., Deisenroth, M.P., Brundage, M., Bharath, A.A.: Deep reinforcement learning: a brief survey. IEEE Signal Process. Mag. 34(6), 26–38 (2017)
6. Haarnoja, T., Zhou, A., Abbeel, P., Levine, S.: Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: International Conference on Machine Learning, pp. 1861–1870 (2018)
7. Park, J., Jung, W.: A study on the systematic framework to develop effective diagnosis procedures of nuclear power plants. Reliab. Eng. Syst. Saf. 84(3), 319–335 (2004)
8. KHNP: APR1400 Design Description. Korea Hydro & Nuclear Power Co., Ltd. (2014)
9. KAERI: Advanced Compact Nuclear Simulator Textbook. Nuclear Training Center, Korea Atomic Energy Research Institute (1990)
