Studies in Computational Intelligence 1061
Mostafa Al-Emran Khaled Shaalan Editors
Recent Innovations in Artificial Intelligence and Smart Applications
Studies in Computational Intelligence Volume 1061
Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. This series also publishes Open Access books. A recent example is the book Swan, Nivel, Kant, Hedges, Atkinson, Steunebrink: The Road to General Intelligence https://link.springer.com/book/10.1007/978-3-031-08020-3. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.
Mostafa Al-Emran · Khaled Shaalan Editors
Recent Innovations in Artificial Intelligence and Smart Applications
Editors Mostafa Al-Emran Faculty of Engineering and IT The British University in Dubai Dubai, United Arab Emirates
Khaled Shaalan Faculty of Engineering and IT The British University in Dubai Dubai, United Arab Emirates
ISSN 1860-949X  ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-031-14747-0  ISBN 978-3-031-14748-7 (eBook)
https://doi.org/10.1007/978-3-031-14748-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
The domain of artificial intelligence (AI) and its smart applications has evolved tremendously during the last decade. Empirical and theoretical findings are growing enormously due to the increasing number of successful applications and new theories derived from numerous diverse issues. This book is dedicated to the AI domain in several ways. First, it highlights the recent research trends on the role of AI in advancing automotive manufacturing, augmented reality, sustainable development in smart cities, telemedicine, and robotics. Further, it sheds light on the recent AI innovations in classical machine learning, deep learning, the Internet of Things (IoT), blockchain, knowledge representation, knowledge management, big data, and natural language processing (NLP). In addition, this edited book covers empirical and review studies that primarily concentrate on the aforementioned issues. These empirical and review studies should assist scholars in pursuing future research in the domain and in identifying possible future developments and modifications of AI applications. Postgraduate students can also gain insights into the recent developments in AI and its applications.

This book is intended to present state-of-the-art studies on the recent innovations in AI applications. It attracted 50 submissions from different countries across the world, of which we accepted 21, representing an acceptance rate of 42%. The chapters of this book provide a collection of high-quality research studies that address broad challenges in both theoretical and application aspects of various AI applications. The chapters are published in the Studies in Computational Intelligence series by Springer, which has a high SJR impact.

Dubai, United Arab Emirates
Mostafa Al-Emran Khaled Shaalan
Contents
AI Models and Methods in Automotive Manufacturing: A Systematic Literature Review
Christoph Mueller and Vitaliy Mezhuyev

Edge AI: Leveraging the Full Potential of Deep Learning
Md Maruf Hossain Shuvo

Augmented Reality Technology: A Systematic Review on Gaming Strategy for Medication Adherence
R. O. Adetunji, M. A. Strydom, M. E. Herselman, and A. Botha

A Systematic Review on the Relationship Between Artificial Intelligence Techniques and Knowledge Management Processes
Ahmad Mohammad, Mohammad Zahrawi, Mostafa Al-Emran, and Khaled Shaalan

Monitoring Plant Growth in a Greenhouse Using IoT with the Energy-Efficient Wireless Sensor Network
A. C. Savitha, H. S. Aravind, M. N. Jayaram, K. Harshith, and V. Nagaraj

Predicting the Intention to Use Bitcoin: An Extension of Technology Acceptance Model (TAM) with Perceived Risk Theory
Gulsah Hancerliogullari Koksalmis, İbrahim Arpacı, and Emrah Koksalmis

Research Trends on the Role of Big Data in Artificial Intelligence: A Bibliometric Analysis
Sebastián Cardona-Acevedo, Wilmer Londoño Celis, Jefferson Quiroz Fabra, and Alejandro Valencia-Arias

Recent Applications of Artificial Intelligence for Sustainable Development in Smart Cities
Tanweer Alam, Ruchi Gupta, Shamimul Qamar, and Arif Ullah

The Relevance of Individuals’ Perceived Data Protection Level on Intention to Use Blockchain-Based Mobile Apps: An Experimental Study
Andrea Sestino, Luca Giraldi, Elena Cedrola, and Gianluigi Guido

Exploring the Hidden Patterns in Maintenance Data to Predict Failures of Heavy Vehicles
Hani Subhi AlGanem and Sherief Abdallah

Arabic Dialects Morphological Analyzers: A Survey
Ridouane Tachicart, Karim Bouzoubaa, Salima Harrat, and Kamel Smaïli

The Large Annotated Corpus for the Arabic Language (LACAL)
Abdellah Yousfi, Ahmed Boumehdi, Saida Laaroussi, Rania Makoudi, Si Lhoussain Aouragh, Hicham Gueddah, Brahim Habibi, Mohamed Nejja, and Iazi Said

Topic Modelling for Research Perception: Techniques, Processes and a Case Study
Ibukun T. Afolabi and Christabel N. Uzor

A Survey on Crowdsourcing Applications in Smart Cities
Hamed Vahdat-Nejad, Tahereh Tamadon, Fatemeh Salmani, Zeynab Kiani-Zadegan, Sajedeh Abbasi, and Fateme-Sadat Seyyedi

Markov Switching Model for Driver Behavior Prediction: Use Cases on Smartphones
Ahmed B. Zaky, Mohamed A. Khamis, and Walid Gomaa

Understanding the Impact of the Ontology of Semantic Web in Knowledge Representation: A Systematic Review
Salam Al-Sarayrah, Dareen Abulail, and Khaled Shaalan

Telemedicine: Digital Communication Tool for Virtual Healthcare During Pandemic
Lakshmi Narasimha Gunturu, Kalpana Pamayyagari, and Raghavendra Naveen Nimbagal

Robotics and AI in Healthcare: A Systematic Review
Saif AlShamsi, Laila AlSuwaidi, and Khaled Shaalan

Outlier Detection for Customs Post Clearance Audit Using Convex Space Representation
Omar Alqaryouti, Nur Siyam, and Khaled Shaalan

Spatial Accessibility to Hospitals Based on GIS: An Empirical Study in Ardabil
Saeed Barzegari, Ibrahim Arpaci, and Zahra Mahmoudvand

Efficiency and Effectiveness of CRM Solutions in Public Sector: A Case Study from a Government Entity in Dubai
Orabi Habeh, Firas Thekrallah, and Khaled Shaalan
About the Editors
Dr. Mostafa Al-Emran is currently working in the Faculty of Engineering and IT at The British University in Dubai, UAE. He received his Ph.D. degree in Computer Science from Universiti Malaysia Pahang, the M.Sc. degree in Informatics from The British University in Dubai (with distinction), and the B.Sc. degree in Computer Science from Al Buraimi University College (with honors). He is among the top 2% of scientists in the world, according to the reports published by Stanford University in October 2020 and October 2021. He has published over 105 research articles, and his main contributions have appeared in highly reputed journals, such as International Journal of Information Management, Computers and Education, Computers in Human Behavior, Telematics and Informatics, Technology in Society, Journal of Enterprise Information Management, Interactive Learning Environments, International Journal of Human–Computer Interaction, Journal of Educational Computing Research, and Education and Information Technologies, among many others. Most of his publications are indexed in the ISI Web of Science and Scopus. He has edited a number of books published by Springer. His current research interests include human–computer interaction, knowledge management, educational technology, and artificial intelligence.

Prof. Khaled Shaalan is currently Head of the Informatics Department at The British University in Dubai, UAE, holding the rank of Full Professor of Computer Science and AI. He has gained significant academic experience and insights into understanding complex ICT issues in many industrial and governmental domains through a career and affiliations spanning more than 30 years with international institutions, such as the Swedish Institute of Computer Science, the School of Informatics (University of Edinburgh), the Faculty of Engineering and IT (The British University in Dubai), and the Faculty of Computers and Artificial Intelligence (Cairo University), international organizations such as UNDP/FAO, and industrial corporations, such as Microsoft/FAST Search. His areas of interest are artificial intelligence (AI), natural language understanding, knowledge management, health informatics, education technology, e-business, cybersecurity, and smart government services. He was selected as a member of the Mohammed Bin Rashid Academy of Scientists (MBRAS) under the engineering and technology category. He is ranked among the worldwide top 2% of scientists in 2020 and 2021 according to a study led by Dr. Ioannidis and his research team at Stanford University. He is also ranked as one of the top computer scientists in the UAE according to the Research.com index. He has supervised 76 M.Sc. dissertations and 21 Ph.D. theses on various computer science topics. He has edited six journal special issues, seven conference proceedings, and four books in computer studies. He has published over 270 refereed publications and accumulated over 9,200 Google Scholar citations with an h-index over 45. He has presented nine keynote speeches and evaluated 45 promotion applications worldwide to the rank of associate professor and full professor. He serves as Associate Editor for reputed journals, such as the ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), and as a member of several journal editorial boards. He received the Bronze Shield Award for innovation with human resources from Dubai Courts, UAE, 2020. He was selected as a fellow at the School of Informatics, University of Edinburgh, UK.
AI Models and Methods in Automotive Manufacturing: A Systematic Literature Review
Christoph Mueller and Vitaliy Mezhuyev
Abstract While artificial intelligence (AI) experienced an increasing interest in industry during the past decade, the true potential and applicability of AI for automotive original equipment manufacturers (OEMs) and suppliers in real-world scenarios have not been clearly understood. Most applications of AI focus on the development of connected and autonomous cars rather than the optimisation of automotive operations and manufacturing processes. This work therefore bridged this gap and shed light on the topic of AI in the context of automotive manufacturing and Industry 4.0. It aimed to promote understanding and provide up-to-date insights on specific models and methods of AI, applications that have been achieved with best practices, the problems that were encountered, and possible future prospects. A systematic literature review approach was adopted to ensure broad and thorough coverage of current knowledge and the identification of relevant literature on the topic. The literature search was confined to papers published from 2015 onwards using the databases of IEEE and ScienceDirect as primary sources, with a three-keyword search phrase to narrow down the results and increase specificity. A total of 359 papers were identified and subsequently screened for eligibility, of which 84 papers were selected for quantitative and 79 papers for qualitative analysis. The results of the quantitative analysis confirmed that the topic has markedly increased in significance, with a mere 3 papers published in 2015 and 33 papers in 2021. The majority of papers dealt with solving problems in production (39.29%), quality (35.71%) and assembly (16.67%), whereas supply chain (5.95%) and business intelligence (2.38%) were under-represented. The results of the qualitative analysis revealed that machine learning methods dominate current research and automotive applications, with neural networks the most used of more than 70 identified models. The industrial applicability was confirmed by many use cases including quality inspection, robot assembly, human–robot collaboration, material demand prediction and AI-enabled manufacturing decision making. The problems of such applications were mainly attributed to data availability and quality, model development, and gaps in simulation, system integration, the complexity of automotive processes, the physical conditions of the system environment and dynamic change. For industrial applications it is thus recommended to further optimise AI methods and models, enabling wider system integration by harvesting the potential of big data and both edge and cloud computing.

Keywords Artificial intelligence · AI models and methods · Automotive manufacturing · Systematic literature review

C. Mueller · V. Mezhuyev (B)
Institute of Industrial Management, University of Applied Sciences FH JOANNEUM, Kapfenberg, Austria
e-mail: [email protected]
1 Introduction

The first industrial revolution that commenced in the eighteenth century ignited the spark that entailed unforeseen technological advancements and global change. The invention of the steam engine, later followed by electricity, set the cornerstone of industrialisation, with the automotive industry positioned among the first movers in adopting new technologies. Henry Ford first introduced a conveyor-belt assembly line producing his standardised Model T, whereas Toyota introduced its renowned production system that became industry best practice. Decades later, the development of information and communication technologies, with sophisticated hardware and software, marked another milestone in industrial history that could only be surpassed by the invention of the internet. These and many other technological breakthroughs led to Industry 4.0, which is characterised by the internet of things, machine-to-machine communication and ultimately cyber-physical production systems.

Reaping the benefits of today's digital transformation will play a key role in further enhancing manufacturing processes and gaining a competitive advantage. The implementation of AI might be considered the appropriate strategic move to ensure sustainable growth. Even though the automotive industry has long paved the way for innovative manufacturing technologies, it is questionable to what extent AI has already been deployed in manufacturing. This work thus aimed to outline the extent of AI adoption in automotive manufacturing in the context of Industry 4.0 using a systematic literature review. Four research questions were defined to guide the research process, focussing on (1) the use of AI models and methods, (2) applications and best practices as well as (3) limitations and (4) future prospects. Given that previous research and industrial applications have been largely confined to the development of connected and autonomous vehicles, there is a significant gap in research with respect to automotive operations and manufacturing. The results of this research are therefore considered to provide valuable insights and relevant knowledge that can guide future research and applications in the field of AI.

The remainder of this work is organised as follows: Sect. 2 discusses the background and theoretical foundations of the automotive industry and AI, Sect. 3 describes the principles of the adopted research methodology, and Sect. 4 presents and discusses the research findings. Finally, Sect. 5 summarises the key aspects, findings and implications of this work.
2 Background and Theoretical Foundations

2.1 Principles of the Automotive Industry

The automotive industry ranks among the most innovative, complex and globally dispersed industries in today's business world and is considered an early adopter of Industry 4.0 technologies. More than a hundred years of innovation and continuous improvement have contributed to the production of automobiles that started out as petrol engine-powered horseless carriages in the late nineteenth century. Now, they are developed as digital and connected cars that can even respond to spoken language and are capable of driving autonomously, as is exemplified by many OEMs and their products today. While cars had traditionally been handcrafted in small workshops, today they are manufactured as fully individualised and industrialised mass products. The history of the automotive industry experienced significant milestones and developments before the adoption and implementation of both AI and Industry 4.0. It is arguable that the inherent complexity of the automotive industry has positioned both OEMs and suppliers as steady frontrunners throughout the recent industrial revolutions and the digital transformation in particular.
2.2 AI in the Context of Automotive Manufacturing

During the past decade, Industry 4.0 has been regarded as a vital revolutionary step for both automotive OEMs and suppliers to achieve manufacturing excellence better than before. This holds especially true for European companies that face fierce competition from emerging markets, such as Asia. In most of the early Industry 4.0 definitions and descriptions of the respective components and technologies, it is evident that AI is not explicitly stated as the driving force for achieving the level of Industry 4.0. This view has significantly changed in recent years as AI development and applications have gained substantial momentum and experienced widespread adoption on a global scale. As such, in Joseph Schumpeter's terms, the widespread adoption of sophisticated AI systems would arguably mark another innovation cycle and the sixth wave of innovations, as AI is increasingly being implemented by both OEMs and suppliers and set to reach the next development level towards artificial general intelligence (AGI). The German Industry 4.0 platform stresses that "from an industrial point of view, AI technologies are to be understood as methods and procedures that enable technical systems to perceive their environments, process what they have perceived, solve problems independently, find new kinds of solutions, make decisions, and especially to learn from experience in order to be better able to solve and handle tasks" [57]. A major benefit of AI in automotive manufacturing is thus seen in harvesting real-time data through the deployment of so-called machine learning operations, to deal with the complexities of today's dynamic environment and continuously adapt to change, fostering organisational learning and development from a long-term perspective [22].
It is worth pointing out that the theoretical aspects and opportunities of AI in an industrial context are confronted in parallel with practical challenges. The Industry 4.0 platform panel points out that approximately one-third of value-adding activities will be supported and enhanced by AI, affecting almost all functional departments. However, as of today, the implementation of AI in manufacturing is still in its early trial-and-error stages, primarily owing to high costs, the inherent complexity and the need for organisational change and process amendment. Consequently, big industry players tend to focus on AI-enabled robotics and resource management, while SMEs are more prone to use AI to enhance the areas of knowledge and quality management as well as supply chain operations. Above all, the greatest benefit of AI in manufacturing is considered to be the achievement of a level of operational excellence that could not be reached by collective human intelligence alone. While it is expected that AI will enable automotive manufacturers to increasingly optimise task-intensive and repetitive processes, which would reduce much manual work, AI is also regarded as a key driver for creating business opportunities at the same time [56].
3 Research Methodology

3.1 Systematic Literature Review

A systematic literature review (SLR) approach was applied to answer the predefined research questions and provide valuable insights into the adoption of AI in automotive manufacturing. This approach ensured a high degree of focus, integration and transparency. It was based on a research plan and search strategy that stated both exclusion and inclusion criteria to select potential literature and provide a holistic overview and thorough discussion [67]. For this purpose, specific search terms, databases and platforms were defined as part of the methodological planning process. Only literature from 2015 onwards was considered to deliver up-to-date information, in order to cater for the fast pace of digital development and transformation. Table 1 provides an overview of the research approach.
3.1.1 Search Strategy and Search Phrase
The SLR was carried out with a predefined search strategy. All the content to be considered in an SLR should be retrieved from well-established and highly regarded literature sources that meet the needs of common research standards. The use of credible scientific sources, such as IEEE or ScienceDirect, ensures a high degree of literature coverage. Due to the high number of sources and studies that deal with AI in the automotive industry, in particular focusing on autonomous driving, it was critical to define a clear search phrase in order to find relevant papers.
Table 1 Research methodology overview
Research methodology | Systematic literature review
Methodology type | Secondary data research
Primary literature sources | IEEE, ScienceDirect
Search phrase focus | AI, automotive, manufacturing
Selection criteria | Predefined minimum and specific criteria
Research question focus | AI models and methods, applications and best practices, issues and problems, future trends and potential innovations
Industry focus | Automotive industry
Functional area focus | Manufacturing and related processes
The search phrase that was initially used to retrieve studies was confined to a combination of three main key phrases: (1) AI, (2) automotive industry and (3) manufacturing. First search tests with these keywords on different databases did not deliver the desired output. Since the titles of the available research literature might use synonyms, such as production instead of manufacturing, different variants of the primary search phrase were used to gain more search results by using the "AND" as well as "OR" operators. The focus on both title and abstract reduced the total amount of literature to be retrieved, yet it provided a clear and focused overview of the current research state in the topic area and potential opportunities for future research. Since the desk research carried out revealed that machine learning is a core method of AI widely used in the automotive industry, it was added as a separate keyword in addition to AI. In this respect, it is worth noting that an initial search resulted in 238 papers through the use of the ScienceDirect database; by adding the term machine learning, an additional 39 papers were found. The following search phrase was used, combining the above-mentioned keywords and logical operators: ('artificial intelligence' OR 'machine learning') AND 'automotive' AND ('production' OR 'manufacturing').
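As an illustration only, the following short Python sketch (ours, not part of the original study) assembles the above search phrase from the three keyword groups; the helper function and variable names are hypothetical:

```python
# Illustrative sketch: build the SLR search phrase from keyword groups.
method_terms = ["artificial intelligence", "machine learning"]
domain_terms = ["automotive"]
process_terms = ["production", "manufacturing"]

def or_group(terms):
    """Join alternative keywords with OR; wrap multi-term groups in parentheses."""
    joined = " OR ".join(f"'{t}'" for t in terms)
    return f"({joined})" if len(terms) > 1 else joined

query = " AND ".join(or_group(g) for g in (method_terms, domain_terms, process_terms))
print(query)
# ('artificial intelligence' OR 'machine learning') AND 'automotive' AND ('production' OR 'manufacturing')
```

Synonym variants can then be generated by extending the respective keyword group rather than rewriting the whole phrase.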
3.1.2 Literature Sources
When adopting the SLR methodology, the use of a single literature database would limit the potential of meaningful literature coverage, due to the broad application of AI in a given industry. Hence, the literature search was carried out by using different databases with the predefined multi-keyword search phrase. The scientific databases of IEEE Xplore and ScienceDirect were used as primary literature sources, given that they are considered highly credible and reliable.
Table 2 SLR exclusion and inclusion criteria
No. | Scope | Criteria
1 | EC | Title, abstract or full text contain the defined keywords
2 | EC | The content presented is relevant to the defined research topic
3 | EC | Date of publication since 2015
4 | EC | Availability of full text
5 | EC | Published in the English language
6 | IC | AI models, methods and technologies are described
7 | IC | Applications of AI, best practices or use cases are described
8 | IC | Negative results, issues or limitations are discussed
9 | IC | Future prospects of AI or recommendations are discussed
3.1.3 Selection Criteria
The literature retrieved using the above-mentioned databases was scrutinised based on predefined selection criteria. This ensured a high level of focus and cohesion to properly answer the research questions. The selection criteria are divided into two parts: the exclusion criteria (EC) are used to identify suitable literature and exclude non-relevant literature, while the inclusion criteria (IC) are used to screen the literature according to the focus of the content to be included in the research. The papers, studies and research materials were documented in a dedicated master data file using Microsoft Excel, whose filter function allows flexible search and identification of literature according to the selection criteria. This selection process using predetermined criteria ensures that the papers selected for quantitative as well as qualitative analysis contain the relevant breadth and depth of content to adequately answer the research questions. The predefined criteria are listed in Table 2.
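To make the screening step concrete, a minimal pandas sketch of such a filter is shown below. It is our illustration, assuming a hypothetical master file in which every paper is a row with one boolean flag per criterion; all column and file names are assumptions, not taken from the authors' actual master data file:

```python
import pandas as pd

# Hypothetical master data file; all column names are illustrative assumptions.
papers = pd.read_excel("slr_master_data.xlsx")

# Exclusion criteria (EC 1-5): keep only papers that pass every minimum check.
eligible = papers[
    papers["keywords_match"]          # EC1: keywords in title/abstract/full text
    & papers["topic_relevant"]        # EC2: content relevant to the research topic
    & (papers["year"] >= 2015)        # EC3: published since 2015
    & papers["full_text_available"]   # EC4: full text available
    & (papers["language"] == "en")    # EC5: published in English
]

# Inclusion criteria (IC 6-9): keep papers covering at least one content dimension.
ic_columns = ["describes_models", "describes_applications",
              "discusses_limitations", "discusses_future_prospects"]
qualitative = eligible[eligible[ic_columns].any(axis=1)]

print(len(eligible), "papers for quantitative analysis,",
      len(qualitative), "for qualitative analysis")
```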
3.2 Literature Identification and Evaluation

The process of literature identification and evaluation followed the proposed methodology and is depicted in Fig. 1. Relevant literature sources were identified using the databases of IEEE and ScienceDirect as primary sources. The identified literature was subsequently screened for relevance and eligibility by applying the predefined criteria. Finally, 84 papers were included for the quantitative and 79 papers for the qualitative analysis. Table 3 shows that the majority of papers were conference papers (52.38%) and journal articles (44.05%); additionally, 2 chapters and 1 technical paper were selected, and a total of 5 papers were excluded. For the literature identification, papers published before 2015 were excluded to ensure that outdated information was not considered for analysis.
Fig. 1 SLR PRISMA flowchart (identification: 344 records from the database search, IEEE n = 277 and ScienceDirect n = 67, plus 15 records from ResearchGate, 359 in total; screening: 91 records after duplicate removal and exclusion of 268 non-relevant records; eligibility: 84 full-text papers after excluding 7 records, 3 with full text unavailable and 4 with a different focus; included: 84 papers in the quantitative analysis and 79 in the qualitative synthesis, after excluding 5 papers with limited content)
Table 3 Distribution of the literature sources
Type | Number of papers | Percentage (%)
Conference paper | 44 | 52.38
Article | 37 | 44.05
Chapter | 2 | 2.38
Technical paper | 1 | 1.19
Total | 84 | 100.00
Figure 2 shows the number of publications per year since 2015. It can be clearly seen that the research interest in AI and automotive manufacturing has markedly increased, with 3 publications in 2015 (3.57%) and 33 publications in 2021 (39.29%).
Fig. 2 Sources distribution per year (2015: 3, 2016: 4, 2017: 6, 2018: 9, 2019: 14, 2020: 15, 2021: 33)
4 Research Results

4.1 General Results

The papers were categorised according to their relation to a specific manufacturing process in order to provide an overview of the focus of AI deployment and distribution. Table 4 lists the AI-enabled processes in automotive manufacturing that were identified. The quantitative analysis revealed that the majority of papers evaluated focussed on the application of AI for production and quality-related issues, representing 33 papers (39.29%) and 30 papers (35.71%) respectively. Assembly ranks third, whereas supply chain and business intelligence form the minority.

Table 4 AI-enabled processes in automotive manufacturing
Process | Description | Number of papers
Production | Machines and robots that properly execute production tasks according to production planning and customer requirements | 33
Quality | Planning, analysis, evaluation and improvement of the quality of automotive parts, components and vehicles | 30
Assembly | Assembly of individual automotive parts, components and systems in a dedicated production line to produce the final vehicle | 14
Supply chain | Provision of required automotive parts, components and systems according to manufacturing requirements | 5
Business intelligence | Methods used for monitoring, controlling and optimising operations and manufacturing processes on the managerial level | 2
Fig. 3 Number of annual publications and distribution of the processes (assembly, business intelligence, production, quality and supply chain, 2015–2021)
Figure 3 illustrates the number of publications from 2015 to 2021 and the distribution of the identified manufacturing-related processes. It can be seen that both production and quality have been dealt with since 2015, yet the focus on assembly commenced and increased from 2018 onwards. While an AI system has the potential to support the model development and process of a full vehicle assembly, it requires substantial experience and knowledge to be put into practice. The same holds for issues related to supply chain and business intelligence.
4.2 Current Models and Methods of AI in Automotive Manufacturing

Table 5 shows the AI models and methods described in the papers published between 2019 and 2021. The quantitative analysis revealed that machine learning methods rank among the most used AI methods for the presented automotive use cases, as is shown in Fig. 4. The top three methods included supervised learning (46.74%), deep learning (22.83%) and unsupervised learning (11.96%). Other AI methods and techniques, such as genetic programming, rule-based, case-based or knowledge-based reasoning, were used to solve specific AI problems and formed the minority.

Figure 5 shows the applied AI methods in relation to the respective manufacturing process. This approach revealed that deep learning is primarily implemented for solving quality-related manufacturing problems. Case-based, rule-based, knowledge-based and search-based methods were confined to production. Reinforcement learning was chosen as the proper method to solve complex problems in assembly, production and supply chain. Federated learning and genetic programming were considered revolutionary methods for enhancing the vehicle assembly process. Semi-supervised learning was used in production to combine the benefits of both machine learning techniques.
Table 5 Current models and methods of AI in automotive manufacturing
AI purpose | AI methods and models
Quick and reliable detection of defects | Machine learning with combined Principal Component Analysis (PCA) and One-Class Support Vector Machine (OC-SVM) [11]
Development of vehicle assembly model | Federated learning with Support Vector Machine and smart contract integration [2]
Pre-calculation of risks for reliable inbound logistics | Machine learning model comparison including Multi-layer Perceptron neural network, gradient boosted tree and Decision Tree [18]
Trajectory optimisation for industrial robots | Deep reinforcement learning with Markov decision process and proximal policy optimisation algorithm [30]
Monitoring of cutting tool life span | Machine learning with a supervised approach and naive Bayes algorithm [8]
Detection of errors during assembly | Multi-agent comparison using machine learning with Random Forests, Bagging, rPart, Naive Bayes classifier [68]
Selection of design parameters for automotive welding | Supervised learning approach with Decision Tree and association rule techniques [20]
Quality monitoring of bending processes | Machine learning with linear regression, kNN, Random Forest, Multi-layer Perceptron and deep learning using convolutional neural network [41]
Adaption of production systems | Reinforcement learning approach with neural networks and use of proximal policy optimization algorithm [49]
AGV material handling and route planning | Reinforcement learning applied with Q-learning algorithm for AGV navigation policy development [29]
Sound detection of electrical harness assembly | Machine learning and Multi-layer Perceptron compared with deep learning and convolutional neural network [17]
Predictive quality and maintenance | Machine learning with data drift detection algorithm based on principal component analysis [82]
Control for discrete event system in assembly | Reinforcement learning with state-action-reward-state-action algorithm [87]
Prediction of machine maintenance demand | Machine learning with adaptive ARIMA model for breakdown prediction [45]
In-line evaluation of mechanical properties | Machine learning with analytical and numerical algorithms [38]
Prediction of material delamination | Machine learning approach comparing different methods/models including XGBoost-ARIMA, neural network, Support Vector Machine [12]
Forecasting of manufacturing demand | Multi-agent approach with statistical and machine learning methods, models including ARIMAX (for launch stage), Multi-layer Perceptron, support vector regression and Random Forests [21]
Evaluation of welding process quality | Deep learning with 2D and 3D convolutional neural networks adapted for temporal and spatial IR intensity changes [86]
Quality inspection of components | Deep learning with Expectation–Maximization algorithm and YOLOv3 for object detection through regression and Darknet-53 CNN for class probabilities [62]
Detection of surface defects | Machine learning with structural similarity index algorithm and neural network [1]
Process control in full-vehicle assembly | Multi-agent machine learning with algorithms for classification and prediction tasks [61]
Optimisation of human–robot collaboration | Deep learning with a convolutional neural network [9]
Quality control of free form surfaces | Deep learning with Bayesian network and Object Shape Error Response model, multi-step training with closed-loop, transfer learning and continual learning approach [73]
Inspection of structural adhesives | Deep learning with generative adversarial networks and YOLOv4 models [55]
Correction of assembly object shape errors | Deep reinforcement learning with deterministic policy gradient algorithm and neural network [74]
Determination of appropriate assembly parts | Machine learning with Gaussian process regression and neural network [60]
Predictive analytics and maintenance | Machine learning with classification, regression and association algorithms [32]
Detection of presence/absence of objects | Deep learning with ResNet-50 convolutional neural network [16]
Detection of non-conformance parts in the assembly | Machine learning with the Detectron2 library for detection and segmentation algorithms and a feature pyramid network as feature extractor for accuracy and speed [64]
Root-cause problem-solving in electronics wafer production | Machine learning with bootstrap forest model with class-probe and unit-probe, data storage in Hadoop as an open framework [5]
Modular production control with AGVs | Single- and multi-agent reinforcement learning with Q-learning to handle complexity, combined with a multi-agent system approach for high robustness [19]
Performance evaluation in supply chains | Machine learning with Multi-layer Perceptron algorithm used to analyse data to reveal relations and influences of diverse variables [15]
Detection of functionality deviations | Multi-agent comparison using machine learning with neural networks, Support Vector Machine, k-Nearest-Neighbour, Decision Tree [78]
Prediction of energy efficiency and surface quality | Deep learning with a deep Multi-layer Perceptron algorithm [72]
Improvement of quality, reliability and maintenance | Machine learning with prediction algorithms [13]
Detection of wafer sawing defects | Multi-agent machine learning with XGBoost and Random Forests models [54]
Analysis of manufacturing anomalies | Machine learning with Bayesian network parameter learning approach and maximum expectation algorithm combining expert knowledge and new input samples [85]
Quality inspection of components | Deep learning with convolutional neural network, LeNet-5 architecture and Border Tracing Algorithm [46]
Automated inspection of crimp connections | Deep learning with convolutional neural network and VGG16 framework [47]
Real-time error detection and correction | Deep learning with convolutional neural network for error detection and Decision Tree for automated rework [77]
Inspection of the crimping process | Machine learning with Autoencoder/variational Autoencoder for anomaly detection, deep learning with convolutional neural network for process diagnosis [43]
Prediction of automotive paint film quality | Machine learning with Decision Tree and simple logistic regression approach, Area Under ROC Curve used as performance metric [66]
Part recognition and pose estimation | Deep learning with application of convolutional neural network, PoseNet and use of OpenSceneGraph to generate datasets [34]
Optimisation of manufacturing decision-making | Machine learning approach with neural networks and Multi-layer Perceptron used for classification, Levenberg–Marquardt and genetic algorithms used for the training process [52]
Planning of production systems | Regression algorithms for training and creation of production configurations, neural networks to link training data with assembly processes and stations to replicate the knowledge of planners [25]
Scheduling and motion planning of robot assembly lines | Multi-criteria decision-making model used for balancing of work, implementation of digital twin database for simulation/real-world data [31]
Long-term fault prediction | Machine learning with a balanced random survival forest model to deal with complex dynamic dependencies [7]
Consistency of product quality | Supervised machine learning algorithms used to identify process deviations and suggest corrections [50]
Determination of machine health status and maintenance demand | Unsupervised machine learning approach adopting K-means and Gaussian Mixture Modelling [80]
Selection of assembly equipment | Machine learning with neural network [26]
Improvement of production performance through plant benchmarking | Machine learning with recursive and iteration algorithm [59]
Prescriptive analytics in production | Machine learning with predictive algorithms and models [76]
Detection of robot gripping points | Machine learning with perception algorithm [58]
Detection of defects in components | Three-stage deep learning approach with neural networks and Autoencoder [63]
Part detection during the assembly process | Deep learning with a two-step approach including (1) convolutional neural network for keypoint prediction and (2) recursive convolutional neural network for optimisation of the first prediction results [84]
Detection of robot faults | Unsupervised learning using the Gaussian Mixture Model to cater for the lack of labelled OK/NOK training data [10]
Detection and classification of assembly objects | Deep learning with single-shot detection and MobileNet as convolutional neural network, implemented with the TensorFlow API [42]
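To give a concrete flavour of the entries in Table 5, the sketch below illustrates the combination reported in the first row, PCA feeding a One-Class SVM for defect (anomaly) detection [11]. It is a minimal, generic scikit-learn illustration on synthetic data, not the cited authors' implementation; all dimensions and parameters are arbitrary choices:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-in for sensor features of defect-free parts (training data).
normal_parts = rng.normal(0.0, 1.0, size=(500, 20))
# New parts to inspect: mostly normal, plus a few shifted "defective" samples.
new_parts = np.vstack([rng.normal(0.0, 1.0, size=(45, 20)),
                       rng.normal(4.0, 1.0, size=(5, 20))])

# Train on OK parts only: scale, compress with PCA, fit a One-Class SVM.
detector = make_pipeline(StandardScaler(),
                         PCA(n_components=5),
                         OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
detector.fit(normal_parts)

pred = detector.predict(new_parts)  # +1 = OK, -1 = suspected defect
print("parts flagged as defective:", int((pred == -1).sum()))
```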
A total of 50 papers (59.52%) presented a single-method approach, selecting an adequate model and technique to solve the predefined problem based on the respective model characteristics. A multi-method approach was presented in 34 papers (40.48%), either comparing the performance of different models for proper model selection and subsequent system optimisation, or combining multiple models to overcome the trade-offs of a single-method approach. For quality problems, the multi-method approach was the most applied (17 papers). Table 6 shows the distribution of single- and multi-method AI.
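The comparative variant of the multi-method approach can be pictured as a cross-validated benchmark over several candidate models. The following generic sketch (ours, on synthetic data, assuming a binary OK/NOK classification task) uses four of the model families that recur in Table 5:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for labelled process data (OK vs. NOK parts).
X, y = make_classification(n_samples=600, n_features=15, random_state=0)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "support vector machine": SVC(kernel="rbf", gamma="scale"),
    "multi-layer perceptron": MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=1000, random_state=0),
}

# Compare the candidates with 5-fold cross-validation before selecting one.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```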
Deep Reinforcement Learning Knowledge-based reasoning Search-based Federated Learning Case-based reasoning Semi-supervised Learning Genec Programming 0
5
10
15
20
25
30
35
40
45
50
Fig. 4 AI methods in automotive manufacturing
Fig. 5 AI method application per process (assembly, business intelligence, production, quality and supply chain)
Table 6 Distribution of single-/multi-method approach
Process | Multi-method | Single method | Total | Total percentage (%)
Assembly | 3 | 11 | 14 | 16.67
Business intelligence | 1 | 1 | 2 | 2.38
Production | 11 | 22 | 33 | 39.29
Quality | 17 | 13 | 30 | 35.71
Supply chain | 2 | 3 | 5 | 5.95
Total | 34 (40.48%) | 50 (59.52%) | 84 | 100.00
A total of 79 models and algorithms with 168 applications were identified in the literature. The most applied models comprise neural networks (37), followed by support vector machine (14), random forest (9), decision tree (8) and multi-layer perceptron (6). Table 7 lists the most used models and algorithms; these models were used across different use cases.
Table 7 ML models and methods used in automotive manufacturing
Model/algorithm | Number of applications | Percentage (%)
Neural network | 17 | 10.12
Convolutional neural network | 17 | 10.12
Support vector machine | 14 | 8.33
Random forest | 9 | 5.36
Decision tree | 8 | 4.76
Multi-layer perceptron | 6 | 3.57
k-Nearest-neighbor | 5 | 2.98
Predictive modeling | 4 | 2.38
Bayesian network | 3 | 1.79
Autoencoder | 3 | 1.79
The analysis revealed that neural networks were applied for all identified manufacturing processes. The models of support vector machine, random forest, decision tree and multi-layer perceptron were also widely used. Both convolutional neural networks and support vector machines were predominantly used for quality assurance. However, individual models were confined to specific manufacturing processes. For example, the models of autoencoder, Bayesian network, YOLO or XGBoost were commonly applied for quality-related issues. While the manufacturing processes differ in both requirements and complexity, they also differ in terms of AI readiness when it comes to the availability of suitable data. Given that quality is regarded as a critical determinant of long-term competitive advantage, it has been on top of the AI agenda with much effort being made to develop AI models that can be used to support the quality inspection and prediction of automotive components. More complex processes, including full vehicle assembly, have only recently been considered for AI development. In order to solve assembly issues, the models of Petri nets, MobileNet, N-step SARSA, Gaussian Process Regression, Detectron2 or rPart were specifically used. More traditional AI methods, including rule-based or search algorithms, were confined to the production process. The application of AI models in automotive manufacturing was confined to the respective manufacturing process and was not integrated into an AI system that is deployed across the manufacturing value chain.
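Since convolutional neural networks dominate the quality-inspection use cases, the following minimal Keras sketch shows the general shape of such a model, a small binary OK/defect image classifier. The architecture, input size and layer widths are illustrative assumptions only, not a model taken from any of the reviewed papers:

```python
from tensorflow.keras import layers, models

# Minimal CNN for binary surface-defect classification (OK vs. defect).
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),      # greyscale inspection image (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # predicted probability of a defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would then use labelled inspection images, for example:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```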
4.3 Contemporary Applications and Best Practices of AI in Automotive Manufacturing

The results of the qualitative analysis showed that AI has already been successfully implemented in automotive manufacturing for a variety of use cases in assembly, business intelligence, production, quality and supply chain.
16
C. Mueller and V. Mezhuyev
Table 8 shows the applications and best practices according to the respective process category. While a great variety of AI models was used to solve specific AI problems, many papers presented positive results of AI system deployment, with accuracy rates exceeding 90%. These results are considered a substantial achievement and best practice given the complex and dynamic characteristics of the automotive industry. The papers thus confirmed the need for proper model selection, development and optimisation according to the distinct requirements of each use case. The examples indicate that AI has the potential to replace manual tasks and accomplish complex and data-intense tasks that humans could not easily cope with. The high robustness, reliability and accuracy that were achieved in many use cases make AI suitable for industrial deployment.
AI Models and Methods in Automotive Manufacturing …
17
Table 8 AI use cases per process category

AI applications in vehicle assembly:
Control for discrete event system in assembly [87]; Correction of assembly object shape errors [74]; Detection and classification of assembly objects [42]; Detection of errors during assembly [68]; Detection of non-conformance parts in the assembly [64]; Determination of appropriate assembly parts [60]; Development of vehicle assembly model [2]; Enabling advanced human–robot collaboration [79]; Image recognition during assembly [3]; Optimisation of production scheduling [14]; Part detection during the assembly process [84]; Planning of intelligent automation systems for assembly [23]; Process control in full-vehicle assembly [61]; Selection of assembly equipment [26]

AI applications in business intelligence:
Development of consistent and reliable KPI forecasts [28]; Optimisation of manufacturing decision-making [52]

AI applications in production:
Adaption of production systems [49]; Automation of body-in-white production design [24]; Automation of robotic assembly operations [48]; Cooperative robots in metalworking [75]; Detection of functionality deviations [78]; Detection of robot faults [10]; Detection of robot gripping points [58]; Determination of machine health status and maintenance demand [80]; Diagnostic and prediction of machine health status [37]; Identification of flexibilities for lean manufacturing [69]; Improvement of production performance through plant benchmarking [59]; Improvement of quality, reliability and maintenance [13]; Long-term fault prediction [7]; Modular production control with AGVs [19]; Monitoring of cutting tool life span [8]; Optimisation of human–robot collaboration [9]; Optimisation of robot picking and inspection [70]; Part recognition and pose estimation [34]; Planning of production systems [25]; Predictability of formability for sheet metal components [6]; Prediction of machine maintenance demand [45]; Prediction of material requirements [81]; Predictive analytics and maintenance [32]; Predictive quality and maintenance [82]; Prescriptive analytics in production [76]; Quality monitoring of bending processes [41]; Recognition of human behaviour in industrial workflows [39]; Replicating human sensory perception [65]; Root-cause problem-solving in electronics wafer production [5]; Scheduling and motion planning of robot assembly lines [31]; Selection of design parameters for automotive welding [20]; Tool condition monitoring and breakage detection [27]; Trajectory optimisation for industrial robots [30]

AI applications in quality:
Analysis of manufacturing anomalies [85]; Automated inspection of crimp connections [47]; Categorisation of surface defects [51]; Consistency of product quality [50]; Detection of component defects in X-ray images [44]; Detection of defects in components [63]; Detection of end-of-line faults in combustion engines [33]; Detection of presence/absence of objects [16]; Detection of surface defects [1]; Detection of wafer sawing defects [54]; Diagnosis of bearing faults [53]; Ensure flexibility and quality in production [71]; Evaluation of welding process quality [86]; In-line evaluation of mechanical properties [38]; Inspection of the crimping process [43]; Inspection of structural adhesives [55]; Prediction of automotive paint film quality [66]; Prediction of component quality [4]; Prediction of energy efficiency and surface quality [72]; Prediction of failures in production lines [83]; Prediction of material delamination [12]; Prediction of screw-fastening process quality [40]; Quality control of free form surfaces [73]; Quality inspection of components [35, 46, 62]; Quick and reliable detection of defects [11]; Real-time error detection and correction [77]; Sound detection of electrical harness assembly [17]

AI applications in supply chain:
AGV material handling and route planning [29]; Forecasting of manufacturing demand [21]; Image recognition for manufacturing logistics [36]; Performance evaluation in supply chains [15]; Pre-calculation of risks for reliable inbound logistics [18]
4.4 Issues and Problems

The quantitative analysis of the papers clearly showed that interest in AI in the context of automotive manufacturing has increased during the last few years. While many AI systems were successfully implemented to enhance manufacturing operations and processes, there are still issues and problems that limit the application of AI systems. In particular, the quality and availability of data as well as proper selection and training of AI models are regarded as critical limiting factors. Moreover, the current level of AI maturity is confined to specific use cases and processes, whereas wider system integration is still seen as a key issue that demands substantial research and development efforts. Even if simulation can foster AI development, there are gaps to be considered when transferring AI to real-world applications. Computing power, despite its general availability, can still limit AI performance if the hardware was not properly specified to meet the requirements of real-time data generation and processing. Furthermore, the high process complexity of automotive manufacturing, together with physical environmental conditions and dynamic change, can have adverse effects on the industrial applicability of AI systems.

4.5 Future Trends and Potential Innovations

The qualitative analysis revealed future trends and potential innovations that could push the performance and applicability of future AI systems for automotive manufacturing. Many aspects are meant to address the critical limitations of contemporary AI. While huge leaps forward have been achieved since the term AI was coined in the 1950s, there is still a long way to go to reach AI autonomy levels 4 and 5. Much of the development needs to focus on both model development and simulation to make AI systems more capable of handling complex tasks and processes. While data availability and quality were regarded as critical limitations, much emphasis is expected to be put on data improvements for more accuracy and reliability to meet the requirements of automotive manufacturing. In this respect, data fusion is seen as a crucial component of future AI systems, in order to reap the benefits of multiple data sources that are provided in fully integrated CPPS. Through the increasing deployment of multi-agent systems, AI should thus be enabled to engage in production control by autonomously
adapting parameters based on data analytics, knowledge generation and learning. Both edge and cloud computing are expected to become important drivers of AI performance, security and efficiency. Efficiency, in particular, will be a critical competitive factor when it comes to AI development methods that speed up the implementation of AI in the automotive industry. Above all, a corporate culture that fosters employee engagement and AI acceptance will play a key role in the future of AI, as will proper training and education.
5 Conclusion

This work outlined the contemporary state and extent of AI adoption in automotive manufacturing by identifying AI models and methods, applications and best practices as well as limitations and future prospects. Given the lack of comprehensive and up-to-date meta-studies on AI in automotive manufacturing, the results of this work provide valuable insights for both research and industry.

The presented results showed that AI can be regarded as a critical success factor for reaping the benefits of Industry 4.0. Throughout the past decades, the optimisation of manufacturing processes has primarily been based on implementing information and communications technologies, automation and lean manufacturing methods. European automotive manufacturers have therefore aimed to improve product quality, reduce costs and ensure customer satisfaction by adopting continuous improvement, total quality management, just-in-sequence logistics and advanced robotics enabled by sophisticated IT hardware and software. Nowadays, the high operating expenses in Europe pose a marked risk for manufacturers and urge them to embrace the paradigm of digital transformation. Not only have manufacturers from Asia gained an increasing competitive edge, but they have also devoted much effort to generating know-how and experience in manufacturing as well as research and development. Hence, the elements of the fourth industrial revolution are seen as a key lever to sustain supremacy and achieve competitive advantage from a long-term perspective.

AI technologies have experienced enormous advancements since they were first introduced in the 1950s, and they went through two AI winters with serious setbacks and diminishing hope of achieving the high promises that were made. The major contributing factors to their resurgence were improvements in computing power, the development of sophisticated AI methods and models as well as the growth of big data. Furthermore, the internet enabled the formation of knowledge groups that spread interest in AI development and education on a global scale. AI has thus become an integral part of contemporary computer science and is on top of the agenda of both research and industry. Consequently, the maturity of AI has significantly surged and led to more complex and enhanced applications in the automotive industry, from autonomous driving to automotive manufacturing. Nevertheless, AI should not be regarded as the single component required to increase both the effectiveness and efficiency of automotive operations. Its true potential is highly dependent on an integrated system approach in the context of Industry 4.0,
to establish a so-called cyber-physical production system capable of transforming businesses into real-time enterprises. The latter is based on a synchronised implementation of major Industry 4.0 components, such as the internet of things, machine-to-machine communication, and big data.

The results of the systematic literature review confirmed the rising interest in the topic of AI in automotive manufacturing and its potential advantages. The quantitative analysis revealed that the number of papers increased tenfold in 2021 compared with 2015. AI models have been implemented in various aspects of automotive manufacturing, including assembly, quality, production, supply chain, and business intelligence. A wide range of use cases confirmed the industrial applicability of AI models by achieving high industrial robustness and accuracy rates of more than 90%. Examples of identified automotive AI use cases include the detection of objects, surfaces and errors to enhance quality inspection; production process analysis and predictive machine maintenance; robot assembly and human–robot collaboration; prediction of material demand; and AI-enabled manufacturing decision making.

The majority of use cases applied machine learning methods with mostly supervised learning approaches where proper training datasets were available; only a few relied on unsupervised approaches. The analysis also revealed that deep learning was largely used to solve quality inspection problems, whereas reinforcement learning was adopted for vehicle assembly problems. More than 70 AI models and algorithms were applied in automotive manufacturing use cases, with neural networks being the most widely used. Further commonly applied models include support vector machine, random forest, decision tree, multi-layer perceptron and k-nearest-neighbour. A multi-method approach was adopted by almost half of the identified papers to select the most advanced model based on trial and error or to avoid the trade-offs and disadvantages entailed by a single-method approach. The latter proved inappropriate for coping with complex processes, dynamic environments and unpredictable change.

Problems occurred where sufficient availability and quality of data could not be ensured for proper model training and development. A lack of computing power and challenging environmental conditions can also lead to adverse effects. To ensure a high degree of industrial applicability, much emphasis needs to be placed on data integrity, model selection and development, system integration, and ongoing monitoring and optimisation. Expert knowledge and experience in both AI development and automotive manufacturing are required to make AI systems work as intended. In this respect, the importance of employee engagement that was stressed by Toyota to successfully implement the principles of its production system still holds in the age of Industry 4.0 and digital transformation. Automotive players that are capable of adopting AI-enabled technologies across their value chain while developing a learning culture that embraces change and innovation will arguably succeed and prosper from a long-term perspective. The knowledge gained through the results of this work can aid in selecting appropriate AI models and methods according to the respective problem in the automotive manufacturing process.
The findings can be used to create a supporting framework for the model selection and development process to ensure a high degree of system robustness and accuracy. Further research is needed to evaluate the industrial applicability of AI and the corresponding
data requirements for different manufacturing problems. In addition, future studies could address the problem of wider system integration of AI, covering more interrelated processes, in order to promote understanding and outline the prerequisites for industrial applications.
Edge AI: Leveraging the Full Potential of Deep Learning

Md Maruf Hossain Shuvo
Abstract The rapid emergence of deep learning (DL) algorithms has paved the way for bringing artificial intelligence (AI) services to end users. The intersection between edge computing and AI has created an exciting area of research called edge artificial intelligence (Edge AI). Edge AI has enabled a paradigm shift in many application areas such as precision medicine, wearable sensors, intelligent robotics, industry, and agriculture. The training and inference of DL algorithms are migrating from the cloud to the edge. Computationally expensive, memory- and power-hungry DL algorithms are optimized to leverage the full potential of Edge AI. Embedding intelligence in edge devices such as the internet of things (IoT), smartphones, and cyber-physical systems (CPS) can ensure user privacy and data security. Edge AI eliminates the need for cloud transmission by processing near the source of data and significantly reduces latency, enabling real-time, learned, and automatic decision-making. However, the computing resources at the edge suffer from power and memory constraints. Various compression and optimization techniques have been developed in both the algorithm and the hardware to overcome these resource constraints. In addition, algorithm-hardware codesign has emerged as a crucial element in realizing efficient Edge AI. This chapter focuses on each component of integrating DL into Edge AI: model compression, algorithm-hardware codesign, available edge hardware platforms, and challenges and future opportunities.

Keywords Artificial intelligence · Edge AI · Machine learning · Deep learning · Model compression · Algorithm-hardware codesign
M. M. H. Shuvo (B) Analog/Mixed Signal VLSI and Devices Laboratory, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_2
1 Introduction

Artificial intelligence (AI) enables machines to mimic human-level problem-solving and reasoning capabilities. Edge computing involves the collection, storage, processing, and analysis of data near the source. Figure 1 presents an overview of the edge computing architecture. Depending on the application, the edge computing resources could be edge servers, routers, network gateways, robots, smartphones, IoT devices, drones, surveillance cameras, etc. Edge AI is the integration of edge computing and AI. It focuses on bringing the wide application spectrum of data science, machine learning (ML), and deep learning (DL) technology outside the cloud infrastructure. The advantages of migrating AI services outside the cloud include rapid and real-time decision-making, secure data processing, personalized user experience, and improved energy efficiency.

ML algorithms are the enabling techniques of Edge AI, and DL is a subfield of ML [2]. In DL architectures, the layers between the input and output are the hidden layers. The widespread adoption of DL has been triggered by the addition of many hidden layers in modern architectures [3]. Weights and biases are learnable parameters refined through an iterative training process. The inference uses the learned parameters for automatic predictions from unseen data samples [4]. The large volume of data from billions of smart edge devices has increased the demand for bringing DL near the data source.

The rich foundation of ML and DL algorithms has initiated the recent boom in Edge AI research and applications. Edge AI has successfully been integrated into numerous applications for health monitoring [5], security, and intelligent assistance [6].
Fig. 1 An overview of edge computing infrastructure [1]. Network components connect the edge computing resources to the cloud server. Edge devices include edge servers and end devices
Moreover, the transmission of big data from the edge to the cloud for processing is infeasible. Thus, the concept of Edge AI is rapidly evolving to tackle these challenges. This chapter provides a comprehensive summary of the constituent elements of DL-enabled Edge AI techniques.
2 Edge Artificial Intelligence

The massive amount of big data at the edge paves the way for significant enhancements of edge computing. At the same time, AI has become integral to realizing the full potential of big data at the edge. This intersection has resulted in a research frontier termed Edge AI.

Integrating DL into Edge AI involves two operations: training and inference. Inference is computationally less expensive than training. Both can be performed in the cloud or at the edge; the different levels at which training and inference can be placed are illustrated in Fig. 2. In traditional cloud AI, data from end devices are transferred to the cloud for processing, and results are returned to the edge. For the large volume of big data at the edge, this scheme is impractical. In most Edge AI deployments, the DL model is trained in the cloud and the learned model is deployed to make inferences on the edge. However, training then remains static on a consolidated and centralized dataset, which is still problematic: this process wastes bandwidth, lacks continuous learning, and increases privacy and security concerns. Therefore, to exploit the full potential of Edge AI, performing both training and inference on the edge is the ultimate solution, though it is still in the development phase.

The main elements of Edge AI are edge caching, training, inference, and offloading [1]. The process of collecting and storing data from distributed and connected sensors, IoT devices, and embedded systems is known as caching. Activity signals collected from wearables, physiological signals collected from bioelectrodes, and images and videos from smartphones are a few examples of edge caching. Caching can store raw sensor data and previous results for reuse. Data reuse can significantly reduce latency. For instance, edge caching has demonstrated 3× and 10× improvements in latency and energy saving, respectively [7].
Fig. 2 Training and inference of DL models for Edge AI [6]. Training and inference can be performed independently in the cloud or at the edge, or collaboratively in both the cloud and the edge with offloading of data and computation
Data cached on the edge are used in training. Training can be independent or collaborative with other distributed devices and servers. If edge training is performed independently on the edge device from the cached data, user privacy and data security are preserved. In a collaborative scheme, both the edge server and the edge devices are used with shared parameters. Although deeper models can be trained for higher accuracy, a security threat exists due to data transfer between multiple nodes.

The next step in realizing Edge AI is edge inference. DL inference involves computing the forward propagation with unseen input samples to generate results. Due to the memory and power limitations of edge hardware, compact network design or compression of existing algorithms is necessary. Inference can occur on the edge device, on the edge server, or collaboratively in both [6]. For inference on the edge server, data cached on the device are transmitted to the server; the DL model stored in the edge server is used to obtain the results, which are returned to the edge device. The latency depends on the round-trip delay between the server and the device. In on-device inference, the learned parameters of a pre-trained model are used for edge inference. Although privacy and security are highly maintained, performance is limited by the resource constraints of edge devices. In collaborative inference, the edge device processes the DL model up to a partition point, and the intermediate results are shared with the edge server to execute the remaining parts, as sketched below. However, determining the partition point remains the main challenge.

Offloading supports Edge AI by carrying out caching, training, or inference operations elsewhere when needed. Offloading can execute on any connected computing platform such as cloud servers, edge servers, and devices. For instance, Edge AI performance has been shown to be boosted significantly by partitioning the task across multiple devices or offloading the training to a cloud server [8].
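As an illustration of collaborative inference, the following minimal PyTorch sketch splits a small sequential model at an assumed partition index so that only an intermediate activation, rather than raw data, leaves the device. The model, layer sizes, and partition point are illustrative choices, not taken from the chapter.

```python
# A minimal sketch of device-server collaborative inference, assuming a
# simple sequential CNN; layer names and the partition index are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

PARTITION = 4  # layers [0:4] run on the edge device, the rest on the server
device_head, server_tail = model[:PARTITION], model[PARTITION:]

def edge_infer(x: torch.Tensor) -> torch.Tensor:
    """Run the head locally; only the intermediate activation leaves the device."""
    with torch.no_grad():
        return device_head(x)

def server_infer(activation: torch.Tensor) -> torch.Tensor:
    """Finish the forward pass on the edge server and return logits."""
    with torch.no_grad():
        return server_tail(activation)

logits = server_infer(edge_infer(torch.randn(1, 3, 32, 32)))
```

In a real system the two halves would run on separate machines, with the activation serialized over the network; the sketch only shows how the split itself is expressed.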
3 Benefits of Edge AI

Edge AI eliminates cloud transmission, resulting in improved privacy and security and reduced latency. However, data processing on the edge is contingent upon the available compute, memory, and power budget of the edge hardware [9]. Thus, instead of focusing on the highest accuracy, Edge AI finds the best trade-off between cost, power, and performance for a specific application. The key benefits of Edge AI are as follows:

1. Low Latency. In cloud AI applications, data transfer to the cloud incurs a round-trip delay. In Edge AI, the round-trip delay is eliminated by processing on the edge, resulting in reduced latency [10]. Low latency is a desirable feature in many applications such as high-speed self-driving cars, intelligent voice assistants [11], etc.

2. Real-Time Processing. Edge AI is capable of real-time processing, which is critical for a variety of applications. For example, a robotic assistant in manufacturing requires rapid activations to prevent faulty production [12]. In addition, real-time decision support and data analysis are essential for time-sensitive applications such as preventing vehicle accidents, remote surgery, unmanned aerial vehicles [13], etc.

3. Scalability and Reliability. The increasing volume of big data at the edge demands innovative solutions for efficient processing. Data transfer from end nodes to the cloud is not a viable solution [14]; local processing of the data is the potential solution. In addition, Edge AI is not connectivity-dependent, providing reliable service during a network failure. This type of robustness and reliability is necessary for manufacturing and healthcare applications [15].

4. Security and Privacy. Edge AI eliminates the cloud transmission of data, reducing cyber threats. In addition, Edge AI applications are limited to a smaller edge network, which prevents data theft and ensures user privacy [16].

5. Automatic Decision-Making. Edge AI is capable of automatic and learned decision-making without requiring human intervention. Such data-driven decision-making capabilities are critical for many applications. For instance, in a self-driving car [17], numerous sensors are employed to identify vehicle position, tire rotation speed, traffic signs, road signs, pedestrians, road images, etc., for automated decisions on braking, acceleration, steering, etc.

6. Reduced Costs. The need for data transmission to the cloud is eliminated in Edge AI, which saves communication resources. Moreover, it increases the energy efficiency of edge devices. Edge AI performance depends on the capacity of the edge hardware, which is often more power- and cost-efficient than cloud processing [18].
4 Deep Learning

A neural network (NN) behaves like the human brain, accumulating and learning information through interconnected synapses. In an NN, synapses are the weights, and neurons are the nodes in the hidden layers between the inputs and the outputs, as illustrated in Fig. 3. Activations are obtained through multiply-and-accumulate (MAC) operations at the neurons and propagated to the next layer [19]. The activations are passed through nonlinear functions such as ReLU, sigmoid, tanh, etc., to capture data nonlinearity.
Fig. 3 Elements of a DL model [3]. a Graphical representation of the processing in a neuron. b Components, connections, and layers of a neural network
Typically, the number of hidden layers in modern DL models ranges from five to over a thousand [3].

Integrating DL for a problem involves model selection and evaluation. Model selection entails finding the appropriate model for the problem specifications from a pool of many candidates. Evaluation is the assessment of DL performance in generating automatic predictions from unseen data. The dataset is divided into three subsets: training, development (validation), and test sets. DL models are trained with the training set and iteratively improved using the development set.

Deep learning can be supervised, semi-supervised, or unsupervised. Supervised learning uses a labeled dataset to calculate and minimize the loss. Gradient descent, Adam, AdaGrad, and RMSProp are some of the optimization algorithms popularly used for loss minimization to refine the learned parameters [20]. Semi-supervised learning uses both labeled and unlabeled data for training, while unsupervised learning requires no labeled datasets. Reinforcement learning, which has evolved recently, does not explicitly need labeled data and instead learns from feedback through trial and error.

Feed-forward and recurrent are the two most common forms of DL architecture. In a feed-forward architecture, there is no memory, and the output has no dependency on prior inputs. In contrast, a recurrent architecture contains internal memory to generate results that incorporate information from the past sequence [3]. Different DL techniques such as the recurrent neural network (RNN), convolutional neural network (CNN), and generative adversarial network (GAN) have been successfully used for various applications [21]. RNN applications in sequence and time-series modeling include speech recognition, natural language processing (NLP), etc. CNNs are suitable for computer vision problems [22] such as image recognition, deepfake detection, object tracking, etc.

By integrating DL techniques, Edge AI has gained significant attention and widespread application. Some of the applications include face verification for device unlocking, home assistance, automatic activity recognition in wearables, always-on vision cameras, advanced driver-assistance systems (ADAS), fall detection for ambient assistive living, and autonomous mobile robots. However, DL models are often overparameterized and require significant optimization to obtain the full benefits of Edge AI [9, 23]. Thus, compression techniques are used to obtain compact network architectures with reduced computation, parameters, and memory.
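To make the neuron computation described above concrete, here is a toy NumPy illustration of the MAC-plus-nonlinearity step for a two-neuron layer; the weights, biases, and inputs are arbitrary example values.

```python
# A toy illustration of the MAC-plus-nonlinearity computation at a neuron;
# weights and inputs are arbitrary example values.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.array([0.5, -1.2, 3.0])           # inputs from the previous layer
W = np.array([[0.2, -0.4, 0.1],
              [0.7, 0.3, -0.5]])          # one weight row per neuron
b = np.array([0.1, -0.2])                 # biases

activation = relu(W @ x + b)              # multiply-accumulate, then ReLU
print(activation)
```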
5 Compression Techniques for Efficient Edge AI

DL model compression is necessary to obtain edge-compatible models. This section covers various DL compression techniques that support the broad trend of migrating AI services from the cloud to the edge. Widely accepted model compression techniques are summarized in Fig. 4.
5.1 Compact DL Model Design

Compact network design plays a significant role in minimizing the number of parameters and computations. In modern DL architectures, large filters are therefore being replaced by a series of smaller filters emulating the same effects. For example, a large convolution operation in a CNN can be realized by 1 × 1 convolutions [24] to reduce the number of weights. Depth-wise separable convolutions minimize the number of parameters and MAC operations. Bottleneck residual connections can reduce the number of parameters by 30% and computations by 50% with improved accuracy [25]. Squeezing by 1 × 1 convolution and then expanding with 3 × 3 convolution with parallel processing can achieve a 50× reduction in AlexNet weights while preserving accuracy [26]. However, one limitation of the squeeze-and-expand operation is its energy consumption. Group convolutions and channel shuffle operations have shown promise [27] in retaining accuracy with reduced parameters in CNNs. The Winograd algorithm is another powerful tool, implementing the convolution operation through addition and shift operations. For instance, FP32 multiplications were reduced by a factor of 2.25× [28] using the Winograd algorithm.

Compact RNN design can bring many Edge AI solutions to speech recognition, time-series analysis, NLP, and other sequence modeling tasks. Designing a compact RNN model involves both unit-level and network-level optimization.
Fig. 4 An overview of DL compression techniques used to generate Edge AI compatible models
For instance, the LSTM and GRU are unit-level modifications of the vanilla RNN that overcome the vanishing and exploding gradient problems [29]. Similarly, integrating the reset and update gates of the GRU [30] and adding a time gate to the LSTM for faster convergence [31] are further examples of unit-level modifications for compact RNN design. Network-level optimization improves the interaction between adjacent units by altering the RNN layers in different directions. For example, adding a linear recurrent projection layer to boost parameter efficiency [32], weight matrix decomposition [33], and skip connections [34] are a few examples of network-level optimization. Parameter sharing can also be exploited in RNNs [35] to widen the network without adding extra parameters.
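As a concrete instance of the compact CNN designs discussed above, the following hedged PyTorch sketch contrasts a standard 3 × 3 convolution with a depthwise separable equivalent; the channel sizes are illustrative, and the printed parameter counts follow directly from the layer definitions.

```python
# A sketch contrasting a standard 3x3 convolution with its depthwise
# separable equivalent in PyTorch; channel sizes are illustrative.
import torch.nn as nn

c_in, c_out = 64, 128
standard = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # depthwise
    nn.Conv2d(c_in, c_out, kernel_size=1),                          # pointwise 1x1
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # ~73.9k vs ~9.0k parameters
```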
5.2 Pruning

DL models often contain redundant parameters that can be discarded to reduce the parameter count, saving memory and computation [36]. Pruning techniques may remove weights, neurons, and filters in structured or unstructured ways [37]. In structured pruning, parameters are removed in groups based on certain criteria [38]. At the cost of implementation complexity, structured pruning can exploit the parallel processing in hardware. Unstructured pruning does not follow any pattern and removes individual weights based on a threshold; neurons are retained if at least one connection exists [39]. Unstructured pruning results in irregularities in the parameter matrix, hampering hardware realization. Structured and unstructured pruning can also be combined to achieve the optimum result [37].

Pruning can be magnitude-based, regularization-based, or energy-aware. In magnitude-based pruning, weights below a threshold are removed, increasing the convergence rate. After retraining, the remaining parameters learn to minimize the effect of pruning on accuracy. For instance, reductions of approximately 9× in weights and 3× in MAC operations can be achieved in AlexNet, corresponding to around 80% of the total weights [23]. However, identifying the appropriate threshold for pruning is the main challenge. In regularization-based pruning, a regularization term is added to the loss function [40] that does not corrupt the weights and results in minimal accuracy loss; the limitation of this technique is the large number of iterations required for convergence. In energy-aware pruning, weights are removed based on an estimate of their energy cost [41]. For instance, a 1.74× improvement in energy efficiency can be achieved in AlexNet using energy-aware pruning.

The limitations of pruning techniques include irregular matrices, multiplications by zero, the need for fine-tuning, a lack of generalization across different DL techniques, and complicated hardware realization.
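A minimal sketch of magnitude-based unstructured pruning in PyTorch is shown below; the layer size and threshold are illustrative, and in practice the built-in torch.nn.utils.prune.l1_unstructured utility offers an equivalent, more idiomatic route.

```python
# A minimal magnitude-based unstructured pruning sketch: weights whose
# magnitude falls below a threshold are zeroed. Layer size and threshold
# are illustrative; real pipelines tune the threshold per layer.
import torch
import torch.nn as nn

layer = nn.Linear(256, 64)
threshold = 0.02  # assumed cut-off; often chosen to hit a target sparsity

with torch.no_grad():
    mask = (layer.weight.abs() >= threshold).float()
    layer.weight.mul_(mask)  # zero out small-magnitude weights in place

sparsity = 1.0 - mask.mean().item()
print(f"pruned {sparsity:.1%} of the weights")
```

After pruning, the model is typically retrained so the surviving weights compensate for the removed ones, as described above.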
5.3 Quantization

The single-precision 32-bit floating-point (FP32) representation of parameters and activations in DL causes slow processing and wastes memory. DL inherently has strong noise resiliency that can be exploited through quantization. In quantization, the weights and activations use a low-precision representation, reducing the number of bits required to store the data. With fewer bits, memory requirements and computational complexity are minimized, and energy efficiency improves. For example, 8-bit integer (INT8) operations yield 30× energy saving and 116× area reduction for addition, and 18.5× energy saving and 27.4× area reduction for multiplication, compared with FP32 operations [42]. Edge AI realization incorporating DL has become much easier using 8-bit floating-point [43] and fixed-point representations [44].

Quantization can be either linear or nonlinear, with a fixed or variable number of bits for different layers, filters, and parameters [45]. In linear quantization, fixed-point numbers are used instead of FP32, and scaling and biasing minimize the quantization effects. Flexible bit-widths at different layers can be applied for memory efficiency and power savings. Binarization is an extreme form of linear quantization that uses a single-bit representation for both weights and activations. Binary weights reduce the MAC calculation to additions and subtractions. For example, a binarized CNN can achieve 32× network size reduction and 2× speedup [46]. However, the major challenge of binary neural networks (BNN) is the loss of accuracy. Sometimes the input and output layers are kept in FP32 representation, resulting in an 11% accuracy improvement [47]. Further accuracy improvement is possible using binary weights with 2-bit quantization for activations [48]. DL models using 3-bit quantization for weights and FP32 for activations can limit the accuracy degradation to as low as 0.6% [49].

Nonlinear quantization exploits the non-uniform distribution of parameters throughout the DL architecture. By grouping the weights and activations using a look-up table [50], hash functions [51], or logarithmic functions [52], flexible or fixed quantization can be applied to different data groups. For instance, logarithmic base-2 quantization resulted in an accuracy degradation of only 5%, compared with 27.8% for 4-bit linear quantization. Joint compression with pruning and quantization can further optimize DL models [53]. For example, magnitude-based pruning of less significant weights can reduce the number of MAC operations, and the remaining parameters can then be quantized to reduce memory requirements. The limitations of quantization are information loss due to low precision, irregularities in the model structure, and complex backpropagation prohibiting quantized training.
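As a hedged illustration, PyTorch's post-training dynamic quantization utility converts the FP32 linear layers of a model to INT8 in a single call; the two-layer model below is a stand-in rather than an example from the chapter.

```python
# A sketch of post-training dynamic quantization with PyTorch's built-in
# utility; the model is a placeholder two-layer network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Replace FP32 Linear layers with INT8 dynamically quantized versions.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```

Static and quantization-aware schemes require calibration data or retraining but usually preserve accuracy better, in line with the trade-offs discussed above.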
5.4 Knowledge Distillation

Knowledge distillation (KD) transfers the learned knowledge from a large DL model, or an ensemble of models, to a smaller one [54]. The larger model is the teacher, and the smaller one is called the student. The teacher model is usually pre-trained. The student model is trained by minimizing a distillation loss so as to learn the behavior of the teacher. The student is usually a compact network architecture suitable for resource-constrained Edge AI usage. For instance, accuracy can be improved by 2% for speech recognition by learning the softmax class probabilities of the teacher network [55]. The knowledge the student captures from the teacher could be neurons [56], hidden-layer features [57], or activations [58]. KD techniques demonstrate approximately 3× speedup and 2.5× memory reduction for CNNs [59]. Several techniques have been developed to overcome the challenge of compact student model design, such as self-learning [60], teacher assistants [61], and mutual learning [62]. KD has also been applied successfully to dataset distillation, producing a smaller dataset that mimics the behavior of a large one [63]; this can reduce the computational load of training DL models. KD is very flexible in its adaptability to a variety of applications. However, designing effective teacher and student models from the large pool of candidate models and hyperparameters remains challenging.
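The following sketch shows the widely used softened-logits distillation loss in PyTorch; the temperature and mixing weight are typical but illustrative choices, and this follows the common KD recipe rather than any specific formulation from the chapter.

```python
# A minimal distillation-loss sketch: the student matches the teacher's
# softened class probabilities in addition to the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 as is common practice.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # standard supervised term
    return alpha * soft + (1.0 - alpha) * hard
```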
5.5 Adaptive Optimization Techniques

Adaptive optimization considers the input complexity to tune and select the appropriate DL model architecture. For example, two DL models, one big and one little, can be used to tackle a classification problem efficiently [64]. Predictions are made using the little network and verified by a success checker. If the classification confidence score is acceptable, the big network does not execute, saving 53.7% of the energy for the classification task [64]. However, energy is only saved if the big network is seldom used. In addition, training two different DL models and saving their learned parameters incur extra memory and computation costs. Incremental training and weight reuse are promising approaches to solving this two-model problem [65]. If the input is detected as easy, a shallow network is employed. A similar strategy is used in early-exiting techniques [66] using many branches of the base network. Latency can be significantly improved when results with high confidence are generated at an early exit branch. However, the latency improvement is only significant if the deeper layers are not frequently activated. Early exiting can also be implemented at different points such as the cloud, edge server, and edge devices. For example, a small RNN is executed on the edge device for wake-word spotting in smart assistive devices such as 'Google Home' and 'Apple Siri', and the rest of the speech and action recognition tasks are processed in the cloud [67]. Such adaptive partitioning and execution can save energy, latency, and computing resources.
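A minimal sketch of the big/little idea follows: the little model answers first, and the big model runs only when the success checker rejects a low-confidence result. The models, the single-sample batch, and the confidence threshold are placeholders.

```python
# A sketch of big/little (early-exit style) adaptive inference: escalate to
# the expensive model only for low-confidence inputs. Models and the
# threshold are placeholders; the code assumes a batch of one sample.
import torch

def adaptive_predict(x, little, big, threshold=0.9):
    with torch.no_grad():
        probs = torch.softmax(little(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:      # success checker accepts cheap result
            return pred
        return big(x).argmax(dim=1)       # escalate only the hard inputs
```

As noted above, this only saves energy when the big model is rarely invoked, so the threshold effectively trades accuracy against cost.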
6 Algorithm-Hardware Codesign

Optimizing the algorithm in conjunction with the hardware is known as algorithm-hardware codesign. A general structure of algorithm-hardware codesign is presented in Fig. 5. Algorithmic optimization focuses on compression techniques and appropriate compiler design. Hardware optimization exploits parallel processing, pipelining, efficient memory access, sparsity handling, and specialized neural accelerator design.

Low-level libraries such as CUDA, cuDNN, etc., provide the building blocks for rapid prototyping of different DL operations. Frameworks such as Caffe [68], PyTorch [69], TensorFlow [70], etc., are useful for DL model selection, building, training, development, and evaluation. The Open Neural Network Exchange (ONNX) provides interoperability and portability, allowing a model trained in one framework to be deployed on the edge using another. Graph compilers such as Intel OpenVINO, TensorRT, etc., generate edge-specific hardware instructions. These compilers enable algorithm-hardware codesign and optimization through kernel auto-tuning, limited memory access, data reuse, and the merging of redundant operations.

As conventional edge hardware approaches its upper computation bound, algorithm-aware hardware design has gained significant attention. DL models undergo a hardware-specific compatibility check and optimization using an intermediate software package before being deployed on the edge. The algorithmic optimization generates hardware-specific instructions, and the hardware provides feedback for a more efficient DL design. This dependency and closed-loop optimization process through algorithm-hardware codesign is the new paradigm for obtaining more efficient Edge AI solutions for various applications.
Fig. 5 Overview of algorithm-hardware codesign of DL for Edge AI [9]
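As a small illustration of the interoperability step, the sketch below exports a torchvision MobileNetV2 to ONNX so that a downstream graph compiler such as OpenVINO or TensorRT can optimize it for a target device; the file name, input shape, and opset version are illustrative choices.

```python
# A sketch of exporting a PyTorch model to ONNX for edge graph compilers;
# weights=None yields an untrained network, so a real deployment would
# load trained parameters first.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input defining the graph shape

torch.onnx.export(model, dummy, "mobilenet_v2.onnx",
                  input_names=["input"], output_names=["logits"],
                  opset_version=13)
```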
7 Edge AI Hardware Platforms

Various edge hardware resources exist to deploy Edge AI solutions. The success of an Edge AI solution depends on the memory, power budget, and computational capabilities of the computing platform. A comparison of different hardware platforms for executing DL for Edge AI is presented in Fig. 6. The GPU and TPU are highly efficient in executing DL models in the cloud; however, their energy cost is not tolerable for Edge AI. Therefore, application-specific integrated circuits (ASIC), vision processing units (VPU), systems-on-chip (SoC), field-programmable gate arrays (FPGA), and reconfigurable neural accelerators have emerged to tackle the increasing demands.

The ASIC is one of the most convenient edge platforms for executing DL, with the highest performance and energy efficiency. For example, a custom ASIC realized in a 65 nm CMOS process, executing inference for CNNs and RNNs with variable-precision quantization, achieves a peak performance of 7372 tera operations per second (TOPS) [71]. Always-on vision for wearables has been implemented in a 28 nm CMOS process for visual object recognition with 10 TOPS peak performance [72]. Object tracking in a tiny ASIC can process 34.4 image frames per second at 1.32 TOPS with only 1 W of power consumption [73]. Reconfigurable ASICs are available for applications including AR/VR headsets, smartphones, aerial vehicles, robots, IoT, wearables, and ADAS [74]. However, the limitations of ASICs are a lack of flexibility and programmability. Therefore, reconfigurable architectures have emerged as a potential choice for rapid prototyping and deployment of DL.

The FPGA is a reprogrammable fine-grained architecture with more flexibility than the ASIC [75]. DL inference can be implemented on an FPGA using hardware description languages (HDL). For example, an INT8-quantized CNN model implemented on an FPGA achieves 84 TOPS [76]. A peak performance of 408 TOPS and an energy efficiency of 33 TOPS/W have also been reached using DL implementations on FPGAs for computer vision tasks [77]. However, the challenges of the FPGA for Edge AI are the lack of automatic mapping of DL models onto the FPGA, low speed, and long compilation times.

The reconfigurable coarse-grained spatial architecture, presented in Fig. 7, has more customized capabilities. The constituent components of such an architecture are processing elements (PE), local registers, and memory. Low latency and higher energy efficiency are attainable through efficient memory access, data reuse, and optimized PE design. Off-chip memory access consumes a large portion of the total power
Fig. 6 Comparison of Edge AI hardware platforms to execute DL. From left to right: lowest to highest
Fig. 7 Generic reconfigurable architecture of a coarse-grained spatial neural accelerator [78]
that can be minimized using local registers and scratchpad memory through data reuse. For example, a neural accelerator can deliver 166 TOPS/W for AlexNet with only 278 mW of power consumption [78]. Further improvements have been demonstrated by incorporating a sparsity-handling hierarchical architecture [79].

The vision processing unit (VPU) is a chip customized for Edge AI applications in numerous computer vision and image analysis tasks. For example, 4K image processing is possible with a peak performance of 4 TOPS, and a small neural compute stick can perform NLP, machine translation, and machine vision tasks. A system-on-chip (SoC) in a 7 nm FinFET process for ADAS and autonomous driving tasks consumes only 10 W of power while delivering 14 TOPS [80].

Deploying DL inference on a microcontroller unit (MCU) can bring Edge AI to numerous applications. For example, keyword spotting in smart devices such as 'Google Home', 'Amazon Alexa', and smartwatches uses an MCU for lightweight DL inference. Edge AI incorporated in an MCU offers easy installation, low power and cost, and privacy preservation. As a result, DL libraries for MCUs have been developed [81] that include various DL kernels and quantization support. Modern MCUs support INT8 and INT16 quantization along with parallel processing capabilities. However, FP32 DL operations are not suitable for MCUs, and due to limited memory, only lightweight DL models can be deployed on an MCU. Other limitations of the MCU include the lack of parallel processing and low speed.
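To illustrate the MCU deployment path, the sketch below converts a placeholder Keras model to a TensorFlow Lite flatbuffer with default weight quantization, the usual first step before running it under an interpreter such as TFLite Micro; the tiny model and file name are illustrative.

```python
# A hedged sketch of preparing a lightweight Keras model for MCU-class
# deployment via TensorFlow Lite; the model itself is a placeholder.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # compact flatbuffer suitable for MCU flash
```

Full INT8 conversion additionally requires a representative dataset for activation calibration, consistent with the quantization constraints discussed earlier.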
8 Edge AI Applications

Edge AI is being deployed in numerous applications, bringing AI services to end users, such as face recognition for device unlocking [82], mobile banking [83], and door unlocking. Smart wearables with Edge AI are used for activity monitoring, fitness tracking, and patient and elder care [84]. Tiny always-on vision cameras are designed for advanced driver-assistance systems (ADAS) [85]. Driving behavior analysis can prevent accidents by detecting driver distraction and fatigue from real-time sensor data [86]. A simple RNN on a smartphone can analyze the driving
pattern [10] for automobiles while consuming only 7.7 mW of power and 44 kB of memory. Electrical equipment fault monitoring [9], transmission line fault detection [87], suspicious activity detection using smart cameras, and health monitoring using intelligent wearables are a few other applications of Edge AI.

5G connectivity allows faster collection of large volumes of data, and the need for Edge AI is increasing to fully understand, utilize, and process 5G and IoT data streams locally, near the devices. Edge AI can revolutionize manufacturing by incorporating machine vision for reliable and high-precision product quality monitoring. In addition, Edge AI can introduce highly accurate automatic detection of equipment and production failures as well as material flow analysis. The proper generation and utilization of energy resources is another potential application area: Edge AI can automate and support precise data analytics for renewable energy utilization, decentralized energy production, energy consumption monitoring, and forecasting of future energy demand.
9 Challenges and Future Directions

Most Edge AI techniques use supervised learning, which assumes the availability of sufficient labeled data. However, most big data at the edge are unlabeled and sparse, posing challenges to existing algorithms. Moreover, data generated from multiple sensors and distinct environments are heterogeneous. Federated learning, illustrated in Fig. 8, which uses multiple local nodes for training and exchanges only the parameters, could overcome the data scarcity problem to some extent. Representation learning [88] could be a promising research direction for mitigating the data heterogeneity problem.

Realizing Edge AI solutions on reconfigurable architectures is challenging due to the lack of automatic mapping of DL algorithms onto edge hardware. More functions and libraries are needed for rapid prototyping of different compression and optimization techniques. In addition, benchmark datasets and evaluation metrics for assessing and comparing Edge AI solutions are lacking and need to be developed.

Most compression techniques require manual inputs; automatic, hardware-aware compression techniques are therefore necessary. Compression techniques do not perform universally well across different types of DL models, and most consider only the dense layers, whereas modern DL architectures use few dense layers. Therefore, compression techniques should work universally for CNNs, RNNs, GANs, etc. In addition, compression techniques should carefully consider the hardware architecture to exploit hardware optimizations. Efficient reconfigurable architectures are necessary to handle the sparse and irregular tensors generated by compression techniques [89].

ASICs and SoCs are the most efficient solutions for Edge AI with respect to power, cost, and processing speed. However, in an ASIC, the DL models are hardwired in silicon and are not reconfigurable. Hence, an ASIC cannot adapt to rapidly changing DL algorithms and becomes obsolete after a certain period. Thus, some flexibility needs to be added to ASICs by introducing dynamic reconfigurability.
Fig. 8 Federated learning allowing edge training [1]. Learned parameters are downloaded to the edge devices, where they are updated using local data collected from sensors. The new parameters are aggregated in the cloud
Most Edge AI solutions still train the DL models in the cloud and run only inference at the edge. Such training schemes are static, with no attempt to gather knowledge from new data. Retraining and transfer learning capabilities at the edge can deliver AI services for continuously changing data, although implementing edge training remains challenging. Bringing lifelong machine learning (LML) [90] to the edge for training and inference could be a promising research direction.
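The cloud-side aggregation step of the federated scheme in Fig. 8 can be expressed in a few lines. This is a minimal sketch of federated averaging, assuming each edge node reports its updated parameters together with its local sample count; it is not code from any of the cited systems.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Cloud-side aggregation: weight each node's parameters by its
    local dataset size (the federated-averaging rule)."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Three edge nodes report locally updated parameter vectors.
params = [np.array([0.10, 0.50]), np.array([0.12, 0.48]), np.array([0.08, 0.55])]
sizes = [1000, 400, 600]                      # local samples per node
new_global = federated_average(params, sizes) # broadcast back to the edge
print(new_global)
```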
10 Conclusion Developing DL for different applications is mainly concerned with achieving the highest accuracy. However, integrating DL capabilities into Edge AI for end-user applications requires a rigorous tradeoff between accuracy, power, and cost. This chapter provides a comprehensive and systematic overview of Edge AI incorporating deep learning and an exploration of each of its components. The DL compression techniques that make the algorithms compatible with edge hardware are explained. Moreover, algorithm-hardware co-design techniques for coherent and simultaneous training, optimization, and deployment of DL on the edge are discussed. In addition, the open research challenges and future trends are included. Leveraging the full potential of DL for end-user applications remains challenging due to the resource constraints of edge devices. This chapter will help researchers understand the current state of Edge AI research to develop numerous applications in
healthcare, smart agriculture, precision medicine, NLP, security, surveillance, and intelligent industrial IoT. Acknowledgements The author would like to thank Dr. Syed Kamrul Islam, Professor and Chair, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, USA, for his constructive feedback.
References 1. D. Xu et al., Edge intelligence: empowering intelligence to the edge of network. Proc. IEEE 109(11), 1778–1837 (2021). https://doi.org/10.1109/JPROC.2021.3119950 2. Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature 521(7553), 436–444 (2015). https:// doi.org/10.1038/nature14539 3. V. Sze, Y.-H. Chen, T.-J. Yang, J.S. Emer, Efficient processing of deep neural networks: a tutorial and survey. Proc. IEEE 105(12), 2295–2329 (2017). https://doi.org/10.1109/JPROC. 2017.2761740 4. M.M. Hossain Shuvo, O. Hassan, D. Parvin, M. Chen, S.K. Islam, An optimized hardware implementation of deep learning inference for diabetes prediction, in 2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), May 2021, pp. 1–6. https://doi.org/10.1109/I2MTC50364.2021.9459794 5. M.M. Hossain Shuvo, N. Ahmed, K. Nouduri, K. Palaniappan, A hybrid approach for human activity recognition with support vector machine and 1D convolutional neural network, in 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Oct 2020, pp. 1–5. https://doi. org/10.1109/AIPR50011.2020.9425332 6. Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, J. Zhang, Edge intelligence: paving the last mile of artificial intelligence with edge computing. Proc. IEEE 107(8), 1738–1762 (2019). https://doi. org/10.1109/JPROC.2019.2918951 7. P. Guo, B. Hu, R. Li, W. Hu, FoggyCache, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, Oct 2018, pp. 19–34. https://doi.org/10. 1145/3241539.3241557 8. H.-J. Jeong, H.-J. Lee, C.H. Shin, S.-M. Moon, IONN, in Proceedings of the ACM Symposium on Cloud Computing, Oct 2018, pp. 401–411. https://doi.org/10.1145/3267809.3267828 9. B.L. Deng, G. Li, S. Han, L. Shi, Y. Xie, Model compression and hardware acceleration for neural networks: a comprehensive survey. Proc. IEEE 108(4), 485–532 (2020). https://doi.org/ 10.1109/JPROC.2020.2976475 10. X. Xu, S. Yin, P. Ouyang, Fast and low-power behavior analysis on vehicles using smartphones, in 2017 6th International Symposium on Next Generation Electronics (ISNE), May 2017, pp. 1–4. https://doi.org/10.1109/ISNE.2017.7968748 11. J. H. Al Shamsi, M. Al-Emran, K. Shaalan, Understanding key drivers affecting students’ use of artificial intelligence-based voice assistants. Educ. Inf. Technol. (2022). https://doi.org/10. 1007/s10639-022-10947-3 12. F. Shang, J. Lai, J. Chen, W. Xia, H. Liu, A model compression based framework for electrical equipment intelligent inspection on edge computing environment, in 2021 IEEE 6th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), Apr 2021, pp. 406–410. https://doi.org/10.1109/ICCCBDA51879.2021.9442600 13. Y.-L. Lee, P.-K. Tsung, M. Wu, Techology trend of Edge AI, in 2018 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), Apr 2018, pp. 1–2. https://doi.org/10.1109/ VLSI-DAT.2018.8373244 14. Y. Wu, Cloud-edge orchestration for the internet of things: architecture and AI-powered data processing. IEEE Internet Things J. 8(16), 12792–12805 (2021). https://doi.org/10.1109/JIOT. 2020.3014845
15. M. Al-Emran, J.M. Ehrenfeld, Breaking out of the box: wearable technology applications for detecting the spread of COVID-19. J. Med. Syst. 45(2), 20 (2021). https://doi.org/10.1007/s10916-020-01697-1 16. R. Sachdev, Towards security and privacy for Edge AI in IoT/IoE based digital marketing environments, in 2020 Fifth International Conference on Fog and Mobile Edge Computing (FMEC), Apr 2020, pp. 341–346. https://doi.org/10.1109/FMEC49853.2020.9144755 17. J.-W. Hong, I. Cruz, D. Williams, AI, you can drive my car: how we evaluate human drivers vs. self-driving cars. Comput. Hum. Behav. 125, 106944 (2021). https://doi.org/10.1016/j.chb.2021.106944 18. Q. Liang, P. Shenoy, D. Irwin, AI on the edge: characterizing AI-based IoT applications using specialized edge architectures, in 2020 IEEE International Symposium on Workload Characterization (IISWC), Oct 2020, pp. 145–156. https://doi.org/10.1109/IISWC50251.2020.00023 19. M.P. Véstias, R.P. Duarte, J.T. de Sousa, H.C. Neto, Moving deep learning to the edge. Algorithms 13(5), 125 (2020). https://doi.org/10.3390/a13050125 20. S. Ruder, An overview of gradient descent optimization algorithms (2016). arXiv preprint arXiv:1609.04747 21. M.Z. Alom et al., A state-of-the-art survey on deep learning theory and architectures. Electronics 8(3), 292 (2019). https://doi.org/10.3390/electronics8030292 22. M.M. Hossain Shuvo et al., Multi-focus image fusion for confocal microscopy using U-Net regression map, in 2020 25th International Conference on Pattern Recognition (ICPR), Jan 2021, pp. 4317–4323. https://doi.org/10.1109/ICPR48806.2021.9412122 23. S. Han, J. Pool, J. Tran, W.J. Dally, Learning both weights and connections for efficient neural networks (2015). arXiv preprint arXiv:1506.02626 24. A.G. Howard et al., Mobilenets: efficient convolutional neural networks for mobile vision applications (2017). arXiv preprint arXiv:1704.04861 25. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C. Chen, Mobilenetv2: inverted residuals and linear bottlenecks, in Conference on Computer Vision and Pattern Recognition (2018), pp. 4510–4520 26. F.N. Iandola, S. Han, M.W. Moskewicz, K. Ashraf, W.J. Dally, K. Keutzer, SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size (2016). arXiv preprint arXiv:1602.07360

Monitoring Plant Growth in a Greenhouse Using IoT …

IC = Supply voltage/Load resistance RL ⇒ 12 V/75 Ω ⇒ 160 mA

The amplification factor:

hFE(min) > 5 × (load current/Max IC current)
hFE(min) > 5 × (160 mA/12 mA)
hFE(min) > 66.6 (1)

Base resistance
Fig. 6 Flowchart for system design
RB = (VC × hFE)/(5 × IC)
RB = (3.3 V × 230)/(5 × 160 mA)   (230 is the standard current gain of the transistor)
RB = 948.75 Ω ≈ 1 kΩ (2)
Final values: IC(max) = 500 mA, hFE = 230, RB = 1 kΩ.
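The sizing above can be verified numerically. A minimal sketch reproducing Eqs. (1) and (2), where the 12 mA figure is taken, as in the division above, to be the maximum current available from an ESP32 GPIO pin:

```python
# Driver-transistor sizing, reproducing Eqs. (1) and (2).
V_SUPPLY = 12.0     # V, supply feeding the load
R_LOAD = 75.0       # ohm, load resistance
V_C = 3.3           # V, ESP32 GPIO logic level
I_GPIO_MAX = 12e-3  # A, assumed max current from a GPIO pin
H_FE = 230          # standard current gain of the transistor

i_c = V_SUPPLY / R_LOAD                  # 0.16 A = 160 mA load current
h_fe_min = 5 * i_c / I_GPIO_MAX          # 66.6, minimum required gain
r_b = (V_C * H_FE) / (5 * i_c)           # 948.75 ohm, rounded up to 1 kOhm
print(i_c, h_fe_min, r_b)
```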
4 Results and Discussion 4.1 Sensor Efficiency The real-time data of temperature, relative humidity, and relative soil moisture content displayed on the website at room conditions were noted, and the graphs were plotted in MATLAB. The volatility in the data is attributed to the quality of the sensors used and self-induced climatic changes. Figures 7, 8, and 9 show the real-time data readings for an hour.
Fig. 7 Real-time data temperature reading for an hour
Fig. 8 Real-time data humidity reading for an hour
Fig. 9 Real-time data soil moisture level reading for an hour
Table 1 Power consumption of the device

| Components | Quantity | Voltage (V) | Current (mA) | Power (mW) |
|---|---|---|---|---|
| ESP 32 | 1 | 3.3 | 500 | 1650 |
| DHT11 | 1 | 5 | 2.5 | 12.5 |
| Soil moisture sensor | 1 | 5 | 5 | 25 |
| Cooling fan | 1 | 12 | 160 | 1920 |
| Solenoid | 1 | 12 | 500 | 6000 |
| Relay module | 1 | 3.3 | 20 | 66 |
4.2 Power Consumption of the Sub-system The power consumption of the ESP 32, DHT11, soil moisture sensor, cooling fan, solenoid, and relay module is shown in Table 1. The power consumption of each device is calculated using Eq. (3), where P is the power consumption of the device, V is the operating voltage, and I is the operating current. The project work is powered through a 12 V DC, 2 A adaptor. The approximate maximum power consumption of the system is 9673.5 mW, or 9.6735 W.

P = V × I (3)
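Equation (3) applied to each row of Table 1 reproduces the total quoted above; the following sketch simply recomputes it:

```python
# Recomputing Table 1 with Eq. (3), P = V * I.
components = {                      # name: (voltage in V, current in mA)
    "ESP 32":               (3.3, 500),
    "DHT11":                (5.0, 2.5),
    "Soil moisture sensor": (5.0, 5.0),
    "Cooling fan":          (12.0, 160),
    "Solenoid":             (12.0, 500),
    "Relay module":         (3.3, 20),
}
for name, (v, i) in components.items():
    print(f"{name}: {v * i} mW")
total_mw = sum(v * i for v, i in components.values())
print(f"Total: {total_mw} mW = {total_mw / 1000} W")   # 9673.5 mW = 9.6735 W
```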
4.3 Microclimatic Variables Monitoring Three measuring nodes were deployed across the greenhouse block to find possible microclimate layers and their differences. Each node reads the temperature, humidity, and soil moisture values. The soil moisture values, relative humidity, and temperature displayed on the serial monitor are shown in Figs. 10 and 11. The circuit was built on a perf board to accommodate the ESP32 with the sensors and actuators. In order to get accurate real-time readings for display, we use sensors with greater efficiency and longer life. Plant growth monitoring and control are carried out by considering these three parameters.
4.4 Generation of Local-IP Address A local IP address is assigned to the HTTP server hosted on the ESP32 development board. Communication between server and client is done over port 80. IP addresses can be allocated statically or dynamically. The ESP32 is used as an access point; therefore, the login credentials of this access point need to be entered
Fig. 10 Resistive soil moisture sensor output on the serial monitor
Fig. 11 DHT11 output on the serial monitor
in the new device which is to be paired. To verify the design, the generated local IP address is entered in the web browser of the device connected to the ESP32 access point. The web browser will display the webpage, which can be used to debug errors initially until the final webpage is fully functional. Figure 12 shows the generation of the IP address for access.
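The chapter's firmware is not reproduced here, but the access point and port-80 server described above can be sketched in MicroPython, assuming the board runs MicroPython rather than the Arduino toolchain; the SSID, password, and page body are placeholders.

```python
# MicroPython on the ESP32: start a soft access point and a bare HTTP
# server on port 80. Credentials and page content are illustrative only.
import network
import socket

ap = network.WLAN(network.AP_IF)
ap.active(True)
ap.config(essid="Greenhouse-AP", password="greenhouse123")
print("Local IP:", ap.ifconfig()[0])      # address to type into the browser

s = socket.socket()
s.bind(("", 80))                          # client communication on port 80
s.listen(1)
while True:
    client, addr = s.accept()
    client.recv(1024)                     # read and discard the HTTP request
    client.send(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
                b"<html><body>Greenhouse monitor</body></html>")
    client.close()
```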
4.5 Comparison of Sensor Readings Due to loose connections and short circuits, the real-time data displayed on the website might be incorrect. Hence, for the same environment, the entire circuit was built and the readings were checked against Google (weather.com) and compared with our designed webpage. Figures 13 and 14 show the temperature and humidity obtained from the DHT11 sensor on Google (weather.com) and on our webpage.
5 Conclusion An energy-efficient wireless sensor network (WSN) for controlling and monitoring greenhouses using the Internet of Things was presented. In this work, it was evidently
Fig. 12 Generating IP address
shown that the variability in the data is due to the quality of the sensors used. The quality of the plants under various light intensities is also good because of the controlling and monitoring of microclimatic variables. This study clearly reveals the growing and urgent need for safely growing and harvesting food crops. Since working from home has become the new normal of day-to-day life, a smart greenhouse system would be helpful for working professionals engaged in farming activities: they can cultivate a large amount of crops on a small amount of land and easily monitor it as well. For farmers who grow medicinal plants under monitored growth parameters, our smart greenhouse system would be a preferred choice; the data collected for a harvest can be used to develop plant-growth parameters. Obviously, it would be very difficult to achieve a good quality of yield during all seasons under the additional effects of unknown adverse climatic conditions. The investigation made in the greenhouse evidences the operational competency of the
Fig. 13 Comparison of temperature and humidity shown by Google
Fig. 14 Comparison of temperature and humidity shown by our webpage
wireless sensor network in challenging environments. High moisture must be taken into account to anticipate possible damage and to protect sensitive boards prudently. Small pollen particles also affect sensor measurement, distorting the measured results. Periodic communication with the test setup yielded effective results for the WSN. Sensor nodes were turned on all the time, and the maximum power consumption of the network was measured. The DHT11 temperature and humidity sensor is well suited for low-power, wirelessly connected nodes; it consumes a power of 12.5 mW, and the other sensors are considered similarly. Longer network range can be achieved through amplifiers, multi-hop communication, and stronger transmitters. Results can be improved by using a heater and a cooler, which help achieve the required heating and cooling effects when controlling the temperature and humidity. Effective automation in the greenhouse made wireless communication attractive, with all the essential improvements.
6 Future Work • Extend the usage of smart greenhouse systems into the fields of aquaponics and hydroponics to significantly increase the yield generated on the same dimensions of land as compared to normal farming techniques. • Develop efficient image processing techniques to identify and eliminate diseased plant leaves automatically with the help of smart robots to minimize human interference. • Build a secure system to protect the user network from hackers and store user data privately. • Develop a machine-learning algorithm to identify the optimal threshold parameters for different plants grown and find the optimal growth parameters based on yield quality and quantity. • Build a database to store user data of all plant parameters and to display the real-time change in data as a graph for a certain duration. Acknowledgements This work was supported by the JSS Academy of Technical Education, India.
Predicting the Intention to Use Bitcoin: An Extension of Technology Acceptance Model (TAM) with Perceived Risk Theory Gulsah Hancerliogullari Koksalmis, İbrahim Arpacı, and Emrah Koksalmis
Abstract Bitcoin, the world’s first completely decentralized digital currency, is gradually attracting the interest of a large number of people all over the world. This study proposed an integrated research framework to discover the antecedents of the behavioral intention to use cryptocurrencies, specifically Bitcoin, by extending the technology acceptance model (TAM) with perceived risk theory (PRT). A structural equation modeling approach using SmartPLS was employed to confirm the validity of the instruments and test the hypothesized relationships based on data collected from a sample of 397 individuals randomly selected among people in the United States. The results indicated that perceived risk negatively predicted the behavioral intention to use Bitcoin, whereas social influence, perceived usefulness, perceived ease of use, and attitude positively predicted the behavioral intention. Implications of the findings and recommendations for further studies are discussed. Keywords Bitcoin · Cryptocurrencies · Technology acceptance model · Perceived risk theory
1 Introduction Advancements in finance and other innovative improvements have led to a demand for cryptocurrencies to enable quick real-time capital exchanges [1]. Cryptocurrencies are utilized within the virtual market and within the monetary area as an economic G. H. Koksalmis (B) Department of Industrial Engineering, Faculty of Management, Istanbul Technical University, Istanbul, Turkey e-mail: [email protected] İ. Arpacı Department of Software Engineering, Faculty of Engineering and Natural Sciences, Bandirma Onyedi Eylul University, Balikesir, Turkey E. Koksalmis Hezarfen Aeronautics and Space Technologies Institute, National Defense University, İstanbul, Turkey © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_6
tool [2]. Cryptocurrencies depend upon a peer-to-peer mechanism to meet the economic requirements of e-commerce sites. Bitcoin is identified as the currency of the Internet, or the world’s first totally decentralized digital monetary unit, in that it is not issued by any government or central bank but operates under a set of digital rules [3]. To put it another way, Bitcoin has a novel transaction structure where the money is digital. Using Bitcoin, anybody can transfer currency to any place without a third party [4]. It depends on cryptographic protocols and a system of clients who mint, trade, store, and transfer Bitcoins [5, 6]. They can be transferred to an individual with a Bitcoin account [7]. People can do transactions legitimately; there is “no need for a third-party intermediary” [8]. Bitcoin has lower transaction fees [9], and it can be used to book hotels on online travel platforms, shop for furniture on online retailers, or buy video game consoles. Bitcoin spread very rapidly and attracted a lot of consumers. Bitcoin is used in several stores and services, even Bitcoin ATMs are accessible, and it is broadly utilized in several territories, including the United States, Canada, and Australia [10]. The value of a Bitcoin has increased and fluctuated significantly over time. Even though Bitcoin was first deployed almost 10 years ago, there are over 16 million Bitcoins in circulation, corresponding to over 152,344,677,156 USD on 7/12/2020 [11]. Currently, the Bitcoin economy is greater than the economies of some smaller countries [9]. Even though Bitcoin provides several advantages, for instance, lower transaction fees and faster transaction speed, many people resist adopting it because of risks and uncertainties. There are some concerns related to mistrust in the reliability and validity of Bitcoin, since it has an ambiguous status in several nations [12]. Further, cyberattacks [3, 13] and privacy issues within the network are among other issues. Therefore, figuring out the root causes of this opposition would be valuable in promoting Bitcoin use. This study aims to fill the gap by investigating factors predicting cryptocurrency acceptance, specifically Bitcoin, in the United States. In spite of the increasing usage of cryptocurrency, research on the dynamics affecting its acceptance is limited [4, 14–18]. No earlier empirical study considers drivers and barriers simultaneously among people in the United States. In order to construct a strong theoretical base to explore Bitcoin acceptance, this study proposed a novel theoretical model by extending the Technology Acceptance Model (TAM) with Perceived Risk Theory. The research is directed by the following research questions. 1. Based on the TAM with Perceived Risk Theory, to what extent is each predictor variable (e.g., social influence, trust, and perceived risk) correlated with the acceptance of Bitcoin usage? 2. To what extent do external variables, such as behavioral intention, explain variation in attitude toward use, perceived ease of use, perceived usefulness, social influence, trust, and perceived risk? 3. How well is Bitcoin accepted by people in the U.S.?
2 Theoretical Background and Hypotheses Development In the literature, a wide variety of methods, for instance, perceived risk theory (PRT), the theory of reasoned action (TRA), the theory of planned behavior (TPB), the unified theory of acceptance and use of technology (UTAUT), and the TAM, have been proposed to determine the factors affecting users’ attitudes. However, in order to interpret the behavioral intention to use information systems, scientists commonly prefer the TAM, as it emerged from both the TRA and the TPB [19]. The TAM is now a widespread model for evaluating the acceptance of new systems in several businesses, for instance, healthcare, finance, and education. Moreover, to measure how risks affect the acceptance of a novel system, the PRT is one of the most effective and widely used models.
2.1 Technology Acceptance Model TAM is a robust model used to measure users’ behavior and acceptance of novel technologies in different fields [20, 21]. The TAM includes the factors of actual use (AU), behavioral intention to use (BIU), attitude toward use (ATU), perceived usefulness (PU), and perceived ease of use (PEOU), where the PU and PEOU are the main factors predicting the ATU and BIU (Fig. 1).
Fig. 1 Research framework (trust, perceived usefulness, perceived ease of use, perceived risk, social influence, attitude toward use, and behavioral intention to use, linked by hypotheses H1–H9)
2.2 Perceived Risk Theory Perceived Risk Theory, proposed by [22], explains how risk perceptions affect consumers’ decisions. Perceived risk was defined as an anticipated loss incurred while pursuing a desired result [23]. Several studies have observed the effects of risks on consumers’ behavior [24]. Perceived Risk Theory consists of the dimensions of security (privacy), time risk, social risk, financial risk, performance risk, and perceived risk [25].
2.3 Hypotheses
2.3.1 Social Influence
Social influence is “a person’s perception that most people who are important to him think he should or should not perform the behavior in question” [26]. In the extended TAM, social influence directly impacts the behavioral intention, which is defined as “the degree to which a person has formulated conscious plans to perform or not perform some specified future behavior” [27, 28]. Potential users’ behavioral intentions would be positively affected when Bitcoins are suggested for use by their social networks. Hence, this study hypothesized as follows: H1. “Social influence would have a positive relationship with the behavioral intention to use Bitcoin.”
2.3.2 Attitude
Attitude is the “individual’s positive or negative feelings about performing the target behavior” [26, 29]. Prior studies indicated that behavioral intentions to use a new technology were significantly influenced by attitudes [30–35]. Hence, this study hypothesized as follows: H2. “Attitude would have a positive relationship with the behavioral intention to use Bitcoin.”
2.3.3 Perceived Usefulness
Perceived usefulness is “the degree to which an individual believes that using a particular system would enhance his or her job performance” [29]. In the TAM, the PU is a key element affecting the BIU [20]. Prior studies indicated that perceived usefulness has a positive relationship with the BIU [31, 36–39]. Further, previous studies indicated a positive link between the PU and attitude [21, 24, 31, 40, 41]. Hence, this study hypothesized as follows:
H3. “Perceived usefulness would have a positive relationship with the behavioral intention to use Bitcoin.” H4. “Perceived usefulness would have a positive relationship with the attitude toward Bitcoin use.”
2.3.4 Perceived Ease of Use
Perceived ease of use is “the degree to which a person believes that using a particular system would be free of effort” [21]. If a user feels that Bitcoin is easy to use, there will be an optimistic attitude toward Bitcoin use. Further, the easier it is to use, the stronger the perceptions of its usefulness. Previous research indicated a positive relationship between the PEOU and the PU [42, 43]. Hence, this study hypothesized as follows: H5. “Perceived ease of use would have a positive relationship with the attitude toward Bitcoin use.” H6. “Perceived ease of use would have a positive relationship with the perceived usefulness of Bitcoin.”
2.3.5 Perceived Risk
Perceived risk is “the consumer’s expectations of suffering loss in pursuit of a desired outcome” [16, 33]. Several studies indicated the negative impacts of perceived risk on the BIU [14, 16, 33, 36]. Hence, this study hypothesized as follows: H7. “Perceived risk would have a negative relationship with the behavioral intention to use Bitcoin.”
2.3.6 Trust
Trust is “an individual belief that others will behave based on an individual’s expectation” and “an expectation that others one chooses to trust will not behave opportunistically by taking advantage of the situation” [44]. Previous studies indicated a negative relationship between trust and perceived risk [36]. If people trust Bitcoin, they will find the system useful and be enthusiastic about using it. Previous studies have shown that trust has a positive relationship with perceived usefulness [36, 44–48]. Hence, this study hypothesized as follows: H8. “Trust would have a negative relationship with the perceived risk of Bitcoin.” H9. “Trust would have a positive relationship with the perceived usefulness of Bitcoin.”
3 Methodology 3.1 Survey Design The questionnaire consisted of three sections. The covering letter and informed consent form were included in the first section. The next section included the demographic inquiries, for instance, educational level, employment status, profession, IT experience, knowledge of finance, and knowledge of Bitcoins. The third section comprised the TAM- and Perceived Risk Theory-related questions and further items that measure social influence and trust. The questionnaire included no personal details that could possibly indicate a specific respondent’s identity.
3.2 Sample The data were gathered using both paper and online surveys. The paper-based survey was distributed; in addition, a URL address of the survey was provided via e-mail. The aim and context of this research were explained to the participants. The sample involved American respondents from various backgrounds who were using Bitcoins. In total, 413 questionnaires were obtained, and 397 of them were analyzed in this study. 57% of the participants were male (mean age = 28), 73% of them have a college degree, and 53% of them are employed. The details of the demographics are summarized in Table 1.

Table 1 Sample demographics

Age (years): Max: 66, Min: 19, M: 28.4
Gender (%): Female: 42.56, Male: 57.44
Education level (%): Primary/Secondary school: 2.01, High school: 13.6, Undergraduate: 72.54, Postgraduate/Ph.D.: 11.85
Employment (%): Employed: 52.64, Unemployed: 47.35
Internet use in a day (h): Max: 18, Min: 1, M: 5
Industries of participants (%): Finance: 14.2, Information Technology: 45.6, Healthcare: 6.6, Energy: 4.1, Education: 2.9, Other: 26.6
3.3 Measures The scale used in the study consists of seven constructs and 27 items. Appendix shows the constructs and their items along with source references. A five-point Likert scale was used to measure participants’ opinions on each item.
3.4 Data Analysis This study used partial-least-squares structural equation modeling (PLS-SEM) to test the research framework. This method enables scholars to examine relatively novel situations even in the absence of a strong theoretical foundation. PLS-SEM, which has been utilized in existing studies, gives precise estimations even when the distribution of the data is not normal [49]. SmartPLS 3.2.7 software was used as the analytical tool.
4 Results 4.1 Measurement Model The reliability, convergent validity, and discriminant validity were examined to assess whether the constructs are reliable and valid. The convergent validity was evaluated using the composite reliability (CR) and average variance extracted (AVE) coefficients [50–52]. Construct validity was assessed via the factor loadings that display the link between each item and the related construct. The threshold value for factor loadings was identified as 0.60 [51, 53]. According to Table 2, all the factor loadings were greater than 0.60 and statistically significant (p < 0.05), which confirms construct validity at the item level. The constructs’ reliability was assessed using the Cronbach’s alpha and composite reliability (CR) coefficients. The Cronbach’s alpha measures the internal consistency of constructs. In this paper, the Cronbach’s alpha coefficients were above the threshold value of 0.70; therefore, internal consistency was satisfied [51]. The CR is defined as “the shared variance among a set of observed variables measuring an underlying construct” [51], in other words, “how well the items measure a construct”. The threshold value of the CR is 0.70, whereas in this study the CR values ranged between 0.86 and 0.91 [51, 54]. Average variance extracted (AVE) is defined as “the variance captured by the construct in relation to the amount of variance attributable to measurement error”, and the threshold value of the AVE is 0.50 [51, 55]. In this study, the AVE values are above the threshold value. The results confirmed the convergent validity.
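The CR and AVE definitions above reduce to simple formulas over the standardized factor loadings. A minimal sketch using the attitude-toward-use loadings from Table 2, which reproduces the 0.863 and 0.684 reported there:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)) over standardized loadings."""
    l = np.asarray(loadings)
    s2 = l.sum() ** 2
    return s2 / (s2 + (1 - l ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    l = np.asarray(loadings)
    return (l ** 2).mean()

atu = [0.937, 0.895, 0.611]   # attitude-toward-use loadings from Table 2
print(round(composite_reliability(atu), 3))       # 0.863
print(round(average_variance_extracted(atu), 3))  # 0.684
```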
Table 2 Reliability and validity

| Construct | Item code | Factor loading | t statistics | Cronbach's alpha | CR | AVE |
|---|---|---|---|---|---|---|
| Attitude toward use | ATU01 | 0.937 | 102.754 | 0.779 | 0.863 | 0.684 |
| | ATU02 | 0.895 | 47.268 | | | |
| | ATU03 | 0.611 | 5.515 | | | |
| Behavioral intention to use | BIU01 | 0.896 | 77.300 | 0.808 | 0.880 | 0.712 |
| | BIU02 | 0.920 | 52.375 | | | |
| | BIU03 | 0.698 | 12.047 | | | |
| Perceived risk | PER01 | 0.849 | 38.683 | 0.829 | 0.898 | 0.745 |
| | PER02 | 0.910 | 57.591 | | | |
| | PER03 | 0.829 | 26.206 | | | |
| Perceived ease of use | PEOU01 | 0.847 | 25.115 | 0.884 | 0.900 | 0.605 |
| | PEOU02 | 0.890 | 39.491 | | | |
| | PEOU03 | 0.848 | 18.678 | | | |
| | PEOU04 | 0.764 | 7.942 | | | |
| | PEOU05 | 0.673 | 5.957 | | | |
| | PEOU06 | 0.606 | 4.796 | | | |
| Perceived usefulness | PU01 | 0.889 | 59.089 | 0.836 | 0.887 | 0.667 |
| | PU02 | 0.892 | 39.871 | | | |
| | PU03 | 0.826 | 20.020 | | | |
| | PU04 | 0.632 | 9.319 | | | |
| Social influence | SI01 | 0.862 | 35.608 | 0.868 | 0.904 | 0.654 |
| | SI02 | 0.857 | 38.376 | | | |
| | SI03 | 0.774 | 19.843 | | | |
| | SI04 | 0.820 | 23.607 | | | |
| | SI05 | 0.723 | 12.110 | | | |
| Trust | T01 | 0.939 | 79.023 | 0.864 | 0.909 | 0.772 |
| | T02 | 0.949 | 81.402 | | | |
| | T03 | 0.730 | 10.962 | | | |
The discriminant validity, which tests to what degree constructs differ, was also checked. Constructs’ discriminant validity is assessed by comparing the square root of the AVE values for a given construct with the correlations [56]. The correlations among the constructs are shown in Table 3; the square roots of the AVE values are higher than the non-diagonal elements in the respective rows and columns, which confirmed the discriminant validity. Another widely used approach for measuring the discriminant validity is the Heterotrait-Monotrait ratio (HTMT). The values of the HTMT should be less than the suggested value of 0.85 [57]. According to Table 4, all HTMT values are less than 0.85.
Table 3 Results of Fornell-Larcker criterion

| Construct | ATU | BIU | PEOU | PER | PU | SI | T |
|---|---|---|---|---|---|---|---|
| ATU | 0.827 | | | | | | |
| BIU | 0.698 | 0.844 | | | | | |
| PEOU | 0.419 | 0.359 | 0.778 | | | | |
| PER | −0.634 | −0.599 | −0.351 | 0.863 | | | |
| PU | 0.568 | 0.589 | 0.469 | −0.551 | 0.817 | | |
| SI | 0.576 | 0.642 | 0.268 | −0.481 | 0.463 | 0.809 | |
| T | 0.321 | 0.307 | 0.242 | −0.503 | 0.494 | 0.311 | 0.878 |
Table 4 Results of Heterotrait-Monotrait ratio (HTMT)

| Construct | ATU | BIU | PEOU | PER | PU | SI |
|---|---|---|---|---|---|---|
| ATU | | | | | | |
| BIU | 0.770 | | | | | |
| PEOU | 0.241 | 0.223 | | | | |
| PER | 0.488 | 0.639 | 0.227 | | | |
| PU | 0.441 | 0.528 | 0.378 | 0.436 | | |
| SI | 0.460 | 0.347 | 0.183 | 0.313 | 0.337 | |
| T | 0.424 | 0.737 | 0.088 | 0.674 | 0.350 | 0.344 |
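The Fornell-Larcker diagonal in Table 3 can be recomputed directly from the AVE values in Table 2, as the short sketch below shows (the T entry differs from the table only by rounding):

```python
import numpy as np

# The Table 3 diagonal is the square root of each construct's AVE
# (Table 2); discriminant validity requires it to exceed every
# correlation in the construct's row and column.
ave = {"ATU": 0.684, "BIU": 0.712, "PEOU": 0.605, "PER": 0.745,
       "PU": 0.667, "SI": 0.654, "T": 0.772}
for construct, value in ave.items():
    print(construct, round(float(np.sqrt(value)), 3))
# ATU 0.827, BIU 0.844, PEOU 0.778, PER 0.863, PU 0.817, SI 0.809,
# T 0.879 (Table 3 prints 0.878, a rounding difference)
```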
4.2 Hypotheses A PLS-SEM approach was used to test the hypothesized relationships. The hypotheses, standardized parameters, and respective t-statistics are summarized in Table 5. Hypotheses with a p-value lower than 0.05 (t > 1.96) were supported. Social influence (β = 0.297, p < 0.05), ATU (β = 0.330, p < 0.05), PU (β = 0.183, p < 0.05), and PER (β = −0.146, p < 0.05) significantly predicted the BIU; thereby, Hypotheses 1, 2, 3, and 7 were supported. PU (β = 0.476, p < 0.05) and PEOU (β = 0.196, p < 0.05) significantly predicted the ATU; thereby, Hypotheses 4 and 5 were supported. PEOU (β = 0.372, p < 0.05) and trust (β = 0.404, p < 0.05) significantly predicted the PU; thereby, Hypotheses 6 and 9 were supported. Trust (β = −0.503, p < 0.05) significantly predicted the perceived risk; thereby, Hypothesis 8 was supported. Figure 2 illustrates the structural model with path coefficients. The R2 specifies the proportion of overall variance explained in a variable. For instance, the proposed model explains 62% (R2 = 0.616) of the total variance in the BIU.
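The t statistics in Table 5 come from bootstrapping, as is standard in PLS-SEM tools such as SmartPLS. The sketch below illustrates the idea on synthetic data with a single standardized predictor; it is a simplified illustration of the procedure, not the study's actual estimation code.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_t(x, y, n_boot=5000):
    """t = estimate / bootstrap standard error, the scheme used to
    obtain t statistics like those in Table 5 (simplified here to a
    single standardized predictor, where beta equals the correlation)."""
    beta_hat = np.corrcoef(x, y)[0, 1]
    n = len(x)
    betas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        betas[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return beta_hat / betas.std(ddof=1)      # |t| > 1.96 implies p < 0.05

x = rng.normal(size=397)                     # synthetic data; n matches the study
y = 0.3 * x + rng.normal(size=397)
print(bootstrap_t(x, y))
```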
Table 5 Hypothesis testing results

| Hypothesis | Path | Path coefficient (β) | T statistics | Supported (Yes/No) |
|---|---|---|---|---|
| H1 | SI → BIU | 0.297* | 4.304 | Yes |
| H2 | ATU → BIU | 0.330* | 3.553 | Yes |
| H3 | PU → BIU | 0.183* | 2.260 | Yes |
| H4 | PU → ATU | 0.476* | 5.804 | Yes |
| H5 | PEOU → ATU | 0.196* | 2.289 | Yes |
| H6 | PEOU → PU | 0.372* | 6.334 | Yes |
| H7 | PER → BIU | −0.146* | 2.672 | Yes |
| H8 | T → PER | −0.503* | 7.232 | Yes |
| H9 | T → PU | 0.404* | 6.647 | Yes |

*p < 0.05
Fig. 2 Path coefficients of the research model (*p < 0.05). R2: perceived usefulness = 0.374, attitude toward use = 0.353, perceived risk = 0.253, behavioral intention to use = 0.616
5 Discussion and Conclusion 5.1 Theoretical Implications The aim of this paper was to explore the adoption of cryptocurrencies, specifically Bitcoin. The study proposed a theoretical model by extending the TAM with the PRT as well as social influence and trust. The path analysis showed that the results were consistent with the earlier TAM literature, indicating that the PU, PEOU, and ATU
are positively and significantly affecting the BIU. Further, trust and social influence were also significant in predicting the behavioral intention to use Bitcoin. Since Bitcoin is global, the model proposed in this research can also be tested to assess the adoption of the Bitcoin system in other countries or cultures. Nevertheless, a system embraced in one nation may not fit the necessities of users in other nations due to diverse strategic approaches and legal and political prerequisites. Every society has its own work-discipline culture, which also affects the perceptions of users. Some countries have placed limitations on the way Bitcoin can be used, with banks banning their customers from making cryptocurrency transactions. Other countries have banned the use of Bitcoin and cryptocurrencies outright, with heavy penalties in place for anyone making crypto transactions. “Egypt, Iraq, Qatar, Oman, Morocco, Algeria, Tunisia, Bangladesh, and China have all banned cryptocurrency. Forty-two other countries, including Algeria, Bahrain, Bangladesh, and Bolivia, have implicitly banned digital currencies by putting restrictions on the ability for banks to deal with crypto, or prohibiting cryptocurrency exchanges, according to a 2021 summary report by the Law Library of Congress published in November” [58]. Hence, the results may vary if the study is conducted in other countries or cultures. The study makes four main contributions to the existing literature. Firstly, this study explores the most critical factors predicting Bitcoin acceptance in the United States. Secondly, only a small number of prior studies have focused on the behavioral intention to use Bitcoin. Thirdly, this study integrates the Perceived Risk Theory with the TAM to provide a strong theoretical base for examining Bitcoin acceptance. Lastly, the results of this study indicated that social influence is a significant determinant of the behavioral intention to use Bitcoin. These findings are highly interesting and important for Bitcoin users, investors, traders, business specialists, and scholars, since they can help understand what makes people invest in Bitcoin and, thereby, make future forecasts on digital currencies and their diffusion.
5.2 Practical Implications This study has some implications in practice for investors, entrepreneurs, R&D initiatives, regulators, and government organizations. The results indicated that social influence, trust, PEOU, PU, PER, and ATU were determinants of the BIU. People are likely to use Bitcoins if they think that their efficiency will increase when they use cryptocurrencies. Social influence was identified as a significant predictor of the BIU. This suggests that the opinions of families, friends, and colleagues are very influential in the adoption decision. In the same vein, the support of governments may also have a positive impact on adoption. The results of this study indicated that perceived risk negatively predicted the BIU. This implies that price volatility may hinder large-scale adoption of the cryptocurrency. Further, individuals may find Bitcoin less useful when they feel it is complex to use. Therefore, developers have to pay attention to adequate coaching
and training opportunities. Finally, Bitcoin usage can be encouraged by providing consumers with Bitcoin ATMs, which will increase the availability of Bitcoins.
5.3 Limitations and Future Research Directions Despite the aforementioned contributions of this research, there are several limitations that should be considered in further studies. First, this study was conducted only in the United States; the data collection focused only on the participants in the “World Blockchain Forum: Investments and ICOs” in the United States, so the results apply to the United States, and the findings may not generalize to other developing or developed countries. If the research is retested on a different population or culture, the results may change. As future research, scholars may test the model with diverse audiences from diverse cultural backgrounds and compare the results. Second, the model did not consider the demographics gathered through the questionnaire as factors or moderators. Therefore, the moderating roles of gender, age, etc. can be tested in the model in a future study. Additional domain-specific factors such as volatility, government support, and scalability may also be included in the model and tested in a future study. Finally, qualitative data can be used for a better understanding of Bitcoin adoption.
Appendix: Constructs and Items
| Construct | Item code | Item | References |
|---|---|---|---|
| Attitude toward use | ATU01 | “Using the Bitcoin is a good idea” | [59] |
| | ATU02 | “The Bitcoin makes work more interesting” | |
| | ATU03 | “Working with the Bitcoin is fun” | |
| Behavioral intention to use | BIU01 | “I intend to use the Bitcoin in the next months” | [59] |
| | BIU02 | “I predict I would use the Bitcoin in the next months” | |
| | BIU03 | “I plan to use the Bitcoin in the next months” | |
| Perceived risk | PER01 | “Using the Bitcoin may expose me to fraud or monetary loss” | [16, 36] |
| | PER02 | “Using the Bitcoin may jeopardise my privacy” | |
| | PER03 | “Using the Bitcoin may expose me to legal problems” | |
| Perceived ease of use | PEOU01 | “Learning to operate the Bitcoin would be easy for me” | [21, 29, 59] |
| | PEOU02 | “I would find it easy to get the Bitcoin to do what I want it to do” | |
| | PEOU03 | “My interaction with the Bitcoin would be clear and understandable” | |
| | PEOU04 | “I would find the Bitcoin to be flexible to interact with” | |
| | PEOU05 | “It would be easy for me to become skillful at using the Bitcoin” | |
| | PEOU06 | “I would find the Bitcoin easy to use” | |
| Perceived usefulness | PU01 | “Using the Bitcoin would improve my productivity” | [21, 29, 59] |
| | PU02 | “Using the Bitcoin would increase my efficiency in transaction” | |
| | PU03 | “Using the Bitcoin would make my transaction easier” | |
| | PU04 | “Using the Bitcoin would make my transaction quicker” | |
| Social influence | SI01 | “People who influence my behavior think that I should use the Bitcoin” | [59, 60] |
| | SI02 | “People who are important to me think that I should use the Bitcoin” | |
| | SI03 | “I use the Bitcoin because of the proportion of coworkers who use the Bitcoin” | |
| | SI04 | “People in my organization who use the Bitcoin have more prestige than those who do not” | |
| | SI05 | “People in my organization who use the Bitcoin have a high profile” | |
| Trust | T01 | “The Bitcoin is trustworthy” | [36] |
| | T02 | “The Bitcoin is one that keeps promises and commitments” | |
| | T03 | “I trust the Bitcoin because it keeps my best interests in mind” | |
References 1. F. Brezo, P.G. Bringas, Issues and risks associated with cryptocurrencies such as Bitcoin (2012) 2. M. Briere, K. Oosterlinck, A. Szafarz, Virtual currency, tangible return: portfolio diversification with bitcoin. J. Asset Manag. 16(6), 365–373 (2015) 3. D. Ron, A. Shamir, Quantitative analysis of the full bitcoin transaction graph, pp. 6–24 4. F.E. Gunawan, R. Novendra, An analysis of bitcoin acceptance in Indonesia. ComTech Comput. Math. Eng. Appl. 8(4), 241–247 (2017) 5. S. Barber, X. Boyen, E. Shi, E. Uzun, Bitter to better—how to make bitcoin a better currency, in International Conference on Financial Cryptography and Data Security (2012), pp. 399–414 6. S. Nakamoto, Bitcoin: a peer-to-peer electronic cash system (2008) 7. A. Rogojanu, L. Badea, The issue of competing currencies. Case study-Bitcoin. Theoret. Appl. Econom. 21(1) (2014) 8. C. Tsanidis, D.-M. Nerantzaki, G. Karavasilis, V. Vrana, D. Paschaloudis, Greek consumers and the use of Bitcoin. Bus. Manag. Rev. 6(2), 295 (2015) 9. J. Brito, A. Castillo, Bitcoin: A Primer for Policymakers (Mercatus Center at George Mason University, 2013) 10. B. Scott, How can cryptocurrency and blockchain technology play a role in building social and solidarity finance? UNRISD Working Paper (2016) 11. Bitcoincharts (2020). https://bitcoincharts.com/ 12. K. Hill, in 21 Things I Learned About Bitcoin from Living on it for a Week (Forbes, 2013) 13. M. Rosenfeld, Dynamic block frequency. Bitcoin forum thread (2012) 14. D. Folkinshteyn, M. Lennon, Braving Bitcoin: a technology acceptance model (TAM) analysis. J. Inf. Technol. Case Appl. Res. 18(4), 220–249 (2016) 15. M. Hutchison, Acceptance of electronic monetary exchanges, specifically bitcoin, by information security professionals: a quantitative study using the unified theory of acceptance and use of technology (UTAUT) model. Colorado Technical University (2017) 16. A. Kumpajaya, W. Dhewanto, The acceptance of Bitcoin in Indonesia: extended TAM with IDT. J. Bus. Manag. 4(1), 28–38 (2015) 17. F. Shahzad, G.Y. Xiu, J. Wang, M. Shahbaz, An empirical investigation on the adoption of cryptocurrencies among the people of Mainland China. Technol. Soc. (2018) 18. J. Silinskyte, Understanding Bitcoin adoption: unified theory of acceptance and use of technology (UTAUT) application. Master thesis. Leiden Institute of Advanced Computer Science (LIACS) (2014) 19. S. Sternad, S. Bobek, Impacts of TAM-based external factors on ERP acceptance. Proc. Technol. 9, 33–42 (2013) 20. M. Al-Emran, A. Grani´c, Is it still valid or outdated? a bibliometric analysis of the technology acceptance model and its applications from 2010 to 2020, in Recent Advances in Technology Acceptance Models and Theories (Springer, 2021), pp. 1–12 21. F.D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart., 319–340 (1989) 22. R.A. Bauer, Consumer behavior as risk taking 23. M.S. Featherman, P.A. Pavlou, Predicting e-services adoption: a perceived risk facets perspective. Int. J. Hum Comput Stud. 59(4), 451–474 (2003) 24. W.-B. Lin, Investigation on the model of consumers’ perceived risk—integrated viewpoint. Expert Syst. Appl. 34(2), 977–988 (2008) 25. J. Jacoby, L.B. Kaplan, The components of perceived risk. ACR (1972) 26. M. Fishbein, I. Ajzen, Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research (1975) 27. V. Venkatesh, F.D. Davis, A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 
46(2), 186–204 (2000) 28. S.A. Kamal, M. Shafiq, P. Kakria, Investigating acceptance of telemedicine services through an extended technology acceptance model (TAM). Technol. Soc. 60, 101212 (2020)
29. F.D. Davis, R.P. Bagozzi, P.R. Warshaw, User acceptance of computer technology: a comparison of two theoretical models. Manage. Sci. 35(8), 982–1003 (1989) 30. T. Ahn, S. Ryu, I. Han, The impact of Web quality and playfulness on user acceptance of online retailing. Inf. Manag. 44(3), 263–275 (2007) 31. F. Calisir, C. Altin Gumussoy, A. Bayram, Predicting the behavioral intention to use enterprise resource planning systems: an exploratory extension of the technology acceptance model. Manag. Res. News 32(7), 597–613 (2009) 32. D. Karaali, C.A. Gumussoy, F. Calisir, Factors affecting the intention to use a web-based learning system among blue-collar workers in the automotive industry. Comput. Hum. Behav. 27(1), 343–354 (2011) 33. M.-C. Lee, Factors influencing the adoption of internet banking: an integration of TAM and TPB with perceived risk and perceived benefit. Electron. Commer. Res. Appl. 8(3), 130–141 (2009) 34. T. Teo, J. Noyes, An assessment of the influence of perceived enjoyment and attitude on the intention to use technology among pre-service teachers: a structural equation modeling approach. Comput. Educ. 57(2), 1645–1653 (2011) 35. M. Al-Emran, R. Al-Maroof, M.A. Al-Sharafi, I. Arpaci, What impacts learning with wearables? an integrated theoretical model. Interact. Learn. Environ., 1–21 (2020) 36. P.A. Pavlou, Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. Int. J. Electron. Commer. 7(3), 101–134 (2003) 37. V. Venkatesh, Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf. Syst. Res. 11(4), 342–365 (2000) 38. V. Venkatesh, M.G. Morris, Why don’t men ever stop to ask for directions? gender, social influence, and their role in technology acceptance and usage behavior. MIS Quart., 115–139 (2000) 39. M. Al-Emran, A. Grani´c, M.A. Al-Sharafi, N. Ameen, M. Sarrab, Examining the roles of students’ beliefs and security concerns for using smartwatches in higher education. J. Enterp. Inf. Manag. (2020) 40. S.-H. Liu, H.-L. Liao, J.A. Pratt, Impact of media richness and flow on e-learning technology acceptance. Comput. Educ. 52(3), 599–607 (2009) 41. J.C. Ho, C.-G. Wu, C.-S. Lee, T.-T.T. Pham, Factors affecting the behavioral intention to adopt mobile banking: an international comparison. Technol. Soc. 63, 101360 (2020) 42. M. Al-Emran, V. Mezhuyev, A. Kamaludin, Is M-learning acceptance influenced by knowledge acquisition and knowledge sharing in developing countries? Educ. Inf. Technol. 26(3), 2585– 2606 (2021) 43. V. Mezhuyev, M. Al-Emran, M. Fatehah, N.C. Hong, Factors affecting the metamodelling acceptance: a case study from software development companies in Malaysia. IEEE Access 6, 49476–49485 (2018) 44. D. Gefen, E. Karahanna, D.W. Straub, Trust and TAM in online shopping: an integrated model. MIS Q. 27(1), 51–90 (2003) 45. I. Benbasat, W. Wang, Trust in and adoption of online recommendation agents. J. Assoc. Inf. Syst. 6(3), 4 (2005) 46. A.M. Chircu, G.B. Davis, R.J. Kauffman, Trust, expertise, and e-commerce intermediary adoption, in AMCIS 2000 Proceedings (2000), p. 405 47. J.-C. Gu, S.-C. Lee, Y.-H. Suh, Determinants of behavioral intention to mobile banking. Expert Syst. Appl. 36(9), 11605–11616 (2009) 48. T. Dirsehan, C. Can, Examination of trust and sustainability concerns in autonomous vehicle adoption. Technol. Soc., 101361 (2020) 49. J.F. Hair, M. Sarstedt, C.M. Ringle, J.A. 
Mena, An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 40(3), 414–433 (2012) 50. J.C. Anderson, D.W. Gerbing, Assumptions and comparative strengths of the two-step approach: comment on Fornell and Yi. Sociol. Methods Res. 20(3), 321–333 (1992) 51. C. Fornell, D.F. Larcker, Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res., 39–50 (1981)
52. Z. Sheikh, T. Islam, S. Rana, Z. Hameed, U. Saeed, Acceptance of social commerce framework in Saudi Arabia. Telematics Inform. 34(8), 1693–1708 (2017) 53. J.F. Hair, W.C. Black, B.J. Babin, R.E. Anderson, R.L. Tatham, Multivariate Data Analysis, vol. 6 (Pearson Prentice Hall, Upper Saddle River, NJ, 2006) 54. R.A. Peterson, Y. Kim, On the relationship between coefficient alpha and composite reliability. J. Appl. Psychol. 98(1), 194 (2013) 55. M.T. Fisher, A theory of viral growth of social networking sites. Case Western Reserve University (2013) 56. H.I.B. Baharum, Learning business English in virtual worlds: effectiveness and acceptance in a Malaysian context: a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Management Information Systems at Massey University, Palmerston North. Massey University (2013) 57. J. Henseler, C.M. Ringle, M. Sarstedt, A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43(1), 115–135 (2015) 58. M. Quiroz-Gutierrez, Crypto is fully banned in China and 8 other countries (2022). https://fortune.com/2022/01/04/crypto-banned-china-other-countries/ 59. V. Venkatesh, M.G. Morris, G.B. Davis, F.D. Davis, User acceptance of information technology: toward a unified view. MIS Quart., 425–478 (2003) 60. G.C. Moore, I. Benbasat, Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf. Syst. Res. 2(3), 192–222 (1991)
Research Trends on the Role of Big Data in Artificial Intelligence: A Bibliometric Analysis Sebastián Cardona-Acevedo, Wilmer Londoño Celis, Jefferson Quiroz Fabra, and Alejandro Valencia-Arias
Abstract The purpose of this research is to conduct a quantitative bibliometric analysis. It is intended to determine, through the related scientific literature, the research trends around managing large amounts of data in the computational sciences focused on the development of artificial intelligence. In this sense, once the search equation was elaborated and implemented in the scientific database Scopus, 484 documents related to the object of study were retrieved. Those documents, published between 2013 and 2022, were then evaluated through an exclusion process based on specific criteria previously established to meet the research quality standards, leaving 458 publications included in the quantitative synthesis. As a result of this analysis, it is observed that the topic, despite being relatively new, has seen substantial growth in recent years, concentrated mainly in China and the United States, the latter being the most relevant in terms of the number of publications compared with the Asian country. Keywords Machine learning · Bibliometrics · Deep learning · Internet of things · Computer network technology
S. Cardona-Acevedo Faculty of Economic and Administrative Sciences, Institución Universitaria Escolme, Medellín, Colombia e-mail: [email protected] W. Londoño Celis · A. Valencia-Arias (B) Faculty of Engineering, Corporación Universitaria Americana, Barranquilla, Colombia e-mail: [email protected] W. Londoño Celis e-mail: [email protected] J. Quiroz Fabra Faculty of Economic and Management Sciences, Instituto Tecnológico Metropolitano, Medellín, Colombia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_7
1 Introduction Although the concept of big data is currently present everywhere, its place of origin is unknown. Some studies express the possibility that the term emerged in collaborative worktables [1]. According to the research of O'Leary [2], there are nowadays multiple opinions that attempt to define it; however, perhaps the one with the greatest recognition is the one suggested by IBM, which proposes that big data is subject to certain specific characteristics called the three “v's”: volume, variety, and velocity. The constant and rapid evolution of information and communication technologies has spread large amounts of data across different industries worldwide. Therefore, the term big data has become increasingly relevant over the last few years, being a fundamental element for knowledge management. Because of this, it is important to know the advantages that can be gained from big data, as it facilitates more profitable and efficient processes, which can optimize activities and develop smarter systems [3]. Therefore, it is necessary to establish tools that allow the manipulation of large volumes of data, which are impossible to analyze by conventional methods in an era where information moves at high speed. Likewise, the peak of developments in the computational sciences and the applications of big data in the environment has influenced the rapid development of information systems programmed with artificial intelligence. These developments have allowed the construction of theoretical models to investigate the adoption of new technologies by users, such as the “technology acceptance model (TAM)” [4, 5], thus seeking a human-machine collaboration that enhances human capabilities based on decision making and the reinvention of ecosystems through continuous machine learning [6]. Applications of big data advances are found in many economic sectors and areas of knowledge: for example, the education sector, which through educational data mining can predict academic performance [7], or the banking sector, which through the analysis of credit card usage can identify the behavior of cardholders to predict market segmentation [8]. With its enormous capacity, volume, velocity, and variety, big data has enabled advancement in areas of the sciences that were previously regarded as elusive [9]. The properties of connecting information in an agile way similarly demand continuous innovation, facilitating the connection with other technologies such as the Internet of Things and artificial intelligence, which are factors of great relevance and constitute what Özdemir and Hekim [10] call Industry 5.0. One of the sectors with the greatest progress and boom in recent years, around the integration of big data and artificial intelligence, is medicine. There are great projects from theory, with the development of models and methods, and from practice, with the development of techniques that allow improving complex processes or developing new ones [11]. These technological advances have made it possible to broaden the landscape, even in pandemic times, demonstrating the relevance of their implementation in current issues and their adaptation to the environment. During the isolation due to the pandemic caused by COVID-19, the use of big data
applied to Artificial Intelligence, supported by the governments of some countries, enabled individual tracking and strict confinement, helping to flatten the contagion curve through algorithms for infection prediction, patient survival estimation, vaccine development, and drug discovery [12, 13]. This has aroused the interest of different communities, including academia, where discussion of the topic has grown substantially in the last two years, favoring the development of multiple disciplines. For this reason, the present research implements a quantitative analysis with a bibliometric emphasis to identify the research trends currently moving through the literature on the role of big data in the computational sciences focused on the development of artificial intelligence.
2 Methodology

2.1 Type and Approach of the Research

To fulfill the research objective outlined in the introduction, an exploratory type of research is presented, focused on secondary sources of information. For this purpose, a bibliometric analysis is performed, since bibliometrics allows a quantitative analysis of the structure of the scientific literature [14]. The analysis is based on information extracted from the Scopus database, which, as indicated by Baas et al. [15], is a complete, robust, intuitive, and rigorous database that meets the quality standards the research requires.
2.2 Search Equation

For the search and extraction of information within the selected database, an advanced search equation was designed that relates the different ways of writing about Artificial Intelligence to the topic of Big Data, applied exclusively to article titles, so that the results would be as specific, detailed, and specialized as possible. The specialized search equation is as follows: TITLE ({artificial intelligence} OR {Artificial Intelligence (AI)}) AND TITLE ({big data}). This search returned a total of 484 documents from 2013 to 2022, which were then subjected to a rigorous exclusion process based on specific criteria so that the resulting documents met the research quality standards.
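As a rough illustration (not the authors' actual pipeline), the same title-based filter can be replicated on a Scopus CSV export. The file name and the "Title" and "Year" column names below are assumptions based on the standard Scopus export format:

```python
import pandas as pd

def matches_equation(title: str) -> bool:
    # Mirrors TITLE({artificial intelligence} OR {Artificial Intelligence (AI)})
    # AND TITLE({big data}); the "(AI)" variant is covered by the same substring.
    t = str(title).lower()
    return "artificial intelligence" in t and "big data" in t

records = pd.read_csv("scopus_export.csv")            # hypothetical export file
hits = records[records["Title"].apply(matches_equation)]
hits = hits[hits["Year"].between(2013, 2022)]

# Annual and cumulative productivity, as later summarized in Fig. 2
per_year = hits.groupby("Year").size()
print(per_year)
print(per_year.cumsum())
```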
[Fig. 1 Eligibility of records for bibliometrics. Source: prepared by the authors]
2.3 Exclusion Criteria in the Research

As mentioned, the methodological design includes an exclusion process based on the specific criteria detailed in Fig. 1. The 484 records obtained from the Scopus database were filtered, eliminating 26 documents (each with a specific justification) and leaving a total of 458 final documents subjected to bibliometric analysis.
3 Results

Any bibliometric review should be based on indicators or laws that quantify the scientific structure of publications and citations, evaluating academic productivity and its impact [16]. The quantity indicators, which determine the volume of publications [17], are represented in Fig. 2, through which two fundamental aspects of scientific evolution can be identified: the number of publications per year and the cumulative amount of research over the period. Although the topic is relatively new in terms of indexing in the Scopus database, with a first record in 2013, it is a growing research area, driven by the multiple developments, approaches, and applications that scientists have implemented. Of the 458 publications analyzed, 316 were registered from 2020 onward; that is, 69% of the publications on the subject appeared in the last two years.
[Fig. 2 Annual scientific productivity: publications per year and cumulative total, 2013–2022. Source: own elaboration based on Scopus]
On the other hand, the bibliometric quality indicators, according to Morales-Zapata et al. [17], evaluate the impact of publications by quantifying their citations; some authors also point to a direct relationship between productivity and impact in bibliometric analysis [18]. Figure 3 presents the correlation between both variables, based on the number of publications and accumulated citations in the evaluated records. This correlation, traced by a linear trend line, yields a coefficient of 0.9329, showing that, in this specific case, there is a significant relationship between the number of publications and the total number of citations, i.e., between scientific productivity and academic impact. The present bibliometric analysis also applies these indicators at the author level, identifying the authors with the greatest research activity on the role of Big Data in Artificial Intelligence, reflected in a greater number of publications on the subject, together with their impact based on citations. In this sense, Fig. 4 shows that the most relevant author in terms of scientific productivity is the Chinese author Yongchang Zhang, who has focused on investigating systems based on Artificial Intelligence and Big Data technology. His most important publication proposes a neural network based on artificial intelligence to predict big data collected in smart cities based on the Internet of Things [19].
[Fig. 3 Relationship between productivity and impact: linear fit y = 9.2791x, R² = 0.7515. Source: author's elaboration based on Scopus]
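For illustration only, a through-origin linear fit of the kind reported in Fig. 3 can be computed as follows; the publication and citation values are invented placeholders, not the study's data:

```python
import numpy as np

# Illustrative (invented) cumulative publication and citation counts
pubs = np.array([1, 4, 9, 20, 38, 60, 95, 180, 320, 458], dtype=float)
cites = np.array([5, 30, 90, 210, 400, 640, 900, 1700, 3000, 4200], dtype=float)

# Least-squares slope with the intercept forced through the origin (y = bx)
slope = (pubs * cites).sum() / (pubs ** 2).sum()
residuals = cites - slope * pubs
r_squared = 1 - (residuals ** 2).sum() / ((cites - cites.mean()) ** 2).sum()
print(f"y = {slope:.4f}x, R^2 = {r_squared:.4f}")
```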
[Fig. 4 Main authors by number of publications and citations (authors shown: Zhang Y., Liu Y., Zhang Z., Li Y., Liao H.-T., Liu Z., Liu H., Wang L., Wu J., He H.). Source: own elaboration based on Scopus]
It can also be seen that the six most productive authors on the subject have comparatively low citation volumes, which is why the authors Lirong Wang and Jianghong Wu stand out: ranked eighth and ninth in productivity with four publications each, they have accumulated 128 and 103 citations respectively, which makes their work striking. Specifically, Lirong Wang focuses on the application of Big Data to Artificial Intelligence and vice versa, embodied in his most relevant publication, which proposed an artificial intelligence paradigm for drug discovery in the Big Data era [20]. Jianghong Wu shows an interest in studies on the use of Artificial Intelligence and big data across different publications, the most important of which argued for leveraging Big Data and Artificial Intelligence to better manage the COVID-19 pandemic [21].

The same analysis of quantity and quality indicators is applied at the level of scientific journals, to identify the journals that publish the most on the subject and relate this to the impact of their citations. As shown in Fig. 5, the journal with the highest academic productivity is the English Journal of Physics: Conference Series, located in the Q4 quartile of the SCImago ranking; this journal specializes in proceedings and articles on different aspects of technological and scientific development, and its most recent publication on the topic emphasizes the progress in the generation and storage of information, creating the need for intelligent networks and access data analysis technologies based on Artificial Intelligence [22].

[Fig. 5 Main journals by number of publications and citations: Journal of Physics: Conference Series; Advances in Intelligent Systems and Computing; IOP Conference Series: Materials Science and Engineering; PervasiveHealth: Pervasive Computing Technologies for Healthcare; Economics, Management, and Financial Markets; IEEE Access; Lecture Notes in Electrical Engineering; Lecture Notes on Data Engineering and Communications Technologies; ACM International Conference Proceeding Series; International Journal of Environmental Research and Public Health. Source: own elaboration based on Scopus]

On the other hand, although the U.S. journal IEEE Access ranks sixth in academic productivity, it is noteworthy that, among the related journals, it has the highest number of citations, with a total of 244. This multidisciplinary journal, located in the Q1 quartile of SCImago, owes its impact to its most relevant publication, which envisions data-driven next-generation wireless networks in which network managers employ advanced data analytics, machine learning, and artificial intelligence [3]. Similar is the case of the International Journal of Environmental Research and Public Health, which specializes in scientific studies supporting environmental and biological decision-making and which, with five publications, ranks tenth in productivity yet has 211 citations, attesting to the rigor of its editorial team and of the researchers who submit their studies to it.

Finally, Fig. 6 shows the countries with the highest frequency of research on the role of Big Data in Artificial Intelligence, through the number of publications and associated citations. The two most relevant countries in terms of both productivity and impact are China and the United States, the former being the most productive and the latter having the highest total number of citations. The Chinese context accounts for a total of 143 publications, i.e., 35.66% of the total. Research in China has mainly focused on the practical application of Big Data and Artificial Intelligence technologies, as reflected by authors such as [23, 24], who have addressed different components, direct or indirect, of the application of Artificial Intelligence in computer network technology in the big data era, including its application to modern medical services in times of pandemic.
[Fig. 6 Main countries by number of publications and citations: China, United States, United Kingdom, India, Germany, Spain, Italy, Canada, Russia, Japan. Source: own elaboration based on Scopus]
For its part, the US context has 65 publications, positioning it as the second most relevant in terms of the number of studies on the role of Big Data in Artificial Intelligence; with 1024 citations, however, it is the most important in terms of impact, as mentioned above. Research in the United States has focused on aspects closely related to those mentioned for the Chinese context, reflected in studies that apply Big Data and Artificial Intelligence to multiple contexts, from the COVID-19 pandemic to data mining technologies and the security of regulatory sites [25–27]. Consistent with the stated purpose of this research, the study then focuses on identifying the main thematic trends in works that address the role of Big Data in Artificial Intelligence. It is therefore necessary to identify the main keywords indexed in the evaluated records and their historical evolution, so that these research trends explain the growth and evolution noted earlier (see Fig. 2). Figure 7 presents this thematic evolution divided into four time periods. Between 2013 and 2015, the main concepts were Intelligent Systems, Internet of Things, and Augmented Reality, related by O'Leary [2] in a paper that examined the basic concerns and uses of Artificial Intelligence for Big Data. For the 2016–2018 interval, the most frequent keywords changed to Algorithms, Digital Humanities, and Future, jointly related in Casey and Niblett's [28] research. Between 2019 and 2021, the most addressed terms were Machine Learning, Deep Learning, and the Internet of Things, studied extensively by Tripathi et al. [29], who describe the Big Data and Artificial Intelligence techniques currently implemented to meet the increasing research demands of drug discovery projects. Finally, by 2022, the main research trends are studies on Blockchain, the Digital Economy, and Cloud Computing.
[Fig. 7 Evolution of the main keywords. Source: own elaboration based on Scopus]
- 2013–2015: Intelligent systems, Internet of things, Augmented reality, Parallelization, SuperReality, Visualization
- 2016–2018: Algorithms, Digital humanities, Future, Internet of things, Machine learning, Natural resources
- 2019–2021: Machine Learning, Deep learning, Internet of things, Expert system, Perceptual system, COVID-19
This approach to the main concepts makes it possible to identify the emerging, growing, and declining themes in studies on the topic of interest. As shown in Fig. 8, the main topics are divided into two time periods, 2013–2020 and 2021–2022. The main research term has been Machine Learning, the most researched topic in both periods; however, it is a declining topic, as are Deep Learning, COVID-19, and the Internet of Things. In contrast, some growing topics can be observed, i.e., topics investigated to a greater extent in the second period than in the first, such as Blockchain, Computer Network Technology, Precision Medicine, and Cloud Computing, which are therefore relevant to the current research agenda. Finally, there are emerging topics that were not addressed between 2013 and 2020 but have been studied between 2021 and 2022, such as Digital Economy, Accuracy, and Application, recently examined by authors such as Gallini et al. [30]. As a last step, this bibliometric analysis of research trends on the role of Big Data in Artificial Intelligence adds a structural indicator, which quantifies the connections between scientific actors [31]. With the help of the open-source software VOSviewer, Fig. 9 was designed, which establishes the co-occurrence network of keywords on the subject. A co-occurrence network topologically links two keywords whenever they coincide in the same document [32]; in this case a total of three thematic clusters emerge.
[Fig. 8 Emerging, growing and declining topics, 2013–2020 versus 2021–2022: Machine learning, Deep learning, COVID-19, Internet of Things, Blockchain, Computer network technology, Precision medicine, Sustainability, Cloud Computing, Data mining, Digital economy, Industry 4.0, Accuracy, Application, Data analytics. Source: own elaboration based on Scopus]
The term Machine Learning, in the gray cluster, has had the greatest prominence in the evaluated research, since it has been related to all the concepts derived from the subject except Cloud Computing. The same cluster includes topics such as Deep Learning, Expert System, Data Analytics, and COVID-19. The second most relevant cluster is the blue one, where the most representative concept is the Internet of Things, related within the cluster to Cloud Computing, Industry 4.0, Blockchain, Healthcare, and Sustainability. Finally, the black cluster, although it contains only three terms, relates the important concepts of Data Mining, Precision Medicine, and Medicine, which point to one of the main fields of application of these technologies.

[Fig. 9 Keyword co-occurrence network. Source: own elaboration based on Scopus]
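As a minimal sketch (not the authors' VOSviewer workflow), a keyword co-occurrence network of this kind can be built by linking two keywords whenever they are indexed on the same document; the keyword sets below are invented for illustration:

```python
from itertools import combinations
from collections import Counter
import networkx as nx

documents = [  # illustrative per-document keyword sets
    {"machine learning", "deep learning", "covid-19"},
    {"machine learning", "internet of things", "industry 4.0"},
    {"internet of things", "cloud computing", "blockchain"},
    {"data mining", "precision medicine", "medicine"},
]

# Count how often each keyword pair co-occurs in the same document
edge_counts = Counter()
for keywords in documents:
    for a, b in combinations(sorted(keywords), 2):
        edge_counts[(a, b)] += 1

graph = nx.Graph()
for (a, b), weight in edge_counts.items():
    graph.add_edge(a, b, weight=weight)

# Clusters analogous to VOSviewer's can then be extracted with a community
# detection routine, e.g. nx.community.greedy_modularity_communities(graph)
print(graph.number_of_nodes(), "keywords,", graph.number_of_edges(), "links")
```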
4 Practical Implications of the Study

First, this study supports the importance of the use of Big Data in Artificial Intelligence for academia, manifested in the significant growth of scientific productivity over the last five years. It maps the composition of the body of literature on the subject, its research references in terms of publications and citations, and the contexts in which those studies are produced, clarifying the picture for new researchers in the area.
Through the analysis of the main journals on the subject, researchers can appreciate two intangible factors of the research work: on the one hand, recognition of the journals that, given their scope, show the greatest interest in the subject and have generated the most citations, thereby identifying the most outstanding outlets; on the other, a wider range of options in which authors can publish their research. In the global analysis, there is a clear consolidation of the countries with the greatest technological development, a result consistent with their numbers of publications and citations. This makes it possible to locate the main schools of knowledge in the area, where the largest volume of research on the use of Big Data in Artificial Intelligence is generated. Finally, the most relevant practical aspect concerns the evolution of the keywords arising from research on the subject, which by 2022 show the main trends in the use of Big Data in Artificial Intelligence: the consolidation of areas such as Blockchain and Cloud Computing and, in turn, the emergence of an important concept in the field, the Digital Economy, providing support for the new research agendas that may grow out of the topic.
5 Conclusions

Evaluating the results shows that the object of study is a recent topic, indexed in Scopus for less than ten years; nevertheless, it has had
substantial growth since 2020; in numbers, 69% of the existing documents were published since then. In addition, the analysis of the relationship between publications and citations shows a correlation of 0.9329, validating the importance of the topic in terms of the impact it has been having in the academic world. Striking, however, is the gap between the productivity and the relevance of authors: those with more publications are not necessarily the most cited, and the same happens with scientific journals. This is especially visible in the comparison between countries: although China leads in publications, the United States takes first place in relevance according to its number of citations. Concerning the keywords evaluated to determine thematic trends, although this is a young object of study, three periods can be established based on the main concepts addressed in each. In recent years, the most used terms relate to Industry 4.0 technologies, with "Machine Learning" the most used concept since 2013, since it has been connected with most terms derived from the research results. It is, however, a term that, together with "Deep Learning" and "Internet of Things," has been decreasing in relevance. In contrast, the concepts of "Blockchain," "Computer Network Technology," "Precision Medicine," and "Cloud Computing" are growing topics, i.e., they have been investigated more in the last two years than in previous periods, gaining relevance in the academic community. Likewise, "Digital Economy," "Accuracy," and "Application" have appeared as emerging concepts that were not studied between 2013 and 2020 but are currently being addressed. All of this shows that the object of study is an evolving concept, adapting to the trends and needs of the environment and attracting academic attention because of its connection to current technologies.
References

1. A. Gandomi, M. Haider, Beyond the hype: big data concepts, methods, and analytics. Int. J. Inf. Manage. 35(2), 137–144 (2015). https://doi.org/10.1016/j.ijinfomgt.2014.10.007
2. D.E. O'Leary, Artificial intelligence and big data. IEEE Intell. Syst. 28(2), 96–99 (2013). https://doi.org/10.1109/MIS.2013.39
3. J.A. Mayor-Ríos, D.M. Pacheco-Ortiz, J.C. Patiño-Vanegas, S.E. Ramos-y-Yovera, Análisis de la integración del big data en los programas de contaduría pública en universidades acreditadas en Colombia. Rev. CEA 6(9), 53–76 (2019). https://doi.org/10.22430/24223182.1256
4. I. Arpaci, M. Al-Emran, M.A. Al-Sharafi, K. Shaalan, A novel approach for predicting the adoption of smartwatches using machine learning algorithms, in Recent Advances in Intelligent Systems and Smart Applications (Springer, Cham, 2021), pp. 185–195
5. J.H. Al Shamsi, M. Al-Emran, K. Shaalan, Understanding key drivers affecting students' use of artificial intelligence-based voice assistants. Educ. Inf. Technol., 1–21 (2022)
6. Y. Duan, J.S. Edwards, Y.K. Dwivedi, Artificial intelligence for decision making in the era of big data—evolution, challenges and research agenda. Int. J. Inf. Manage. 48, 63–71 (2019). https://doi.org/10.1016/j.ijinfomgt.2019.01.021
7. A.A. Saa, M. Al-Emran, K. Shaalan, Mining student information system records to predict students' academic performance, in International Conference on Advanced Machine Learning Technologies and Applications (Springer, Cham, 2019), pp. 229–239
8. S. Zaza, M. Al-Emran, Mining and exploration of credit cards data in UAE, in 2015 Fifth International Conference on e-Learning (econf) (IEEE, 2015), pp. 275–279
9. C. Pavlidis, J.C. Nebel, T. Katsila, G.P. Patrinos, Nutrigenomics 2.0: the need for ongoing and independent evaluation and synthesis of commercial nutrigenomics tests' scientific knowledge base for responsible innovation. OMICS 20(2), 65–68 (2016). https://doi.org/10.1089/omi.2015.0170
10. V. Özdemir, N. Hekim, Birth of industry 5.0: making sense of big data with artificial intelligence, "the internet of things" and next-generation technology policy. OMICS J. Integr. Biol. 22(1), 65–76 (2018). https://doi.org/10.1089/omi.2017.0194
11. V. Özdemir, G.P. Patrinos, David Bowie and the art of slow innovation: a fast-second winner strategy for biotechnology and precision medicine global development. OMICS: J. Integr. Biol. 21(11), 633–637 (2017). https://doi.org/10.1089/omi.2017.0148
12. L. Lin, Z. Hou, Combat COVID-19 with artificial intelligence and big data. J. Travel Med. 27(5) (2020). https://doi.org/10.1093/JTM/TAAA080
13. M. Al-Emran, M.N. Al-Kabi, G. Marques, A survey of using machine learning algorithms during the COVID-19 pandemic, in Emerging Technologies During the Era of COVID-19 Pandemic (Springer, Cham, 2021), pp. 1–8
14. B. Wang, Q. Zhang, F. Cui, Scientific research on ecosystem services and human well-being: a bibliometric analysis. Ecol. Ind. 125, 107449 (2021). https://doi.org/10.1016/j.ecolind.2021.107449
15. J. Baas, M. Schotten, A. Plume, G. Côté, R. Karimi, Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quant. Sci. Stud. 1(1), 377–386 (2020). https://doi.org/10.1162/qss_a_00019
16. M. Chankseliani, A. Lovakov, V. Pislyakov, A big picture: bibliometric study of academic publications from post-Soviet countries. Scientometrics 126(10), 8701–8730 (2021). https://doi.org/10.1007/s11192-021-04124-5
17. D. Morales-Zapata, A. Valencia-Arias, L.F. Garcés-Giraldo, E. Toro-Vanegas, J. Quiroz-Fabra, Trends in research around the sustainable development objectives: a bibliometric analysis, in Sustainable Development Goals for Society, ed. by G. Nhamo, M. Togo, K. Dube, 1st edn. (Springer, Cham, 2021), pp. 247–260. https://doi.org/10.1007/978-3-030-70948-8_17
18. O.A.G. Tantengco, I.M.C. Aquino, J.L.B. Asis, J.J.E. Tan, M.N.A.R. Uy, E.P. Pacheco, Research trends in gestational diabetes mellitus in Southeast Asia: a bibliometric analysis (1975–2020). Diabetes Metab. Syndr. 15(4), 102202 (2021). https://doi.org/10.1016/j.dsx.2021.102202
19. Y. Zhang, P. Geng, C.B. Sivaparthipan, B.A. Muthu, Big data and artificial intelligence based early risk warning system of fire hazard for smart cities. Sustain. Energ. Technol. Assess. 45, 100986 (2021). https://doi.org/10.1016/j.seta.2020.100986
20. Y. Jing, Y. Bian, Z. Hu, L. Wang, X.Q.S. Xie, Deep learning for drug design: an artificial intelligence paradigm for drug discovery in the big data era. AAPS J. 20(3), 1–10 (2018). https://doi.org/10.1208/s12248-018-0210-0
21. N.L. Bragazzi, H. Dai, G. Damiani, M. Behzadifar, M. Martini, J. Wu, How big data and artificial intelligence can help better manage the COVID-19 pandemic. Int. J. Environ. Res. Public Health 17(9), 3176 (2020). https://doi.org/10.3390/ijerph17093176
22. W. Qian, M. Na, Y. Zenan, S.M. Yue, L. Qing, Access data analysis technology and implementation of electric power big data achievement sharing platform through artificial intelligence. J. Phys. Conf. Ser. 2083(3), 032065 (2021). https://doi.org/10.1088/1742-6596/2083/3/032065
23. D. Jiang, Application of artificial intelligence in computer network technology in big data era, in 2021 International Conference on Big Data Analysis and Computer Science (BDACS) (2021), pp. 254–257. https://doi.org/10.1109/BDACS53596.2021.00063
24. D. Huang, Application of artificial intelligence technology in modern medical service system under the background of big data, in International Conference on Big Data Analytics for Cyber-Physical-Systems (Springer, Singapore, 2021), pp. 1205–1212. https://doi.org/10.1007/978-981-16-7466-2_133
25. I. Rodríguez-Rodríguez, J.V. Rodríguez, N. Shirvanizadeh, A. Ortiz, D.J. Pardo-Quiles, Applications of artificial intelligence, machine learning, big data and the internet of things to the COVID-19 pandemic: a scientometric review using text mining. Int. J. Environ. Res. Public Health 18(16), 8578 (2021). https://doi.org/10.3390/ijerph18168578
26. X. Wang, Application of big data and artificial intelligence technology in the prevention and control of COVID-19 epidemic. J. Phys. Conf. Ser. 1961(1), 012049 (2021). https://doi.org/10.1088/1742-6596/1961/1/012049
27. L. Xu, Application of artificial intelligence and big data in the security of regulatory places, in 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA) (IEEE, 2021), pp. 210–213. https://doi.org/10.1109/AIEA53260.2021.00052
28. A.J. Casey, A. Niblett, Self-driving laws. Univ. Toronto Law J. 66(4), 429–442 (2016). https://doi.org/10.3138/UTLJ.4006
29. M.K. Tripathi, A. Nath, T.P. Singh, A.S. Ethayathulla, P. Kaur, Evolving scenario of big data and artificial intelligence (AI) in drug discovery. Mol. Diversity, 1–22 (2021). https://doi.org/10.1007/s11030-021-10256-w
30. N.I. Gallini, A.A. Denisenko, D.T. Kamornitskiy, P.V. Chetyrbok, R.R. Timirgaleeva, Research on the use of big data and artificial intelligence in forecasting the labor force balance in the Russian Federation, in 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus) (IEEE, 2021), pp. 891–894. https://doi.org/10.1109/ElConRus51938.2021.9396531
31. M.M. Alshater, O.F. Atayah, A. Hamdan, Journal of sustainable finance and investment: a bibliometric analysis. J. Sustain. Financ. Invest., 1–22 (2021). https://doi.org/10.1080/20430795.2021.1947116
32. T. You, J. Yoon, O.H. Kwon, W.S. Jung, Tracing the evolution of physics with a keyword co-occurrence network. J. Korean Phys. Soc. 78(3), 236–243 (2021). https://doi.org/10.1007/s40042-020-00051-5
Recent Applications of Artificial Intelligence for Sustainable Development in Smart Cities

Tanweer Alam, Ruchi Gupta, Shamimul Qamar, and Arif Ullah
Abstract We are seeing an explosion in artificial intelligence (AI), which may be defined as technology that mimics the traits often associated with human intelligence. Today, many industries rely on AI, including marketing, banking, healthcare, security, robotics, transportation, chatbots, artificial creativity, and manufacturing. AI has recently started to play a significant role in smart city operations. AI describes how well computers can imitate human thought processes; expert systems, natural language processing, speech recognition, and machine vision are some of its areas. A smart city uses information and communication technologies to boost the economy, improve the quality of people's lives, and support governance. AI can play a significant role in making cities safer and better to live in by giving people more access to and control over their own homes, monitoring traffic, and managing waste. In this paper, the authors explore recent artificial intelligence applications for sustainable development in smart cities. These applications can serve (1) sustainable environmental plans that provide more and better facilities in the limited available area, and (2) an enhanced standard of living for urban residents at a lower cost.
T. Alam (B)
Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah, Saudi Arabia
e-mail: [email protected]

R. Gupta
Ajay Kumar Garg Engineering College, Abdul Kalam Technical University, Ghaziabad, Lucknow, India
e-mail: [email protected]

S. Qamar
Computer Science and Engineering, College of Sciences and Arts, King Khalid University, Dhahran Al Janub, Abha, Saudi Arabia
e-mail: [email protected]

A. Ullah
Department of Computer Science, Riphah International University, Faisalabad, Pakistan
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_8
Keywords Machine learning · Artificial intelligence · Sustainable development · Smart cities · Smart applications
1 Introduction

By 2050, more than two-thirds of the world's population is expected to live in cities, which represents a major opportunity for businesses that develop new technologies. It is estimated that by 2030 there will be 43 megacities, each with a population of at least 10 million people [1]. Several of the fastest-growing cities will have populations under one million, while others, in emerging regions and metropolitan areas, will become more liveable through better infrastructure and social management systems. AI will play a big part in making cities more sustainable by adding features that make it easier for people to live, move, and shop safely and pleasantly; this covers everything from administration and sanitation to traffic, security, and parking. AI is being used to help people stay in their homes longer while also making cities smarter. This kind of AI mimics how humans act or think and can be taught to solve problems. Machine learning and deep learning are both essential parts of AI, and AI systems trained on data can make smart decisions. AI is a hot topic in the computing world for good reason: in the last few years, many things that existed only in science fiction have started to become real. With AI, the global economy is expected to undergo a significant shift, with plenty of money to be earned; AI is predicted to have a worldwide economic impact of 15.7 trillion dollars by 2030 [2]. As a result of the AI revolution, China and the United States are expected to account for about 70% of that global change. Many people already use AI in their everyday lives, and it is becoming more critical in industries such as healthcare, entertainment, financial services, and education because it can deal with complicated problems quickly. AI is making our lives more enjoyable and more efficient.
1.1 Importance of AI

With the help of AI, some of the world's problems can be addressed, as shown below. AI technology can help people get a better sense of the world. In healthcare, AI is being used more and more; many people think AI will have a significant impact on the healthcare industry in the next five or ten years. People in the healthcare field increasingly rely on AI for faster and more accurate diagnoses, and AI can help doctors and nurses detect that a patient's health is worsening before the patient is hospitalized. AI may also be used in video games: AI systems can play chess, a game in which the system must reason about many different things at once [3].
In the financial field, these technologies are being used for everything from chatbots and adaptive intelligence to algorithmic trading. Cyberattacks are becoming more common in the digital age, and protecting data is essential for every business; AI can help keep data safe and secure. The AEG bot and the AI2 Platform, for example, are used to find software bugs and cyber-attacks more quickly. There are billions of people on Facebook, Twitter, and Snapchat, and each account needs to be managed well to keep it safe and working. AI can organize and manage large amounts of data very effectively and, by looking through that data, can surface the most recent trends, hashtags, and user needs [4]. AI is also becoming more popular in travel and transportation, where it can do everything from making reservations to suggesting the best routes; the travel industry uses AI to give customers better and faster service by behaving in human-like ways. Many car companies now use AI in the form of virtual assistants to serve their customers better; Tesla, for example, has created the intelligent virtual assistant Tesla Bot. Many businesses are currently developing autonomous cars to make travel more secure and safe. AI also plays a big part in robotics: it may be possible to create intelligent robots that act based on their own experiences rather than pre-programmed behaviour. Erica and Sophia are two examples of intelligent humanoid robots that can talk and act like humans, showing how robots can learn and improve over time. When we visit services like Netflix and Amazon to watch movies, we already use AI-based applications; some of these services use machine learning and AI algorithms to recommend content. Getting the best results in farming requires many resources, such as labour, money, and time. AI is becoming more common in agriculture as the sector digitizes, including agriculture robots, sensors that monitor crops and soils, and predictive analytics, and many farmers are enthusiastic about its use [5].
1.2 How Will AI Power Cities in the Future?

As smart cities become more than a futuristic notion, AI is taking centre stage. This transition is being led by advanced technologies that produce beneficial initiatives while reducing process time and costs across the board. Smart city, clean city, and net-zero objectives are all being pursued with the help of these technologies, which are quickly gaining popularity in the marketplace. People are also increasingly interested in e-commerce because of AI, which keeps improving [6]. In education, AI can handle grading so that teachers can spend more time teaching, and an AI chatbot can help students learn by talking to them. AI may one day act as a virtual teacher, letting students learn at their own pace and from any place.
1.3 AI Stats and Information

Statista predicts that the global AI software market will reach $126 billion by 2025, more than double its 2016 size. According to Gartner, nearly 37% of businesses have already used AI in one way or another, and the number of businesses using AI has grown by 270% over the last four years. Servion Global Solutions says that by 2025, AI will handle 95% of all customer interactions. According to Statista, the global AI software market is expected to grow by 54% year-over-year in 2022, bringing demand for AI software to $22.6 billion. AI technology can help organizations develop new capabilities, but there are open questions about its use: AI systems will keep repeating what they have already learned.
1.4 Machine Learning and AI

Many AI products use machine learning algorithms that are only as good as their training data, and this is a problem: if a person chooses which data is used to train an AI system, bias can enter its learning. Anyone planning to use machine learning in real-world, operational systems must consider how it will affect people's lives. This matters particularly for AI methods like deep learning and generative adversarial networks (GANs), which are inherently difficult to interpret. A lack of explainability may slow AI adoption in businesses that must follow strict rules: the federal government requires financial institutions in the United States, for example, to explain why they grant or deny loans. Because the AI tools used to make these judgements pick out minimal connections between many different variables, it can be hard to explain how a decision was reached. When the software's decision-making process cannot be inspected, this is called "black box AI" [7]. Despite the risks, there aren't many rules about how AI technology can be used, and when laws do exist, they are often worded vaguely. Fair lending regulations require U.S. financial institutions to explain their credit decisions to applicants; as a result, lenders cannot use algorithms whose decisions they cannot explain. Many businesses in the European Union must follow the EU's General Data Protection Regulation when training and operating AI systems that affect people [8, 9]. Because AI comprises a wide range of technologies that businesses use for various purposes, and because limits could slow AI progress and development, it will be hard to craft regulations to govern AI. The speed at which AI technology changes makes reasonable rule-making harder still: laws can become out of date as new technology and creative ideas appear. Virtual assistants like Amazon's Alexa and Apple's Siri, for instance, largely fall outside these laws; they collect conversation data but share it only with their technology teams, who use it to improve machine learning algorithms, and there are currently no laws that address this. Even if governments put AI
control rules in place, criminals will still find ways to misuse the technology. AI-enabled devices can do the following things:
1. Use speech recognition to identify a voice.
2. Detect objects that are already present.
3. Make sense of what has been learned and develop new ideas.
4. Rely on machine learning, which is at the heart of AI's learning ability.
1.5 Deep Learning

A deep learning system can be used to distinguish between different types of images submitted for review. The algorithm uses a method known as feature extraction to scan a large number of components in photos in order to identify them; based on these characteristics, images are classified into categories such as landscape and portrait. The images to be classified are fed to the input layer, where each node represents a pixel of the photograph. Hidden layers are then responsible for processing the data and extracting its features. The layers are connected by "weights": the input layer's values are multiplied by these weights, the weighted sums are accumulated in the hidden layer, and the results are passed on to the next layer of the stack. One may wonder why so many stages are needed: as the number of hidden layers grows, so does the complexity of the mapping between input and output, and the more expressive the input and hidden layers are, the more accurate the predicted output can be. Finally, the output layer delivers the classified images, deciding whether a photo is landscape or portrait once all weighted inputs are combined. The same principle applies beyond images: a computer can be trained by feeding it historical data on ticket pricing described by criteria such as departure time, origin, and destination, and the trained algorithm then projects prices for new data.
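To make the layer-and-weight description concrete, here is a toy sketch of such a forward pass. The shapes and weights are random placeholders rather than a trained model; training would adjust the weight matrices against labeled photos:

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(64 * 64)                 # flattened 64x64 grayscale image

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w1 = rng.normal(scale=0.05, size=(64 * 64, 128))   # input -> hidden layer 1
w2 = rng.normal(scale=0.05, size=(128, 32))        # hidden layer 1 -> hidden layer 2
w3 = rng.normal(scale=0.05, size=(32, 1))          # hidden layer 2 -> output

hidden1 = relu(pixels @ w1)      # weighted sums accumulated in the hidden layer
hidden2 = relu(hidden1 @ w2)
prob_landscape = sigmoid(hidden2 @ w3)[0]
print(f"P(landscape) = {prob_landscape:.3f}")
```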
1.6 AI and Smart Cities

AI refers to the ability of computers or computer-programmed robots to do things that used to be done by humans. The term "smart city" has been used in many ways; to become a smart city, however, a city must use ICT and AI to achieve long-term social, environmental, and economic
growth and improve the quality of life of its people [12–14]. Learning, reasoning, and self-correction are all, in some form, functions of the human mind that AI seeks to reproduce.
2 Application Scenarios Where AI Might Have a Beneficial Influence on Smart Cities

2.1 Managing Traffic

Today's cities are home to tens of thousands of individual automobiles and a considerable number of commercial vehicles that move both people and goods. The parking and traffic management of these vehicles is another area in which AI might be beneficial and which requires constant innovation. AI-based traffic management technologies, such as the ParKam solution, learn and predict traffic patterns and parking availability, reducing the inconvenience of city travel and parking.
2.2 Environment

As cities commit to a more sustainable future, the importance of AI-powered products and services that reduce environmental impact will grow. Last year, the Loughborough University Department of Computer Science and Engineering created an AI and machine learning (ML) system that can anticipate air pollution levels several hours in advance. As it processes enormous volumes of city data, the system learns rules and characteristics that allow it to make predictions. AI technology will be used to analyze acceptable particulate matter levels in cities, giving municipal officials the information they need to make educated pollution-control choices [16–18].
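As a hedged sketch of this kind of short-horizon forecasting (the Loughborough system itself is not public code), one can predict a pollutant reading a few hours ahead from lagged values of a synthetic series:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
hours = np.arange(500)
# Synthetic PM2.5 series with a daily cycle plus noise
pm25 = 40 + 15 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 3, hours.size)

lags, horizon = 6, 3        # use the last 6 hours to predict 3 hours ahead
X = np.stack([pm25[i : i + lags] for i in range(pm25.size - lags - horizon)])
y = pm25[lags + horizon :]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:-50], y[:-50])
print("held-out R^2:", model.score(X[-50:], y[-50:]))
```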
2.3 Optimization of Energy

AI technologies may help cities use power more efficiently, saving money and improving residents' lives. By monitoring and collecting data on people's movements, AI systems could be added to the mix to control energy distribution in busy locations. The AI would study energy-use patterns throughout the city and make energy supply choices based on that information [19, 20].
2.4 Public Transportation

AI has the potential to help cities improve mobility by reducing traffic congestion, better managing people's movements, and giving residents real-time information updates. Singapore is one city that puts a strong emphasis on connected vehicles, with plans to introduce self-driving public transport. Residents may get real-time traffic information via the city's data-rich intelligent public transit system powered by big data. On top of this infrastructure, AI might be used to analyze traffic patterns and allocate resources appropriately. Singapore is already one of the world's least congested cities, and AI has the potential to make it even less congested [21–23].
2.5 Management of Waste

AI-driven waste management systems are already in operation at several sites throughout the globe. Intelligent robots, for example, are cleaning up plastic pollution from Sydney's rivers and sorting garbage, recognizing recyclables for recovery and learning as they go.
3 Recent Applications of AI

If AI is used in smart cities correctly, it could change people's lives. AI can help cities and urban development in several ways, some of which are outlined below.
3.1 Cameras for High-Tech Surveillance and Security

Cameras and sensors with AI capabilities can keep an eye on the area around the city and help keep its people safe. These cameras may identify specific individuals, recognizing their faces and any unusual actions in certain parts of the city. AI security cameras can track all registered cars and monitor population density and cleanliness in public places around the clock, every day. Historical data from different local governments can help predict what kinds of crime are likely in a specific area.
[Fig. 1 Recent applications of AI]
3.2 Parking and Traffic Control System

People and goods in cities are moved by many private and commercial vehicles, which makes parking and traffic an area where AI can be very beneficial (Fig. 1). Road surface sensors or CCTV cameras installed in parking spaces can be used to build real-time maps of parking and traffic, so that cars avoid long waits for an empty parking space or move smoothly through traffic [24, 25]. AI-assisted traffic sensor systems may use cameras to gather real-time data from vehicles on the road and send it to a control centre, which analyses the data and adjusts traffic light timings to ensure a smooth flow of vehicles. Smart transportation also includes the public sector, where AI can help improve public transit; Uber and other ridesharing services already use AI to make their customers' trips more enjoyable.
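A minimal sketch of the real-time availability map described above; the zones, sensor identifiers, and readings are invented:

```python
from collections import defaultdict

sensor_readings = [                      # (zone, space_id, occupied)
    ("north", "N-01", True), ("north", "N-02", False),
    ("center", "C-01", True), ("center", "C-02", True), ("center", "C-03", False),
]

# Aggregate per-space occupancy into free spaces per zone
free_by_zone = defaultdict(list)
for zone, space, occupied in sensor_readings:
    if not occupied:
        free_by_zone[zone].append(space)

for zone, spaces in free_by_zone.items():
    print(f"{zone}: {len(spaces)} free ({', '.join(spaces)})")
```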
3.3 Control of Unmanned Flying Vehicles

Drones with AI or autonomous flight capabilities, together with other devices, can be used to keep an eye on the inner city, houses, and other places that might be dangerous. Drones with cameras can show real-time pictures of places that are hard to reach, which can help with the administration and security of those places. Self-flying drones can watch the
public, monitor road traffic, and provide detailed aerial maps for communities. Law enforcement and crime-prevention agencies could use them to improve security and surveillance. Detection cameras and movement detection systems are essential to keep the public safe. With the help of AI, it is possible to recognize specific people by their faces and identify them by name: AI in surveillance cameras or drones can recognize faces and compare them to a database of enrolled people, allowing individuals to prove who they are [26, 27]. Face detection technology of this level is important for smart cities because it supports both privacy and security.
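A minimal sketch of the matching step in such a face-recognition pipeline: a probe embedding is compared against enrolled embeddings by cosine similarity. In practice the embeddings would come from a trained face model (for example, a FaceNet-style network); here they are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical enrolled database: name -> 128-dim face embedding
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}
probe = database["bob"] + rng.normal(scale=0.1, size=128)   # noisy view of "bob"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(database, key=lambda name: cosine(probe, database[name]))
score = cosine(probe, database[best])
# Below the threshold, the face is treated as unknown rather than misidentified
print(best if score > 0.8 else "unknown", f"(similarity {score:.2f})")
```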
3.4 Intelligence-Based System for Managing and Disposing of Waste

City waste management is also a problem, because so many people live in cities and throw away large amounts of garbage. Today, many countries are trying to figure out how to deal with the waste that civilization produces while keeping the environment clean and safe. AI cameras can distinguish between the different types of waste that people throw out on the street and classify it accordingly [28, 29]. Installing AI-enabled sensors on garbage bins could make disposal and collection more efficient: authorities can be notified when a bin is close to full, adjust routes and schedules accordingly, and avoid unnecessary pickups, keeping garbage management at its best.
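A minimal sketch of the bin-monitoring idea, with invented bin identifiers and fill levels: bins whose fill-level sensor exceeds a threshold are flagged for collection:

```python
FILL_ALERT = 0.8        # notify once a bin is 80% full

bins = {"bin-12": 0.45, "bin-17": 0.92, "bin-23": 0.83, "bin-31": 0.10}

# Collect only the bins that need emptying, fullest first
to_collect = sorted((bid for bid, level in bins.items() if level >= FILL_ALERT),
                    key=lambda bid: -bins[bid])
print("schedule pickup for:", to_collect)
```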
3.5 Controlling and Organizing the Government and Making Plans

When planning new cities or urban townships, satellite data and 2D or 3D aerial images can be used to produce maps that show how land use has changed over time. In the future, satellite images could also support city planning and development by highlighting areas prone to flooding, earthquakes, and storms, as well as areas at lower risk. Machine learning algorithms can regularly analyse real-time and historical data to improve governance. E-commerce and AI also reinforce each other: AI helps build recommendation engines that connect better with customers by taking browsing history and preferences into account, which improves customer service and brand loyalty.
3.6 Artificial-Intelligence-Enhanced Helpers

Studies have found that chatbots and virtual helpers improve customer satisfaction in online shopping. Natural Language Processing (NLP) is used to make the wording more natural, so the conversation seems more real and personal, and human staff can also talk with customers in real time. Customer service at Amazon.com, for instance, is expected to be handled increasingly by chatbots [30, 31]. E-commerce businesses also struggle with fake reviews and credit card fraud. AI can help cut credit card fraud by analysing how people use their cards, and, since many people buy a product or service based on what others say about it, AI can make it easier to spot and deal with fake reviews [32].
3.7 Machine Learning in the Classroom

Because humans still hold the central role in education, AI is making its way into the sector slowly but surely. With the rise of AI, teachers can spend more time with their students and less time on administrative and office work.
3.8 Assisting Teachers with Automated Office Work

Some jobs not directly related to teaching might be handled by AI, such as managing enrolment and courses, HR-related issues, individual messages to students, and back-office tasks like grading papers and organizing contact with parents and guardians.
3.9 Putting Thoughtful Content Together

AI technology can digitize video lectures, conference calls, and textbook guides. Alternate user interfaces, like animations and educational content, can be created for various grade levels. A rich learning experience can be built by providing audio and video summaries and incorporating AI-generated lesson plans. Students can use voice assistants, without the help of a lecturer or teacher, to get more information or help. Short-term handbooks can be produced at lower cost while still giving quick and easy answers to common questions.
3.10 Learning that Adapts to the Needs of the Person Learning

AI and hyper-personalization methods make it possible to track all of a student's data and produce study aids, flash notes, lesson plans, and other resources tailored so that each student learns more quickly.
3.11 AI Will Change Our Daily Lives

AI has made a massive difference in how humans live. Let's look at a few examples.
1. Self-driving cars are becoming more common. Machine learning helps car companies like Toyota, Audi, Volvo, and Tesla build cars that can drive in any environment and avoid accidents through object detection.
2. Software to stop spam. In the email we use every day, AI moves spam messages to the trash or spam folder so that we see only what matters; Gmail filters spam with 99.9% accuracy, which helps explain why so many people use it. (A minimal spam-filter sketch appears at the end of this section.)
3. Recognizing faces and expressions. Smartphones, laptops, and PCs use face-recognition algorithms to ensure that only authorized people can use them; face recognition is applied in many industries, including highly secure ones.
4. Recommendation engines help you find things you like. Many of the websites and apps we use daily, such as e-commerce sites, entertainment services like Netflix, social networks like Facebook, and video-sharing sites like YouTube, gather information about users to make personalized recommendations. This AI application is used in almost every industry.
5. Using AI for navigation. According to research from MIT, GPS technology can help keep people safe by giving them accurate, timely, and complete information. Such systems use convolutional and graph neural networks to make users' lives easier. Companies like Uber and logistics firms use AI to improve operations, monitor traffic, and plan the best routes for their vehicles.
6. Use AI to make tasks easier. Robotics and healthcare are two other industries that use AI. AI-powered robots need real-time information to avoid obstacles and quickly change their routes, and they can be used for many things: moving products in hospitals, factories, and warehouses, and thoroughly cleaning large machinery and workplaces [33].
AI is also used to manage human resources. Businesses use intelligent tools to speed up hiring new employees, and machine learning can be used to assess how good an applicant is based on defined criteria. Recruiters may use automated systems to review job seekers' profiles and resumes and identify the best candidates.
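As referenced in item 2 above, here is a minimal sketch of a bag-of-words spam filter on toy data; real deployments train on millions of labeled messages:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["win a free prize now", "cheap pills limited offer",
               "meeting moved to 3pm", "please review the attached report"]
train_labels = ["spam", "spam", "ham", "ham"]

# Word counts feed a multinomial Naive Bayes classifier
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["free offer win now", "see the report before the meeting"]))
```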
3.12 AI in the Healthcare Field

AI can be used in many ways in the healthcare business. It is being used to build medical devices that distinguish between healthy and cancerous cells, and it can work with lab and other medical data to ensure that chronic diseases are found early. AI also combines historical data and medical knowledge to help develop new drugs.
3.13 AI Is Being Used in Agriculture

AI is used to find problems with the soil, such as nutrient deficiencies. Using robotics, computer vision, and machine learning, AI can find weeds and remove them. AI bots can also harvest crops faster and more efficiently than human labour.
3.14 Games that Use AI

AI has also become more popular in the gaming industry. AI can create smart, human-like NPCs that interact with players, and it can predict how people will behave when developing and testing games. In Alien: Isolation, released in 2014, an AI-controlled alien follows the player throughout the game: the Director AI, which always knows where the player is, and the Alien AI, guided by sensors and behaviours, work together to keep the game challenging and interesting.
3.15 AI Can Be Used in the Automotive Industry

Self-driving cars are built using AI. While the driver is driving, AI can help by combining input from cameras, radar, GPS, cloud services, and control signals. Technologies like emergency braking, blind-spot monitoring, and steering assistance could further improve the in-car experience.
3.16 AI Can Be Used in Social Media to Help People

1. Instagram. Instagram uses AI to choose the posts you see in the Explore tab based on your interests and the people you follow.
2. Facebook. Facebook uses AI, including its DeepText technology, to better understand what people are talking about; it can also be used to translate messages from one language to another automatically.
3. Twitter. Twitter uses AI to find fraud, remove propaganda, and take down hateful content. Twitter also uses AI to suggest tweets to users based on how they interact with the tweets they see on the service.
3.17 AI Is Becoming More Common in Marketing

Marketers can target and personalize their ads more effectively with the help of AI tools such as behavioural analysis and pattern recognition. People who are retargeted at the right time are more likely to respond well and less likely to develop negative feelings such as distrust and dissatisfaction. AI can improve content marketing in many ways, including performance tracking and campaign analysis. Conversational AI can read what a user says and respond in a human-like way, using Natural Language Processing, Natural Language Generation, and Natural Language Understanding. AI can also adapt and improve marketing campaigns in real time to match the needs of specific markets, based on how customers behave.
3.18 Chatbots that Use AI

AI can identify and assist people who use a "live chat" option for customer service. AI chatbots rely on machine learning and can be deployed on various websites and apps. They can build up a database of responses and also draw on a set of integrated replies. As AI improves, customers can count on these chatbots to answer their questions and help them around the clock, and they may increasingly use them to their advantage.
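As a rough illustration of how such chatbots match a question to a stored answer, here is a minimal retrieval-style sketch. The FAQ entries are invented placeholders, and real systems replace the string matching with trained language models.

```python
# Minimal retrieval-style chatbot: match the user's message to the closest
# stored question. The FAQ content below is illustrative, not a real product.
import difflib

faq = {
    "what are your opening hours": "We are available 24/7 through this chat.",
    "how do i track my order": "You can track your order under 'My Orders'.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def reply(user_message: str) -> str:
    # Find the stored question most similar to the user's message.
    match = difflib.get_close_matches(user_message.lower(), faq.keys(), n=1, cutoff=0.4)
    if match:
        return faq[match[0]]
    return "Sorry, let me connect you with a human agent."

print(reply("How can I track my order?"))
```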
3.19 AI Can Be Used in Finance

Recent surveys show that most banks are aware of the benefits of AI. From personal finance to commercial finance to consumer financing, AI's cutting-edge technology could improve many services. For example, customers who need help with wealth management solutions might get it through SMS text messages or AI-driven online chats. AI can spot changes in transaction patterns and other signals of possible fraud that humans are likely to overlook, potentially saving businesses and individuals a great deal of money. Beyond detecting fraud and automating tasks, AI can also better predict and assess the risk of granting a loan.
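The fraud-spotting idea above can be sketched with a standard anomaly detector. The transaction amounts and contamination rate below are invented for illustration; real systems use many more features per transaction.

```python
# Minimal sketch of pattern-based fraud spotting with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Typical daily transaction amounts (in dollars) plus one outlier.
amounts = np.array([[12.5], [40.0], [22.0], [35.5], [18.0], [27.0], [4200.0]])

detector = IsolationForest(contamination=0.15, random_state=0).fit(amounts)
flags = detector.predict(amounts)  # -1 marks a suspected anomaly

for amount, flag in zip(amounts.ravel(), flags):
    if flag == -1:
        print(f"Possible fraud: transaction of ${amount:.2f}")
```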
4 Challenges and Future Research Directions

We already see how the digital revolution has changed the way people live, work, and communicate with each other, and this is just the start. However, the same technologies that could help billions of people worldwide also pose significant challenges for individuals and governments. People and machines will work together in the future. As part of a workshop discussion on human–machine teams, participants considered future scenarios in small groups. Three scenarios were examined.

Helpful technology. One group discussed how an intelligent assistant could help humans make decisions and learn independently. An "intel" assistant asks questions to fill knowledge gaps and is flexible enough to meet the needs of its users, providing relevant information when the user needs it. A "cyber-security" assistant keeps an eye out for security threats, "knows" what information is essential, and comes up with new, dynamic ways to solve problems. A "cooperative learning" assistant helps both students and teachers learn even more.

Naval maintenance. A second group looked at how humans and machines might work together on the maintenance teams of naval ships, splitting up the work to reach a common goal. Working with complicated systems, the intelligent machine will understand and predict what its human partner will do. To adapt to its environment,
it will be able to reason about causation, time, space, and identity in a very sophisticated way.

Disaster relief. When there is an accident, disaster relief teams come to help, and a third group suggested that human–machine teams may assist in disaster search-and-rescue and recovery operations. Autonomous systems would work together with people to find survivors (autonomous air vehicles), clear up after a disaster (autonomous movers), and give medical treatment (robotic medics). They would adapt quickly to changes in the environment and learn from their experiences and interactions with people.
4.1 AI of Things (AIoT)

In a blockchain-envisioned secure authentication framework for AIoT, many different kinds of authentication processes are needed before communication entities can talk to each other. A wide range of authentication methods is used, including 2-factor and 3-factor authentication, and both certificate-based and certificate-less schemes. However, even solutions that claim to be completely secure may still prove vulnerable in practice. Security systems therefore need to withstand a wide range of attacks and intrusions, and their security should be demonstrated formally, using methods such as the Real-or-Random (RoR) model, AVISPA simulation, and BAN logic. More research is needed here.

A blockchain-designed secure authentication architecture for AIoT has essential components such as blockchain and AI, which often rely on the consensus method and deep learning algorithms; both take a lot of space and time. Using these methods requires computing power, network bandwidth, and storage space, and running such a system on a tight budget is hard. Frameworks should therefore be built with the fewest resources possible. Algorithm selection can save both time and money: PBFT, for example, is preferable to PoW because it consumes far fewer resources, and lightweight cryptographic algorithms reduce transmission, processing, and storage costs while still providing a level of security comparable to heavier alternatives.
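As an illustration of the lightweight-cryptography argument, the sketch below shows symmetric, HMAC-based challenge-response authentication between an IoT device and a server. It is an assumption-laden toy, not a scheme from the cited frameworks: the pre-shared key and the provisioning step are placeholders.

```python
# Minimal lightweight device authentication: the device proves knowledge of a
# pre-shared key by computing an HMAC over a fresh server challenge, avoiding
# heavyweight public-key operations on the constrained device.
import hmac
import hashlib
import os

PRE_SHARED_KEY = b"per-device-secret-key"  # placeholder; provisioned out of band

def device_response(challenge: bytes) -> bytes:
    # Device side: MAC the challenge with the shared key.
    return hmac.new(PRE_SHARED_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(PRE_SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

challenge = os.urandom(16)            # fresh nonce prevents replay attacks
response = device_response(challenge)
print(server_verify(challenge, response))  # True
```

Compared with certificate-based schemes, this requires only a hash computation on the device, which is the kind of resource saving the text argues for.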
4.2 The Ability of Tools and Technologies to Work Together

Such a framework is made up of tools and technologies from AI, IoT, and blockchain, combining consensus algorithms, deep learning, and IoT communication protocols. In this environment, tools, technologies, and devices may fail to interoperate, and that failure could take down smart IoT devices on a large scale. Interoperability must therefore be handled with care, and more research is needed.
4.3 Making Sure Your Privacy Is Protected

Open ledgers, also called distributed ledgers, are becoming more important today, and blockchain is a big part of that trend. In certain configurations (i.e., a public blockchain) the ledger is visible to everyone, which could make it risky to use in an environment such as a healthcare data management system. The ledger should be restructured so that only authorized parties can read it, and various blockchain subcategories make this possible (for example, a private blockchain can be preferred when more privacy is needed). For the sake of privacy, devices that send data over the Internet should do so securely so that hackers cannot exploit it, and IoT users should apply strong, modern encryption to keep their data away from unwanted parties. At the same time, communication entities (i.e., IoT devices, cloud servers, and users) need to verify one another through mutual authentication, and an access control system can keep out parties that are not supposed to be there. The system's designer should understand clearly which technique serves which purpose. More research is needed to improve the framework's privacy.
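To illustrate the advice that IoT devices should encrypt data in transit, here is a minimal sketch using the third-party cryptography package's Fernet primitive (AES-based authenticated encryption). The sensor payload and the in-memory key handling are illustrative assumptions; real deployments provision and store keys securely.

```python
# Minimal sketch of encrypting an IoT payload before transmission.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # placeholder; in practice provisioned securely
cipher = Fernet(key)

payload = b'{"device": "sensor-42", "temp_c": 21.7}'
token = cipher.encrypt(payload)        # safe to send over the network
print(cipher.decrypt(token))           # receiver with the key recovers the data
```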
4.4 Making the System More Accurate

Because AI is a major component of these frameworks, bias can creep in (i.e., misleading accuracy values). Anyone using machine learning and AI has to pay attention to the input data and its quality: if the algorithms or data are not up to par, good predictions (results) will be hard to obtain. Suppose, for example, that an algorithm predicts a massive heart attack for a person who is completely healthy; such an outcome would clearly be harmful. The training method and the given data set are essential for systems like this, so these problems need to be fixed.
5 Examples of AI

Here are examples of AI in action in the real world.
5.1 Maps and How to Use Them

Travel has become much easier thanks to AI. You can enter your location into Waze, Google Maps, or Apple Maps to get directions to your destination on your phone.
So how does the program know where to go, and how does it weigh the best routes, obstacles, and traffic jams? For a long time, people could only use satellite-based GPS; now AI is making the experience better. The algorithms use machine learning to remember the layout of structures, making house and building numbers easier to see and understand on a map. To cut down on congestion, the software has also been taught to detect and reason about changes in traffic flow.
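The route-planning step can be illustrated with a classic shortest-path search. The toy road graph below is an invented assumption, with edge weights standing in for the travel times that AI-based systems actually predict.

```python
# Minimal route planning: Dijkstra's shortest path over a toy road graph.
import heapq

graph = {  # node -> [(neighbour, minutes)]
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}

def shortest_time(start: str, goal: str) -> float:
    queue = [(0, start)]          # (accumulated minutes, node)
    best = {start: 0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        for neighbour, minutes in graph[node]:
            new_cost = cost + minutes
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return float("inf")

print(shortest_time("A", "D"))  # 7 (A -> C -> B -> D)
```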
5.2 Facial Expressions Can Be Read and Detected

Face ID and virtual filters when taking pictures are two examples of how AI is becoming more and more a part of our daily lives. Virtual filters rely on face detection, which can locate almost any human face in an image, while Face ID relies on facial recognition, which identifies a specific person. Facial recognition is also used to keep people safe at government buildings and airports.
5.3 Autocorrection and Text Editors

AI techniques use machine learning, deep learning, and natural language processing to find mistakes in word processors, messaging apps, and other text-based media. Machines are taught grammar much as you were taught at school, with computer scientists and linguists working together on the task. To make sure you do not misplace a comma, editors rely on algorithms trained on high-quality linguistic data.
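A toy version of such a corrector can be built from similarity matching against a word list. The dictionary below is an invented placeholder; real editors rank candidates with statistical language models trained on large corpora.

```python
# Minimal autocorrect: pick the dictionary word closest to the typed word.
import difflib

DICTIONARY = ["because", "receive", "separate", "necessary", "grammar"]

def autocorrect(word: str) -> str:
    match = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.6)
    return match[0] if match else word

print(autocorrect("recieve"))   # receive
print(autocorrect("seperate"))  # separate
```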
6 Discussion

In most cases, when you choose a movie to watch or buy something online, you will see products that match your interests and recent search history. These algorithms have long been learning about you and your preferences by watching your online activity: machine learning and deep learning are used to analyze the data you have provided, infer what you like, and suggest what to buy or listen to next.

Customer service takes a lot of time and can be frustrating, and businesses find it an inefficient, costly, and difficult department to run. Artificially intelligent chatbots are therefore becoming more popular. Pre-configured algorithms let machines answer questions, track orders, and direct calls, and customer service representatives use natural language processing to teach their chatbots to speak as they do. More advanced chatbots no longer need constrained input types (such as yes/no questions) to work well, and some can answer tough questions that would otherwise take a long time to resolve.
Whenever you give a bot's answer a bad rating, it learns what went wrong and tries to ensure it does not happen again. When our hands are full, we often rely on digital assistants to get things done. During a drive, you can ask your assistant to call your mother (don't text and drive, kids!). Siri is an example of an AI that can search your contacts, recognize the word "Mom," and call the number. Assistants like these use natural language processing (NLP), machine learning (ML), statistics, and algorithms to figure out what you want and try to get it. Voice searches work in much the same way as picture searches.

Social networks are another example. Facebook, Twitter, Instagram, and other platforms use AI to keep track of your activity, suggest connections, and show ads to the people most likely to be interested. AI systems analyze phrases and images to find and remove content that does not follow the rules, and deep-learning neural networks also connect users with the advertisers and marketers who find their profiles valuable. AI on social media can likewise figure out what kind of content a person likes and recommend similar material.

Electronic payments are a further example. Going to a bank for every transaction is a lot of work, and thanks to AI many customers have not set foot in a branch for years: more and more banks use AI to make it easier to pay bills, deposit money, transfer funds, or even open an account from anywhere, backed by AI-powered security, identity management, and privacy rules. AI also detects possible fraud by watching how people use their credit cards: algorithms learn what a person usually buys, when and where, and how much it typically costs.
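The recommendation logic that opens this section can be sketched as a similarity computation between a user profile and item features. The genre vectors and the inferred user profile below are invented for illustration; production recommenders learn such representations from massive interaction data.

```python
# Minimal content-based recommender: rank items by cosine similarity between
# a user's taste vector and item feature vectors (illustrative values).
import numpy as np

# Items described by (action, comedy, documentary) scores.
items = {
    "Film A": np.array([0.9, 0.1, 0.0]),
    "Film B": np.array([0.1, 0.8, 0.1]),
    "Film C": np.array([0.8, 0.2, 0.0]),
}
user_profile = np.array([0.85, 0.15, 0.0])  # inferred from viewing history

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(items, key=lambda name: cosine(user_profile, items[name]), reverse=True)
print("Recommended first:", ranked[0])
```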
7 Conclusion

AI applications are changing businesses and making it easier to solve complex problems. AI could, for example, make cities more liveable, pleasant, and safe by improving security systems, traffic monitoring, and waste management, making people's daily lives easier and more enjoyable. Training models such as drones, AI security cameras, and face-recognition systems requires a great deal of data from smart cities, and this training data is needed to keep smart cities smart over the long term. Smart cities could be built with AI and machine-learning systems that use Cogito's training datasets, which apply computer vision to locate objects of interest in photos for data-labelling projects and image-annotation services.
References

1. A. Lardieri, in Report: Two-Thirds of World's Population Will Live in Cities by 2050 (2018). http://surl.li/boisi
2. PWC, in US$320 billion by 2030? (2022). https://www.pwc.com/m1/en/publications/potential-impact-artificial-intelligence-middle-east.html
3. D. Luckey, H. Fritz, D. Legatiuk, K. Dragos, K. Smarsly, in Artificial Intelligence Techniques for Smart City Applications, International Conference on Computing in Civil and Building Engineering (Springer, Cham, 2020), pp. 3–15. https://doi.org/10.1007/978-3-030-51295-8_1
4. M. Batty, Artificial intelligence and smart cities. Environ. Plan. B: Urban Anal. City Sci. 45(1), 3–6 (2018). https://doi.org/10.1177/2399808317751169
5. A.I. Voda, L.D. Radu, Artificial intelligence and the future of smart cities. BRAIN. Broad Res. Artif. Intell. Neurosci. 9(2), 110–127 (2018)
6. T. Alam, Blockchain cities: the futuristic cities driven by Blockchain, big data and Internet of things. GeoJournal 1–30 (2021). https://doi.org/10.1007/s10708-021-10508-0
7. Z. Allam, Z.A. Dhunny, On big data, artificial intelligence and smart cities. Cities 89, 80–91 (2019). https://doi.org/10.1016/j.cities.2019.01.032
8. S. Chatterjee, A.K. Kar, M.P. Gupta, Success of IoT in smart cities of India: an empirical analysis. Gov. Inf. Q. 35(3), 349–361 (2018). https://doi.org/10.1016/j.giq.2018.05.002
9. N. Thakur, P. Nagrath, R. Jain, D. Saini, N. Sharma, D.J. Hemanth, Artificial intelligence techniques in smart cities surveillance using UAVs: a survey. Mach. Intell. Data Anal. Sustain. Future Smart Cities 329–353 (2021). https://doi.org/10.1007/978-3-030-72065-0_18
10. T. Alam, A. Ullah, M. Benaida, Deep reinforcement learning approach for computation offloading in blockchain-enabled communications systems. J. Ambient Intell. Hum. Comput. (2022). https://doi.org/10.1007/s12652-021-03663-2
11. D. Durairaj, T.K. Venkatasamy, A. Mehbodniya, S. Umar, T. Alam, Intrusion detection and mitigation of attacks in microgrid using enhanced deep belief network. Energy Sour. Part A Recov. Utilization Environ. Effects (2022). https://doi.org/10.1080/15567036.2021.2023237
12. T. Alam, Blockchain-based big data integrity service framework for IoT devices data processing in smart cities. Mindanao J. Sci. Technol. (2021). https://doi.org/10.2139/ssrn.3869042
13. T. Alam, Cloud-based IoT applications and their roles in smart cities. Smart Cities 4(3), 1196–1219 (2021). https://doi.org/10.3390/smartcities4030064
14. M. Al-Emran, R. Al-Maroof, M.A. Al-Sharafi, I. Arpaci, What impacts learning with wearables? An integrated theoretical model. Interact. Learn. Environ. 1–21 (2020)
15. Z. Ullah, F. Al-Turjman, L. Mostarda, R. Gagliardi, Applications of artificial intelligence and machine learning in smart cities. Comput. Commun. 154, 313–323 (2020). https://doi.org/10.1016/j.comcom.2020.02.069
16. T. Liu, F. Sabrina, J. Jang-Jaccard, W. Xu, Y. Wei, Artificial intelligence-enabled DDoS detection for blockchain-based smart transport systems. Sensors 22(1), 32 (2022). https://doi.org/10.3390/s22010032
17. M. Al-Emran, V. Mezhuyev, A. Kamaludin, M. ALSinani, Development of M-learning application based on knowledge management processes, in Proceedings of the 2018 7th International Conference on Software and Computer Applications (2018), pp. 248–253
18. T. Alam, IBchain: internet of things and Blockchain integration approach for secure communication in smart cities. Informatica 45(3) (2021). https://doi.org/10.31449/inf.v45i3.3573
19. K. Alanne, S. Sierla, An overview of machine learning applications for smart buildings. Sustain. Cities Soc. 76, 103445 (2022). https://doi.org/10.1016/j.scs.2021.103445
20. M. Alshurideh, B. Al Kurdi, S.A. Salloum, I. Arpaci, M. Al-Emran, Predicting the actual use of m-learning systems: a comparative approach using PLS-SEM and machine learning algorithms. Interact. Learn. Environ. 1–15 (2020)
21. M. Al-Emran, G.A. Abbasi, V. Mezhuyev, Evaluating the impact of knowledge management factors on M-learning adoption: a deep learning-based hybrid SEM-ANN approach, in Recent Advances in Technology Acceptance Models and Theories (Springer, Cham, 2021), pp. 159–172
22. M.A. Al-Sharafi, N. Al-Qaysi, N.A. Iahad, M. Al-Emran, Evaluating the sustainable use of mobile payment contactless technologies within and beyond the COVID-19 pandemic using a hybrid SEM-ANN approach. Int. J. Bank Market. (2021)
23. D. Hema, Smart healthcare IoT applications using AI, in Integrating AI in IoT Analytics on the Cloud for Healthcare Applications (IGI Global, 2022), pp. 238–257. https://doi.org/10.4018/978-1-7998-9132-1.ch014
24. T. Alam, M. Tajammul, R. Gupta, Towards the sustainable development of smart cities through cloud computing, in AI and IoT for Smart City Applications (Springer, Singapore, 2022), pp. 199–222. https://doi.org/10.1007/978-981-16-7498-3_13
25. T. Alam, Federated learning approach for privacy-preserving on the D2D communication in IoT, in International Conference on Emerging Technologies and Intelligent Systems (Springer, Cham, 2021), pp. 369–380. https://doi.org/10.1007/978-3-030-85990-9_31
26. T. Bhardwaj, H. Upadhyay, L. Lagos, Deep learning-based cyber security solutions for smart-city: application and review, in Artificial Intelligence in Industrial Applications (Springer, Cham, 2022), pp. 175–192. https://doi.org/10.1007/978-3-030-85383-9_12
27. J. Aguilar, A. Garces-Jimenez, M.D. R-Moreno, R. García, A systematic literature review on the use of artificial intelligence in energy self-management in smart buildings. Renew. Sustain. Energy Rev. 151, 111530 (2021). https://doi.org/10.1016/j.rser.2021.111530
28. P. Kumar, A.J. Obaid, K. Cengiz, A. Khanna, V.E. Balas, A fusion of artificial intelligence and internet of things for emerging cyber systems (2022). https://doi.org/10.1007/978-3-030-76653-5
29. T.M. Ghazal, M.K. Hasan, M.T. Alshurideh, H.M. Alzoubi, M. Ahmad, S.S. Akbar, I.A. Akour, IoT for smart cities: machine learning approaches in smart healthcare—a review. Fut. Internet 13(8), 218 (2021). https://doi.org/10.3390/fi13080218
30. G. Alam, I. Ihsanullah, M. Naushad, M. Sillanpää, Applications of artificial intelligence in water treatment for optimization and automation of adsorption processes: recent advances and prospects. Chem. Eng. J. 427, 130011 (2022). https://doi.org/10.1016/j.cej.2021.130011
31. M. Molinara, A. Bria, S. De Vito, C. Marrocco, Artificial intelligence for distributed smart systems. Pattern Recogn. Lett. 142, 48–50 (2021). https://doi.org/10.1016/j.patrec.2020.12.006
32. O. Samuel, N. Javaid, T.A. Alghamdi, N. Kumar, Towards sustainable smart cities: a secure and scalable trading system for residential homes using Blockchain and artificial intelligence. Sustain. Cities Soc. 76, 103371 (2022). https://doi.org/10.1016/j.scs.2021.103371
33. R. Gupta, T. Alam, Survey on federated-learning approaches in distributed environment. Wireless Personal Communications (2022), pp. 1–22. https://doi.org/10.1007/s11277-022-09624-y
The Relevance of Individuals' Perceived Data Protection Level on Intention to Use Blockchain-Based Mobile Apps: An Experimental Study

Andrea Sestino, Luca Giraldi, Elena Cedrola, and Gianluigi Guido
Abstract Blockchain technologies are enabling unprecedented opportunities. In the healthcare business, for example, blockchain solutions can store sensitive patient data more securely and improve how rapidly healthcare systems worldwide respond to emergencies such as the current pandemic (i.e., tracking outbreaks and vaccines, distributing test results). Past literature shows that blockchain-based applications and services may be perceived as more secure than other technologies, but their effects on an individual's intention to use them are still unclear. Through an experimental research design, we carried out an experiment in which a group's perception of their data protection level was manipulated (low levels for traditional mobile apps vs. high levels for blockchain-based mobile apps) by employing a fictitious mobile app. Results show that higher perceived data protection levels positively influence the intention to use blockchain-based mobile apps. Implications and opportunities for marketers, managers, and policymakers are discussed together with limitations and suggestions for future research. Keywords Blockchain · Mobile apps · Healthcare · Data protection · Perceived data protection level · Need for security
A. Sestino (B) Ionian Department of Law, Economics, Environment, University of Bari Aldo Moro, Taranto, Italy e-mail: [email protected] L. Giraldi · E. Cedrola Department of Economics, University of Macerata, Macerata, Italy e-mail: [email protected] E. Cedrola e-mail: [email protected] G. Guido Department of Management and Economic Science, University of Salento, Lecce, Italy e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_9
1 Introduction

Blockchain is a collection of digital information stored in data blocks [1], which have a limited storage capacity and form new, unchangeable links in the chain once they are filled and connected to the following block. Each block in the chain has a string created by a mathematical algorithm (i.e., a hash function) that binds each block to the next and protects the sequence from manipulation [2]. Moreover, blockchain is not managed by a central server but relies on a widespread network of computers that update their copy of the blockchain whenever a new block is added, guaranteeing maximum security of the information stored and shared via the blockchain [3]. Despite being a novelty, the opportunities arising from blockchain technology have increasingly captured interest in managerial practice and research, offering new ways to revitalise businesses [4]. Companies in various private and public sectors such as healthcare, tourism, energy, and insurance have already started actively working with blockchain technology [5]. As Botene et al. [6] express, blockchain is considered one of many enabling technologies presently being exploited in "creating improvements and responses to the difficulties created by the pandemic, and blockchain is one of these solution proposals". Interestingly, blockchain solutions have securely stored sensitive patient data within the system (Tandon et al. 2020). With the limitations of healthcare systems exposed by the COVID-19 pandemic, blockchain technology has emerged as an ideal tool to improve the response of healthcare systems to the current pandemic (e.g., [7, 8]). Blockchain technology may also play a fundamental role in improving clinical trial data management by reducing delays in regulatory approvals, streamlining communication between stakeholders in supply chains, and ensuring that information received by the public and government agencies is reliable and trustworthy [9]. Indeed, blockchain adoption may effectively help in the fight against the pandemic, considering that its main features can support many activities such as tracking, drug distribution, and information management. Since the COVID-19 pandemic has revolutionized human interactions [4], global policymakers have suggested a series of corrective actions regarding social distancing and limiting contacts to contain and curb the spread of infections. Because of these provisions, several countries have designed mobile applications (mobile apps) for tracking an individual's infection status, which in some cases can communicate it to other individuals with whom they have been in close contact to reduce the spread of the infection (e.g., "Immuni" in Italy, "TraceTogether" in Singapore, "COVID-Watch" in the USA, and so on) [10]. These apps were purposefully built to combat the COVID-19 outbreak, warning users who had been exposed to the risk of infection. Users warned by the app could isolate themselves to avoid infecting others, helping to contain the pandemic and assisting a quicker return to everyday life; they could also promptly contact their general medical practitioner to further reduce the risk of complications. Individuals who had come into close
contact with a user who tested positive for the COVID-19 virus received a notification warning them of the potential risk of being infected. Thanks to Bluetooth Low Energy technology, the tracking took place without collecting data on the user's identity or location. For instance, in Italy, one of the countries most affected by the pandemic, the "Immuni" app was designed and developed with great attention to protecting privacy (https://www.immuni.italia.it/). Data were collected and managed by the Ministry of Health and public entities and saved on servers located in Italy. The application did not collect personal data such as users' names, locations, telephone numbers, or email addresses. Rather, reporting one's 'positive' infection status was voluntary and done by the subject who contracted COVID-19 and reported it through the mobile app. During this period, the individual's status (as an alphanumeric code) changed, and the app warned other users who had installed the application on their devices. Despite the enhanced level of personal data protection, users' perception of that protection was low [10], which caused the mobile app initiative to fail, with fewer downloads than desired and a small number of registered users (https://www.immuni.italia.it/dashboard.html). From an individual perspective, blockchain technology seems to be one of the most reliable in terms of the perceived security of personal data [11]. On the one hand, preliminary studies have already shown how such perceptions influence the intention to use mobile apps (e.g., [12–14]); on the other, the literature has also shown how privacy concern dramatically decreases when the perceived benefit from data processing is high [15]. Regardless, to the best of the authors' knowledge, the intention to use blockchain-based mobile apps remains unclear. This work aims to understand the intention to use mobile apps for tracking individuals vaccinated against and infected with COVID-19, based on the "perceived data protection" [16] experienced when using mobile apps. Moreover, given the COVID-19 contingency and the renewed need to protect oneself, we considered individuals' "need for security", i.e., the subjective tendency of individuals to act to protect themselves and the people around them, as a possible moderating variable [17]. Accordingly, regarding individuals' intention to use technology-based healthcare systems, recent studies (e.g., [18]) show that, apart from the variables proposed in both the TAM and UTAUT (e.g., perceived usefulness, measuring users' belief that using certain technologies can improve the performance of their work, and perceived ease of use, the degree to which individuals perceive the technology as easy to use; [19]), consumer differences and contingent circumstances may play a fundamental role in influencing intentions. More specifically, anxiety, computer self-efficacy, innovativeness, and trust are the most influential factors affecting various healthcare technologies. On these premises, an experimental study was carried out on a sample of international participants, proposing a two-cell experiment in which perceived levels of data protection (low vs. high) were manipulated to investigate the intention to use blockchain-based mobile apps, shedding light on the moderating role of individuals' need for security.
Results show that individuals with a higher need for security and higher perceived data protection levels would be more inclined to use blockchain apps to prevent the diffusion of COVID-19; therefore, mobile apps
based on blockchain technologies could be perceived as more secure in terms of data protection level than others. This paper proceeds as follows: in the next section, we present the conceptual foundations and the main characteristics of blockchain versus traditional mobile applications and the consequent perceived levels of data protection. We then explain the moderating role of individuals' need for security in influencing their intention to use such mobile apps and present our conceptual framework. Next, we describe the methodology implemented in the empirical study and present the obtained results. Finally, we discuss our findings and delineate the related theoretical and practical implications, along with limitations and directions for future research.
2 Theoretical Background

2.1 An Overview of Blockchain Technologies

Blockchain is defined as a ledger that stores transactions within blocks concatenated with each other and shared across a network of peer-to-peer nodes. Each node in the network holds its own copy of the ledger, updated whenever a new block is inserted into the blockchain [20]. The creation of a block generates a unique identifier through appropriate hashing algorithms [21], such as SHA-256, which guarantees the uniqueness of the information it identifies. Changing a single bit would completely change the hash, and since concatenation is achieved precisely by inserting the hash of the previous block into the next one, any change propagates through all subsequent blocks [22]. This process guarantees the non-modifiability of the data entered into the network, a property that has enabled countless blockchain applications far beyond the original application to cryptocurrencies [23]. Such applications include, for example, supply-chain management [24], where blockchain provides accurate identification of the location of items in the supply chain, helping to prevent losses and monitor product quality. Equally important are applications in digital voting [25], since blockchain would make voting transparent and regulators would notice any changes made to the network. Another application is documentary certification, for example of wills, educational records [26], and property deeds [27]. Furthermore, in the health field there are countless open research areas, such as drug tracking (Sylim 2018), sharing health data for research purposes [28], and managing electronic health records [29]. As a hash chain with the characteristics of decentralization, verifiability, and immutability explained above, blockchain technology can be used to securely store patients' medical data [30]. Given this security assurance, blockchain technology has captured the attention of researchers and managers, and numerous apps have been developed or are under development, for example in the banking sector and related mobile banking services, where the protection of personal data is prominent [31].
Therefore, individuals could perceive blockchain-based mobile applications as more secure than traditional ones with respect to the protection and management of personal data [32].
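The block-linking property described in this section can be illustrated in a few lines: each block stores the SHA-256 hash of its predecessor, so tampering with any block invalidates every later link. This is a minimal sketch, not a real blockchain (no network, consensus, or signatures), and the transaction payloads are invented.

```python
# Minimal hash chain: each block embeds the SHA-256 hash of the previous one.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i, data in enumerate(["tx: A->B 5", "tx: B->C 2"], start=1):
    chain.append({"index": i, "data": data, "prev_hash": block_hash(chain[-1])})

# Verification: every stored prev_hash must match the recomputed hash.
def intact(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print("Chain intact:", intact(chain))   # True
chain[1]["data"] = "tx: A->B 500"       # tamper with one block...
print("Chain intact:", intact(chain))   # ...and verification now fails
```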
2.2 Blockchain Technologies for Mobile App Development

The healthcare sector has always explored data encryption and "anonymization" techniques to prevent improper access. In this context, blockchain technology can ensure data sharing and transparency by providing a regulatory mechanism through which all network members exchange sensitive data [33]. Using blockchain to protect collected data could help achieve the level of public engagement needed to combat the spread of COVID-19 [34]. With the decentralization and consensus features of the blockchain, a user can present a device's unique Bluetooth identifier in an encrypted form, i.e., only those who hold the appropriate key can decipher the collected information encoded in an alphanumeric string [35]. Any individual participating in the system would see whether they had been close to another person whose device is registered on the network as infected, while nobody could be identified, and each block in the chain is protected by its hash value. Health professionals and authorities can also access near real-time data, allowing rapid detection of infections, and the immutability feature implies the availability of the entire history of medical records. The architectures proposed for blockchain-based applications usually include, in addition to the blockchain itself, three fundamental layers typical of the "Internet of Things" (IoT) stack [25]: the application layer, made up of the software offering the various services; the network communication layer, where the objects are connected for an IoT service; and the physical layer of the objects connected to the network. To this architecture, which can be complicated by intermediate layers, a further information-storage layer consisting of the blockchain can be added, as proposed in the "smart-health" application [36].
2.3 Blockchain Data Protection and Individuals' Perceived Data Protection Level

Blockchain mechanisms play a crucial role in data security: a blockchain is essentially a database that stores all processed transactions (or data) in chronological order, across a set of computers, in a way that is tamper-proof for adversaries [37]. Its nature provides maximum data protection, even at the expense of system efficiency, which makes it attractive for applications that handle sensitive data such as personal or medical information. Different mechanisms have been studied to ensure
security, such as the combination of different blockchains, with or without permission, to exploit the advantages of both types [38]. A permissionless blockchain allows records to be shared by all network users, updated and monitored by all, but owned by no one [39]; its peculiarity is decentralization, and the most widely successful example is the cryptocurrency Bitcoin. In contrast, a permissioned blockchain, being less decentralized, allows more significant levels of access control, averting dangers of privacy violation [40]. A solution widely used, not only in the health sector, to ensure the originality and authenticity of data is the combination of the InterPlanetary File System (IPFS), used to store content, and smart contracts used to govern, manage, and provide traceability and visibility into history [41]. The IPFS provides the customer with authenticity and data quality, as the hashes of the data returned by the IPFS are encrypted with the Shamir Secret Sharing Scheme (SSS) so that a customer who has not paid the price of the digital content is prevented from accessing the data. The authenticity of the data is also ensured by a review-based system in which customers can judge the quality of the data [42]. To create trust, smart contracts based on blockchain technology do not need a trusted third-party authority but instead use decentralized consensus mechanisms. The best-known mechanism, used by the two most popular blockchain systems (i.e., Bitcoin and Ethereum), is Proof of Work (PoW), which involves solving a computationally difficult puzzle to prove the credibility of data [43]. By combining all these solutions, blockchain technology provides a very high level of security, reliability, and data assurance. However, since the intention to use applications based on such technologies could be influenced by individuals' characteristics or by contingent situations, such as the heightened need for health protection during the COVID-19 pandemic, we hypothesize that:

H1: Individuals' perceived data protection level positively affects individuals' intention to use mobile apps.
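As a side note on the PoW mechanism just described, the sketch below shows the puzzle in its simplest form: find a nonce whose hash meets a difficulty target. The difficulty here is kept tiny for illustration; real networks use vastly harder targets.

```python
# Minimal proof-of-work puzzle: search for a nonce such that
# SHA-256(data + nonce) starts with `difficulty` zero hex digits.
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("block payload")
print("Found nonce:", nonce)  # costly to find, trivially cheap to verify
```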
2.4 The Moderator Role of Individuals' Need for Security

In certain contingent and emergency states, the traditional ordering of the importance of individuals' needs may be subverted. The current global COVID-19 pandemic, for example, has catapulted individuals into an unexpected state of emergency never experienced by current generations, affecting the lives of individuals and societies [44]. The nature of the pandemic, and the only available means of counteracting the spread of cases, forced individuals into social distancing, the use of personal protective equipment, the closure of businesses, and a decrease in traditional daily activities, resulting in continuous or interspersed lockdown periods [4]. Home became the only place recognized as partially safe; the conception of time was reshaped, interactions between subjects had to find new facilitating tools (e.g., digital technologies), and sometimes the only possible activities were shopping for necessities [45, 46]. According to classic need theories (e.g., [47]), needs can have
different levels of importance as far as the urgency of their resolution is concerned, and the fulfillment of the individual is obtained through the orderly satisfaction of the various categories of needs: the lowest needs must be satisfied first before moving on to subsequent ones, a hierarchy that seems confirmed and unavoidable even in today's society [48]. Among these needs, particular importance is given to security needs, in terms of protection, tranquility, predictability, and the suppression of worries, anxieties, and fears, whose satisfaction guarantees and instills a sense of protection and tranquility [17]. In this period, the needs of individuals seem to have crystallized at the first two levels, oriented towards satisfying physiological and security needs in protecting their own health and that of their loved ones [49]. Globally, several alternative solutions have been put in place by companies and policymakers to reduce the risk of insecurity while meeting individuals' needs, with a wholesale transfer of activities online through smart working [50], distance learning [12], online shopping platforms [51], home delivery services for the elderly [52], forms of telemedicine and teleconsultation [53], and digitized forms of essential public services [54]. Considering all this, the need that has prevailed is undoubtedly safety: safeguarding one's health has become a priority because we are all, without distinction, vulnerable to contagion by the coronavirus [49]. Coherently, regarding individual differences and the variables influencing attitudes and consequent behaviour towards technological products, recent studies [55] interestingly suggest that perceived enjoyment is significantly associated with perceived usefulness and, importantly, that trust, related to the need for security, is significantly related to perceived ease of use. Together with the new services offered, policymakers worldwide have set up infection-tracking systems managed through mobile applications that let users voluntarily report their positive status and alert people who may have come into contact with them, specifically through the Immuni app in Italy, TraceTogether in Singapore, and COVID-Watch in the USA [10]. However, these technologies paradoxically clashed with the very security needs of individuals that they were supposed to meet. Although mobile applications could positively reduce the risk of insecurity, thus fulfilling the need for health protection, individuals associated tracking technologies with a high perceived risk of inadequate personal data protection [56]. The resulting paradox profiled a 'clash' between two different security-related needs, health protection and the protection of one's data, in the management of applications useful for safeguarding health. This discrepancy is attributable to the lack of information given to individuals: despite the level of personal data protection guaranteed by the producers of these mobile applications, compliant, among other things, with EU Regulation 2016/679 [43], users' perception of personal data protection was low [10, 57], resulting in poor uptake of the tools offered. Despite timely efforts by governments, tracking applications have not been successful in many countries; the Immuni app promoted by the Italian government was quite a failure, with 80% of Italians not downloading it.
Fig. 1 The proposed conceptual framework
An interesting exception to citizens' hesitation about using tracking apps seems to be Israel and the Czech Republic, which have used a "smart quarantine" designed by geolocalizing credit card movements to create "infection maps" of the places and contacts an infected person has had in the last five days, sending automatic notifications [58]. It is interesting to investigate the factors that influenced citizens' refusal to download the application, which should be considered a valuable tool to prevent infection and, therefore, protect their health and that of others. Especially now, when vaccines are leading to herd immunity in many Western countries, apps could become helpful in tracking and monitoring the spread of new COVID-19 variants; this attests to the importance of investigating the factors underlying citizens' choice not to download the app, in order to implement effective corrective action and a supportive national policy. It is therefore evident that there is an irrational motive associated with tracking apps, which instills a sense of fear, anxiety, and distress due to the (alleged and false) violation of subjects' privacy. Based on the above, and considering the technical characteristics of the blockchain, in terms of an apparent increase in personal data protection due to the non-modifiability and immutability of the data and, therefore, maximum protection from external attacks [32], and the need for security in its broadest sense, encompassing several needs to be met [17], we postulate that individuals may readily adopt technologies that allow for greater perceived security in terms of personal data protection and that individuals' need for security, in turn, influences this relationship (Fig. 1). Thus, we hypothesize that:

H2: Individuals' need for security moderates the relationship between the perceived data protection level and intention to use mobile apps.
3 Method

We recruited 182 subjects (Mage = 36.121, SDage = 12.113; 62% male, 38% female) online by providing a link to a questionnaire created on the Qualtrics platform and accessible around the clock for four weeks. More specifically, the survey was administered to a sample of respondents randomly recruited online via Amazon Mechanical Turk [59]. We initially
collected 198 questionnaires but removed 16 participants who failed the attention check ("If you are reading this question, please select answer 3"; [60]). Moreover, to protect participants' anonymity and reduce evaluation apprehension [61], the questionnaire ensured that participants' responses would remain anonymous and that there were no right or wrong answers. The questionnaire comprised three sections. First, we informed participants about blockchain technology and its peculiarities. Second, participants were randomly allocated to one of two scenarios within a two-cell experiment that manipulated the perceived data protection level in mobile app use (low level of protection via a traditional mobile app vs. high level of data protection via a blockchain-based mobile app), using a fictitious mobile app called "Hey-track!". Third, they reported their need for security (drawn from Rohem and Rohem Jr. [17]), assessed on a seven-point Likert scale (1 = "Strongly disagree"; 7 = "Strongly agree"; α = 0.803) using five items ("I am concerned with security," "Protecting myself and my family is very important," "I think a lot about how safe things are," "There is nothing more important than security," "I value security a great deal"). They also reported their intention to use mobile apps on the scale proposed by Fishbein and Ajzen [62], a seven-point Likert scale (1 = "Strongly disagree"; 7 = "Strongly agree"; α = 0.811) with three items (e.g., "I would use the APP to monitor the COVID-19 both the infected and vaccinated people next to me", "I would consider to use the APP to monitor the COVID-19 both the infected and vaccinated people next to me", "The probability that I would consider to use the APP to monitor the COVID-19 both the infected and vaccinated people next to me"). Finally, we asked for sociodemographic information: gender ("Male", "Female", "Preferred to not declare"), age, education ("Less than high school diploma or equivalent", "B.Sc.", "M.Sc.", "Ph.D."), and nationality.
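For readers who want to see how the reported reliability coefficients (e.g., α = 0.803) are computed, here is a minimal sketch of Cronbach's alpha; the response matrix is simulated, not the study's data.

```python
# Minimal Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item var) / total var).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x scale items
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

responses = np.array([  # five 7-point Likert items, four simulated respondents
    [6, 7, 6, 7, 6],
    [4, 4, 5, 4, 4],
    [7, 6, 7, 7, 6],
    [3, 3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 3))
```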
4 Results

We treated the perceived data protection level as the independent variable (coded as −1 = "low perceived data protection level" and 1 = "high perceived data protection level"), individuals' need for security as the moderator, and intention to use as the dependent variable. Specifically, we carried out a moderation analysis after mean-centering the measure of need for security, which served as the moderating variable. We implemented Model 1 of the SPSS PROCESS macro [63] to run a moderation analysis expressing the intention to use the mobile application as a function of the independent variable (perceived data protection level), the moderator (need for security), and their interaction. Our results (Table 1) show that participants' perceived data protection level (b = 0.361; t = 4.321; p = 0.000) and need for security (b = 0.358; t = 5.701; p = 0.000) exerted a significantly positive effect on their intention to use the mobile application to prevent COVID-19 contagion and monitor vaccinated individuals.
Table 1 Results of regression analysis (Model 1, Hayes)

Variable                                                  B        SE       t        p       LLCI     ULCI
Perceived data protection levels (X)                      0.361    0.0781   4.321    0.000   0.194    0.511
Need for security (Mo)                                    0.358    0.0625   5.701    0.000   0.232    0.474
Interaction (X × Mo)                                     −0.142    0.444   −1.245    0.002  −0.223   −0.041
(Perceived data protection levels × Need for security)

Note N = 182. Dependent variable (Y) = Intention to use; X = Independent Variable; Mo = Moderator
However, there was a significantly negative effect of the interaction between the perceived data protection level and participants' need for security (b = −0.142; t = −1.245; p = 0.002). To further analyze the nature of this interaction, we examined the conditional effects of individuals' perceived data protection level on the dependent variable (intention to use the mobile app) at a lower and a higher level of need for security. Results show that the effect of perceived data protection level on individuals' intention to use the mobile app was significantly positive at a lower level of need for security (M − 1SD; b = 0.528; t = 6.068; p = 0.000) and significantly negative at a higher level of the moderator (M + 1SD; b = −0.194; t = 2.398; p = 0.018). Therefore, individuals' need for security conditions the effect of perceived data protection level on intention to use mobile apps: individuals with different levels of need for security may show different intentions to use the mobile app, especially when the perceived data protection level is high (as with the blockchain-based app), while individuals with higher (vs. lower) perceived data protection levels may be more inclined to use mobile apps, especially when the need for security is high. Since individuals' need for security is a continuous variable, we deepened the analysis of its significant interaction effect via a floodlight analysis [64], a technique based on the Johnson and Neyman approach [65] that plots the effects of the independent variable on the dependent variable (on the y-axis) across all possible values of the moderator (on the x-axis), shedding light on the regions of significance of the plotted effect. The results of the floodlight analysis (Fig. 2) show the magnitude of the positive interaction effect of perceived data protection level, at increasing levels of need for security, on individuals' intention to use mobile apps. The effect is positive and significant for all levels of need for security equal to and higher than 1.378 (bJN = 0.171, SE = 0.086, 95% confidence interval: 0.000, 0.343).
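For illustration, the moderation model can be reproduced outside SPSS as an OLS regression with a mean-centred moderator and an interaction term. The sketch below is a Python analogue under simulated data, not the authors' PROCESS Model 1 run; the coefficients used to generate the data merely echo the reported effect sizes.

```python
# Minimal moderation analysis: y ~ X + Mo + X:Mo on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 182
x = rng.choice([-1, 1], size=n)         # low vs. high perceived protection
mo = rng.normal(0, 1, size=n)           # need for security (mean-centred)
y = 0.36 * x + 0.36 * mo - 0.14 * x * mo + rng.normal(0, 1, size=n)

X = sm.add_constant(np.column_stack([x, mo, x * mo]))
model = sm.OLS(y, X).fit()
print(model.summary(xname=["const", "X", "Mo", "X:Mo"]))
```

Probing the X coefficient at Mo = M − 1SD and Mo = M + 1SD recovers the conditional effects reported above, and sweeping Mo over its range reproduces the floodlight (Johnson-Neyman) plot.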
5 General Discussion and Conclusion

Blockchain technology promises to be a strategic intangible resource for organizations in different sectors: from improving supply chains in Industry 4.0 and the
Fig. 2 Floodlight analysis: the moderator role of individuals’ need for security
enhancement of financial accounting systems to the development of blockchain-based mobile applications, which, compared to traditional applications, guarantee security and reliability as well as advantages in terms of time and resources. The sudden outbreak of the COVID-19 pandemic highlighted the inadequacy of traditional pandemic management methods and the need to generate new blockchain-based mobile applications representing a disruptive break from the past. In the healthcare sector, academic and industrial research is investing energy in designing new applications based on this technology to strengthen personal security and the protection of personal data. In this paper, we investigated individuals' intention to use blockchain applications versus applications based on traditional technologies to prevent and monitor COVID-19 infections, considering that the failure of some applications proposed at the national level is attributable to the low level of personal data protection perceived by end-users. Through an experimental study, we manipulated the level of personal data protection perceived by individuals by offering them an infection-monitoring application based either on traditional technologies or on blockchain technology, to investigate how this level influenced their intentions to use it. Indeed, the perceived level of personal data protection, higher for blockchain applications and lower for traditional ones, directly affected users' intentions to use these applications. However, we found that this effect is also moderated by the desired level of security: at higher levels of security need, the intention to use blockchain-based applications increases more than proportionately. Our results may provide valuable insights for marketing professionals, managers, and technical professionals called to analyze, study, and design mobile applications of
this type, helping them understand which characteristics to leverage to positively influence the intention to use.

Theoretical contribution

The present study offers three main theoretical contributions. First, our results contribute to the literature on the consumption of mobile applications, showing that their use and success in terms of downloads can vary with certain perceived characteristics, such as the level of personal data protection. Second, our results contribute to the literature on the use of new technologies in the healthcare sector, shedding light on how consumer intentions toward applications of this type may vary according to individuals' characteristics, such as their need for safety. Third, our results contribute to the literature on consumer needs, showing how beliefs, perceptions, and influences may be decisive in the acceptance of new technologies. A previous longitudinal study conducted by Russo et al. [58] between December 2019 and June 2020 found that the greatest fear of citizens who choose not to download tracking apps is that the government could use the information to monitor what they do and where they go, even in areas unrelated to the pandemic, such as privacy-sensitive ones.

Managerial contribution

From a managerial perspective, our results suggest new ideas for managers proposing digital tools such as mobile applications that should be encouraged for public purposes, such as monitoring and preventing infections. Our results suggest that the managers and designers of such mobile applications must not only consider the fundamental characteristics these applications must have, for example in terms of personal data protection; they should also make significant efforts to ensure that individuals' perceptions of the proposed tools match the real characteristics with which they are endowed. In the Italian case, for example, the failure of the "Immuni" app to spread has been mainly attributed to individuals' misperception of the tool's quality, despite the excellent level of personal data protection guaranteed from a technical point of view during the application's development. When new technologies and their benefits are widely understood, they may be preferred because they are perceived as safer in the minds of individuals. The results suggest that the intention to use mobile apps can be increased if marketing efforts aim at reassuring consumers about the protection of their data, even in the most complex cases such as healthcare, where it may seem, erroneously, that some rights must be sacrificed to safeguard public health. This could be done by designing ad hoc information campaigns to counter the growing misinformation about measures against COVID-19, with precise and detailed information about tracking applications and how they do not store personal user data in any way. Furthermore, it is essential to understand the purposes that drive the use of these applications, as in the case of the need for safety required by individuals, for example in terms of safeguarding their health, since such purposes can affect the intention to use mobile applications and thus define their success. Accordingly, political and institutional actors play a crucial role in spreading common messages
on the usefulness of such tracking applications, unlike what has happened in the past in some countries, including Italy. Finally, it is essential to emphasize that marketing managers should integrate such features jointly: the perceived level of personal data protection alone does not ensure that individuals will use mobile applications. They must also consider, as anticipated, the need for safety in terms of safeguarding one's own health and that of loved ones. Despite the study's promising contributions, it has some limitations that may be useful for future research. Firstly, while significant and offering interesting food for thought, the sample may not guarantee the generalizability of the results; future studies could therefore increase the sample analyzed and consider different research settings in which to test the results. Furthermore, some marketing effects related to individual characteristics and differences have not been considered. Future studies could also focus on dependent variables such as satisfaction, personal involvement, and memory. Finally, given that the rationale of the study derives from understanding the failure of some contagion prevention and monitoring initiatives, such as the one related to the diffusion of the tracking application, the study was limited to the Italian context: future studies could investigate the same effects internationally, where the success of similar applications has been affected by similar dynamics.
References 1. M. Nofer, P. Gomber, O. Hinz, D. Schiereck, Blockchain. Bus. Inf. Syst. Eng. 59(3), 183–187 (2017) 2. P.J. Taylor, T. Dargahi, A. Dehghantanha, R.M. Patrizi, K.K.R. Choo, A systematic literature review of blockchain cyber security. Digital Communications and Networks 6(2), 147–156 (2020) 3. D. Puthal, N. Malik, S.P. Mohanty, E. Kougianos, C. Yang, The blockchain as a decentralized security framework [future directions]. IEEE Consum. Electron. Mag. 7(2), 18–21 (2018) 4. C. Amatulli, A.M. Peluso, A. Sestino, G. Guido, in New Consumption Orientations in the COVID-19 Era: Preliminary Findings from a Qualitative Investigation, 20th International Marketing Trends Conference (2021), pp. 2–6 5. I. Konstantinidis, G. Siaminos, C. Timplalexis, P. Zervas, V. Peristeras, S. Decker, in Blockchain for Business Applications: A Systematic Literature Review, in International Conference of Business Information Systems (2018), pp. 384–399 6. P.H.R. Botene, A.T. de Azevedo, P.S. de Arruda Ignácio, Blockchain as an enabling technology in the COVID-19 pandemic: a systematic review. Health Technol. 11, 1369–1382 (2021) 7. S. Ribeiro-Navarrete, J.R. Saura, D. Palacios-Marqués, Towards a new era of mass data collection: assessing pandemic surveillance technologies to preserve user privacy. Technol. Forecast. Soc. Chang. 167, 120681 (2021). https://doi.org/10.1016/j.techfore.2021.120681 8. E. Sezgin, Y. Huang, U. Ramtekkar, S. Lin, Readiness for voice assistants to support healthcare delivery during a health crisis and pandemic. NPJ Digital Med. 3, 122 (2020). https://doi.org/ 10.1038/s41746-020-00332-0 9. D. Marbouh et al., Blockchain for COVID-19: Review, opportunities, and a trusted tracking system. Arab. J. Sci. Eng. 45(12), 9895–9911 (2020). https://doi.org/10.1007/s13369-020-049 50-4 10. T. Alanzi, A review of mobile applications available in the App and Google Play Stores used during the COVID-19 outbreak. J. Multidiscip. Healthc. 14, 45–57 (2021)
11. N. Raddatz, J. Coyne, P. Menard, R.E. Crossler, Becoming a blockchain user: understanding consumers’ benefits realisation to use blockchain-based applications. Eur. J. Inform. Syst. 1–28 (2021) 12. D. Magni, A. Sestino, Students learning outcomes and satisfaction. An investigation of knowledge transfer during social distancing policies. Int. J. Learn. Intellect. Cap. 1(1), 1–14 (2021) 13. C. Tam, D. Santos, T. Oliveira, Exploring the influential factors of continuance intention to use mobile apps: extending the expectation confirmation model. Inf. Syst. Front. 22(1), 243–257 (2020) 14. V.M. Wottrich, E.A. van Reijmersdal, E.G. Smit, The privacy trade-off for mobile app downloads: the roles of app value, intrusiveness, and privacy concerns. Decis. Support Syst. 106, 44–52 (2018) 15. A. Gutierrez, S. O’Leary, N.P. Rana, Y.K. Dwivedi, T. Calle, Using privacy calculus theory to explore entrepreneurial directions in mobile location-based advertising: Identifying intrusiveness as the critical risk factor. Comput. Hum. Behav. 95, 295–306 (2019) 16. G. Mazurek, K. Małagocka, Perception of privacy and data protection in the context of the development of artificial intelligence. J. Manage. Anal. 6(4), 344–364 (2019) 17. M.L. Rohem, H.A. Rohem Jr., The influence of redemption time frame on responses to incentives. J. Acad. Market. Sci. 39(3), 363–375 (2011) 18. A.A. AlQudah, M. Al-Emran, K. Shaalan, Technology acceptance in healthcare: a systematic review. Appl. Sci. 11(22), 1–40 (2021) 19. F. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989) 20. S. Zeng, X. Ni, Y. Yuan, F.-Y. Wang, A bibliometric analysis of blockchain research, in 2018 IEEE Intelligent Vehicles Symposium (IV) (2018), pp. 102–107 21. D. Vujicic, S. Randic, D. Jagodic, Blockchain technology, bitcoin, and ethereum: a brief overview, in 2018 17th International Symposium INFOTEH-JAHORINA (INFOTEH), Mar 2018, pp. 1–6 22. M. Pilkington, Blockchain technology: principles and applications, in Research Handbook on Digital Transformations, ed. by F. Xavier Olleros, M. Zhegu (Edward Elgar Publishing, 2016), pp. 1–39 23. S. Zhai, Y. Yang, J. Li, C. Qiu, J. Zhao, Research on the application of cryptography on the blockchain. J. Phys: Conf. Ser. 1168(3), 032077 (2019) 24. S. Saberi, M. Kouhizadeh, J. Sarkis, L. Shen, Blockchain technology and its relationships to sustainable supply chain management. Int. J. Prod. Res. 57(7), 2117–2135 (2018) 25. R. Khan, S. Ullah Khan, R. Zaheer, S. Khan, Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges, in 10th International Conference on Frontiers of Information Technology (FIT): Proceedings (2012), pp. 257–260 26. D. Shah, D. Patel, J. Adesara, P. Hingu, M. Shah, Exploiting the capabilities of blockchain and machine learning in education. Augm. Hum. Res. 6(1), 1–14 (2011) 27. M. Themistocleous, Blockchain Technology and land registry. Comput. Sci. 30(2), 195–202 (2018) 28. X. Liang, S. Shetty, D. Tosh, C. Kamhoua, K. Kwiat, L. Njilla, A blockchain-based data provenance architecture in cloud environment with enhanced privacy and availability, in 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID) (2018), pp. 468–477. 29. A. Dubovitskaya, P. Novotny, Z. Xu, F. Wang, Applications of blockchain technology for datasharing in oncology: results from a systematic literature review. Oncology 98(6), 403–411 (2020). https://doi.org/10.1159/000504325 30. Y. 
Chen, Blockchain tokens and the potential democratization of entrepreneurship and innovation. Bus. Horiz. 61(4), 567–575 (2018) 31. P. Garg, B. Gupta, A.K. Chauhan, U. Sivarajah, S. Gupta, S. Modgil, Measuring the perceived benefits of implementing blockchain technology in the banking sector. Technol. Forecast. Soc. Chang. 163, 120407 (2021). https://doi.org/10.1016/J.TECHFORE.2020.120407
32. B.K. Mohanta, D. Jena, S.S. Panda, S. Sobhanayak, Blockchain technology: a survey on applications and security privacy challenges. Comput. Sci. 8 (2019) 33. L. Hang, E. Choi, D.H. Kim, A novel EMR integrity management based on a medical blockchain platform in hospital. Electronics (Switzerland) 8(4) (2019). https://doi.org/10.3390/electroni cs8040467 34. M. Kassab, J. DeFranco, T. Malas, P. Laplante, G. Destefanis, V.V.G. Neto, Exploring research in blockchain for healthcare and a roadmap for the future. IEEE Trans. Emerg. Top. Comput. 9(4), 1835–1852 (2019) 35. G. Barthe, et al., Listening to bluetooth beacons for epidemic risk mitigation, in medRxiv (2021), pp. 1–19 36. M. Zghaibeh, U. Farooq, N. Hasan, I. Baig, SHealth: a blockchain-based health system with smart contracts capabilities. Comput. Sci. 8, 70030–70044 (2020) 37. D. Minoli, B. Occhiogrosso, Blockchain mechanisms for IoT security. Internet Things 1–2, 1–13 (2018). https://doi.org/10.1016/J.IOT.2018.05.002 38. T. Zhou, X. Li, H. Zhao, Med-PPPHIS: blockchain-based personal healthcare information system for national physique monitoring and scientific exercise guiding. J. Med. Syst. 43(9), 305 (2019). https://doi.org/10.1007/s10916-019-1430-2 39. M. Swan, Blockchain: Blueprint for a New Economy (O’Reilly Media Inc., 2015) 40. M. Liu, K. We, J.J. Xu, How will blockchain technology impact auditing and accounting: permissionless versus permissioned blockchain. Am. Account. Assoc. 13(2), 19–29 (2019) 41. N. Nizamuddin, H.R. Hassan, K. Salah, in IPFS-Blockchain-Based Authenticity of Online Publications, International Conference on Blockchain (2018), pp. 1–15 42. M. Naz et al., A Secure Data Sharing Platform using blockchain and interplanetary file system. Sustainability 11(24), 7054–7078 (2019) 43. J. Becker, D. Breuker, T. Heide, J. Holler, H. P. Rauer, R. Bohme, Can we afford integrity by proof-of-work? Scenarios inspired by the bitcoin currency, in The Economics of Informtion Security and Privacy (Springer, Berlin, 2013), pp. 135–156 44. D. Albarracin, H. Jung, A research agenda for the post-COVID-19 world: theory and research in social psychology. Asian J. Soc. Psychol. 24(1), 10–17 (2021) 45. C. Cavallo, G. Sacchi, V. Carfora, Resilience effects in food consumption behaviour at the time of Covid-19: perspectives from Italy. Heliyon 6(12), e05676 (2020). https://doi.org/10.1016/J. HELIYON.2020.E05676 46. R.Y. Kim, The impact of COVID-19 on consumers: preparing for digital sales. IEEE Eng. Manage. Rev. 48(3), 212–218 (2020) 47. A.H. Maslow, Motivation and Personality (Harper and Row Publishers Inc., New York, 1954) 48. U. Abulof, Introduction: why we need maslow in the twenty-first century. Society 54(6), 508– 509 (2017) 49. P. Weiss, D.R. Murdoch, Clinical course and mortality risk of severe COVID-19. Lancet (London, England) 395(10229), 1014–1015 (2020). https://doi.org/10.1016/S0140-673 6(20)30633-4 50. G. Riva, B. Wiederhold, F. Mantovani, Surviving COVID-19: the neuroscience of smart working and distance learning. Cyberpsychol. Behav. Soc. Netw. 24(2), 79–85 (2021) 51. J. Grashuis, T. Skevas, M. Segovia, Grocery shopping preferences during the COVID-19 pandemic. Sustainability 12(13), 5369–5379 (2020) 52. J.E. Hobbs, Food supply chains during the COVID-19 pandemic. Can. J. Agric. Econ. 68(2), 171–176 (2020). https://doi.org/10.1111/cjag.12237 53. C. Johnson, K. Taff, B.R. Lee, A. Montalbano, The rapid increase in telemedicine visits during COVID-19. Patient Exp. J. 7(2), 72–79 (2020) 54. D. Agostino, M. Arnaboldi, M.D. 
Lema, New development: COVID-19 as an accelerator of digital transformation in public service delivery. Public Money Manage. 41(1), 69–72 (2021) 55. M. Al-Emran, R. Saeed, M. Al-Sharafi, I. Arpaci, What impacts learning with wearables? An integrated theoretical model, in Interactive Learning Environments (2020), pp. 1–21 56. L. Bradford, M. Aboy, K. Liddell, COVID-19 contact tracing apps: a stress test for privacy, the GDPR, and data protection regimes. J. Law Biosci. 7(1), lsaa034. https://doi.org/10.1093/jlb/ lsaa034
57. F. Rowe, O. Ngwenyama, J.-L. Richet, Contact-tracing apps and alienation in the age of COVID19. Eur. J. Inf. Syst. 29(5), 545–562 (2020) 58. M. Russo, et al., in The systemic dimension of success (or failure?) in the use of data and AI during the COVID-19 pandemic. A cross-country comparison on contact tracing apps, July 2021 59. H. Aguinis, I. Villamor, R. Ramani, MTurk research: review and recommendations. J. Manag. 47(4), 823–837 (2020) 60. D.M. Oppenheimer, T. Meyvis, N. Davidenko, Instructional manipulation checks: detecting satisficing to increase statistical power. J. Exp. Soc. Psychol. 45(4), 867–872 (2009). https:// doi.org/10.1016/J.JESP.2009.03.009 61. P.M. Podsakoff, S.B. MacKenzie, J.-Y. Lee, N.P. Podsakoff, Common method biases in behavioral research: a critical review of the literature and recommended remedies. J. Appl. Psychol. 88(5) (2003). https://doi.org/10.1037/0021-9010.88.5.879 62. M.A. Fishbein, I. Ajzen, Belief, Attitude, Intention and Behaviour: An Introduction to Theory and Research (Addison-Wesley, Reading, MA, 1975) 63. A.F. Hayes, Regression-based statistical mediation and moderation analysis in clinical research: observations, recommendations, and implementation. Behav. Res. Ther. 98, 39–57 (2017) 64. S.A. Spiller, G.J. Fitzsimons, J.G. Lynch, G.H. Mcclelland, Spotlights, floodlights, and the magic number zero: simple effects tests in moderated regression. J. Mark. Res. 50(2), 277–288 (2013) 65. J. Neyman, P.O. Johnson, Tests of certain linear hypotheses and their application to some educational problems. Statistical Res. Mem. 1, 57–93 (1936)
Exploring the Hidden Patterns in Maintenance Data to Predict Failures of Heavy Vehicles Hani Subhi AlGanem and Sherief Abdallah
Abstract Predictive maintenance plays an essential role in maintaining a fleet and keeping it available to operate with minimum downtime. However, for one government organization in Dubai, corrective maintenance work orders outnumber preventive maintenance work orders, and better failure prediction is needed to increase the vehicles' availability and reduce the maintenance cost. This study investigates how machine learning can use historical maintenance data to predict vehicle failures and the type of failure that could happen. Our study focuses on a specific category of heavy vehicles with the same operational functionalities, with data consisting of 38,000 work orders from the past five years. Our results show that Gradient Boosted Trees achieve the highest prediction accuracy: 68.09% for predicting the work order type and 41.24% for predicting the issue. Fleet managers should therefore look into the historical data they have and let AI algorithms find the hidden patterns and employ them for better predictive maintenance schedules. Keywords Machine learning · Gradient boosted trees · Fleet management · Predictive maintenance · Dubai · Hidden patterns
1 Introduction A fleet of vehicles is considered a valuable asset and needs to be managed in an organized manner to conduct preventive, corrective, or predictive maintenance based on the manufacturer's recommendations or market best practices [1]. The main objective of maintenance is to reduce failures and increase the machines' lifetime. One way to do that is to employ predictive maintenance using statistical analysis of historical data from maintenance management systems [2, 3].
H. S. AlGanem (B) · S. Abdallah Computer Science, The British University, Dubai, UAE e-mail: [email protected] S. Abdallah e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_10
Maintenance is a set of activities carried out to avoid the breakdown of vehicles or machines and keep them available. Predictive maintenance is considered one of the most important ways to maintain and guarantee a high level of fleet vehicle availability. High availability can be achieved by reducing vehicle breakdowns and the accidents that could happen due to part failures. Additionally, corrective maintenance usually costs more than predictive maintenance [4]. The study by Wait and Howard [25] explained predictive maintenance as the way to predict when vehicle parts need replacement or maintenance. It is vital to have a prediction and expectation for maintenance, as some breakdowns might happen in distant locations, which requires extra cost to move the vehicles from the breakdown location to the workshop and increases the downtime, which means more cost and less service. Predictive maintenance improves day by day due to the employment of new technologies such as the Internet of Things (IoT) and AI algorithms, which leads to better prediction of the required maintenance that avoids vehicle breakdowns. Additionally, it reduces the maintenance cost and increases both vehicle availability and the operations service level agreement. Without proper failure prediction, the chances of vehicle breakdowns increase. Broken-down vehicles usually require a longer time to repair and may stay out of service for months. Masry et al. [5] explain that predictive maintenance aims to reduce the operation cost and increase availability to maintain reliability. Chaudhary [6] expresses the importance of building a prediction model by highlighting its impact on better management and decision-making and on gaining more knowledge about the fleet. He also asserts that employing a computerized prediction model will help the fleet manager make the right decisions. The study by Wang [7] stresses the importance of the information and data generated through smart sensors, connected through the Internet to main servers where they are stored and analyzed, following the IoT concept. The collected data plays a significant role in predictive maintenance and helps decision-makers make the right decisions related to machine maintenance. In general, predicting vehicle failures will save time and cost for the organization and provide better logistic services for its customers. It is therefore necessary to predict the time of vehicle failure and the issue that could cause it. The fleet of one government organization in Dubai consists of more than 3000 vehicles of various types, including light vehicles, heavy vehicles, and pieces of equipment. Currently, maintenance is carried out in the traditional way, including preventive and corrective maintenance, accident repairs, and other modifications to vehicles. Although some trials introduced predictive maintenance, it still depends on bringing the vehicle to the workshop and performing a traditional inspection. The fleet contains mission-critical old vehicles that suffer repeated breakdowns, which makes the beneficiary departments complain about the repeated interruptions to their core operations, in addition to the high cost of corrective maintenance. A computerized system used to manage the fleet life cycle, including maintenance planning and management, is considered the primary source of maintenance management data. Nevertheless, the technical data is not stored in the system or in any other database, for example a description of primary or subsystem failures. Instead, there is a general free-text field containing a high-level description of the issues. The problem arose when trying to predict vehicle failures and breakdowns from historical data while no well-structured technical information about vehicle failures is available, so human prediction of vehicle breakdowns based on system reports is practically impossible. Two research questions need to be answered to help in a real-case scenario of fleet maintenance management:

1. Can vehicle breakdowns or failures be predicted based on historical maintenance data (non-sensor data)?
2. Can system or subsystem failures be determined and predicted based on historical maintenance data (non-sensor data)?
2 Literature Review Monnin et al. [8] explain various strategies to deal with fleet maintenance management. Some fleet managers use fail-and-fix strategies; however, these are considered impractical by other fleet managers because of their high impact on cost and downtime, so they implement predictive maintenance or condition-based maintenance strategies to maintain the required service level and higher fleet availability. Prytz et al. [9] explain in their study that periodic maintenance based on time or mileage is not enough to manage a fleet of vehicles, and that additional predictive maintenance is essential to maintain fleet health, minimize downtime, and ensure higher availability and productivity for the fleet. However, vehicle manufacturers like Mercedes, Volvo, and MAN use different planning to conduct services for their customers, and they do not consider advanced predictive maintenance for vehicles, as predictive maintenance depends on operating information gathered through historical maintenance information [10]. Bronté et al. [4] explain the calculation of the available time for vehicles as the mean time between failures divided by the sum of the mean time between failures and the downtime, i.e., Availability = MTBF / (MTBF + downtime). Furthermore, they address the main parameters that should be considered when taking an oil sample to predict failures: date, working hours/mileage, oil specification, and laboratory oil analysis. They used a design-of-experiments approach to predict the failure of vehicle parts. Alghanem et al. [18] assert the importance of using sensor attributes and the real-time data they generate to assess machine health status by applying machine learning algorithms. The use of sensors is also supported by Choy et al. [12], who predict gearbox failures based on vibration sensor output. Chaudhary [6] uses multiple linear regression to build a prediction model for fleet maintenance cost for around 2000 construction equipment vehicles; the average squared error is used to validate the prediction model by analyzing the operating information, purchase price, and vehicle age. Other studies use different methods to build prediction models, such as time series analysis, decision trees, quadratic models, and genetic algorithms. Besides fleet history data analysis, predictive maintenance can follow different approaches such as oil analysis, parts inspection, vibration analysis, and thermography analysis [13, 14]. In addition, Zhao et al. [15] address two ways to predict machine failures, either physics-based or based on historical data; the data-driven way requires applying machine learning algorithms to explore the unseen patterns in the data. Chaudhuri [16] explains that the main attributes used to predict preventive maintenance are divided into two categories. Base attributes include vehicle age, type, model, mileage, and registration information, while the derived category includes the count of each service type that happened to the vehicle, the average labor time, the average cost of spare parts, the number of tasks performed in the last service, the previous maintenance history, and many other attributes over two years. These data are analyzed using artificial intelligence algorithms such as a hierarchical modified fuzzy support vector machine, logistic regression, random forests, and support vector machines. Prytz [10] describes different methods used for predictive maintenance in the literature: (1) remaining useful life prediction, (2) deviation prediction, and (3) classification of required maintenance. These methods can be applied to historical data as well as to real-time data collected by sensors and the IoT. Deep learning was employed in many deployed models to predict and calculate machine health using different types and techniques of neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), together with feature selection and extraction methods to determine the attributes that give better results. The study by Masry et al. [5] exploits the benefit of mixing both scheduled/preventive maintenance and corrective/breakdown maintenance to predict machine failures and build a new maintenance plan. Prytz [10] uses three AI algorithms to determine the best one for building the predictive model: Naive Bayes, Gradient Boosting Trees, and C5.0 Trees, using the Manufacturer Type Key, Manufacturer, Model, Process, Fault, and Material attributes. Raposo et al. [13] focused their study on the employment of artificial intelligence algorithms such as random forest classifiers; they used supervised machine learning to read and analyze fleet data and build a prediction model that helps determine the predictive maintenance for a specific subsystem of heavy vehicles.
3 Research Methodology This study uses a design science methodology following the main phases explained by Peffers et al. [17]: observation/question, research topic area, hypothesis and research questions, test with experiment, analyze data, and report conclusions. As per [16], many steps need to be taken to predict the upcoming required maintenance for vehicles, starting with deciding which data to include in the analysis, followed by cleaning, analyzing, model building, model training, and model testing. The same steps are followed in this paper. This paper seeks to learn more about the prediction of vehicle breakdowns using historical maintenance records, and about how AI algorithms can be employed on historical maintenance data and with what accuracy and confidence. It focuses on building a prediction model and verifying it based on real data, and on predicting the issue, that is, the failure of a vehicle system or subsystem. Therefore, the study must rely on suitable historical records, well structured and in a consistent format, covering more than three years. Data collection was done from Oracle's Asset Lifecycle Management suite (ALM), specifically from the Enterprise Asset Management system (EAM), using Oracle Business Intelligence Discoverer to generate the required records within the required period. Because millions of records are stored in the Oracle database, the study selected a set of vehicles with the same characteristics that face issues with repeated corrective maintenance, identified after meetings with the unit heads in the fleet maintenance section. The fleet is divided into three units: (1) light vehicles (LV), including saloons, SUVs, pickups, and small buses; (2) heavy vehicles (HV), including big buses, bin loaders, and compactors; the HV unit divides vehicles into three subcategories: (HV1) big buses, (HV2) compactors, and (HV3) dozers; and (3) equipment (EQ), including dozers, boats, and cranes. The HV unit head explains that there are frequently repeated breakdowns for HV2, because these vehicles have been in service for more than ten years, so the HV2 category was selected as a sample from the fleet. In addition, waste management department vehicles were selected because they are considered critical, given their operation of cleaning the city of Dubai, and because they share the same working conditions, which eliminates other factors that affect the vehicles' operating condition. This gives a total of 127 vehicles with five years of history records.
4 Data Analysis Alghanem et al. [18] express the importance of data analysis for innovative solutions in all domains. It is also important to understand the fleet's historical data to provide management with the required knowledge about the fleet, which helps build strategic and operational decisions. The knowledge gained from the fleet's historical information and from comparisons within the fleet pool (vehicles with the same characteristics) forms part of a knowledge-based system [8]. Emmanouilidis et al. [20] also note that knowledge is essential to operate the fleet and maintain fleet availability. Predicting which vehicles are expected to face issues or problems is considered part of a decision support system: it helps maintenance managers make the right decisions about the maintenance priority of their vehicles. It is also considered a knowledge-based system, since it utilizes the information, data, and knowledge stored in the knowledge database; this is also supported by AlGhanem et al. [19], who stress the importance of employing knowledge management and artificial intelligence algorithms to maximize the benefits of the stored data. In fleet maintenance, vast and complicated data are saved in the fleet management system, and expertise is required to understand this information and analyze it properly to gain the benefit from historical maintenance data, especially for a group of similar vehicles [8]. The stored data and their structure are not designed to be used with artificial intelligence algorithms, so much preprocessing is required before starting the analysis. The required fields were selected to generate two datasets. The first is the vehicle master data, including plate number, model, type, manufacturer, start working date, warranty end date, and vehicle group, as shown in Fig. 1. The second dataset contains the work orders, with information about the maintenance type, the maintenance day, how long the vehicle stayed in maintenance, the vehicle age, the week of the year, the issue, and the number of days since the last preventive/corrective/other maintenance, as shown in Fig. 2. After collecting the data, a cleaning process took place to remove incomplete records containing null values or outliers. Most of the fields are entered into the EAM system manually and may contain incorrect information, such as meter readings of more than 100 million km or negative maintenance durations. Some data correction was also done: the work order description is free text that allows technicians to enter anything, and it was noticed that there is no standard way to describe the conducted work or the complaints, so the issue field was generated based on a formula and human review to obtain a consistent issue categorization.

Fig. 1 Vehicles master dataset

Fig. 2 Work order dataset
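To make this preprocessing concrete, the sketch below shows how such cleaning and feature-derivation rules could be expressed in Python with pandas. The file name and column names (wo_type, wo_date, meter_reading, duration_days, plate_number) are assumptions for illustration, since the chapter does not publish its EAM export schema.

```python
import pandas as pd

# Hypothetical column names; the actual EAM export schema is not published.
wo = pd.read_csv("work_orders.csv", parse_dates=["wo_date"])

# Drop incomplete records and obvious manual-entry errors.
wo = wo.dropna(subset=["wo_date", "wo_type", "meter_reading", "duration_days"])
wo = wo[(wo["meter_reading"] > 0) & (wo["meter_reading"] < 100_000_000)]  # < 100 million km
wo = wo[wo["duration_days"] >= 0]  # no negative maintenance durations

# Derive "days since last maintenance of each type" per vehicle,
# mirroring the Days of PM / Days of CM / Days of OM features.
# Rows of the matching type get 0, consistent with the 0.0 minimums in Table 2.
wo = wo.sort_values(["plate_number", "wo_date"])
for mtype, col in [("PM", "days_of_pm"), ("CM", "days_of_cm"), ("OM", "days_of_om")]:
    last = wo["wo_date"].where(wo["wo_type"] == mtype)   # NaT where type differs
    last = last.groupby(wo["plate_number"]).ffill()      # carry last event forward per vehicle
    wo[col] = (wo["wo_date"] - last).dt.days.fillna(0)

wo["wo_week"] = wo["wo_date"].dt.isocalendar().week.astype(int)
```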
4.1 Descriptive Analysis

Table 1 shows descriptive statistics for the work order dataset. It shows that most of the work orders are corrective maintenance, with 21,203 work orders for 128 vehicles in five years, which indicates a critical issue with maintenance prediction: there is a real need to predict the failure of vehicles before they reach breakdown. The following parameters were used to build the prediction model:

• WO Date Release: work order start date
• Days of CM: number of days since the last corrective maintenance
• Days of PM: number of days since the last preventive maintenance
• Days of OM: number of days since the last other maintenance
• In-Service: the date the vehicle started operating
• Meter Reading: odometer reading on the work order date
• Model: vehicle model year
• Vehicle Age: the age of the vehicle in years
• Warranty End Date: the date the warranty ends
• WO Week: week of the year in which the work order happened
• Plate Number: the unique number of the vehicle

Figure 3 compares the work orders in charts and shows the high number of corrective maintenance work orders compared with preventive and other maintenance. Table 2 shows the mean number of days since the last maintenance of each kind: around 20 days between preventive maintenance events and around 12 days between corrective maintenance events, which is considered a short period between failures.

Table 1 Descriptive statistics for datasets related to the work orders

                               Frequency   Percent   Cumulative percent
Valid  Corrective maintenance  21,203      55.0      55.0
       Others                  3,256       8.4       63.5
       Preventive maintenance  14,084      36.5      100.0
       Total                   38,543      100.0
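As an aside, the statistics in Tables 1 and 2 are straightforward to reproduce from the cleaned work order frame; the sketch below reuses the hypothetical wo frame and column names from the preprocessing example above.

```python
import pandas as pd  # reuses the 'wo' frame from the preprocessing sketch

# Work order type frequencies (Table 1 style).
counts = wo["wo_type"].value_counts()
summary = pd.DataFrame({"Frequency": counts,
                        "Percent": (100 * counts / counts.sum()).round(1)})
summary["Cumulative percent"] = summary["Percent"].cumsum()
print(summary)

# Days elapsed between maintenance events by type (Table 2 style).
print(wo[["days_of_pm", "days_of_cm", "days_of_om"]].agg(["min", "max", "mean"]))
```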
Fig. 3 Work orders distribution
Table 2 Duration in days between maintenance types

                    N        Minimum   Maximum   Mean
Days of PM          38,543   0.0       697.0     20.277
Days of CM          38,543   0.0       372.0     12.006
Days of OM          38,543   0.0       945.0     87.965
Valid N (listwise)  38,543
Figure 4 shows the distribution of vehicle model years. The unit head explains that only brand-new vehicles of the current model year are purchased from the authorized agency. However, the historical data cover a variety of new and old vehicles: seven vehicles with a model year of 2005 or earlier, 73 vehicles with model years between 2006 and 2010, 45 vehicles with model years between 2011 and 2015, and only three vehicles newer than 2015.
4.2 Prediction Model Building

The RapidMiner tool was used to build prediction models for two main variables in order to answer the research questions. The first variable is the maintenance type, which comes in the work order type field. The second is the issue, i.e., the failure of a system or subsystem of the vehicle, in case corrective maintenance is predicted.
Fig. 4 Vehicle model distribution across the years
The design therefore consists of two prediction models built separately. Many algorithms were used to determine which one predicts with better accuracy. Figure 5 shows the process for implementing the statistical and AI algorithms, which consists of six main steps: (1) data reading and merging; (2) selecting the variables to be processed and determining the predicted variable; (3) splitting the data into 90% for model training and 10% for testing using linear sampling, which guarantees that the oldest historical data are used to predict the most recent data; (4) training the model on the 90% of the data; (5) applying the test data to the trained model; and (6) checking the performance, verifying the prediction results, and weighting each variable's effect on the prediction. Figure 6 shows the weighted attributes for a specific algorithm; the variables, their weights, and their order depend on the selected ML algorithm, and the figure shows the order of the used variables and their effect and contribution to model building. Figure 7 shows a sample of prediction results with the confidence level of each prediction, which gives a good example of how algorithm results can be read and employed in the real-life management of fleet maintenance: since each prediction comes with a confidence value, predictions can be categorized by confidence level, tackling the highest-confidence ones first. Table 3 shows a matrix of the algorithms used to predict the work order type and the parameters used to build each prediction model; it shows that not all algorithms accept all kinds of parameters.
Fig. 5 Sample process for implementing statistical and AI algorithms
Fig. 6 Weighted attributes for a specific algorithm
When predicting the issue (the failing system or subsystem), the work order type is introduced as an additional parameter for all models.
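As a minimal illustration of steps (3) to (6) above, the following sketch reproduces a chronological 90/10 split and trains a gradient boosted trees classifier with scikit-learn. This is not the authors' RapidMiner process, and it reuses the hypothetical feature and label columns from the preprocessing sketch.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# 'wo' is the cleaned work order frame from the preprocessing sketch.
features = ["days_of_cm", "days_of_pm", "days_of_om", "meter_reading",
            "model", "vehicle_age", "wo_week"]
wo = wo.sort_values("wo_date")              # oldest records first
split = int(len(wo) * 0.9)                  # linear (chronological) 90/10 split
train, test = wo.iloc[:split], wo.iloc[split:]

clf = GradientBoostingClassifier(random_state=0)
clf.fit(train[features], train["wo_type"])  # predict the work order type

pred = clf.predict(test[features])
print("accuracy:", accuracy_score(test["wo_type"], pred))
print("class probabilities (confidence):", clf.predict_proba(test[features])[:3])
```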
Fig. 7 Sample of prediction results and confidence level
4.3 Model Validation

The accuracy percentage was used to evaluate the prediction models for both targets (work order type and issue). As per [21], there are three main measurements for machine learning classification: accuracy, recall, and precision, computed from the confusion matrix in Table 4, which contains four values: true positives, true negatives, false positives, and false negatives. Using the confusion matrix, the following formulas are used to evaluate the prediction:

Accuracy = (TP + TN)/(TP + TN + FP + FN)
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)

Higher accuracy, precision, and recall values mean a better prediction model [22, 23]. Table 5 shows the accuracy matrix for the Gradient Boosted Trees model, comparing the predicted with the real readings; it shows the true positives/negatives, the false positives/negatives, and the total accuracy percentage. Table 6 shows the accuracy percentage results of the applied models for both the prediction of the work order type (the failure of the vehicle) and the prediction of the issue (the failing system or subsystem).
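For a binary "corrective vs. not corrective" view, these formulas translate directly into code; a minimal sketch with illustrative counts (the numbers below are invented for demonstration, not taken from the study):

```python
# Illustrative counts only; not the study's actual confusion matrix.
tp, fn, fp, tn = 840, 160, 310, 690

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
print(f"accuracy={accuracy:.2%} precision={precision:.2%} recall={recall:.2%}")
```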
Table 3 Used parameters in each work order type prediction algorithm

The table matches each algorithm (Polynomial Regression, Linear Regression, Vector Linear Regression, Support Vector Machine, Support Vector Machine (Linear), W-Pace Regression, W-Isotonic Regression, Gradient Boosted Trees, Deep Learning, and Random Tree) against the variables it accepts as input: WO Date Release, In-Service, and Warranty End Date (date time); Days of CM, Days of PM, Days of OM, Meter Reading, Model, Vehicle Age, and WO Week (integer); and Group and Plate Number (string). Days of PM, Days of OM, Meter Reading, Model, Vehicle Age, and WO Week are used by all ten algorithms; In-Service and Warranty End Date by seven of them; Plate Number by two; and Group by none.
Table 4 The confusion matrix

                  Predicted positive      Predicted negative
Actual positive   True positives (TP)     False negatives (FN)
Actual negative   False positives (FP)    True negatives (TN)
Table 5 Accuracy matrix for gradient boosted trees
Table 6 Accuracy percentage results for applied models

Algorithm                         Prediction of work order type accuracy (%)   Prediction of issue accuracy (%)
Polynomial Regression             19.10                                        1.40
Linear Regression                 55.00                                        26.37
Vector Linear Regression          36.77                                        5.67
Support Vector Machine            55.03                                        4.07
Support Vector Machine (Linear)   54.99                                        9.77
W-PaceRegression                  65.76                                        30.04
W-IsotonicRegression              66.12                                        30.15
Gradient Boosted Trees            68.09                                        41.24
Deep Learning                     67.15                                        40.08
As Gradient Boosted Trees shows the highest accuracy, Ridgeway [24] summarizes Friedman's Gradient Boost algorithm, as shown in Fig. 8. Figure 8 can be explained as follows. First, determine the data to be used for training the model and the loss function used to evaluate how well the model fits the provided data. Then initialize the model with a specific value, usually the average of the target variable. Then start a loop to generate prediction trees, from 1 up to the number of required trees. Inside the loop, a residual is calculated for each record using the prediction of the model so far, which initially is just the average, as in formula (1).
Fig. 8 Friedman’s Gradient Boost algorithm
Then another regression tree is built to fit the residuals; each leaf of the tree may contain one or more residuals. Leaves with one value use that value directly, while leaves with more than one value use the average residual within that leaf. After that, a gradient descent step size called the learning rate, with a range between 0 and 1, is applied as in formula (2); a small learning rate helps produce better prediction results. The loop is then repeated for new trees; when the loop reaches the required number of iterations, the output prediction value comes from the combined contribution of each tree built in the loop, as shown in formula (3).
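As a hedged illustration of this loop, the sketch below implements the standard squared-loss variant of Friedman's gradient boosting with scikit-learn regression trees; it is a generic reconstruction, not a transcription of formulas (1)–(3) from Fig. 8.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Friedman-style gradient boosting for regression with squared loss."""
    f0 = np.mean(y)                        # initialize with the average of the target
    trees, pred = [], np.full(len(y), f0)
    for _ in range(n_trees):
        residuals = y - pred               # negative gradient of the squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        trees.append(tree)
        pred += learning_rate * tree.predict(X)  # small step toward the residuals
    return f0, trees

def predict(X, f0, trees, learning_rate=0.1):
    # Final prediction = initial value + weighted contribution of every tree.
    return f0 + learning_rate * sum(t.predict(X) for t in trees)
```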
5 Results and Discussion

Table 6 shows varying results and accuracies across algorithms. Some algorithms show very weak accuracy in predicting the work order type and the issue: for example, Polynomial Regression and Vector Linear Regression have less than 40% accuracy for WO type prediction and less than 10% for issue prediction. Other methods show moderate accuracy: Linear Regression reaches less than 60% accuracy for work order type prediction and less than 30% for issue prediction. In contrast, there is higher prediction accuracy for W-PaceRegression and W-IsotonicRegression, with WO type prediction above 65% and issue prediction above 30%. The best algorithm in accuracy for both work order type and issue prediction is Gradient Boosted Trees, with results of 68.09% for WO type and 41.24% for issues, with a recall of 84.23% and a precision of 67.72%. The recall value means that 84.23% of the corrective maintenance was predicted correctly and only 15.77% of vehicle breakdowns or system failures were missed, which is considered a very high prediction rate. On the other hand, the precision value of 68.89% means that 31.11% of the work orders predicted as corrective were in fact preventive or other maintenance. To predict a specific vehicle's failure, the vehicle number and a future date are provided to the built model; if the model predicts a corrective work order, the vehicle is expected to break down or have a failure on the provided date; otherwise, the model considers that no failure will happen to the vehicle. This answers the first research question and proves that vehicle breakdowns or failures can be predicted using historical maintenance data (non-sensor data) with an accuracy level of 68.09%. Regarding the second research question, the results show an issue prediction accuracy of 41.24%; given that there are 62 issue categories, this is also considered a good prediction.
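In deployment terms, such a query could look like the hedged sketch below, where build_feature_row is a hypothetical helper that assembles the model's input features (a single-row frame) for a given vehicle and date; neither it nor the plate format below comes from the chapter.

```python
from datetime import date

def will_break_down(clf, vehicle_plate, on_date, build_feature_row):
    """Return (is_corrective_predicted, confidence) for a vehicle on a future date."""
    row = build_feature_row(vehicle_plate, on_date)   # hypothetical feature assembly
    proba = clf.predict_proba(row)[0]
    label = clf.classes_[proba.argmax()]
    return label == "CM", proba.max()

# Example (illustrative values):
# failure, conf = will_break_down(clf, "H-12345", date(2023, 3, 1), build_feature_row)
```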
6 Conclusion and Recommendations

A fleet is considered one of the most critical assets an organization can have to provide its services and manage its operations. It therefore requires careful maintenance to keep the vehicles available for use. Utilizing historical maintenance data and applying AI algorithms to predictive maintenance will help in better understanding and predicting vehicle breakdowns: it will foresee the breakdown before it occurs and assist in recognizing what the issue is. These forecasts will increase the vehicles' available time and reduce maintenance costs. Many AI algorithms can be used to build predictive maintenance models; the best accuracy results come from the Gradient Boosted Trees algorithm. The IoT is recommended as an additional source of data: it provides detailed information about the condition of vehicle systems and can be stored in a well-structured format, which enables better prediction based on real data from the vehicles, with more attributes related to vehicle systems or subsystems. However, this paper is not designed to explain the causes of vehicle breakdowns or of system and subsystem failures. It is also limited to a small category of vehicles, which means the results cannot be generalized to the entire fleet. For future work, it is recommended to consider building a spare parts prediction model using the spare parts consumed in work orders.
7 Research Implications

The research implications focus on how this research can help the targeted domain and area of study, and on how the findings are valuable for subsequent research. The implications of this research come in two parts.
7.1 Practical Implications

This research shows how fleet managers can benefit from historical data stored over the years and employ machine learning to predict the failure of vehicles, as well as the failing primary system or subsystem. This gives fleet managers a chance to repair vehicles before they reach the breakdown stage, saving a lot of expense and downtime. Vehicles will be available more often, and maintenance costs will be reduced to the minimum. As a result, fleet managers can make better maintenance decisions and adjust the maintenance plan accordingly.
7.2 Theoretical Implications

This research is a starting point for building a data-driven decision-support framework for fleet management in the government sector of Dubai. Furthermore, as there is very little research covering fleet management in the UAE in general and in Dubai in particular, this research enriches the literature by conducting design science research focused on employing machine learning algorithms for fleet management within the government of Dubai. Acknowledgements This study is a part of a work submitted to The British University in Dubai.
References 1. A. Pantelias, Asset Management Data Collection for Supporting Decision Processes (Doctoral dissertation) (Virginia Tech, 2005) 2. N. Amruthnath, T. Gupta, A research study on unsupervised machine learning algorithms for early fault detection in predictive maintenance, in 2018 5th International Conference on Industrial Engineering and Applications (ICIEA) (2018) 3. S. Dain, Normal accidents: human error and medical equipment design. Heart Surg. Forum 5(3), 254–257 (2002) 4. F.L. Bronté, C.R. Pagotto, V.B.C. de Almeida, Predictive maintenance techniques applied in fleet oil management, in Proceedings of the 23rd ABCM International Congress of Mechanical Engineering (2015)
5. A. Masry, Z. Omri, N. Varnier, C. Morello, B. Zerhouni, Operating Approach for Fleet of Systems Subjected to Predictive Maintenance, in Euro-Mediterranean Conference on Mathematical Reliability (2019) 6. A. Chaudhary, Developing Predictive Models for Fuel Consumption and Maintenance Cost Using Equipment Fleet Data (2019) 7. K. Wang, Intelligent predictive maintenance (IPdM) system-Industry 4.0 scenario. W.I.T. Trans. Eng. Sci. 113, 259–268 (2016) 8. M. Monnin, B. Abichou, A. Voisin, C. Mozzati, Fleet historical cases for predictive maintenance. Int. Conf. Surveil. 6, 25–26 (2011) 9. R. Prytz, S. Nowaczyk, T. Rögnvaldsson, S. Byttner, Predicting the need for vehicle compressor repairs using maintenance records and logged vehicle data. Eng. Appl. Artif. Intell. 41, 139–150 (2015) 10. R. Prytz, in Machine learning methods for vehicle predictive maintenance using off-board and on-board data (Doctoral dissertation) (Halmstad University Press, 2014) 11. H. Qiao, T. Wang, P. Wang, S. Qiao, L. Zhang, A time-distributed spatiotemporal feature learning method for machine health monitoring with multi-sensor time series. Sensors (Basel) 18(9) (2018) 12. F.K. Choy, R.J. Veillette, V. Polyshchuk, M.J. Braun, R.C. Hendricks, Quantification of gear tooth damage by optimal tracking of vibration signatures. Int. J. Rotating Mach. 3(3), 143–151 (1997) 13. H. Raposo, J.T. Farinha, L. Ferreira, D. Galar, Dimensioning reserve bus fleet using life cycle cost models and condition based/predictive maintenance: a case study. Public Transp. 10(1), 169–190 (2018) 14. R. Zhao, R. Yan, Z. Chen, K. Mao, P. Wang, R.X. Gao, Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 115, 213–237 (2019) 15. R. Zhao, R. Yan, J. Wang, K. Mao, Learning to monitor machine health with Convolutional Bi-directional LSTM networks. Sensors (Basel) 17(2) (2017) 16. A. Chaudhuri, Predictive maintenance for industrial IoT of vehicle fleets using hierarchical modified fuzzy support vector machine. arXiv [cs.AI] (2018) 17. K. Peffers, T. Tuunanen, M.A. Rothenberger, S. Chatterjee, A design science research methodology for information systems research. J. Manag. Inf. Syst. 24(3), 45–77 (2007) 18. H. Alghanem, A. Mustafa, S. Abdallah, Knowledge and human development authority in Dubai (KHDA) open data: what do researchers want?,” in European, Mediterranean, and Middle Eastern Conference on Information Systems (Springer, Cham, 2019), pp. 58–70 19. H. AlGhanem, M. Shanaa, S. Salloum, K. Shaalan, The role of KM in enhancing AI algorithms and systems. Adv. Sci. Technol. Eng. Syst. J. 5(4), 388–396 (2020) 20. C. Emmanouilidis, L. Fumagalli, E. Jantunen, P. Pistofidis, M. Macchi, M. Garetti, Condition monitoring based on incremental learning and domain ontology for condition-based maintenance, in 11th International Conference on Advances in Production Management Systems (APMS) (2010) 21. D. Chicco, Ten quick tips for machine learning in computational biology. BioData Min. 10(1) (2017) 22. C. Goutte, E. Gaussier, A probabilistic interpretation of precision, recall and F-score, with implication for evaluation, in Lecture Notes in Computer Science (Springer, Berlin, 2005), pp. 345–359 23. H.S. Alghanem, R.H. Ajamiah, Arabic text summarization approaches: a comparison study. Int. J. Inform. Technol. Lang. Stud. 4(3) (2020) 24. G. Ridgeway, Generalized Boosted Models: A guide to the GBM package. Update 1(1) (2007) 25. K.W. Wait, B. Howard, Rail car predictive maintenance system. U.S. Patent Appl. 15 (2018)
Arabic Dialects Morphological Analyzers: A Survey Ridouane Tachicart, Karim Bouzoubaa, Salima Harrat, and Kamel Smaïli
Abstract Morphological analysis is a crucial component in natural language processing. For the Arabic language, many attempts have been conducted to build morphological analyzers. Despite the increasing attention paid to Arabic dialects, the number of morphological analyzers that have been built remains small compared to Modern Standard Arabic. In addition, these tools often cover only a few Arabic dialects, such as Egyptian, Levantine, and Gulf, and do not currently support all of them. In this paper, we present a literature review of morphological analyzers supporting Arabic dialects. We classify their building approaches and propose some guidelines to adapt them to a specific Arabic dialect.
Keywords Morphological analyzer · Arabic dialect · Lexicon · Natural language processing · Standard Arabic · Corpus · Annotation
R. Tachicart (B) Chouaib Doukkali University, El Jadida, Morocco e-mail: [email protected] K. Bouzoubaa Mohammadia School of Engineers, Mohammed V University, Rabat, Morocco e-mail: [email protected] S. Harrat · K. Smaïli Université Lorraine, Villers-lès-Nancy, Nancy, France e-mail: [email protected] K. Smaïli e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_11
1 Introduction

Arabic is a Semitic language spoken by more than 450 million people in the world.1 It is the official language of the Arab nation, covering 22 countries, and displays a collection of forms:

• The primary form is Classical Arabic (also known as Quranic Arabic), found in the Quran and in Jahiliyyah literature (the Arabic period before the arrival of Islam). It dates back to the seventh to ninth centuries, from the Umayyad and Abbasid times, where it was used in literary texts;
• Modern Standard Arabic (MSA) is the second form of Arabic, currently used in formal situations [1] such as education, government documents, broadcast news, etc. It has a strong presence in written Arabic texts since it is considered a high variety of Arabic;
• The set of Arabic dialects (ADs) is the third form of Arabic, considered the mother tongue of Arabic people. These dialects differ from each other and are usually used in informal venues such as daily communication, TV series and programs, commercial advertising, etc. Unlike MSA, ADs have no written standards, and users write dialectal words in many forms. These dialects are sparsely represented in written texts compared to MSA, in spite of their recent use on the Internet by Arabic people.

According to their similarities, ADs can be divided into two major groups: eastern Arabic and Maghreb Arabic dialects. The first category gathers Gulf (GLF), Egyptian (EGY), Levantine (LEV), Iraqi, and Yemeni, while the second includes Moroccan, Algerian, Tunisian, Libyan, and Mauritanian. Many works [2] showed that the dialects of the same geographical area (eastern or western) are close. For example, Moroccan people can understand Tunisian and Algerian people better than Egyptian or Syrian people. In addition, the dialect of the same country can be divided into sub-dialects according to geographical regions. Nowadays, Arabic dialects are widely used on the Internet; as an illustration, Arabic people increasingly express their opinions in social media using these dialects. In Morocco, some websites2,3 are examples where the text is completely written in the local dialect. This is why there is an increasing interest in processing Arabic dialects and building the corresponding NLP tools, such as sentiment analysis, opinion mining, and automatic identification. It is known that building NLP tools usually relies on the availability of a corresponding morphological analyzer (MA), which in turn relies on the availability of resources. However, there is a general lack of resources for most Arabic dialects. As a result, there is slow progress in building the corresponding MAs and, consequently, slow progress in building dialectal NLP tools.
1. http://www.vistawide.com/arabic/arabic.htm
2. http://www.goud.ma
3. http://www.lsvbdarija.com
Hence, the availability of these tools is still at an early stage. Concerning the building of Arabic Dialect Morphological Analyzers (ADMAs), there have been some attempts focusing on particular Arabic dialects and following different approaches. In the optimal case, an ideal ADMA should deal with all existing ADs regardless of their written form (Arabic or Latin script) and provide all correct analyses with high accuracy in the fastest time. An ideal ADMA should also be easy and quick to integrate into any NLP tool. However, even if some efforts have been made and others are still ongoing, they are far from the ideal ADMA, given that building ADMAs raises many challenges that can be classified into two categories. The first category gathers the short-term challenges raised while building ADMAs. The second category represents a set of challenges in terms of performance, coverage of the different dialects, and integration in NLP applications. In this paper, we provide a wide literature review of the existing ADMAs in order to provide a quick reference guide to Arabic dialect morphological analyzers. The remainder of this paper is organized as follows: Sect. 2 presents related works in the field of Arabic dialect morphological analyzers. Section 3 is a synthesis of ADMA building that gives an overview of the first category of ADMA building challenges; it also describes the followed approaches and presents the used solutions. In Sect. 4, we detail the common measures used to evaluate an ADMA. Then, in Sect. 5, we discuss the open challenges; finally, we conclude the paper with some observations in Sect. 6.
2 State of the Art

Morphological analysis of Arabic dialects is a relatively recent area of research that has gained progressive attention during the last decade. Several works have addressed Arabic dialect morphological analysis. However, we noticed that some Arabic dialects are targeted by more than one morphological analyzer, while others are scarcely addressed or not covered at all. In addition, we can find mono-dialect and multi-dialect morphological analyzers. In the following, we present the current state of the art in Arabic dialect morphological analysis.
2.1 Egyptian Dialect
The authors of [3] developed CALIMAEGY, a tool for the morphological analysis of the Egyptian dialect (EGY), which relies on an existing Egyptian lexicon. The latter follows the POS guidelines used by the Linguistic Data Consortium for Egyptian and accepts a variety of orthographic spelling forms that are normalized to CODA, the conventional orthography of Arabic dialects [3]. CALIMA has 100k stems corresponding to 36k lemmas. The evaluation of this analyzer is performed using
3300 manually annotated words of an Egyptian corpus and gives an accuracy in terms of POS tags that exceeds 84%. In a later work, the authors of [4] extended the MSA version of the MADA analyzer [5] to the Egyptian dialect. To this end, they replaced MADA annotations with CALIMAEGY annotations. Then, they added new prediction models that can predict, according to context, some morphological features such as the POS, the mood, the voice, etc. Using a test set containing 2445 annotated words, the evaluation of this analyzer shows an accuracy of 64.7%. Salloum and Habash [7] built ADAM on top of the SAMA [6] database. It is a new MA that can analyze both the Egyptian and Levantine dialects. In their approach, they extended the SAMA database by adding dialectal affixes and clitics, in addition to a set of handwritten rules. ADAM outputs the analyzed text as lemma and feature-value pairs, including clitics. This analyzer is intended for improving machine translation performance; for example, it is used as part of ELISSA [8], which translates EGY and LEV text to MSA, and in a system of dialect identification [9]. The experimental results obtained using 4201 words showed that the out-of-vocabulary rate is 16.1% for Levantine texts and 33.4% for Egyptian texts. Starting from the MADA experience, Pasha et al. developed a new MA called MADAMIRA [9] that can analyze both MSA and EGY text. In addition to what was available with MADA, one important benefit of MADAMIRA is the analysis ranking component, which scores each word's analyses according to the MADA model predictions. MADAMIRA can also sort the analyses in the output text based on their scores. The authors claim that MADAMIRA can easily be integrated into web applications. Using a test corpus composed of 20k words, the reached accuracy exceeds 83%. Finally, the authors of [10] combined two MSA morphological analyzers, MADAMIRA and FARASA [11], to build the YAMAMA morphological analyzer for the Egyptian dialect. YAMAMA produces the same output as MADAMIRA and reaches 79% accuracy. Despite its lower accuracy, the authors affirm that YAMAMA is faster than MADAMIRA.
2.2 Levantine Dialect
In addition to ADAM described above, the Levantine dialect (LEV) was targeted by the MAGEAD analyzer [12], whose authors focused their efforts on modeling Arabic dialects directly. MAGEAD can decompose word forms into templatic morphemes and relate morphemes to strings. Its analyses rely on lexemes and features, where a lexeme is defined as a triple containing a root, a meaning index, and a morphological behavior class. The first version of this dialect analyzer covers the Levantine dialect in addition to MSA. Using a test corpus composed of 3157 words, the authors claimed an accuracy of 56%. Similarly, Eskander et al. [13] started from annotated corpora and MADAMIRA models to build two morphological analyzers: one for EGY (ALMOR-EGY) and the
other for LEV (ALMOR-LEV). They used the Egyptian Arabic corpora (Maamouri et al. 2012) as EGY data and the Curras corpus of Palestinian Arabic [14] as LEV data. These corpora are morphologically annotated in a style similar to the annotations in MADAMIRA. The morphological analyzers were created for different training sizes, from 5k up to 135k words of these corpora. ALMOR-EGY reached an accuracy of 90%, while ALMOR-LEV reached only 87%.
2.3 Gulf Dialects
To build a morphological analyzer that deals with Gulf dialects, Almeman and Lee (2012) used Alkhalil [15] and added dialect affixes to its database. They split their processing into two steps. In the first, Alkhalil analyzes dialect words sharing the same stem as MSA words; when no analysis is produced, the input text is segmented, the frequencies of the different segments are estimated on the Internet, and these items are used to guess the correct base form. The overall system achieved an accuracy of 69% on a corpus composed of 2229 words. Khalifa et al. [16] developed CALIMA-GLF, a morphological analyzer for Emirati (EMR) Arabic verbs. They used two resources that provide explicit linguistic knowledge: the first is a database gathering a collection of roots, patterns, and affixes, while the second is a lexicon specifying verbal entries with their roots and patterns. By merging these two resources into one model, all possible analyses are provided, covering more than 2600 EMR verbs following the MADAMIRA representation. The evaluation of CALIMA-GLF on 620 verbs gives an accuracy of 81%. In later work, the authors of [17] presented a morphological analyzer and disambiguator for Gulf Arabic dialects (GLF-MA). The morphological analyzer follows templatic morphology, where roots and patterns are key elements, and is built upon a structure consisting of a set of tables for affixes, stems, and their compatibilities. For disambiguation, the authors used the Gumar corpus [18] and the fastText model to train the disambiguator. The evaluation, using part of the Gumar corpus, showed that GLF-MA reaches an accuracy of 89.2%.
2.4 Yemeni Dialect
Al-Shargi et al. [19] made an effort to annotate dialect corpora in order to adapt MADAMIRA to the Yemeni dialect. They used a tool named DIWAN [20] to manually annotate a corpus collected from both online and printed materials. The annotated corpus (about 32k words) was then used to adapt an MSA database to the Yemeni dialect by extending it to cover this dialect. The overall evaluation of the new analyzer reaches an accuracy of 69.3%.
2.5 Tunisian Dialect
Zribi et al. [21] adapted the Alkhalil morphological analyzer [15] to the Tunisian dialect. They started by building a corpus and a lexicon by recording speech and manually transcribing some radio and TV broadcasts. They integrated the Tunisian lexicon into the Alkhalil process and then added Tunisian linguistic rules such as roots and patterns; this task was time-consuming, since each root had to be combined with its corresponding patterns. The system reached an accuracy of 77%. In another version of MAGEAD, Hamdi et al. [22] extended this analyzer to cover the Tunisian dialect. Their approach relies on converting dialectal text to a pseudo-MSA form. Using a corpus containing 2455 words, their analyzer achieved an accuracy of 82%.
2.6 Algerian Dialect
Harrat et al. [23] extended the BAMA analyzer [24] to the Algerian dialect using an Algerian lexicon, adding the necessary affixes and stems to the BAMA database. The new analyzer reached an accuracy of 69% when evaluated on a test corpus containing 1618 words.
2.7 Summary
Finally, we summarize our state of the art by listing in Table 1 all known ADMAs with their claimed lexical coverage (LC), their extensibility (EXT) to other ADs, their claimed accuracy (ACC), whether they support disambiguation (DS), and the test corpus size (TCS) in number of words. Given an input text, an ADMA may produce many analyses for the same word, yet there is only one correct analysis once the context of the sentence is taken into consideration; hence, some ADMAs take the context into account during disambiguation and provide the suitable analysis or rank their results. In Table 1, we use the "–" mark when a value is not stated. Examining the content of this table, the listed ADMAs mainly have the advantage of extensibility to other ADs, with nine of them being easy to extend; however, many ADMAs have moderate accuracy and do not support disambiguation. For this reason, we can affirm that we are currently a long way from the ideal ADMA.
Table 1 Synthesis of the surveyed ADMAs

AD    ADMA       LC (a)   EXT (b)    ACC % (c)   DS (d)   TCS (e)
EGY   CALIMA     Medium   Easy       84          No       3300
      MADA       –        –          65          Yes      2445
      ADAM       –        Easy       67          No       4201
      MADAMIRA   –        Easy       83          No       20,000
      YAMAMA     High     –          79          No       –
      ALMOR      Medium   Easy       90          No       –
LEV   MAGEAD     High     Not easy   56          No       3157
      ADAM       –        Easy       84          No       4201
      ALMOR      Medium   –          87          Yes      –
TUN   Alkhalil   High     Not easy   77          No       –
      MAGEAD     –        Easy       82          No       2445
YEM   ALMOR      High     Easy       69          No       –
GLF   Alkhalil   High     Easy       69          No       2229
      CALIMA     –        –          81          No       620
      GLF-MA     –        –          89.2        Yes      –
ALG   BAMA       –        Easy       69          No       1618

a Lexical coverage: defined by the vocabulary size as claimed by the authors
b Extensibility to other ADs: defined by the effort needed to extend a given ADMA to other ADs, as indicated by the authors
c Claimed accuracy
d Disambiguation
e Test corpus size
3 ADMAs Synthesis

From the above-surveyed works, we can report that the different ADMAs faced short-term challenges for which they proposed solutions adopting different approaches. In the following, we first list and explain those challenges, and then highlight the adopted approaches and solutions.
3.1 Challenges
• Varieties of Arabic dialects: a set of Arabic dialects exists with linguistic differences at different levels, especially the lexical and phonological ones. Moreover, each Arabic dialect displays several sub-dialects spoken in different regions. The NLP community therefore faces the problem of the language variability of Arabic dialects.
• Using Arabic and Latin letters (Arabizi): since Arabic dialects are known as spoken languages and have no standards except CODA, Arabic speakers use
either Arabic or Latin letters to write their local dialects, or combine Arabic with Latin letters in some cases. Moreover, they often use Latin letters in social media as well as in online chat and Short Messaging System (SMS) [25], producing massive amounts of Arabizi every day. However, the majority of current ADMAs cannot process this type of text because, like MSA analyzers, they only consider Arabic text written in Arabic letters.
• Orthographic ambiguity: ADs have no standards, and spelling inconsistency is a big challenge. Whether Arabic or Latin letters are used, the same word may be written in different forms by different users. For example, in Maghreb Arabic dialects, the word بقرة (cow) may also be written as بكرة, depending on the speaker. This phenomenon is even more pronounced when using the Latin script, as illustrated in [26], where the authors observed that the word يرحمك (have mercy on you) may be represented in 66 different ways using Latin script.
• Lack of AD resources: building ADMAs needs linguistic resources such as corpora and lexicons. Despite the building of some AD resources, they target only a few Arabic dialects and are, in the majority of cases, not publicly available.
• Code-switching: native speakers of Arabic typically tend to use a mixture of MSA and ADs (whether using Arabic or Latin script) in the same context, especially in social media [27, 28]; in some cases they also mix in foreign languages such as French and English. This situation increases ADMAs' error rate when analyzing such text, because of the MSA content.
3.2 Adopted Approaches
From our previous review, we can consider that the works performed to provide morphological analyzers for Arabic dialects generally fall into two camps. The first trend gathers solutions that model Arabic dialects directly. They are built from scratch, contain rich linguistic representations and morphological rules, rely on lexicons, and compile the effects of the morphemic, phonological, and orthographic rules into the lexicon itself. As an illustration, the MAGEAD and CALIMA analyzers follow this direction. The benefit of this approach is the valuable performance these systems achieve; its main drawback, however, is the substantial (and time-consuming) effort needed to develop them and to extend them to other Arabic dialects. The second direction proposes adapting existing MSA morphological analyzers to ADs, through either superficial or deep adaptation. The first is a light modification, usually of the database or the lexicon included in the MSA analyzer; for example, in the work of [29], only the affix table was modified when adapting Alkhalil to Arabic dialects. The second relies on a deep process concerning, in addition to the database, the language modeling and the algorithm used in the MSA analyzer.
Table 2 Arabic dialect morphological analyzers by approaches

From scratch (27%)   Superficial adaptation (40%)   Deep adaptation (33%)
MAGEAD-LEV           Alkhalil-GLF                   MADA-ARZ
CALIMA-EGY           ALMOR-YEM                      ADAM
CALIMA-GLF           BAMA-ALG                       MADAMIRA
GLF-MA               MAGEAD-TUN                     AlKhalil-TUN
                     ALMOR-LEV                      YAMAMA-EGY
                     ALMOR-EGY
As an illustration, MADAMIRA falls into this latter category. We provide in Table 2 the surveyed AD morphological analyzers categorized according to the adopted approach. If we consider that deep and superficial adaptation share the same underlying concept (adaptation of MSA analyzers to ADs), we can affirm the dominance (73% = 33% + 40%) of adaptation over building from scratch. Hence, it seems that researchers prefer to adapt existing MSA analyzers to Arabic dialects rather than build them from scratch; in fact, they benefit from the closeness between MSA and the Arabic dialects, especially at the lexical level. To illustrate this, 81% of the Moroccan dialect lexicon is borrowed from Arabic according to [30].
3.3 Proposed Solutions
In the surveyed ADMAs, many efforts were made to overcome the drawbacks and face the challenges:
• Focusing on main Arabic dialects: given that processing ADs is a relatively recent NLP field compared to MSA, and that several sub-dialects are spoken within the same Arabic country depending on the region, researchers focused on the main sub-dialect of each country and plan to take other sub-dialects into account in future work. For instance, there are several sub-dialects in Morocco spoken according to geographical area, but the sub-dialects spoken in the Rabat and Casablanca regions are the most used, especially on TV/radio and social media.
• Using transliteration systems: to handle the problem of Arabizi, some works introduced a transliteration module into their systems, such as the work of May et al. [31]. These systems convert Arabic dialect text written in Latin letters into Arabic letters and then perform the corresponding processing; as a consequence, Arabizi can be handled like Arabic text in the same morphological analyzer.
• Conventional orthography: because Arabic dialects have no standards, which impedes their processing, several works proposed new rules towards the standardization of AD orthography. One example is the work of Habash et al. [32], where the authors proposed a unified framework called CODA to write all ADs in Arabic script based on MSA-AD similarities.
• Building new AD resources: to overcome the lack of AD resources, it is of great importance that researchers make efforts to build and provide resources for all Arabic dialects. This is why some researchers recently devoted their efforts to assembling all the available AD resources, as in the works of [14, 33], in order to provide a starting point for other researchers, while others started from scratch and used web mining, or exploited the similarities between MSA and ADs, to build new AD resources, as in the work of [23].
• Using Language Identification (LID) systems: the existence of code-switching in Arabic text is a major issue that increases ADMAs' error rate. To address it, several works introduced LID systems to distinguish between MSA and AD content, and thus consider only AD text in the processing phase, such as the AIDA system [34] and the work of Tachicart et al. [35].
4 Evaluation of ADMAs

After explaining the short-term challenges and presenting a broad comparison of the surveyed ADMAs, it is necessary to present the common metrics used to evaluate the performance of a given ADMA. From a purely technical point of view, there are four common metrics for evaluating NLP morphological analyzers: precision, recall, accuracy, and F-measure. Indeed, multiple evaluation metrics are usually combined, given that an analyzer may perform well on one metric and poorly on another; to ensure that an analyzer operates correctly and optimally, it is important to use all of them. These metrics are calculated from the parameters True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN), as described in Table 3. Based on these parameters, precision is defined as the number of correct analyses returned by the morphological analyzer compared to the total number of produced analyses. It can be calculated as in the following formula:

$$\text{Precision} = \frac{TP}{TP + FP}$$
Table 3 ADMA analysis parameters

        Positive                                   Negative
True    Correct analyses produced by the ADMA     Incorrect analyses identified by the ADMA
False   Incorrect analyses returned by the ADMA   Correct analyses not returned by the ADMA
Moreover, recall is defined as the number of correct analyses returned by the morphological analyzer compared to the correct analyses that were expected to be returned. It can be calculated as in the following formula:

$$\text{Recall} = \frac{TP}{TP + FN}$$
Higher precision means that a morphological analyzer returns more relevant analyses than irrelevant ones, while higher recall means that it returns most of the relevant results. It is therefore important to consider both precision and recall: precision gives indicative information about quality, whereas recall can be seen as a measure of quantity. In some situations, achieving a high recall is more important than achieving a high precision, or vice versa; in such cases, it is easy to compare the two metrics and pick the more relevant one. In other situations, precision and recall are equally important, and we use the accuracy and F-measure metrics. As defined here, accuracy expresses the proportion of correct analyses among all produced and expected analyses (note that this formulation omits true negatives). It is calculated as in the following formula:

$$\text{Accuracy} = \frac{TP}{TP + FP + FN}$$
The F-measure, defined as the harmonic mean of precision and recall, simplifies the evaluation: instead of balancing precision and recall separately, we can aim for a good F-measure, which is indicative of good precision and good recall values as well:

$$\text{F-measure} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$
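To make these definitions concrete, the following minimal Python sketch computes the four metrics exactly as defined above from TP/FP/FN counts; the counts themselves are purely illustrative.

# Minimal sketch of the chapter's four metrics; the accuracy formula,
# like the one above, omits true negatives.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def accuracy(tp, fp, fn):
    return tp / (tp + fp + fn)

def f_measure(p, r):
    return 2 * p * r / (p + r)

tp, fp, fn = 80, 15, 20  # illustrative counts
p, r = precision(tp, fp), recall(tp, fn)
print(f"P={p:.3f} R={r:.3f} Acc={accuracy(tp, fp, fn):.3f} F={f_measure(p, r):.3f}")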
Given that the previous metrics do not consider the time taken to process the input text (run time), Jaafar et al. proposed a new metric called the GMscore [36]. The goal of this metric is to evaluate the global behavior of a morphological analyzer by combining both results and run time in one formula. Hence, to evaluate a given morphological analyzer, GMscore gathers accuracy, run time, and the consideration of some morphological tags. It is obtained by applying the following formula:

$$GM_{score} = \frac{RT}{AC + STg + \alpha \cdot ATg}$$
where
• RT is the run time,
• AC is the accuracy of the morphological analyzer,
• STg is the total number of standard tags; following [36], the standard tags are the vowelized form, stem, pattern, root, POS, prefix, and suffix,
• ATg is the number of additional tags considered by the morphological analyzer, such as gloss, person, etc.,
• α ∈ [0, 1] is a parameter used to decrease or increase the weight of ATg in this formula.

According to the formula, the morphological analyzer with the lowest GMscore is considered the best; moreover, when the GMscore tends to zero, the morphological analyzer is considered perfect.
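As an illustration, the following is a minimal Python sketch of the GMscore exactly as given above; all input values are illustrative, not measurements from any surveyed system.

# GMscore [36]: run time divided by accuracy plus tag counts; lower
# (closer to zero) is better.
def gm_score(run_time, accuracy, standard_tags, additional_tags, a=0.5):
    # a must lie in [0, 1]; it weights the contribution of additional tags
    return run_time / (accuracy + standard_tags + a * additional_tags)

# e.g. an analyzer taking 12 s, 84% accurate, with 7 standard and 3 extra tags
print(gm_score(run_time=12.0, accuracy=0.84, standard_tags=7, additional_tags=3))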
5 Discussion

Following the synthesis of ADMAs with their challenges, adopted approaches, and proposed solutions, we now present a more detailed discussion from different points of view.

First, as regards performance, MSA analyzers reach high performance compared to AD analyzers, thanks to several years of research on MSA: the accuracy of AlKhalil MA [37], for example, exceeds 95%, whereas the most efficient ADMA reaches almost 90%. Moreover, there is a large difference in accuracy among the known ADMAs: one set of AD analyzers reaches acceptable accuracies, whereas another set reaches low accuracies, even when these analyzers are extended from MSA ones. This can be explained by the fact that MSA presents a high level of standardization and fixed syntactic and grammatical rules, whereas Arabic dialects continuously integrate new lexical lemmas and grammatical rules, especially from foreign languages.

Secondly, when considering AD coverage, existing AD analyzers do not cover all Arabic dialects. In addition, some covered dialects, such as Egyptian and Levantine, are strongly addressed compared to others. This may be explained by the fact that Egyptian and Levantine dialects are more popular than other Arabic dialects; indeed, the first Arabic dialect lexicons addressed these dialects, which helped deal with the lack of resources and build the corresponding analyzers. Nevertheless, other AD lexicons are increasingly available, which can be useful in the future to address the remaining Arabic dialects in morphological analysis.

Finally, to the best of our knowledge, these AD analyzers have not yet been used in large-scale NLP systems. Currently, they are only integrated into a few NLP tools dealing with machine translation or automatic language identification. In fact, processing Arabic dialects is still at an earlier stage compared to MSA; as stated in Sect. 3, researchers have so far focused their effort on building resources and basic NLP systems such as sentiment analysis systems, morphological analyzers, and machine translation. Thus, we can expect that more attention will be paid to advanced NLP systems in the future, thanks to the growing availability of resources and especially of morphological analyzers, which are important to this end.
6 Conclusion and Perspectives

In this paper, we presented a literature review of Arabic dialect morphological analyzers. We provided a synthesis and classified them according to their building approaches. We believe that addressing new Arabic dialects depends on the existence of the necessary Arabic dialect resources.

As regards extending MSA analyzers to ADs, we described some directions for adapting the two most important MSA morphological analyzers, MADAMIRA and Alkhalil. On the one hand, concerning the former, it is necessary to replace the MADAMIRA lexicon with the corresponding dialect lexicon, and to create the AD language model and the AD statistical classifiers using AD training resources; these components are used to predict common morphological features and to rank the produced analyses. On the other hand, to adapt the Alkhalil analyzer to new ADs, it is necessary to integrate a dataset composed of the roots and patterns of the dialect to be processed, and then to express all possible combinations between them in order to represent all possible dialect forms. Note that the adaptation of this analyzer requires considerably more effort than that of MADAMIRA.

On a theoretical level, this review of existing dialect analyzers allows interested researchers (i) to get an overview of the current challenges and the approaches adopted, (ii) to be informed of the comparison with MSA analyzers, (iii) to know which dialects have been considered so far, and (iv) to know in which NLP applications these dialect analyzers are used. On a practical level, this review gives the interested researcher a guideline on the strategy to follow, in terms of resources and algorithms, to develop either a new analyzer or to extend the coverage of an existing system to other dialects.
References

1. S.A. Salloum, M. Al-Emran, K. Shaalan, A survey of lexical functional grammar. Int. J. Comput. Network Technol. 4(3), 141–147 (2016)
2. K. Abu Kwaik, M. Saad, S. Chatzikyriakidis, S. Dobnika, A lexical distance study of Arabic dialects, in The Fourth International Conference on Arabic Computational Linguistics (ACLING'18), Dubai, United Arab Emirates (2018)
3. N. Habash, R. Eskander, A. Hawwari, A morphological analyzer for Egyptian Arabic, in The Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology, Montréal, Canada (2012)
4. N. Habash, R. Roth, O. Rambow, R. Eskander, N. Tomeh, Morphological analysis and disambiguation for dialectal Arabic, in The 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT'13), Atlanta, USA (2013)
5. N. Habash, O. Rambow, R. Roth, MADA+TOKAN: a toolkit for Arabic tokenization, diacritization, morphological disambiguation, POS tagging, stemming and lemmatization, in The 2nd International Conference on Arabic Language Resources and Tools (MEDAR'09), Cairo, Egypt (2009)
6. D. Graff, M. Maamouri, B. Bouziri, S. Krouna, S. Kulick, T. Buckwalter, Standard Arabic morphological analyzer (SAMA) version 3.1, in Linguistic Data Consortium LDC2009E73 (2009)
7. W. Salloum, N. Habash, ADAM: analyzer for dialectal Arabic morphology. J. King Saud Univ. Comput. Inform. Sci. 4(26), 372–378 (2014)
8. N. Habash, W. Salloum, Elissa: a dialectal to standard Arabic machine translation system, in The 24th International Conference on Computational Linguistics (COLING'12), Mumbai, India (2012)
9. A. Pasha, M. Al-Badrashiny, A. ElKholy, R. Eskander, M. Diab, N. Habash, M. Pooleery, O. Rambow, R. Roth, MADAMIRA: a fast, comprehensive tool for morphological analysis and disambiguation of Arabic, in The Ninth International Conference on Language Resources and Evaluation, Reykjavik (2014)
10. S. Khalifa, N. Zalmout, N. Habash, YAMAMA: yet another multi-dialect Arabic morphological analyzer, in The 26th International Conference on Computational Linguistics, Osaka (2016)
11. K. Darwish, H. Mubarak, Farasa: a new fast and accurate Arabic word segmenter, in Tenth International Conference on Language Resources and Evaluation, Portorož, Slovenia (2016)
12. N. Habash, O. Rambow, MAGEAD: a morphological analyzer and generator for the Arabic dialects, in 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (2006), pp. 681–688
13. R. Eskander, N. Habash, O. Rambow, A. Pasha, Creating resources for dialectal Arabic from a single annotation: a case study on Egyptian and Levantine, in COLING 2016, 26th International Conference on Computational Linguistics, Osaka, Japan (2016)
14. M. Jarrar, N. Habash, F. Alrimawi, D. Akra, N. Zalmout, Curras: an annotated corpus for the Palestinian Arabic dialect. Lang. Resour. Eval. I(10579), 1–31 (2016)
15. A. Boudlal, A. Lakhouaja, A. Mazroui, A. Meziane, M. Ould Abdallahi Ould Bebah, M. Shoul, Alkhalil Morpho Sys1: a morphosyntactic analysis system for Arabic texts, Dec 2010
16. S. Khalifa, S. Hassan, N. Habash, A morphological analyzer for Gulf Arabic verbs, in The Third Arabic Natural Language Processing Workshop, Valencia, Spain (2017)
17. S. Khalifa, N. Zalmout, N. Habash, Morphological analysis and disambiguation for Gulf Arabic: the interplay between resources and methods, in The 12th Conference on Language Resources and Evaluation (LREC 2020), Marseille, France (2020)
18. S. Khalifa, N. Habash, F. Eryani, O. Obeid, D. Abdulrahim, M. Al Kaabi, A morphologically annotated corpus of Emirati Arabic, in 11th International Conference on Language Resources and Evaluation, Miyazaki, Japan (2018)
19. F. Al-Shargi, A. Kaplan, E. Eskander, N. Habash, O. Rambow, Morphologically annotated corpora and morphological analyzers for Moroccan and Sanaani Yemeni Arabic, in 10th Language Resources and Evaluation Conference (LREC 2016), Portorož, Slovenia, May 2016
20. F. Al-Shargi, O. Rambow, DIWAN: a dialectal word annotation tool for Arabic, in The Second Workshop on Arabic Natural Language Processing (WANLP'15), Beijing, China (2015)
21. I. Zribi, M. Ellouze Khemakhem, L. Hadrich Belguith, Morphological analysis of Tunisian dialect, in International Joint Conference on Natural Language Processing, Nagoya (2013)
22. A. Hamdi, G. Núria, N. Alexis, N. Habash, POS-tagging of Tunisian dialect using standard Arabic resources and tools, in Proceedings of the Second Workshop on Arabic Natural Language Processing, Beijing (2014)
23. S. Harrat, K. Meftouh, M. Abbas, K. Smaïli, Building resources for Algerian Arabic dialects, in 15th Annual Conference of the International Speech Communication Association (Interspeech), Singapore (2014)
24. T. Buckwalter, Buckwalter Arabic morphological analyzer version 1.0 (2002)
25. A. Bies, Z. Song, M. Maamouri, S. Grimes, H.L. Jonathan Wright, S. Strassel, N. Habash, R. Eskander, O. Rambow, Transliteration of Arabizi into Arabic orthography: developing a parallel annotated Arabizi-Arabic script SMS/chat corpus, in EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), Doha, Qatar (2014)
26. K. Abidi, K. Smaïli, An empirical study of the Algerian dialect of social networks, in International Conference on Natural Language, Signal and Speech Processing (ICNLSSP'17), Casablanca, Morocco (2017)
27. H. Elfardy, M. Diab, Token level identification of linguistic code switching, in 24th International Conference on Computational Linguistics (COLING 2012), Mumbai (2012)
28. N. Al-Qaysi, M. Al-Emran, Code-switching usage in social media: a case study. Int. J. Inform. Technol. Lang. Stud. 1(1), 25–38 (2017)
29. K. Almeman, M. Lee, Towards developing a multi-dialect morphological analyser for Arabic, in 4th International Conference on Arabic Language Processing (CITALA'12), Rabat (2012)
30. R. Tachicart, K. Bouzoubaa, H. Jaafar, Lexical differences and similarities between Moroccan dialect and Arabic, in 4th IEEE International Colloquium on Information Science and Technology (CiSt), Tangier (2016)
31. J. May, Y. Benjira, A. Echihabi, An Arabizi-English social media statistical machine translation system, in The Eleventh Biennial Conference of the Association for Machine Translation in the Americas, Vancouver, Canada (2014)
32. N. Habash, M. Diab, O. Rambow, Conventional orthography for dialectal Arabic, in The Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey (2012)
33. M. Diab, N. Habash, O. Rambow, M. Altantawy, Y. Benajiba, COLABA: Arabic dialect annotation and processing, in LREC Workshop for Language Resources (LRs) and Human Language Technologies (HLT) for Semitic Languages: Status, Updates, and Prospects (2010), pp. 66–74
34. H. Elfardy, M. Al-Badrashiny, M. Diab, AIDA: identifying code switching in informal Arabic text, in 1st Workshop on Computational Approaches to Code Switching, Doha, Qatar (2014)
35. R. Tachicart, K. Bouzoubaa, S.L. Aouragh, H. Jaafar, Automatic identification of the Moroccan Colloquial Arabic, in 6th International Conference on Arabic Language Processing (ICALP 2017), Fez, Morocco (2017)
36. Y. Jaafar, K. Bouzoubaa, A. Yousfi, R. Tajmout, H. Khamar, Improving Arabic morphological analyzers benchmark. Int. J. Speech Technol. 19(2), 259–267 (2016)
37. M. Boudchiche, A. Mazroui, M. Ould Abdallahi Ould Bebah, A. Lakhouaja, L'Analyseur Morphosyntaxique AlKhalil Morpho Sys 2, in 1ère Journée Doctorale Nationale sur l'Ingénierie de la Langue Arabe (JDILA'14), Rabat, Maroc (2014)
The Large Annotated Corpus for the Arabic Language (LACAL)

Abdellah Yousfi, Ahmed Boumehdi, Saida Laaroussi, Rania Makoudi, Si Lhoussain Aouragh, Hicham Gueddah, Brahim Habibi, Mohamed Nejja, and Iazi Said
Abstract Annotated corpora have an important role in the NLP field. They are used in almost all NLP applications: automatic dictionary construction, text analysis, information retrieval, machine translation, etc. Annotated corpora are the basis of the training phase of NLP systems; without them, it is difficult to build an efficient system that takes into account all linguistic variations and phenomena. In this paper, we present the annotated corpus we developed. This corpus contains more than 12 million different words labeled with different types of labels: syntactic, morphological, and semantic. This large corpus adds value to the Arabic NLP field and will certainly improve the quality of the training phase of Arabic NLP systems. Moreover, it can serve as a suitable corpus for testing and evaluating the quality of these systems.

Keywords Arabic · NLP · Large corpus · Field · Period · Annotated
1 Introduction

According to Kilgarriff, the history of corpus lexicography can be divided into four periods [1, 2]. The first period is before the advent of the computer; at that time, corpora existed in the form of dictionaries and paper documents (books, articles, etc.), such as the Lissan-Alarab (1290 AD) and the Taj-Alarouss (1730–1790 AD). The second period begins with the first edition of the Collins COBUILD Dictionary
(1987). This dictionary was the first of a new generation of dictionaries based on real examples of English texts (spoken and written English). The third period corresponds to the period of large corpora, which required the use of powerful computer tools. The last period began when lexicographers started to perform queries on computers to obtain data according to their needs. The new field of artificial intelligence also helped to grow these corpora by defining new dictionaries; however, some lexicographers, such as Landau [3], believe that artificial intelligence has done more harm than good by generating low-quality dictionaries.

Annotated corpora are very useful to the NLP field. They are used in various NLP applications such as morphological analysis, syntactic analysis, semantic analysis, information retrieval, machine translation, data mining, etc. There are many large annotated corpora: the Bank of English corpus, the Lancaster-Oslo-Bergen corpus, the Open American National Corpus, etc.

The main goal of this paper is to present a first work on the construction of a large annotated corpus for Arabic. This first version of the corpus contains more than 12 million different Arabic words extracted from many Arabic websites and books covering various fields and periods. Each word is associated with a set of labels that provide morphological, syntactic, and semantic information about that word.
2 Related Works

The creation of corpora is a delicate step in the NLP field, and many researchers from different backgrounds (linguists, computer scientists, lexicographers, etc.) have been actively involved in this area. For the English language, several corpora have been built, such as [4]:
• The Brown Corpus [5] and the Lancaster-Oslo-Bergen (LOB) corpus [6, 7]. These corpora did not contain more than 1 million words.
• In 2002, a large corpus called the Bank of English was designed; it contains about 450 million words.
• The Open American National Corpus, a structured set of open-source texts developed to help researchers in the field of NLP. The first version of this corpus contains 22 million annotated words [4].

For works on the Arabic language, we cite for example [4]:
• The corpus of Khoja [8], which contains 50,000 Arabic words marked with a set of syntactic and lexical labels.
• The Penn Arabic Treebank corpus [9] and the Prague Arabic Dependency Treebank corpus [10], which are annotated with morphological and syntactic labels.
• Arabic WordNet [11–13], a free lexical database with the same design and content as Princeton WordNet (PWN) [14], in which words are connected by semantic links.
• BabelNet, a large free corpus in the form of a multilingual semantic network, developed by integrating WordNet, Wikipedia, Wiktionary, Wikidata, and OmegaWiki. The Arabic version of BabelNet 4.0 contains about 2,942,886 entries [15].
• Arabic Propbank [16, 17], a semantically annotated corpus containing sentences annotated with semantic labels.
• Arramooz AlWaseet,1 a free Arabic dictionary developed for the morphological analysis of Arabic words. It contains over 50,000 entries [15].
• Arabic Wikipedia,2 a large and open corpus on the web, created in 2003. In 2020, this corpus contained more than 1,077,037 articles.
• Internet Archive,3 an international digital library containing about 475 billion web pages and 28 million books and texts in different languages, including more than 458,767 Arabic documents and books in doc, txt, and pdf formats.
• The NEMLAR4 written corpus, which contains about 500,000 words of Arabic text from 13 different categories. The text of this corpus is available in four different formats: raw text, fully vowelized text, with Arabic lexical analysis, and with Arabic POS tags.
• DIINAR.1 [18], a lexical resource for the Arabic language. This corpus contains 121,522 diacritized entries, divided into nouns (29,534), verbs (19,457), and derived nouns (70,702).
• OntoNotes Release 5.0, a manually annotated Arabic corpus; it contains more than 12,500 Arabic words annotated with semantic labels [19].
• Meftouh presented how to build a corpus for the Arabic language from the web; the developed tools automatically collect a list of web addresses according to certain criteria [20].
• Althobaiti developed a novel approach that automatically generates an annotated Arabic dialect corpus, collected from 311,785 tweets containing a total of 3,858,459 words [21].
• Fashwan and Alansary created a morphologically annotated corpus for Egyptian Arabic. This corpus contains 527,000 words, 239,000 of which are annotated with morphological labels, e.g., proclitic(s), word form, tag, enclitic(s), glossary, number, gender, etc. [22].
1 http://arramooz.sourceforge.net/index.php?content=projects_en
2 https://fr.wikipedia.org/wiki/Wikipédia_en_arabe
3 https://archive.org/about/
4 https://vlo.clarin.eu/data/others/results/cmdi-1_1/European_Language_Resources_Association/oai_catalogue_elra_info_ELRA_W0042.xml
3 Natural Language Processing and Annotated Corpora

Annotated corpora are of great importance in natural language processing. They are used to help an NLP system learn the various linguistic rules, which enables the system to analyze and process unannotated data. Similarly, they are used in the testing phase to evaluate the performance of an NLP system. Applications that use annotated corpora include:
• Morphological analysis: to develop an efficient morphological analyzer, we need a corpus with morphological labels (prefix, suffix, lemma, root, …).
• Syntactic analysis: learning is done from a corpus annotated with syntactic labels (noun, verb, subject, complement, etc.). As examples of such studies, we cite two works of Al-Emran et al.: the first uses an annotated corpus (Treebank) for the statistical analysis of sentences in Modern Standard Arabic [23], and the second discusses the use of LFG in the annotation of the Treebank corpus [24].
• Machine translation: the development of a powerful translation system requires a corpus annotated with different types of labels (morphological, syntactic, and semantic).

Annotated corpora are not just simple one-dimensional vectors containing only the words; they are tables that contain richer information. These corpora are characterized by:
• Their size: the number of words or documents they contain.
• The quality of the information they contain: a corpus with only one dimension, namely the word itself, is of limited use. Quality also depends on the different types of labels in the corpus, such as morphological, syntactic, and semantic labels.
• The corpus coverage degree: this gives an idea of the degree of coverage (or representativeness) of a set of linguistic phenomena at the levels of phonology, syntax, morphology, and semantics. In some cases, we may find a large corpus with a low coverage degree. This parameter has a direct impact on the quality of the training.
4 The Data Used in the Construction of Our Corpus

To build our annotated corpus (LACAL), we used various data sources: document collections, annotated word lists, a list of Arabic radicals, and a set of Arabic dictionaries. The document collections consist of 170,100 documents in different formats (pdf, doc, txt, htm, and html), downloaded and collected from different websites and document collections. Tables 1 and 2 provide the necessary information about these documents:
Table 1 The different collections used in our corpus

Collection         Number of documents
Collection I       17,139
Collection II      55,385
Collection III     21,910
Collection IV      75,645
Dictionaries       20
Annotated corpus   3
Total              170,100

Table 2 The names of some collections

Documents collection   Web address
Archive.gov            https://archive.org/
Hindawi.com            https://www.hindawi.org/books/
Arabic Wikipedia       https://ar.wikipedia.org/
Almeshkat              https://www.almishkat.net
Other websites         Other websites

4.1 Classification of Documents by Date
There is a lack of historical Arabic dictionaries containing the detailed history of a word and its transformations, and the existing historical lexicons do not satisfy the historical criteria.5 It is also worth noting that the number of Arabic documents written before the year 700 is close to zero: most texts from before that time were spoken and passed down from generation to generation, and were only preserved in writing at the beginning of the era of writing and editing for the Arabic language (after 600 AD). Among the works on this kind of lexicon, we can cite the historical lexicon of the Arabic scientific language, a dictionary of Arabic scientific terms that contains one or more meanings of each term and its origin, along with the development of the term over time and the first date of its appearance or disappearance [25]. Catherine Pinon has written a good article on the evolution of Arabic language corpora, from the first corpora composed only of archaic poetry (before 700 AD), the Koran, and the Hadith (after 700 AD), until now [26].
5 Françoise Quinsat, Le fichier historique du lexique arabe (FHILA), Université Charles de Gaulle - Lille 3, 2008. Françoise Quinsat, « Le Coran et la lexicographie historique de l'arabe », in: Results of contemporary research on the Qur'ān, the question of a historico-critical text of the Qur'ān (WOCMES [First World Congress for Middle Eastern Studies], Mainz, 08-13/09/2002), ed. Manfred Kropp (Univ. Mainz), Beirut, Orient-Institut der Deutschen Morgenländischen Gesellschaft, 2006, Beiruter Texte und Studien 100, pp. 175-191; in particular, notes 14 and 15.
Table 3 An example of Arabic words with the date of their appearance

Lexeme   Arabic word   Translation   Date
'lh      إله           God           300-200 BC
Lh       الله          Divinity      100 BC
ly       إلى           To            528-529 AD
f        ف             And           100 BC
kull     كل            All           300 AD
mlk      ملك           King          528-529 AD
mnṣb     منصب          Pillar        200 BC
Table 4 An example of the number of documents in each period

Period           Number of documents   Start date (AD)   End date (AD)
P1               12                    400               600
P2               6                     600               700
…                …                     …                 …
Unknown period   318                   ?                 ?
Several attempts have been made to develop historical dictionaries for the Arabic language, such as the work of the Arabic Language Academy in Sharjah6 and the dictionary produced by the Arab Center for Research and Policy Studies in Doha7 (Qatar). These dictionaries contain the chronological path of words from their first appearance until today (Table 3). To get an idea of the lexical chronology of Arabic words, we took a sample of 48,823 Arabic documents and classified them according to their date of appearance. The earliest Arabic texts we have date from around 400 AD. In this paper, we divided the time interval 400-2021 into 17 periods (of one to two centuries each), plus an unknown period containing documents without appearance dates. Table 4 shows these periods with the beginning and ending dates of each. It should be noted that some periods are poorly represented in the documents we have, due to the lack of digital documents covering those periods.
6 https://www.alashj.ae/%D8%B3%D9%84%D8%B7%D8%A7%D9%86-%D8%A7%D9%84%D9%82%D8%A7%D8%B3%D9%85%D9%8A-%D9%8A%D8%B3%D8%AC%D9%84-%D8%AD%D8%AF%D8%AB%D8%A7%D9%8B-%D8%AA%D8%A7%D8%B1%D9%8A%D8%AE%D9%8A%D8%A7%D9%8B-%D9%88%D9%8A%D8%B7/
7 https://dohadictionary.org/dictionary-word
Table 5 The size of documents in some fields

Field   Size      Definition
D1      313 MB    السياسة والقانون والقانون الدولي والصراعات وأمور عسكرية (politics, law and international law, conflicts, and military affairs)
D2      340 MB    الاقتصاد، تجارة، سياحة، السياسات المالية، صندوق النقد الدولي، زراعة وفلاحة (economy, trade, tourism, financial policy, the IMF, agriculture and farming)
…       …         …
D8      14 GB     دين ومعتقدات (جميع الأديان والمعتقدات) (religion and beliefs: all religions and creeds)
…       …         …
D15     1200 MB   اللغة واللسانيات (language and linguistics)
…       …         …
Other   3.6 GB    آخر (other)
4.2 The Classification of Documents by Field
The second type of classification used in this corpus is classification by field. For this, we have chosen the 19 most used domains on the web in Arabic (Table 5).
5 Preprocessing of Documents Before Creating Our Annotated Corpus

In order to organize the documents in our corpus, we standardized all Arabic texts. We performed a number of preprocessing operations on these documents before creating our annotated corpus:
• Conversion of the different file formats (.pdf, .doc, .docx, .htm, .html) to txt format with UTF-8 encoding.
• Manual and semi-automatic classification of documents by appearance date.
• Manual and semi-automatic classification of documents by field.
• Cleaning of all documents in ".txt" format from Arabic and foreign punctuation.
• Suppression of all non-Arabic characters in the documents.

A set of Python applications was developed to apply these operations (a sketch of the cleaning step is given below):
• The Convertpdftotxt.py program converts ".pdf" files (which are not images) into ".txt" format.
• The Convertdoctotxt.py program converts ".doc" and ".docx" files to ".txt" format.
• The ConvertHtmtotxt.py program converts ".htm" and ".html" files to ".txt" format.
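The following is a hedged Python sketch of the cleaning and suppression steps listed above; the function name and the exact Unicode range kept are illustrative assumptions, not the authors' actual code.

import re

# Anything outside the basic Arabic Unicode block (U+0600-U+06FF) or whitespace
NON_ARABIC = re.compile(r"[^\u0600-\u06FF\s]+")

def clean_arabic_text(path: str) -> str:
    """Read a UTF-8 .txt file and strip punctuation and non-Arabic characters."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    text = NON_ARABIC.sub(" ", text)            # drop digits, Latin letters, punctuation
    return re.sub(r"\s+", " ", text).strip()    # normalize whitespace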
6 Word Statistics in Our Corpus

The corpus contains approximately 277,818,176 non-diacritical words with repetitions, and 13,305,056 words without repetitions. The distribution of these words according to their length is given in Table 6. Words of 1 to 5 characters account for the largest share (39.77%); words of six characters account for about 24% of all the words in our database, and words of seven characters for about 18.2%.
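As an illustration, the length buckets of Table 6 could be computed from a tokenized corpus along the following lines; this is a sketch, not the authors' code, and the handling of length 12 in the "over 12" bucket is an assumption.

from collections import Counter

def length_distribution(tokens):
    def bucket(n):
        if n <= 5:
            return "1-5"
        if n <= 9:
            return str(n)
        if n <= 11:
            return "10-11"
        return "over 12"  # assumed to include length 12
    counts = Counter(bucket(len(t)) for t in tokens)
    total = sum(counts.values())
    # map each bucket to (count, percentage)
    return {k: (v, round(100 * v / total, 2)) for k, v in sorted(counts.items())}

print(length_distribution(["كتاب", "مدرسة", "استقلال", "الجمهورية"]))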
6.1 The Distribution of Words by Period
Table 7 shows the number of words found in each period, by word length. The least represented periods in our corpus are P2 (0.35%), P3 (0.51%), and P1 (2.62%), in contrast to P16 and P17, which are represented with percentages of 10.38% and 8.24%, respectively.
6.2 The Distribution of Words by Field
The words in our corpus are also classified into the 19 fields already mentioned (Table 5). Table 8 shows this classification with the number of words in each field. It is worth noting that most of the words belong to fields D8, D15, D3, and D9, with proportions8 of 49.50%, 30.66%, 27.81%, and 19.6%, respectively. Moreover, the field "Other" contains 41.97% of the words, which we could not classify.
7 The Design of Our Large Annotated Corpus for the Arabic Language

7.1 The Different Tables of Our Annotated Corpus

All the files in txt format are used to construct the following word tables (Tables 9 and 10):
8 The percentage is calculated over the number of words without repetition (13,305,056 words).
Table 6 The distribution of words by their length

Length    Number of words without repetition   Number of words with repetition   Percentage without repetition (%)
1-5       5,291,731                            110,494,766                       39.77
6         3,187,841                            66,564,182                        23.96
7         2,419,557                            50,521,915                        18.19
8         1,301,673                            27,179,774                        9.78
9         555,670                              11,602,749                        4.18
10-11     435,368                              9,090,765                         3.27
Over 12   113,216                              2,364,023                         0.85
Total     13,305,056                           277,818,176                       100
Table 7 The number of words in each period

Period           Length 1-5   Length 6-9   Length ≥ 10   Total
P1               123,224      74,265       484           197,973
P2               80,727       52,873       261           133,861
…                …            …            …             …
P15              564,092      683,349      34,067        1,281,508
P16              1,633,363    2,240,861    117,643       3,991,867
P17              1,303,968    1,718,135    148,121       3,170,224
Unknown period   2,001,247    3,546,799    333,428       5,881,474
Table 8 The distribution of words by field

Field   Length 1-5   Length 6    Length 7    Length 8   Length 9   Total       Percentage (%)
D1      254,932      147,935     109,955     61,104     53,833     627,759     4.72
D2      104,028      55,587      40,909      24,252     26,052     250,828     1.89
…       …            …           …           …          …          …           …
D18     46,379       21,445      14,421      7432       4848       94,525      0.71
D19     7785         3176        2155        1200       839        15,155      0.11
Other   1,929,306    1,275,566   1,076,434   652,256    650,660    5,584,222   41.97
Table 9 The table of words by period

Non-diacritical word   P1   P2   …   P7    P8   P9    …   P-unknown
الدراويش               0    0    0   263   …    …     …   …
الإنكشاري              0    0    0   0     0    104   …   …
Table 10 The table of words by field

Non-diacritical word   D1   D2   …   D8    …   D19   D-other
اعتكافها               2    0    0   173   …   0     0
قانوني                 42   3    0   1     …   1     0
Table 11 The table of radical verbs

Verb-radical   Mazid/Mojarad   Horoufs-mazida   Base1   Base2   Base3   Root
دخل            Mojarad         –                دخل     دخل     دخل     د خ ل
تسارع          Mazid           ت                تسارع   سارع    سرع     س ر ع
• Table-word-period: contains the word and its count in each period Pi (i = 1, …, 17) and in P-unknown.
• Table-word-domain: contains the word and its count in each field Di (i = 1, …, 19) and in D-other.
• Table-verbs-radicals: contains more than 10,282 radical verbs of the Arabic language (2282 verbs of length 3 and 8000 verbs of length greater than 3). This table also contains other information, such as the morphological class of the verb, the lemma, the root, the lexical type of the verb (augmented or bare: مزيد، مجرد), and the syntactic type of the verb (transitive or intransitive: متعدي، لازم) (Table 11).
• Table-all-verbs: the table of all Arabic verbs conjugated in the different tenses (Table 12); it contains the conjugated verb, tense, pronoun, stem, lemma, and root. This table was generated by a Python application that conjugates any verb in any tense, run on over 10,282 Arabic verbs (2282 verbs of length 3 and 8000 verbs of length greater than 3), combined with different enclitics and proclitics (Table 13).
• Derived-nouns table: generated by a script that extracts all possible derived nouns from a given verb (Table 14); the script was applied to the list of radical verbs (Table 11).
• Table-word-meaning: contains the word and its different meanings listed in 10 dictionaries (Table 16). Table 15 gives the names of some of the Arabic dictionaries used.
7.2 The Creation of the Annotated Corpus
From these different tables, we created a query that represents our annotated corpus. This corpus contains 13,305,056 words (diacritical and non-diacritical), with several types of labels: syntactic, morphological, morpho-syntactic, and semantic.
Table 12 The different tenses used in our corpus

الأمر                    المضارع المجهول المجزوم
الأمر المنفي             المضارع المجهول المنصوب
الأمر المؤكد             المضارع المجهول المؤكد الثقيل
الأمر المؤكد المنفي      المضارع المجهول المؤكد الخفيف
الماضي المجهول           المضارع المعلوم
الماضي المعلوم           المضارع المنصوب
المضارع المجزوم          المضارع المؤكد الثقيل
المضارع المجهول          المضارع المؤكد الخفيف

Table 13 An extract of the information contained in the table of all verbs

Diacritical conjugated verb       فيأخذونهم
Non-diacritical conjugated verb   فيأخذونهم
The tense                         المضارع المعلوم
…                                 …
Enclitic                          هم
Root                              س ر ع

Table 14 Example of derived nouns with different information

Diacritical derived-noun       المتسارعون   …   فالمكتوب
Non-diacritical derived-noun   متسارعون     …   فالمكتوب
Type                           اسم فاعل     …   اسم مفعول
Number                         جمع          …   مفرد
…                              …            …   …
Suffix                         ون           …   –
Root                           س ر ع        …   ك ت ب

Table 15 Some names of the Arabic dictionaries used

Dictionary title   Dictionary title in Arabic
Lissan-alarab      لسان العرب
Taj-alarouss       تاج العروس
Shams-Alolom       شمس العلوم
…                  …
As examples of the information contained in the final query, we cite (Table 18):
• Diacritical word: the word with its Arabic diacritics.
• Word-without-proclitic-enclitic: this field contains the word after deletion of its proclitic and enclitic, if any.
• Diacritical-Enclitic: the diacritical enclitic extracted from the word, belonging to the list of all Arabic enclitics.
• Diacritical-Pref: the diacritical prefix, belonging to the list of all Arabic prefixes.
Table 16 An example of words with their meanings in the different dictionaries

Table 17 The various syntactic and morpho-syntactic labels

Table 18 Example of the information contained in the final query

Non-diacritical word   Diacritical word   Word-without-proclitic-enclitic   …   Diacritical-proclitic
فيتقاسمونه             فيتقاسمونه          يتقاسمون                          …   ف
المتداخلين             المتداخلين          متداخلين                          …   ال
• Lemma, Base1, Base2, Base3, Root, and Stem: Lemma contains the diacritical lemma of the word; Base1, Base2, and Base3 are respectively the first, second, and third primitives of the diacritical word. This information is taken from Table 11.
• Word-Type: contains the syntactic and morpho-syntactic information of the word; it indicates, among other things, whether the word is a verb, a noun, or a particle. This variable belongs to a list of 56 labels (Table 17).
Fig. 1 The architecture of our annotated corpus
• Gender: contains the gender of the word (masculine or feminine: مذكر، مؤنث).
• Period: indicates the evolution of the word and its roots over time; this information is taken from the period table (Table 9).
• Field: contains the different fields of the word, taken from the fields table (Table 10).

Finally, the architecture of our corpus is presented in Fig. 1. It shows how, from several types of raw documents (annotated word lists, document collections, a list of Arabic radicals, and a set of Arabic dictionaries), we build several tables (table of annotated words, …, table of word meanings), which are linked by joins to form our annotated corpus.
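As an illustration of this joining step, the following hedged Python sketch links per-word tables into one annotated record per word; pandas stands in for whatever query engine was actually used, and all table names, column names, and values are illustrative.

import pandas as pd

# Tiny stand-ins for Table 9 (word-by-period) and Table 10 (word-by-field)
words   = pd.DataFrame({"word": ["فيتقاسمونه", "المتداخلين"]})
periods = pd.DataFrame({"word": ["فيتقاسمونه"], "P7": [3], "P_unknown": [12]})
fields  = pd.DataFrame({"word": ["فيتقاسمونه"], "D1": [2], "D8": [5]})

corpus = (words
          .merge(periods, on="word", how="left")   # join period counts
          .merge(fields,  on="word", how="left"))  # join field counts
print(corpus)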
8 Conclusion

In this paper, we introduced the first version of a large Arabic corpus annotated with different types of labels: syntactic, morphological, and semantic. This corpus contains more than 12 million words, each described by the following data: field, period, morphemes, syntactic and morpho-syntactic type, meaning, etc. This corpus will add value to the NLP field. It will also help researchers in this discipline in two phases: the training phase, to estimate the parameters of their models, and the testing phase, to evaluate the effectiveness of these models. Finally, we will try to increase the capacity of this corpus by adding new collections of documents and thus new words; this will help us generate specific databases dedicated to NLP applications.
References

1. A. Kilgarriff, D. Tugwell, Sketching words, in Lexicography and Natural-Language Processing, ed. by M.-H. Corréard (EURALEX, Göteborg, 2002), pp. 125–137
2. B. Henri, Informatique et lexicographie de corpus : les nouveaux dictionnaires. Revue française de linguistique appliquée XII, 7–23. https://doi.org/10.3917/rfla.121.0007. URL: https://www.cairn.info/revue-francaise-de-linguistique-appliquee-2007-1-page-7.htm
3. S.I. Landau, Dictionaries: The Art and Craft of Lexicography, 2nd ed. (Cambridge University Press, Cambridge, 2001), 494 pp. ISBN: 0-521-78040-3
4. M. Outahajala, L. Zenkouar, P. Rosso, Construction d'un grand corpus annoté pour la langue amazighe (2014), pp. 77–94. https://www.cairn.info/revue-etudes-et-documents-berberes-2014-1-page-77.htm. ISSN 0295-5245
5. H. Kučera, W.N. Francis, Computational Analysis of Present-Day American English (Brown University Press, Providence, RI, 1967)
6. S. Johansson, The LOB Corpus for British English texts: presentation and comments. ALLC J. 1(1), 25–36 (1980)
7. M.P. Marcus, B. Santorini, M.A. Marcinkiewicz, Building a large annotated corpus of English: the Penn Treebank. Comput. Linguist. 19(2), 313–330 (1993)
8. S. Khoja, R. Garside, G. Knowles, A tagset for the morphosyntactic tagging of Arabic, in Proceedings of Corpus Linguistics, Lancaster, UK (2001), pp. 341–353
9. M. Maamouri, A. Bies, T. Buckwalter, The Penn Arabic Treebank: building a large-scale annotated Arabic corpus, in NEMLAR Conference on Arabic Language Resources and Tools, Cairo, Egypt (2004)
10. O. Smrž, J. Hajič, The other Arabic Treebank: Prague dependencies and functions, in Arabic Computational Linguistics, ed. by A. Farghaly (CSLI Publications, 2006)
11. W.J. Black, S. ElKateb, A prototype English-Arabic dictionary based on WordNet, in Proceedings of the 2nd Global WordNet Conference (GWC2004), Czech Republic (2004), pp. 67–74
12. F. Christiane, B. William, E. Sabri, M. Antonia, P. Adam, R. Horacio, V. Piek, Constructing Arabic WordNet in parallel with an ontology (2005)
13. S. Elkateb, W. Black, P. Vossen, D. Farwell, H. Rodríguez, A. Pease, M. Alkhalifa, Arabic WordNet and the challenges of Arabic, in Proceedings of the Arabic NLP/MT Conference, London, UK (Citeseer, 2006)
14. G.A. Miller, WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)
15. M.H. Salah, Désambiguïsation lexicale de l'arabe pour et par la traduction automatique. Thèse de doctorat, École Doctorale Mathématiques, Sciences et Technologies de l'Information, Informatique, et École Doctorale Informatique de la Faculté des sciences économiques et de gestion de Sfax, soutenue le 18 décembre 2018
16. M. Palmer, O. Babko-Malaya, A. Bies, M.T. Diab, M. Maamouri, A. Mansouri, W. Zaghouani, A pilot Arabic PropBank, in LREC (2008)
17. W. Zaghouani, M. Diab, A. Mansouri, S. Pradhan, M. Palmer, The revised Arabic PropBank, in Proceedings of the Fourth Linguistic Annotation Workshop (Association for Computational Linguistics, 2010), pp. 222–226
18. J. Dichy, A. Braham, S. Ghazali, M. Hassoun, La base de connaissances linguistiques DIINAR.1 (dictionnaire informatisé de l'arabe, version 1), in Proceedings of the International Symposium on the Processing of Arabic, Tunis (La Manouba University) (2002), pp. 18–20
19. M.H. Salah, H. Blanchon, M. Zrigui, D. Schwab, Un corpus en arabe annoté manuellement avec des sens WordNet (ATALA, 2018)
20. K. Meftouh, K. Smaïli, M.T. Laskri, Constitution d'un corpus de la langue arabe à partir du Web. CITALA (2007)
21. M.
Althobaiti, Creation of annotated country-level dialectal Arabic resources: an unsupervised approach. Nat. Language Eng. 1-42 (2021). https://doi.org/10.1017/S135132492100019X
The Large Annotated Corpus for the Arabic …
219
22. A. Fashwan, S. Alansary, A morphologically annotated corpus and a orphological Analyzer for Egyptian Arabic. Procedia Comput. Sci. 189, 203–210 (2021), https://doi.org/10.1016/j. procs.2021.05.084. ISSN 1877-0509 23. M. Al-Emran, S. Zaza, K. Shaalan, Parsing modern standard Arabic using Treebank resources, in 2015 International Conference on Information and Communication Technology Research (ICTRC) (2015), pp. 80–83. https://doi.org/10.1109/ICTRC.2015.7156426. 24. S.A. Salloum, M. Al-Emran, K. Shaalan, A survey of lexical functional grammar in the Arabic context. Int. J. Comput. Network Technol. 4(03) (2016) 25. H. Hamze, R. Rachad, Lexique historique de la langue scientifique arabe. Revue de la lexicologie, Octobre (2018) 26. C. Pinon, orpus et langue arabe: un changement de paradigme. Dossiers d’HEL, SHESL (2017), Analyse et exploitation des données de corpus linguistiques, pp.29–39. hal-01511222
Topic Modelling for Research Perception: Techniques, Processes and a Case Study Ibukun T. Afolabi and Christabel N. Uzor
Abstract There is a need for an automated approach to extract current trends and perceptions from literature review material in a field of interest. Manually reviewing a large number of papers is time-consuming; topic modelling helps to avoid this. The text mining technique chosen for this task is topic modelling. The chapter gives an overview of the most widely used topic modelling techniques, as well as a few applications. It also summarizes a few current research trends and the generic processes of topic modelling. A section demonstrates an approach to discovering current perceptions from literature materials focused on data analytics in e-commerce using topic modelling. The case study framework included five steps: data collection, data pre-processing, topic tuning, performance evaluation, and interpretation of topic modelling results. The topic numbers were tuned using MALLET with Gensim wrappers, and LDA was the modelling technique applied. The Gensim topic coherence framework in Python was used to evaluate the topics, and the perceptions in the reviewed material were interpreted using the inter-topic distance map in pyLDAVis. The modelling revealed distinct perceptions, or directions of interest, in e-commerce and data analytics research. Researchers can use topic modelling to see which areas are receiving attention and which are not.
1 Introduction
Text mining is the process of obtaining useful, significant information from disordered or free text. Patterns and relationships provide knowledge that can be used to show facts, trends, or concepts [1]. Text mining can also be described as determining and extracting the information that a document is trying to pass across.
I. T. Afolabi · C. N. Uzor (B)
Department of Computer and Information Sciences, Covenant University, Ota, Nigeria
e-mail: [email protected]
I. T. Afolabi
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_13
Human beings tend to clearly understand what information a document is discussing as they read [2], although most times it is impossible for humans to read through a large set of documents to find out what is being discussed or to obtain the desired information, especially in this digital age where the number of written materials we come across daily is beyond our capacity to process [3]. Text mining technology has produced programs for digging through documents and extracting the information we want, but a program is only provided with written text, not the document's topic. For these programs to detect the subject matter or topic from a group of words, data scientists use a technique called topic modelling [2].

Extracting topics from data is known as topic modelling. By classifying terms into specific themes and topics into various areas of knowledge, topic modelling can obtain statistically valuable information from data, allowing for the discovery of new information [4]. Topic modelling is a text mining method that enables a user to discover new topics in a set of information, analyse that information based on the topics, and then use the analysis to categorize, evaluate, and explore documents automatically [5]. Topic modelling can also be described as feature extraction from a textual document and the use of mathematical frameworks such as singular value decomposition and matrix factorization to construct distinct clusters or groupings of terms [6].

Text mining techniques have recently advanced, allowing researchers to compile data and discover growing research trends from large amounts of text documents. The discovery of topics and research areas can provide insights into how a discipline has evolved. Thanks to the advent of topic modelling tools and the massive libraries and literature databases that hold past original research publications, we can theoretically investigate any relevant research paper and determine the common trends [7]. In research, experts are increasingly turning to topic models to study the most important current trends in various disciplines. Jiang et al. [8], for example, employed LDA to assess recent trends in hydropower. Ding et al. [9] examined recent development in building construction, and Xiong et al. [10] investigated manufacturing-related research topics. Zaza and Al-Emran [11] extracted classes of attributes from online surveys distributed via social media. Hantoobi et al. [12] conducted several review studies on learning analytics, and Paek et al. [13] examined research trends in competency-based education. Pratidina and Setyohadi [14] used a similar approach to identify research trends for picking a list of topics that automatically predict the news. Salloum et al. [15] employed a text parsing node to analyze textual data from Facebook posts and generate the hot topics being discussed across news channels.

This chapter is aimed at giving the details involved in topic modelling, the techniques, current research, and a typical case study of how topic modelling can be used to discover perceptions in literature review materials. The topic models and framework presented can help academic researchers, funding organizations, and publishers gain a better understanding of their various research topics and trends, allowing researchers to identify current research directions and make more informed decisions.
1.1 Statement of the Problem
In the early stages of a literature review, a large number of papers are collected to be reviewed, which is often taxing and time-consuming. The researcher's options are either to limit the number of papers they review or to use other methods to review them. So far, coding sheets have been used to organize research materials into areas of study or topics. This requires time and a prior understanding of how to group papers into an area of study based on titles, terms, and keywords before they can be analysed. Many researchers may be discouraged, having spent much time, effort, and resources before seeing any research directions [16]. When human or manual processing is replaced with an automated practice such as topic modelling, reliability is increased and the cost of time is reduced.
2 Topic Modelling
2.1 Approaches in Topic Modelling
Topic modelling aims to reveal the latent semantic information in a collection of documents. There are two types of topic modelling approaches: probabilistic and algebraic (non-probabilistic) approaches [17].

Probabilistic approaches: These are suites of algorithms for summarizing a large number of texts using a smaller number of distributions over words. These distributions are referred to as "topics" because, when fitted to data, they contain the collection's main topics [18]. Rather than searching and exploring documents through keywords alone, probabilistic models first find the topic of interest and then examine the documents related to that topic. Probabilistic latent semantic indexing and latent Dirichlet allocation are the most popular probabilistic topic modelling algorithms [19]. One of the benefits of probabilistic techniques is the ease of extending models; many LDA extensions have been developed.

Non-probabilistic approaches (algebraic approaches): Each document is depicted as a vector of terms, and the term-document matrix is approximated as the product of a term-topic matrix and a topic-document matrix under certain conditions. The interpretation of this approach is that it projects the term-document matrix into a k-dimensional topic space, with each axis representing a different topic. Latent semantic indexing is a typical representative of the non-probabilistic approach. It solves the problem by decomposing the term-document matrix under the premise that topic vectors are orthogonal. Non-negative matrix factorization (NMF) is a technique similar to LSI; in NMF, the term-document matrix is factorized with the requirement that all elements in the resulting matrices be equal to or larger than zero [18]. Many advanced topic models and information retrieval techniques are
built on the Vector Space Model (VSM), the first basic algebraic model, which extracts semantic information from word usage based directly on the document-term matrix.
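As a rough illustration of this algebraic route, the following sketch builds a term-document matrix and factorizes it with both truncated SVD (the LSI decomposition) and NMF. The chapter does not prescribe a library, so the use of scikit-learn and the toy documents here are assumptions for illustration only.

# Build a TF-IDF term-document matrix, then project it into a
# k-dimensional topic space via LSI (truncated SVD) and via NMF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, NMF

docs = [
    "topic models discover hidden themes in documents",
    "matrix factorization projects documents into a topic space",
    "latent semantic indexing decomposes the term-document matrix",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)           # documents x terms matrix

# LSI: truncated SVD under the premise that topic vectors are orthogonal.
lsi = TruncatedSVD(n_components=2, random_state=0)
doc_topics_lsi = lsi.fit_transform(X)        # documents x topics

# NMF: the same matrix factorized with non-negative coefficients only.
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
doc_topics_nmf = nmf.fit_transform(X)        # documents x topics (all >= 0)
topic_terms = nmf.components_                # topics x terms (all >= 0)

terms = vectorizer.get_feature_names_out()   # scikit-learn >= 1.0
for k, weights in enumerate(topic_terms):
    top = weights.argsort()[::-1][:3]
    print(f"NMF topic {k}:", [terms[i] for i in top])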
2.2 Techniques in Topic Modelling
Topic modelling is a research area that uses text mining to recommend appropriate topics from a document corpus. Different techniques and algorithms have been used to model topics [20]. Topic modelling techniques are effective for establishing relationships between words, topics, and documents, as well as for discovering hidden topics in documents. Material science, medical sciences, chemical engineering, and a range of other fields can all benefit from topic modelling [21]. Because of their diverse uses, topic modelling techniques deserve study; this section provides a review of the major ones. Latent semantic indexing (LSI) is the origin of topic model techniques and serves as the basis for the development of the others. A valid topic model based on LSI is probabilistic latent semantic analysis (PLSA), and latent Dirichlet allocation (LDA) is a probabilistic generative model that is more robust than PLSA.

Latent semantic analysis: Topic modelling began with a linear algebra approach called LSA; latent semantic analysis is also known as latent semantic indexing. The goal of LSA is to build vector-based representations of texts to make semantic analysis possible. LSA measures the correlation between texts using vector representations to select the most relevant related terms. LSA was originally conceived as latent semantic indexing but was improved into a better information retrieval technique. LSA offers useful capabilities such as keyword matching; both the vector representation and the keyword matching rely on words that appear in documents regularly [17].

Probabilistic latent semantic analysis: PLSA is a probabilistic variant of latent semantic analysis with a strong statistical base and a well-defined generative data model. In PLSA, every word in each document of the corpus is selected from multinomial distributions that can be regarded as topics, with proportions corresponding to mixture weights sampled per document. An inference algorithm based on this generative model infers topic-word distributions, as well as document-topic distributions, from textual corpora [2]. The PLSA technique was created to improve on the LSA technique and to handle issues that LSA could not solve. Many real-world applications, including computer vision and recommender systems, have used PLSA successfully. PLSA, on the other hand, suffers from overfitting problems because the number of parameters increases linearly with the number of documents [22].
Non-negative matrix factorization: A matrix factorization approach is another way to present PLSA. Linear algebra-based techniques are used to factorize high-dimensional sparse document-term matrices into low-dimensional matrices with non-negative coefficients.

Latent Dirichlet Allocation: LDA is one of the most widely used topic modelling algorithms for extracting topics from document bodies. LDA is a probabilistic model in that it uses word probabilities to represent topics; the words with the highest probabilities in a block of text usually give an insight into what the topic is. It generates topics from a batch of documents based on word recurrence. LDA proposes that each document can be depicted as a probability distribution over hidden topics, with a shared Dirichlet prior across all documents [23]. LDA is a generally recognized approach for identifying the hidden topic distribution in a big dataset (corpus). Thus, it can recognize sub-topics in any field with many patents and represent each patent through a range of subject distributions. The words in a set of documents are used to build a vocabulary, which is then used to reveal new topics using LDA. Documents are viewed as a collection of topics, each of which represents a probability distribution over the terms; each document, in turn, is associated with a probability distribution over a set of topics. The data can then be viewed as the output of a generative process defined by the joint probability distribution over visible and hidden data.

Hierarchical Latent Dirichlet Allocation: HLDA is an LDA variant in which the topics take the form of a tree rather than LDA's flat topic structure. To model topical hierarchies, hierarchical LDA employs a non-parametric Bayesian technique. The algorithm develops a hierarchy as data are made available, resulting in a tree of topics. Each node in the topic tree is a random variable that has been given a word-topic distribution. A document can be created by going from the root to one of the tree's leaves and sampling topics along the way.

Dynamic Topic Model: Blei and Lafferty [24] introduced the DTM as a variant of LDA that captures the change of topics in a database as time passes. DTM depicts how the word-topic distribution evolves, which makes it simple to find the most popular topics. Underlying topics can be extracted from a collection of documents to track how they have evolved. The fixed number of topics is a drawback of DTM, because many topics in a corpus grow and die over time.

Author-topic model: ATM is an extension of LDA proposed by Rosen-Zvi et al. [25]. ATM models the topic distribution associated with each author in the corpora using the metadata included in the texts. Every word in a document is connected with two variables in this model: an author and a topic. Each author is viewed as a distribution over topics, and each topic is viewed as a distribution over words, similar to LDA. Unlike in LDA, however, the authors are observed factors in addition to the topics. The author-topic model's primary motive is to allow authors to be included in models, thereby providing a general framework for prediction at both the author and document levels.
Correlated Topic Model: Correlations between topics cannot be modelled using LDA. "House," for example, is more likely to be related to "land" than to "air," but LDA does not show relationships between topics, so Blei and Lafferty [26] created CTM, an LDA extension that can model topic correlations.
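To make the central technique of this family concrete, the following is a minimal sketch of LDA in Python with Gensim (the library used in the case study later in this chapter). The toy documents and parameter values are illustrative assumptions, not from the original text.

# A minimal LDA sketch with Gensim, illustrating the two distributions LDA
# estimates: per-topic word distributions and per-document topic mixtures.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [  # toy, pre-tokenized documents (illustrative only)
    ["customer", "review", "product", "rating"],
    ["payment", "security", "fraud", "detection"],
    ["customer", "behaviour", "purchase", "recommendation"],
]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]   # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

print(lda.print_topics(num_words=4))        # topics as word distributions
print(lda.get_document_topics(corpus[0]))   # a document as a topic mixture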
2.3 Generic Topic Modelling Process
Generally, the topic modelling framework breaks down into the following steps. The initial step is to collect data. The text data is then pre-processed in the second step for further processing. The next stage is to determine the number of topics to be included. The final step is knowledge discovery, which uses text mining techniques to extract insights from the corpus [27].

A corpus is a large, well-organized collection of documents. Research is done with various types of documents, such as journals, articles, and abstracts. Data collection often involves selecting high-quality documents (e.g., journals) and choosing search keywords; the aim is to select semantically relevant documents.

The documents must be cleaned of unwanted elements and further prepared before a topic model can be estimated for an actual corpus. Cleaning and pre-processing affect the vocabulary used in the modelling process. The procedure for cleaning text data differs depending on the research topic and the data type used. A language filter, for example, is employed when a study focuses solely on one language. Before data analysis, content such as URLs or HTML must be stripped out. Data pre-processing includes the processes of tokenizing; removing stop words, punctuation, and special characters; lemmatizing; and stemming [6]. Tokenization divides the text into sentences, which are then broken down further into words. The words are then changed to lowercase and the punctuation is deleted. Stop words are deleted to emphasize the keywords in the document that define the text's meaning [28]. Stemming and lemmatization reduce a word to its seed form. Stemming removes the last few characters of a word (suffixes), leaving only the stem; for example, Porter's stemming algorithm turns "tiredness" into "tired," and "quickly" into "quick" [29]. Lemmatization, on the other hand, considers the context of a word and returns the dictionary form of the word, called a lemma; for example, "diffusion" and "diffusing" become "diffuse" [30]. This phase reduces the amount of data to be processed, which helps improve model performance.

Feature selection is another stage of data pre-processing. More than 100,000 distinct words can be found in a modestly large corpus, with the majority of the words appearing in only a few documents. Running an LDA algorithm over such a corpus is both time-consuming and ineffective, as most words do not contribute to making a meaningful topic. Words like "the" and "have" appear in practically every text,
regardless of the topic; hence they are not informative. An excellent technique for filtering out terms that are either unusual or too common is TF-IDF (term frequency-inverse document frequency), which assigns a low score to words that are very rare or very common [29].

The third stage is to apply a topic modelling technique of choice. Certain criteria, such as the number of topics, must be defined when selecting a topic modelling technique. Researchers usually run several topic modelling techniques to determine an optimum number of topics. The techniques are then compared to see whether there are major discrepancies and whether the results are interpretable. The overall purpose is to provide a legitimately interpretable topic solution. Some researchers apply additional external and internal validation criteria to choose among the many candidate techniques.

The final stage is the interpretation of the results. Once executed, the topic modelling algorithm generates lists of words that collectively form topics. In addition, the percentage coverage of each topic in the examined documents is calculated. Even though the researcher has been working with the text for some time, the results of the topic modelling are new. Most researchers choose the most straightforward route to a valid interpretation of the resulting topics, which is to review the words with the highest probabilities for each topic and try to find a title that describes the topic's substantive content [31]. Additionally, some researchers use quantitative diagnostic metrics, such as topic coherence or mutual information measures, to test that a topic is authentic and can be easily interpreted by a reader. Similarity measures aim to determine which of the top words provides the most relevant information for a given topic, whereas topic coherence measures how frequently the top keywords in a topic appear together. Hierarchical clustering can be used to determine whether topics are sufficiently separate from one another (inter-topic validity) or to discover semantic patterns among topics [32].
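As a concrete illustration of the pre-processing stage described above, the following is a small sketch under the assumption that NLTK is used; the chapter does not mandate a particular toolkit, and the sample sentence is invented.

# Sketch of the cleaning steps above: tokenize, lowercase, strip punctuation,
# remove stop words, then stem or lemmatize. The NLTK resource downloads are
# needed on the first run only.
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

text = "Researchers are reviewing papers on diffusion models in e-commerce."

tokens = nltk.word_tokenize(text.lower())                    # split into words
tokens = [t for t in tokens if t not in string.punctuation]  # drop punctuation
stops = set(stopwords.words("english"))
tokens = [t for t in tokens if t not in stops]               # drop stop words

stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])                     # suffix-stripped stems

lemmatizer = WordNetLemmatizer()
print([lemmatizer.lemmatize(t, pos="v") for t in tokens])    # dictionary forms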
2.4 Current Research Trends in Topic Modelling
Event analysis, music retrieval, opinion mining, and aspect mining were among the popular lines of work released between 2012 and 2013. Researchers created an integrated Bayesian model that combines event classification and topic modelling in a unified system. They created an LDA model called Event and Tweets LDA (ET-LDA) for extracting event topics and evaluating tweeting patterns and behaviours, and they used the Gibbs sampling method to estimate the topic distribution [33]. ET-LDA has the advantage of being able to detect and select the top general topic from a large number of tweets, which is extremely useful. Researchers also proposed Mr. LDA, a parallel LDA algorithm and model in the MapReduce framework; unlike previous LDA models that use Gibbs sampling, it uses variational inference. To find a near-optimal LDA configuration, an LDA model based on a genetic algorithm (LDA-GA) was suggested. Traceability link recovery, feature
location, and label attachment are three cases where this method has been applied. To approximate parameter posterior distributions, they used fast collapsed Gibbs sampling, and Euclidean distance was used to calculate the distance between documents [34]. Researchers also proposed TopicSpam, a system that detects false reviews using generative LDA. According to the researchers, the algorithm can tell the difference between authentic and fraudulent reviews; TopicSpam significantly exceeded TopicTDB in accuracy when the authors examined 800 reviews from 20 Chicago hotels [23].

Hashtag discovery, aspect mining, opinion mining, and recommendation systems [35] are some of the most popular lines of work released in 2014 and 2015. The hashtag recommendation model TOT-MMM was created by researchers. This method is a composite model that combines a short-term grouping component, comparable to the TOT model, with the mixed membership model (MMM) first described for recurrently cited words. This model can account for the impact of short-term grouping in hidden topics, allowing for more accurate hashtag modelling and suggestions. Jelodar and Wang [23] used a label hierarchy as a base hierarchy, known as the Base Tree, and then used hierarchical latent Dirichlet allocation to automatically create a hierarchy of topics for every leaf node in the base tree, known as the Leaf Topic Hierarchy. One of the advantages of semi-supervised hierarchical latent Dirichlet allocation (SSHLDA) is that it can include labelled topics in the document generation process. SSHLDA can also automatically investigate hidden topics in the data space, as well as extend the hierarchy of observed topics [23].

Researchers are drawn to topic modelling because of its benefits, but it also has drawbacks. An inherent issue is that the resulting topics depend on the length of the texts in the dataset: a longer group of words will produce more topics. There is also the problem of inconsistent results, which means that if an algorithm is run over a document many times, the topics produced by each run may differ slightly. It may also not be possible to determine the exact number of topics that are easily understandable and robust [36]. Probabilistic models focus on words that repeat frequently rather than on the semantics of the content [20]. Documents are viewed as a probability distribution over a wide range of topics. During the topic modelling process, word frequency and co-occurrence are taken into account as key facts, and a set of topics is formed accordingly. Humans interpret content by comprehending its meaning; probabilistic topic models, however, focus on word counts rather than semantics or meaning. As a result, without understanding the semantics of the text, useless topics can be formed [20]. Another problem is the misrepresentation of topics from the human perspective [36]. The quality of the resulting topics may suffer if the dataset is not properly pre-processed and cleaned; the quality of the software and hardware used for the topic models can also affect the resulting topics. There is also the problem of generating meaningless and irrelevant topics when topic models are set to produce a fixed number of topics instead of the most related topics, so that the focus is on the number of topics produced rather than their relevance. Finally, there is the problem of topic evolution: the topics present in a body of literature change over time, so modelling them without taking this into account will lead to confusion.
Topic evolution modelling is a type of modelling that takes time into account. It can reveal key knowledge hidden in a document, allowing topics to be identified as time passes.
2.5 Application of Topic Modelling
Topic models have extensive application in solving many problems in document collections. They have been applied to classifying documents with similar contents, finding possible topics in a text collection, identifying relationships between terms, grouping trending topics, indexing, and many more problems in different fields of study [37]. Essentially, a topic model collects data from different sources and analyses them to produce more descriptive information or trends [38]. The main applications of topic modelling are classification, categorization, and summarization of documents.

Many customized topic models have lately been developed to fulfil various application requirements. Some of the models, for example, can trace the change of topics over time; others show how topics are linked to each other and how they form a hierarchical order; others consider the authors, citations, sources, or the relationships between papers and any other type of document label. Some also interpret objects associated with the document that are not in text form, such as diagrams, pictures, and named entities. Other models aim to improve the stability, sparsity, robustness, and human interpretability of topics. Syntactic considerations, word grouping into n-grams, and discovering collocations or constituent phrases are all advantages of linguistically motivated models [39].

In addition to the title given to a document, the topic discovery problem domain aims to find hidden topics [40]. When users do not know for sure what is to be searched for, topic models are used to explore and summarize document collections outside of any specific information demand. This technique of information retrieval is distinct from typical information retrieval systems, which return relevant documents based on users' stated information needs. Where information retrieval systems might hunt for the "needle in the haystack," topic models will tell about the general proportion of hay and needles, and may even reveal the presence of mice that weren't known about [41]. However, topic models can also be effective in instances where specific information is required but the user is unsure how it can be found. Topic models can be employed as a simple indexing mechanism: users can look for topics that have a high likelihood of matching a query term, and then look for documents that have a high probability of matching these topics. Reference [42] employed topic modelling in conducting a literature review by searching online libraries with search strings and assessing the quality of the selected research papers through automatic and semi-automatic paper selection. A topic-based search may also provide some query clarification, as it may be clear from the topic-word distributions that one topic is more relevant to the user's information needs than another. The line between query-driven retrieval and unsupervised topic modelling is blurring in more complex systems. Learning the hidden topics, which are given as distributions over different words, allows topic models to capture the semantic relationships between words. These relationships provide a way to match or expand words
semantically rather than by direct spelling matching. For example, given the short query “diabetes,” topic models can quickly locate related words like “insulin,” “glucose,” “coronary,” and “metformin,” as they frequently appear in the same context [43]. As a result, topic models have been applied effectively in query expansion.
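The following is a small sketch of the query-expansion idea just described, assuming the fitted Gensim model lda and dictionary from the earlier sketch; this is one plausible realization for illustration, not the method used in [43].

# Find the topics in which a query term is prominent, then surface the
# other high-probability words of those topics as expansion candidates.
query = "diabetes"
term_id = dictionary.token2id.get(query)
if term_id is not None:
    for topic_id, weight in lda.get_term_topics(term_id, minimum_probability=0.0):
        expansion = [word for word, _ in lda.show_topic(topic_id, topn=5)]
        print(f"topic {topic_id} (weight {weight:.3f}):", expansion)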
3 A Case Study: Discovering Perception in e-Commerce Analytics Literature
While there are a variety of strategies for performing an investigative review, some of them demand a substantial amount of time and prior knowledge of the subject. Manually reviewing a large number of papers takes a lot of the researcher's time, whereas topic modelling can be automated, allowing the researcher to spend less time on processing and more effort on the research itself. To demonstrate the approach to discovering research perception using topic modelling, the e-commerce analytics literature was used. The aim is to discover current perceptions regarding data analytics in eCommerce research publications. Figure 1 captures the methodology used for this case study; the workflow diagram adapts a combination of [6, 16].
3.1 Data Collection
To use indexed publications in data analytics for eCommerce research, the Scopus database was used as the data source. Scopus is an Elsevier abstract and indexing database with full-text links; it is the world's most comprehensive abstract and indexing database [44]. The search criterion was TITLE-ABS-KEY ("eCommerce" OR "Electronic commerce" AND "Data Analytics" OR "Data Mining"), which was further limited to the years 2006–2022. A total of 1933 documents were retrieved on the 23rd of January 2021, and the data was downloaded and exported in several file formats as provided on the Scopus platform. The download included metadata for all 1933 articles: the names of authors, countries of the lead researchers, the overall number of papers published, the total number of times each paper was cited, the average citation count, search keywords, journal sources, the number of citing articles with and without self-citations, countries and regions, and author-level metrics (e.g., h-, m-, and g-indices) [45].
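A minimal sketch of loading such an export for modelling follows. The file name and the "Title" and "Abstract" column names are assumptions about the Scopus CSV format; adjust them to the actual export.

# Load the Scopus CSV export and assemble the text field to be modelled.
import pandas as pd

df = pd.read_csv("scopus_export.csv")  # hypothetical file name
docs = (df["Title"].fillna("") + ". " + df["Abstract"].fillna("")).tolist()
print(len(docs), "documents loaded")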
3.2 Data Pre-processing
To process the raw data, some basic text wrangling and preparation were performed, involving tokenizing, lemmatizing nouns, and removing stop words and single-character words. After this, we collected some useful bigram-based phrases from the research papers, discarded some redundant terms, and performed feature engineering and vectorization. A dictionary representation of the documents was created, and words that appeared in fewer than twenty papers (or in more than fifty percent of the documents) were removed. The corpus was then transformed into bag-of-words vectors.

Fig. 1 Methodology workflow for topic modelling
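A minimal sketch of these steps with Gensim follows. The thresholds for the bigram model are illustrative assumptions; the dictionary filter mirrors the twenty-paper and fifty-percent cut-offs stated above.

# Bigram phrases, a pruned dictionary, and bag-of-words vectors.
# `texts` is assumed to be a list of token lists from the cleaned papers.
from gensim.corpora import Dictionary
from gensim.models.phrases import Phrases, Phraser

bigram = Phraser(Phrases(texts, min_count=5, threshold=10))
texts = [bigram[t] for t in texts]  # merge frequent word pairs into bigrams

dictionary = Dictionary(texts)
dictionary.filter_extremes(no_below=20, no_above=0.5)  # <20 papers or >50% of docs
corpus = [dictionary.doc2bow(t) for t in texts]        # bag-of-words corpus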
3.3 Topic Tuning and Applying the LDA Algorithm
To determine the optimal number of topics for the modelling, the MALLET toolkit with Gensim wrappers was used to tune the topic numbers. Based on the result of the
topic tuning, the LDA algorithm was then applied. The concept of LDA is to view every document as a collection of topics and every topic as a collection of words [9]. The main steps are summarized as follows:
(a) Randomly assign each word in a document to one of the k topics, where k is a parameter set by the user.
(b) Calculate the probability distribution of each word in each topic based on word frequency correlations and reassign each word to a new topic.
(c) Continue updating the algorithm until it converges.
(d) Compute the topic-term co-occurrence frequency matrix as the resulting LDA model.
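A sketch of the tuning loop follows. The chapter tuned topic numbers with MALLET via Gensim's wrapper (gensim.models.wrappers.LdaMallet in Gensim versions before 4.0); as a stand-in, this sketch uses Gensim's built-in LdaModel with the same coherence-based selection, and the search range is an illustrative assumption.

# Fit LDA for a range of topic numbers and pick the most coherent one.
from gensim.models import CoherenceModel, LdaModel

scores = {}
for k in range(2, 16):
    lda_k = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                     passes=10, random_state=0)
    cm = CoherenceModel(model=lda_k, texts=texts,
                        dictionary=dictionary, coherence="c_v")
    scores[k] = cm.get_coherence()

best_k = max(scores, key=scores.get)  # 11 in the case study reported below
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=best_k,
               passes=10, random_state=0)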
3.4 Evaluating Topic Modelling Performance
Topic models are unsupervised learning-based models trained on unlabelled data, making the quality of their output difficult to measure. Topic coherence is used to ensure the quality of topic models to a certain degree; however, it can be very complicated. If a group of statements supports each other, they are said to be coherent [6]. To evaluate the topics, we used the topic coherence framework by Röder et al. [46], implemented in Python in the Gensim framework. The result of the LDA algorithm was evaluated using the overall mean coherence score of the model. Perplexity and coherence scores can be applied to validate the topic model. Generally, the less perplexing the model, the better; similarly, the lower the UMass score and the higher the Cv score, the better the model.
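A sketch of computing these measures with Gensim follows, assuming the fitted lda, corpus, texts, and dictionary from the previous steps.

# Perplexity and the two coherence variants mentioned above.
from gensim.models import CoherenceModel

# log_perplexity returns a per-word likelihood bound; the corresponding
# perplexity is 2 ** (-bound), and lower perplexity is better.
print("likelihood bound:", lda.log_perplexity(corpus))

u_mass = CoherenceModel(model=lda, corpus=corpus,
                        dictionary=dictionary, coherence="u_mass")
c_v = CoherenceModel(model=lda, texts=texts,
                     dictionary=dictionary, coherence="c_v")
print("UMass:", u_mass.get_coherence(), "| Cv:", c_v.get_coherence())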
3.5 Interpreting Topic Modelling Results
Finally, pyLDAVis is the most commonly used tool, and a convenient one, for visualizing the information contained in a topic model. The perceptions in the reviewed material are interpreted using the inter-topic distance map in pyLDAVis. The significance of each topic is indicated by the size of its circle, and the gap between the centres of the circles illustrates the connections and relationships among topics within the literature. A bar chart is used to visualize the thirty most important terms in the data set, and the inter-topic distance map is used to establish the varying degrees of consideration given to the most important research topics.
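A sketch of producing the map follows. The Gensim adapter module is pyLDAvis.gensim_models in recent pyLDAvis releases (pyLDAvis.gensim in older ones); the exact module name for a given installation is an assumption.

# Build the interactive inter-topic distance map and save it as HTML.
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis

vis = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "lda_topics.html")  # open the file in a browser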
4 Results and Discussion
The result of the topic tuning is presented in Fig. 2. The graph shows that even though no topic number had a markedly optimal coherence score, topic number 11 had the highest and was therefore selected for the topic modelling. Also, the coherence score for 11 topics is 0.4194; though slightly below average, it appears to be the best given the data used. The topics obtained from the modelling are presented in a term-topic data frame in Fig. 3. Topic 1 reveals issues around recommendation techniques, approaches, and applications, so we can label this topic recommendation analytics research in e-commerce. Topic 2 points to research on security in the eCommerce business. Topic 3 refers to the business intelligence domain. Topic 5 deals with supervised learning issues, while Topics 7 and 8 deal with unsupervised learning issues. Topic 10, for example, outlines product review analytics in e-commerce, and finally, Topic 11 distinctly focuses on customer behaviour analysis on e-commerce platforms. From Fig. 3, it is obvious that there are at least seven distinct perceptions, or directions of interest, in research related to e-commerce and data analytics, as discovered from the topic modelling.

Fig. 2 Topic tuning results
Fig. 3 Emerging topics and the terms contained
Fig. 4 Inter-topic distance map
The map of the inter-topic distances in Fig. 4 helps to give the emerging perceptions a clear pictorial interpretation. The figure shows how separated the topics are. Topic 11, for example, which has been highlighted in red, is the one given the least attention in the literature, but it is a distinct cluster, well separated from the other topics. Topics 9, 6, 4, and 2 follow, all of which are separated. Topic 1, labelled recommendation issues in e-commerce, has the greatest attention but overlaps with Topics 7 and 5. The results of this topic modelling are interesting because they help to show which areas are getting attention and which are not, and whether there is a need for more research that can bring isolated areas together.
4.1 Practical Implication
This research has proposed a simple but effective framework that allows researchers to use topic modelling to conduct a literature review and discover trends that can lay the foundation for further research. The process described will reduce the need for manual reading and allow researchers to analyse a larger number of papers more quickly and with higher accuracy. This research also indicated that topic models can be used to perform various tasks on a document collection, such as document analysis, grouping of documents based on related content, and knowledge discovery/information retrieval. The task should determine how a topic model is formed; different factors will make a topic model more or less suitable for a given task. A topic model is useful if it accomplishes the task it was
intended for. If the topic model is meant to be used to analyse a large number of documents, it succeeds to the extent that it does so in a way that is useful to the researcher, as demonstrated in this chapter. If the goal of a topic model is to group documents into meaningful groups, it is successful to the extent that it achieves that goal. If a topic model is designed as a tool for retrieving information, it is successful if it returns the information the user is seeking. The process described can be improved upon by using other methods to interpret the results of the LDA example and draw further conclusions about the discovered topics; such methods include presenting the topics in the form of word clouds, sentence charts, and t-distributed stochastic neighbour embedding charts.
5 Conclusion
Topic modelling in text mining was discussed in this chapter. LSA, PLSA, LDA, the author-topic model, the dynamic topic model, hierarchical LDA, non-negative matrix factorization, and the correlated topic model have all been discussed, and the differences between these methods were explained in terms of characteristics, limitations, and theoretical backgrounds. Each of these methods was only briefly discussed, providing an overview of how the techniques are used in text mining for topic modelling. In addition, some of the applications of these techniques were covered. The chapter also reviewed current trends in topic modelling and presented a case study on discovering current perceptions from literature materials focused on data analytics in e-commerce. These findings are crucial for researchers to understand research directions in any field of study and to discover quality topics.
References
1. V.B. Kobayashi, S.T. Mol, H.A. Berkers, G. Kismihók, D.N. Den Hartog, Text mining in organizational research. Org. Res. Methods 21(3) (2018)
2. I. Vayansky, S.A.P. Kumar, A review of topic modeling methods. Inf. Syst. 94, 101582 (2020). https://doi.org/10.1016/j.is.2020.101582
3. S.K. Ray, A. Ahmad, C.A. Kumar, Review and implementation of topic modeling in Hindi. Appl. Artif. Intell. 33(11), 979–1007 (2019). https://doi.org/10.1080/08839514.2019.1661576
4. T. Nummelin, R. Hänninen, M. Kniivilä, Exploring forest sector research subjects and trends from 2000 to 2019 using topic modeling. Curr. For. Rep. 267–281 (2021). https://doi.org/10.1007/s40725-021-00152-9
5. C.C. Silva, M. Galster, F. Gilson, Topic modeling in software engineering research (2021)
6. D. Sarkar, Text Analytics with Python (Apress, 2016)
7. M.W. Neff, E.A. Corley, 35 years and 160,000 articles: a bibliometric exploration of the evolution of ecology. Scientometrics 80(3), 657–682 (2009). https://doi.org/10.1007/s11192-008-2099-3
8. H. Jiang, M. Qiang, P. Lin, A topic modeling based bibliometric exploration of hydropower research. Renew. Sustain. Energy Rev. 57, 226–237 (2016). https://doi.org/10.1016/j.rser.2015.12.194
9. Z. Ding, Z. Li, C. Fan, Building energy savings: analysis of research trends based on text mining. Autom. Constr. 96, 398–410 (2018). https://doi.org/10.1016/j.autcon.2018.10.008
10. H. Xiong, Y. Cheng, W. Zhao, J. Liu, Analyzing scientific research topics in manufacturing field using a topic model. Comput. Ind. Eng. 135, 333–347 (2019). https://doi.org/10.1016/j.cie.2019.06.010
11. S. Zaza, M. Al-Emran, Mining and exploration of credit cards data in UAE, in Proceedings of 2015 5th International Conference on e-Learning (ECONF 2015) (2016), pp. 275–279. https://doi.org/10.1109/ECONF.2015.57
12. S. Hantoobi, A. Wahdan, M. Al-Emran, K. Shaalan, A review of learning analytics studies. Stud. Syst. Decis. Control 335, 119–134 (2021). https://doi.org/10.1007/978-3-030-64987-6_8
13. S. Paek, T. Um, N. Kim, Exploring latent topics and international research trends in competency-based education using topic modeling. Educ. Sci. 11(6) (2021). https://doi.org/10.3390/educsci11060303
14. T.M. Pratidina, D.B. Setyohadi, Automatization news grouping using latent dirichlet allocation for improving efficiency. Int. J. Innov. Comput. Inf. Control 17(5), 1643–1651 (2021). https://doi.org/10.24507/ijicic.17.05.1643
15. S.A. Salloum, M. Al-Emran, K. Shaalan, Mining text in news channels: a case study from Facebook. Int. J. Inf. Technol. Lang. Stud. 1(1), 1–9 (2017)
16. C.B. Asmussen, C. Møller, Smart literature review: a practical topic modelling approach to exploratory literature review. J. Big Data (2019). https://doi.org/10.1186/s40537-019-0255-7
17. P. Kherwa, P. Bansal, Topic modeling: a comprehensive review. ICST Trans. Scalable Inf. Syst. 159623 (2018). https://doi.org/10.4108/eai.13-7-2018.159623
18. Q. Wang, J. Xu, H. Li, N. Craswell, Regularized latent semantic indexing: a new approach to large-scale topic modeling. ACM Trans. Inf. Syst. 31(1) (2013). https://doi.org/10.1145/2414782.2414787
19. S. Debortoli, O. Müller, I. Junglas, Text mining for information systems researchers: an annotated topic modeling tutorial. Commun. Assoc. Inform. Syst. 39 (2016). https://doi.org/10.17705/1CAIS.03907
20. D.T.K. Geeganage, Concept embedded topic modeling technique (2018), pp. 831–835
21. O. Kononova, T. He, H. Huo, A. Trewartha, E.A. Olivetti, G. Ceder, Opportunities and challenges of text mining in materials research. iScience 24(3), 102155 (2021). https://doi.org/10.1016/j.isci.2021.102155
22. R. Alghamdi, A survey of topic modeling in text mining. Int. J. Adv. Comput. Sci. Appl. 6(1), 147–153 (2015)
23. H. Jelodar, Y. Wang, Latent Dirichlet Allocation (LDA) and topic modeling: models, applications, a survey (Nov 2017)
24. D.M. Blei, J.D. Lafferty, Dynamic topic models. ACM Int. Conf. Proc. Ser. 148, 113–120 (2006). https://doi.org/10.1145/1143844.1143859
25. M. Rosen-Zvi, T. Griffiths, P. Smyth, M. Steyvers, Learning author topic models from text corpora. J. Mach. Learn. Res. 1–38 (2005). Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.7284&rep=rep1&type=pdf
26. D.M. Blei, J.D. Lafferty, A correlated topic model of science. Ann. Appl. Stat. 1(1), 17–35 (2007). https://doi.org/10.1214/07-aoas114
27. X. Bai, X. Zhang, K.X. Li, Y. Zhou, K. Fai, Research topics and trends in the maritime transport: a structural topic model. Transp. Policy 102, 11–24 (2021). https://doi.org/10.1016/j.tranpol.2020.12.013
28. S. Rani, M. Kumar, Topic modeling and its applications in materials science and engineering. Mater. Today Proc. 45, 5591–5596 (2021). https://doi.org/10.1016/j.matpr.2021.02.313
29. C. Jacobi, W. Van Atteveldt, K. Welbers, Quantitative analysis of large amounts of journalistic texts using topic modelling. Digit. Journal. (2015). https://doi.org/10.1080/21670811.2015.1093271
30. T. Bergmanis, S. Goldwater, Context sensitive neural lemmatization with Lematus, in NAACL HLT 2018, Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long Papers) (2018), pp. 1391–1400. https://doi.org/10.18653/v1/n18-1126
31. D. Elgesem, I. Feinerer, L. Steskal, Bloggers' responses to the Snowden affair: combining automated and manual methods in the analysis of news blogging. Comput. Support. Coop. Work (CSCW) 25(2–3), 167–191 (2016). https://doi.org/10.1007/s10606-016-9251-z
32. D. Maier et al., Applying LDA topic modeling in communication research: toward a valid and reliable methodology. Commun. Methods Meas. 12(2–3), 93–118 (2018). https://doi.org/10.1080/19312458.2018.1430754
33. Y. Hu, A. John, F. Wang, S. Kambhampati, ET-LDA: joint topic modeling for aligning events and their twitter feedback. Proc. Natl. Conf. Artif. Intell. 1, 59–65 (2012)
34. A. Panichella, B. Dit, R. Oliveto, M. Di Penta, D. Poshyvanyk, A. De Lucia, How to effectively use topic models for software engineering tasks? An approach based on genetic algorithms, in Proceedings of International Conference on Software Engineering (2013), pp. 522–531. https://doi.org/10.1109/ICSE.2013.6606598
35. Y. Kim, K. Shim, TWILITE: a recommendation system for Twitter using a probabilistic model based on latent Dirichlet allocation. Inf. Syst. 42, 59–77 (2014). https://doi.org/10.1016/j.is.2013.11.003
36. D. Gritsenko, The Palgrave Handbook of Digital Russia Studies (2020)
37. Y. Hu, J. Boyd-Graber, B. Satinoff, A. Smith, Interactive topic modeling. Mach. Learn. 95(3), 423–469 (2014). https://doi.org/10.1007/s10994-013-5413-0
38. A. Wahdan, S. Hantoobi, M. Al-Emran, Early detecting students at risk using machine learning predictive models (2022)
39. K. Vorontsov, A. Potapenko, Tutorial on probabilistic topic modeling: additive regularization for stochastic matrix factorization (2014)
40. A. Daud, J. Li, L. Zhou, F. Muhammad, Knowledge discovery through directed probabilistic topic models: a survey (2009). https://doi.org/10.1007/s11704-009-0062-y
41. J. Boyd-Graber, D. Mimno, Applications of Topic Models (2017), pp. 1–154
42. A.T.M. Abu Saa, Mining Student Information System Records to Predict Students' Academic Performance. Thesis (Nov 2018)
43. Q.T. Zeng, D. Redd, T. Rindflesch, J. Nebeker, Synonym, topic model and predicate-based query expansion for retrieving clinical documents. AMIA Annu. Symp. Proc. 2012, 1050–1059 (2012)
44. J.F. Burnham, Scopus database: a review. Biomed. Digital Libr. 3(1), 1–8 (2006). https://doi.org/10.1186/1742-5581-3-1
45. I. Martynov, J. Klima-Frysch, J. Schoenberger, A scientometric analysis of neuroblastoma research (2020), pp. 1–10
46. M. Röder, A. Both, A. Hinneburg, Exploring the space of topic coherence measures, in WSDM 2015, Proceedings of the 8th ACM International Conference on Web Search and Data Mining (2015), pp. 399–408. https://doi.org/10.1145/2684822.2685324
A Survey on Crowdsourcing Applications in Smart Cities Hamed Vahdat-Nejad, Tahereh Tamadon, Fatemeh Salmani, Zeynab Kiani-Zadegan, Sajedeh Abbasi, and Fateme-Sadat Seyyedi
Abstract With the emergence of the Internet of things (IoT), human life is now progressing towards smartification faster than ever before. Thus, smart cities have become automated in different aspects such as business, education, economy, medicine, and urban areas. Since smartification requires a variety of dynamic information in different urban dimensions, mobile crowdsourcing has gained importance in smart cities. This chapter systematically reviews the related applications of smart cities that use mobile crowdsourcing for data acquisition. For this purpose, the applications are classified as environmental, urban life, and transportation categories and then investigated in detail. This survey helps in understanding the current situation of smart cities from the viewpoint of crowdsourcing and discusses the future research directions in this field. Keywords Mobile crowdsourcing applications · Smart city · Urban service · Transportation · Survey
H. Vahdat-Nejad (B) · T. Tamadon · F. Salmani · Z. Kiani-Zadegan · S. Abbasi · F.-S. Seyyedi PerLab, Faculty of Electrical and Computer Engineering, University of Birjand, Birjand, Iran e-mail: [email protected] T. Tamadon e-mail: [email protected] F. Salmani e-mail: [email protected] Z. Kiani-Zadegan e-mail: [email protected] S. Abbasi e-mail: [email protected] F.-S. Seyyedi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_14
1 Introduction
Technological developments have turned the objects of human life into smart objects that can perceive and react to their surrounding environments [1, 2]. These smart objects constitute the main part of the Internet of Things (IoT) [3]. With its smart solutions, the IoT has significant effects on all dimensions of human life [4]. It is used in different areas such as healthcare [5–7], environment [8, 9], transportation [10–12], security [13, 14], entertainment [15], business [16], education [17, 18], and tourism [19]. After the previous three industrial revolutions of mechanization, electricity, and IT, the IoT and its relevant services are known as the fourth industrial revolution [20, 21]. According to previous studies, more than 50 billion devices were connected to the Internet in 2021 [22] due to the pervasiveness of the IoT and its countless applications. By 2025, more than 100 billion devices will be connected to the Internet [23]. These figures indicate the ever-increasing growth of the IoT.

As the world's population grows, there has been an unprecedented upward trend in urbanization [24]; therefore, urban infrastructure has been under tremendous pressure. As a result, urban managers must modernize urban life, formulate effective strategies, and make proactive plans [25]. Therefore, different countries have made efforts in urban services and infrastructure to develop cities with better facilities and improve socio-economic conditions [26]. These efforts have led to the development of smart cities, which can greatly help states solve the socio-economic crises of recent years [27]. Thanks to the IoT, smart cities are now growing rapidly [28, 29]. Different projects in smart cities have been conducted, such as smart lighting [30, 31], smart parking [32, 33], smart agriculture [34, 35], smart waste management [36, 37], smart buildings [38, 39], and smart tourism [40, 41].

Smart cities are based on intelligent citizens who can provide appropriate feedback and useful ideas. However, collaboration with citizens is an issue for realizing smart cities. With the development of advanced mobile devices, crowdsourcing has emerged to provide collaboration with citizens in smart cities and to employ the potential capacity of citizens (crowds) for different tasks. Accordingly, crowdsourcing is an online process of collecting information through the participation of citizens via their smartphones [42, 43]. In fact, crowdsourcing is a participatory online activity in which an individual, an institution, an organization, or a private company recommends participation in an activity to a group of heterogeneous individuals. Users (crowds) participate in that activity by bringing their work, money, knowledge, and experience [44]. Collecting information in a smart city by crowdsourcing helps to obtain social feedback on a subject or to gather solutions to a problem [45, 46]. Crowdsourcing has now attracted much attention both academically [47, 48] and commercially1 [49].

Various survey studies have analyzed smart cities from different perspectives, such as concepts and applications [28, 50–52], data management [53], security and privacy [54], and cloud computing [55]. Besides, many other survey studies have analyzed crowdsourcing in different aspects such as overviews [56], obstacles and barriers [57],
1 https://www.upwork.com; https://www.mturk.com.
incentive and task assignment [58], software engineering [59], security, privacy and trust [60], quality control [61], context awareness [62], and spatial crowdsourcing [63, 64]. However, crowdsourcing in smart cities has specific characteristics that have been mostly neglected by previous survey papers. In this regard, two survey papers [65, 66] have recently investigated crowdsourcing in smart cities. The main remaining gap is a concentration on applications, which constitute the main perception citizens have of smart cities. To this end, this review study collects, analyzes, and classifies the research applications that have employed crowdsourcing in smart cities. We used the keywords "crowd sensing" or "crowd sourcing" and "smart city" or "smart cities" in Google Scholar, the most comprehensive paper indexing service. We then refined the retrieved results in two steps: (1) we eliminated the papers published by unknown or unpopular publishers, and (2) we semantically checked the papers to eliminate those that had been retrieved by mistake. After review, these applications and research studies are classified into environmental, urban life, and transportation categories (Table 1), each of which is discussed separately and in detail. This can help clearly understand the status quo as well as future research paths. The main contribution of this chapter is systematically investigating crowdsourcing-based research in smart cities from the application perspective. This chapter consists of the following sections. Section 2 reviews the environmental applications of crowdsourcing in smart cities. Similarly, Sects. 3 and 4 investigate urban life and transportation applications, respectively. In the end, Sect. 5 concludes and presents future research paths.
2 Environmental
The development of cities and human interference in nature have caused many problems such as floods, air pollution, and noise pollution. It is essential to monitor the environment, especially the urban environment, because doing so can raise public awareness of human living environments. In this regard, an efficient solution for environmental monitoring is mobile crowdsourcing, in which mobile users collect and report problems in the urban environment [89]. To collect information through crowdsourcing, it is necessary to prepare a platform consisting of users' mobile devices and urban servers [90]. Urban managers are then informed about environmental problems in time to take the necessary actions to solve them.

Floods are among the natural disasters that can cause great loss of life and financial damage. For many years, human societies have been trying to reduce the consequent costs and damages by predicting the occurrence of floods. Unfortunately, many cities have no flood warning systems. With the help of crowdsourcing, early warning systems can predict floods [91]. For this purpose, an Android application and a Web-based system have been presented to predict floods through crowdsourcing [67]. For participation, users must have smartphones with specialized sensors such as an accelerometer, gyroscope, magnetometer, barometer, inertial measurement unit (IMU), and GPS. When users take photos of the edge of the water surface
Table 1 Research studies reviewed

Category       | Research application        | Purpose
---------------|-----------------------------|------------------------------------------------
Environmental  | [67]                        | Flood forecasting
Environmental  | MyCoast [68]                | Flood forecasting
Environmental  | Environmental sensing [69]  | Air pollution measurement
Environmental  | AirSense [70, 71]           | Air pollution measurement
Environmental  | City Soundscape [72]        | Noise estimation
Urban life     | UserVoice [73]              | Acquiring innovative ideas
Urban life     | PublicSense [74]            | Reporting problems of public facilities
Urban life     | [75]                        | Reporting problems of public facilities
Urban life     | CUAPS [76]                  | Urban anomaly prediction
Urban life     | mPASS [77]                  | Providing a personalized path for special users
Urban life     | FlierMeet [78]              | Advertisement sharing
Transportation | [79, 80]                    | Acquiring parking spaces
Transportation | [81]                        | Acquiring parking spaces
Transportation | [82]                        | Acquiring parking spaces
Transportation | [83]                        | Acquiring parking spaces
Transportation | [84]                        | Smart public transportation
Transportation | CrowdOut [85]               | Road safety
Transportation | [86]                        | Fuel prices
Transportation | [87]                        | Navigation
Transportation | [88]                        | Music recommendation
(e.g., riverbanks), the data of the mobile sensors are processed through a geometric method (i.e., the 3-cone intersection method) to determine the altitude and pitch angle. The resulting information is then sent to servers along with the photos. Finally, a map of information on floods and rivers is shown to users and urban managers via a Web application. In MyCoast [68], experts take photos of districts where they think floods are likely to occur and provide comments containing estimates of the water depth and of the fall or rise of the water level. The data is then sent with spatiotemporal
The data is then sent with spatiotemporal information to the server, where convolutional neural networks process the images to predict flood occurrence. Moreover, tweets with flood-related keywords such as flood, inundation, dam, dike, and levee are extracted. The relevant tweets are then processed through natural language processing (NLP) methods to extract their spatial information. Based on the crowdsourcing and Twitter data, the server finally predicts the flood location and presents it on Google Maps.

Air pollution is among the most prevalent problems in metropolises, where citizens need to know the air quality in their neighborhoods [92]. In environmental sensing [69], citizens measure the concentration of environmental pollutants such as carbon monoxide and nitrogen dioxide, along with temperature and humidity, using portable environmental sensing devices (equipped with a battery, onboard microprocessor, firmware, and Bluetooth) that they carry while walking or cycling. This information is sent to a unified sensing platform and processed through different analysis algorithms. If the measured environmental data of a district is incomplete, data-driven regression and interpolation techniques are utilized to complete it. Ultimately, the air pollution information is presented to users as a continuously updated heat map.

In AirSense [70, 71], air quality monitoring devices (AQMD) send their IDs and measured air quality data via Bluetooth to the nearest smartphone on which the application is installed. The received data is cleaned, formatted, and then sent to the cloud, where OpenShift [93] (a platform-as-a-service offering used to store, aggregate, and analyze the sensor data) collects, aggregates, analyzes, and stores it. Finally, at the request of users, the local air quality index as well as the air quality index map (AQI map) are sent to them for the neighborhood in which they are located.

Noise pollution is another problem metropolises face as their populations grow. It is serious enough to have various physical and mental effects on citizens and jeopardize their health; hence, authorities need reliable information sources to devise solutions [94]. City Soundscape [72] is a crowdsourcing platform for large-scale, low-cost acoustic measurement. Using this application, users can record noise automatically (the application collects data on its own) or manually (users set the starting point and duration of the noise measurement) through the built-in mobile microphones and send the recordings to the server. Users can also comment on or photograph a noisy place and rate their dissatisfaction with the environment on a numerical psychometric scale, which helps perceive noise pollution levels. Data processing is then performed on the server following an extract-transform-load (ETL) pipeline (a process by which information is collected from one or several sources, processed, and uploaded to the database). Ultimately, objective noise maps are created along with subjective noise maps based on user comments.
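The aggregation step of such an ETL pipeline can be illustrated in a few lines of Python. The sketch below bins crowdsourced readings into grid cells and averages them into an objective (measured dB) map and a subjective (user rating) map; the cell size, record fields, and sample coordinates are assumptions of this sketch, not details taken from [72].

```python
from collections import defaultdict
from statistics import mean

def build_noise_maps(measurements, cell=0.001):
    """Aggregate crowdsourced readings into an objective map (mean dB per
    grid cell) and a subjective map (mean user rating per cell).
    measurements: iterable of (lat, lon, db, rating) tuples; rating may be
    None when a user submitted only a recording, no opinion."""
    objective, subjective = defaultdict(list), defaultdict(list)
    for lat, lon, db, rating in measurements:
        key = (round(lat / cell), round(lon / cell))  # snap to a grid cell
        objective[key].append(db)
        if rating is not None:
            subjective[key].append(rating)
    obj_map = {k: mean(v) for k, v in objective.items()}
    subj_map = {k: mean(v) for k, v in subjective.items()}
    return obj_map, subj_map

# Hypothetical readings: two in the same cell, one farther away
samples = [(40.0001, 29.0002, 68.0, 4), (40.0002, 29.0001, 72.0, None),
           (40.0100, 29.0100, 55.0, 2)]
print(build_noise_maps(samples))
```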
3 Urban Life

With the scientific advances at the dawn of the Industrial Revolution, many changes occurred in urban structures, resulting in modern urbanization. With population growth, urban infrastructure has gradually come under a great deal of pressure; therefore, managers have developed cities based on smart city frameworks [25]. Information technology (IT) now plays a key role in the smartification of cities [95], and mobile crowdsourcing can collect a plethora of information from cities to help states improve citizens' lives [25]. In this section, we review the studies that used mobile crowdsourcing for urban life smartification.

Urban managers need innovative ideas to plan urban projects for the smartification of cities. UserVoice [73] offers an online interactive platform for collecting citizens' ideas through mobile crowdsourcing. In addition to recording their own ideas, citizens can vote and comment on the ideas of others; every user is allowed to vote on three ideas. To encourage participation, an Apple iPad 2 was awarded to one user at random. Experts then evaluate the ideas against three criteria (innovation, feasibility, and benefit for users), with Krippendorff's alpha (an index of inter-rater agreement) measuring the extent to which the experts agree on the quality ratings. The mean of every quality dimension is then calculated to select the highest-ranked ideas for the urban development stage.

The failure of urban facilities can affect citizens' quality of life. Therefore, a crowdsourcing platform has been developed to report the problems of public facilities in smart cities [74]. Users enter their name, phone number, the type of damage, and corresponding photos to complete a report in the application and send it to the server. After normalizing the photos, the server classifies the data sent by users. Finally, users can look up different reports on a city map and learn the details of a problem, the responses, and recent events.

Since many users report many problems, it is important to identify repetitive reports. For this purpose, the repetitiveness or genuineness of a report is determined from its temporal dimension, two spatial dimensions (latitude and longitude), and report category (e.g., street light, traffic sign, and pothole), using the least-squares distance and Bayes' theorem [75]; the gating idea is sketched below.
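The sketch below shows only the spatio-temporal gating idea behind duplicate detection; the thresholds and field names are illustrative assumptions, whereas [75] learns the decision with a Bayesian model over least-squares distances rather than fixed cut-offs.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Report:
    t: float        # report time (seconds)
    lat: float
    lon: float
    category: str   # e.g. "street light", "traffic sign", "pothole"

def is_duplicate(new, existing, max_dist=0.001, max_dt=3600.0):
    """Flag a report as repetitive if an earlier report of the same
    category lies within a spatial radius (in degrees, roughly 100 m
    here) and a time window. Thresholds are illustrative only."""
    for r in existing:
        same_cat = (r.category == new.category)
        close = hypot(r.lat - new.lat, r.lon - new.lon) <= max_dist
        recent = abs(r.t - new.t) <= max_dt
        if same_cat and close and recent:
            return True
    return False

history = [Report(0.0, 35.7101, 59.2051, "pothole")]
print(is_duplicate(Report(600.0, 35.7102, 59.2050, "pothole"), history))  # True
```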
In metropolises, people of different opinions and cultures live together, which can lead some citizens to exhibit abnormal behaviors. CUAPS [76] has been proposed to predict urban abnormalities (e.g., noise, illegal use of public facilities, and urban infrastructure malfunctions) in a timely manner. Citizens send their complaints to this system; the city districts are then clustered through a Bayesian inference model based on the number of abnormalities and their temporal vectors (showing when the abnormalities occurred). Finally, Markov trajectory estimation is employed to predict the abnormality of every district based on the dependencies among districts and the district's abnormality history. The timely prediction of abnormalities can help prevent them and bring peace to citizens.

Another problem with urbanization is the neglect of people with special needs (e.g., children, the elderly, and the disabled), whose commuting is not usually well supported by urban facilities [77]. With the help of users, mPASS [96] collects information on facilities/barriers such as stairs and ramps across a city. After users specify an origin and a destination in the application, a recommended series of personal paths for walking (based on facilities/barriers) and traveling by bus (with the equipment required by citizens with special needs) is shown to them.

Advertising has a major role in urban life. FlierMeet [78] offers an advertisement sharing system through crowdsourcing. Using their mobile cameras, users take photos of notice boards across the city and send them to the server via the application. The GPS, light sensor, accelerometer, and magnetometer of the smartphones are also employed to measure and send the position, light intensity, motion blur, and shooting angle. The server evaluates the quality of the photos and deletes the low-quality or repetitive ones. The contents of the fliers are labeled with tags including advertisement, educational event, notice, and recruitment. Based on people's interests, the fliers are then labeled with semantic tags: popular (fliers used by most people and reported from different locations), hot (fliers with extensive audiences that become extremely popular in a short time), professional (fliers intended for a specific population with common needs and skills), and surprising. Users can see their favorite fliers on the map based on the classification or semantic tags and can also comment on them.
4 Transportation

The increasing volume of transportation in smart cities has caused various issues such as traffic congestion, accidents, and insufficient on-street parking. Urban transportation systems are therefore an important aspect of modern urbanization and are becoming smart with technological advances. Mobile crowdsourcing is a paradigm for the real-time collection of a great deal of information for smart transportation in cities [97]. Based on crowdsourcing, intelligent transportation systems (ITSs) can identify road conditions and offer quick, safe routes to citizens by predicting traffic across the city [98]. These systems can also provide other services such as crowdsourced geospatial data acquisition, urban traffic planning and management, smart parking, and green transportation [99]. This section reviews the papers that have used mobile crowdsourcing in different areas of urban transportation.

Finding a parking spot at rush hour has become a challenge for drivers in metropolises. Mobile crowdsourcing has been employed to propose a smart parking system [79, 80]. Drivers manually record the number of on-street parking spots in a questionnaire and then submit it. The server initially marks the status of every on-street parking spot as unknown; after the questionnaire data are received, the status of each spot is changed to occupied or available. Finally, available parking spots are presented to users graphically or by voice, along with other information such as the parking price, statistics about the arrival rate of vehicles, and parking rates.
A mobile crowdsourcing application [81] has also been proposed to show a map of available on-street parking spots. To use this application, drivers must attach their smartphones vertically to their windshields with the magnetometer, gyroscope, and GPS sensors activated; the orientation of the magnetometer's axes is important when processing its signal. Using the information received from the smartphones, the server infers the parking-space status around the user's car.

Another study [82] leverages a vehicle equipped with a sonar sensor, an ultrasonic rangefinder, GPS, a camera, a Raspberry Pi board, and a 3G/4G antenna and connection. The ultrasonic rangefinder emits a short pulse every 50 ms to measure the distance between the car and the roadside. Vehicles are then distinguished from roadside barriers through features extracted from the sonar sensor, including the vehicle length, the distance to a parked car, the standard deviation of the distance, and the angle between the vertices and the bottom of the detected object. After collecting and aggregating the parking spot information (e.g., parking capacity and location), the server renders parking capacity and spots on a map.

Developing technology-based transportation projects can impose hefty costs on organizations and companies; therefore, it is necessary to analyze a project before it is implemented and operationalized in order to identify and solve potential issues. For this purpose, a crowdsourcing simulator has been implemented in Java to extract urban parking spots [83]. This simulator uses MASON [100] as a multi-agent simulation toolkit and utilizes Mapzen (https://www.mapzen.com/) to download geographic data. The simulator lets users report occupied or available parking spots to the server. Moreover, an aging algorithm estimates the validity of a parking spot: a report becomes unreliable after two minutes and is considered invalid and deleted from the system after five minutes (this rule is sketched in the code below). Finally, OpenStreetMap (OSM) [101] services display the map of available parking spaces obtained from users' participation.
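The two-threshold aging rule of [83] can be written directly; the function and data layout below are a sketch, not the simulator's actual Java implementation.

```python
import time

UNRELIABLE_AFTER = 120.0   # seconds: report is kept but flagged stale
INVALID_AFTER = 300.0      # seconds: report is removed from the system

def classify_report(report_time, now=None):
    """Return the validity state of a crowdsourced parking-spot report
    under the aging rule described in [83]."""
    now = time.time() if now is None else now
    age = now - report_time
    if age >= INVALID_AFTER:
        return "invalid"       # delete from the map
    if age >= UNRELIABLE_AFTER:
        return "unreliable"    # keep, but mark as stale
    return "reliable"

def purge(report_times, now=None):
    """Drop reports older than five minutes; keep the rest."""
    now = time.time() if now is None else now
    return [r for r in report_times if classify_report(r, now) != "invalid"]

now = 1000.0
print(classify_report(now - 60, now), classify_report(now - 200, now),
      classify_report(now - 400, now))  # reliable unreliable invalid
```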
The public transportation system now plays a significant role in moving citizens. People are willing to use public buses for various reasons, such as low cost and ease of use; the challenge is the lack of awareness of bus arrival times at stops, which wastes people's time. In this regard, a mobile crowdsourcing application [84] based on the iBeacon technology (https://developer.apple.com/ibeacon/) has been presented to provide awareness of buses' positions and their approximate arrival times at different stops. The Bluetooth-based beacons installed on buses and bus stops broadcast signals carrying their IDs to the surrounding environment. A user's smartphone receives the bus stop ID and sends it to the server to inquire about bus arrival times. When the user boards an arriving bus, a message containing information on the bus and the bus stop is sent to the server, and the bus's current position is also sent to the server periodically. Finally, the server estimates the arrival time of the bus at the next stop and notifies citizens so that they can use the public transportation system with appropriate timing.

In metropolises, non-compliance with traffic laws and road defects can cause serious issues. CrowdOut [85], a crowdsourcing-based service, allows citizens to report traffic violations and road defects. If users witness violations (e.g., unauthorized speed or parking) or defects (e.g., traffic light failure, damaged roads, and congestion), they can report the problem. They can also take photos, provide short comments about the type of violation (e.g., a parking violation) with GPS coordinates, and send them to the server. After aggregating the received data, the server displays a map of problems and violations to urban managers via Google Maps. Urban managers can also inspect the photos as well as various diagrams of violations in different districts to take the necessary actions (e.g., contacting the owner of a vehicle or alerting the impound service to remove it).

Given the fuel price differences among gas stations in some countries, drivers prefer to fuel their cars at stations offering low prices. Hence, a mobile crowdsourcing scheme has been proposed [86] that lets drivers observe the fuel prices at different gas stations. As a user's car approaches a gas station, the user's mobile camera, mounted at the front of the car, is activated automatically to capture photos of the price boards. The photos are then processed on the server, and the fuel price characters are recognized by a neural network.

Since there might be several routes to a destination, a smart navigator [87] has been designed to offer a personalized, optimal route based on road surface quality and the driver's risk profile, in addition to time and distance. The surface quality of roads is obtained from the on-board diagnostics II (OBD-II) unit and the inertial sensors (e.g., accelerometer and gyroscope) of vehicles or of the drivers' smart devices. These sensors measure the vehicle's linear accelerations and angular rotations, and the resultant information is sent to the driver's smartphone via Bluetooth. In local processing, the information is first de-noised through wavelet packet decomposition (WPD), and the frequency of road anomalies is then obtained. Feature extraction (statistical, time, frequency, and time-frequency features) and classification techniques are employed to identify and classify these anomalies, and the location-stamped road anomaly information is sent to the server. In addition to the inertial sensors, front and rear cameras and short-range and long-range radar sensors are used to observe the driver's behavior; a hidden Markov model and the Viterbi algorithm determine the behavior, which is then location stamped and sent to the server. The quality of a road is finally classified as good, moderate, or poor through fuzzy inference. Moreover, the driver's behavioral risk is determined for different route segments with respect to the identified behavior and environmental features (weather conditions). At the time of a route request, the navigator analyzes all the connected routes between the origin and the destination to find the optimal one. The information of the different route segments is extracted from the database to obtain the overall quality level of a route as the mean quality of its segments; in addition, the driver's risk history on similar road segments is extracted from the database.
Based on a weighted mean, the overall risk of a road is then obtained for the driver and labeled risky, moderate, or safe. Ultimately, the navigator recommends the optimal route to the driver based on the resultant information; a simplified version of this aggregation is sketched below.
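The aggregation in [87] combines the mean segment quality with a weighted mean of the driver's historical risk on similar segments. The sketch below uses made-up weights, thresholds, and labels, since the paper's exact fuzzy membership functions are not reproduced here.

```python
QUALITY = {"good": 2, "moderate": 1, "poor": 0}

def route_quality(segment_labels):
    """Overall route quality as the mean of per-segment quality classes."""
    return sum(QUALITY[s] for s in segment_labels) / len(segment_labels)

def route_risk(segment_risks, weights=None):
    """Weighted mean of per-segment driver-risk scores (0 = safe, 1 = risky);
    the weights (e.g. segment lengths) and cut-offs are assumptions."""
    weights = weights or [1.0] * len(segment_risks)
    mean = sum(w * r for w, r in zip(weights, segment_risks)) / sum(weights)
    if mean > 0.66:
        return "risky"
    if mean > 0.33:
        return "moderate"
    return "safe"

print(route_quality(["good", "moderate", "good"]))            # ~1.67
print(route_risk([0.2, 0.4, 0.1], weights=[3.0, 1.0, 1.0]))   # safe
```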
Playing appropriate music can prevent driver drowsiness and improve the driver's mood. For this purpose, a crowdsourcing system [88] has been proposed for music selection while driving. Based on their feelings while listening to a group of songs, users attach specific tags (e.g., lively, energetic, sad, groovy, noisy, and peaceful) along with their social context information (e.g., age, gender, and cultural background). The data is grouped and aggregated on the server by social context. Ultimately, an appropriate song is played to each driver with respect to the driver's mood context.
5 Conclusion

This chapter reviewed and classified the smart city studies that adopted mobile crowdsourcing models. These systems have been classified into environmental, urban life, and transportation classes. For every reviewed system, one smart city problem has been discussed and the related solutions explained. Given the extensiveness of these studies, it can be concluded that mobile crowdsourcing has had a central role in solving the issues of smart cities. However, the applications of these systems span diverse areas and could be regarded as disjoint, independent islands. Furthermore, the results show that almost none of the studies have used crowdsourcing for smart governance, although it might be regarded as the primary application of crowdsourcing.

We anticipate that the mobile crowdsourcing model will be used in different specialized areas of smart cities such as healthcare, archeology, tourism, social sciences, and confrontation with natural and unnatural disasters. Dealing with natural disasters (e.g., earthquakes, fires, floods, and viral epidemics) and unnatural ones (e.g., wars and terrorist attacks) is among the most serious challenges in smart cities. A common step in dealing with these disasters is obtaining accurate and dynamic information (in spatiotemporal dimensions) to make plans and manage responses. For this purpose, crowdsourcing can be a key strategy for collecting dynamic spatiotemporal information in future research studies.
References

1. G. Kortuem, F. Kawsar, V. Sundramoorthy, D. Fitton, Smart objects as building blocks for the internet of things. IEEE Internet Comput. 14(1), 44–51 (2009)
2. M. Al-Emran, Evaluating the use of smartwatches for learning purposes through the integration of the technology acceptance model and task-technology fit. Int. J. Hum. Comput. Inter. 37(19), 1874–1882 (2021)
3. H. Vahdat-Nejad, Z. Mazhar-Farimani, A. Tavakolifar, Social internet of things and new generation computing—a survey, in Toward Social Internet of Things (SIoT): Enabling Technologies, Architectures and Applications (Springer, Berlin, 2020), pp. 139–149
4. M.A. Rahman, M.M. Rashid, M.S. Hossain, E. Hassanain, M.F. Alhamid, M. Guizani, Blockchain and IoT-based cognitive edge framework for sharing economy services in a smart city. IEEE Access 7, 18611–18621 (2019) 5. A. Albahri et al., IoT-based telemedicine for disease prevention and health promotion: Stateof-the-Art. J. Netw. Comput. Appl. 173, 102873 (2021) 6. M.S. Hossain, G. Muhammad, Cloud-assisted industrial internet of things (iiot)–enabled framework for health monitoring. Comput. Netw. 101, 192–202 (2016) 7. S.R. Islam, D. Kwak, M.H. Kabir, M. Hossain, K.-S. Kwak, The internet of things for health care: a comprehensive survey. IEEE Access 3, 678–708 (2015) 8. S. Fang et al., An integrated system for regional environmental monitoring and management based on internet of things. IEEE Trans. Industr. Inf. 10, 1596–1605 (2014) 9. F. Montori, L. Bedogni, L. Bononi, A collaborative internet of things architecture for smart cities and environmental monitoring. IEEE Internet Things J. 5, 592–605 (2017) 10. A. Ramazani, H. Vahdat-Nejad, CANS: context-aware traffic estimation and navigation system. IET Intel. Transport Syst. 11, 326–333 (2017) 11. R. Sfar, Y. Challal, P. Moyal, E. Natalizio, A game theoretic approach for privacy preserving model in IoT-based transportation. IEEE Trans. Intell. Transp. Syst. 20, 4405–4414 (2019) 12. F. Zantalis, G. Koulouras, S. Karabetsos, D. Kandris, A review of machine learning and IoT in smart transportation. Future Internet 11, 94 (2019) 13. P. Datta, B. Sharma, A survey on IoT architectures, protocols, security and smart city based applications, in 2017 8th International Conference on Computing, Communication and Networking Technologies, India (IEEE, 2007) 14. D. Wang, B. Bai, K. Lei, W. Zhao, Y. Yang, Z. Han, Enhancing information security via physical layer approaches in heterogeneous IoT with multiple access mobile edge computing in smart city. IEEE Access 7, 54508–54521 (2019) 15. B. Zhong, F. Yang, From entertainment device to IoT terminal, in Handbook of Research on Managerial Practices and Disruptive Innovation in Asia (2020) 16. B. Nguyen, L. Simkin, The internet of things (IoT) and marketing: the state of play, future trends and the implications for marketing. J. Market. Manage. 33 1–6 (2017) 17. J. H. Al Shamsi, M. Al-Emran, K. Shaalan, Understanding key drivers affecting students’ use of artificial intelligence-based voice assistants. Educ. Inform. Technol. 1–21 (2022) 18. M. Al-Emran, R. Al-Maroof, M. A. Al-Sharafi, I. Arpaci, What impacts learning with wearables? An integrated theoretical model. Interact. Learn. Environ. 1–21 (2020) 19. H. Vahdat-Nejad, H. Khosravi-Mahmouei, M. Ghanei-Ostad, A. Ramazani, Survey on context-aware tour guide systems. IET Smart Cities 2, 34–42 (2020) 20. M. Chen, J. Yang, X. Zhu, X. Wang, M. Liu, J. Song, Smart home 2.0: Innovative smart home system powered by botanical IoT and emotion detection. Mob. Networks Appl. 22, 1159–1169 (2017) 21. A.D.D. Maynard, Navigating the fourth industrial revolution. Nat. Nanotechnol. 10, 1005– 1006 (2015) 22. A. Nordrum, The internet of fewer things [news]. IEEE Spectrum 53, 12–13 (2016) 23. R. Taylor, D. Baron, D. Schmidt, The world in 2025-predictions for the next ten years, in 2015 10th International Microsystems, Packaging, Assembly and Circuits Technology Conference, Taipei, Taiwan (IEEE, 2015) 24. N. Anderson, Urbanism and urbanization. Am. J. Sociol. 65, 68–73 (1959) 25. H. Kumar, M.K. Singh, M. Gupta, J. 
Madaan, Moving towards smart cities: solutions that lead to the smart city transformation framework. Technol. Forecast. Soc. Chang. 153, 119281 (2020) 26. S.H. Lee, J. H. Han, Y. T. Leem, T. Yigitcanlar, Towards ubiquitous city: concept, planning, and experiences in the Republic of Korea, in Knowledge-Based Urban Development: Planning and Applications in the Information Era (2008) 27. Y. Li, A. Liu, in Analysis of the challenges and solutions of building a smart city, Presented at the ICCREM 2013: Construction and Operation in the Context of Sustainability, Germany (2013)
28. H. Arasteh, et al., Iot-based smart cities: a survey, in 2016 IEEE 16th International Conference on Environment and Electrical Engineering, Florence, Italy (IEEE, 2016) 29. S. Chatterjee, A.K. Kar, M. Gupta, Success of IoT in smart cities of India: an empirical analysis. Gov. Inf. Q. 35, 349–361 (2018) 30. M. Castro, A.J. Jara, A.F. Skarmeta, Smart lighting solutions for smart cities, in 2013 27th International Conference on Advanced Information Networking and Applications Workshops, Barcelona, Spain (IEEE, 2013) 31. A.K.K. Sikder, A. Acar, H. Aksu, A.S. Uluagac, K. Akkaya, M. Conti, IoT-enabled smart lighting systems for smart cities, in 2018 IEEE 8th Annual Computing and Communication Workshop and Conference, Las Vegas, NV, USA (IEEE, 2018) 32. Aydin, M. Karakose, E. Karakose, A navigation and reservation based smart parking platform using genetic optimization for smart cities, in 2017 5th International Istanbul Smart Grid and Cities Congress and Fair, Istanbul, Turkey (IEEE, 2017) 33. A. Khanna, R. Anand, IoT based smart parking system, in 2016 International Conference on Internet of Things and Applications, Pune, India (IEEE, 2016) 34. S. Prathibha, A. Hongal, M. Jyothi, IoT based monitoring system in smart agriculture, in 2017 International Conference on Recent Advances in Electronics and Communication Technology, Bangalore, India (IEEE, 2017) 35. H. Sharma, A. Haque, Z.A. Jaffery, Maximization of wireless sensor network lifetime using solar energy harvesting for smart agriculture monitoring. Ad Hoc Netw. 94(1), 101966 (2019) 36. A. Medvedev, P. Fedchenkov, A. Zaslavsky, T. Anagnostopoulos, S. Khoruzhnikov, Waste management as an IoT-enabled service in smart cities, in Internet of Things, Smart Spaces, and Next Generation Networks and Systems (2015) 37. K. Nirde, P.S. Mulay, U.M. Chaskar, IoT based solid waste management system for smart city, in 2017 International Conference on Intelligent Computing and Control Systems, Madurai, India (IEEE, 2007) 38. S. Alawadhi, et al., Building understanding of smart city initiatives, in International Conference on Electronic Government, Kristiansand, Norway (Springer, Berlin, 2012) 39. J. Dutta, S. Roy, IoT-fog-cloud based architecture for smart city: prototype of a smart building, in 2017 7th International Conference on Cloud Computing, Data Science & EngineeringConfluence, Noida, India (IEEE, 2017) 40. Z. Abbasi-Moud, H. Vahdat-Nejad, W. Mansoor, Detecting tourist’s preferences by sentiment analysis in smart cities, in 2019 IEEE Global Conference on Internet of Things, Dubai, United Arab Emirates (IEEE, 2019) 41. Z. Abbasi-Moud, H. Vahdat-Nejad, J. Sadri, Tourism recommendation system based on semantic clustering and sentiment analysis. Expert Syst. Appl. 167, 114324 (2021) 42. D. Mazzola, A. Distefano, Crowdsourcing and the participation process for problem solving: the case of BP, in Proceedings of ItAIS 2010 VII Conference of the Italian Chapter of AIS, Naples, Italy (ItAIS, Napoles, 2010) 43. M. Vukovic, Crowdsourcing for enterprises, in 2009 Congress on Services-I, Los Angeles, CA, USA (IEEE, 2009) 44. E. Estellés-Arolas, F. González-Ladrón-De-Guevara, Towards an integrated crowdsourcing definition. J. Inform. Sci. 38, 189–200 (2012) 45. A. Afuah, C.L. Tucci, Crowdsourcing as a solution to distant search. Acad. Manag. Rev. 37(3), 355–375 (2012) 46. K.D. Giudice, Crowdsourcing credibility: the impact of audience feedback on Web page credibility. Proc. Am. Soc. Inform. Sci. Technol. 47(1), 1–9 (2010) 47. L. Chen, D. Lee, T. 
Milo, Data-driven crowdsourcing: management, mining, and applications, in 2015 IEEE 31st International Conference on Data Engineering, Seoul, Korea (South) (IEEE, 2015) 48. H. Garcia-Molina, M. Joglekar, A. Marcus, A. Parameswaran, V. Verroios, Challenges in data crowdsourcing. IEEE Trans. Knowl. Data Eng. 28, 901–911 (2016) 49. J. Füller, K. Hutter, N. Kröger, Crowdsourcing as a service–from pilot projects to sustainable innovation routines. Int. J. Project Manage. 39, 183–195 (2021)
50. E. Okai, X. Feng, P. Sant, Smart cities survey, in 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems, Exeter, UK (IEEE, 2018) 51. E.P. Trindade, M.P.F. Hinnig, E. Moreira da Costa, J.S. Marques, R.C. Bastos, T. Yigitcanlar, Sustainable development of smart cities: a systematic review of the literature. J. Open Innov. Technol. Market Complexity 3, 11 (2017) 52. Yin, Z. Xiong, H. Chen, J. Wang, D. Cooper, B. David, A literature survey on smart cities. Sci. China Inform. Sci. 58, 1–18 (2015) 53. P.L. Lau et al., A survey of data fusion in smart city applications. Inform. Fusion 52, 357–374 (2019) 54. A. Gharaibeh et al., Smart cities: a survey on data management, security, and enabling technologies. IEEE Commun. Surv. Tutor. 19, 2456–2501 (2017) 55. R. Petrolo, V. Loscri, N. Mitton, Towards a smart city based on cloud of things, a survey on the smart city vision and paradigms. Trans. Emerg. Telecommun. Technol. 28, e2931 (2017) 56. M.-C. Yuen, I. King, K.-S. Leung, A survey of crowdsourcing systems, in 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing (IEEE, 2011), pp. 766–773 57. L. Islam, S.T. Alvi, M.N. Uddin, M. Rahman, Obstacles of mobile crowdsourcing: a survey, in 2019 IEEE Pune Section International Conference, Pune, India (IEEE, 2019) 58. A.I.I. Chittilappilly, L. Chen, S. Amer-Yahia, A survey of general-purpose crowdsourcing techniques. IEEE Trans. Knowl. Data Eng. 28, 2246–2266 (2016) 59. K. Mao, L. Capra, M. Harman, Y. Jia, A survey of the use of crowdsourcing in software engineering. J. Syst. Softw. 126, 57–84 (2017) 60. W. Feng, Z. Yan, H. Zhang, K. Zeng, Y. Xiao, Y.T. Hou, A survey on security, privacy, and trust in mobile crowdsourcing. IEEE Internet Things J. 5, 2971–2992 (2017) 61. F. Daniel, P. Kucherbaev, C. Cappiello, B. Benatallah, M. Allahbakhsh, Quality control in crowdsourcing: a survey of quality attributes, assessment techniques, and assurance actions. ACM Comput. Surv. 51, 1–40 (2018) 62. H. Vahdat-Nejad, E. Asani, Z. Mahmoodian, M.H. Mohseni, Context-aware computing for mobile crowd sensing: a survey. Future Gener. Comput. Syst. 99, 321–332 (2019) 63. S.R.B. Gummidi, X. Xie, T.B. Pedersen, A survey of spatial crowdsourcing. ACM Trans. Database Syst. 44, 1–46 (2019) 64. Y. Tong, Z. Zhou, Y. Zeng, L. Chen, C. Shahabi, Spatial crowdsourcing: a survey. VLDB J. 29, 217–250 (2020) 65. T. Kandappu, A. Misra, D. Koh, R. D. Tandriansyah, N. Jaiman, A feasibility study on crowdsourcing to monitor municipal resources in smart cities, in Companion Proceedings of the the Web Conference 2018, France (2018) 66. X. Kong, X. Liu, B. Jedari, M. Li, L. Wan, F. Xia, Mobile crowdsourcing in smart cities: technologies, applications, and future challenges. IEEE Internet Things J. 6(5), 8095–8113 (2019) 67. Y. Sermet, P. Villanueva, M.A. Sit, I. Demir, Crowdsourced approaches for stage measurements at ungauged locations using smartphones. Hydrol. Sci. J. 65, 813–822 (2020) 68. R.-Q. Wang, H. Mao, Y. Wang, C. Rae, W. Shaw, Hyper-resolution monitoring of urban flooding with social media and crowdsourcing data. Comput. Geosci. 111, 139–147 (2018) 69. F. Zeiger, M.F. 
Huber, Demonstration abstract: participatory sensing enabled environmental monitoring in smart cities, in IPSN-14 Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany (IEEE, 2014) 70. Dutta, C. Chowdhury, S. Roy, A. I. Middya, F. Gazi, Towards smart city: sensing air quality in city based on opportunistic crowd-sensing, in Proceedings of the 18th International Conference on Distributed Computing and Networking, Hyderabad, India (ACM Press, 2017) 71. J. Dutta, F. Gazi, S. Roy, C. Chowdhury, AirSense: opportunistic crowd-sensing based air quality monitoring system for smart city, in 2016 IEEE SENSORS, Orlando, FL, USA (IEEE, 2016)
72. M. Zappatore, A. Longo, M.A. Bochicchio, Using mobile crowd sensing for noise monitoring in smart cities, in 2016 International Multidisciplinary Conference on Computer and Energy Science, Split, Croatia (IEEE, 2016) 73. D. Schuurman, B. Baccarne, L. De Marez, P. Mechant, Smart ideas for smart cities: Investigating crowdsourcing for generating and selecting ideas for ICT innovation in a city context. J. Theor. Appl. Electron. Comm. Res. 7, 49–62 (2012) 74. Z. Wang, et al., PublicSense: a crowd sensing platform for public facility management in smart cities, in 2016 International IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress, Toulouse, France (IEEE, 2016) 75. J. Zhang, D. Wang, Duplicate report detection in urban crowdsensing applications for smart city, in 2015 IEEE International Conference on Smart City/SocialCom/SustainCom, Chengdu, China (IEEE, 2015) 76. Huang, X. Wu, D. Wang, Crowdsourcing-based urban anomaly prediction system for smart cities, in Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, Indianapolis, Indiana, USA (ACM Press, 2016) 77. S. Mirri, C. Prandi, P. Salomoni, F. Callegati, A. Campi, On combining crowdsourcing, sensing and open data for an accessible smart city, in 2014 Eighth International Conference on Next Generation Mobile Apps, Services and Technologies, Oxford, UK (IEEE, 2014) 78. B. Guo, H. Chen, Z. Yu, X. Xie, S. Huangfu, D. Zhang, FlierMeet: a mobile crowdsensing system for cross-space public information reposting, tagging, and sharing. IEEE Trans. Mob. Comput. 14, 2020–2033 (2014) 79. X. Chen, N. Liu, Smart parking by mobile crowdsensing. Int. J. Smart Home 10, 219–234 (2016) 80. X. Chen, E. Santos-Neto, M. Ripeanu, Crowdsourcing for on-street smart parking, in Proceedings of the Second ACM International Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications, Paphos, Cyprus (ACM Press, 2012) 81. J. Villanueva, D. Villa, M.J. Santofimia, J. Barba, J.C. Lopez, Crowdsensing smart city parking monitoring, in 2015 IEEE 2nd World Forum on Internet of Things, Milan, Italy (IEEE, 2015) 82. C. Roman, R. Liao, P. Ball, S. Ou, M. de Heaver, Detecting on-street parking spaces in smart cities: Performance evaluation of fixed and mobile sensing systems. IEEE Trans. Intell. Transp. Syst. 19, 2234–2245 (2018) 83. K. Farkas, I. Lendák, Simulation environment for investigating crowd-sensing based urban parking, in 2015 International Conference on Models and Technologies for Intelligent Transportation Systems, Budapest, Hungary (IEEE, 2015) 84. D. Cianciulli, G. Canfora, E. Zimeo, Beacon-based context-aware architecture for crowd sensing public transportation scheduling and user habits, Procedia Comput. Sci. 109, 1110– 1115 (2017) 85. E. Aubry, T. Silverston, A. Lahmadi, O. Festor, CrowdOut: a mobile crowdsourcing service for road safety in digital cities, in 2014 IEEE International Conference on Pervasive Computing and Communication Workshops, Budapest, Hungary (IEEE, 2014) 86. Y.F. Dong, S. Kanhere, C.T. Chou, N. Bulusu, Automatic collection of fuel prices from a network of mobile cameras, in International Conference on Distributed Computing in Sensor Systems, Santorini, Greece (Springer, Berlin, 2008) 87. A. Abdelrahman, A.S. El-Wakeel, A. Noureldin, H.S. Hassanein, Crowdsensing-based personalized dynamic route planning for smart vehicles. 
IEEE Network 34, 216–223 (2020) 88. S. Krishnan, et al., A novel cloud-based crowd sensing approach to context-aware music mood-mapping for drivers, in 2015 IEEE 7th International Conference on Cloud Computing Technology and Science, Vancouver, BC, Canada (IEEE, 2015) 89. J. Li, J. Wu, Y. Zhu, Selecting optimal mobile users for long-term environmental monitoring by crowdsourcing, in 2019 IEEE/ACM 27th International Symposium on Quality of Service, Phoenix, AZ, USA (IEEE, 2019)
90. O. Alvear, C.T. Calafate, J.-C. Cano, P. Manzoni, Crowdsensing in smart cities: overview, platforms, and environment sensing issues. Sensors 18, 460 (2018) 91. L. See, A review of citizen science and crowdsourcing in applications of pluvial flooding. Front. Earth Sci. 7, 44 (2019) 92. J. Huang et al., A crowdsource-based sensing system for monitoring fine-grained air quality in urban environments. IEEE Internet Things J. 6, 3240–3247 (2018) 93. D. Beimborn, T. Miletzki, S. Wenzel, Platform as a service (PaaS). Bus. Inf. Syst. Eng. 3, 381–384 (2011) 94. Y. Liu et al., Internet of things for noise mapping in smart cities: state-of-the-art and future directions. IEEE Network 34, 112–118 (2020) 95. R. Lutsiv, Smart cities: Economic dimensions of their evolution. Herald Ternopil National Econ. Univ. 2, 50–61 (2020) 96. Prandi, P. Salomoni, S. Mirri, mPASS: integrating people sensing and crowdsourcing to map urban accessibility, in 2014 IEEE 11th Consumer Communications and Networking Conference, Las Vegas, NV, USA (IEEE, 2014) 97. Nandan, A. Pursche, X. Zhe, Challenges in crowdsourcing real-time information for public transportation, in 2014 IEEE 15th International Conference on Mobile Data Management, Brisbane, QLD, Australia (IEEE, 2014) 98. X. Wan, H. Ghazzai, Y. Massoud, Mobile crowdsourcing for intelligent transportation systems: real-time navigation in urban areas. IEEE Access 7, 136995–137009 (2019) 99. K. Ali, D. Al-Yaseen, A. Ejaz, T. Javed, H.S. Hassanein, “Crowdits: crowdsourcing in intelligent transportation systems, in 2012 IEEE Wireless Communications and Networking Conference, Paris, France (IEEE, 2012) 100. S. Luke, C. Cioffi-Revilla, L. Panait, K. Sullivan, G. Balan, Mason: a multi-agent simulation environment. SIMULATION 81, 517–527 (2005) 101. M. Haklay, P. Weber, Openstreetmap: User-generated street maps. IEEE Pervasive Comput. 7(4), 12–18 (2008)
Markov Switching Model for Driver Behavior Prediction: Use Cases on Smartphones

Ahmed B. Zaky, Mohamed A. Khamis, and Walid Gomaa
Abstract Several intelligent transportation systems focus on studying various driver behaviors for numerous objectives, including the ability to analyze driver actions, sensitivity, distraction, and response time. As data collection is one of the major concerns for learning and validating different driving situations, we present a driver behavior switching model validated by a low-cost data collection solution using smartphones. The proposed model is validated on a real dataset to predict driver behavior over short time periods. Multiple Markov Switching Vector Auto-Regression (MSVAR) models are implemented to achieve a sophisticated fit to the collected driver behavior data. This yields more accurate predictions not only of driver behavior but of the entire driving situation. The performance of the presented models together with a suitable model selection criterion is also presented. The proposed driver behavior prediction framework can potentially be used in accident prediction and driver safety systems.

Keywords Driver behavior · Markov switching model · Auto-regression model

A. B. Zaky: Faculty of Engineering (Shoubra), Benha University, Cairo 11689, Egypt; Computer Science and Information Technology (CSIT), Egypt-Japan University of Science and Technology (E-JUST), New Borg El-Arab City 21934, Egypt
M. A. Khamis: Cyber-Physical Systems Lab, Egypt-Japan University of Science and Technology (E-JUST), New Borg El-Arab City 21934, Alexandria, Egypt; IME (Data Science), Ejada Systems Ltd., Alexandria, Egypt
W. Gomaa: Cyber-Physical Systems Lab, E-JUST; Faculty of Engineering, Alexandria University, Alexandria 21544, Egypt
1 Introduction

Transportation systems face many challenges. In recent years, the number of vehicles has been increasing, vehicles influence each other more and more, and traffic safety has become a considerable issue; in addition, the reliability and sustainability of transportation systems are important. Drivers receive a lot of attention in research; Advanced Driver Assistance Systems (ADAS) are an example of the extensive work on driving task analysis and support [1]. Systems such as collision avoidance, congestion assistants, speed warning, and obstacle warning systems have been implemented to support drivers in complex driving tasks. The main objective behind such systems is to support autonomous driving vehicle systems. To achieve this goal, the most important factor is to gain more knowledge about driver behavior and describe how drivers act in different driving situations. Developing a driving behavior model that can adapt to different driving situations and cover most driving behaviors is still a challenging task; thus, advanced analytic techniques are crucial for such modeling tasks.

Machine learning (ML) is one of the fastest growing areas of science. It has been used in many applications, e.g., traffic signal control [2, 3], mining information to predict students' academic performance [4], reinforcement learning applications [5], mining and exploration of credit card data [6], machine learning applications during the COVID-19 pandemic [7], and learning analytics studies [8]. ML techniques such as regression models [9], neural networks (NNs) [10], and fuzzy systems [11] have recently been used to model patterns of driving situations. However, such models struggle to understand different driving situations (especially unexpected ones). Driving tasks can be segmented into driving regimes mapped to different driving situations, with a different response for each driver. A driver usually switches between different behaviors such as car following, lane changing, mobile messaging, and sign reading. It is normal to see a driver perform more than one task at the same time, e.g., following a car while switching radio channels or messaging.

In this article, we propose a stochastic model that is suitable for detecting and classifying different driving regimes. Preliminary results of the work presented in this paper have been published in [1, 12]. In this paper, Expectation Maximization (EM) and Markov Chain Monte Carlo (MCMC) are used for estimating the proposed model parameters. Moreover, we calibrate the model for car following driver behavior using our own dataset collected by smartphones (as a low-cost solution for collecting driving data) plus a naturalistic driving dataset presented in [13]. We also present a brief survey of different machine learning models employed for driver behavior and of data collection based on smartphones.

The rest of this paper is organized as follows. Section 2 presents the state-of-the-art literature review; specifically, we focus on motion detection with use cases on driving behavior detection using smartphones. This section also provides the necessary background for the work presented in this paper.
This includes a detailed description of driver behavior models. Section 3 introduces the Markov Switching Vector Auto-Regressive model and Bayesian Gibbs sampling for model parameter estimation. Section 4 describes the data collection process using smartphones, the car following dataset, and the adopted driving behavior model. Section 5 presents the results of using both the data collected using smartphones and the naturalistic driving data. Finally, Sect. 6 concludes the work presented in this article and provides directions for future research.
2 Related Work

2.1 Driver Behavior Models

Recently, machine learning approaches have been proposed for driver behavior modeling. Car following is the most popular behavior for evaluating these approaches. Three model families are mainly used: Hidden Markov Models (HMMs), Gaussian Mixture Models (GMMs), and Piece-Wise Auto-Regressive Exogenous (PWARX) models. These models have achieved remarkable results in simulating driving scenarios. Additionally, they divide each complex driving pattern into sub-patterns using mixture components. The models differ in how they compute the latent classes, the relationship between the observed variables and each class, the estimation of class parameters, and the number of latent classes.
2.1.1 Hidden Markov Models

HMMs have been used for driver behavior modeling in different situations, such as the model implemented in [14], which uses the evolution of sensor data to predict the actual current driving situation; it achieved a prediction accuracy of 80% in recognizing driver behavior from the initial driver movements. In [15], the authors presented a collision warning system based on an HMM. Traffic models based on HMMs have been reviewed in [14, 16].
2.1.2 Gaussian Mixture Models

A stochastic driver behavior modeling framework based on GMMs is presented in [9]. The model calculates the joint probability distribution of a number of driving signals (following distance, vehicle velocity, brake and gas pedal forces, and vehicle dynamics). It implements two GMMs representing the gas and brake pedals and their relation to the follower's velocity and the gap distance. The authors evaluated the model's performance with different numbers of mixtures (4, 8, 16, and 32).
2.1.3 Piece-Wise Auto-Regressive Exogenous Models (PWARX)

PWARX models have been presented in [17, 18] to model human driving behavior as a Hybrid Dynamical System (HDS). The proposed approach switches between simple linear behavior models instead of fitting a single complex non-linear model. The driver behavior recognition model introduced in [17] is a standard HMM extended by embedding an auto-regressive exogenous (ARX) model in each discrete state; the authors introduced a simulation of a collision avoidance system. In [19], a car following model classification approach has been introduced based on PWARX as a segmentation approach and K-means clustering of the input vector. The classification between modes is done using a Support Vector Machine (SVM). PWARX models have achieved impressive results in modeling driver behavior. However, they have two problems [20]: first, the model cannot classify and estimate the behavior simultaneously; second, it is unable to handle probabilistic time-varying data. The Probability-weighted Auto-Regressive (PrARX) model proposed in [21] is an extension of PWARX that addresses these two issues by composing multiple ARX models through a probabilistic weighting function (see the sketch below).
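A minimal two-mode illustration of the PrARX idea follows: the outputs of several linear ARX modes are blended by probabilistic weights instead of being hard-switched as in PWARX. The softmax gate form and all parameter values here are illustrative assumptions, not the exact weighting function of [21].

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def prarx_predict(x, arx_params, gate_params):
    """Blend several linear ARX modes with probabilistic weights.
    x: regressor vector (lagged outputs/inputs plus a bias term)."""
    preds = np.array([theta @ x for theta in arx_params])       # per-mode output
    weights = softmax(np.array([eta @ x for eta in gate_params]))
    return weights @ preds, weights

x = np.array([0.8, -0.1, 1.0])                  # [y_{t-1}, u_{t-1}, 1]
arx = [np.array([0.9, 0.2, 0.0]),               # mode 1: e.g. steady following
       np.array([0.3, 0.8, 0.1])]               # mode 2: e.g. braking
gate = [np.array([2.0, 0.0, 0.0]), np.array([-2.0, 0.0, 0.0])]
print(prarx_predict(x, arx, gate))
```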
2.2 Driving Behavior Detection Using Smartphones

Motion detection in traffic networks (e.g., anomaly detection, car following driver behavior) has gained considerable attention in the last few years. In [1], the authors proposed a Markov regime switching-based model to estimate driver behavior and extract different driving regimes. The proposed model analyzes a sequence of observations of driving time-series data. Trajectory data such as velocity, acceleration, and the space gap between the leader and follower drivers were used in model learning. In [22], the authors proposed a novel system that uses dynamic time warping (DTW) and smartphone sensors (accelerometer, gyroscope, magnetometer, GPS, and video) to detect and record driving style activities.
3 Methodology

In this section, we introduce the proposed model framework, including the model formulation, parameter estimation, and characteristics.
3.1 Markov Switching Vector Auto-Regressive Model

MSVAR [23] is a non-linear model that joins vector auto-regressive models with hidden Markov chain models. The MSVAR model builds a non-linear data model as a piece-wise linear model; this is achieved by modeling the process as linear within each regime. The main objective of such a model is to find the specification of each regime, using certain variables as switches between regimes. Such models use the intercept, the mean, or both as switches. For instance, the model presented in [24] utilizes MSVAR with mean switches to study business cycles using the U.S. GDP series. The major difficulty of using the mean as a switching parameter is the estimation of the switching parameters, due to their interrelation with the latent variable. On the other side, models that use the intercept as switches require less estimation effort; they can be estimated using Monte Carlo methods.

The introduced model is based on a multivariate time series $Y = (y_1, \ldots, y_t)$ consisting of $t$ observations, where $y_t$ represents an $N$-dimensional vector, and on a stochastic process that depends on a latent discrete stochastic process $S_t$ with a discrete state space, whose state variable $s_t$ indicates the dominant regime at time $t$. The reduced form of the model is presented in [23] and is known as MSIAH-VAR(p), as in Eq. (1). The reduced model uses three types of switches: the intercept $A^{(0)}_{s_t}$, the regression coefficients $A^{(i)}_{s_t}$, and the covariance matrix $\Sigma_{s_t}$ of the error term $U_t$:

$$ y_t = A^{(0)}_{s_t} + \sum_{i=1}^{p} A^{(i)}_{s_t} \, y_{t-i} + U_t \qquad (1) $$

where $U_t \mid s_t \sim \text{i.i.d.}\; N(0, \Sigma_{s_t})$. The state variable $s_t$ evolves over time as a discrete-time, discrete-space Markov process, taking values $s_t \in \{1, \ldots, M\}$, where $M$ represents the number of regimes. The conditional probability density of the observed time-series vector $y_t$ is given by Eq. (2), where $\theta_M$ is the VAR model parameter vector for regime $M$ and $Y_{t-1}$ denotes the observations from time $T = 0$ to time $T = t - 1$:

$$ p(y_t \mid Y_{t-1}, s_t) = \begin{cases} f(y_t \mid Y_{t-1}, \theta_1) & \text{if } s_t = 1, \\ \quad \vdots \\ f(y_t \mid Y_{t-1}, \theta_M) & \text{if } s_t = M. \end{cases} \qquad (2) $$

The stochastic transition of states is determined by a Markov transition matrix $P$, where $p_{i,j} = \Pr(s_t = i \mid s_{t-1} = j)$ is the probability of switching from state $j$ to state $i$ and $\sum_{i=1}^{M} p_{i,j} = 1$. The model parameter state vector is defined as:

$$ \theta = \{A^{(0)}_1, A^{(0)}_2, \ldots, A^{(0)}_M, A^{(1)}_1, A^{(1)}_2, \ldots, A^{(1)}_M, \Sigma_1, \Sigma_2, \ldots, \Sigma_M, p_{i,j}\}. \qquad (3) $$
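To make Eqs. (1)-(3) concrete, the sketch below simulates a univariate two-regime special case of the model (an MS-AR(1)); the intercepts, coefficients, variances, and transition matrix are hand-picked illustrative values. Note that the rows of `P` index the current state here, the transpose of the $p_{i,j}$ convention in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-regime MS-AR(1): y_t = a0[s] + a1[s] * y_{t-1} + u_t, u_t ~ N(0, sig[s]^2)
a0 = np.array([0.0, 2.0])        # regime-dependent intercepts A^(0)
a1 = np.array([0.9, 0.2])        # regime-dependent AR coefficients A^(1)
sig = np.array([0.1, 0.5])       # regime-dependent noise std deviations
P = np.array([[0.95, 0.05],      # P[i, j] = Pr(s_t = j | s_{t-1} = i)
              [0.10, 0.90]])

T = 500
s = np.zeros(T, dtype=int)
y = np.zeros(T)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])          # draw the next regime
    y[t] = a0[s[t]] + a1[s[t]] * y[t - 1] + rng.normal(0.0, sig[s[t]])

print("fraction of time in regime 1:", (s == 1).mean())
```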
This model parameter state vector can be estimated by maximum likelihood as in Eq. (4). The estimation is performed through the Expectation Maximization (EM) algorithm presented in [25]. EM iteratively calculates the next state vector $\theta_{t+1|t}$ given the previous observations and the previous state vector, using the log-likelihood function of the data. The algorithm proceeds in two steps: an expectation step and a maximization step. The expectation step uses the parameters estimated in the previous maximization step to compute the filtered probability vectors. The likelihood is proportional to the probability of observing the data given the estimated parameters. The maximization of the log-likelihood in Eq. (4) can be used as an objective for parameter estimation and for comparing different model-fitting schemes:

$$ \ln L = \sum_{t=1}^{T} \ln \sum_{j=1}^{K} P(y_t \mid s_t = j, \theta) \cdot P(s_t = j). \qquad (4) $$
Three main tasks provide information about the different driving regimes: regime inference, regime classification, and regime expected duration.
3.1.1 Regime Inference

The objective of the regime inference process is the identification of the latent regime variable $s_t$ from the observations $y_t$. This process requires two main steps: filtering and smoothing.

Filtered probability estimates: The filter step aims at estimating $\xi_{jt}$, the probability of the unobserved state vector $s_t$. The probability of being under regime $j$ at time $t$ given the model parameters is given by Eq. (5):

$$ \xi_{jt} = P(s_t = j \mid \Omega_t; \theta) \qquad (5) $$

where $\Omega_t = \{y_t, y_{t-1}, \ldots, y_2, y_1\}$ is the sequence of observations over time and $\theta$ is the population parameter vector

$$ \theta = \{\sigma_1, \sigma_2, \ldots, \sigma_k, c_1, c_2, \ldots, c_k, \phi_1, \phi_2, \ldots, \phi_k, p_{i,j}\} $$

with the constraints that $\xi_{jt} \geq 0$ and $\sum_{j} \xi_{jt} = 1$.

Smoothing: The filter step generates estimates of the state $s_t$ for $t = 1, \ldots, T$ using the observations up to time $t$. The smoothing task improves regime inference by taking into consideration the future observations $t+1, \ldots, T$. The smoothed probability is defined as $P(s_t \mid y_{1,\ldots,T})$. The smoothing algorithm is a backward filter.
It starts from the last observation point $t = T$: the algorithm first calculates the smoothed probability $P(s_T \mid y_T)$ and then iterates backward to $t = 1$, as follows:

I. The smoothed probability at time $t$ is decomposed over the next state:

$$ P(s_t \mid y_T) = \sum_{s_{t+1}} P(s_t, s_{t+1} \mid y_T) = \sum_{s_{t+1}} P(s_t \mid s_{t+1}, y_T) \times P(s_{t+1} \mid y_T). $$

II. According to the Markovian assumption, $s_{t+1}$ depends only on $s_t$:

$$ P(s_t \mid s_{t+1}, y_T) = P(s_t \mid s_{t+1}, y_t, y_{t+1,\ldots,T}) = \frac{P(y_{t+1,\ldots,T} \mid s_t, s_{t+1}, y_t) \times P(s_t \mid s_{t+1}, y_t)}{P(y_{t+1,\ldots,T} \mid s_{t+1}, y_t)} = P(s_t \mid s_{t+1}, y_t). $$

III. This term is computed from the filtered and transition probabilities:

$$ P(s_t \mid s_{t+1}, y_T) = \frac{P(s_{t+1} \mid s_t, y_t) \times P(s_t \mid y_t)}{P(s_{t+1} \mid y_t)} = \frac{P(s_{t+1} \mid s_t) \times P(s_t \mid y_t)}{P(s_{t+1} \mid y_t)}. $$

IV. The recursion is initialized with the final filtered probability vector $P(s_T \mid y_T)$. The following equation shows how the future observations $y_{t+1,\ldots,T}$ improve the inference of the unobserved state $s_t$:

$$ P(s_t \mid y_T) = \sum_{s_{t+1}=1}^{M} \frac{P(s_{t+1} \mid s_t) \times P(s_t \mid y_t)}{P(s_{t+1} \mid y_t)} \times P(s_{t+1} \mid y_T). $$
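These recursions translate almost line-for-line into code. The sketch below is a generic Hamilton filter followed by the backward (Kim-style) smoother of steps I-IV; `emission_probs[t, j]`, standing for $f(y_t \mid Y_{t-1}, \theta_j)$, is assumed to be computed elsewhere, and the row-stochastic transition convention used here is the transpose of the $p_{i,j}$ in the text.

```python
import numpy as np

def hamilton_filter(emission_probs, P, pi0):
    """Forward pass: filtered probabilities P(s_t = j | y_1..t).
    emission_probs: (T, M) array of f(y_t | Y_{t-1}, theta_j).
    P: (M, M) transition matrix with P[i, j] = Pr(s_t = j | s_{t-1} = i).
    pi0: (M,) initial state distribution."""
    T, M = emission_probs.shape
    filtered = np.zeros((T, M))
    predicted = np.zeros((T, M))
    loglik = 0.0
    prev = pi0
    for t in range(T):
        predicted[t] = prev @ P                 # P(s_t | y_1..t-1)
        joint = predicted[t] * emission_probs[t]
        norm = joint.sum()                      # likelihood contribution
        loglik += np.log(norm)
        filtered[t] = joint / norm
        prev = filtered[t]
    return filtered, predicted, loglik

def kim_smoother(filtered, predicted, P):
    """Backward pass: smoothed probabilities P(s_t = j | y_1..T)."""
    T, M = filtered.shape
    smoothed = np.zeros((T, M))
    smoothed[-1] = filtered[-1]                 # initialization (step IV)
    for t in range(T - 2, -1, -1):
        ratio = smoothed[t + 1] / predicted[t + 1]
        smoothed[t] = filtered[t] * (P @ ratio)
    return smoothed

# Tiny demo with a synthetic 2-regime emission matrix
em = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.6]])
P = np.array([[0.9, 0.1], [0.2, 0.8]])
f, pred, ll = hamilton_filter(em, P, np.array([0.5, 0.5]))
sm = kim_smoother(f, pred, P)
print(sm.argmax(axis=1))   # regime classification as introduced next
```

The `loglik` accumulated in the forward pass corresponds to the log-likelihood of Eq. (4), and `smoothed.argmax(axis=1)` is exactly the classification rule described in the next subsection.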
3.1.2 Regime Classification

The classification of regimes begins by assigning each observation $y_t$ to a regime. This is achieved by mapping each observation to the winning regime, i.e., the one with the highest smoothed probability, as in Eq. (6):

$$ \hat{s}_t = \underset{m \in \{1,\ldots,M\}}{\arg\max}\; P(s_t = m \mid y_T). \qquad (6) $$

The smoothed regime probabilities are calculated over the dataset, and each observation is then assigned to the regime with the highest smoothed probability.
3.1.3 Regime Expected Duration

The expected length of stay in a specific regime (state $i$) can be derived from the regime transition matrix, using the probability $p_{ii}$ of staying in the same regime. Let $D_i$ be the time period during which the system stays in regime $i$. Equation (7) gives the probability of staying $k$ time periods in regime $i$, and Eq. (8) gives the expected duration. According to Eq. (8), the expected duration depends only on the transition probability $p_{ii}$ of the same regime, so it remains constant over time; the higher the transition probability, the longer the stay in the regime.

$$ P(D_i = k) = p_{ii}^{\,k-1} (1 - p_{ii}). \qquad (7) $$

$$ E(D_i) = \sum_{k=1}^{\infty} k \times P(D_i = k) = \frac{1}{1 - p_{ii}}. \qquad (8) $$
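Under Eq. (8), the expected regime durations come straight from the diagonal of the transition matrix; a one-line check with an illustrative matrix:

```python
import numpy as np

P = np.array([[0.95, 0.05],   # illustrative transition matrix
              [0.10, 0.90]])
expected_durations = 1.0 / (1.0 - np.diag(P))
print(expected_durations)     # [20. 10.] time steps in regimes 1 and 2
```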
3.1.4 Gibbs Bayesian Parameter Estimation
The estimation of the MSVAR model is a difficult task. Parameter estimation can be done easily for Bayesian models having known closed-form posterior distributions. The main objective of our proposed model is to adopt MSVAR in driver behavior modeling, where we cannot determine the prior distributions of the model parameters directly. MCMC can be used for finding the posterior distribution of the model parameters, owing to its ability to generate samples from the posterior distribution. The estimation of the proposed MSVAR model has been implemented based on the Gibbs sampler for posterior distribution sampling. The Gibbs sampler is a popular and efficient MCMC sampling algorithm [26]. It resembles the component-wise Metropolis Hastings (MH) implementation in sampling each dimension; however, instead of sampling each dimension from an independent proposal distribution, Gibbs samples each variable from its full conditional distribution $P(y_j \mid y_{-j}) = P(y_j \mid y_1, y_2, \ldots, y_{j-1}, y_{j+1}, \ldots, y_n)$. The algorithm accepts all drawn samples; thus Gibbs has lower computational requirements and converges faster [27]. Like the component-wise implementation, the Gibbs algorithm samples each variable in turn while the other variables are held fixed.
If the target conditional distribution belongs to a standard distribution family, then sampling can be done directly from that distribution; otherwise, the Metropolis–Hastings algorithm can be used to sample the target distribution. The Bayesian parameter estimation approach assumes that both the regime S and the model parameters θ are random variables. The Gibbs sampler can be used for sampling the parameters of the posterior distributions: it draws samples of the latent states and samples the model parameters from the full conditional distribution. The sampler starts by sampling θ^i from p(θ | S^{i−1}, T), where T is the observed data, and then samples S^i from p(S | θ^i, T). The prior specification of the state-space sampling for a known number of states is followed (as presented in [28]), where standard distribution families are selected for the model parameters and the model parameters are conditionally independent. The prior distributions are specified as follows:

• The joint transition probabilities use an independent Dirichlet prior Dir(1, 1, 1, …, 1) for each state.
• Each regression coefficient mean μ_i has an independent Gaussian prior.
• Each regression coefficient standard deviation has a gamma prior.

Under this prior specification, the full state conditional distribution follows a Dirichlet distribution as in Eq. (9), where I{·} is an indicator function indicating the current state:

$$p(s_{1,\ldots,k} \mid \theta, y) \sim \mathrm{Dir}(I\{s_1 = 1\} + 1, I\{s_2 = 1\} + 1, \ldots, I\{s_k = 1\} + 1). \quad (9)$$

The joint transition probabilities p_{ij} for each state have a full conditional distribution that follows a Dirichlet distribution as in Eq. (10), where n_{ij} is the number of transitions from state i to state j:

$$p(p_{1,1,\ldots,k} \mid \theta, y, s) \sim \mathrm{Dir}(n_{11} + 1, n_{12} + 1, \ldots, n_{1k} + 1). \quad (10)$$
The Gibbs sampler iterates over two steps: Step (a) updates the parameters, and Step (b) revises the Markov chain, as follows:

• Step a
– Update the mean by sampling from its Gaussian full conditional.
– Update the standard deviation from its gamma full conditional.
– Update the transition probabilities for each state independently by sampling from the distribution in Eq. (10).
– Update p(s_{1,…,k} | θ, y) from the distribution in Eq. (9).

• Step b
– Update the filtered probability P(s_t = j | y, θ).
– Update the transition probabilities P(s_t = j | s_{t−1} = i).
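To make the two-step iteration concrete, the sketch below implements one Gibbs sweep for a deliberately simplified scalar-Gaussian switching model: fixed standard deviations, flat priors folded into the draws, and a single-site state update instead of the filtered-path update. It illustrates the Dirichlet full conditionals of Eqs. (9)–(10); it is not the chapter's full MSVAR sampler, and all names are ours:

```python
import numpy as np

def gibbs_sweep(y, s, mu, sigma, P, rng):
    """One sweep of a simplified Gibbs sampler for an M-regime
    scalar-Gaussian switching model.

    y : (T,) observations          s     : (T,) integer states in 0..M-1
    mu: (M,) regime means          sigma : (M,) fixed regime std devs
    P : (M, M) row-stochastic transition matrix (updated in place)
    """
    M, T = P.shape[0], len(y)
    # Step a: transition rows | state path, Dirichlet full conditional
    # as in Eq. (10): Dir(n_i1 + 1, ..., n_iM + 1).
    counts = np.zeros((M, M))
    for t in range(1, T):
        counts[s[t - 1], s[t]] += 1
    for i in range(M):
        P[i] = rng.dirichlet(counts[i] + 1.0)
    # Step a (cont.): regime means | states (flat prior, so the full
    # conditional is Normal around the within-regime sample mean).
    for m in range(M):
        ym = y[s == m]
        if ym.size:
            mu[m] = rng.normal(ym.mean(), sigma[m] / np.sqrt(ym.size))
    # Step b: states | parameters, one observation at a time;
    # p(s_t=k | ...) is proportional to P[s_{t-1},k] * lik(k) * P[k,s_{t+1}].
    for t in range(T):
        left = P[s[t - 1]] if t > 0 else np.full(M, 1.0 / M)
        right = P[:, s[t + 1]] if t < T - 1 else np.ones(M)
        lik = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / sigma
        w = left * right * lik
        s[t] = rng.choice(M, p=w / w.sum())
    return s, mu, P
```

In the chapter's sampler, Step b instead updates the filtered probabilities (Hamilton filter), and the Gaussian/gamma priors from the bullet list above replace the flat-prior draws.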
4 Experimentation

4.1 Car Following Data Set

The Robert Bosch GmbH Research Group [13] floating car dataset (FCD) is used to validate our model. This dataset represents the car following behavior of vehicle speed under stop-and-go traffic conditions during an afternoon peak on a single lane in Stuttgart, Germany. A car with a frontal radar sensor based on Doppler ultrasound is used to measure the relative speed and distance between a leader and a follower driver. The datasets are sampled at 100 ms with durations of 250, 400, and 300 s. Dataset 1's gap distance, speed, and acceleration are shown in Fig. 1. These datasets contain complex situations from daily urban traffic, with many acceleration and deceleration periods. Due to the presence of traffic lights in the recorded scenario, there are some standstill periods. The velocity varies in the range between 0 and 60 km/h. These datasets are used in modeling, evaluating, and calibrating car
following models such as the Intelligent Driver Model (IDM) in [29], neural network models [10], and state-space models [30].

Fig. 1 Dataset_1 gap distance, velocity difference, and acceleration
4.2 Data Collection

Driving data collection is a complex task. Most vehicle data collection experiments consist of high-quality recording of driver behavior. Many sensors have to be fitted in the vehicle in order to record the various driver behavior signals; depending on the objective of the study, these can include microphones, video cameras, steering wheel angle, gas pedal, brake pedal, GPS, speed, acceleration, and heart rate sensors. Collecting driving behavior with this procedure has a high cost. Thus, a low-cost solution using smartphones to collect car following behavior data has been introduced. Sensor data from both follower and leader vehicles is highly beneficial and is accordingly used to fit the proposed model (using iPhone 6 and 6 Plus smartphones). We converted the GPS latitude and longitude to the actual distance between the two vehicles, considering the spherical shape of the earth, using the Haversine formula [31]; a minimal sketch of this conversion follows the list below. The driving experiment has the following characteristics:

• There are follower and leader drivers with a predefined set of behaviors.
• Every driver has one smartphone in his vehicle.
• Every smartphone runs the SensorLog application (an application for logging sensor data).
• All sensors are logged, e.g., GPS, accelerometer, gyroscope, compass (location heading).
• The sampling rate of the GPS is 1 Hz (1 sample per second).
• The sampling rate of the accelerometer is 100 Hz.
• We depend on GPS samples for localization. We obtain distances, velocities, and accelerations from GPS readings. The accelerometer does not readily provide acceleration data since it is relative to free fall.
• Velocity is calculated by SensorLog (using 2 consecutive GPS readings).
• Acceleration is calculated from the velocity difference.
• We need to estimate, and accordingly classify, the follower velocity based on follower acceleration, velocity difference, and gap distance.
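The sketch below shows the Haversine conversion referred to above; the function name and the use of a 6,371 km mean earth radius are our own choices:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000.0):
    """Great-circle distance in meters between two GPS fixes,
    treating the earth as a sphere of radius r (Haversine formula)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))
```

Applied to consecutive fixes of the leader and follower phones, this yields the gap distance h_t used below.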
4.3 Adopted Driving Behavior Model

The MSVAR model presented in Eq. (1) is adopted using driving signals that represent car following behavior. The signals used are presented in Eq. (11), where the observation vector consists of four observation signals y_t = [v_t, a_t, Δv_t, h_t]. These signals represent the car following driver behavior, where v_t is the follower velocity,
a_t is the follower acceleration, Δv_t is the difference in velocity between the leader and follower, and h_t is the gap distance between the leader and follower. The model produces n-step-ahead forecasts, which can be evaluated using the conditional mean ŷ_{t+n} and the mean square prediction error (MSPE). The objective is to find the conditional density of y_{t+n} given the model parameters and the information set Ω_t. The prediction density given in Eq. (12) is a mixture of Gaussians, where p(y_{t+n} | s_{t+n} = j, Ω_t) is weighted by the probability of each predicted regime.
$$v_t = \begin{cases} A_{1,1}^{(0)} + A_{1,1}^{(1)} v_{t-1} + \cdots + A_{1,1}^{(p)} v_{t-p} + U_{1,1} & \text{if } s_t = 1,\\ \quad\vdots\\ A_{1,M}^{(0)} + A_{1,M}^{(1)} v_{t-1} + \cdots + A_{1,M}^{(p)} v_{t-p} + U_{1,M} & \text{if } s_t = M. \end{cases}$$

$$a_t = \begin{cases} A_{2,1}^{(0)} + A_{2,1}^{(1)} a_{t-1} + \cdots + A_{2,1}^{(p)} a_{t-p} + U_{2,1} & \text{if } s_t = 1,\\ \quad\vdots\\ A_{2,M}^{(0)} + A_{2,M}^{(1)} a_{t-1} + \cdots + A_{2,M}^{(p)} a_{t-p} + U_{2,M} & \text{if } s_t = M. \end{cases}$$

$$\Delta v_t = \begin{cases} A_{3,1}^{(0)} + A_{3,1}^{(1)} \Delta v_{t-1} + \cdots + A_{3,1}^{(p)} \Delta v_{t-p} + U_{3,1} & \text{if } s_t = 1,\\ \quad\vdots\\ A_{3,M}^{(0)} + A_{3,M}^{(1)} \Delta v_{t-1} + \cdots + A_{3,M}^{(p)} \Delta v_{t-p} + U_{3,M} & \text{if } s_t = M. \end{cases}$$

$$h_t = \begin{cases} A_{4,1}^{(0)} + A_{4,1}^{(1)} h_{t-1} + \cdots + A_{4,1}^{(p)} h_{t-p} + U_{4,1} & \text{if } s_t = 1,\\ \quad\vdots\\ A_{4,M}^{(0)} + A_{4,M}^{(1)} h_{t-1} + \cdots + A_{4,M}^{(p)} h_{t-p} + U_{4,M} & \text{if } s_t = M. \end{cases} \quad (11)$$

$$p(y_{t+n} \mid \Omega_t) = \sum_{j=1}^{M} \Pr(s_{t+n} = j \mid \Omega_t) \times p(y_{t+n} \mid s_{t+n} = j, \Omega_t). \quad (12)$$

$$\Pr(s_{t+n} = j \mid \Omega_t) = \sum_{i=1}^{M} \Pr(s_{t+n} = j \mid s_t = i) \times \Pr(s_t = i \mid \Omega_t). \quad (13)$$
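Equations (12)–(13) can be sketched as follows, assuming a row-stochastic transition matrix and per-regime conditional densities supplied as callables; all names are illustrative:

```python
import numpy as np

def regime_forecast(filtered_t, P, n):
    """Eq. (13): Pr(s_{t+n} = j | Omega_t), obtained by pushing the
    current filtered regime probabilities n steps through P."""
    probs = np.asarray(filtered_t, dtype=float)
    for _ in range(n):
        probs = probs @ P
    return probs

def prediction_density(y, probs, cond_pdfs):
    """Eq. (12): mixture prediction density, where cond_pdfs[j](y)
    plays the role of p(y_{t+n} | s_{t+n} = j, Omega_t)."""
    return sum(w * pdf(y) for w, pdf in zip(probs, cond_pdfs))
```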
5 Results and Discussion

5.1 Results of Data Collected Using Smartphones

The data collected from the smartphones have been fitted using the Markov regime switching model presented in Eq. (14), where the driving signals selected as features are: follower velocity v, acceleration a, velocity difference dv, and gap distance h. The model was introduced in our previous work [1], together with its ability to classify
the different car following driving regimes:

$$v_{t+1} = \phi_{1,s_t}\, a_t + \phi_{2,s_t}\, dv_t + \phi_{3,s_t}\, h_t + \epsilon_t. \quad (14)$$

The estimated model parameters σ_sk, φ_1, φ_2, and φ_3 are presented in Table 2 for each regime, where the log-likelihood approach is used. The estimated Markov transition matrix p, where P(s_t = i | s_{t−1} = j) is the probability of moving from driving regime j to driving regime i, is:

$$p = \begin{pmatrix} 0.94 & 0.05 & 0.01 & 0.00\\ 0.00 & 0.85 & 0.00 & 0.01\\ 0.00 & 0.00 & 0.81 & 0.51\\ 0.06 & 0.10 & 0.18 & 0.48 \end{pmatrix}$$
Table 1 presents the observed information for each driving regime in the dataset. We can observe the expected duration for which a driver stays in each regime, based on Eq. (8); the driver will drive in that regime for a time around the expected duration. Other regime characteristics are shown, such as the number of occurrences, the number of observations belonging to each regime, and the percentage of overall time spent driving in each regime. We have conducted several trials, and due to the noise of the sensors, especially the GPS (which has up to 4 m error with a sampling rate of 1 sample per second), our experiment is limited to driving in only three car following situations: acceleration, braking, and normal following. A manual tagging of the car following situations was done.

Table 1 Driving regimes contained in the adopted dataset

| State | Expected duration (ms) | Occurrence | Observations | Percentage (%) |
|---|---|---|---|---|
| Regime 1 | 17.50 | 4 | 276 | 67 |
| Regime 2 | 6.79 | 8 | 42 | 10 |
| Regime 3 | 5.34 | 10 | 34 | 8.25 |
| Regime 4 | 1.91 | 11 | 60 | 14.5 |
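As a quick consistency check (ours, not from the chapter), the expected durations in Table 1 can be approximately recovered from the diagonal of the transition matrix p reported above via Eq. (8):

```python
import numpy as np

p_hat = np.array([[0.94, 0.05, 0.01, 0.00],
                  [0.00, 0.85, 0.00, 0.01],
                  [0.00, 0.00, 0.81, 0.51],
                  [0.06, 0.10, 0.18, 0.48]])

# 1/(1 - p_ii) for p_ii = 0.94, 0.85, 0.81, 0.48 gives roughly
# [16.7, 6.7, 5.3, 1.9], close to Table 1's 17.50, 6.79, 5.34, 1.91;
# the small gaps come from rounding in the printed matrix.
durations = 1.0 / (1.0 - np.diag(p_hat))
print(durations)
```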
Table 2 Estimates of car following Markov regime switching model parameters

| Parameter | Regime 1 | Regime 2 | Regime 3 | Regime 4 |
|---|---|---|---|---|
| σ_sk | 1.3346 | 1.2511 | 1.6234 | 1.9769 |
| φ1 | 1.2163 | −0.4069 | 0.0671 | −1.5901 |
| φ2 | 1.5228 | 1.2641 | 0.3858 | 0.9247 |
| φ3 | 0.6563 | 0.5074 | 0.2625 | 0.3031 |
The interpretation of the results is as follows:

• Tag 0 is the acceleration behavior, which the model classifies as Regime 2.
• Tag 2 is the braking behavior, which the model classifies as Regime 3.
• Tag 4 is the stable following behavior, which the model classifies as Regime 1.

The results comply with the labels recorded by the follower driver; out of 412 records (seconds):

• 320 records are tagged as label 4 (stable following). The model classifies 276 observations correctly, with 44 misclassified, i.e., an accuracy of 86.25%.
• 49 records are tagged as label 0 (acceleration). The model classifies 42 observations correctly, with 7 misclassified, i.e., an accuracy of 85.42%.
• 43 records are tagged as label 2 (braking). The model classifies 34 observations correctly, with 9 misclassified, i.e., an accuracy of 79.07%.
5.2 Results of the Naturalistic Driving Data

We have implemented different MSVAR models, with the number of regimes varying from 2 to 6 and the time lag varying from 1 to 5. The maximum log-likelihood values are shown in Table 3 for the different lags and regimes. As shown in Table 3, for a lag of 1 the best fit is the 5-regime model, which has the maximum likelihood value in that column. Using lags can be useful for modeling the driver sensitivity factor, which is present in all GM models [32]. The following driver responds to the leading driver's action by accelerating or decelerating, depending on the driver's perception, reaction time, and sensitivity factors. The average reaction time is estimated to be in the range between 1.0 and 2.2 s, and the average driver sensitivity factor is 0.37 s, as introduced in [33].

Most VAR models are estimated using symmetric lags, i.e., the same lag length is used for all variables of the model. The model lag length P = 1, ..., p can be determined based on a specific selection criterion. Model selection criteria such as the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Hannan–Quinn Information Criterion (HQC) are used; the selection of an inappropriate lag length may affect the model's performance and fit. (A sketch of these criteria follows Table 3.)

Table 3 MSVAR log-likelihood values for Dataset_1

| Regime | Lag 1 | Lag 2 | Lag 3 | Lag 4 | Lag 5 |
|---|---|---|---|---|---|
| 2 | 10,036.68 | 10,642.67 | 10,739.58 | 11,057.45 | 11,090.25 |
| 3 | 9,908.593 | 10,869.07 | 10,992 | 10,837.91 | 10,889.22 |
| 4 | 10,151.84 | 10,400.08 | 10,981.64 | 10,515.56 | 10,499.64 |
| 5 | 10,377.99 | 10,244 | 10,359.58 | 10,341.47 | 10,005.32 |
| 6 | 10,191.81 | 9,348.61 | 9,985.96 | 10,054.52 | 10,119.44 |
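For reference, the three selection criteria named above have standard closed forms; a small sketch (a helper of our own, with k the number of free parameters and T the sample size):

```python
import numpy as np

def info_criteria(loglik, k, T):
    """Standard AIC, BIC, and HQC from a fitted model's maximum
    log-likelihood; lower values indicate a better lag choice."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * np.log(T)
    hqc = -2.0 * loglik + 2.0 * k * np.log(np.log(T))
    return aic, bic, hqc
```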
The results of applying each criterion to the adopted dataset are shown in Fig. 2. The figure shows the results for lags between 1 and 60, which helps us exploit the history of the driving situation in the model. The dataset is sampled at 10 Hz, so a lag of 1 means 0.1 s, while a lag of 60 means 6 s. The minimum values of the three selection criteria fall between lags 5 and 13 (0.5 and 1.3 s), which represents the range of driver reaction times. The MSVAR models presented in Table 3 are used for fitting the naturalistic driving data. The presented results are for the dataset containing 2529 observations. As shown in the table, the best overall fit is the model with a lag of 5 and 2 regimes. The lag selection described above uses 2520 observations; the remaining 9 observations have been used for forecasting evaluation.

The forecasting results based on Eq. (12) represent a comparison of the prediction performance of the two models with the highest maximum log-likelihood: Model I has 1 lag and 5 regimes (p = 1, r = 5), while Model II has 5 lags and 2 regimes (p = 5, r = 2). Table 4 shows a comparison of the Mean Square Error (MSE) of the observation vector for the two models. The error is calculated between the dataset's observed values and the predicted values for 9 samples (0.9 s).
Fig. 2 MSVAR(P) model selection criteria: (a) AIC values for lags between 1 and 60; (b) BIC values for lags between 1 and 20; (c) HQ values for lags between 1 and 20
Table 4 Mean square error of the two MSVAR models

| Model | a | v | Δv | h |
|---|---|---|---|---|
| I (p = 1, r = 5) | 0.1788184 | 0.013395644 | 0.05679842 | 0.02309714 |
| II (p = 5, r = 2) | 0.1481768 | 0.010264159 | 0.05339007 | 0.01554496 |
As shown, Model II has a lower MSE for all observation vector elements. Figure 3 presents the predictions of the two models at each point: the red points represent the naturalistic real driving data samples, the blue points Model I, and the green points Model II. Both models are accurate for the first forecasting steps; afterwards, the models start deviating due to the accumulated error of the forecasting process (as shown in the velocity and velocity difference panels). The models are able to predict not only the driver behavior represented by its own observations (velocity and acceleration), but also the entire driving situation represented by the relation with the leading vehicle's observations (gap distance and velocity difference).
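The Table 4 comparison amounts to a column-wise MSE over the 9 held-out samples; a sketch with placeholder `y_true`/`y_pred` arrays (the real values come from the dataset and the fitted models):

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.normal(size=(9, 4))   # placeholder held-out observations [v, a, dv, h]
y_pred = rng.normal(size=(9, 4))   # placeholder model forecasts
mse = np.mean((y_true - y_pred) ** 2, axis=0)  # one MSE per signal
```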
5.3 MSVAR Versus PrARX

The proposed MSVAR framework, as a switching linear regression model for driving behavior, has the features listed in Table 5; the table also lists the features of PrARX, a switching framework based on linear regression. Both frameworks are able to provide behavior/mode segmentation extracted from driving signals. PrARX uses k-means clustering, while the proposed framework uses a probabilistic classifier based on selecting the maximum state filtering value for each observation. The probabilistic classification approach is able to identify the membership probability of a new observation for each state (it finds the state that represents the observation with the highest probability). The advantage of a probabilistic classifier over a non-probabilistic one is that the former behaves like a confidence-weighted classifier, which helps avoid error propagation. The classifier adds value for the driving behavior problem by allowing a smooth transition between regimes, enabling a mixed-mode representation for understanding the behaviors the driver may currently exhibit.

The learning process of PrARX has two stages, behavior classification and parameter estimation; the PrARX framework cannot simultaneously classify and estimate. The proposed framework classifies and estimates in a single recursive process, using the Hamilton filter for classification and maximum likelihood for parameter estimation. The computational cost of PrARX is higher due to processing classification and estimation independently. For the proposed model, Bayesian parameter estimation based on MCMC minimizes computational time over Expectation Maximization (EM).
Fig. 3 MSVAR model predicted values of car following observations: (a) DVP, (b) HP, (c) VP
Table 5 Main features of the PrARX model and the proposed MSVAR framework

| Feature | PrARX | Proposed MSVAR framework |
|---|---|---|
| Behavior (mode) segmentation | Extension to k-means clustering | Probabilistic classification |
| Learning process | Cannot simultaneously classify and estimate | Classifies and estimates in a single recursive process |
| Parameter estimation method | Steepest descent | ML (Hamilton) and Bayesian (Gibbs sampling) |
| Computational cost | High, due to processing classification and estimation independently | MCMC minimizes computational time |
| Assumptions | Human driving behavior does not exhibit abrupt change | Abnormal behavior detected; can capture short-time events; behavior and switching processes are stationary |
| Mixed mode | Can | Can |
As future work, performance measurements for evaluating the framework will be conducted. PrARX makes a major assumption to relax the parameter estimation problem: that human driving behavior does not exhibit abrupt changes, which allows the framework to use the parameters of the previous behavior as initial values for the next one. The proposed MSVAR framework does not make this assumption, as its parameter estimation process generates independent parameters for each regime, which allows the framework to handle abnormal behavior. The proposed MSVAR framework does assume that driver behavior is stationary, so the stochastic process representing each regime and the switching process (transition matrix) keep the same parameters over time. Relaxing this assumption would require time-varying transition probabilities and more complex stochastic models for driver behavior modeling, such as Gaussian Mixture Models (GMM).
6 Conclusions and Future Work

Markov regime-switching models have been implemented to capture, classify, and predict driver behavior patterns in real traffic. Car following behavior regime switching is studied and presented. Samples from naturalistic driving studies are employed for training the model. We present an analysis of the proposed switching model based on safety parameters that can classify more accurate and detailed behaviors.
The prediction of driving behavior based on multiple Markov Switching Vector Auto-Regression (MSVAR) models is introduced. More than one model is implemented, with different parameters (lag and regime) and different evaluation criteria (AIC, BIC, HQC); the best fitted models are selected for the prediction process. Additionally, the model is capable of fitting driving data and segmenting the data into regimes by estimating the different driving behavior change points. One limitation is the long calibration time of the model parameters, which is attributed to the learning of each model (depending on the model configuration, learning takes up to 3 days). The best fitted model is achieved at a lag of 13 (1.3 s); we have implemented models with lags down to 5 (0.5 s). The computational efficiency of the prediction is reasonable, taking only a few seconds, but it needs more adaptation to be more accurate. A low-cost data collection solution using smartphones is presented, and a validation process with another naturalistic driving dataset for predicting driver behavior over short periods of time is shown. The proposed driver behavior detection model can potentially be used in systems such as accident prediction and driver safety systems. The proposed method can be tested on a set of behaviors such as lane changing and accident analysis. A driving simulation over a 3D city traffic environment, as in [34], can be used for online testing of the model.

Acknowledgements This work is mainly supported by the Ministry of Higher Education (MoHE) of Egypt through a Ph.D. fellowship awarded to Dr. Ahmed Zaky. This work is supported in part by the Science and Technology Development Fund (STDF), Project ID 42519, "Automatic Video Surveillance System for Crowd Scenes", and by an E-JUST Research Fellowship awarded to Dr. Mohamed A. Khamis.
References

1. A.B. Zaky, W. Gomaa, Car following regime taxonomy based on Markov switching, in Proceedings of the IEEE 17th International Conference on Intelligent Transportation Systems (ITSC 2014), Qingdao, China (IEEE, 2014), pp. 1329–1334
2. M.A. Khamis, W. Gomaa, Adaptive multi-objective reinforcement learning with hybrid exploration for traffic signal control based on cooperative multi-agent framework. J. Eng. Appl. Artif. Intell. 29, 134–151 (2014)
3. M.A. Khamis, W. Gomaa, A. El-Mahdy, A. Shoukry, Adaptive traffic control system based on Bayesian probability interpretation, in Proceedings of the IEEE 2012 Japan-Egypt Conference on Electronics, Communications and Computers (JEC-ECC 2012), Alexandria, Egypt, 2012, pp. 151–156
4. A.A. Saa, M. Al-Emran, K. Shaalan, Mining student information system records to predict students' academic performance, in International Conference on Advanced Machine Learning Technologies and Applications (Springer, 2019), pp. 229–239
5. M. Al-Emran, Hierarchical reinforcement learning: a survey. Int. J. Comput. Digit. Syst. 4(02) (2015)
6. S. Zaza, M. Al-Emran, Mining and exploration of credit cards data in UAE, in 2015 Fifth International Conference on e-Learning (econf) (IEEE, 2015), pp. 275–279
7. M. Al-Emran, M.N. Al-Kabi, G. Marques, A survey of using machine learning algorithms during the COVID-19 pandemic, in Emerging Technologies During the Era of COVID-19 Pandemic, 2021, pp. 1–8
8. S. Hantoobi, A. Wahdan, M. Al-Emran, K. Shaalan, A review of learning analytics studies, in Recent Advances in Technology Acceptance Models and Theories (2021), pp. 119–134
9. P. Angkititrakul, C. Miyajima, K. Takeda, Stochastic mixture modeling of driving behavior during car following. J. Inf. Commun. Converg. Eng. 11(2), 95–102 (2013)
10. S. Panwai, H. Dia, Neural agent car-following models. IEEE Trans. Intell. Transp. Syst. 8(1), 60–70 (2007)
11. X. Ma, A neural-fuzzy framework for modeling car-following behavior, in Systems, Man and Cybernetics, 2006. SMC'06. IEEE International Conference on, vol. 2 (IEEE, 2006), pp. 1178–1183
12. A.B. Zaky, W. Gomaa, M.A. Khamis, Car following Markov regime classification and calibration, in Proceedings of the IEEE 14th International Conference on Machine Learning and Applications (ICMLA 2015), Miami, FL, USA (IEEE, 2015)
13. D. Manstetten, W. Krautter, T. Schwab, Traffic simulation supporting urban control system development, in Mobility for Everyone. 4th World Congress on Intelligent Transport Systems, Berlin, 21–24 Oct 1997 (Paper No. 2055) (1997)
14. N. Dapzol, Driver's behaviour modelling using the hidden Markov model formalism, in ECTRI Young Researchers Seminar, The Hague, the Netherlands, vol. 2, no. 2.2 (2005), pp. 2-1
15. K. Ikeda, H. Mima, Y. Inoue, T. Shibata, N. Fukaya, K. Hitomi, T. Bando, An adaptive rear-end collision warning system for drivers that estimates driving phase and selects training data. Trans. Inst. Syst. Control Inf. Eng. 24, 193–199 (2011)
16. A. Sathyanarayana, P. Boyraz, J.H. Hansen, Driver behavior analysis and route recognition by hidden Markov models, in Vehicular Electronics and Safety, 2008. ICVES 2008. IEEE International Conference on (IEEE, 2008), pp. 276–281
17. S. Sekizawa, S. Inagaki, T. Suzuki, S. Hayakawa, N. Tsuchida, T. Tsuda, H. Fujinami, Modeling and recognition of driving behavior based on stochastic switched ARX model. IEEE Trans. Intell. Transp. Syst. 8(4), 593–606 (2007)
18. H. Okuda, T. Suzuki, A. Nakano, S. Inagaki, S. Hayakawa, Multi-hierarchical modeling of driving behavior using dynamics-based mode segmentation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 92(11), 2763–2771 (2009)
19. T. Akita, T. Suzuki, S. Hayakawa, S. Inagaki, Analysis and synthesis of driving behavior based on mode segmentation, in Control, Automation and Systems, 2008. ICCAS 2008. International Conference on (IEEE, 2008), pp. 2884–2889
20. K. Takeda, Modeling and detecting excessive trust from behavior signals: overview of research project and results, in Human Harmonized Information Technology, vol. 1 (2016), pp. 57–75
21. H. Okuda, N. Ikami, T. Suzuki, Y. Tazaki, K. Takeda, Modeling and analysis of driving behavior based on a probability weighted ARX model. IEEE Trans. Intell. Transp. Syst. 14(1), 98–112 (2013)
22. D.A. Johnson, M.M. Trivedi, Driving style recognition using a smartphone as a sensor platform, in Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (IEEE, Washington, DC, 2011)
23. H.-M. Krolzig, Markov-Switching Vector Autoregressions: Modelling, Statistical Inference, and Application to Business Cycle Analysis, vol. 454 (2013)
24. J.D. Hamilton, A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica: J. Econom. Soc. 357–384 (1989)
25. J.D. Hamilton, R. Susmel, Autoregressive conditional heteroskedasticity and changes in regime. J. Econom. 64(1), 307–333 (1994)
26. A. Gelman, J.B. Carlin, H.S. Stern, D.B. Dunson, A. Vehtari, D.B. Rubin, Bayesian Data Analysis, 3rd edn. (2013)
27. P.G. Gipps, A behavioural car-following model for computer simulation. Transp. Res. Part B: Methodol. 15(2), 105–111 (1981)
28. S. Richardson, P.J. Green, On Bayesian analysis of mixtures with an unknown number of components. J. Royal Stat. Soc. Ser. B: Methodol. 731–792 (1997)
29. M. Treiber, A. Kesting, Microscopic calibration and validation of car-following models—a systematic approach. Procedia Soc. Behav. Sci. 80, 922–939 (2013)
30. S.P. Hoogendoorn, S. Ossen, M. Schreuder, Adaptive car-following behavior identification by unscented particle filtering, in Transportation Research Board 86th Annual Meeting, no. 07-0950 (2007)
31. M.K. Nichat, N.R. Chopde, Landmark based shortest path detection by using A* and Haversine formula. Int. J. Innov. Res. Comput. Commun. Eng. 92, 298–302 (2013)
32. H. Rakha, P. Pasumarthy, S. Adjerid, A simplified behavioral vehicle longitudinal motion model. Transp. Lett. 1(2), 95–110 (2009)
33. A.D. May, Traffic Flow Fundamentals (1990)
34. H. Prendinger, K. Gajananan, A.B. Zaki, A. Fares, R. Molenaar, D. Urbano, H. van Lint, W. Gomaa, Tokyo virtual living lab: designing smart cities based on the 3D internet. IEEE Internet Comput. 17(6), 30–38 (2013)
Understanding the Impact of the Ontology of Semantic Web in Knowledge Representation: A Systematic Review

Salam Al-Sarayrah, Dareen Abulail, and Khaled Shaalan
Abstract This paper is a systematic review of 19 resources showing the relationship between the semantic web and ontology, and it answers several questions about the implications of this relationship for knowledge representation and real-life solutions in various industries. PRISMA guidelines were used to select the papers, and the authors focused on the 10 most important papers to investigate the research questions. It is concluded that semantic-web ontologies are one of the solid foundations of knowledge representation; further industry-focused research is suggested to elaborate on the specific implications of semantic web ontology for knowledge representation and the associated proposed solutions in a specific industry.

Keywords Ontology · Semantic web · Systematic review · Knowledge management · Metadata
1 Introduction

Machine-centered methodologies in systematic reviews and maps are divisive in the international development and social impact evidence community. Some academics think that human-centered approaches, such as translation bots, are preferable to machine-centered methods such as machine learning. The researchers suggest that merging machine- and human-centered features can improve efficacy, efficiency, and societal significance [25]. Projects associated with the Semantic Web have expanded and been studied as one of the most active areas of knowledge representation research. So, what is the semantic web, and what role does an ontology play in this journey? Accordingly, we begin our investigation by defining the main concepts representing the research topic and gap.
S. Al-Sarayrah · D. Abulail · K. Shaalan (B) Faculty of Engineering and IT, The British University in Dubai, Dubai, United Arab Emirates e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_16
2 Definitions

2.1 Knowledge Representation and Reasoning

Knowledge is interpreted as the information obtained after processing a certain type of data, which could be true or false. Representation and reasoning are the art of symbolizing knowledge and/or encoding symbols created based on the gained knowledge [12]. Knowledge representation and reasoning is a field of Artificial Intelligence concerned with ensuring that information about the real world is written down in a way that a computer can understand and use to solve complex real-world problems, like diagnosing a medical condition or communicating with people in natural language [12]. It also explains how we can store knowledge in artificial intelligence: knowledge representation is more than just putting data into a database; it also lets an intelligent machine learn from that knowledge and experience so that it can act like a human [12].

With the advent of semantics, researchers developed technologies and new approaches that resulted in software and knowledge engineering propositions. Several researchers have used knowledge management and representation in semantic-web-enabled software test generation, such as automated testing using the semantic web and ontology. However, the lack of formalization of the test data process is a significant difficulty in knowledge-based software testing methodologies [9].

Integration testing guarantees proper component interface processing, while high-level testing (e.g., system or acceptance testing) verifies system behavior. Software testing is a knowledge-intensive process that can benefit from Knowledge Management principles and methodologies, which can be applied in designing, executing, and analyzing test cases dynamically; provided knowledge and observed system behavior can thus be used to design better tests during exploratory testing. Ontologies are knowledge management enablers in software testing, and some software testing researchers employ ontologies for knowledge representation [9].
2.2 Ontology

An ontology is a systematic and formal explanation of a field's concepts. A property refers to a characteristic of a specific concept, and a concept can have numerous properties. In addition, the bulk of ontologies employ classes to convey domain notions; subclasses of a class can be used to describe ideas more detailed than the parent class [29].
Ontology is generally used to handle software engineering difficulties such as service models and metrics, but it is also used for further analysis and evaluation information, such as the correct classification of problems and advantages in software engineering modules [13]. One of the most challenging components of building an ontology is finding a definition that academics can agree on; it is not an easy task, and it is tough to know where to begin because there are so many different definitions of ontology [20]. At the global level, a standardization working group was created by the World Wide Web Consortium (W3C) to produce an ontology language standard, recognizing that it would be a necessity for the development of the semantic web (Horrocks, n.d.).
2.3 Semantic Web and Ontology

What exactly is the Semantic Web, and how does it function? What distinguishes it from the current internet? What is the connection between it and ontology? The Semantic Web is an extension of the current World Wide Web in that it represents information in a way that is more meaningful for people and machines alike. It supports automated annotation, discovery, publication, advertising, and composition of services, as well as machine-readable descriptions of products and services. It is based on ontology [20], which is the backbone of the Semantic Web. The semantic web has paved the way for adding meaning to data using knowledge graphs (ontologies), with the goal of moving from a data-based system to an information-based system built on web data, affecting all knowledge-based system foundations [29].

The semantic web architecture is demonstrated in Fig. 1; technologies such as RDF (an XML-based standard for objects) and OWL (an ontology writing language) [29], which were formerly just standards for ontology creation employed by huge IT projects, are now part of the ontology's evolution. In the next part, we look at examples of these applications, focusing on projects that are searching for a standard schema for their data and information, which is crucial to a project's success [14]. The relationship between the semantic web and knowledge management may be described as the representation of data in information-based systems as online data that affects knowledge system infrastructures.
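As a toy illustration of these building blocks (RDF/RDFS/OWL plus a SPARQL query), the sketch below uses Python's rdflib; the namespace and class names are invented for the example and are not drawn from any reviewed ontology:

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EX = Namespace("http://example.org/testing#")
g = Graph()
g.bind("ex", EX)

# A tiny class hierarchy: a testing-technique concept and two
# specializations, expressed as OWL classes with rdfs:subClassOf.
g.add((EX.TestingTechnique, RDF.type, OWL.Class))
for sub in (EX.BlackBoxTesting, EX.WhiteBoxTesting):
    g.add((sub, RDF.type, OWL.Class))
    g.add((sub, RDFS.subClassOf, EX.TestingTechnique))
g.add((EX.BlackBoxTesting, RDFS.label, Literal("Black-box testing")))

# SPARQL: list every declared specialization of TestingTechnique.
q = """
PREFIX ex: <http://example.org/testing#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?t WHERE { ?t rdfs:subClassOf ex:TestingTechnique }
"""
for row in g.query(q):
    print(row.t)
```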
3 Problem Identification and Analysis

This paper seeks to provide an overall understanding of the relation between the semantic web and ontology, and to identify future opportunities for investigation and research that can support the BuID informatics community in exploring applications
that will study the relations between the semantic web and ontologies, focusing on a human-centric approach and shedding light on social causes such as the accessibility of technology solutions offered to students with disabilities, relying on the knowledge generated around the applications of the semantic web and ontologies while determining the rich data and metadata sources that will serve real-life solutions.

Fig. 1 Semantic web architecture

The goal of this study is to undertake a complete analysis of studies based on ontologies and the semantic web. Consequently, publications were assessed in order to gather data to support the study's objectives and to provide a wide overview of research on the topic. The proposed research questions are:

RQ1 What is the role of ontology in relation to the semantic web?
RQ2 What is the state of the art of ontology in the semantic web?
RQ3 What are the challenges and limitations of current approaches?

This systematic review is an exploratory effort to investigate the application of semantic web ontologies for knowledge representation modelling, in preparation for a further sector-focused investigation.
4 Literature Review

The Semantic Web can be seen as a massive archive of machine-processable datasets produced based on semantically enabled technology and on metadata developed over a certain period of time. The Semantic Web is layered, with each layer leveraging the capabilities of the levels below. There are several ways to describe the Semantic Web architecture; it can be seen as a stack depicting the standardization of technologies and their organization to enable the Semantic Web. Some of these layers, notably the intermediate ones, incorporate W3C-defined technologies for constructing semantic apps (i.e., RDF, RDFS, OWL,
SPARQL, RIF). Ontologies are the backbone technologies, allowing the representation of domain expertise. An ontology is an explicit specification of a conceptualization, which is an abstract, simplified view of the world used for representation and identification; put differently, an ontology is a data model representing a domain's concepts and relationships. Figure 2 depicts a software testing sub-ontology, part of the ROoST ontology [9]: a UML class diagram showing the concepts and relationships in the software testing techniques domain. There are many types of testing techniques in this sub-ontology: black-box, white-box, defect-based, and model-based. Section 8 will expand on the ROoST.

This section provides an overview of the available literature on real-world projects that use ontologies and the semantic web, as well as the significant influence that these technologies have on project success, particularly for initiatives that rely on data contributed by software [9]. One use case combining ontologies with the semantic web is an air traffic flow management (ATFM) system that arranges airplane traffic sequences to minimize conflicts and delays.
Fig. 2 ROoST’s testing techniques sub-ontology
This system necessitates the precise analysis of a significant volume of unstructured big data; failures in this analysis might lead to aviation disasters and permanent financial and human losses [29]. Building and implementing an ontology of flight safety messages, which include an essential component of airport operations information for captains, flight crew, flight operators, and air traffic controllers, among others, is the recommended answer for airline safety messaging. In the meanwhile, this ontology serves as the foundation for semantic web technologies, allowing machines to process the information [29]. This research used actual data from outbound and arriving flights at Mashhad Airport in Iran to put the approach to the test, and the Protégé software to build the ontology [29] (Fig. 3).

One of the most widely used 3D modeling approaches for urban landscapes in various information systems is the Open Geospatial Consortium (OGC) CityGML standard. It is utilized as a data transmission standard for city landscape management and planning systems, as well as a file-based data source for apps that explore 3D city landscapes on the web [8]. A massive cross-domain graph-based knowledge database, such as OpenCyc, Freebase, Wikidata, DBpedia, or YAGO, is used to create a dynamic geographic knowledge graph (DGKG) [22]. If the underlying ontology is constructed on CityGML 2.0, data may be updated without losing information, avoiding the inference mistakes that can occur when using CityGML. The import/exporter tool was also given a more extensive overhaul. Building data could be obtained from any area in a city using coordinates, due to fast geospatial search engines. The
Fig. 3 Dictionary of concepts, examples, relationships and features
system's speed was kept far below its capacity limits thanks to the usage of named graphs and data structures for data splitting [8]. The OntoCityGML ontology is a case of a well-defined and specified international standard being introduced into the semantic web while still following W3C rules and standards. Using advanced data processing tools, the capacity of such an ontology to serve as a schema for the semantic twin of the three-dimensional city database, which has been constructed and optimized at Technische Universität München for many years, was proven. The research data used was a set provided by the Blazegraph database, which offered the essential geographical search capability, as well as publicly available CityGML 2.0 data from Charlottenburg in Berlin [8].

Another example is adaptive learning, where rule-based recommender systems outperform alternative techniques. Educators will be aided differently, since a semantic-web-based system can adjust to the outcomes of screening tests. It is also straightforward to explain why a specific learning item is recommended for a particular learner, which increases confidence and credibility [20].

Other researchers show how to use whole texts, instead of titles, abstracts, and keywords, to combine lexical databases with crowdsourced dictionaries. Metadata sets may considerably enhance systematic reviews and maps. Because machine-centered forestry techniques and forestry-related assessments and maps are rare, the gains in effectiveness, efficiency, and relevance can be quite significant. The researchers also believe that the value of the hybrid approach will grow as global digital literacy and ontologies improve [25].

Acquiring the competencies of education programs or work activities, as well as criteria for learner or worker observation and evaluation, is an element of knowledge management and education processes, allowing for a systematic assessment of competency acquisition [22]. Existing competency models struggle with the need for a formal, explicit depiction of capabilities and competency profiles; the proposed solution is to create a competency ontology for the semantic web that can be used as a common domain-specific vocabulary in the definition of competency profiles, resulting in structured datasets defining people, materials, or task competencies that can be processed by linked-data web software applications [22].

The created ontology consists of five phases, each of which is interlinked with other ontologies in use within the network of linked open data, followed by the production of a general graphical competence model that humans can read for evaluation and communication purposes. It also converts its data into semantic web code, which is subsequently processed by computers. Furthermore, the link with ontologies allowed us to customize the competence model in the learning environment and adjust the activities and materials supplied to online learners to their individual knowledge and skills, or competencies [22].

Climate change monitoring [23], precision agriculture [28], smart urban planning [24], and many more uses rely on data generated by Earth Observation satellite systems. It is a challenging task to construct knowledge-driven remote sensing technologies and provide knowledge acquisition representation to human experts. These
abilities help with data standards and semantic source integration, which aids in the development of complex applications [1]. The suggested approach is to establish the RESEO ontology, which offers a framework for the semantic consolidation of data gathered by Earth Observation satellites, by using ontologies and semantic web technologies for remote sensing in the earth observation domain. It was designed in such a manner that it may readily be expanded to include additional data sources, such as satellites, unmanned aerial vehicles, or linked open data. RESEO is linked to a number of existing Earth observation ontologies, as well as ontologies specific to meteorological open data, resulting in a more complete knowledge base [1].

Another method for implementing ontologies is to build them from existing data sources using two basic processes: conceptual ontology generation and instance-level ontology population. Documents are mined and formally stored in an intermediate conceptual model, which is subsequently utilized to produce an ontology at the conceptual level, in order to develop ontologies and deep semantics from XML schemas and XML instances [30]. The built ontology is capable of mining the deep semantics of XSD, accurately expressing those semantics, and preserving the major data and semantic information in XSD and XML instance documents [30].

In international development and social impact, the Human Machine Hybrid Approach presents new methods for systematic reviews and mapping: it improves efficacy by disclosing strategic behavior in title, abstract, and keyword formation; efficiency by removing the screening stage; and social relevance by combining text mining with human auditing, accomplished by crowdsourcing literature identification and enhancing metadata granularity. According to [25], this technique improves systematic reviews and maps in the international development and social impact sectors by integrating the benefits of both human-centered and machine-centered approaches at numerous levels, from developing queries to synthesis (Fig. 4).
Fig. 4 Comparison of conventional and the human machine hybrid approach to systematic reviews and maps
Primary Outputs or Milestones in a Systematic Review/Map Traditional (green) and hybrid (yellow) techniques each have their own tasks. When the approaches differ, many boxes reflect the differences. A box with “same” was used when the tasks were not varied between the two techniques, or detailed elements of the tasks were not deleted to improve accessibility. For example, traditional procedures result in static reviews. The ultimate result of the hybrid technique is dynamic (live) evaluations, which may be updated utilizing new evidence sources without requiring substantial effort or reviewer time [25].
Fig. 5 Example of use of word association profiles for case matching
Using this implementation of the similarity measure for word association profiles, an incoming instance may be compared to existing examples in the knowledge base. Cases having textual data in the problem description can have their word association profiles compared to existing case profiles, as in Fig. 5. Figure 6 shows the assembled ontology in OWL format using the Protégé editor. Each of the super-classes (clinical, diseases, and caregiving) has subclasses inside; dementia and associated disorders are included in this class. Diagnosis, assessment, and treatment of dementia are covered in subgroups under "Diagnosis", while an informal caregiver's requirements are addressed in the Caregiving class. Web-based searches yielded a preliminary collection of phrases and concepts linked to the clinical element of dementia; the ADO ontology was also utilized to search for clinical aspects, therapies, and risk factors in Alzheimer's disease, and the ICF dementia-related codes were reused to extract dementia symptoms. Table 2 lists the various materials utilized to construct the ontology.

A final example is accessibility. Disability is a dynamic combination of health disorders and personal or environmental variables; according to the World Report on Disability, almost a billion people are disabled, with impairments such as sensory, bodily, or mental problems. "Accessibility" is defined by ISO 26800 as achieving a certain aim in a specific situation. So, the more accessible a piece of software is, the more individuals can use it, including those with disabilities. Accessibility combines design-for-all and assistive technologies, which enhance a person with disability's personal autonomy. Accessibility benefits everyone, not just persons with disabilities: ailing, inexperienced, or unsure users may also benefit [10].
Fig. 6 Dementia ontology, case structure
The semantic web is an approach to making metadata more machine-readable. Based on ontologies, it is possible to create information resources, publish them publicly, and query them [10]. The semantic web has many applications: everyone can access information like timetables, product sheets, personal profiles, and much more, and these data can also be filtered, compared, or processed to derive new important data. Search engines use semantic data to improve results and contextualize user selections. Several ontologies describe aspects of disability and accessibility; EARL is the most vital [10].
5 Methodology

This study employs a systematic review approach by following the guiding procedures of [16] and other systematic reviews [3–5].
5.1 Inclusion/Exclusion Criteria

Table 1 shows the inclusion and exclusion criteria for the papers to be reviewed for this paper.
Table 1 Inclusion and exclusion criteria

| Standard | Inclusion/exclusion |
|---|---|
| Date | Should be published between 2018 and 2022 |
| Language | Exclude papers that use languages other than English |
| Keywords | The article has to be related to all the search keywords below: semantic web, ontologies, systematic review, knowledge management |
| General criteria | The research keywords must be met, and the materials used must be from well-known journals. Priority of review will be given to papers providing real-life examples and solutions. The number of reviewed papers should not exceed 10 papers for the in-depth analysis and literature review segment of the study |

Table 2 Research string on the web to obtain the articles for this research

TITLE-ABS-KEY ("ontologies" AND "semantic web") AND (LIMIT-TO (PUBSTAGE, "final")) AND (LIMIT-TO (DOCTYPE, "ar")) AND (LIMIT-TO (SUBJAREA, "comp")) AND (LIMIT-TO (PUBYEAR, 2022) OR LIMIT-TO (PUBYEAR, 2021) OR LIMIT-TO (PUBYEAR, 2020) OR LIMIT-TO (PUBYEAR, 2019) OR LIMIT-TO (PUBYEAR, 2018)) AND (LIMIT-TO (LANGUAGE, "english"))
5.2 Data Sources and Search Strategies

To identify the research deficit, a bibliometric study was performed. The goal of the study was to look at the gaps between ontologies and the semantic web in terms of knowledge representation. The metadata generated from SCOPUS, IEEE, ScienceDirect, Emerald, Google Scholar, and Springer for the publications obtained was analyzed using the VOSviewer software. The graphs in Fig. 7 were created as a result of this examination; they show the co-occurrence of keywords in the 684 articles obtained by the research string described in Table 2, validating our study problem. The search focused on current research that contains the keywords in the title or abstract, is published after 2018, and is of the article, review paper, or book type. Figure 7 reveals that the majority of studies are focused on ontology and the semantic web, which represents the intersection of their components, as well as knowledge representation and knowledge-based systems in general. However, only 19 papers were used for the in-depth analysis. Figure 8 gives density and depth to the reviewed papers and provided insights for the required in-depth analysis and literature review.

When doing the research, we followed the PRISMA guidelines [18]. This approach has been characterized as a four-phase process in the literature: identification, screening, eligibility, and inclusion (as seen in Fig. 9). Figure 10 confirms the relationship between the semantic web and ontologies and shows that 2019 was the richest year in quality papers according to the inclusion criteria, more so than 2020 or the years before 2019.
Fig. 7 Co-occurrence of words in the documents retrieved
Fig. 8 Density of co-occurrences for 684 research papers
Fig. 9 Study PRISMA [21]
Fig. 10 Narrowing down the key words to 44
We can see in Fig. 11 that the more we narrowed down the analysis, the fewer clusters we got: at this stage we obtained 3 clusters, compared to 7 clusters in Fig. 7, and we can draw more detailed conclusions on the relations of the keywords used in the research's main query (Fig. 12). Even though the number of studies beyond 2019 studying the relationship between ontology and the semantic web decreased, the analysis confirms the ties between ontology and the semantic web in the investigated studies.
Fig. 11 Narrowing the analysis to 44 words and a minimum of 50 occurrences
Fig. 12 Confirming the direct and relationship between ontology and semantic web
5.3 Quality Assessment

Along with the inclusion and exclusion criteria, quality assessment is an important factor to consider [2]. A quality evaluation checklist was created to provide a method of assessing the content of the research articles that were kept for additional study (N = 19). The checklist was created using inputs from others [16]. Each question was graded on a three-point scale, with "1" indicating "Yes," "0" indicating "No," and "0.5" indicating a half point. As a result, each study could receive a score ranging from 0 to 7, with a greater overall score indicating a better ability to address the research objectives.

Quality assessment checklist:
1. Is the research's context/field of study well-defined?
2. Are the problems considered by the study clearly specified?
3. Are the data collecting procedures described in sufficient detail?
4. Do the findings add to the body of knowledge?
5. Is the journal and country rank above Q2?
6. Is the research relevant to the field of computer science?
7. Is the literature search comprehensive enough to include all relevant studies?
Table 4 shows the results of the quality evaluations for all 19 investigated papers. Twelve of the studies clearly passed the quality assessment with 100%, suggesting that they are suitable for use in the subsequent study; the papers of lower quality were excluded (the Appendix presents the reference papers' details).
6 Results

The main goal of an ontology is to compose shareable knowledge that can be understood by humans and applications, playing a role in achieving integration across organizations and on the Semantic Web. Since ontologies aim to capture domain knowledge, and their role is to create semantics explicitly in a generic way, they provide the basis for agreement within a domain; as a result, ontologies have become a hot issue in a variety of circles. Figure 13 depicts a chart of term co-occurrence in the documents obtained, demonstrating the strong link between the two elements. These results answer RQ1.

Many applications have been using ontologies and semantic web technologies (Fig. 14), which have represented huge value in successful projects, beginning with air traffic management systems for flight safety, where implementing an ontology builds accurate flight safety messages and semantic web technologies allow machines to process the information.
Table 4 Quality assessment results

| #RP | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Total | Percentage (%) |
|---|---|---|---|---|---|---|---|---|---|
| RP1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP3 | 0 | 0 | 0.5 | 0.5 | 0 | 1 | 0 | 2 | 28 |
| RP4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 100 |
| RP14 | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 3.5 | 50 |
| RP15 | 0.5 | 0.5 | 0 | 0.5 | 0 | 0.5 | 0 | 2 | 28 |
| RP16 | 0.5 | 0.5 | 1 | 0 | 0 | 1 | 1 | 4 | 57 |
| RP17 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 2 | 28 |
| RP18 | 0 | 1 | 0.5 | 1 | 0 | 0.5 | 0.5 | 3.5 | 50 |
| RP19 | 0.5 | 0.5 | 0.5 | 0.5 | 0 | 1 | 1 | 4 | 57 |
Fig. 13 Co-occurrence of keywords in the documents retrieved
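Counts of the kind plotted in Fig. 13 can be obtained by tallying how often pairs of keywords appear together in the same document. The following is a minimal sketch of that tally; the per-document keyword sets are illustrative placeholders, not the actual corpus.

```python
from collections import Counter
from itertools import combinations

# Sketch: counting keyword co-occurrence across retrieved documents (cf. Fig. 13).
docs = [
    {"Ontology", "Semantic Web", "RDF"},
    {"Ontology", "Knowledge Representation", "Semantic Web"},
    {"Ontology", "Internet Of Things (IOT)", "Semantic Web"},
]

pair_counts = Counter()
for keywords in docs:
    # every unordered pair of keywords in the same document co-occurs once
    for a, b in combinations(sorted(keywords), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common(3):
    print(f"{a} <-> {b}: {n}")
```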
Fig. 14 The number of published papers per country in the documents retrieved (countries represented include the United States, India, the United Kingdom, Pakistan, Saudi Arabia, Egypt, Tunisia, Mexico, Singapore, Iran, Japan, and others)
Another use of ontologies is in geospatial data, collecting building data from any place in a city using coordinates. Moving on to the learning fields, tailored learning environments and competency models, among other things, are starting to be constructed using ontologies. To answer the last research question, which concerns the challenges and limitations of current approaches, the authors examined the challenges and limitations of ontology-based approaches, including regression testing. Some of the limitations and obstacles are listed below (a brief illustrative sketch follows the list):
1. Large firms with access to big infrastructures are now the key participants in this industry [1].
2. Additional data sources, particularly more Earth-observation data, need to be incorporated into the ontology [1].
3. The variety of data and the processing of that data are significant issues [8].
4. Maintaining the openness of standards-based systems to future innovation [8].
5. Converting data of interest into information that contributes meaningfully to the development of knowledge in a given topic [17].
6. Manufacturing automation is rising, as is the usage of networked cyber-physical systems, large-scale heterogeneous sensor networks, and machine learning [27].
7. The ability to deal with very large ontologies and data sets [14].
8. The need for use cases in the UAE and for more research; as Fig. 14 shows, most research comes from other regions.
9. Lack of funding to conduct comprehensive testing [25].
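The results above repeatedly point to ontologies making information machine-processable, for example in flight-safety messages. As a brief, hedged illustration of that idea, the following sketch uses the rdflib Python library (pip install rdflib) with an invented flight-safety namespace; it is not taken from any of the reviewed systems.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Sketch: representing domain facts with an ontology vocabulary so that
# machines can query them. The namespace and terms are hypothetical.
EX = Namespace("http://example.org/flight-safety#")
g = Graph()
g.bind("ex", EX)

g.add((EX.MSG001, RDF.type, EX.SafetyMessage))
g.add((EX.MSG001, EX.severity, Literal("high")))
g.add((EX.MSG001, EX.concernsFlight, EX.FL123))

# SPARQL query: retrieve all high-severity safety messages
results = g.query("""
    PREFIX ex: <http://example.org/flight-safety#>
    SELECT ?msg ?flight WHERE {
        ?msg a ex:SafetyMessage ;
             ex:severity "high" ;
             ex:concernsFlight ?flight .
    }
""")
for msg, flight in results:
    print(msg, flight)
```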
7 Discussion and Conclusion
The study of the semantic web and ontologies seeks to provide a middleware layer that gathers data from sensors, cameras, and RFID devices and serves as a new semantic integration layer for modeling heterogeneous data from several sites.
Fig. 15 Data integration middleware
Managing real-time traffic is difficult, and defining intelligent traffic with current ontologies is hard; data from diverse sites must be integrated into a consistent format. Figure 15 shows a recommended process for further studying and monitoring this data traffic, relating it to specific ontologies, and highlighting the importance of data integration [26]. Typical systematic review and mapping approaches comprise title, abstract, and keyword screening to discover literature potentially relevant to the study issue. Because identification rests on individual judgment of adherence to the inclusion criteria, several people screen and compare results [25], interacting with one another to agree on standard criteria and review. This method takes time and is blind to the unintended concealment noted earlier among the obstacles. In the education sector, for example, most graduate schools teach research writing that employs prominent scientific or social terminology in titles, abstracts, and keywords. Screening techniques aim to select articles among the numerous hits returned by the query; still, the frequent use of such terminology causes a significant difference between full-text content and the title, abstract, and keywords. International development and social impact studies are also supported through international research-for-development programs [25]. In Fig. 16, the proposed system architecture is found to be convenient for optimizing the students' learning environment; it provides a reasonable example of the relationship between the semantic web and ontology and of how knowledge representation can be optimized for both the educator and the student. Further, it is concluded that there are several best practices for creating a rule-based recommender system: separating the user system from the recommendation system (the rule-based engine); specifying in advance the models to design, including attributes, metadata, and structure; and maintaining a knowledge base of facts and rules built through knowledge acquisition from experts, books, websites, and other sources.
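A minimal sketch of the first of these practices, keeping the rules in a knowledge base separate from the user-facing system, might look as follows; the rules, user attributes, and resource names are invented for illustration.

```python
# Sketch of the separation recommended above: a knowledge base of rules
# kept apart from the user-facing system.

RULES = [
    # (condition over user attributes, recommended resource)
    (lambda u: u["level"] == "beginner", "Intro to Ontologies course"),
    (lambda u: "OWL" in u["interests"], "OWL modelling tutorial"),
    (lambda u: u["goal"] == "research", "Systematic review methodology guide"),
]

def recommend(user):
    """Rule-based engine: return every resource whose condition matches."""
    return [resource for cond, resource in RULES if cond(user)]

# The user system only supplies attributes; it never embeds the rules.
student = {"level": "beginner", "interests": {"OWL", "RDF"}, "goal": "research"}
print(recommend(student))
```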
Fig. 16 System architecture for a learning environment based on semantic web ontology approach
It is also concluded that user satisfaction testing is necessary to assess whether the resources recommended by any proposed system structure are satisfactory for the end user and whether the report adds value for all involved stakeholders. Data integration allows merging data from multiple sets into one format. IoT applications use an ontology to create a machine-understandable conceptualization of a domain and offer a uniform ontology schema to overcome IoT integration issues. The data unification layer links data in various formats to data patterns based on the unified ontology model. This study presents a middleware that collects data from various devices based on ontologies; cloud-based IoT platforms require an extra semantic layer to establish a schema for data generated from various sites [26]. In the health sector, it is concluded that case-based reasoning (CBR), for example, is a problem-solving strategy that draws on prior knowledge and experience and is well suited to experience-based problems. Its knowledge base is built on ontologies and case representations with standards, criteria, and validated reports, which can inform the development of any patient case management system that uses semantic web and ontology relationships. There was no doubt, in conducting this systematic review, that establishing solid relations between the semantic web and ontologies is of significant interest and benefit to several industries that use the Internet of Things, among other domains seen in our daily lives, such as flight management through controlling air traffic flow, climate change prediction, social impact projects, and the creation of maps and 3D models for landscaping projects.
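As a concrete illustration of the data-unification layer described above, here is a minimal sketch; the site names, field mappings, and record formats are all hypothetical, standing in for whatever local schemas the middleware would actually encounter.

```python
# Sketch of a data-unification step as in the middleware of Fig. 15:
# heterogeneous records from different sites are mapped onto one schema.

UNIFIED_FIELDS = ("site", "sensor_id", "timestamp", "value")

# per-site mapping from each unified field to the site's local field name
SITE_MAPPINGS = {
    "site_a": {"sensor_id": "sid", "timestamp": "ts", "value": "reading"},
    "site_b": {"sensor_id": "sensor", "timestamp": "time", "value": "val"},
}

def unify(site, record):
    """Translate one site-local record into the unified schema."""
    mapping = SITE_MAPPINGS[site]
    out = {"site": site}
    for field in UNIFIED_FIELDS[1:]:
        out[field] = record[mapping[field]]
    return out

print(unify("site_a", {"sid": "cam-7", "ts": "2022-01-01T10:00", "reading": 42}))
print(unify("site_b", {"sensor": "rfid-3", "time": "2022-01-01T10:05", "val": 17}))
```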
One concern raised while conducting this systematic review relates to the privacy of the metadata utilized to develop the relationship between the semantic web and ontology and to produce a well-informed knowledge representation tool or model. Future work should consider focusing on a human-centric approach to the boundaries of data utilization and conducting further impact studies that inform users of the layers behind the results of data representation and reasoning. It is also concluded that the ethical, focused, and smart application of semantic web ontologies will result in better, more cost-effective, and faster decision-making, predictions, and smooth operations. The systematic review encountered limitations owing to the time available for conducting it, the extensive resources on the semantic web and ontologies, and the need for subject matter experts to further develop the research methodology.
Acknowledgements This work is a part of a project undertaken at the British University in Dubai.
Appendix

Reference papers retained for the review, with their classification and quality score (cf. Table 4):

#   Title                                                                 Classification   Quality score (%)
1   Semantic modelling of earth observation remote sensing [1]            RP1              100
2   Constructing ontologies by mining deep semantics from XML schemas and XML instance documents [30]   RP2   100
3   Ontologies and the semantic web special section [15]                  RP3              28
4   A new competency ontology for learning environments personalization [22]   RP4         100
5   Semantic 3D City Database—an enabler for a dynamic geospatial knowledge graph [8]   RP5   100
6   A human machine hybrid approach for systematic reviews and maps in international development and social impact sectors [25]   RP6   100
7   A design of a multi-agent recommendation system using ontologies and rule-based reasoning: pandemic context [20]   RP7   100
8   A unified ontology-based data integration approach for the internet of things [26]   RP8   100
9   Knowledge representation and management based on an ontological CBR system for dementia caregiving [19]   RP9   100
10  Semantic web technologies applied to software accessibility evaluation: a systematic literature review [10]   RP10   100
11  A systematic review on time-constrained ontology evolution in predictive maintenance [7]   RP11   100
12  An experimental analysis on evolutionary ontology meta-matching [11]   RP12   100
13  Ontology generation for flight safety messages in air traffic management [29]   RP13   100
14  Semantic description of quality of data in sensor networks [27]       RP14             50
15  Ontology-based reasoning for educational assistance in noncommunicable chronic diseases [17]   RP15   28
16  Ontology-based regression testing: a systematic literature review [13]   RP16          57
17  Semantic description of quality of data in sensor networks [27]       RP17             28
18  Ontologies and the semantic web [14]                                  RP18             50
19  The role of ontologies in linked data, big data and semantic web applications [6]   RP19   57
References
1. J.F. Aldana-Martín, J. García-Nieto, M. del Mar Roldán-García, J.F. Aldana-Montes, Semantic modelling of earth observation remote sensing. Expert Syst. Appl. 187 (2022). https://doi.org/10.1016/j.eswa.2021.115838
2. M. Al-Emran, V. Mezhuyev, A. Kamaludin, K. Shaalan, The impact of knowledge management processes on information systems: a systematic review. Int. J. Inf. Manag. 43, 173–187 (2018)
3. A.A. Alqudah, M. Al-Emran, K. Shaalan, Technology acceptance in healthcare: a systematic review. Appl. Sci. 11(22) (2021). https://doi.org/10.3390/APP112210537
4. K. Al-Saedi, M. Al-Emran, E. Abusham, S.A. El-Rahman, Mobile payment adoption: a systematic review of the UTAUT model, in 2019 International Conference on Fourth Industrial Revolution, ICFIR 2019 (2019). https://doi.org/10.1109/ICFIR.2019.8894794
5. M. AlShamsi, M. Al-Emran, K. Shaalan, A systematic review on blockchain adoption. Appl. Sci. 12(9), 4245 (2022). https://doi.org/10.3390/APP12094245
6. M. Bennett, K. Baclawski, The role of ontologies in Linked Data, Big Data and Semantic Web applications. Appl. Ontol. 12(3–4), 189–194 (2017). https://doi.org/10.3233/AO-170185
7. A. Canito, J. Corchado, G. Marreiros, A systematic review on time-constrained ontology evolution in predictive maintenance. Artif. Intell. Rev. (2021). https://doi.org/10.1007/s10462-021-10079-z
8. A. Chadzynski, N. Krdzavac, F. Farazi, M.Q. Lim, S. Li, A. Grisiute, P. Herthogs, A. von Richthofen, S. Cairns, M. Kraft, Semantic 3D City Database—an enabler for a dynamic geospatial knowledge graph. Energy AI 6 (2021). https://doi.org/10.1016/j.egyai.2021.100106
9. M. Dadkhah, S. Araban, S. Paydar, A systematic literature review on semantic web enabled software testing. J. Syst. Softw. 162, 110485 (2020). https://doi.org/10.1016/j.jss.2019.110485
10. F.J. Estrada-Martínez, J.R. Hilera, S. Otón, J. Aguado-Delgado, Semantic web technologies applied to software accessibility evaluation: a systematic literature review. Univ. Access Inf. Soc. (2020). https://doi.org/10.1007/s10209-020-00759-y
11. N. Ferranti, J.F. de Souza, S. Sã Rosário Furtado Soares, An experimental analysis on evolutionary ontology meta-matching. Knowl. Inf. Syst. 63(11), 2919–2946 (2021). https://doi.org/10.1007/s10115-021-01613-0
12. C. Grosan, A. Abraham, Intelligent Systems. Intelligent Systems Reference Library, vol. 17 (Springer, Berlin, Heidelberg, 2021). https://doi.org/10.1007/978-3-642-21004-4_6
13. M. Hasnain, I. Ghani, M.F. Pasha, S.-R. Jeong, Ontology-based regression testing: a systematic literature review. Appl. Sci. 11(20), 9709 (2021). https://doi.org/10.3390/app11209709
14. I. Horrocks, Ontologies and the Semantic Web (2008). http://www.w3.org/1999/02/
15. E.K. Jacob, Ontologies and the Semantic Web Special Section (2003). www.w3.org/TR/REC-rdf-syntax/
16. B. Kitchenham, S. Charters, Guidelines for Performing Systematic Literature Reviews in Software Engineering (Software Engineering Group, School of Computer Science and Mathematics, Keele University, 2007), pp. 1–57
17. A.V. Larentis, E.G. de A. Neto, J.L.V. Barbosa, D.N.F. Barbosa, V.R.Q. Leithardt, S.D. Correia, Ontology-based reasoning for educational assistance in noncommunicable chronic diseases. Computers 10(10) (2021). https://doi.org/10.3390/computers10100128
18. D. Moher, A. Liberati, J. Tetzlaff, D.G. Altman, Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J. Clin. Epidemiol. 62(10), 1006–1012 (2009). https://doi.org/10.1016/j.jclinepi.2009.06.005
19. S. Nasiri, G. Zahedi, S. Kuntz, M. Fathi, Knowledge representation and management based on an ontological CBR system for dementia caregiving. Neurocomputing 350, 181–194 (2019). https://doi.org/10.1016/j.neucom.2019.04.027
20. A. Ouatiq, K. El-Guemmat, K. Mansouri, M. Qbadou, A design of a multi-agent recommendation system using ontologies and rule-based reasoning: pandemic context. Int. J. Electr. Comput. Eng. 12(1), 515–523 (2022). https://doi.org/10.11591/ijece.v12i1.pp515-523
21. M.J. Page, J.E. McKenzie, P.M. Bossuyt, I. Boutron, T.C. Hoffmann, C.D. Mulrow, L. Shamseer, J.M. Tetzlaff, E.A. Akl, S.E. Brennan, R. Chou, J. Glanville, J.M. Grimshaw, A. Hróbjartsson, M.M. Lalu, T. Li, E.W. Loder, E. Mayo-Wilson, S. McDonald, D. Moher, The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372 (2021). https://doi.org/10.1136/bmj.n71
22. G. Paquette, O. Marino, R. Bejaoui, A new competency ontology for learning environments personalization. Smart Learn. Environ. 8(1) (2021). https://doi.org/10.1186/s40561-021-00160-z
23. S.L.P. Plummer, M. Doherty, The ESA Climate Change Initiative (CCI): a European contribution to the generation of the Global Climate Observing System. Remote Sens. Environ. 203, 28 (2017). https://doi.org/10.1016/j.rse.2017.07.014
24. M. Reba, K.C. Seto, A systematic review and assessment of algorithms to detect, characterize, and monitor urban land change. Remote Sens. Environ. 242 (2020). https://doi.org/10.1016/j.rse.2020.111739
25. M. Sartas, S. Cummings, A. Garbero, A. Akramkhanov, A human machine hybrid approach for systematic reviews and maps in international development and social impact sectors. Forests 12(8) (2021). https://doi.org/10.3390/f12081027
26. A. Swar, G. Khoriba, M. Belal, A unified ontology-based data integration approach for the internet of things. Int. J. Electr. Comput. Eng. 12(2), 2097–2107 (2022). https://doi.org/10.11591/ijece.v12i2.pp2097-2107
27. A.P. Vedurmudi, J. Neumann, M. Gruber, S. Eichstädt, Semantic description of quality of data in sensor networks. Sensors 21(19) (2021). https://doi.org/10.3390/s21196462
28. M. Weiss, F. Jacob, G. Duveiller, Remote sensing for agricultural applications: a meta-review. Remote Sens. Environ. 236 (2020). https://doi.org/10.1016/j.rse.2019.111402
29. M. Yousefzadeh Aghdam, S.R. Kamel Tabbakh, S.J. Mahdavi Chabok, M. Kheyrabadi, Ontology generation for flight safety messages in air traffic management. J. Big Data 8(1) (2021). https://doi.org/10.1186/s40537-021-00449-3
30. F. Zhang, Q. Li, Constructing ontologies by mining deep semantics from XML schemas and XML instance documents. Int. J. Intell. Syst. 37(1), 661–698 (2022). https://doi.org/10.1002/int.22643
Telemedicine: Digital Communication Tool for Virtual Healthcare During Pandemic
Lakshmi Narasimha Gunturu, Kalpana Pamayyagari, and Raghavendra Naveen Nimbagal
Abstract The emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has completely changed the way we live. Since its eruption, researchers and healthcare professionals have faced challenges in the areas of drug therapy and control of viral spread. These challenges pave the way for information technologies, which play a key role in controlling the coronavirus disease 2019 (COVID-19) pandemic. In this chapter, we discuss the concept of telemedicine, the importance of telehealth and telemedicine services during the COVID-19 pandemic, and the challenges faced by healthcare professionals during the pandemic together with their solutions through the execution of telemedicine. We also deal with the limitations in the implementation of telemedicine services during the COVID-19 pandemic.
Keywords Coronavirus · Information technologies · Telemedicine · Applications · Challenges · Solutions
1 Introduction
Coronavirus disease 2019 (COVID-19) is caused by a virus of the family Coronaviridae, whose members cause severe infections in both humans and animals [1]. In humans especially, these viruses can cause respiratory tract infections ranging from the common cold to severe lung infections such as pneumonia. The recent COVID-19 outbreak in Wuhan, China, rapidly spread to other countries and regions owing to its high transmission rate. Considering its spread, the World Health Organization (WHO) declared it a pandemic [2].
L. N. Gunturu (B)
Scientimed Solutions Private Limited, Mumbai, Maharashtra, India
e-mail: [email protected]
K. Pamayyagari
Department of Pharmacy Practice, Annamacharya College of Pharmacy, Rajampeta, Andhra Pradesh, India
R. N. Nimbagal
Department of Pharmaceutics, Sri Adichunchanagiri College of Pharmacy, Mandya, Karnataka, India
General clinical symptoms of COVID-19 are fever, cough, throat pain, and dyspnoea [3, 4]. Elderly people and those with underlying health problems such as diabetes, hypertension, and cardiovascular disease have a greater chance of developing the most severe forms of COVID-19 [5]. One important precaution implemented to reduce viral transmission and infection is social distancing between people or groups of people, especially in crowded areas [6, 7]. To minimize transmission, travel restrictions were reinforced around the world and the public was quarantined at home [8]. Despite these preventive measures, people with a low chance of contracting COVID-19, as well as the elderly population who are more susceptible to it, must still follow the daily guidelines and receive care without being exposed to the virus or to patients in hospital [6]. As part of strict restrictions, healthcare professionals do not allow the public into the wards of COVID-19 patients [9, 10]. Pandemic situations pose challenges to healthcare professionals, health organizations, and the public in terms of safety measures and prevention [11]. Hence, innovative technologies that can address these challenges and provide solutions to healthcare organizations are essential to meet the needs of COVID-19 patients as well as of those who require normal healthcare services. Technology therefore plays a crucial role in this arena by providing its users with new options [12]. The ultimate solution to the COVID-19 pandemic will of course be multifactorial; in the meantime, it is best to use current technologies to ease the delivery of required healthcare services while reducing the risk of direct exposure to the virus [13]. Implementing telemedicine services at this point in the pandemic has the prospect of enhancing research, reducing virus transmission, and supporting the management of clinical cases [6, 13, 14]. Telehealth is a twenty-first-century concept that engages patients, physicians, and other healthcare professionals [15, 16]. In telemedicine, doctors deliver healthcare services to patients while maintaining social distance, made possible by using information technologies as a medium for exchanging information [17]. Telemedicine and telehealth services provide real-time benefits to users by removing distance as a critical factor [18]. In today's digital world, almost every home possesses one or another kind of electronic device, particularly smartphones [19]. These mobile phones act as a channel of communication between physician and patient through webcams [20]. Meetings via webcam and television programmes are used to educate patients in hospital and people in quarantine to prevent the spread of viral infection to others, and doctors use these services to treat their patients remotely [21]. With telemedicine services, physicians can also face the existing challenges more easily [7, 22]. Implementing telemedicine brings numerous advantages, particularly in non-emergency departments and general wards, where most cases do not require direct physician–patient interaction [23]. By providing patient care in remote settings, the risk of person-to-person virus transmission is reduced.
In addition to providing safety to the public, patients, healthcare providers, and physicians, these services also offer patients wide access to healthcare benefits [23]. Therefore, telemedicine and telehealth services are considered the best and most effective information technologies for controlling the COVID-19
pandemic [24, 25]. The public and patients are interested in using telemedicine services in their day-to-day lives, but limitations remain [26, 27]. Effective implementation of these services requires approvals from government authorities, payment systems, and insurance policies. Physicians also have concerns regarding service quality, effectiveness, privacy, and safety [28]. AlQudah et al., in their systematic review, concluded that the technology acceptance model and the unified theory of acceptance and use of technology (UTAUT) were the prevailing technology acceptance models in healthcare [29]. Telehealth services are essential tools in the fight against the COVID-19 pandemic. They are helpful to the public and to both COVID-19 and non-COVID-19 patients, especially those who are self-isolating during the pandemic and seek a doctor's advice. Since the outbreak of COVID-19, problems have arisen in every field and organization, especially in the healthcare domain. The public is often unaware of the existing virtual tools, which makes them visit hospitals directly even for small health issues, increasing the risk for both physicians and other patients. It is therefore important to make the public aware of digital health concepts and their significance in the pandemic era. Hence, this chapter discusses the role of telemedicine in overcoming pandemic risk. The main objectives of the study are:
a. To provide awareness of telemedicine concepts and the significance of virtual tools.
b. To explain the need for telemedicine and how to utilize it for COVID-19 patients.
c. To discuss how telemedicine offers solutions for different healthcare problems.
2 Methodology
An extensive literature survey was performed in databases such as PubMed and Google Scholar using keywords such as COVID-19, prevention, combat technologies, telehealth, and applications. All related articles that fall within the scope of technology applications in healthcare were taken into consideration.
3 Need of Telemedicine During Pandemic
Telemedicine services fulfil the following functions.
3.1 Virtual Consultations
Video meetings and telephone consultations serve patients who report COVID-19 symptoms, as well as non-COVID-19 patients who need care for other health conditions. This minimizes the chance of viral spread [30].
3.2 Tele-screening
These devices collect blood samples and monitor patient vitals such as respiratory rate, oxygen saturation, and blood pressure. The collected data are reported to healthcare professionals to facilitate better therapeutic outcomes.
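As a minimal sketch of such a tele-screening step, collected vitals might be compared against alert ranges before being forwarded; the thresholds below are illustrative placeholders, not clinical guidance.

```python
# Sketch of a tele-screening check: vitals are compared against alert
# ranges before being reported to clinicians. Thresholds are illustrative.

THRESHOLDS = {
    "respiratory_rate": (12, 20),    # breaths per minute
    "oxygen_saturation": (94, 100),  # percent SpO2
    "systolic_bp": (90, 140),        # mmHg
}

def screen(vitals):
    """Return the list of vitals outside their normal range."""
    alerts = []
    for name, value in vitals.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append((name, value))
    return alerts

reading = {"respiratory_rate": 24, "oxygen_saturation": 91, "systolic_bp": 118}
print(screen(reading))  # flagged vitals are forwarded to the care team
```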
3.3 Use of Sensors
Sensors are used in the form of GPS trackers, especially in remote areas. They provide cautionary signals to the public to avoid potentially dangerous areas with severe COVID-19 caseloads.
3.4 Chatbots
These provide beneficial health recommendations to the public during pandemic times: experts answer frequently asked questions posed by patients, and chatbots connect physicians with patients to fulfil their needs [30] (a minimal chatbot sketch follows the list below). Implementing telehealth services is cheap and affordable for the public, and provides access to health information through the internet and other health-related sources. Starting with telephone meetings, teleservices have evolved, alongside the progress of technology and computing, into sophisticated tools that provide health information to patients and the public in various places. Other objectives in the implementation of telemedicine services are:
A. To minimize diagnosis time and provide accurate therapy that stabilizes the infected patient within a short time.
B. To follow up regularly with patients and the public, especially those who are quarantined or under travel restrictions; this helps reserve hospital resources for critically ill patients.
C. To extend medical facilities to remote areas.
D. To protect healthcare professionals, who are essential during the COVID-19 pandemic, by avoiding direct person-to-person contact.
Telemedicine: Digital Communication Tool for Virtual …
305
E. To reduce the costs associated with protective tool kits such as disposable gloves and facial masks, which is known as the green impact of telehealth [30, 31].
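The chatbot function named in Sect. 3.4 can be sketched with simple keyword matching against expert-curated answers; the entries and matching rule below are invented for illustration and stand in for the far richer dialogue systems used in practice.

```python
# Minimal FAQ chatbot sketch: keyword matching against curated answers.

FAQ = {
    ("symptom", "fever", "cough"): "Common symptoms include fever, cough and dyspnoea.",
    ("isolate", "quarantine"): "Self-isolate and book a virtual consultation.",
    ("mask", "distance"): "Wear a mask and keep physical distance in crowds.",
}

def answer(question):
    """Return the stored answer whose keywords best match the question."""
    words = set(question.lower().split())
    best, score = None, 0
    for keywords, reply in FAQ.items():
        hits = len(words & set(keywords))
        if hits > score:
            best, score = reply, hits
    return best or "Connecting you to a physician..."

print(answer("I have a fever and a cough, what should I do?"))
```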
4 Telemedicine and COVID-19 Patients
In this section, we discuss how telemedicine is essential in providing patient care and how it helps physicians tackle the COVID-19 pandemic (Fig. 1).
4.1 Initial Screening Tool
When a patient visits the clinic, an initial screening decides the patient's entry into the hospital. The patient is screened at the entrance by scanners fitted in the hospital setting, operated with the help of personal protective equipment. If the patient is found to be COVID-19 positive, he or she is directed to take a virtual visit with the physician.
4.2 Temperature Monitoring
Scanners monitor body temperature when patients visit the hospital setting. If the temperature exceeds 100 °F (about 37.8 °C), the patient is denied entry to the hospital premises and is instead provided information on a virtual visit.
Fig. 1 Benefits of telemedicine during COVID-19: screening tool, temperature monitoring, coronavirus diagnosis, surveillance, and patient screening and triage
4.3 Surveillance and Dispersion of COVID-19 Information
One effective strategy for controlling the COVID-19 pandemic is to monitor the public and patients continuously and transfer the pandemic data obtained in different regions of the world to healthcare professionals, so that clear-cut evidence on viral pathology and on clinical signs and symptoms can be analysed [32]. Iran is a telling example: in 2020 it initially confirmed only 43 cases together with a fatality rate [33], yet mathematical models put the original number of COVID-19 cases in the thousands. Underreporting is thus one of the reasons for the global spread of the pandemic [34]. Health organizations need effective tools to speed up the conveyance of information and limit viral spread. Telemedicine services meet this challenge, as they offer better worldwide connectivity for transferring electronic data and epidemiological information [35]. Technology platforms such as HealthMap and the Surveillance Outbreak Response Management and Analysis System have been utilised in COVID-19 surveillance programmes [36, 37]; they can identify disease at an earlier stage than traditional resources. Sun et al. described the advantage of monitoring coronavirus patients through media channels and social networks to help reconstruct the pandemic outburst and provide detailed patient-related information to healthcare departments [38]. Qin et al. made use of big-data information technologies to identify newly infected coronavirus patients who were either suspected or confirmed cases [39]. In addition, the private company BlueDot built an artificial-intelligence-based surveillance tool to disclose pandemic news to a wide population and was described as the first company to identify the outbreak in late December 2019, ahead of Chinese officials [40, 41]. Zhang et al. discussed a real-time tool to identify pandemic information based on data obtained from the social media platform Twitter [42]; evaluation of the Twitter data revealed that random-forest algorithms were superior in predicting COVID-19 cases. Zivkovic et al. reported high efficacy with hybrid mathematical algorithms that combine machine learning models to predict coronavirus cases in newly diagnosed patients [43]. Likewise, telemedicine technologies are combined with other information technology approaches to obtain real-time, updated pandemic information during the outbreak, permitting physicians and national and international healthcare agencies to embrace synchronized control strategies [28]. AlQudah et al. proposed a model based on the Health Level Seven (HL7) protocol for public benefit; their work concluded that HL7 is an efficient tool for reducing patients' journey time and helps with early identification of patients in the outpatient department [44].
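The random-forest finding cited above [42] can be illustrated with a small, hedged sketch using scikit-learn on synthetic data; this is not the authors' pipeline, only the general technique of fitting a forest on lagged daily case counts.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch: a random forest fitted on lagged case counts. Data is synthetic.
rng = np.random.default_rng(0)
cases = np.cumsum(rng.poisson(30, size=120))  # fake cumulative case series

LAGS = 7  # use the previous week's values as features
X = np.array([cases[i:i + LAGS] for i in range(len(cases) - LAGS)])
y = cases[LAGS:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:-14], y[:-14])       # train on all but the last two weeks
pred = model.predict(X[-14:])     # predict the held-out fortnight
print(np.round(pred[:5]))
```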
4.4 COVID-19 Patient Screening and Triage
One important strategy in a pandemic outbreak is forward triage: differentiating patients and the public before they enter the hospital setting for treatment. Consumer-based telehealth approaches are considered the best method of forward triage, screening patients effectively and in a timely manner [15]. This approach is patient-oriented, friendly to use (especially during self-isolation), and helps physicians and other healthcare professionals identify COVID-19 patients and reduce the risk of disease transmission [16, 17]. Many countries have been offering forward-triage health services to their citizens through websites and smartphone apps that administer a short survey with questions based on the patient's age, clinical symptoms, and previous travel history. Depending on the survey results, the triage provides a recommendation: visit a nearby COVID-19 centre to confirm the diagnosis, or connect virtually with a physician for treatment. In addition, health websites such as Buoy Health [45] and Lark Health [46] are used to record public symptoms and provide exact diagnostic and preventive measures. Yan et al. described a telemedicine tool integrated with artificial intelligence (AI) that lets the public self-evaluate the risks associated with coronavirus, reducing the strain on healthcare staff and the anxiety levels of patients [47]. Al-Emran et al. concluded that AI techniques along with machine learning algorithms are efficient in combating the COVID-19 pandemic [48]. Arpaci et al. developed a machine learning classifier based on fourteen clinical features for COVID-19 prediction; their results showed that classification via regression (CR) was the most accurate classifier for predicting positive and negative COVID-19 symptoms, with an accuracy of 84.21% [49]. Al-Emran et al. also concluded that wearable smart glasses equipped with sensors can detect COVID-19 spread and can even be used for early screening of patients, helping to minimize infection rates among the public [50]. Many countries in the pandemic crisis have adopted virtual tools to deliver timely and appropriate medical facilities to the public and to patients [15]. Through virtual modes (mobile phones and web cameras), patients can interact with physicians about their health concerns [20]. Citizens presenting at the clinic with respiratory symptoms give an early indication of coronavirus disease, so such patients especially are directed to doctors via telehealth services. Srinivasa Rao et al. [51] and Zahedi et al. [52] described mathematical frameworks integrated with AI telemedicine that can help recognize coronavirus cases through risk assessment of clinical signs and symptoms matched against diagnostic criteria, deployed as smartphone and web-based clinical assessments. Based on the final diagnostic confirmation and the respondents' answers, these apps can send information to clinicians and healthcare bodies to record the daily suspected or confirmed COVID-19 cases [34]. The information is also forwarded to the particular patient for further screening and health visits.
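A minimal sketch of the survey-based routing just described might look as follows; the questions and routing rules are hypothetical illustrations, not a validated clinical protocol.

```python
# Sketch of the forward-triage survey logic of Sect. 4.4.

def triage(age, symptoms, recent_travel):
    """Route a respondent based on a short survey."""
    respiratory = {"fever", "cough", "dyspnoea"} & set(symptoms)
    if respiratory and (age >= 65 or recent_travel):
        return "Visit the nearest COVID-19 centre for testing."
    if respiratory:
        return "Book a virtual consultation with a physician."
    return "Self-monitor; no action needed at this time."

print(triage(age=70, symptoms=["fever", "cough"], recent_travel=False))
print(triage(age=30, symptoms=["headache"], recent_travel=True))
```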
Screening algorithms automated through the integration of telehealth services, especially consultation facilities, together with local epidemiological information on the pandemic, are used to produce standard screening strategies and care patterns among healthcare staff [34]. For example, the Cleveland Clinic has leveraged telehealth services that permit clinical staff to diagnose patients who are self-isolating at home due to severe COVID-19 risk [34]. The D'Angelo team used convolutional networks based on human activity to enhance the performance of COVID-19 case-tracking applications [53]. In the pandemic crisis, many open venues such as stadiums and exhibition halls were converted into hospitals to provide healthcare facilities, a concept first implemented in China to tackle the outbreak [54]. In these hospitals, patients are segregated by symptoms from severe to mild and moderate categories [54]. Notably, information technologies such as telemedicine are used to support health staff, record electronic health data, and share the information with higher centres through platforms such as cloud technologies. Such facilities provide quality medical treatment and fulfil the criteria of screening triage [55].
4.5 Diagnosis of Coronavirus
Under COVID-19 conditions, telehealth services provide a better plan of action: increasing communication between health staff raises the chance of accurately identifying difficult coronavirus cases and enhances treatment outcomes in moderately to critically ill patients [56]. In China, for example, an epidemic expert team was launched to isolate infected patients, provide accurate diagnoses, and report treatment outcomes according to protocol. This expert team used teleservice platforms to connect with physicians, healthcare organizations, and patients across the globe for better treatment outcomes. When the World Health Organization (WHO) declared the coronavirus condition a pandemic, the expert team summoned many healthcare delegates in China to use a telemedicine platform named the Cloud Intensive Care Unit for patients diagnosed as critically ill [56]. Many of these Chinese health professionals shared their views on using this telemedicine platform to manage corona patients. In a similar manner, Chinese health professionals in hospitals used 5G dual networks to cope with the pandemic [57]. In total, 424 telemedicine consultations were performed, and 15% of the cases were diagnosed with coronavirus. This improved diagnostic accuracy and was used in the rural parts of western China; the model helps explain the low case-fatality rate in Sichuan (0.55%) compared with Hubei (4.63%) [57].
5 Challenges Faced and Solutions Through Telemedicine
5.1 Impact of COVID on Research Participants
With the increased spread of the coronavirus pandemic across countries, research workflows were completely transformed, and research personnel were given other tasks to perform during the crisis [58]. For example, nurses involved in research were allocated supporting roles in hospital labour, postpartum, and delivery wards to take care of pregnant women. This move reduced the pressure on physicians and let them provide clinical services to COVID-19-infected patients. Research personnel also risked contracting the virus during their visits or contact with patients while providing health services; to minimize this risk, they were scheduled for video-conference visits only, and personal protective equipment was provided [58]. In some places, medical students and residents were involved in screening and enrolling study participants, tasks usually performed by research personnel. To ensure safe and better health outcomes for the public during COVID-19, medical residents were involved in patient research activities; nevertheless, priority must be given to research coordinators with better skills. Kienle et al. discussed the implementation of telemedicine services in clinical trials (Table 1). They addressed the challenges faced in clinical trials by both researchers and patients during COVID-19 and their solutions through telemedicine [59]. They concluded that telemedicine services should be implemented routinely, not only during pandemics, because this benefits patients in remote areas who cannot access healthcare facilities [59].
Table 1 Solutions obtained by telemedicine in clinical trials

Challenge: Elderly people had difficulties with vision, hearing, and information processing, which became an obstacle for therapy.
Solution: Telemedicine provides high-resolution cameras that give better visual clarity, and it gives the therapist extra screens to examine the patient clearly.

Challenge: Elderly patients lacked awareness of technology, which became a limitation for participants.
Solution: It enlists support from neighbours and friends so that older people can learn how to use the technology, and it provides rental smart devices for patients who do not own smart tablets.
5.2 COVID-19 Impact on Outpatients in Hospitals
With the outbreak of COVID-19, most outpatient visits to hospitals, and the treatment facilities provided by healthcare staff to outpatients, were minimized owing to the rapid spread of the pandemic [60]. To manage such outpatient visits and provide them with appropriate diagnostic and health facilities, the adoption of telemedicine is the only option. Telemedicine and online assessment tools provide patient care to the public and help overcome physician barriers to treatment. Implementing telemedicine services in outpatient wards sustains continuous patient care during and after the pandemic. Some medical health centres started using online services such as Microsoft Teams, Zoom, Skype, and Google Meet to provide telemedicine services in the COVID-19 crisis [60]. Implementation of telehealth services helps in the areas summarized in Fig. 2. Telemedicine and online virtual tools were used previously to provide healthcare facilities to the public during pandemics. Implementing telehealth services helps obtain accurate diagnostic information from physicians and gives healthcare staff real-time daily updates. It also stores medical information, such as patient findings, complaints, and diagnostic images, which the physician can view later whenever required. Patients visiting outpatient departments in hospitals can adopt either synchronous or asynchronous telemedicine health platforms. Healthcare centres positioned telehealth services on-site, and hospitals provided laptops at patients' homes, making use of video-conference platforms such as Zoom, Google Duo, and Microsoft Teams, as well as digital stethoscopes with web cameras [60].
Fig. 2 Areas equipped with telehealth services: minimizing diagnosis time and enabling early treatment; regular patient follow-up from home, avoiding oversaturation of health services; reducing patient inflow to hospitals and the chance of hospital-acquired infections; effective utilization of medical resources; reducing the chance of infection, importantly for medical staff; and saving expenses on personal protective equipment such as hand gloves and sanitizers
6 Challenges
Telemedicine and virtual software platforms are sensible, realistic, and pertinent aids for clinicians and patients during COVID-19 and the post-pandemic era, through synchronous and asynchronous means such as smartphones, videoconferencing, and e-mail [13]. Nevertheless, various barriers to the sustainable implementation of telehealth need to be addressed. Researchers in some developed countries have outlined the discrepancies that occur in the adoption of telemedicine [61, 62]. Based on the literature, the challenges in implementing telemedicine can be grouped by social, technological, human, institutional, and financial elements. We summarize these elements and their influence on the broad adoption of telemedicine and virtual software platforms with regard to organizations, clinicians, and patients. The barriers are discussed below.
6.1 Organizational/Service Provider Barriers
Service providers contend with funding, reimbursement, legal issues, data security, privacy and confidentiality, equipment, and efficiency.
6.1.1 Funding
Implementing telemedicine and virtual software platforms takes time and does not happen abruptly. Funding is necessary to acquire the needed resources, including the cost of virtual software development and equipment, and the salaries of physicians, IT support, and training [63]. Some countries, such as the USA, China, and Australia, have already invested in telemedicine and are achieving favourable results.
6.1.2 Advocacy and Policies
Current policies limit the use of telemedicine and act as a barrier to its adoption. At present, most health insurance does not cover treatment through telemedicine, so patients are not reimbursed [64]. In addition, advocacy groups, such as physician advocacy bodies, patient groups, and telemedicine associations, contribute to the low acceptance of treatment through telemedicine and virtual platforms.
6.1.3 Data Access and Security
For telemedicine to achieve favourable outcomes, data privacy and security are critical [65]. Telemedicine must ensure that data are safeguarded, that access is limited to authorized persons, and that the data are well protected. Moreover, implementing telemedicine and virtual platforms involves acquiring data through digital means and exchanging sensitive health information between patients and clinicians, which may pose security risks and disclose confidential personal data [66]. Appropriate guidelines and measures are needed to ensure patient data security, privacy, and confidentiality when adopting telemedicine.
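One safeguard named above, limiting readable access to authorized key holders, can be sketched with symmetric encryption using the Python cryptography package's Fernet recipe; this is an illustration only, since real deployments also need key management, access control, and auditing.

```python
from cryptography.fernet import Fernet

# Sketch: encrypting a patient record at rest so only key holders can read it.
key = Fernet.generate_key()      # kept in a secure key store, not with the data
cipher = Fernet(key)

record = b"patient=12345; dx=suspected COVID-19; spo2=91"
token = cipher.encrypt(record)   # what the telemedicine platform stores

print(cipher.decrypt(token))     # only an authorized service with the key can do this
```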
6.1.4 Assimilation of Workflow
Integrating virtual telemedicine with contemporary medical practice in the hospital may result in management issues for clinicians, which in turn leads to less utilization of telehealth by patients and medical practitioners [67]. Hence, when implementing virtual platforms, suitable functionalities must be delineated to reduce workload [68], and the chosen telehealth platform must permit flexible usage for health practitioners providing medical care.
6.1.5 Technical Elements
Inadequate technological infrastructure is an important barrier to telemedicine, especially in rural areas [69]. Proper infrastructure (the basic elements and provisions of telecommunications systems), training, compatibility (the compliance level and consonance of the telemedicine technology), and constant support must be provided by the service-providing organizations.
6.2 Clinician Barriers
Clinicians face issues of legal concern, licensure and permissions, willingness to adopt telehealth, and training, as discussed below.
6.2.1 Medico-legal Concerns
Patients may not accept the switch to telemedicine because the clinician is not present in real time, and even when subjects do agree to telehealth, legal concerns may arise in the case of medical errors. Hence, medico-legal concerns need to be settled before medical practitioners adopt telemedicine [70].
6.2.2 Licensure Requisite
Licensing requirements generally demand that the attending clinician be licensed, at the time of service, in the region where the patient resides. This prerequisite licensure constrains medical practitioners from opting for telemedicine [71], so state licensure is considered a barrier to expanding it. Even temporarily suspending these critical licensure restrictions in order to implement telemedicine during the pandemic is difficult. Hence, licensing policies must be modified to promote telehealth for clinicians without geographical boundaries.
6.2.3 Training
Physicians who attend to patients by means of telemedicine and virtual solutions must be trained appropriately. Training sessions should therefore be provided and made available when needed, either virtually or physically [66, 72]. Likewise, using telemedicine may be difficult for some of the public, who require proper training in digital technologies and their application.
6.2.4 Willingness
The limited use of telemedicine and virtual care is partly ascribed to the reluctance of medical practitioners [73]. Telemedicine is an unconventional and complex approach, so clinicians must acquire remote consulting skills. Physicians' acceptance of telemedicine depends on their perception of virtual care as safe and effective [6]. In addition, several hospitals are unwilling to opt for telemedicine because their patients are not acquainted with virtual technologies, and patients must further consent to the use of audio and video before telehealth and virtual care can be used to provide treatment.
6.3 Patient Barriers
From the patient's point of view, several factors limit the use of telemedicine: age, level of education, computer literacy, Wi-Fi bandwidth, unawareness of services, social factors, and privacy issues.
6.3.1 Lack of Awareness
One critical barrier that currently persists is the deficit of education and awareness about the efficacy and safety of telehealth in the present circumstances.
Some of the public are unaware of the option of telemedicine, and some are unable to access visits by means of telemedicine [74]. Digital health literacy has been found to be an important hurdle to telemedicine and virtual care during COVID-19.
6.3.2 Patient Preferences
In telemedicine, there is no physical contact between the patient and health professionals, yet the physical care offered by clinicians and nursing staff is vital in managing certain conditions. Patients therefore prefer to visit their own providers in person rather than others with whom they have no established relationship. In addition, the lack of proximity between patients and health providers may result in improper evaluation of cardiopulmonary vitals and of abdominal and other visual physical examinations, although advanced technologies such as electronic stethoscopes, smart applications, and wearable devices (watches, glasses, and bands) help measure and monitor individual patient health [75].
6.3.3 Social/Cultural Factors
The literature suggests that cultural and social factors play an important role in adopting the telemedicine approach. Culture may affect virtual care and telemedicine through data privacy and information policy. Before adopting telemedicine, authorities must take note of a nation's culture and of the policies that regulate telemedicine, and a cooperative relationship must be maintained between them [76].
6.3.4 Technology Availability
Technically, virtual care relies on internet speed, broadband access, smartphone applications, and basic digital skills. Uncoordinated and poor technology adoption, mostly in developing countries, is a major barrier to adopting contemporary virtual software platforms and advancements such as telemedicine. The literature suggests that the quality of network communications is an important element affecting telemedicine: inferior video quality may damage rapport and fail to engage patient and clinician, decreasing patient satisfaction. Hence, suitable bandwidth is required to carry voice, image data, and video. This barrier mainly affects patients located in rural areas who have weak access to internet services, so improving internet speed is necessary for the effective implementation of telemedicine [77, 78].
6.3.5 Confidentiality and Privacy
Data privacy and security matters are a ceaseless source of trouble for telemedicine, given its ample use of wireless networks and new communication technologies. Patients'
medical records contain very sensitive data that must not be disclosed to unauthorized persons, in order to safeguard the patient's integrity and confidentiality. At the same time, the information must be readily accessible to accredited individuals upon authentication. Despite the control measures employed, the information can still be exploited by a security or privacy threat, causing enormous damage. Thus, the threat of privacy and security breaches is one of the critical barriers in telemedicine and requires constant monitoring [79].
7 Conclusions
Our work has explained the effective role of telehealth services and other virtual platforms in minimizing COVID-19 and its outbreak. Our results suggest that telemedicine has a large impact on COVID-19 by supporting social distancing, reducing direct visits, and minimizing crowding. As a virtual tool, it is used for virtual meetings and for monitoring COVID-19 patients, and it even helps diagnose COVID-19 symptoms. It also reduces the burden on healthcare professionals and offers solutions to different problems. A limitation of our work is that we collected data only from databases such as PubMed and Google Scholar, so data from other sources may have been missed; we also included only articles falling within the scope of healthcare applications of telemedicine. In the near future, telemedicine should be implemented as a proactive measure to enhance medical facilities, not merely as a temporary option during emergencies. Telehealth services may be integrated into intensive care units in the coming days to monitor critically ill patients. We expect telemedicine to become amalgamated into daily care, as it is a safe, convenient, and effective tool for providing medical facilities in pandemic times.
References 1. L. Van Der Hoek et al., Identification of a new human coronavirus, Nat. Med. 10(4) (2004) 2. M. Lipsitch, D.L. Swerdlow, L. Finelli, Defining the epidemiology of Covid-19—studies needed. N. Engl. J. Med. 382(13) (2020) 3. C. Huang et al., Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 395(10223) (2020) 4. F. Jiang, L. Deng, L. Zhang, Y. Cai, C.W. Cheung, Z. Xia, Review of the clinical characteristics of coronavirus disease 2019 (COVID-19). J. Gen. Internal Med. 35(5) (2020) 5. World Health Organization, Novel Coronavirus (2019-nCoV) WHO Bulletin Situation Report1. World Health Organ 10 (2020) 6. A.C. Smith et al., Telehealth for global emergencies: implications for coronavirus disease 2019 (COVID-19). J. Telemed. Telecare 26(5) (2020) 7. J.E. Hollander, B.G. Carr, Virtually perfect? Telemedicine for Covid-19. N. Engl. J. Med. 382(18) (2020)
8. T.J. Papadimos et al., Ethics of outbreaks position statement. Part 2: Family-centered care. Crit. Care Med. 46(11) (2018) 9. W. Li et al., Progression of mental health services during the COVID-19 outbreak in China. Int. J. Biol. Sci. 16(10) (2020) 10. L. Kang et al., The mental health of medical workers in Wuhan, China dealing with the 2019 novel coronavirus. Lancet Psychiatry 7(3) (2020) 11. V. Chauhan et al., Novel coronavirus (COVID-19): leveraging telemedicine to optimize care while minimizing exposures and viral transmission. J. Emerg. Trauma Shock 13(1) (2020) 12. R.S. Wax, M.D. Christian, Practical recommendations for critical care and anesthesiology teams caring for novel coronavirus (2019-nCoV) patients. Can. J. Anesth. 67(5) (2020) 13. X. Zhou et al., The role of telehealth in reducing the mental health burden from COVID-19. Telemed. e-Health 26(4) (2020) 14. R. Ohannessian, Telemedicine: potential applications in epidemic situations. Eur. Res. Telemed. 4(3) (2015) 15. C.S. Kruse, N. Krowski, B. Rodriguez, L. Tran, J. Vela, M. Brooks, Telehealth and patient satisfaction: a systematic review and narrative analysis. BMJ Open 7(8) (2017) 16. E.R. Dorsey, E.J. Topol, State of telehealth. N. Engl. J. Med. 375(2) (2016) 17. World Health Organization, 2010 opportunities and developments report on the second global survey on eHealth Global Observatory for eHealth series—Volume 2 TELEMEDICINE in Member States WHO Library Cataloguing-in-Publication Data. World Health Organ. 2 (2010) 18. N.K. Bradford, L.J. Caffery, A.C. Smith, Telehealth services in rural and remote Australia: a systematic review of models of care and factors influencing success and sustainability. Rural Remote Health 16(4) (2016) 19. J. Valle, T. Godby, D.P. Paul, H. Smith, A. Coustasse, Use of smartphones for clinical and medical education. Health Care Manag. (Frederick) 36(3) (2017) 20. A. Jahanshir, E. Karimialavijeh, H.S. Motahar Vahedi, M. Momeni, Smartphones and medical applications in the emergency department daily practice. Arch. Acad. Emerg. Med. 7(1) (2019) 21. V.A. Canady, COVID-19 outbreak represents a new way of mental health service delivery. Ment. Health Wkly. 30(12) (2020) 22. T. Greenhalgh, J. Wherton, S. Shaw, C. Morrison, Video consultations for covid-19. BMJ 368 (2020) 23. B.L. Charles, Telemedicine can lower costs and improve access. Healthc. Financ. Manag. 54(4) (2000) 24. A. Mehrotra, A.B. Jena, A.B. Busch, J. Souza, L. Uscher-Pines, B.E. Landon, Utilization of telemedicine among rural Medicare beneficiaries. JAMA—J. Am. Med. Assoc. 315(18) (2016) 25. H.S. Sauers-Ford et al., Acceptability, usability, and effectiveness: a qualitative study evaluating a pediatric telemedicine program. Acad. Emerg. Med. 26(9) (2019) 26. J. Portnoy, M. Waller, T. Elliott, Telemedicine in the Era of COVID-19. J. Allergy Clin. Immunol.: Pract. 8(5) (2020) 27. A.M. Morenz, S. Wescott, A. Mostaghimi, T.D. Sequist, M. Tobey, Evaluation of barriers to telehealth programs and dermatological care for American Indian individuals in rural communities. JAMA Dermatol. 155(8) (2019) 28. T. Greenhalgh, G.C.H. Koh, J. Car, Covid-19: a remote assessment in primary care. BMJ 368 (2020) 29. A.A. Alqudah, M. Al-Emran, K. Shaalan, Technology acceptance in healthcare: a systematic review. Appl. Sci. 11(22), 1–40 (2021) 30. T. Greenhalgh et al., Virtual online consultations: advantages and limitations (VOCAL) study. BMJ Open 6(1), e009388 (2016) 31. 
Eurosurveillance Editorial Team, Latest updates on COVID-19 from the European Centre for Disease Prevention and Control. Euro Surveill. 25(6) (2020) 32. R. Ohannessian, T.A. Duong, A. Odone, Global telemedicine implementation and integration within health systems to fight the COVID-19 pandemic: a call to action. JMIR Public Health Surveill. 6(2) (2020)
33. A.R. Tuite, I.I. Bogoch, R. Sherbo, A. Watts, D. Fisman, K. Khan, Estimation of coronavirus disease 2019 (COVID-19) burden and potential for international dissemination of infection from Iran. Ann. Internal Med. 172(10) (2020) 34. B. Udugama et al., Diagnosing COVID-19: the disease and tools for detection. ACS Nano 14(4) (2020) 35. Z.S.Y. Wong, J. Zhou, Q. Zhang, Artificial intelligence for infectious disease big data analytics. Infect. Dis. Health 24(1) (2019) 36. Flu & Ebola Map | Virus & Contagious Disease Surveillance. Accessed 10 Jan 2022 37. Surveillance Outbreak Response Management and Analysis System (SORMAS). Accessed 1 Jan 2022 38. K. Sun, J. Chen, C. Viboud, Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. Lancet Digit. Heal. 2(4) (2020) 39. L. Qin et al., Prediction of number of cases of 2019 novel coronavirus (COVID-19) using social media search index. Int. J. Environ. Res. Public Health 17(7) (2020) 40. I.I. Bogoch, A. Watts, A. Thomas-Bachli, C. Huber, M.U.G. Kraemer, K. Khan, Potential for global spread of a novel coronavirus from China. J. Travel Med. 27(2) (2020) 41. B. McCall, COVID-19 and artificial intelligence: protecting health-care workers and curbing the spread. Lancet Digit. Health 2(4) (2020) 42. X. Zhang, H. Saleh, E.M.G. Younis, R. Sahal, A.A. Ali, Predicting coronavirus pandemic in real-time using machine learning and big data streaming system. Complexity 2020, 1–10 (2020) 43. M. Zivkovic et al., COVID-19 cases prediction by using hybrid machine learning and beetle antennae search approach. Sustain. Cities Soc. 66 (2021) 44. A.A. AlQudah, M. Al-Emran, K. Shaalan, Medical data integration using HL7 standards for patient’s early identification. PLoS One 16(12), 3–8 (2021) 45. Check your symptoms and find the right care | Buoy. Accessed 10 Jan 2022 46. Lark Health: Digital Care Management & Prevention Platform. Accessed 10 Jan 202 47. A. Yan, Y. Zou, D.A. Mirchandani, How hospitals in mainland China responded to the outbreak of COVID-19 using information technology-enabled services: an analysis of hospital news webpages. J. Am. Med. Inform. Assoc. 27(7) (2020) 48. G. Al-Emran, M. Al-Kabi, M.N. Marques, A survey of using machine learning algorithms during the COVID-19 pandemic, in Emerging Technologies During the Era of COVID-19 Pandemic, ed. by G. Arpaci, I. Al-Emran, M.A. Al-Sharafi, M. Marques (Springer, Cham, 2021), pp. 1–8 49. I. Arpaci, S. Huang, M. Al-Emran, M.N. Al-Kabi, M. Peng, Predicting the COVID-19 infection with fourteen clinical features using machine learning classification algorithms. Multimed. Tools Appl. 80(8), 11943–11957 (2021) 50. M. Al-Emran, J.M. Ehrenfeld, Breaking out of the box: wearable technology applications for detecting the spread of COVID-19. J. Med. Syst. 45(2), 19–20 (2021) 51. A.S.R. Srinivasa Rao, J.A. Vazquez, Identification of COVID-19 can be quicker through artificial intelligence framework using a mobile phone-based survey when cities and towns are under quarantine. Infect. Control Hosp. Epidemiol. 41(7) (2020) 52. A. Zahedi, A. Salehi-Amiri, N.R. Smith, M. Hajiaghaei-Keshteli, Utilizing IoT to design a relief supply chain network for the SARS-COV-2 pandemic. Appl. Soft Comput. 104 (2021) 53. G. D’Angelo, F. Palmieri, Enhancing COVID-19 tracking apps with human activity recognition using a deep convolutional neural network and HAR-images. Neural Comput. Appl. (2021) 54. S. 
Chen et al., Fangcang shelter hospitals: a novel concept for responding to public health emergencies. Lancet 395(10232) (2020) 55. G. Yao, X. Zhang, H. Wang, J. Li, J. Tian, L. Wang, Practice and thinking of the informationized cabin hospitals during the novel coronavirus pneumonia period. Chin. J. Hosp. Adm. 36, 8 (2020) 56. X. Song, X. Liu, C. Wang, The role of telemedicine during the COVID-19 epidemic in China— experience from Shandong province. Crit. Care 24(1) (2020)
318
L. N. Gunturu et al.
57. Z. Hong et al., Telemedicine during the COVID-19 pandemic: experiences from Western China. J. Med. Internet Res. 22(5) (2020) 58. M. Mourad, S. Bousleiman, R. Wapner, C. Gyamfi-Bannerman, Conducting research during the COVID-19 pandemic. Semin. Perinatol. 44(7) (2020) 59. G.S. Kienle et al., Addressing COVID-19 challenges in a randomised controlled trial on exercise interventions in a high-risk population. BMC Geriatr. 21(1) (2021) 60. A.J. Bokolo, Exploring the adoption of telemedicine and virtual software for care of outpatients during and after COVID-19 pandemic. Ir. J. Med. Sci. 190(1), 1–10 (2021) 61. S. Koch, Home telehealth—current state and future trends. Int. J. Med. Inform. 75(8) (2006) 62. S.R. Isabalija, V. Mbarika, G.M. Kituyi, A framework for sustainable implementation of Emedicine in transitioning countries Int. J. Telemed. Appl. 2013 (2013) 63. A. Doshi, Y. Platt, J.R. Dressen, B.K. Mathews, J.C. Siy, Keep calm and log on: telemedicine for COVID-19 pandemic response. J. Hosp. Med. 15(5) (2020) 64. M. Sodhi, Telehealth policies impacting federally qualified health centers in face of COVID-19. J. Rural Health 37(1), 158–160 (2021) 65. H. Cho, D. Ippolito, Y.W. Yu, Contact Tracing Mobile Apps for COVID-19: Privacy Considerations and Related Trade-Offs (2020) 66. J. Wosik et al., Telehealth transformation: COVID-19 and the rise of virtual care. J. Am. Med. Inform. Assoc. 27(6) (2020) 67. J. Torous, K.J. Myrick, N. Rauseo-Ricupero, J. Firth, Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Mental Health 7(3) (2020) 68. E. Whaibeh, H. Mahmoud, H. Naal, Telemental health in the context of a pandemic: the COVID-19 experience. Curr. Treat. Opt. Psychiatry 7(2) (2020) 69. R.T. Goins, U. Kategile, K.C. Dudley, Telemedicine rural elderly, and policy issues. J. Aging Soc. Policy 13(4) (2001) 70. K.I. Adenuga, Telemedicine system: service adoption and implementation issues in Nigeria. Indian J. Sci. Technol. 13(12), 1321–1327 (2020) 71. J.H. Wright, R. Caudill, Remote treatment delivery in response to the COVID-19 pandemic. Psychother. Psychosom. 89(3) (2020) 72. J. Gutierrez, E. Kuperman, P.J. Kaboli, Using telehealth as a tool for rural hospitals in the COVID-19 pandemic response. J. Rural Health 37(1), 161–164 (2021) 73. S. Banskota, M. Healy, E.M. Goldberg, 15 smartphone apps for older adults to use while in isolation during the Covid-19 pandemic. West. J. Emerg. Med. 21(3) 2020. 74. A. Kichloo et al., Telemedicine, the current COVID-19 pandemic and the future: a narrative review and perspectives moving forward in the USA. Fam. Med. Commun. Health 8(3) (2020) 75. A.M. Ansary, J.N. Martinez, J.D. Scott, The virtual physical exam in the 21st century. J. Telemed. Telecare 27(6) (2021) 76. P. Mansouri-Rad, M.A. Mahmood, S.E. Thompson, K. Putnam, Culture matters: factors affecting the adoption of telemedicine, in Proceedings of the Annual Hawaii International Conference on System Sciences (2013) 77. J. Humphreys et al., Rapid implementation of inpatient telepalliative medicine consultations during COVID-19 pandemic. J. Pain Symptom Manag. 60(1) (2020) 78. K. Okereafor, O. Adebola, R. Djehaiche, M. El, B. El, Exploring the Potentials of Telemedicine and Other Non-contact Electronic Health Technologies in Controlling the Spread of the Novel Coronavirus Disease (COVID-19) (2020) 79. S. Das, A. Mukhopadhyay, Security and privacy challenges in telemedicine, CSI Commun. (2011)
Robotics and AI in Healthcare: A Systematic Review Saif AlShamsi, Laila AlSuwaidi, and Khaled Shaalan
Abstract The number of people aged 60 and above is increasing worldwide [37]. On October 12, 1999, the world population reached 6 billion, and it is estimated to climb to 9 billion by 2037; even with a growth rate that declines each year, a large population of older adults is expected by then. With the cost of caregivers and medication rising year by year, it is becoming harder to support their longevity. The research papers reviewed here cover the latest technologies in healthcare that assist older adults' care. The aim is to present the data and findings of other research papers in order to identify the problems faced in this area of research, how much it has advanced the state of the art during the past years, and where it is going in the future. This research paper uses the PRISMA guidelines for systematically reviewing the studies. The selected studies focus on novel technologies, new findings, current trends, benefits, and contributions in elderly care using the latest technologies from every aspect. The conducted review covers the period from January 2018 to January 2022. The research outcomes were encouraging, especially for the studies that assisted older people in care. This area needs continuous and intensive research across various databases to achieve full coverage of the subject. Keywords Automation · Robotics · Healthcare · Elderly healthcare · Artificial intelligence · Smart home
S. AlShamsi · L. AlSuwaidi · K. Shaalan (B) Faculty of Engineering and IT, The British University in Dubai, Dubai, United Arab Emirates e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_18
1 Introduction Artificial intelligence (AI) applications play a significant role in healthcare [1, 7, 33]. Robotics and AI in healthcare help decrease the cost of medication and disease prevention for senior people. These advances let older adults benefit by making their lives a little easier and helping them rely on themselves when they need treatment, administration of medication, or daily check-ups [38]. We searched the literature on automation and robotics. The study aims to gather quality articles from several databases, such as Sage, ACM Digital Library, Science Direct, and IEEE Xplore, using their supported search engines to retrieve research papers relevant to this field and to extract data and findings accordingly. The collected studies are then shortlisted against criteria that primarily focus on technical terms used in typical targeted research papers and are designed to find relevant work. The aim is to present the data and findings of other research papers, identify the problems faced in this area of research, evaluate how much it has advanced during the past years, and determine where it is going.
2 Problem Identification and Analysis With the rate of reproduction lowering over time and overall healthcare advancing, the total population is growing; however, the working population is shrinking while the number of older people is increasing. This leads to an overall increase in elderly housing and in the number of in-house nurses needed to maintain good health. Implementing robotics with automation and closed-loop systems will raise living standards and increase older adults' capability for independent living. Removing the human from the loop (replacing the open-loop, human-operated process with a closed-loop automated one) will make older people more independent and self-reliant for their everyday needs and activities. The PRISMA model recommends following the PICO design tool. Table 1 applies the PICO design tool to obtain focused clinical questions that support a systematic review [36].

Table 1 PICO design tool for systematic review questions

P  Population or problem     Older adults—robotics and AI technology advances in healthcare
I  Intervention or exposure  Finding main research themes and research outcomes in the collected studies
C  Comparison                Finding current trends in elderly technology research
O  Outcome                   Benefits from finding these new technologies
S  Study design              Data analysis and overall evaluation
Table 2 Research questions

The systematic review study will answer the following research questions:
RQ1. What are the main research themes and outcomes seen in the collected studies?
RQ2. What are the current trends in robotics and AI technology used in elderly healthcare and the primary research outcomes?
RQ3. What are the main databases used in robotics and AI in healthcare?
2.1 Research Questions After identifying the areas of focus, we studied publications about innovative home healthcare and robotic technologies, following the systematic review steps of the PRISMA model, to answer the research questions listed in Table 2.
3 Teamwork Collaboration Following the PRISMA model, we targeted finding and analyzing 25 research papers as a team. The strategy was to follow the given guideline so as to stay on the right pathway of a systematic review. Through online meetings and phone calls, we then unified the method for reviewing all study papers, basing the reviewing strategies on the requirements and specifications given by the course professor. The team was formed on January 11 with the authors of this paper as members.
4 Method This study follows a systematic review approach, adopting the guidelines of [20] and other systematic reviews [5, 6, 8–11, 18]. This report aims to present up-to-date data and findings about robotics and AI in healthcare, identify the problems faced in this area, assess how much it has advanced during the past years, and indicate where it is going.
4.1 Inclusion and Exclusion Criteria See Table 3.
Table 3 Inclusion and exclusion criteria

Inclusion:
• Papers published after January 2018
• Written in English
• Peer-reviewed publications
• Conference papers
• Papers focused on the latest technologies in artificial intelligence and advanced software solutions only for elderly care settings, self-help and independent living, and assistive technologies
• Studies conducted in elderly homes, smart homes, investigational lab settings, nursing homes, and rehabilitation locations
• Seniors' care, living alone or self-care environments, and new assistive technologies used
• Only selected technologies for elderly self-care in home settings, smart homes, trial settings, nursing homes, and recovery settings

Exclusion:
• Research papers published before January 2018, whether in English or other languages
• Systematic review papers
• Literature review papers
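To make the screening criteria in Table 3 concrete, the following minimal sketch applies them to candidate records. This is our illustration, not the authors' tooling; the field names (`published`, `language`, `type`, `venue`, `topic_elderly_care`) are hypothetical.

```python
# Illustrative application of the Table 3 inclusion/exclusion criteria.
from datetime import date

CUTOFF = date(2018, 1, 1)
EXCLUDED_TYPES = {"systematic review", "literature review"}
INCLUDED_VENUES = {"journal", "conference"}

def passes_criteria(record: dict) -> bool:
    """Return True if a record satisfies the inclusion criteria."""
    if record["published"] < CUTOFF:            # after January 2018 only
        return False
    if record["language"] != "English":         # English-language papers only
        return False
    if record["type"] in EXCLUDED_TYPES:        # drop review papers
        return False
    if record["venue"] not in INCLUDED_VENUES:  # peer-reviewed journal or conference
        return False
    return record["topic_elderly_care"]         # elderly-care focus required

records = [
    {"published": date(2019, 5, 1), "language": "English",
     "type": "article", "venue": "journal", "topic_elderly_care": True},
    {"published": date(2016, 3, 1), "language": "English",
     "type": "article", "venue": "journal", "topic_elderly_care": True},
]
shortlist = [r for r in records if passes_criteria(r)]
print(len(shortlist))  # -> 1 (the 2016 paper is excluded)
```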
4.2 Data Sources and Search Strategies The methods applied in this research include a systematic review of several papers matching keywords on aging, older people, robotics, automation, healthcare 4.0, telemedicine, healthcare, artificial intelligence (AI), service robots, and smart homes, as shown in Table 4. The keyword strings were combined as ("Aging" OR "Aged" OR "Elderly People") AND ("self-care" OR "Independent Living") AND ("Self-care Devices" OR "Telemedicine" OR "Assistive Living" OR "Service Robot"). The primary datasets analyzed are based on research study objectives, technologies used, database, research paper publication year, and application type (Table 5).

Table 4 Keyword strings

Keywords      Strings
Older people  Elderly, aging, aged, elders, and senior citizen
Homecare      Nursing home—home health care—independent living—telecommunication
Technology    Smart home technology—telemedicine—assistive technology—self-help device—artificial intelligence in eldercare—robotics

Table 5 Search keywords

("Aging" OR "Aged" OR "Elderly People") AND ("self-care" OR "Independent Living") AND ("Self-care Devices" OR "Telemedicine" OR "Assistive Living" OR "Service Robot")
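A small sketch of how the Table 5 boolean query can be assembled from the keyword groups of Table 4, so that the same string can be pasted into each database's search engine (our illustration of the string's structure, not a tool the authors describe):

```python
# Build the boolean search string from the keyword groups.
population = ["Aging", "Aged", "Elderly People"]
setting = ["self-care", "Independent Living"]
technology = ["Self-care Devices", "Telemedicine", "Assistive Living", "Service Robot"]

def or_group(terms):
    """Join quoted terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_group(g) for g in (population, setting, technology))
print(query)
# ("Aging" OR "Aged" OR "Elderly People") AND ("self-care" OR
# "Independent Living") AND ("Self-care Devices" OR "Telemedicine" OR
# "Assistive Living" OR "Service Robot")
```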
This systematic review was conducted from January to February 2022. Figure 6 shows the distribution of studies per database: Science Direct contributed the most with eight research papers, followed by IEEE Xplore with seven, and Sage and ACM Digital Library with one each. The research and refinement stages of this study are based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [25]. We review several papers interested in the ability of robotics to change a system from a human interface (open loop) into an automated procedure (closed loop). We exclude all topics that do not involve the elderly, and we examine more recent papers, from 2018 to 2022, because of the advancement of technologies in the field of artificial intelligence over the last few years, as shown in Fig. 1, the PRISMA flow diagram, which presents the search results, study selection, and inclusion process [29]. The search retrieved 1017 papers using the stated keywords. Eight hundred thirty-one articles were removed for different reasons, such as being books, being published before January 2018, or being duplicates. Thus, 186 papers remained after title and abstract screening. The writers confirmed the inclusion and exclusion criteria for each study and subsequently selected sixty-six research articles after a full-text review. Finally, seventeen research papers were included in the analysis process after checking the papers' quality and availability in the selected databases. Figure 2 shows the publication years of the gathered research papers: seven papers were from 2018, three from 2019, five from 2020, and two from 2021.
4.3 Quality Assessment One of the significant factors that needs to be studied is quality assessment [3]. A quality assessment checklist with eight criteria was prepared and used to evaluate the quality of the research papers collected for further analysis (n = 17). The checklist is explained in Table 6. Its main purpose was not to criticize any writer's work but to guide the selection of research papers that suit the content of the systematic review. Each question was scored on a 3-point scale (yes = 1 point, no = 0 points, partially = 0.5 points). Therefore, each study can score between 0 and 8; the higher a paper's total score, the more closely it relates to the research questions. Table 7 shows the quality assessment results for all 17 studies. Since all the studies passed the quality assessment, they are qualified for further analysis. The studies were selected from January 2018 to January 2022, and all papers were written in English. Eligibility criteria thus covered all the aspects that led to selecting the highest-quality papers in the previously selected databases. The ranking was based on the SJR (Scientific Journal Rankings—SCImago): in addition to the year of publication, research language, and citation count, journal studies had to be Q1 or Q2 on the SCImago website; otherwise they were excluded from the selection.
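The scoring rule just described is simple enough to state in a few lines of code. This sketch reproduces the arithmetic behind Table 7 (yes = 1, partially = 0.5, no = 0, summed over eight questions and expressed as a percentage of the maximum score of 8); it is an illustration of the described procedure, not the authors' actual spreadsheet.

```python
# 3-point quality scoring over the eight checklist questions of Table 6.
SCORE = {"yes": 1.0, "partially": 0.5, "no": 0.0}

def quality_score(answers: list[str]) -> tuple[float, int]:
    """Return (total points, rounded percentage) for eight checklist answers."""
    assert len(answers) == 8
    total = sum(SCORE[a] for a in answers)
    return total, int(total / 8 * 100 + 0.5)  # round half up, as in Table 7

# Example reproducing study S1 from Table 7:
print(quality_score(["yes", "partially", "yes", "no",
                     "yes", "yes", "no", "partially"]))
# -> (5.0, 63)
```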
Fig. 1 PRISMA flow diagram for the robotics and AI in healthcare systematic review: identification (1017 records from Emerald, Taylor & Francis, Sage, ACM Digital Library, and Science Direct; 831 removed before screening as duplicates, books, or pre-2018 publications), screening (186 records after title/abstract screening; 66 reports after full-text study; 7 excluded during eligibility assessment), and inclusion (17 studies: Science Direct 8, IEEE Xplore 7, Sage 1, ACM Digital Library 1)
Figure 3 shows that most of the papers were Q1 (n = 11) and only one paper was Q2 (n = 1). Furthermore, some papers have no quartile stated because they are conference papers without quartile information on their venue webpage (conferences, n = 5). The selection of each record was based on the opinions of two reviewers. This method ensured that the inclusion criteria for the review were implemented, as the authors worked as a group while selecting the articles. No software tool was used; the work was done on a shared Excel sheet on Google Drive to assist the group's paper selection process.
Fig. 2 Publication year of research papers gathered
Table 6 Quality assessment checklist

1. Are the research objectives clearly specified?
2. Is the study focused on the geriatric population and homecare?
3. Is the novel technology considered by the study specified?
4. Are the findings of the research adequately detailed?
5. Does the study explain the limitations found while conducting the study?
6. Do the results add to the future literature?
7. Are the figures, tables, or techniques used to analyze the data described?
8. Does the study add to the writer's knowledge or understanding?
The SCImago Journal Rank (SJR) indicator is shown in Fig. 4. It measures a journal's impact, influence, or prestige: it indicates the average number of weighted citations received in the selected year, divided by the number of papers published in the journal in the three previous years. The higher the SJR value, the better the choice of journal.
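Read literally, the description above corresponds to the ratio below. This is a simplified reading; the actual SCImago computation is iterative and prestige-weighted (PageRank-style), so the formula is only a sketch of the idea.

```latex
\[
  \mathrm{SJR}_j(t) \;\approx\;
  \frac{\sum_{i} w_{i}\, c_{i \to j}(t)}
       {P_j(t-1) + P_j(t-2) + P_j(t-3)}
\]
% c_{i->j}(t): citations from journal i in year t to papers of journal j
% published in the three preceding years, weighted by the citing
% journal's prestige w_i; P_j(y): papers published by journal j in year y.
```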
Table 7 Quality assessment results

S/Q  Q1  Q2   Q3   Q4  Q5  Q6  Q7  Q8   Total  Percentage
S1   1   0.5  1    0   1   1   0   0.5  5      63
S2   1   1    1    1   1   1   1   1    8      100
S3   1   1    1    1   1   1   1   1    8      100
S4   1   1    1    1   1   0   1   1    7      88
S5   1   1    1    1   1   1   1   1    8      100
S6   1   0.5  0.5  1   1   1   1   1    7      88
S7   1   0.5  1    1   1   1   1   1    7.5    94
S8   1   1    1    1   0   1   1   1    7      88
S9   1   1    1    1   1   1   1   1    8      100
S10  1   1    1    1   0   0   1   0.5  5.5    69
S11  1   1    1    1   0   0   1   0.5  5.5    69
S12  1   0.5  1    1   0   0   1   0.5  5      63
S13  1   1    1    1   0   0   1   0.5  5.5    69
S14  1   1    1    1   1   1   1   1    8      100
S15  1   1    1    1   1   1   1   1    8      100
S16  1   1    1    1   1   1   1   1    8      100
S17  1   1    1    1   1   1   1   1    8      100
Fig. 3 Scientific journal rankings—SCImago (SJR)
4.4 Data Coding and Analysis The characteristics associated with the quality of the research method were coded, including (a) keywords, (b) study objectives, (c) novel technology and innovation, (d) highlights, (e) evidence, (f) findings or conclusions, (g) limitations, (h) future research, and (i) research outcomes (e.g., positive, neutral, and negative).
Fig. 4 SCImago journal rank indicator
Throughout the data analysis phase, the keywords, highlights, and evidence were excluded from the synthesis, as the studies did not describe them clearly. The analysis of the gathered studies was carried out by the authors of this paper using a shared Excel sheet on Google Drive, which assisted in performing a deep, systematic analysis of each paper.
5 Results The findings of this systematic review are reported according to the three research questions, based on the seventeen published research papers about robotics and AI in healthcare from January 2018 to January 2022. RQ1. What are the main research themes and outcomes seen in the collected studies? After analyzing the papers selected for this systematic review, we noticed three significant themes in robotics and automation. As shown in Fig. 5, these are movement and self-care, social services and movement, and decision making. Most of the papers tackled the safety problem, which works by monitoring and making decisions based on information gathered from sensors as inputs, in systems implemented on mobiles, in homes, and in fully functional robots that monitor movement and try to improve users' lives with features like fall prevention and abnormality detection. The second theme concerns how robots can help with the mobility of the elderly. The papers in this theme discussed smart wheelchairs and smart showers, and we noticed that these might help make the lives of older people easier without the need for a day-care helper to aid them.
Fig. 5 Field of robotics and automation
Fig. 6 Database name and number of research papers
The least discussed theme was how robots can help in elders' social life. Those papers discussed how users can benefit from having the robot by their side in social interactions: making some decisions for them, carrying baggage, determining the user's degree of capability, and adjusting the degree of service accordingly.
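As a flavor of the sensor-driven decision rules that the safety-monitoring theme relies on, the following is a minimal threshold-based fall detector over accelerometer magnitude. It is our own illustration, not a reconstruction of any reviewed system; the two thresholds are assumed values.

```python
# Minimal fall-detection rule: a free-fall dip followed by an impact spike.
import math

FREE_FALL_G = 0.4   # near-weightlessness during the fall (assumed threshold)
IMPACT_G = 2.5      # impact spike on landing (assumed threshold)

def detect_fall(samples: list[tuple[float, float, float]]) -> bool:
    """Flag a fall when a free-fall dip is followed shortly by an impact."""
    mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in samples]  # in g
    for i, m in enumerate(mags):
        if m < FREE_FALL_G and any(n > IMPACT_G for n in mags[i:i + 20]):
            return True
    return False

# A dip to ~0.1 g followed by a 3 g spike triggers the alarm:
print(detect_fall([(0, 0, 1.0), (0, 0, 0.1), (0, 0, 3.0)]))  # True
```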
RQ2. What are the current trends in robotics and AI technology used in elderly healthcare and the main research outcomes? Several research studies were found among the current trends in robotics and AI technology used in elderly healthcare. Table 8 summarizes the main study objectives and the novel technologies and innovations used in each research paper. Starting with [16], the new technology was Wheelchair Mounted Robotic Arms (WMRAs), designed to help individuals with severe upper limb impairments perform daily activities more often. The aim was to measure the use and outcomes of WMRAs and document the results reported by caregivers. The initial findings suggest that caregivers noticed more significant benefits than burden; so far, more quantitative data is needed to prove the benefit of the WMRAs. The paper [13] targets letting older adults interact with a robot called "Hobbit" in their homes, which aims to enable self-independent living for elderly people. Hobbit is the first robot in a private house to provide a combination of manipulation capabilities, autonomous navigation, and non-scheduled interaction for an extended period. The results showed that all the participants in 16 private households interacted with Hobbit daily, with most functionality working in the trial; however, the other functions need to be enhanced. Another study [26] uses a novel forecasting approach: a grey model system that observes daily activities and learns the behavior of the monitored person, later enabling the detection of dangerous behavior. The results show that, with minimal sensing and data gathering, the system can gather information accurately enough to evaluate older adults' dependency, help predict their health condition, and detect irregular situations. The study [40] uses a bathing robotic system prototype that aims to support daily living activities in real-life scenarios. The results displayed good performance, and with the elderly we saw high satisfaction and overall effectiveness across modes of operation. The study [12] suggests a newly developed robot that is vital for assessing disease severity and progression in Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI). The robot administers the Syndrom Kurztest (SKT) neuropsychological battery, a short test that measures cognitive decline by assessing memory, attention span, and other related cognitive functions together with the speed of information processing; clinical experts use all these methods. Robots used in healthcare act as teleoperators; the authors propose using robots and agent technology to provide the doctor with slightly more intelligent support than a simple teleoperator system. The robot does not make decisions but, programmed on the proposed agent architecture, can interact with patients and doctors in a changing environment and alert the doctor or suggest strategies when necessary [12]. In the study [21], the authors work with an intelligent medical care robot system that can select the most helpful plan for unhandled situations and advise the physician with its recommendation in different healthcare scenarios. The study aims to enhance the qualities of the telehealth system to serve patients' requirements, operating as human caregivers do. The main goal is to support the independent living of patients at home and monitor their daily health status.
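The grey-model forecasting used in [26] can be illustrated with the classic GM(1,1) construction: accumulate the observed series, fit the whitening equation by least squares, and de-accumulate the prediction. This is a generic textbook sketch, not the exact formulation of [26]; the example series is invented.

```python
# Generic GM(1,1) grey-model forecasting sketch.
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 1) -> np.ndarray:
    """Forecast the next `steps` values of a short positive series."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(n - 1)])   # design matrix for x0(k) = -a*z1(k) + b
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n, n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                      # de-accumulate to the original scale

daily_activity = np.array([5.2, 5.4, 5.9, 6.1, 6.5])  # e.g. hours active per day
print(gm11_forecast(daily_activity, steps=2))
```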
Table 8 Main study objectives and novel technology and innovation compared in the 17 research papers

S1
Study objectives: Measure the use and outcomes of Wheelchair Mounted Robotic Arms (WMRAs) and document the results reported by caregivers.
Novel technology and innovation: WMRAs designed to help individuals with severe upper limb impairments perform daily activities more often.
Findings/conclusions: The initial findings suggest that caregivers noticed more significant benefits than burden; more quantitative data is still needed to prove the benefit of the WMRAs.
Limitations: The number of participants was only five, making it hard to implement and rely on the technology; the impact of WMRAs on caregiver burden seems limited.
Implications for future research: Strengthen the caregiver–user relationship to deliver support and trust; check the source of caregivers' concern in case of breakages.

S2
Study objectives: The Hobbit robot aims to enable self-independent living for older adults in their homes; the paper targets letting older adults interact with the robot in their homes.
Novel technology and innovation: Hobbit is the first robot in a private house to provide a combination of manipulation capabilities, autonomous navigation, and non-scheduled interaction for an extended period.
Findings/conclusions: All the participants interacted with Hobbit daily, with most functionality working in the trial; the other functions need to be enhanced.
Limitations: Functional limitations of the prototype during the two-week test period; some participants were frustrated by communicating with Hobbit via speech, as some verbal interactions are not precise.
Implications for future research: Hobbit will be part of future elderly homecare, and the functional limitations will be addressed in future development and system integration; it is necessary to move the trials to homecare to discover real-world challenges.

S3
Study objectives: A new e-Health monitoring system addressing older adults living alone, to check their daily activities and evaluate their self-reliance based on established scales used by healthcare professionals.
Novel technology and innovation: The grey model system observes daily activities and learns the behavior of the monitored person, later enabling the detection of dangerous behavior; it uses a novel forecasting approach.
Findings/conclusions: With minimal sensing and data gathering, the system can gather information accurately to evaluate older adults' dependency, help predict the health condition, and detect every irregular situation.
Limitations: Current systems lack an adaptive and predictive context-aware monitoring capability that addresses the drawbacks of existing e-health solutions, such as continuous monitoring, the absence of abnormality detection methods, and prediction techniques.
Implications for future research: Extend the number of actions in scenario creation to adjust individual behavior knowledge; plan an experiment with the system in a real-world scenario.

S4
Study objectives: A bathing robotic system prototype that supports daily living activities in real-life scenarios.
Novel technology and innovation: Evaluation of the HUI interface's innovative technologies by investigating and assessing them in clinical studies.
Findings/conclusions: With the researched innovations, applying new sensing, reasoning, and end controls with a developed operation unit, the system can adapt to the audience's abilities; the results displayed good performance, with high satisfaction among the elderly and overall effectiveness across modes of operation.
Limitations: It was challenging to place the sensors; the camera had problems recording the chores of the interactions.
Implications for future research: Not stated.

S5
Study objectives: Suggest a newly developed robot that is vital for assessing disease severity and progression in Alzheimer's disease (AD) and mild cognitive impairment (MCI).
Novel technology and innovation: The Syndrom Kurztest (SKT) robot administers a short test that measures cognitive decline, assessing memory, attention span, and other related cognitive functions with the speed of information processing; clinical experts use all these methods.
Findings/conclusions: Robots used in healthcare act as teleoperators; the authors propose robots and agent technology to provide the doctor with slightly more intelligent support than a simple teleoperator system. The robot does not make decisions but, programmed on the proposed agent architecture, can interact with patients and doctors in a changing environment and alert the doctor or suggest strategies when necessary.
Limitations: The study does not explain certain aspects in detail, such as the cognitive game scenarios; the caregiver's role was not studied well while preparing specific scenarios, and most of the work removes the caregiver from the loop of interactions in providing care directly to receivers.
Implications for future research: The examination checked devotion and trace memory and shows a tool for the effectiveness of caregivers and receivers; the authors want to determine its effectiveness in the future.

S6
Study objectives: Enhance the qualities of the telehealth system to serve patients' requirements, operating as human caregivers do; the main goal is to support the independent living of patients at home and monitor their daily health status.
Novel technology and innovation: An intelligent medical care robot system that can select the most helpful plan for unhandled situations and advise the physician with its recommendation in different healthcare scenarios.
Findings/conclusions: Robots and agent technology will give the doctor slightly more intelligence than a simple teleoperator system; the robot does not make decisions but is programmed to interact with patients and doctors and alert the doctor when necessary.
Limitations: The system must still let the medical assistant perform actions remotely, such as changing treatments or interacting with patients who require support.
Implications for future research: Focus on the interaction in human–robot teaming and validate the results.

S7
Study objectives: A technical study of the usability of robotic solutions for delivering items from outdoor areas to people's apartments and vice-versa.
Novel technology and innovation: Three different mobile platforms, operating in three different surroundings (domestic, condominium, outdoor), that can collaborate among themselves and with other technologies in the background (i.e., the condominium elevator); the assessment was executed in genuine settings involving 30 subjects.
Findings/conclusions: Both collecting deliverables and garbage removal showed a high level of acceptance by the end-users.
Limitations: As an instrument, the SUS gives an overall picture of usability but is not enough to measure system usability, only how the system is received.
Implications for future research: Evaluate using the SUS for quantitative data and interviews for qualitative data; calculate the success rate and the time required to finish a job.

S8
Study objectives: Presents the new development of a robot-integrated smart home (RiSH) used for older adults' care in assistive technologies research.
Novel technology and innovation: A network of sensors, robots, and remote caregivers forms the basis for designing and implementing the RiSH software, with essential functions enabling the robot to recognize human movement using an IMU.
Findings/conclusions: Audio perception with voice recognition at a 2% error rate, body movement tracking with 86% accuracy at 0.2 m of location, and fall detection at 80%.
Limitations: Not stated.
Implications for future research: Present the overall idea and set the framework for the completion of the fall detection system.

S9
Study objectives: Gives an overview of homecare robotic systems developed under Healthcare 4.0.
Novel technology and innovation: Topographies of the CPS-based HRS are presented; the latest advancements in aiding technologies are reviewed, covering artificial intelligence, sensing fundamentals, materials and machines, cloud computing, and communication.
Findings/conclusions: Ethics should be considered due to the privacy problems of the subjects.
Limitations: Not stated.
Implications for future research: Enhancement of the exoskeleton to mimic human-like motion for joint movement.

S10
Study objectives: Focus on the essential design principles and device features required to fulfill the older generation's needs.
Novel technology and innovation: Not stated.
Findings/conclusions: Not stated.
Limitations: Not stated.
Implications for future research: Not stated.

S11
Study objectives: A new innovative healthcare framework for ambient assisted living (AAL) to observe physical activities.
Novel technology and innovation: A platform that includes three apps together: the first for elders, the second for health agents, and the last for friends and family; the platforms incorporate investigation, therapy, and playful apps.
Findings/conclusions: The ability to predict 12 types of movement with less than 3% error.
Limitations: Not stated.
Implications for future research: In the future, AI will customize learning based on individual users and readings gathered from personal data.

S12
Study objectives: Presents a system-level approach for designing a 1-kW smart wireless charger for power wheelchairs (PWCs).
Novel technology and innovation: Power electronics, communication, automation, code, and the Web are unified to deliver an autonomous PWC charging station; the solution uses a multi-coil transmitter to achieve a free-positioning specification and a dual-side regulator to control the power.
Findings/conclusions: A human-in-the-loop proposal is adopted to get feedback and use it toward the final result across more than one prototype; the pad design has already been verified on the 1-kW prototype.
Limitations: Not stated.
Implications for future research: Not stated.

S13
Study objectives: Proposes a developed framework that assists social robots in conducting daily clinical screening interviews in elderly homecare, such as cognitive evaluation, falls, and pain management.
Novel technology and innovation: A robot that can be socially active and lead conversations with the basic properties of conversations.
Findings/conclusions: The robot can handle cognitive evaluation, fall-danger estimation, pain assessment, wellbeing screening, attitude calculation, friendship valuation, retention testing, and tiredness.
Limitations: Not stated.
Implications for future research: Improvement of facial expression recognition and assessment of faces to gauge healthiness, by testing it on robots with elders who have poor memory.

S14
Study objectives: Increase older people's self-acceptance and reliance by creating an ICT platform to support independent living for this population.
Novel technology and innovation: MOVECARE monitors the subjects at home.
Findings/conclusions: The system empowers home-based new technology with quantitative measures of data reporting, which in clinical practice is usually done by individuals themselves while visiting the clinic.
Limitations: The time and expertise needed.
Implications for future research: Application on valid criteria.

S15
Study objectives: Develop a telehealth system that helps communicate with and monitor older adults' daily needs.
Novel technology and innovation: The app helps caregivers determine the users' needs; data is saved online so that caregivers can get notifications and know the details of booked services.
Findings/conclusions: Users' own opinions help achieve better living standards; data from an elderly house was collected to identify the most critical needs.
Limitations: Improving elders' health and quality of life by allowing them to live alone while monitoring them to assist their daily needs.
Implications for future research: Not stated.

S16
Study objectives: Develop a fuzzy obstacle avoidance system, especially as an elderly-assistant and walking-assistant robot (EWR), using two ultrasonic sensors mounted at the front of the robot.
Novel technology and innovation: The proposed project shifts the EWR away from obstacles on its path by getting information from sensors about the surroundings and making the necessary decisions.
Findings/conclusions: The examination showed that the algorithm could successfully avoid obstacles and assist the user in walking and steering.
Limitations: The EWR acts on pressure applied to the sensors on its grips; the proposal addresses the problem of not being able to move autonomously.
Implications for future research: Not stated.

S17
Study objectives: Presents a new software framework to integrate a user and a humanoid robot, Softbank Robotics Pepper, with ECHONET, an innovative home-based environment (iHouse).
Novel technology and innovation: A human-in-the-loop-based user and system interface approach; a novel method of integrating humans with assistive robots into ECHONET (a smart-home setting), targeting nursing care challenges in aging societies.
Findings/conclusions: Establishes a connection between robots and humans to ease their lives; the robot interacted helpfully by motion and linguistics.
Limitations: Not stated.
Implications for future research: A touch interface was established to avoid communication problems, asking the user questions and acting based on the answers.
The use of robots and agent technology will give the doctor slightly more intelligence than a simple teleoperator system. However, the robot does not make decisions; it is programmed to interact with patients and doctors and alert the doctor when necessary. The study [22] proposed robotic solutions for delivering items from outdoor areas to people's apartments and vice-versa. Three different mobile platforms, operating in three different surroundings (domestic, condominium, outdoor), can collaborate among themselves and with other technologies in the background (i.e., the condominium elevator). The assessment was executed in natural settings involving 30 subjects. Both collecting deliverables and garbage removal showed a high level of acceptance by the end-users. The research paper [15] presents the new development of a robot-integrated smart home (RiSH) used for older adults' care in assistive technologies research. A network of sensors, robots, and remote caregivers forms the basis for designing and implementing the RiSH software, with essential functions enabling the robot to recognize human movement using an IMU. The outcomes show audio perception with voice recognition at a 2% error rate, body movement tracking with 86% accuracy at 0.2 m of location, fall detection at 80%, and good overall accuracy in preventing the subject from falling. The research paper [38] also gives an overview of homecare robotic systems developed under Healthcare 4.0. The latest advancements in aiding technologies are reviewed, covering artificial intelligence, sensing fundamentals, materials and machines, cloud computing, and communication. However, the ethics side should be considered due to the privacy problems of the subjects. A study by Syed et al. [34] recommends a new smart healthcare framework for ambient assisted living, using IoMT and big data analytics to observe physical activities. It is a platform that includes three apps together: the first for elders, the second for health agents, and the last for friends and family; the platforms incorporate investigation, therapy, and playful apps, with the ability to predict 12 types of movement with less than 3% error. The article [35] presents a system-level approach for designing an intelligent wireless charger system for power wheelchairs (PWCs). Power electronics, communication, automation, code, and the Web are unified to deliver an autonomous PWC charging station. The solution uses a multi-coil transmitter to achieve a free-positioning specification and a dual-side regulator to control the power. A human-in-the-loop proposal was adopted to get feedback and use it toward the final result across more than one prototype; the pad design has already been verified on the 1-kW prototype. Manh Do et al. [24] suggest clinical screening interviews using a social robot for geriatric care. This paper proposes a developed framework that assists social robots in conducting daily clinical screening interviews in elderly homecare, such as cognitive evaluation, falls, and pain management. The robot can be socially active, lead conversations with the basic properties of conversations, and handle cognitive evaluation, fall danger, pain assessment, wellbeing screening, attitude calculation, friendship valuation, retention testing, and tiredness. The study [23] aims to increase older people's self-acceptance and reliance by creating an ICT platform to support independent living for this population.
The project MOVECARE is a home-based monitoring system that monitors subjects at home and empowers home-based new technology with quantitative measures of data reporting, which in clinical practice is usually done by individuals themselves while visiting the clinic. The study [31] aims to develop a telehealth system that helps communicate with and monitor older adults' daily needs. The application helps caregivers determine those needs. Data is saved online so that caregivers can receive notifications and know the details of booked services, and users can express their own opinions to achieve better living standards. Data from an elderly house was collected to identify the essential needs. Another study, by Osivue et al. [27], aims to develop a fuzzy obstacle avoidance system, especially for elderly-assistant and walking-assistant robots (EWR), using two ultrasonic sensors mounted at the front of the robot. The proposed project shifts the EWR away from obstacles on its path by getting information from the sensors about the surroundings and making the necessary decisions. The examination showed that the algorithm could successfully avoid obstacles and assist the user in walking and steering. The article [14] presents a new software framework to integrate a user and a humanoid robot, Softbank Robotics Pepper, with ECHONET, an innovative home-based environment (iHouse). This technology uses a human-in-the-loop-based user and system interface approach and presents a novel method of integrating humans with assistive robots into ECHONET (a smart-home setting), targeting nursing care challenges in aging societies. It establishes a connection between robots and humans to ease their lives, and the robot interacted helpfully by motion and linguistics. RQ3. What are the main databases used in robotics and AI in healthcare? This section shows the databases that published the most studies related to robotics and AI in healthcare. All papers were analyzed based on research study objectives, technologies used, database, publication year, and application type. Figure 6 shows the distribution of selected studies across the related databases: Science Direct contributed the most articles related to the search, with eight research articles, followed by IEEE Xplore (n = 7), ACM Digital Library (n = 1), and SAGE (n = 1).
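To make the fuzzy obstacle-avoidance idea in [27] concrete, here is a minimal two-sensor fuzzy steering rule. The membership shapes, thresholds, and rule table are our assumptions for illustration, not the paper's actual controller.

```python
# Illustrative two-sensor fuzzy avoidance rule in the spirit of [27].
def near(d):
    """Membership: a distance below ~0.8 m feels increasingly 'near'."""
    return max(0.0, min(1.0, (0.8 - d) / 0.6))

def steering(left_dist: float, right_dist: float) -> float:
    """Return a steering command in [-1, 1] (negative = steer left)."""
    n_l, n_r = near(left_dist), near(right_dist)
    # Rules: obstacle near on the left -> steer right, and vice versa;
    # defuzzify with a weighted average of the two rule outputs.
    if n_l + n_r == 0:
        return 0.0                      # path clear: go straight
    return (n_l * (+1.0) + n_r * (-1.0)) / (n_l + n_r)

print(steering(0.3, 2.0))   # obstacle on the left -> +1.0 (steer right)
print(steering(0.5, 0.5))   # symmetric obstacles -> 0.0 (slow/straight)
```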
6 Discussion and Conclusion AI has many applications across many domains (e.g., [2, 4, 17, 28, 30, 32, 39]), and one such application is robotics. The main objective of this systematic review is to analyze quality published studies and gain novel insights into the contextual aspects of robotics and AI and the current trends and advances. Eight struggles that older adults face are included and addressed by researchers in the field of robotics and artificial intelligence: dependent living, mobility problems, health monitoring, lack of recreation opportunities, cognitive problems, motion and mobility problems, detection, and prevention.
We have reviewed the state-of-the-art technologies related to AI and robotics, and we believe that they will improve the quality of life of elders. Many robots have been developed to facilitate telecommunication, manipulation, and fall and movement assistance, and we have noticed a huge positive potential in the field. Five top limitations faced these studies. One was the lack of participants, making it hard to implement and rely on the technology [16]. Another challenge was the functional limitation of the prototype during the trial period, which made verbal interactions with participants unclear [13]. Next, placing the sensors in the study [40] proved very difficult. Another limitation [21] is the need to provide medical assistance remotely, such as changing treatments or interacting with patients, which requires support. Finally, it seems complicated [31] to allow the elderly to live alone while monitoring them to assist their daily needs, so that the improvement of the elderly's health can be measured. Numerous future research recommendations were recorded during our systematic review. The study [16] suggests assessing caregiver-user sentiments towards the technology, which matters when developing any feature for users. Besides, Hobbit will be part of future elderly homecare; therefore, it is necessary to move the trials to homecare, as only there can real-world challenges be discovered [13]. In addition, [26] focuses on planning an experiment with the system in a real-world scenario. Lanza et al. [21] suggest focusing on the interaction in human–robot teaming and validating the results. Finally, Manh Do et al. [24] add an excellent idea: improving facial expression recognition and assessing faces to gauge healthiness, by testing it on robots with elders who have poor memory. Some studies gave us high hopes of limiting caregivers' workload; for example, the bathing robotic system [40] showed great potential in enabling people to take showers by themselves. Others provide an extra arm integrated on wheelchairs [16] to take care of people with impairments who cannot handle objects. Other studies discussed eliminating elders' daily chores, such as taking out garbage and bringing in delivery items [22], among other intuitive proposals. These ideas and the papers reviewed should be integrated, with their subparts, into a larger-scale design to produce a complete system with multiple purposes. We should combine house-monitoring systems with more complex activities like monitoring health and requirements [12, 13, 21, 26]. One complete robot could, in the future, prevent falls, deliver items, and, if needed, think on our behalf [12, 22, 24, 34]. Acknowledgements This work is a part of a project undertaken at the British University in Dubai.
Appendix: Paper Quality
S#  Source  Title | Year | Database | Publisher | Journal/conference | Cite | SJQ | SJR | H index

S1  [16]  Use and outcomes of a wheelchair mounted robotic arm—preliminary results of perceptions of family | 2020 | Sage | Sage | ASNR | 2 | Q1 | 1.79 | 168
S2  [13]  Results of field trials with a mobile service robot for older adults in 16 private households | 2019 | ACM Digital Library | ACM | Transactions on Human–Robot Interaction | 21 | Q2 | 0.6 | 8
S3  [26]  Adaptive monitoring system for e-health smart homes | 2018 | Science Direct | ELSEVIER | Pervasive and Mobile Computing | 54 | Q1 | 0.69 | 64
S4  [40]  I-Support: a robotic platform of an assistive bathing robot for the elderly population | 2020 | Science Direct | ELSEVIER | Robotics and Autonomous Systems | 16 | Q1 | 0.81 | 118
S5  [12]  Deciding the different robot roles for patient cognitive training | 2018 | Science Direct | ELSEVIER | International Journal of Human–Computer Studies | 22 | Q1 | 0.71 | 122
S6  [21]  Agents and robots for collaborating and supporting physicians in healthcare scenarios | 2020 | Science Direct | ELSEVIER | Journal of Biomedical Informatics | 26 | Q1 | 1.06 | 103
S7  [22]  Robotic delivery service in combined outdoor-indoor environments: technical analysis and user evaluation | 2018 | Science Direct | ELSEVIER | Robotics and Autonomous Systems | 15 | Q1 | 0.81 | 118
S8  [15]  RiSH: a robot-integrated smart home for elderly care | 2018 | Science Direct | ELSEVIER | Robotics and Autonomous Systems | 99 | Q1 | 0.81 | 118
S9  [38]  Homecare Robotic Systems for Healthcare 4.0: visions and enabling technologies | 2020 | IEEE Xplore | IEEE | IEEE Journal of Biomedical and Health Informatics | 34 | NS | 1.29 | 125
S10 [19]  Designing mobile technology for elderly. A theoretical overview | 2020 | Science Direct | ELSEVIER | Technological Forecasting and Social Change | 34 | Q1 | 1.29 | 125
S11 [34]  Smart healthcare framework for ambient assisted living using IoMT and big data analytics techniques | 2019 | Science Direct | ELSEVIER | Future Generation Computer Systems | 54 | Q1 | 1.26 | 119
S12 [35]  System-level approach to designing a smart wireless charging system for power wheelchairs | 2021 | IEEE Xplore | IEEE | IEEE Transactions on Industry Applications | 1 | Q1 | 1.19 | 195
S13 [24]  Clinical screening interview using a social robot for geriatric care | 2021 | IEEE Xplore | IEEE | IEEE Transactions on Automation Science and Engineering | 6 | Q1 | 1.31 | 87
S14 [23]  The MOVECARE project: home-based monitoring of frailty | 2019 | IEEE Xplore | IEEE | 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI) | 19 | NS | 0.25 | 5
S15 [31]  An effective telehealth assistive system to support senior citizen at home or care-homes | 2018 | IEEE Xplore | IEEE | 2018 International Conference on Computing, Electronics & Communications Engineering (iCCECE) | 2 | NS | 0.25 | 5
S16 [27]  Approach for obstacle avoidance fuzzy logic control of an elderly-assistant and walking-assistant robot using ultrasonic sensors | 2018 | IEEE Xplore | IEEE | 2018 15th International Conference on Ubiquitous Robots (UR) | 4 | NS | 0.21 | 6
S17 [14]  An integrated approach to human–robot-smart environment interaction interface for ambient assisted living | 2018 | IEEE Xplore | IEEE | Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO | 16 | NS | 0.14 | 16
References
1. M. Al-Emran, I. Arpaci, Intelligent systems and novel coronavirus (COVID-19): a bibliometric analysis, in Emerging Technologies During the Era of COVID-19 Pandemic (Springer, 2021), pp. 59–67. https://doi.org/10.1007/978-3-030-67716-9_5
2. M. Al-Emran, R. Al-Maroof, M.A. Al-Sharafi, I. Arpaci, What impacts learning with wearables? An integrated theoretical model. Interact. Learn. Environ. 1–21 (2020). https://doi.org/10.1080/10494820.2020.1753216
3. M. Al-Emran, V. Mezhuyev, A. Kamaludin, K. Shaalan, The impact of knowledge management processes on information systems: a systematic review. Int. J. Inf. Manag. 43, 173–187 (2018). https://doi.org/10.1016/j.ijinfomgt.2018.08.001
4. M. Al-Emran, S. Zaza, K. Shaalan, Parsing modern standard Arabic using Treebank resources, in 2015 International Conference on Information and Communication Technology Research, ICTRC 2015 (2015). https://doi.org/10.1109/ICTRC.2015.7156426
5. M.N. Al-Nuaimi, M. Al-Emran, Learning management systems and technology acceptance models: a systematic review. Educ. Inf. Technol. 1–35 (2021). https://doi.org/10.1007/s10639-021-10513-3
6. N. Al-Qaysi, N. Mohamad-Nordin, M. Al-Emran, Factors affecting the adoption of social media in higher education: a systematic review of the technology acceptance model, in Recent Advances in Intelligent Systems and Smart Applications (Springer, 2021), pp. 571–584
7. A.A. AlQudah, M. Al-Emran, K. Shaalan, Medical data integration using HL7 standards for patient's early identification. PLoS ONE 16(12), e0262067 (2021). https://doi.org/10.1371/JOURNAL.PONE.0262067
8. A.A. Alqudah, M. Al-Emran, K. Shaalan, Technology acceptance in healthcare: a systematic review. Appl. Sci. 11(22) (2021). https://doi.org/10.3390/APP112210537
9. K. Al-Saedi, M. Al-Emran, E. Abusham, S.A. El-Rahman, Mobile payment adoption: a systematic review of the UTAUT model, in 2019 International Conference on Fourth Industrial Revolution, ICFIR 2019 (2019). https://doi.org/10.1109/ICFIR.2019.8894794
10. M. AlShamsi, M. Al-Emran, K. Shaalan, A systematic review on blockchain adoption. Appl. Sci. 12(9), 4245 (2022). https://doi.org/10.3390/APP12094245
11. R.A. Alsharida, M.M. Hammood, M. Al-Emran, Mobile learning adoption: a systematic review of the technology acceptance model from 2017 to 2020. Int. J. Emerg. Technol. Learn. 15(5) (2021). https://doi.org/10.3991/ijet.v16i05.18093
12. A. Andriella, G. Alenyà, J. Hernández-Farigola, C. Torras, Deciding the different robot roles for patient cognitive training (2018)
13. M. Bajones, D. Fischinger, A. Weiss, P.D.L. Puente, D. Wolf, M. Vincze, T. Körtner, M. Weninger, K. Papoutsakis, D. Michel, A. Qammaz, P. Panteleris, M. Foukarakis, I. Adami, D. Ioannidi, A. Leonidis, M. Antona, A.A. Argyros, P. Mayer, P. Panek, H. Eftring, S. Frennert, Results of field trials with a mobile service robot for older adults in 16 private households. ACM Trans. Hum.-Robot Interact. 9(2), 1–27 (2020). https://doi.org/10.1145/3368554
14. H.-D. Bui, N.Y. Chong, An Integrated Approach to Human-Robot-Smart Environment Interaction Interface for Ambient Assisted Living (2018). https://www.softbank.jp/en/robot/
15. H.M. Do, M. Pham, W. Sheng, D. Yang, M. Liu, RiSH: a robot-integrated smart home for elderly care. Robot. Auton. Syst. 101, 74–92 (2018). https://doi.org/10.1016/j.robot.2017.12.008
16. F. Routhier, J. Bouffard, D. Dumouchel, J. Faieta, D. Pacciola, Use and outcomes of a wheelchair-mounted robotic arm—preliminary results of perceptions of family. Neurorehabilitation Neural Repair 35(4), NP1–NP275 (2021). https://doi.org/10.1177/1545968320988381
17. T. Fernandes, E. Oliveira, Understanding consumers' acceptance of automated technologies in service encounters: drivers of digital voice assistants adoption. J. Bus. Res. 122, 180–191 (2021). https://doi.org/10.1016/J.JBUSRES.2020.08.058
18. A. Granić, Educational technology adoption: a systematic review. Educ. Inf. Technol. 1–20 (2022). https://doi.org/10.1007/S10639-022-10951-7
19. I. Iancu, B. Iancu, Designing mobile technology for elderly. A theoretical overview. Technol. Forecast. Soc. Change 155 (2020). https://doi.org/10.1016/j.techfore.2020.119977
20. B. Kitchenham, S. Charters, Guidelines for performing systematic literature reviews in software engineering. Software Engineering Group, School of Computer Science and Mathematics, Keele University (2007), pp. 1–57
21. F. Lanza, V. Seidita, A. Chella, Agents and robots for collaborating and supporting physicians in healthcare scenarios. J. Biomed. Inform. 108 (2020). https://doi.org/10.1016/j.jbi.2020.103483
22. R. Limosani, R. Esposito, A. Manzi, G. Teti, F. Cavallo, P. Dario, Robotic delivery service in combined outdoor-indoor environments: technical analysis and user evaluation (2018). http://www.dhl.com/content/dam/downloads/g0/about_us/logistics_insights/dhl_
23. F. Lunardini, M. Luperto, M. Romeo, J. Renoux, N. Basilico, A. Krpic, N.A. Borghese, S. Ferrante, The MOVECARE project: home-based monitoring of frailty, in 2019 IEEE EMBS International Conference on Biomedical and Health Informatics, BHI 2019—Proceedings (2019). https://doi.org/10.1109/BHI.2019.8834482
24. H. Manh Do, W. Sheng, E.E. Harrington, A.J. Bishop, Clinical screening interview using a social robot for geriatric care. IEEE Trans. Autom. Sci. Eng. 18(3), 1229–1242 (2021). https://doi.org/10.1109/TASE.2020.2999203
25. D. Moher, A. Liberati, J. Tetzlaff, D.G. Altman, D. Altman, G. Antes, D. Atkins, V. Barbour, N. Barrowman, J.A. Berlin, J. Clark, M. Clarke, D. Cook, R. D'Amico, J.J. Deeks, P.J. Devereaux, K. Dickersin, M. Egger, E. Ernst, P. Tugwell, Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. (2009). https://doi.org/10.1371/journal.pmed.1000097
26. H. Mshali, T. Lemlouma, D. Magoni, Adaptive monitoring system for e-health smart homes. Pervasive Mob. Comput. 43, 1–19 (2018). https://doi.org/10.1016/j.pmcj.2017.11.001
27. O.R. Osivue, X. Zhang, X. Mu, H. Han, Y. Wang, Approach for obstacle avoidance fuzzy logic control of an elderly-assistant and walking-assistant robot using ultrasonic sensors, in 2018 15th International Conference on Ubiquitous Robots, UR 2018 (2018), pp. 708–713. https://doi.org/10.1109/URAI.2018.8441818
28. T. Ozturk, M. Talo, E.A. Yildirim, U.B. Baloglu, O. Yildirim, U. Rajendra Acharya, Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 121 (2020). https://doi.org/10.1016/j.compbiomed.2020.103792
29. PRISMA (n.d.). Retrieved from: http://prisma-statement.org/PRISMAStatement/PRISMAStatement. 29 Jan 2022
30. A.A. Saa, M. Al-Emran, K. Shaalan, Mining student information system records to predict students' academic performance, in International Conference on Advanced Machine Learning Technologies and Applications (2019), pp. 229–239
31. M. Saeed Sharif, L. Herghelegiu, An Effective TeleHealth Assistive System to Support Senior Citizen at Home or Care-Homes (2018)
32. P. Smutny, P. Schreiberova, Chatbots for learning: a review of educational chatbots for the Facebook Messenger. Comput. Educ. 151 (2020). https://doi.org/10.1016/J.COMPEDU.2020.103862
33. F. Suhail, M. Adel, M. Al-Emran, K. Shaalan, A bibliometric analysis on the role of artificial intelligence in healthcare, in Augmented Intelligence in Healthcare: A Pragmatic and Integrated Analysis, vol. 1024 (Springer, 2022), pp. 1–14. https://doi.org/10.1007/978-981-19-1076-0_1
34. L. Syed, S. Jabeen, S. Manimala, A. Alsaeedi, Smart healthcare framework for ambient assisted living using IoMT and big data analytics techniques. Future Gener. Comput. Syst. 101, 136–151 (2019). https://doi.org/10.1016/j.future.2019.06.004
35. C. Teeneti, U. Pratik, G. Philips, A. Azad, M. Greig, R. Zane, C. Bodine, C. Coopmans, Z. Pantic, System-level approach to designing a smart wireless charging system for power wheelchairs. IEEE Trans. Ind. Appl. 57(5), 5128–5144 (2021). https://doi.org/10.1109/TIA.2021.3093843
36. Using PICO or PICo—Systematic Reviews—Research Guide—Help and Support at Murdoch University (n.d.). Retrieved from: https://libguides.murdoch.edu.au/systematic/PICO. 29 Jan 2022
37. Worldometer (n.d.). Retrieved from: https://www.worldometers.info/. 29 Jan 2022
38. G. Yang, Z. Pang, M. Jamal Deen, M. Dong, Y.T. Zhang, N. Lovell, A.M. Rahmani, Homecare robotic systems for Healthcare 4.0: visions and enabling technologies. IEEE J. Biomed. Health Inform. 24(9), 2535–2549 (2020). https://doi.org/10.1109/JBHI.2020.2990529
39. S. Zaza, M. Al-Emran, Mining and exploration of credit cards data in UAE, in Proceedings—2015 5th International Conference on e-Learning, ECONF 2015 (2015), pp. 275–279. https://doi.org/10.1109/ECONF.2015.57
40. A. Zlatintsi, A.C. Dometios, N. Kardaris, I. Rodomagoulakis, P. Koutras, X. Papageorgiou, P. Maragos, C.S. Tzafestas, P. Vartholomeos, K. Hauer, C. Werner, R. Annicchiarico, M.G. Lombardi, F. Adriano, T. Asfour, A.M. Sabatini, C. Laschi, M. Cianchetti, A. Güler, I. Kokkinos, B. Klein, R. López, I-Support: a robotic platform of an assistive bathing robot for the elderly population. Robot. Auton. Syst. 126 (2020). https://doi.org/10.1016/j.robot.2020.103451
Outlier Detection for Customs Post Clearance Audit Using Convex Space Representation Omar Alqaryouti, Nur Siyam, and Khaled Shaalan
Abstract Audit plays an important role in customs, whereby the past transactions of a company are investigated for non-conformance. Traditionally, audits are conducted on a selected sample group of transactions, which at times has resulted in wrongful audits. Many resources were wasted in trying to prove the legitimacy of company activities. The process also relies on human perception and analytical capabilities, without sufficient evidence of the reasons a company was selected for audit. This paper aims to build a solution that allows customs administrations to assess a company's behavior in terms of trade legitimacy. The solution aims to improve the efficiency of determining whether a company shipment poses any risk. The study adopts the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology, and a convex-based approach is employed. This approach works by creating a relationship between shipments in order to clearly interpret the distances between them. Accordingly, the shipments are represented as points in a multi-dimensional space. The approach is able to reliably identify shipments that pose security issues. The performance results indicate the potential of adopting this method in customs administrations, as it achieved an accuracy of 87%. The approach will significantly reduce or eliminate false audits and provide better resource management, thus reducing operating costs for both customs and the audited entity.
Keywords Convex space representation · Clustering · Outlier detection · Post clearance audit · Distance measures · Behavioral analysis
O. Alqaryouti (B) · N. Siyam · K. Shaalan
The British University in Dubai, Dubai, UAE
e-mail: [email protected]
K. Shaalan
e-mail: [email protected]
K. Shaalan
University of Edinburgh, Edinburgh, UK
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_19
1 Introduction
Customs administrations have the responsibility of protecting society and sustaining economic development. This task is performed by making sure that trade follows the country's legislation and rules. The task of protection involves auditing and assessing the risk factors of each company's trading behavior to identify any potential risk. This auditing task involves analyzing all of a company's shipments to identify whether any shipment poses a security risk. Performing this task is a major challenge, since a large number of attributes describing each shipment must all be investigated and analyzed [1].
By law, companies must retain the original documents for all shipments for a defined period based on the company's trade license. Moreover, customs administrations perform a post clearance audit on a company's historical transactions and may request any of the original documents during this period [2]. Furthermore, as part of declaration processing, the customs administration collects the duty amount according to the tariff code of the commodities. This duty is associated with the commodity type, and its value varies with the nature of the commodity. Thus, in addition to typical risk factors such as smuggling, a trader may manipulate commodity values to avoid paying duties or to reduce the duty amount. Valuation manipulation is a major concern for customs administrations since it may result in loss of revenue [3].
The post clearance audit takes the trader's behavior into consideration in order to correctly analyze any risk factors that arise from the company's historical transactions. This analysis is typically performed by auditors at the organization's premises. The process is time consuming and may not be efficient, since the selection of companies may not consider each company's entire history. It is also based on human perception and analytical capabilities, without sufficient evidence of the reasons a company was selected for audit. This indicates the need for a solution that optimizes the process of selecting the company under consideration for customs audit.
The challenges facing the cross-border trade supply chain are of global scale. Their impact spans the economic, communication, technology, social, health, and political realms. A much-needed change can only be achieved through orchestrated active participation and contributions of the concerned parties, while recognizing the complexity of the work involved.
In this paper, the aim is to develop a convex-based algorithm that provides an efficient solution for determining whether a company shipment poses any risk. This solution can be adopted by customs administrations to simplify the overall auditing process. The proposed solution takes into account the trading pattern of each company during the analysis of its historical transactions. Using this pattern, the proposed solution is able to determine shipments that pose security issues. For instance, Fig. 1 illustrates the representation of shipments in a three-dimensional space where commodity type, goods value, and country of origin represent each shipment.
Fig. 1 Multi-dimensional space representation example
Each shipment in the proposed approach is considered a point in a multi-dimensional space, where the dimensions represent shipment attributes such as cost, country of origin, and type, among others. Accordingly, the entire history of all shipments is represented as points in this space, and the locations of the shipments highlight the similarity between them. Using such a representation, the problem of detecting risky shipments can be seen as an outlier detection problem. Once the trader's historical transactions are represented in the multi-dimensional space, the analysis is performed by establishing a convex hull over the represented points. The convex hull yields a convex space, defined as a space in which any line connecting two points falls entirely within the space [4]. By using the convex subspace, the aim is to extract the behavioral attributes of shipments that are most likely to belong to the safe and risky groups. This paper is composed of seven sections, including this introductory section. The second section provides a background on the trade supply chain process. The third section investigates the literature and identifies the related work and research gaps. The fourth section is concerned with the methodology adopted to develop the convex-based approach. The fifth section illustrates the proposed algorithm and the stages followed to achieve the aim of this study. The sixth section discusses the experimental scenarios and illustrates the research findings. Finally, the seventh section provides a brief summary of the study, presents the key findings, and points out areas for future research.
2 Background
Cross-border trade through the sea channel involves the exchange of various documents between the trade supply chain parties [3]. The process of moving goods across borders starts once an agreement is established between the exporter and the importer. As part of this process, critical documents are required
such as the Bill of Lading (BoL), the original commercial invoice, and the Certificate of Origin (CoO). The BoL is the document issued by the carrier to the shipper to acknowledge receipt of the cargo for shipment. The CoO is a widely used international document in the global trade supply chain that indicates and attests the provenance of products. These documents must be shared in original form between the exporter and importer. Both parties use these documents to submit customs declarations, either directly or through a customs broker, in the source and destination countries respectively. For instance, the exporter submits the export declaration in the origin country. The shipment is then shipped through a shipping agent. After that, the importer submits the import customs declaration, at which point the clearance process for the imported shipment commences. The customs clearance process includes a risk assessment for the shipment, which results in tagging risky shipments for mitigation and possibly physical inspection. The customs clearance is issued following the payment of customs duties by the importer in order to clear the goods. Once the shipment is cleared, the customs administration is responsible for targeting specific shipments for post clearance auditing. This process verifies all information and documents for the targeted shipments. In case of any discrepancies, the customs post clearance committee contacts the importer to clarify them. If the importer fails to justify the variations with evidence, the customs administration issues fines and penalties in addition to any additional customs duties. The process of post clearance audit depends heavily on a human auditor who determines the criteria by which cleared transactions are assessed and who gathers the data for all transactions matching these criteria along with historical interactions; it is thus dependent on the skill level and experience of the auditor. This procedure lacks accuracy, consistency, efficiency, and effectiveness. Mistakes result in wrong decisions, which lead to multiple negative effects such as the cost of manual activities, wrong actions taken, and reduced customer satisfaction.
3 Related Work
The problem of detecting abnormal activities has been extensively studied in the literature, and the use of outlier detection algorithms has resulted in significant improvements in different domains. For instance, in [5, 6] the authors investigated the problem of detecting fraudulent credit card activities; in these studies, the k-nearest neighbor (K-NN) approach was used to determine whether a certain transaction is legitimate. In [7, 8], to improve bus route planning in the transportation domain, the authors proposed several outlier detection algorithms. These algorithms try to determine the bus stops that must be removed from the route in order to optimize the operational cost. Accordingly, the proposed algorithm uses the entire city map to rank the bus stops based on their locality. This ranking takes into consideration the number of
available buses and the maximum allowed travelling time for each bus. Thus, the proposed algorithms aim to remove bus stops such that the new bus routes cover the entire city within the pre-defined travelling deadline for each bus.
In the money laundering domain, [9] proposed an algorithmic solution that aims to determine the likelihood of the presence of money-laundering-related activities. The proposed solution is cluster-based, where the transactions are grouped according to the distances between them. Based on an analytical study, the author determines a threshold value that can be used to decide whether a certain transaction is legitimate or illegal. Likewise, to investigate the financial behavior of Taiwanese companies, the authors of [10] studied the benefits of applying outlier detection algorithms such as the Local Outlier Factor (LOF) and K-NN algorithms to detect abnormalities in companies' behavior, and showed the advantage of employing such algorithms through analytical studies.
Outlier detection algorithms have also been proposed to address abnormality in computer network traffic. Reference [11] proposed a mechanism to detect such abnormality by representing network traffic information as points in a multi-dimensional space; a cluster-based approach was then employed to determine the neighborhood of each point and whether two points are related in terms of traffic behavior. In the same line, in [12], the authors used the K-NN approach to address the problem of fraud in the car insurance domain. In this approach, transactions are represented using a pre-determined set of features that place them in a multi-dimensional space. Once a new transaction is submitted, its status is confirmed based on its K nearest neighbors. The value of K is determined using sensitivity analysis, and the process eventually uses a majority voting mechanism to determine whether the new transaction is legitimate or not. To speed up distance-based outlier detection, [13] proposed a multi-core clustering algorithm. At the core of this algorithm, a pruning method is applied to determine the outlier factor of each point; once the outlier factor reaches a certain threshold, the point is dropped from consideration since it is no longer considered an outlier. To address the problem of updating the cut-off threshold, the proposed solution employs an advanced hierarchical sorting technique, while the actual outlier mechanism employs the concept of leadership points, which monitor the entire transaction set and drive the process of determining whether any point can be considered an outlier.
The problem of detecting abnormal trading behavior from a customs perspective has also been addressed in the literature, including customs transactional risks in general and valuation risks. For instance, Juma et al. [3] proposed a secondary distributed risk assessment method that employed the Local Outlier Factor (LOF) algorithm to detect whether a new shipment can be considered risky (an outlier). The aim of the study was to complement the risk assessments performed at customs administrations by providing feedback from the early stage of risk analysis. The results showed that the proposed algorithm can provide classification that is 83% accurate on average.
Similarly, Alqaryouti et al. [14] proposed
a cluster-based approach to detect customs commodity value manipulation by representing shipment-related information in a three-dimensional space through two stages comprising distance- and density-based techniques. The proposed approach achieved an accuracy of 86%. In contrast, this study addresses the need for a solution that optimizes the process of selecting the company under consideration for customs audit by constructing a convex space representation of the companies' trading transactions. In such a space, the closer a transaction is to the border, the more likely it is to be considered an outlier. The benefit of this mechanism is that it simplifies detection, since at any stage the process of assessing a new transaction works by identifying the distances between the points inside the convex space.
4 Methodology
This work aims to determine whether a certain company shipment poses any risk according to the company's trading behavior. This section illustrates the proposed convex-based approach, which represents the company's historical shipments in a way that simplifies the analysis process. The approach consists of three steps, namely space representation, convex space construction, and riskiness identification.
Various methodologies have been adopted in the literature to perform data mining tasks. The following are well known and have been used in various application domains: Knowledge Discovery in Databases (KDD) [15–20], SEMMA [21], and the Cross Industry Standard Process for Data Mining (CRISP-DM) [22]. These methodologies aim to establish a proper holistic understanding prior to performing the data mining tasks in order to gain insights and discover hidden knowledge. The CRISP-DM methodology is adopted in this study as it is industry- and technology-neutral. Furthermore, it is one of the most commonly used methodologies in the development of knowledge discovery projects [23]. Figure 2 illustrates the six stages of the CRISP-DM methodology. In the first stage, a proper understanding of the current and desired business requirements is essential. In the second stage, the data collection activities take place, followed by studying and understanding the nature and characteristics of the data. The third stage concerns data preparation and cleansing to produce a harmonized dataset. In the fourth stage, the modelling exercise starts by taking the prepared dataset as input and building the convex-based approach. In the fifth stage, the results of the modelling exercise are evaluated. Finally, the output model is deployed to production in order to handle real-time data.
The modelling stage is concerned with building the data mining model. In this study, clustering approaches were adopted. The focus is on real-life scenarios related to customs post clearance audit and value manipulation. The obtained dataset is divided into 90% safe customs declarations and 10% risky customs declarations.
Fig. 2 Research methodology
This creates a challenge of reducing the learning bias by controlling the outliers and the class imbalance. Traditional classification algorithms are biased toward the majority class, which may degrade classifier performance. According to the literature, clustering techniques have proved efficient in various domains [24]. Thus, this study adopts clustering techniques as a promising starting point, since converting the problem into a geometrical space is expected to reduce its complexity and avoid bias. In our problem, the presence of the HS-Code hierarchy underlines the expected benefit of clustering, since such controlling parameters already divide the input data into isolated subspaces. Transforming the data so that the relationship between points is represented by Euclidean distance simplifies the problem and gives control over performance toward the desired outcomes.
The data used in this study was obtained from Dubai Customs and represents randomly sampled company shipment information. The dataset covers shipments conducted during 2018 and includes 500,000 records. The records are labelled to identify the risky company declarations. As part of data preparation, all shipments of importers who have duty exemptions are removed from consideration, since such importers have no duty to pay. The following sections discuss the details of the proposed convex-based approach and the various experiment settings used to evaluate its performance.
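As a hedged illustration of the data-preparation rule just described, the sketch below drops the shipments of duty-exempt importers; the column names and toy records are our assumptions, since the Dubai Customs schema is not public.

```python
# Hedged sketch of the preparation step described above: drop shipments
# of duty-exempt importers. Column names and values are illustrative
# assumptions, not the actual Dubai Customs schema.
import pandas as pd

declarations = pd.DataFrame({
    "importer":    ["A", "B", "C", "A"],
    "duty_exempt": [False, True, False, False],
    "hs_code":     ["85171200", "85171200", "09011100", "85171200"],
    "value":       [5000.0, 1200.0, 800.0, 5100.0],
})

# Keep only dutiable shipments; exempt importers have no duty to pay.
prepared = declarations[~declarations["duty_exempt"]].reset_index(drop=True)
print(prepared)
```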
5 Convex-Based Approach
In the proposed approach, the companies' trading shipments are represented as points in a multi-dimensional space. Based on the commodity type, a convex space is constructed to represent the available shipment information. In the convex space, the line connecting any two points inside the space falls
entirely within the same space. The objective of such a representation is to simplify the process of monitoring the company's trading behavior. The adoption of this mechanism aims to establish a "confidence" space, which represents shipments with distinctly safe characteristics. Accordingly, to construct the convex space, the available historical shipment data is analyzed to include only those shipments with normal behavior. The available historical shipment information is divided into safe and risky sets to establish this space for each traded commodity type, and each of these shipment sets is processed separately. The representation step represents each shipment as a point in a three-dimensional space whose dimensions are the country of origin, commodity value, and duty value. Once all available historical data is represented in this three-dimensional space, the convex space representation determines two convex spaces covering the safe and risky shipments separately. The point of constructing this convex space is to establish a relationship between a shipment's locality in the space and the risk factors for that particular shipment.
5.1 Space Representation
The objective of this step is to use the shipment attributes to represent the shipments in a multi-dimensional space. Accordingly, the attributes used have to help clarify the relationship between the shipments. In this work, the attributes used for the space representation are the country of origin, commodity value, and duty value. These attributes are selected because they can be used to determine similarity in terms of shipment path and commodity value. The country of origin, together with the other attributes, helps in predicting the variation of value between countries; for instance, electronics from Japan are most likely to have a higher value than electronics from China. The duty value together with the commodity value can highlight potential smuggling or suspicious behavior, especially when these two values do not follow a predictable pattern. This representation is conducted for each commodity type using the entire available historical shipment information, and it is performed separately for the safe and risky shipments.
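The following is a hedged sketch of this representation step; the ordinal encoding of the country-of-origin attribute and the sample records are our assumptions, as the chapter does not specify how the categorical attribute is mapped to a coordinate.

```python
# Hedged sketch of the space-representation step: each shipment becomes
# a 3-D point (country of origin, commodity value, duty value). The
# ordinal country encoding and the sample records are our assumptions.
import numpy as np

shipments = [  # (country_of_origin, commodity_value, duty_value)
    ("JP", 12000.0, 600.0),
    ("CN", 4000.0, 200.0),
    ("JP", 11500.0, 575.0),
]
code = {c: i for i, c in enumerate(sorted({c for c, _, _ in shipments}))}

points = np.array([[code[c], value, duty] for c, value, duty in shipments])
print(points)  # rows are points in the three-dimensional space
```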
5.2 Convex Space Representation
This step aims to represent the safe and risky shipments as two convex spaces. Such a representation simplifies the process of identifying whether the shipment under consideration is safe or risky: the location of any point helps determine whether it can be identified as an outlier. Accordingly, points closer to the border of the risky convex space are more likely to be outliers (i.e., safe) compared to points located deep inside that space. Figure 3 illustrates an
Fig. 3 Convex space representation example
example of this representation, where the shipment point shown in green is expected to be an outlier since it is close to the convex border.
To represent the risky space as a convex space, this study employs the Quickhull algorithm proposed in [25]. The convex hull of a space (safe or risky) is the smallest convex set that contains all of the space's points. Given the number of dimensions d, the algorithm starts by selecting d + 1 points that do not share a plane or hyperplane, and these points are used to establish the initial hull. Then, for each facet (f_i ∈ F) of the constructed hull, the algorithm constructs the outside set of the facet, which contains all unassigned points located above the facet; each point can be assigned to only one facet. For each facet with a non-empty outside set, the expansion process starts by selecting the farthest point p from this set and initializing the visibility set V with the current facet. The visibility set is then expanded by adding all neighboring facets that also lie below p (i.e., are visible from p). The boundary of the visibility set forms the set of horizon ridges H. For each ridge in H, new facets are created by connecting the ridge to p, and the created facets are then linked together based on their locality to establish a new hull. The process is repeated by determining the outside set of each new facet and performing the expansion again, and it stops when the newly constructed facets have empty outside sets. In the risk identification step, this convex representation is used at several stages to analyze the shipments from different perspectives.
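The convex-space construction can be sketched with SciPy's ConvexHull, which wraps the Qhull implementation of the Quickhull algorithm cited above [25]; the synthetic "safe" shipment points below are our assumptions, not the study's data.

```python
# Sketch of the convex-space construction using SciPy's ConvexHull,
# which wraps Qhull's Quickhull implementation [25]. The synthetic
# "safe" shipment points are illustrative assumptions.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
# Toy points: (encoded country of origin, commodity value, duty value)
safe_points = rng.normal(loc=[10.0, 5000.0, 250.0],
                         scale=[2.0, 800.0, 40.0], size=(200, 3))

hull = ConvexHull(safe_points)  # runs Quickhull in three dimensions
print("facets:", len(hull.simplices))
print("border (vertex) points:", len(hull.vertices))
```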
5.3 Riskiness Identification
In this step, the main idea is to use the two convex space representations to identify whether a certain shipment poses any risk. The identification of such a risk suggests that the company responsible for the shipment is a target for auditing. The use of two convex spaces narrows the analysis criteria. The risky convex space is expected to be significantly smaller than the safe convex space, and the overlap between the safe and risky convex spaces is treated as unknown space that does not belong to the safe convex space. Accordingly, the focus is on the resulting safe convex space once this overlap is eliminated. The actual risk identification is then performed using the safe convex space, with validation by HS-Code. The HS-Code is an eight-digit number that uniquely identifies traded commodities. The analysis thus takes into account the trader's behavior in terms of previous transactions.
The objective of this step is to assign a risk score to a shipment based on its location in the convex space. A shipment may contain a single item or multiple items, so each shipment is represented by as many points as there are commodity types in it. As a pre-processing step, the convex space representation is established and the overlapping areas between the two spaces are eliminated. The risk score of a given point takes two factors into consideration: the distance to the border and the distance to the neighbors. Points closer to the border of the convex space are more likely to be outliers (risky shipments), whereas points closer to the center are more likely to represent safe shipments. In addition, the distance of a point to its neighbors influences the risk score: a point inside the convex space whose average distance to its neighbors is relatively small is more likely to be safe than a point located far away from its neighboring points.
The calculation of the risk factors starts by determining the set of points that must be examined. Once these points are identified, they are analyzed in sequential order. For each point, the algorithm calculates the distance between the point and the center of the convex space, $d_c$, as well as its distance to the border, $d_b$. Points closer to the center are more likely to be safe. These two values are used to calculate the border factor $f_b = d_c / d_b$; when this factor equals one, the distance from the point to the border equals its distance to the center. Next, the distance factor is calculated by taking into consideration the average distance between the point and its k neighbors. The distance factor of a point $p$ is calculated as follows:

$$ f_d(p) = \frac{avgK\text{-}d(p)}{avgK\text{-}d(c)} $$
where $avgK\text{-}d(p)$ refers to the average distance between point $p$ and its $k$ neighbors, and $avgK\text{-}d(c)$ is the average distance between the center of the convex space and its $k$ neighbors. The value of this factor indicates whether the point is located in a dense or sparse area of the convex space. Once these two factors are calculated, the riskiness score of the HS-Code is calculated as follows:

$$ HS\text{-}score(p) = f_b(p) \times f_d(p) $$

Using this score, points located near the border of the space or in areas with low density have a relatively high riskiness score. In particular, any risk score higher than one (> 1) is worth investigating, since points with a score higher than one are either near the border of the space or located in less dense areas. Function 1 describes the proposed convex-based algorithm in this study.
Function 1 The proposed convex-based algorithm
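The following is a minimal sketch of the riskiness-scoring step, assuming only the two formulas above; the helper names, the use of SciPy, and the default k are our assumptions rather than the exact body of Function 1.

```python
# Minimal sketch of the riskiness score, assuming the definitions above:
# f_b(p) = d_c / d_b and f_d(p) = avgK-d(p) / avgK-d(c). Helper names,
# SciPy usage, and k are illustrative assumptions, not the exact Function 1.
import numpy as np
from scipy.spatial import ConvexHull

def avg_knn_dist(points, x, k):
    """Average distance from x to its k nearest neighbors in points."""
    d = np.linalg.norm(points - x, axis=1)
    d = np.sort(d[d > 1e-12])[:k]   # exclude x itself if it is in the set
    return d.mean()

def hs_score(points, p, k=5):
    """Risk score of shipment p relative to the convex space of points."""
    hull = ConvexHull(points)
    center = points.mean(axis=0)    # center of the convex space

    # Distance to the border: the smallest distance from p to a facet
    # plane. hull.equations rows are [unit normal | offset] with outward
    # normals, so normal @ x + offset <= 0 holds for interior points.
    d_b = np.min(-(hull.equations[:, :-1] @ p + hull.equations[:, -1]))
    d_c = np.linalg.norm(p - center)

    f_b = d_c / d_b                                       # border factor
    f_d = avg_knn_dist(points, p, k) / avg_knn_dist(points, center, k)
    return f_b * f_d                                      # HS-score(p)
```

For a point near the hull border, d_b is small, so the score grows past one, matching the investigation threshold described above.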
6 Experiments and Discussion of Results
This section illustrates the sets of experiments conducted to evaluate the performance of the proposed convex-based outlier detection approach.
Several sets of experiments were conducted to evaluate the accuracy of the proposed approach. The input dataset represents 500,000 shipment declarations that occurred during 2018, obtained from Dubai Customs. As a pre-processing step, the declaration applications are grouped based on the HS-Code, and the lowest 10% of these declaration applications are removed. The objective of this step is to increase the accuracy since, as mentioned, a low number of declarations for a given HS-Code might reduce the anticipated performance. Once this filtration step is performed, the declarations are grouped based on the risk factor into safe and risky groups. To determine the points used for each convex space representation, the k-means-- (k-means minus minus) algorithm is employed to remove the points on the border of the safe space and the risky space. The k-means-- is an extension of the well-known k-means algorithm in which the farthest n% of the points are removed from consideration; in the proposed approach, this removed n% represents the border of the convex space. The percentage is eventually converted to an absolute value that represents the number of removed points. The HS-Codes are grouped based on the number of shipments belonging to each HS-Code. Accordingly, in the presented experiments, the percentage of selected HS-Code groups is varied and the impact of the k value in k-means-- is studied. In addition, the testing sample was prepared by selecting a random 10% from each resulting HS-Code convex cluster.
To illustrate the impact of the percentage of removed points in k-means-- on the overall accuracy of the proposed approach, several experiments were executed varying this percentage, with the k value set to 5 and the percentage of selected HS-Code groups set to 25%. Using k-means-- as a pre-processing step aims to reduce the possibility of having noisy data around the convex cluster. The percentage measure simplifies the experiment and is eventually converted to a k value as an input parameter for the k-means-- algorithm. As illustrated in Fig. 4, increasing the percentage of eliminated points above 15% reduces the accuracy of the proposed approach. Therefore, the impact of this percentage on the approach's performance needs to be taken into consideration: increasing this parameter without careful analysis hurts performance, since it reduces the number of shipments available for each convex space representation. On the other hand, a low percentage (≤ 15%) improves performance, since it removes the points located at the border of the space.
Figure 5 illustrates the impact of changing the percentage of selected HS-Code groups on the accuracy of the proposed algorithm. In this experiment, the assumption is that a shipment with a riskiness score greater than one is potentially an outlier and therefore requires further investigation. From the figure, it is evident that increasing the percentage of selected HS-Code groups has a stable impact on the accuracy until the percentage reaches 55%, after which the accuracy starts to drop. This is attributed to the fact that large groups obtain high accuracy
Fig. 4 Performance evaluation when changing the k value in k-means--
due to the number of shipments in these groups. On the other hand, using any outlier mechanism on groups with a low number of shipments obtains lower accuracy.
Finally, to investigate the relationship between the k-nearest-neighbor parameter and the accuracy of the proposed approach, several experiments were executed varying the value of k in the risk identification step. Figure 6 shows the performance results for these experiments. As illustrated in Fig. 6, increasing the value of k up to 5 increases the accuracy of the proposed algorithm, whereas raising k above 8 reduces its performance. Therefore, the impact of the k value on the definition of a point's neighborhood needs to be taken into consideration: increasing k enlarges each point's neighborhood group, which improves the possibility of detecting density-based outliers. However, increasing this value without performing
Fig. 5 Performance evaluation when changing the percentage of the selected HS-Code groups
Fig. 6 Performance evaluation when changing the k value in k-nearest neighbor
a sensitivity analysis will result in reduced performance, since overly large neighborhoods reduce the possibility of detecting local outliers.
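As a sketch of the k-means-- pre-processing step used in these experiments (cluster, then drop the farthest n% of points before building the hull), consider the snippet below; the use of scikit-learn, the single-pass simplification, and the parameter values are our assumptions, not the study's exact implementation.

```python
# Hedged sketch of the k-means-- pre-processing step: cluster, then drop
# the farthest n% of points as border/noise before the convex hull is
# built. A single pass is a simplification; scikit-learn usage and the
# parameter values are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_minus_minus(points, n_clusters=3, drop_fraction=0.10):
    """Return points with the farthest drop_fraction removed."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
    dist = np.linalg.norm(points - km.cluster_centers_[km.labels_], axis=1)
    keep = np.argsort(dist)[: int(len(points) * (1 - drop_fraction))]
    return points[keep]
```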
7 Conclusion and Future Work
The trade supply chain is characterized by operating silos. Information is shared among parties using paper documents, which makes it vulnerable to manipulation and forgery. The role of the customs administration is to safeguard society and local businesses from fraud and criminal activities. As part of the customs core functions, post clearance audit plays an important role in which the past transactions of a company are examined for non-conformance. This study aims to provide a mechanism that helps in detecting abnormal activities in companies' trading behavior. To achieve this objective, a convex-based approach was proposed. This approach works by creating a relationship between shipments in order to clearly interpret the distances between them. Accordingly, in the proposed approach, the shipments are represented as points in a multi-dimensional space. The approach was able to reliably determine shipments that pose security issues, and the performance results indicated the potential of adopting it in customs administrations, as it achieved an accuracy of 87%.
Using the analytical capabilities of the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology, customs administrations will be able to assess company behavior and detect outliers in the data they receive. The proposed clustering allows company-related data to be analyzed to detect behaviors that are abnormal relative to companies in the same line of business. This technique provides much-needed capabilities to protect revenue and secure the trade supply chain from illegitimate activities. The approach will significantly reduce or eliminate false audits and
provide better resource management, thus reducing operating costs for both customs and the entity being audited.
As part of our future work, investigating the use of other mechanisms for shaping the convex space, such as LOF, may significantly improve the performance of the proposed approach. Additionally, the approach can be expanded with a hierarchical scheme that covers the situation where the number of shipments for a specific HS-Code is not sufficient for an accurate analysis; this can be established by running an additional level in which the nodes represent groups under the HS-Code header. Moreover, the proposed algorithm can be considered an off-chain component that complements the blockchain-based framework proposed in [26].
References
1. O. Alqaryouti, K. Shaalan, Trade facilitation framework for e-commerce platforms using blockchain. Int. J. Bus. Inf. Syst. (in press)
2. W.C.O. WCO, Post-Clearance Audit (PCA) (2018)
3. H. Juma, K. Shaalan, I. Kamel, Customs-based distributed risk assessment method, in Parallel Architectures, Algorithms and Programming (Singapore, 2020), pp. 417–429. https://doi.org/10.1007/978-981-15-2767-8_37
4. G.E. Blelloch, Y. Gu, J. Shun, Y. Sun, Randomized incremental convex hull is highly parallel, in Annu. ACM Symp. Parallelism Algorithms Archit. (2020), pp. 103–115. https://doi.org/10.1145/3350755.3400255
5. V. Ceronmani Sharmila, R. Kiran Kumar, R. Sundaram, D. Samyuktha, R. Harish, Credit card fraud detection using anomaly techniques, in 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT) (2019), pp. 1–6. https://doi.org/10.1109/ICIICT1.2019.8741421
6. N. Malini, M. Pushpa, Analysis on credit card fraud identification techniques based on KNN and outlier detection, in 2017 Third International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), Feb 2017, pp. 255–258. https://doi.org/10.1109/AEEICB.2017.7972424
7. K. Almiani, A. Viglas, Y. Lee, R. Abrishambaf, Peripheral nodes and their effect in path planning in networks. Int. J. Ad Hoc Ubiquitous Comput. 27(3), 157–170 (2018). https://doi.org/10.1504/IJAHUC.2015.10001796
8. K. Almiani, S. Chawla, A. Viglas, The effect of outliers in the design of data gathering tours, in 2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming, Jul 2014, pp. 209–214. https://doi.org/10.1109/PAAP.2014.23
9. Z. Gao, Application of cluster-based local outlier factor algorithm in anti-money laundering, in 2009 International Conference on Management and Service Science, 08 Sept 2009, pp. 1–4. https://doi.org/10.1109/ICMSS.2009.5302396
10. M.-C. Chen, R.-J. Wang, A.-P. Chen, An empirical study for the detection of corporate financial anomaly using outlier mining techniques, in 2007 International Conference on Convergence Information Technology (ICCIT 2007), Nov 2007, pp. 612–617. https://doi.org/10.1109/ICCIT.2007.4420326
11. Z. Gan, X. Zhou, Abnormal network traffic detection based on improved LOF algorithm, in 2018 10th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Aug 2018, vol. 1, pp. 142–145. https://doi.org/10.1109/IHMSC.2018.00040
12. T. Badriyah, L. Rahmaniah, I. Syarif, Nearest neighbour and statistics method based for detecting fraud in auto insurance, in 2018 International Conference on Applied Engineering (ICAE), Oct 2018, pp. 1–5. https://doi.org/10.1109/INCAE.2018.8579155
13. K. Bhaduri, B.L. Matthews, C.R. Giannella, Algorithms for speeding up distance-based outlier detection, in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining—KDD '11 (2011), p. 859. https://doi.org/10.1145/2020408.2020554
14. O. Alqaryouti, N. Siyam, K. Shaalan, Customs Valuation Assessment Using Cluster-based Approach (2022). https://doi.org/10.21203/rs.3.rs-1288941/v1
15. Z. Alkashri, O. Alqaryouti, N. Siyam, K. Shaalan, Mining Dubai government tweets to analyze citizens' engagement, in Recent Advances in Intelligent Systems and Smart Applications (Springer, 2021), pp. 615–638
16. U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, From data mining to knowledge discovery in databases. Am. Assoc. Artif. Intell. 17(3), 37–54 (1996). https://doi.org/10.1007/978-3-319-18032-8_50
17. N. Siyam, O. Alqaryouti, S. Abdallah, Mining government tweets to identify and predict citizens engagement. Technol. Soc. 60, 101211 (2020). https://doi.org/10.1016/j.techsoc.2019.101211
18. A.A. Saa, M. Al-Emran, K. Shaalan, Mining student information system records to predict students' academic performance, in International Conference on Advanced Machine Learning Technologies and Applications (2019), pp. 229–239
19. S. Zaza, M. Al-Emran, Mining and exploration of credit cards data in UAE, in 2015 Fifth International Conference on e-Learning (econf) (2015), pp. 275–279
20. A. Wahdan, S. Hantoobi, M. Al-Emran, K. Shaalan, Early detecting students at risk using machine learning predictive models, in International Conference on Emerging Technologies and Intelligent Systems (2021), pp. 321–330
21. A. Azevedo, M.F. Santos, KDD, SEMMA and CRISP-DM: a parallel overview, in MCCSIS'08—IADIS Multi Conf. Comput. Sci. Inf. Syst., Proc. Informatics 2008 and Data Mining 2008, Jan 2008 (2008), pp. 182–185
22. C. Pete et al., Crisp-Dm 1.0, CRISP-DM Consortium (2000), p. 76
23. G. Mariscal, Ó. Marbán, C. Fernández, A survey of data mining and knowledge discovery process models and methodologies. Knowl. Eng. Rev. 25(2), 137–166 (2010). https://doi.org/10.1017/S0269888910000032
24. D.L. Olson, Data mining in business services. Serv. Bus. 1(3), 181–193 (2007). https://doi.org/10.1007/s11628-006-0014-7
25. C.B. Barber, D.P. Dobkin, H. Huhdanpaa, The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22(4), 469–483 (1996). https://doi.org/10.1145/235815.235821
26. O. Alqaryouti, Customs Trade Facilitation and Compliance for Ecommerce using Blockchain and Data Mining, Thesis, The British University in Dubai (BUiD) (2021) [Online]. Available at: https://bspace.buid.ac.ae/handle/1234/1886. Accessed 23 Jan 2022
Spatial Accessibility to Hospitals Based on GIS: An Empirical Study in Ardabil Saeed Barzegari, Ibrahim Arpaci, and Zahra Mahmoudvand
Abstract The accessibility and distribution of healthcare facilities have great significance not only for protecting the basic human right to healthcare but also for maintaining social stability. This study aimed to describe the accessibility of urban and rural residents of Ardabil county to hospitals and the location-allocation of a new public hospital in 2020. The study focused on Ardabil county of Iran as an empirical case and acquired the travel time to eight public and private hospitals by vehicle and on foot through the network analysis of ArcGIS version 10.3. For the location-allocation of a new public hospital, we used the weighted overlay toolset to combine maps of population density, access to public hospitals based on traveling time and distance, cold spots of population count in each access time and distance to the public hospitals, street type, and traffic speed. The maximum travel time by car was 59 min, and the mean times of access to the public, private, social security, psychiatry, and children's hospitals, and to all hospitals, were 7:30, 7:40, 6:52, 8:00, 9:10, and 5:24 min, respectively. The mean time of access to public hospitals after adding a new hospital decreased from 7:30 to 5:40 min. According to the results, all residents of Ardabil county have appropriate spatial access to hospitals by car; in walking mode, however, access for many residents, especially the rural population, was not desirable. We suggest using GIS capabilities to distribute hospitals fairly and to perform location-allocation for establishing new hospitals that meet all criteria of an accessible hospital.
Keywords Geographic information system · GIS · Spatial access · Location-allocation
S. Barzegari (B)
Ardabil University of Medical Sciences, Ardabil, Iran
e-mail: [email protected]
I. Arpaci
Department of Computer Education and Instructional Technology, Tokat Gaziosmanpasa University, 60250 Tokat, Turkey
e-mail: [email protected]
Z. Mahmoudvand
Department of Health Information Technology, School of Allied Medical Sciences, Mazandaran University of Medical Sciences, Sari, Iran
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Al-Emran and K. Shaalan (eds.), Recent Innovations in Artificial Intelligence and Smart Applications, Studies in Computational Intelligence 1061, https://doi.org/10.1007/978-3-031-14748-7_20
1 Introduction
The spatial accessibility and fair distribution of health and medical facilities in a geographical area are among the most important characteristics of social and human services growth [1, 2]. The phrase "geographical, physical, or spatial accessibility" refers to the easy access of individuals in a specific region to health facilities on time and within a short distance [3–5]. Unequal availability of health services is a major obstacle to improving global healthcare services [5, 6]. There is a direct relationship between access to healthcare facilities and population health. For this reason, in all developed and developing countries, the appropriate geographical distribution of health facilities is a major health policy issue [7]. According to Lankarani et al., more than 90% of Iranians have fair access to primary healthcare services; however, a 24-year difference in life expectancy among provinces is a cue of inequality in the sharing of healthcare services [8]. In a study conducted in Ardabil, residents declared that one of their most important health problems was the inaccessibility of health centers [9]. Inequality and lack of access to healthcare services have also significantly influenced the late diagnosis of breast cancer in Iranian women [10], as well as the prevention of diabetes in Iran [11].
Studying spatial access has great potential to help health policymakers and managers decide how to achieve equity in hospital access. Information on access to hospitals, combined with demographic characteristics, can be used in a variety of contexts [12]. The best way to measure spatial accessibility is to use a geographic information system (GIS). Using this system and its capabilities, such as cartography and network analysis, it is possible to produce valid data and discuss with certainty the distance and time spent on access [13, 14]. GIS network analysis is a branch of spatial analysis that analyzes flows through a network, modeled as a set of links and nodes, to model the service areas of healthcare facilities [15]. This study aimed to calculate spatial access to the hospitals of Ardabil county and to perform location-allocation for establishing a new public hospital that decreases the mean access time. It is the first study on spatial access to hospitals in Iran using GIS and network analysis.
2 Method
Ardabil province is located in northwestern Iran, south of the Aras River, east of Eastern Azerbaijan province, west of Gilan province, and south of the Republic of Azerbaijan. Ardabil encompasses 17,800 km², with 10 towns and more than 1700 villages, and has a population of 1.248 million with a population density of 70 per km². Ardabil county is the capital of the province, with a population of 590,000 in 2012, all of Azerbaijani ethnicity [16, 17]. The city has a population of 368,000 (65.42%) in the urban area and 208 villages with 222,000
population (37.63%). The study aimed to determine the spatial access to eight public and private hospitals in Ardabil county using GIS. Six hospitals were public, including the children's, psychiatry, and social security insurance hospitals, and two hospitals were private. The network analysis extension of ArcGIS version 10.3 was used to calculate spatial access. Since traffic speed differs across roads, a longer distance does not always imply a longer access time, so network analysis can provide valid results [12]. To perform the network analysis, the map of the county (including the city and villages), the dispersion of residents (villages as points and city blocks based on the last census), and roads and streets (road type and mean road speed information) were used. For access either by vehicle or on foot, six logical models were considered, as listed below (a small illustrative sketch follows the list):
1. Access to public hospitals (3 hospitals: Fatemi, Alavi, Emam Khomeini)
2. Access to private hospitals (2 hospitals: Arta and Ghaem)
3. Access to the social security hospital (1 hospital: Sabalan)
4. Access to the children's hospital (1 hospital: Bou Ali)
5. Access to the psychiatry hospital (1 hospital: Isar)
6. Access to all hospitals excluding the psychiatry and children's hospitals (6 hospitals: Sabalan, Fatemi, Alavi, Emam Khomeini, Arta, and Ghaem)
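The snippet below is an illustrative sketch of the network-analysis idea behind these models (not the authors' ArcGIS 10.3 workflow): each road segment's travel time is its length divided by the segment's mean speed, and the access time to a hospital is the shortest-path travel time over the network. The node names, lengths, and speeds are invented.

```python
# Illustrative sketch of GIS network analysis (not the ArcGIS workflow):
# edge travel time = segment length / segment speed; access time is the
# shortest-path travel time. The toy road network below is invented.
import networkx as nx

G = nx.Graph()
edges = [  # (node u, node v, length in km, mean speed in km/h)
    ("home", "a", 1.2, 30),
    ("a", "b", 2.5, 50),
    ("b", "hospital", 0.8, 30),
    ("a", "hospital", 4.0, 60),
]
for u, v, length, speed in edges:
    G.add_edge(u, v, minutes=60.0 * length / speed)

t = nx.shortest_path_length(G, "home", "hospital", weight="minutes")
print(f"access time: {t:.1f} min")  # 6.4 min via the direct a-hospital road
```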
Considering that walking speed at different ages varies from 4 to 7 km/h, access at different pace speeds was investigated in this study. The mean walking speed for pregnant women, elderly, and middle-aged persons is 4–5 km/h, and for young women and men it is 6–7 km/h [18]. Individuals with an access time of one hour on foot [19] or 30 min by vehicle [20] are considered to have optimal spatial access to healthcare services. The required access time and the mean access time for each of the six models were calculated for both walking and driving.
Location-allocation models determine optimal locations based on travel distance, travel time, or other forms of cost functions; together with accessibility measures, they provide a framework to improve healthcare services. We used the location-allocation analysis of GIS to identify optimal locations for a new public hospital based on the following maps: population density, access to public hospitals based on traveling time, access to public hospitals based on distance, cold spots of population count in each access time and distance to the public hospitals, street type, and traffic speed. We combined these maps through the weighted overlay toolset.
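A toy sketch of the weighted-overlay step follows: each criterion map is scored on a common scale and the suitability surface is their weighted sum, with the best cell taken as a candidate site. The weights, the raster size, and the random scores are our assumptions, not the values used in the study.

```python
# Toy sketch of a weighted overlay: suitability = weighted sum of
# criterion rasters scored on a common 0-1 scale. Weights and the 4x4
# rasters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
criteria = {  # higher score = more suitable for the new hospital
    "population_density":  rng.random((4, 4)),
    "poor_access_to_hosp": rng.random((4, 4)),
    "traffic_speed":       rng.random((4, 4)),
}
weights = {"population_density": 0.4, "poor_access_to_hosp": 0.4,
           "traffic_speed": 0.2}

suitability = sum(w * criteria[k] for k, w in weights.items())
best_cell = np.unravel_index(np.argmax(suitability), suitability.shape)
print("best candidate cell:", best_cell)
```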
3 Results
The results showed that the maximum travel time by car was 56 min; in fact, 100% of the residents of Ardabil city can reach any type of hospital in less than an hour. Spatial access to hospitals at various time intervals using a car is presented in Table 1. The mean times of access to the public, private, social security, psychiatry, and children's hospitals, and to all hospitals excluding the psychiatry and children's hospitals,
Table 1 Accessibility to hospitals by vehicle (% of population)

Time (min)  Public  Private  Social security  Psychiatry  Children  All(a)
0–5         50.19   47.37    54.68            41.08       30.24     83.76
5–10        36.78   39.35    33.4             43.43       53.25     4.42
10–15       2.70    2.84     3.14             4.82        5.45      3.22
15–20       2.79    3.15     2.74             3.13        2.75      3.23
20–25       2.89    2.54     1.76             2.81        2.05      1.41
25–30       1.71    1.75     1.59             1.73        2.45      1.42
30–60       2.94    3.0      2.69             3.0         3.81      2.53

(a) All hospitals excluding children and psychiatry
using a car were 7:30, 7:40, 6:52, 8:00, 5:25, and 5:24 min, respectively. The maximum required access times were 55, 56, 54, 55, 54, and 53 min, respectively. For residents of Ardabil city, the mean access times were 5:32, 5:40, 4:50, 5:50, 3:30, and 3:28 min, and the maximum access times were 11, 11, 10, 11, 6, and 6 min, respectively.
Traveling time and spatial access in pedestrian mode for speeds of 4, 5, 6, and 7 km/h are presented in Tables 2 and 3. The mean access times to the public, private, social security, and psychiatry hospitals, to all hospitals excluding psychiatry and children's, and to the children's hospital at a speed of 4 km/h were 84:30, 79:40, 91:40, 113:00, 67:10, and 84:25 min, respectively. Children and elderly people are included in this category (4 km/h walking speed). When only the urban area was considered, the mean access times were 57:25, 54:30, 70:05, 81:25, 48:47, and 65:50 min, and the maximum access times were 120, 105, 160, 160, 90, and 120 min, respectively.
The mean access times at a speed of 5 km/h were 65:40, 63:15, 78:05, 89:06, 55:00, and 69:45 min, respectively; middle-aged people are included in this category. When only the urban area was considered, the mean access times were 46:05, 43:38, 55:30, 63:05, 33:10, and 52:40 min, and the maximum access times were 95, 85, 130, 135, 80, and 95 min, respectively.
The mean access times at a speed of 6 km/h were 55:39, 53:15, 66:00, 76:35, 47:20, and 57:40 min, respectively; young women are included in this category. When only the urban area was considered, the mean access times were 38:15, 36:15, 45:50, 52:22, 27:30, and 43:45 min, and the maximum access times were 80, 70, 110, 105, 60, and 80 min, respectively.
The mean access times at a speed of 7 km/h were 46:40, 45:31, 54:29, 64:40, 39:30, and 50:15 min, respectively; young men are included in this category. When only the urban area was considered, the mean access times were 32:45, 31:10, 39:20, 45:00, 23:35, and 37:40 min, and the maximum access times were 70, 60, 95, 90, 50, and 70 min, respectively.
As the pace speed increased from 4 to 7 km/h, the rate of access to public hospitals within one hour was 66.46, 80.59, 84.22, and 85.79%, respectively. For the social security hospital, it was 48.56, 61.18, 74.56, and 83.19%; for private hospitals, it was 73.84, 82.94,
Table 2 Accessibility to Ardabil hospitals in pedestrian mode (walking) at 4 and 5 km/h (% of population; each cell gives 4 km/h / 5 km/h)

Time (min)  Public       Private      Social security  Psychiatry   Children     All(a)
0–5         0.36 / 0.68  0.46 / 0.68  0.00 / 0.02      0.01 / 0.02  0.04 / 0.11  0.8 / 1.4
5–10        2.7 / 3.9    1.4 / 2.8    0.47 / 1.1       0.07 / 0.31  0.91 / 2     4.3 / 6.6
10–15       3.9 / 6      3.1 / 5.8    2.3 / 4.6        0.51 / 1     2.3 / 3.4    7.3 / 12.8
15–30       18.1 / 30    22.7 / 32.8  15.8 / 20.4      5.3 / 10.4   13.8 / 24.6  37.7 / 44.6
30–45       27 / 24.4    27.2 / 29.4  16.3 / 18.9      12.6 / 19    27.3 / 28.6  25.2 / 17.6
45–60       14.5 / 15.7  19 / 11.6    13.8 / 16.1      15.8 / 20.9  17.5 / 18.8  7.2 / 2.1
60–120      17.8 / 6.7   11.1 / 4.7   35.7 / 25.8      45 / 35      24.6 / 10.4  2.9 / 2.6
120–180     1.9 / 2.6    2.1 / 2.6    3.7 / 3.6        5 / 2.5      1.8 / 2.6    1.9 / 3
180–240     3.1 / 3      2.7 / 2.9    2.5 / 3.4        2.6 / 3.5    2.4 / 2.8    3.1 / 3
240–300     2.3 / 2.6    2 / 2.4      3.3 / 1.5        2.5 / 2.6    2.4 / 2.4    2.2 / 1.8
≥300        4.5 / 8.2    4.5 / 6.2    4.5 / 10.6       4.8 / 6.9    4.5 / 7.4    4.5 / 8.4

(a) All hospitals excluding children and psychiatry
Table 3 Accessibility to Ardabil hospitals in pedestrian mode (walking) at 6 and 7 km/h (% of population; each cell gives 6 km/h / 7 km/h)

Time (min)  Public       Private      Social security  Psychiatry   Children     All(a)
0–5         1.2 / 2.1    0.91 / 1.3   0.02 / 0.29      0.03 / 0.04  0.38 / 0.69  2.3 / 3.5
5–10        5.6 / 6.8    4.1 / 6.2    2.7 / 4.3        0.57 / 0.95  2.8 / 4      10.1 / 13.7
10–15       7.5 / 10.7   9.8 / 13.9   6.7 / 9.3        1.5 / 2.9    5.4 / 8.1    17.6 / 23.4
15–30       39.2 / 42.1  41 / 44.4    25.4 / 28.1      16.5 / 22.6  35.8 / 42    46.3 / 40.8
30–45       22.4 / 20    24.9 / 18.2  19.2 / 23.2      23.8 / 29.1  26.5 / 24.7  8.2 / 3.7
45–60       8.3 / 4.1    4.6 / 1.9    20.6 / 18        21.6 / 18.9  12.2 / 6.3   0.92 / 0.86
60–120      3.7 / 4.2    3.1 / 4.3    13.7 / 6.6       23.4 / 14.1  5.3 / 4      2.7 / 4.2
120–180     4.2 / 3.6    4 / 3.2      4.1 / 4.4        3.8 / 4.6    4.2 / 3.5    4.7 / 4.4
180–240     3.2 / 2.9    3 / 3.1      2.7 / 2.2        3.1 / 3.1    2.6 / 3.4    2.3 / 2.2
240–300     1.7 / 2.3    1.7 / 2.2    1.8 / 2.4        2.4 / 2.1    2.6 / 1.9    1.9 / 2.1
≥300        1.3 / 2.9    1.3 / 3.2    1.3 / 3.3        1.6 / 2.3    1.5 / 3      1.2 / 3

(a) All hospitals excluding children and psychiatry
Continuing with walking speeds of 4–7 km/h, the within-one-hour access rate for private hospitals was 73.84, 82.94, 85.30 and 85.90%, respectively. For the psychiatry hospital, it was 34.26, 51.61, 64.09 and 74.44%, respectively. For all private and public hospitals excluding psychiatry, it was 82.52, 85.00, 85.42, and 85.93%, respectively. For the children's hospital, it was 38.18, 77.34, 83.01 and 85.71%, respectively.

The weighted overlay yielded 11 optimal locations for a new public hospital, and we examined the mean access time for each candidate location.
The best case decreased the mean access time to public hospitals from 7:30 to 5:40 min. Within the first 5 min, 81.56% of people could access these hospitals; 6.83% needed 5–10 min, 3.15% needed 10–15 min, 3.06% needed 15–20 min, and 5.6% needed 20–54 min. The maximum access time was 54 min. For urban residents, the mean access time decreased from 5:32 to 4:10 min, and the maximum access time decreased from 11 to 7 min.
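The weighted-overlay step can be illustrated with a minimal sketch; the criterion layers follow those named in this chapter (access time, population density, traffic speed), but the weights, rasters, and suitability threshold below are hypothetical, not the study's calibrated values:

import numpy as np

# Combine normalized criterion rasters into one suitability raster.
def weighted_overlay(layers, weights):
    score = np.zeros(next(iter(layers.values())).shape)
    for name, w in weights.items():
        layer = layers[name]
        norm = (layer - layer.min()) / (layer.max() - layer.min())  # scale to 0..1
        score += w * norm
    return score

rng = np.random.default_rng(1)
shape = (200, 200)
layers = {
    "access_time": rng.gamma(2.0, 15.0, size=shape),          # high = poorly served
    "pop_density": rng.poisson(40, size=shape).astype(float),
    "traffic_speed": rng.normal(30, 5, size=shape),
}
weights = {"access_time": 0.4, "pop_density": 0.4, "traffic_speed": 0.2}
suitability = weighted_overlay(layers, weights)

# Candidate sites: the top 0.1% most suitable cells.
threshold = np.quantile(suitability, 0.999)
rows, cols = np.where(suitability >= threshold)
print(f"{len(rows)} candidate cells above the suitability threshold")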
4 Discussion and Conclusion

The findings indicated that equity in access to public hospitals was higher than for all other hospitals; only 2.53% of the population of Ardabil county, all of them village residents, lacked fair access to these hospitals. In all cases, however, rural inhabitants had poor access to the hospitals while urban residents had fair access. In general, the lack of access to hospitals ranged from 2.53 to 3.81%. According to a study conducted in China, 34.6% of rural residents did not have access to medical services, and 1.4–10% of urban residents were not hospitalized, despite a physician's recommendation, because of a lack of geographical access [21]. In a 2006 study by Tanser et al. in South Africa on access to primary healthcare services in urban and rural populations and its impact on the use of these services, the mean access time to hospitals was found to be 170 min. There was a significant correlation between increasing distance from healthcare centers and reduced referrals: people who were 30 min away from clinics were 10 times more likely to use the facilities than those who were between 120 and 190 min away [22]. In another study in China, the mean access time to the nearest hospital in Sichuan province was 48.4 min, 60% higher than the standard access time of 30 min. Only 39.4% of the studied area could reach the nearest hospital in less than 30 min, 36.8% in 30–60 min, and the rest in 1–2 h. In general, access to hospitals was higher in densely populated areas than elsewhere [21]. In the present study, public hospitals were located in the areas with the highest population density, but the average traffic speed there was lower than in other areas.
5 Practical Implications and Theoretical Contributions

Access to the children's and psychiatric hospitals was lower for the elderly, for children, and for those accompanying children than for other groups; the lack of access for these people ranged between 17.48 and 65.74%. Except for the social security hospital, which is located on the outskirts of Ardabil, the rural population did not have adequate access to hospitals in any of the remaining cases. Limited access to healthcare services for women and children at the moment of birth and in later stages can increase child and maternal mortality and reduce their quality of life [23–25]. In addition, given that pedestrian access to the children's hospital
of Ardabil was very limited within one hour, it is necessary to study the impact of this lack of access on the mortality and quality of life of children and infants. According to the findings of Amiri et al. in 2013, early-stage diagnosis for Iranian children aged 1–59 months occurred in only 19.2% of cases in 2009. In addition, 20% of deaths occurred outside of hospitals, a drawback that demands more attention to the causes of patient deaths outside the hospital. Lack of access to healthcare services and the shortage of adequate health centers in some provinces of the country are the main causes of inequality in the mortality of children aged 1–59 months, and the unfair distribution of healthcare services can lead to an unfair distribution of mortality [25]. A study in Bolivia showed that 88% of Córdoba's people could reach primary care centers within one hour of walking [26]. Likewise, in a study conducted by Rosero in Costa Rica on access to healthcare services using GIS software, half of the residents of Costa Rica were found to live within 5 km of a hospital, while 12–14% of residents had inadequate access to services [13].

The shortest mean access time by car was to the social security hospital. Because this hospital lies in a hot spot of traffic speed, people can reach it with the least traffic and at maximum speed; however, since it is located on the border of the city, walking access to it was more limited. Considering that one of the goals of this study was to locate a public hospital to improve the accessibility of Ardabil residents, street type, traffic speed, population density, access to public hospitals, and the cold spots of population count in each access-time band and distance to the public hospitals were used. The regions with the lowest access to public centers, the highest population density, and the fastest transit speeds were considered the best places to build a new hospital. The selected location would improve the mean access time for residents of Ardabil county by two minutes and decrease the mean access time for urban residents by 40 s. With this hospital added, residents of the farthest areas of the city of Ardabil could reach a public hospital four minutes sooner. Despite the suitable accessibility of urban residents to hospitals, the rural population, especially in the north and south of the city, had more limited access. These results can serve as a tool for health planning, and health policymakers and managers should consider the accessibility of health centers and their correct and fair distribution as a basic principle. Because the province's medical and diagnostic centers for various diseases such as cancer are concentrated in Ardabil city, residents of other cities, especially in the north and south of the province, cannot reach them in less than an hour. It is therefore suggested to use GIS capabilities to distribute hospitals fairly and to apply location-allocation methods for establishing new hospitals that meet all the criteria of an accessible hospital.

Acknowledgements The authors would like to thank the deputy of research at Ardabil University of Medical Sciences.

Declarations

Conflict of Interest The authors declare that there is no conflict of interest in this study.
Funding This study was supported financially by the Ardabil University of Medical Sciences (Grant number 9304).

Ethics Approval This study was approved by the Ardabil University of Medical Sciences.
References

1. Z. Zheng et al., Spatial accessibility to hospitals based on web mapping API: an empirical study in Kaifeng, China. Sustainability 11(4), 1160 (2019)
2. M. Ahmadi et al., Geographical accessibility to the hemodialysis centers in Ardabil, Iran. J. Nephropharmacol. 11(2) (2022)
3. M.P. Kwan, J. Weber, Individual accessibility revisited: implications for geographical analysis in the twenty-first century. Geogr. Anal. 35(4), 341–353 (2003)
4. K. Witten, D. Exeter, A. Field, The quality of urban environments: mapping variation in access to community resources. Urban Stud. 40(1), 161–177 (2003)
5. S. Zhang, X. Song, Y. Wei, W. Deng, Spatial equity of multilevel healthcare in the metropolis of Chengdu, China: a new assessment approach. Int. J. Environ. Res. Public Health 16(3), 493 (2019)
6. F. Lotfi, M. Bayati, A.R. Yusefi, S. Ghaderi, O. Barati, Inequality in distribution of health care resources in Iran: human resources, health centers and hospital beds. Shiraz E-Med. J. 19(6) (2018)
7. A.A. Kiadaliri, B. Najafi, H. Haghparast-Bidgoli, Geographic distribution of need and access to health care in rural population: an ecological study in Iran. Int. J. Equity Health 10(1), 39 (2011)
8. K.B. Lankarani, S.M. Alavian, P. Peymani, Health in the Islamic Republic of Iran, challenges and progresses. Med. J. Islam Repub. Iran 27(1), 42 (2013)
9. S.S. Ahari, S. Habibzadeh, M. Yousefi, F. Amani, R. Abdi, Community based needs assessment in an urban area: a participatory action research project. BMC Public Health 12(1), 161 (2012)
10. I. Harirchi, F. Ghaemmaghami, M. Karbakhsh, R. Moghimi, H. Mazaherie, Patient delay in women presenting with advanced breast cancer: an Iranian study. Public Health 119(10), 885–891 (2005)
11. S. Noshad, M. Afarideh, B. Heidari, J.I. Mechanick, A. Esteghamati, Diabetes care in Iran: where we stand and where we are headed. Ann. Glob. Health 81(6), 839–850 (2015)
12. L. Brabyn, C. Skelly, Modeling population access to New Zealand public hospitals. Int. J. Health Geogr. 1(1), 3 (2002)
13. L. Rosero-Bixby, Spatial access to health care in Costa Rica and its equity: a GIS-based study. Soc. Sci. Med. 58(7), 1271–1284 (2004)
14. P. Apparicio, M. Abdelmajid, M. Riva, R. Shearmur, Comparing alternative approaches to measuring the geographical accessibility of urban health services: distance types and aggregation-error issues. Int. J. Health Geogr. 7(1), 7 (2008)
15. A. Murad, Using GIS for determining variations in health access in Jeddah City, Saudi Arabia. ISPRS Int. J. Geo-Inf. 7(7), 254 (2018)
16. F. Amani, N. Fouladi, A. Zakeri, S. Tabrizian, A. Enteshari-Moghaddam, S. Barzegari, Changing trend of breast cancer in Ardabil Province, Iran by age group, grading, and gender during 2003–2016. Middle East J. Cancer 12(2), 285–291 (2021)
17. F. Amani, S.S. Ahari, S. Barzegari, B. Hassanlouei, M. Sadrkabir, E. Farzaneh, Analysis of relationships between altitude and distance from volcano with stomach cancer incidence using a geographic information system. Asian Pac. J. Cancer Prev. 16(16), 6889–6894 (2015) (in English)
18. R.W. Bohannon, Comfortable and maximum walking speed of adults aged 20–79 years: reference values and determinants. Age Ageing 26(1), 15–19 (1997)
19. A. dos Anjos Luis, P. Cabral, Geographic accessibility to primary healthcare centers in Mozambique. Int. J. Equity Health 15(1), 173 (2016)
20. P.L. Delamater, J.P. Messina, A.M. Shortridge, S.C. Grady, Measuring geographic access to health care: raster and network-based methods. Int. J. Health Geogr. 11(1), 15 (2012)
21. J. Pan, H. Liu, X. Wang, H. Xie, P.L. Delamater, Assessing the spatial accessibility of hospital care in Sichuan Province, China. Geospatial Health (2015)
22. F. Tanser, B. Gijsbertsen, K. Herbst, Modelling and understanding primary health care accessibility and utilization in rural South Africa: an exploration using a geographical information system. Soc. Sci. Med. 63(3), 691–705 (2006)
23. P.W. Gething et al., Geographical access to care at birth in Ghana: a barrier to safe motherhood. BMC Public Health 12(1), 991 (2012)
24. O.M. Campbell, W.J. Graham, Lancet Maternal Survival Series steering group, Strategies for reducing maternal mortality: getting on with what works. Lancet 368(9543), 1284–1299 (2006)
25. M. Amiri, H.R. Lornejad, S.H. Barakati, M.E. Motlagh, R. Kelishadi, P. Poursafa, Mortality inequality in 1–59 months children across Iranian provinces: referring system and determinants of death based on hospital records. Int. J. Prev. Med. 4(3), 265 (2013)
26. B. Perry, W. Gesler, Physical access to primary health care in Andean Bolivia. Soc. Sci. Med. 50(9), 1177–1188 (2000)
Efficiency and Effectiveness of CRM Solutions in Public Sector: A Case Study from a Government Entity in Dubai

Orabi Habeh, Firas Thekrallah, and Khaled Shaalan
Abstract The customer relationship management (CRM) system has been adopted by many organizations to gain a better understanding of their customers' needs and to utilize that understanding efficiently. The aim is to help organizations shape their strategies and enhance the quality of their products and services. Public sector and government entities can likewise use CRM systems to understand citizens' needs and enhance the services provided accordingly. This paper presents a case study measuring the efficiency and effectiveness of CRM systems in the public sector. The case study was conducted through an anonymous survey administered at one of Dubai's government entities. The survey comprises eight questions answered by users of a CRM system in that governmental entity. The results showed that 21% of respondents are extremely satisfied with the outcomes of the CRM. Moreover, 67% of the users showed moderate satisfaction, and the remaining respondents tend to be slightly unsatisfied. An in-depth gap analysis suggested that enhancements are essential to improve users' satisfaction with CRM usage and expected outcomes.

Keywords Customer relationship management · Knowledge management · Customers knowledge · Public sector · Dubai government · Citizen's benefits
1 Introduction

Knowledge management (KM) processes play a significant role in acquiring, sharing, applying, and storing information across many sectors [5, 8, 9]. The process model of Customer Knowledge Management (CKM) introduced by Gebert et al. in 2003 defined which KM tools can be applied to CRM processes to achieve better customer knowledge management. Knowledge of customers has a positive impact on the quality of service: a CRM system can be utilized to collect knowledge from customers, which helps organizations enhance and develop
their services, thereby increasing customer satisfaction as well as profitability [26, 30]. Customer relationship management (CRM) is a crucial component of organizational success, especially for providing products and services through the internet [18]. Moreover, "the customer orientation impact" and "the technological performances of the CRM system" improve the products and services provided by an organization, which directly and positively affects customer satisfaction [19]. As the demand for analytics increases across most industries, "deriving value from structured data is now commonplace", as pointed out by [25], who provided guidance for organizations on utilizing text analysis to enhance customer experience and develop their own products.

In addition to the private sector, governmental organizations always try to satisfy citizens' needs and improve their relationship with them [27]. The United Arab Emirates is unique in terms of the number of nationalities it hosts: around 200 nationalities live in the UAE in addition to the Emirati people [31], which is a major motivation for meeting the expectations of all nationalities and local people [12]. As concluded by Aly Shaban Abdelmoteleb et al. [12], it is very important to study customers and the factors behind their happiness, as this helps in providing customized services and products that meet their needs, expectations, and satisfaction.

This paper presents a case study measuring the efficiency and effectiveness of CRM systems in the public sector. The case study analyzes an anonymous survey covering one of Dubai's government entities to measure the benefits. The survey consists of eight questions answered by 24 users of a CRM system in a governmental entity in Dubai, of whom 4 (17%) were female and 20 (83%) were male.
2 Literature Review

2.1 Organization Performance and Service Quality

Customer relationship management (CRM) systems differ from other enterprise systems such as enterprise resource planning (ERP) and supply chain management (SCM) systems in that CRM systems focus on how customers interact with organizations and on the services provided to them, which in turn affects the performance of these organizations [15]. Haislip and Richardson [15] examined in their study the suggested framework for the benefits of IT investments (Dehning and Richardson 2002, cited in [15], p. 17), which is illustrated in Fig. 1. The authors studied 87 organizations that utilized CRM systems between 2001 and 2011 in order to measure the impact of CRM systems on operational performance and business processes.
Fig. 1 Framework for the benefits of IT investments (adapted from Dehning and Richardson 2002, cited in [15], p. 17)
The authors concluded that the existence of a CRM system within an organization supports the relationship between the organization and its customers, may help improve business processes such as sales, and can increase the efficiency of operational processes. In addition, improvement would be noticed in operational performance, which can be assessed by return on assets (ROA) and cash flow from operations. In another study, Arenas-Gaitan et al. [13] examined the impact of implementing a CRM system on call center performance, using data collected from 168 call center managers through a web-based survey. The analysis of the survey answers concluded that CRM systems have a positive impact on service quality and operational efficiency in call centers, in addition to a positive impact on customer satisfaction. KM processes have a significant relationship with various variables [1, 4, 6, 7]. For instance, Tseng [29] studied the relationship between knowledge management capabilities (such as knowledge acquisition, knowledge application, and knowledge protection), customer relationship management, and service quality. The author collected a sample of 500 organizations from the largest Taiwanese corporations, compiled by China Credit Information Service. A structured survey was distributed to the marketing employees of the selected 500 organizations. Based on the analyzed survey results, the author concluded that knowledge management capabilities have a positive impact on both customer relationship management and the quality of services provided by these organizations.
Batista et al. [14] claimed that organizational success can be measured by how well organizations adapt to changes in the surrounding environment through changes in organizational behavior, not only by how they respond to the internal environment. The authors added that a CRM system can support this process, depending on how the CRM system is implemented and utilized in the organization. Their study explored the impact of effective CRM systems on organizations' capabilities to respond to external environment changes and the CRM system's contribution to staff empowerment. A quantitative research survey was conducted to collect data from 250 managers working in the Brazilian financial services sector; of these, only 116 completed responses were considered and analyzed. The authors concluded that an effective CRM system implementation would help in staff empowerment and accordingly enhance organizations' ability to respond to external changes.
2.2 CRM in Public Sector It has been shown that there is an increasing interest in implementing CRM system concept in the public sector as well as private sector [27]. The aim is to build a framework for managing customers relationship in public sector utilizing the past research results conducted over private sector. Figure 2 shows the developed framework, known as Citizen Relationship Management (CiRM). The major differences between implementation of CRM system in the private organizations and public organizations is summarized in Fig. 3. Consequently, the study concluded that public sector should define a strategy for Citizen Relationship Management (CiRM) supported by new technologies. This strategy should be built according to the citizen’s interest which will enhance and develop the relationship with the public sector. In another comparative study on between CRM implementation in public and private sectors, Iqbal Raina and Pazir [17] conducted a study comparing CRM processes in private telecommunication companies and public telecommunication companies located in two states in India. The survey results analysis reveals that CRM practices and services quality in private telecommunication companies are much better than public telecommunication companies [21]. Although the study highlighted its limitation to the area of study, it did not consider the challenges faced by public sectors such as budget limitation. Fig. 2 The CiRM framework ([27], p. 3)
Fig. 3 Differences between CRM in private and public organizations ([27], p. 5)
2.3 CRM Implementation Success Factors

Alamgir [11] conducted a study to identify the factors behind successful implementations of CRM systems through in-depth interviews with customer service managers at 10 telecommunication operators in Bangladesh. Based on the data extracted and analyzed from the interviews, several factors were identified and integrated into a model, as shown in Fig. 4. In another study, Khlif and Jallouli [19] identified different types of factors that contribute to the success of CRM system implementation. The research is based on a questionnaire survey targeting eleven Tunisian companies, from which 265 responses were validated and analyzed. Multiple factors with multiple dimensions affect the success of CRM implementation; these factors can be grouped into two categories, one related to the technologies used in the CRM system and the other to customer orientation. As claimed by Lawson-Body et al. [20], no research had been conducted to identify the factors behind either the success or the failure of CRM implementation in the
Fig. 4 Inclusive model of CRM success ([11], p. 56)
public sector. Their study therefore aimed to identify unexpected factors that would affect CRM implementation at the Small Business Development Center (SBDC) in Texas, USA, under the categories of contextual, organizational, and individual factors. One-to-one in-depth interviews were conducted with 19 consultants working at the SBDC. Three factors were identified: "accreditation review pressure" under the contextual category, "legal and licensing agreement feasibility" under the organizational category, and "internal user participation" under the individual category. Jafari Navimipour and Soltani [18] studied the impact of cost, technology acceptance, and employees' satisfaction on the implementation of CRM systems. A questionnaire survey was collected from employees working at the Azerbaijan Tax Administration in Iran. The analysis showed that all three aforementioned factors have a strong impact on the implementation of CRM systems in the studied setting. Organizational factors affecting CRM system utilization have been studied by Šebjan et al. [28]. The researchers developed a model based on the Technology Acceptance Model (TAM), summarized in Fig. 5. This choice is mainly because the TAM has been validated across several studies in the literature [2, 3, 10, 23, 24].
2.4 Customer Experience and Loyalty

Customer loyalty is defined by Rayner (1996) (cited in [22], p. 72) as "the commitment that a customer has to a particular supplier". Mandina [22] combined qualitative and quantitative research, distributing questionnaires to 150 business customers and 147 employees; the analyzed results showed that proper CRM strategies positively impact customer loyalty.
Fig. 5 Conceptual research model ([28], p. 464)
Hrnjic [16] evaluated the impact of CRM systems on student satisfaction in higher education institutes. The author developed a model for CRM strategy in higher education concerned with student satisfaction, illustrated in Fig. 6. To evaluate the model and identify the key components of student satisfaction with regard to a higher education institute's CRM strategy, a survey was distributed to 504 university students from both undergraduate and postgraduate programs. The analyzed results indicate that the student-satisfaction factors affecting CRM systems in higher education institutes relate to the management of the university organization and teaching processes, the skills of academic staff, student materials, and learning methodologies. Aly Shaban Abdelmoteleb et al. [12] studied the UAE government's experience in satisfying its customers in terms of customer happiness. The study consists of two parts: the first was a qualitative study of 50 groups of different types
Fig. 6 CRM strategy dimensions in higher education student satisfaction model ([16], p. 64)
Fig. 7 Identified drivers of happiness ([12], p. 246)
of people in the UAE (expats and locals), and the second was a quantitative study using a sample of 12,000 customers who had used governmental services during the 3 months preceding the study. Based on the results, the 16 drivers of customer happiness with government services in the UAE are summarized in Fig. 7.
3 Method and Measurement

The data for this case study were collected through an online structured survey built with an online survey tool; a URL link was distributed to the participants. The participant sample was selected based on the criteria defined in the following section. Forty users were invited, of whom 24 CRM users responded to the survey. Measurement of the collected answers was performed automatically by the online survey tool; the standard deviation of the overall results score was 5.96, which, together with the score distribution, reflects the broadly positive feedback received from participants regarding overall satisfaction with the current CRM module.
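As a minimal illustration of the kind of aggregate such a survey tool produces, the sketch below computes the mean and standard deviation over respondent scores; the 24 scores are invented, since the chapter reports only the aggregate figure:

import statistics

scores = [78, 85, 90, 72, 88, 95, 70, 82, 86, 91, 79, 84,
          77, 89, 93, 81, 75, 87, 92, 80, 83, 76, 94, 85]  # hypothetical

mean_score = statistics.mean(scores)
std_dev = statistics.stdev(scores)  # sample standard deviation
print(f"n={len(scores)}  mean={mean_score:.1f}  sd={std_dev:.2f}")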
4 Sample Selection Criteria

The survey results are based on eight questions given to selected users of the CRM in a government entity. The users were identified based on the following criteria:

• Users who actively use the CRM frequently.
• Users with a high level of expertise in CRM usage.
• Users who provide suggestions for enhancing and improving the CRM application.

The following sections present detailed insights into each survey question and the respondents' answers.
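A minimal sketch of such a selection filter follows; the user records, attribute names, and the activity threshold are all hypothetical, not taken from the entity's systems:

# Filter candidate users by the three stated criteria.
users = [
    {"name": "u1", "logins_per_week": 12, "expertise": "high", "suggested_improvements": True},
    {"name": "u2", "logins_per_week": 1, "expertise": "low", "suggested_improvements": False},
]

selected = [
    u for u in users
    if u["logins_per_week"] >= 5         # actively using the CRM frequently
    and u["expertise"] == "high"         # high level of expertise in CRM usage
    and u["suggested_improvements"]      # provided improvement suggestions
]
print([u["name"] for u in selected])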
5 Results and Analysis

5.1 CRM Understands Customers

Figure 8 shows that twenty respondents (83%) realized the benefits of CRM in understanding customers' behaviors, issues, and needs. The results therefore show that users are fully aware of the CRM's positive impact on customers' needs and understand the concept of CRM. However, some respondents (13%) selected the "slightly yes" option, which indicates that they expect better outcomes from the CRM application in understanding customers' needs.
5.2 CRM Can Solve Customers' Issues

Figure 9 shows that twenty respondents (83%) indicated that the CRM helped in solving customer issues, thus improving the organization's performance. This result suggests that management could enhance its staff reward and recognition program (i.e., KPIs) to favor the employees who resolve the largest number of customer issues through the CRM within a specific timeframe. However, three respondents (17%) indicated that the CRM only slightly helps, or does not help, in solving customer issues, which requires management's attention and an in-depth analysis of the reasons, whether they relate to a lack of periodic training or to an outdated CRM that requires improvement.
Fig. 8 Survey question 1 results
Fig. 9 Survey question 2 results
5.3 CRM Improves Business Performance

One of the essential goals of CRM is to improve business performance; thus, organizations implement CRM in order to achieve this generic goal. However, customization of the CRM application may positively or negatively affect it.
Fig. 10 Survey question 3 results
Figure 10 shows that 18 respondents (75%) believe the CRM can improve business performance, whereas 6 respondents (25%) believe the CRM is not a useful tool for improving business performance. This result is surprising and requires in-depth analysis and a detailed follow-up questionnaire with the respondents to clarify the reasons behind their answers.
5.4 CRM Importance to Business

Figure 11 shows that 22 respondents (92%) consider the CRM an important tool for the business, which leads to the positive conclusion that most users are aware of the CRM's capabilities and of its impact on individual as well as organizational performance. However, a few users (8%) see less importance of the CRM to the business. The result shown in Fig. 11 can guide the government entity to expand the utilization of the CRM in its internal processes.
5.5 CRM Usage of the Collected Data

Figure 12 shows that 15 respondents (63%) consider the CRM a useful tool for collecting data. This percentage, however, reveals a gap in the perception of CRM capabilities: one of the most essential and useful features of a CRM is the collection of data, structured and organized so that it can be utilized as valuable information in the decision-making process.
Fig. 11 Survey question 4 results
Therefore, management needs an in-depth analysis of the reasons, and the associated gaps, behind these perceptions and the results of this question.

Fig. 12 Survey question 5 results
Fig. 13 Survey question 6 results
5.6 CRM Data Improves Business and Services

Figure 13 shows that all 24 respondents (100%) confirmed the importance of CRM data in supporting the government entity's business, indicating that all respondents are aware of generic CRM standards and capabilities. However, this answer conflicts with the results shown in Fig. 12, which requires management to analyze the gaps in the current CRM module and to review its capabilities and features against the business needs and future growth. Doing so should increase users' trust in, utilization of, and perception of the importance of CRM data.
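A hedged sketch of the gap analysis implied by Sects. 5.5–5.7 compares the share of positive answers across the related data questions and flags large differences. The shares are taken from the percentages reported in the text (Q5: 63%, Q6: 100%, Q7: 96%); the 20-point flagging threshold is our assumption:

positive_share = {
    "Q5 data collection useful": 0.63,
    "Q6 data improves services": 1.00,
    "Q7 data valid and accurate": 0.96,
}

baseline = max(positive_share.values())
for question, share in positive_share.items():
    gap = baseline - share
    flag = "  <-- gap to investigate" if gap > 0.20 else ""
    print(f"{question}: {share:.0%} (gap {gap:.0%}){flag}")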
5.7 CRM Data Validity and Accuracy

Figure 14 shows that 23 respondents (96%) consider the data collected by the CRM accurate, valid, and reflective of customers' actual feedback and behavior. The CRM module selected by this government entity might therefore provide the expected outcomes; however, this result, together with the results shown in Fig. 12, can be considered evidence of a gap in the selected CRM module, which may not suit the current business objectives and goals.
Fig. 14 Survey question 7 results
5.8 CRM Overall User Satisfaction

Figure 15 shows that 21 respondents (88%) indicated overall satisfaction with CRM usage and its outcomes (extremely satisfied, 21%; moderately satisfied, 67%). Based on the results shown in Figs. 12 through 14, however, the overall satisfaction percentage cannot be considered conclusively positive for this survey, and a deeper analysis of the current CRM module against the current business objectives and future growth is required in order to close the gaps and ensure the best user satisfaction with the improved CRM.

Fig. 15 Survey question 8 results
6 Conclusion

CRM plays an essential role in business continuity, improvement, and consumer loyalty, which in turn sustain and increase organizational profit. Therefore, companies should ensure they select the CRM application that best suits their business and delivers the expected outcomes. The results of our survey indicate that management should take the following steps to eliminate the gaps identified in the collected data:

• Regularly evaluate the capabilities and features of the current CRM application against current and planned business needs, in order to make optimal decisions on improving the current CRM application.
• Organize periodic refresher training for the users.
• Conduct ad-hoc audits of the CRM users.
• Ensure the data collected from the CRM are utilized efficiently and effectively to support the decision-making process.

Acknowledgements This work is a part of a project undertaken at the British University in Dubai.
References

1. M. Al-Emran, G.A. Abbasi, V. Mezhuyev, Evaluating the impact of knowledge management factors on M-learning adoption: a deep learning-based hybrid SEM-ANN approach, in Recent Advances in Technology Acceptance Models and Theories, vol. 335 (Springer, Cham, 2021), pp. 159–172. https://doi.org/10.1007/978-3-030-64987-6_10
2. M. Al-Emran, R. Al-Maroof, M.A. Al-Sharafi, I. Arpaci, What impacts learning with wearables? An integrated theoretical model. Interact. Learn. Environ. 1–21 (2020). https://doi.org/10.1080/10494820.2020.1753216
3. M. Al-Emran, A. Granić, M.A. Al-Sharafi, N. Ameen, M. Sarrab, Examining the roles of students' beliefs and security concerns for using smartwatches in higher education. J. Enterp. Inf. Manag. 34(4), 1229–1251 (2021). https://doi.org/10.1108/JEIM-02-2020-0052
4. M. Al-Emran, V. Mezhuyev, Examining the effect of knowledge management factors on mobile learning adoption through the use of importance-performance map analysis (IPMA), in International Conference on Advanced Intelligent Systems and Informatics (2019), pp. 449–458. https://doi.org/10.1007/978-3-030-31129-2_41
5. M. Al-Emran, V. Mezhuyev, A. Kamaludin, Students' perceptions towards the integration of knowledge management processes in M-learning systems: a preliminary study. Int. J. Eng. Educ. 34(2), 371–380 (2018)
6. M. Al-Emran, V. Mezhuyev, A. Kamaludin, An innovative approach of applying knowledge management in M-learning application development: a pilot study. Int. J. Inf. Commun. Technol. Educ. (IJICTE) 15(4), 94–112 (2019). https://doi.org/10.4018/IJICTE.2019100107
7. M. Al-Emran, V. Mezhuyev, A. Kamaludin, Is M-learning acceptance influenced by knowledge acquisition and knowledge sharing in developing countries? Educ. Inf. Technol. 26, 2585–2606 (2021). https://doi.org/10.1007/S10639-020-10378-Y
8. M. Al-Emran, V. Mezhuyev, A. Kamaludin, M. AlSinani, Development of M-learning application based on knowledge management processes, in 2018 7th International Conference on Software and Computer Applications (ICSCA 2018) (2018), pp. 248–253. https://doi.org/10.1145/3185089.3185120
9. M.A. Al-Sharafi, M. Al-Emran, M. Iranmanesh, N. Al-Qaysi, N.A. Iahad, I. Arpaci, Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 1–20 (2022). https://doi.org/10.1080/10494820.2022.2075014
10. J.H. Al Shamsi, M. Al-Emran, K. Shaalan, Understanding key drivers affecting students' use of artificial intelligence-based voice assistants. Educ. Inf. Technol. 1–21 (2022). https://doi.org/10.1007/S10639-022-10947-3
11. M. Alamgir, Customer relationship management (CRM) success factors: an exploratory study. Ecoforum 4(1), 7 (2015)
12. A. Aly Shaban Abdelmoteleb, S. Kamarudin, P.N.E. Nohuddin, Data driven customer experience and the roadmap to deliver happiness. Mark. Brand. Res. 4(3), 236–248 (2017). https://doi.org/10.33844/mbr.2017.60452
13. J. Arenas-Gaitan, B. Peral-Peral, M.A. Ramon-Jeronimo, The strategic impact of technology based CRM on call centers' performance. J. Internet Bank. Commer. 20(1), 1–24 (2015)
14. L. Batista, S. Dibb, M. Meadows, M. Hinton, M. Analogbei, A CRM-based pathway to improving organisational responsiveness: an empirical study. J. Strateg. Mark. 28(6), 494–521 (2018). https://doi.org/10.1080/0965254X.2018.1555547
15. J.Z. Haislip, V.J. Richardson, The effect of customer relationship management systems on firm performance. Int. J. Account. Inf. Syst. 27, 16–29 (2016). https://doi.org/10.1016/j.accinf.2017.09.003
16. A. Hrnjic, The transformation of higher education: evaluation of CRM concept application and its impact on student satisfaction. Eurasian Bus. Rev. 6(1), 53–77 (2016). https://doi.org/10.1007/s40821-015-0037-x
17. D. Iqbal Raina, D. Pazir, Customer relationship management practices in telecom sector: comparative study of public and private companies. Int. J. Manag. Stud. (2019)
18. N. Jafari Navimipour, Z. Soltani, The impact of cost, technology acceptance and employees' satisfaction on the effectiveness of the electronic customer relationship management systems. Comput. Hum. Behav. 55, 1052–1066 (2016). https://doi.org/10.1016/j.chb.2015.10.036
19. H. Khlif, R. Jallouli, The success factors of CRM systems: an explanatory analysis. J. Global Bus. Technol. 10(2), 25–42 (2014)
20. A. Lawson-Body, L. Lawson-Body, L. Willoughby, Using action research to identify unexpected factors affecting CRM implementation. J. Appl. Bus. Res. 33(4), 757–764 (2017). https://doi.org/10.19030/jabr.v33i4.9997
21. A. Lawson-Body, L. Willoughby, L. Mukankusi, K. Logossah, The critical success factors for public sector CRM implementation. J. Comput. Inf. Syst. 52(2), 42–50 (2011). https://doi.org/10.1080/08874417.2011.11645539
22. S.P. Mandina, Contribution of CRM strategies in enhancing customer loyalty. J. Mark. Dev. Compet. 8(2), 69–87 (2014)
23. V. Mezhuyev, M. Al-Emran, M. Fatehah, N.C. Hong, Factors affecting the metamodelling acceptance: a case study from software development companies in Malaysia. IEEE Access 6, 49476–49485 (2018). https://doi.org/10.1109/ACCESS.2018.2867559
24. V. Mezhuyev, M. Al-Emran, M.A. Ismail, L. Benedicenti, D.A. Chandran, The acceptance of search-based software engineering techniques: an empirical evaluation using the technology acceptance model. IEEE Access 7, 101073–101085 (2019). https://doi.org/10.1109/access.2019.2917913
25. O. Müller, S. Debortoli, I. Junglas, J. vom Brocke, Using text analytics to derive customer service management benefits from unstructured data. MIS Q. Exec. 15(4), 243–258 (2016)
26. J. Peng, A. Lawrence, T. Koo, Customer knowledge management in international project: a case study. J. Technol. Manag. China 4(2), 145–157 (2009). https://doi.org/10.1108/17468770910965000
27. A. Schellong, CRM in the public sector: towards a conceptual research framework, in Proceedings of the 2005 National Conference on Digital Government Research, January 2005 (2005), pp. 326–332. https://doi.org/10.1145/1065226.1065342
28. U. Šebjan, S. Bobek, P. Tominc, Organizational factors influencing effective use of CRM solutions. Procedia Technol. 16, 459–470 (2014). https://doi.org/10.1016/j.protcy.2014.10.113
29. S.M. Tseng, Knowledge management capability, customer relationship management, and service quality. J. Enterp. Inf. Manag. 29(2), 202–221 (2016). https://doi.org/10.1108/JEIM-04-2014-0042
30. S.M. Tseng, P.H. Wu, The impact of customer knowledge and customer relationship management on service quality. Int. J. Qual. Serv. Sci. 6(1), 77–96 (2014). https://doi.org/10.1108/IJQSS-08-2012-0014
31. Wikipedia, Expatriates in the United Arab Emirates (Wikipedia, 2014)