Fadi Al-Turjman Editor
Real-Time Intelligence for Heterogeneous Networks Applications, Challenges, and Scenarios in IoT HetNets
Editor
Fadi Al-Turjman
Artificial Intelligence Engineering Department, Research Center for AI and IoT, Near East University, Mersin, Turkey
ISBN 978-3-030-75613-0    ISBN 978-3-030-75614-7 (eBook)
https://doi.org/10.1007/978-3-030-75614-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
This book is dedicated to my great parents, my wife, my little stars, my brother, my sisters, and all my beloved ones, to honor their major support and encouragement over the course of my entire life. My mother, for the life-long tender care, thank you is too little. The deepest gratitude I have for you. I love you! My father, the greatest of the greatest. The prophet of this age. Indeed you are the most brilliant teacher I have ever seen. I owe you a lot, and I believe your support was the key factor in any achievement I have ever made. Thank you! My wife, my lady. Your continuous support is always appreciated. I love you. Also, I shouldn’t forget my parents in law. I love you too! My brother and sisters, the best friends in my life. The biggest support I ever had. Thanks for everything, especially your beautiful and true “love.” My little stars, the hope of my life and beyond. The reason for my hard work and my continuous success. I’m very thankful for your gifted presence. I love you all. Fadi Al-Turjman
Preface
The heterogeneous network, or Het-Net, has significantly affected our daily lives around the globe. It demonstrates the real benefit of interconnected devices and enabling technologies via what we call the Internet of Things (IoT) paradigm. It integrates tightly with the existing cloud infrastructure to impact multiple functional, multidisciplinary fields. Intelligent IoT-enabled solutions have made revolutionary advances in all these fields, and in the telecommunication field specifically, reducing the time needed for data discovery, processing, and communication. This book opens the door to several exciting research topics and applications in this Het-Net IoT era. In this book, we address significant issues towards realizing the future vision of artificial intelligence (AI) in IoT-enabled spaces. Such AI-powered IoT solutions will be employed to satisfy critical conditions towards further advances in our daily smart lives. This book overviews the associated issues and proposes the most up-to-date alternatives. The objective is to pave the way for AI-powered IoT-enabled spaces in next-generation Het-Net technologies and to open the door for further innovations.

Mersin, Turkey
Fadi Al-Turjman
Contents
1 Machine Learning Applications for Heterogeneous Networks (Saad Aslam, Fakhrul Alam, Houshyar Honar Pajooh, Mohammad A. Rashid, and Hafiz M. Asif)
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing (Altan Koçyiğit and Enver Ever)
3 Cooperative Mobile Traffic Offloading in Mobile Edge Computing for 5G HetNet IoT Applications (B. D. Deebak)
4 Nuclear Radiation Monitoring in the Heterogeneous Internet of Things Era (Akbar Abbasi, Fatemeh Mirekhtiary, and Hesham M.H. Zakalya)
5 Convergence of Blockchain in IoT Applications for Heterogeneous Networks (Firdous Kausar, Mohamed Abdul Karim Sadiq, and Hafiz Muhammad Asif)
6 Measuring Clock Reliability in Cloud Virtual Machines (Aarush Ahuja, Vanita Jain, and Dharmender Saini)
7 Analysis of LTE Downlink Performance Under Heterogeneous Mobility Conditions via a Mobile App (Adeosun Nehemiah Olufemi and Fadi Al-Turjman)
8 Classification of Solid Wastes Using CNN Method in IoT Het-Net Era (Fadi Al-Turjman, Rahib H. Abiyev, Hamit Altıparmak, Meliz Yuvalı, and Şerife Kaba)
9 Characterization and Benchmarking of Message-Oriented Middleware (Aarush Ahuja, Vanita Jain, and Dharmender Saini)
10 Intelligent Mobile Application for Crime Reporting in the Heterogeneous IoT Era (Father Phuthego and Fadi Al-Turjman)
Index
About the Author
Fadi Al-Turjman received his Ph.D. in computer science from Queen's University, Canada, in 2011. He is the associate dean for research and the founding director of the International Research Center for AI and IoT at Near East University, Nicosia, Cyprus. Prof. Al-Turjman is the head of the Artificial Intelligence Engineering Department and a leading authority in the areas of smart/intelligent IoT systems and wireless and mobile networks' architectures, protocols, deployments, and performance evaluation in the Artificial Intelligence of Things (AIoT). His publication history spans over 400 SCI/E publications, in addition to numerous keynotes and plenary talks at flagship venues. He has authored and edited more than 40 books about cognition, security, and wireless sensor networks' deployments in smart IoT environments, published by well-reputed publishers such as Taylor and Francis, Elsevier, IET, and Springer. He has received several recognitions and best paper awards at top international conferences, including the prestigious Best Research Paper Award from the Elsevier Computer Communications journal for the period 2015–2018 and the Top Researcher Award for 2018 at Antalya Bilim University, Turkey. Prof. Al-Turjman has led a number of international symposia and workshops at flagship communication society conferences. Currently, he serves as a book series editor and as lead guest/associate editor for several top-tier journals, including IEEE Communications Surveys and Tutorials (IF 23.9) and Elsevier Sustainable Cities and Society (IF 7.8), in addition to organizing international conferences and symposia on the most up-to-date research topics in AI and IoT.
Chapter 1
Machine Learning Applications for Heterogeneous Networks

Saad Aslam, Fakhrul Alam, Houshyar Honar Pajooh, Mohammad A. Rashid, and Hafiz M. Asif
Next-generation (NG) cellular networks, such as beyond fifth-generation (B5G), promise data rates in the range of Tbit/s, ultralow latency, and, most importantly, mass connectivity. 5G systems, already being rolled out, are not expected to meet all of these requirements; 6G is therefore expected to address the shortfall. One of the most important aspects of the sixth-generation (6G) system is seamless global connectivity with significantly improved energy and spectrum efficiency. 6G is expected to generate a massive volume of heterogeneous data that needs to be analyzed to support various services. Since data traffic is a major constituent of cellular traffic, a low-cost and effective solution for handling it is the deployment of small cells, which improves cellular coverage and capacity. The interoperation of these small cells with macro-cells forms a heterogeneous network (HetNet). Initial studies indicate that HetNets will be an integral part of 6G. This chapter presents an overview of HetNets, discusses research challenges, and outlines how a machine learning (ML) approach can address some of the challenges.
S. Aslam (*) · F. Alam · H. H. Pajooh · M. A. Rashid
Department of Mechanical & Electrical Engineering, SF&AT, Massey University, Auckland, New Zealand
e-mail: [email protected]; [email protected]; [email protected]; [email protected]

H. M. Asif
Department of Electrical & Computer Engineering, Sultan Qaboos University, Muscat, Oman
e-mail: [email protected]
1.1 Introduction

The exponentially increasing demand from mobile users [1] poses a significant challenge for upcoming cellular networks. The next-generation networks (such as 6G) are anticipated to solve this densification problem and provide ubiquitous coverage to all users [2]. Next-generation (NG) cellular networks encompass heterogeneous networks (HetNets) that promise better coverage, energy-efficient solutions, ultrahigh data rates, and extremely low latency [3]. HetNets can help meet the capacity demand and deliver a uniform experience to cellular users. Small cells, e.g., femto- and pico-cells, coexist with macro-cells in HetNets, supporting aggressive spectrum reuse [4]. A typical HetNet architecture is shown in Fig. 1.1. Heterogeneous networks are considered vital for transforming the fourth generation of cellular networks into the next generations, i.e., the fifth and sixth. One of the reasons behind this is the ability of HetNets to accommodate dense networks. The expected benefits of HetNets are given in Fig. 1.2.
Fig. 1.1 A heterogeneous network where different cells coexist: macro-, pico-, and femto-cells connect to the Internet, with relays providing coverage extension and emergency communication
Fig. 1.2 The expectations of next-generation cellular networks from HetNets: significant capacity gain; data rates in the range of 1 Tbps; latency of less than 0.5 ms; massive simultaneous connections; and lower energy consumption with higher spectral efficiency
In HetNets, macro base stations (MBSs) are deployed to provide coverage for large areas, while overlaid lower-powered small base stations (SBSs) are used to cover relatively small areas (e.g., Wi-Fi hot spots) [5–7]. Enhancing data rates and improving spectral efficiency are the focus of today's networks. HetNets are meant to bring energy and network efficiency while addressing the capacity demand of cellular networks. Small cells in 5G HetNets are expected to support very aggressive frequency reuse to increase network capacity [8].

The deployment of Wi-Fi networks to assist cellular networks has increased, and Wi-Fi networks are considered one of the tiers of heterogeneous networks. Legacy cellular and Wi-Fi networks can interact with each other to offer superior performance to users. The 3rd Generation Partnership Project (3GPP) provided more details on this cooperation in Release 12 [9]. Widely deployed Wi-Fi networks can be loosely or tightly bound with cellular networks: in a loosely bound system, both networks are independently connected to the Internet, whereas in a tightly bound system, both networks are connected to the same scheduler and therefore cooperate at lower layers [8]. Since a large number of users demand real-time services, seamless transitions are required between cellular and Wi-Fi networks. Seamless continuity of service is possible by involving the user and control planes. To support real-time services, cellular operators deploy Wi-Fi networks and enable cellular services over them. This cooperation between cellular and Wi-Fi networks can be seen as the integration of small cells with cellular macro-cells. Traffic is accommodated such that non-real-time demands are serviced by small cells (Wi-Fi), whereas macro-cells handle the real-time traffic [10, 11]; a sketch of such a steering rule is given below.

There are several challenges facing HetNets. The integration of mmWave frequencies, load balancing, mobility management, and power control are some of the prominent issues [12]. While such challenges also existed in previous generations, their complexity increases significantly in B5G networks.
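To make the macro/small-cell traffic split concrete, the following is a minimal Python sketch of such a steering rule. The cell capacities, the admission check, and the simple real-time/non-real-time flag are illustrative assumptions, not part of any 3GPP specification.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    real_time: bool      # e.g., a voice/video call vs. a background sync
    demand_mbps: float

@dataclass
class Cell:
    name: str
    capacity_mbps: float
    load_mbps: float = 0.0

    def can_admit(self, flow: Flow) -> bool:
        return self.load_mbps + flow.demand_mbps <= self.capacity_mbps

def steer(flow: Flow, macro: Cell, wifi: Cell) -> Cell:
    """Real-time traffic stays on the macro-cell; non-real-time
    traffic is offloaded to the Wi-Fi small cell when it has room."""
    if not flow.real_time and wifi.can_admit(flow):
        target = wifi
    else:
        target = macro           # fall back to the macro-cell
    target.load_mbps += flow.demand_mbps
    return target

macro, wifi = Cell("macro", 300.0), Cell("wifi", 100.0)
for f in [Flow(True, 5), Flow(False, 40), Flow(False, 80)]:
    print(f, "->", steer(f, macro, wifi).name)
```

In a deployed system this decision would also depend on signal quality, user mobility, and operator policy; the sketch only captures the division of labour described above.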
1.2 The Emergence of Heterogeneous Information Network

Real-time HetNets constitute a large number of interconnected components. These components can represent communication systems, computing systems, users' social activities, etc. Collectively, these interconnected components can be termed information networks (IN) [13]. IN are vital components of modern cellular networks that facilitate important network-level decision-making, and they have gained a lot of attention from researchers working in the fields of machine learning and data mining. Data mining techniques are utilized to find hidden patterns from networked devices, and machine learning algorithms are then applied to train on the data and make network optimization decisions [14]. Most of the existing information analysis techniques incur a large computational burden. Moreover, many existing techniques consider unique objects and links, therefore treating networks as homogeneous; the information extracted from these networks does not consider heterogeneity. However, most real systems contain many types of interacting components, and we can model them as heterogeneous information networks (HIN) [14] having various types of objects and links.

In recent years, emphasis has been given to developing methods for information network analysis. Data mining techniques have been popularly used for this task, but most of them were developed for homogeneous networks, e.g., for clustering and link analysis [15]. Unfortunately, these techniques cannot be directly applied due to the unique characteristics of HetNets, which makes the exploration of data mining techniques a major research trend. Data mining techniques will help in exploring the hidden relationships between different entities of HetNets. The study of big data analysis has also become very important as content-centric networks have gained popularity. Heterogeneity is one of the most important characteristics of big data analytics that can be exploited for user-generated content. This can help networks optimize the decision-making process and better utilize network resources. HIN can be a significant tool to deal with the complex data generated by most applications, and it helps fuse information more efficiently since it is natural for heterogeneous networks to deal with multi-typed objects and the interactions between them. It should be noted that HetNets gather data from multiple sources; therefore, it is easier to deliver information across multiple sources as well [15].

Information fusion represents the process of merging information from numerous heterogeneous sources. Since we have access to various data sources, developing techniques for disseminating the information is an important research topic [16, 17]. Traditional data mining applications such as data schemas, protein-protein interaction, and gene regulatory networks are more suited to homogeneous networks [18–20]. Therefore, researchers need to invest more in data analytics to obtain comprehensive knowledge and understanding of data entities, including hidden patterns, data structures, and activities. Once the different data entities are aligned, new models can be explored that present more details on network trends. One such model can be found in [21], where clustering has been used to get more knowledge about
shared entities. Moreover, by fusing multiple networks, users can be more interconnected with each other, which helps information propagation (important for social networks). An example of information fusion across multiple aligned networks can be found in [22].
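To illustrate what "multi-typed objects and links" means in practice, the short sketch below builds a toy heterogeneous information network and enumerates user-cell-user meta-path instances, i.e., users related through a shared cell. All node names, edge types, and the chosen meta-path are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Typed edges of a tiny HIN: (source, relation, target) triples.
edges = [
    ("alice", "connects_to", "cell_1"),
    ("bob",   "connects_to", "cell_1"),
    ("carol", "connects_to", "cell_2"),
    ("alice", "owns",        "phone_a"),
]

# Group users by the cell they connect to.
users_per_cell = defaultdict(set)
for src, rel, dst in edges:
    if rel == "connects_to":
        users_per_cell[dst].add(src)

# Count user-cell-user meta-path instances (users sharing a cell).
related = set()
for cell, users in users_per_cell.items():
    for u, v in combinations(sorted(users), 2):
        related.add((u, v, cell))

print(related)   # {('alice', 'bob', 'cell_1')}
```

Meta-path analysis of this kind is one of the basic building blocks of the HIN methods surveyed in [15]; real systems replace the toy triples with millions of typed nodes and edges.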
1.3 Heterogeneous Networks and Machine Learning

Integration of HetNets into NG cellular networks such as 5G and 6G needs to overcome several challenges, many of which can be resolved using ML techniques. ML has found various applications ranging from finance, economics, human behavior, and speech/image processing to wireless systems [23]. One of the advantages of using ML is its ability to learn the complex relationships that exist between various optimization variables. A large number of research articles have considered ML to solve classical or emerging problems of wireless networks. The applications of ML specific to wireless networks are given in Fig. 1.3.
Fig. 1.3 Applications of machine learning specific to wireless networks: cognitive radios, device-to-device communication, HetNets, massive MIMO, energy harvesting, power allocation, interference mitigation, resource allocation and management, security, channel coding and channel estimation, synchronization, and beamforming
Diverse characteristics of HetNets need to be learned to optimize various parameters of the network [24]. Owing to these advantages, it is important to employ ML for optimizing HetNets. One of the goals of this chapter is to explain the motivation for applying ML algorithms in the context of HetNets. Various machine learning techniques exist in the literature that can be used to address the challenges of HetNets. A brief description of these techniques is provided below, followed by a small supervised-learning sketch.

I. Supervised Learning. This is a ML approach that aims to learn the relationship between input and output, given that labelled data is utilized for learning. It can be used either for classification or for regression. Support vector machines (SVMs) and K-nearest neighbors are examples of supervised learning algorithms.

II. Unsupervised Learning. Unsupervised learning algorithms aim to learn hidden functions/relations given that the data is not labelled. Among many examples, K-means clustering and principal component analysis are two popular techniques [25].

III. Reinforced Learning. The purpose of reinforced learning (RL) is to optimize an objective function; agents perform the optimization by learning from their environment.

IV. Neural Networks. As the name suggests, neural networks are composed of neurons, attached to their respective weights, that pass through an activation function to provide an output. There can be several layers in such a network, each with a different number of neurons and a different activation function. To optimize the network, different backpropagation-based optimizers, such as Adam, can be applied [26].

All of the abovementioned categories of ML can be further divided into various algorithms/techniques (summarized in Fig. 1.4).
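As a concrete instance of the supervised category, the sketch below trains a K-nearest neighbors classifier on synthetic data resembling a handover-prediction task. It assumes scikit-learn is available; the features, the labelling rule, and the thresholds are fabricated purely for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic examples: [RSSI (dBm), user speed (m/s)] -> handover (0/1).
# Here, a weak signal combined with high speed implies a handover.
X = np.column_stack([rng.uniform(-110, -50, 1000),   # RSSI
                     rng.uniform(0, 30, 1000)])      # speed
y = ((X[:, 0] < -90) & (X[:, 1] > 10)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```

Real deployments would replace the synthetic labelling rule with logged measurements and handover events; the workflow, however, is exactly the fit/score pattern shown here.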
1.3.1 Socially Aware HetNets and Machine Learning

The socially aware networks (SANs) have played a significant role in shaping today's cellular networks. SANs generate various kinds of networked data, and the literature has therefore proposed several concepts, e.g., multi-type data [27] and multi-view data [28], to model such networked data. HetNets utilize software-defined radio (SDR) to enhance information sharing among various networks [29]. SDRs can extract data from various sources (such as sensors) that can be used for network optimization, i.e., decision-making, resource allocation, etc. Location information of socially aware nodes can be utilized to help mobile operators make better resource allocation decisions [29].
Fig. 1.4 Categorization of various machine learning algorithms. Neural networks can be used for both supervised and unsupervised learning [25]:
Supervised learning: support vector machine, random forest, K-nearest neighbors, naïve Bayes, decision trees, linear regression.
Unsupervised learning: K-means clustering, principal component analysis, association rules.
Neural networks: deep neural network, recurrent neural network, convolutional neural network, auto-encoder, extreme learning machine (feedforward neural network).
Reinforced learning: Q-learning, multi-armed bandit learning, actor-critic learning, deep reinforcement learning, joint utility and strategy estimation.
Mobile users are extensively utilizing web-based services, e.g., social media, research, and online shopping. All these services generate a large quantity of data with multi-typed content [30]. The heterogeneity of data and services makes it imperative for HetNets to explore this huge data space and extract features to exploit hidden relationships and patterns. Therefore, ML models are necessary to develop intelligent solutions and possibly automate various crucial tasks; without the involvement of ML, it will be difficult to cater to socially aware networks. There are examples of functional non-ML-based approaches; however, they do not exploit the hidden information in the data and do not extract relationships among different objects, nodes, etc. Moreover, non-ML-based solutions often result in a heavy signalling load, which is not conducive to decentralized/distributed solutions.
1.3.2 Mobility Management in Heterogeneous Networks

A HetNet represents a merger of several entities; therefore, mobility management is critical for the performance of the network. Handoffs and mobility management can be a serious issue due to small-sized cells [31, 32]. Typical mobility management mechanisms are designed for macro-cells. Though these mechanisms can execute stable handovers, the presence of a large number of small cells increases complexity. It may result in frequent handovers, since mobile users can quickly
traverse various cells. It also results in load balancing issues among different network layers, and the different transmit power levels of the various cell types further contribute to loading issues in HetNets. Therefore, to enhance the performance of HetNets, handover optimization techniques need to be designed to achieve better resource allocation and scheduling [31].

The objective of handover procedures is to ensure that the user moves from one cell to another while satisfying the Quality of Service (QoS) requirements; the goal is to minimize link failures. Numerous studies in the literature investigate handover/mobility management techniques. A few studies focused on optimizing the different parameter settings, such as [33]. Another study focusing on handover parameters was presented in [34], where fuzzy optimization was applied to optimize the parameters. A few other articles explored cell range expansion techniques and focused on interference coordination among different cells of HetNets [35]. Small cells and macro-cells coordinate with each other to accommodate fast-moving users. A mixed-mode scheme, where mobility is managed by the network at the macro-cell level whereas users are autonomous in small cells, has been proposed in [36]. Increasing the handover success rate by manipulating the size of the sub-bands has also been explored [37]; the size of these sub-bands is optimized based on signal to interference and noise ratio (SINR) values. The abovementioned schemes are not dynamic and require extensive signalling for their implementation, and they do not jointly consider handover and load balancing. Machine learning-based solutions are not well investigated in this regard. ML-based implementations can be dynamic, and offline training can address the high signalling load as well; a sketch of a simple parameterized handover rule follows below.
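To show what the "parameter settings" optimized in [33, 34] typically look like, here is a minimal hysteresis/time-to-trigger test, loosely modelled on LTE-style measurement events. The parameter values and the measurement traces are invented for illustration.

```python
def handover_decision(serving_rsrp, neighbour_rsrp, hysteresis_db=3.0,
                      time_to_trigger=3):
    """Trigger a handover when the neighbour cell has been at least
    `hysteresis_db` stronger than the serving cell for
    `time_to_trigger` consecutive measurements."""
    better = 0
    for s, n in zip(serving_rsrp, neighbour_rsrp):
        better = better + 1 if n > s + hysteresis_db else 0
        if better >= time_to_trigger:
            return True
    return False

serving   = [-80, -82, -85, -88, -90]   # dBm measurement trace
neighbour = [-95, -90, -81, -83, -84]
print(handover_decision(serving, neighbour))  # True
```

Tuning `hysteresis_db` and `time_to_trigger` trades ping-pong handovers against link failures; the learning-based schemes discussed above aim to adapt such parameters per cell rather than fixing them network-wide.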
1.3.3 Interference Mitigation for HetNets

The utilization of nested cells in HetNets brings challenges such as interference. Optimal power allocation can be utilized to address this problem [38]. The work presented in [39, 40] performs feature optimization to improve the transmission rate. Another scheme, presented in [41], jointly optimizes resource allocation and cell selection to improve power allocation and reduce energy consumption; however, extensive signalling exchange is required to implement it. The methods presented in [42, 43] investigated the usage of time-domain blank subframes to coordinate the interference between macro- and femto-cells. None of the methods mentioned above are adaptive, and they require extensive knowledge of channel state information (CSI). ML algorithms need to be explored to address the power allocation and interference coordination problem.

Q-learning (QL), a common reinforcement learning technique, can be applied to interference mitigation as well. It is constituted of agents having a set of states and actions. A reward is earned by an agent based on its activities in a specific state; Q-functions represent this reward, and the objective is to maximize it.
Various reward functions have been designed for improved power allocation and interference mitigation in HetNets [44–47]. However, these reward functions do not meet the requirements of a highly dense network. Moreover, for the reward functions that did consider dense environments, the state spaces of the agents were too large to be practical. Deep learning has been proposed to solve this issue: a deep Q-network is proposed in [38] that helps MBSs allocate power to femto-cell BSs (FBSs) to alleviate interference and increase network throughput. Moreover, the introduction of neural networks into the decision-making process allows the mechanism to handle huge amounts of data. Multi-agent deep learning techniques have been proposed to solve both handoff and power allocation problems in dense HetNets [48]. This work aims at decreasing the handover frequency and optimizing power allocation to increase the throughput. Though these works have taken significant strides toward addressing the power allocation problem, a comparative study of different reward functions is needed to determine the optimal strategy. Moreover, different mechanisms need to be explored to optimize the information sharing process between different agents.

Q-learning has also been studied with the Markov decision process (MDP) to optimize HetNets. In [49], Q-learning has been utilized to propose a multi-objective distributed model for self-organization of femto-cells. This model targeted two important problems of HetNets, i.e., resource allocation and interference mitigation. Efficient spectrum allocation is very important for interference management, and MDP and Q-learning can be utilized for optimizing the process and improving spectrum efficiency. Firstly, the learning model is utilized to obtain spectrum awareness and find the unused spectrum; secondly, the model provides the optimal sub-channel selection as well. This information is used to configure small/femto-cells such that interference is mitigated and QoS is guaranteed. In [50], the authors use a Q-learning model for managing the outage of small cells in a dense HetNet. This work considered the channel quality and resource blocks associated with a small cell as the system's states, whereas the actions were represented by the downlink power control mechanism. The reward was represented by the improvement in SINR. It was shown that Q-learning in conjunction with MDP attained a significant performance improvement.
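A minimal tabular Q-learning sketch in the spirit of [50] is given below. The environment, the five quantized SINR states, the three power-step actions, and the reward shape (SINR gain minus a power penalty) are toy assumptions, not the models used in the cited works.

```python
import random

# Toy downlink power-control environment: state = quantized SINR level
# (0 worst .. 4 best); action = power step (-1, 0, +1).  Raising power
# tends to raise the user's SINR; in a real HetNet it would also raise
# interference to neighbours -- here that cost enters via the reward.
ACTIONS = (-1, 0, +1)

def step(state, action):
    next_state = max(0, min(4, state + action))
    reward = (next_state - state) - 0.2 * max(action, 0)  # SINR gain minus power cost
    return next_state, reward

Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = 2
for _ in range(5000):
    if random.random() < epsilon:                 # explore
        action = random.choice(ACTIONS)
    else:                                         # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Greedy policy learned per state: typically "step power up" until the
# top SINR level is reached, then "hold".
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)})
```

The practicality concerns raised above become visible even in this sketch: the Q-table grows multiplicatively with the state and action granularity, which is why dense deployments motivated the deep Q-network approaches of [38, 48].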
1.3.4 Load Balancing in HetNets

When different types of cells cooperate, load balancing becomes an important parameter to consider. Different cells may generate disparate amounts of traffic, which may lead to imbalance. The goal is to balance the load between cellular resources and traffic demand. Several studies have focused on load balancing between macro- and small cells [51, 52]. RL can be used for load balancing. For example, in [53], QL-based bias value optimization has been proposed. This work demonstrates that, compared to
non-ML techniques, significant improvement can be achieved utilizing QL with respect to user outages, interference levels, and throughput [54]. Reinforcement learning has been utilized in [55] as well. The authors explore the concept of cell range expansion (CRE) by associating it with two bias factors, namely SINR and rate. It was shown that the bias factors are independent of BS densities across different cells but depend on the transmit power of each cell. The work presented in [56] utilized the clustering concept to group together different BSs and adjust the cell range expansion bias accordingly: macro-cells obtain the CRE information from small cells and select their CRE bias values accordingly. A similar clustering concept has been used in [57, 58] to address the load balancing problem. The authors in [59] proposed two algorithms based on RL that dynamically adjust the reference signal (RS) power of small/femto-cells. This technique helps offload users from congested macro-cells; optimizing the RS levels distributes the traffic more fairly among macro- and small cells, which results in improved user experience. A ML approach has been used in [57] to find hidden patterns, learn from labelled samples using classifiers, and make decisions accordingly. Signal levels are used as features to determine the biasing for user-BS association. This work presents a less complex solution compared to conventional reinforcement learning-based solutions.

Reported ML-based load balancing for HetNets primarily utilizes RL techniques. Other ML techniques such as supervised and unsupervised learning should be explored. Historical data of HetNets can be used to learn parametric relations that can assist in designing better load balancing schemes.
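The CRE bias mechanism that these works optimize can be summarised in a few lines: a user attaches to the station maximising received power plus a per-tier bias. The power and bias values below are illustrative only; in the cited schemes the bias itself is what the learning algorithm adapts.

```python
# Cell range expansion (CRE): each user attaches to the base station
# with the highest received power *plus* a per-tier bias, so that
# lightly loaded small cells can attract users from the macro-cell.
stations = {
    "macro": {"rx_power_dbm": -70, "bias_db": 0},
    "pico":  {"rx_power_dbm": -78, "bias_db": 10},
}

def associate(stations):
    return max(stations,
               key=lambda bs: stations[bs]["rx_power_dbm"] + stations[bs]["bias_db"])

print(associate(stations))   # "pico": -78 + 10 beats -70 + 0
```

With a zero bias the user would stay on the macro-cell; a 10 dB bias expands the pico-cell's effective range, which is exactly the offloading knob the RL schemes above tune.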
1.3.5 Backhaul Management in HetNets

In HetNets, the management of connections between SBSs and MBSs is significant for the performance of the system. RL has been used to design backhaul-aware mechanisms that can dynamically set the offset value of CRE [60]. Each small cell tries to maximize its backhaul capacity while fulfilling the backhaul capacity requirements of other cells. Q-learning is employed to control macro-cell congestion and improve the system's performance. Game theory has been used for backhaul management as well. A game consists of players, each with its own strategy, and the players are required to reach an equilibrium state termed the Nash equilibrium (NE). In [61], SBSs are players that predict urgent requests of users and download the contents accordingly; to update the strategy of each player, RL is used to reach the NE. The work presented in [62] also utilizes game theory to model the interactions between users of macro-cells and small cells. Macro-cell users communicate with their BSs via SBSs acting as relays, and they compete to select transmit power levels as well as the relays (i.e., SBSs). This scheme demonstrates that the throughput and delay experienced by the users of macro-cells are improved.
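A minimal sketch of best-response dynamics converging to a Nash equilibrium for a two-player power-selection game is given below. The utility (rate minus a linear power cost), the channel constants, and the discrete power levels are illustrative assumptions rather than the models of [61, 62].

```python
import math

POWERS = [0.1, 0.5, 1.0]           # candidate transmit power levels (W)
NOISE, GAIN, COST = 0.1, 1.0, 0.5  # illustrative constants

def utility(p_own, p_other):
    sinr = GAIN * p_own / (NOISE + GAIN * p_other)  # other's signal is interference
    return math.log2(1 + sinr) - COST * p_own       # rate minus power cost

def best_response(p_other):
    return max(POWERS, key=lambda p: utility(p, p_other))

# Iterate best responses until neither player wants to deviate (NE).
p1, p2 = POWERS[0], POWERS[0]
for _ in range(20):
    new_p1, new_p2 = best_response(p2), best_response(p1)
    if (new_p1, new_p2) == (p1, p2):
        break
    p1, p2 = new_p1, new_p2

print("Nash equilibrium power levels:", p1, p2)
```

The fixed point found here is an NE in the sense used above: given the other player's power, neither player can improve its own utility by changing strategy.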
1.4 Future Research Directions

While many studies have been reported on the applications of ML for HetNets, there are several open issues that need to be addressed. Some of them are summarized below.
1.4.1 Optimization of HetNets and Machine Learning

The heterogeneity of the data and of the network nodes/objects involved suggests that ML algorithms are vital for optimizing HetNets. The current trend shows that most issues involving HetNets are addressed predominantly by reinforced learning; various issues such as resource management, load balancing, and backhaul management are investigated primarily by Q-learning/deep Q-learning techniques (see Fig. 1.5). However, supervised and unsupervised learning techniques have proven effective in meeting the demands of other aspects of cellular networks and should be utilized for HetNets as well. It should be noted that the complexity of the algorithms should be considered while choosing a ML technique. The time and implementation complexity of trained algorithms can significantly impact their practicality; for instance, real-time applications may have very tight time constraints, where decisions must be made within milliseconds. Moreover, to ensure that an algorithm is implementable, it is necessary to consider the system's requirements (hardware and software), data collection, and storage complexity.
Fig. 1.5 Utilization of various ML algorithms for HetNets (supervised learning, unsupervised learning, neural networks, and reinforced Q/deep-Q learning), shown to be dominated by reinforced learning
1.4.2 Big Data Analytics for Real-Time Heterogeneous Networks

Data mining algorithms need to be designed that can work on data of various types and sizes. HIN is a powerful tool for handling such data owing to its flexibility in integrating various objects. However, designing a real-time HIN is not a trivial task. We need to utilize data mining techniques, machine learning, and artificial intelligence to find small hidden networks (within a bigger heterogeneous network) based on user behaviors. This helps the network find clusters, learn user behaviors, and identify anomalies in the network. It should be noted that the currently reported data mining strategies are designed for isolated networks and do not consider dynamic parallel data processing.
1.4.3 mmWaves and HetNets

Modern networks promise to provide connectivity to a large number of devices. As more devices demand connectivity, more small cells are likely to be accommodated in the cellular architecture; hence, more resources are required, along with efficient utilization techniques. It is imperative for modern networks to explore mmWave frequencies for cellular communication. Moreover, different techniques exist in the literature that can increase spectral efficiency, such as non-orthogonal multiple access (NOMA). These concepts of mmWave and NOMA need to be integrated with HetNets and require comprehensive evaluation as well. The propagation characteristics at mmWave frequencies are different from those in the sub-6 GHz bands; therefore, the beamforming mechanism needs to be further investigated. Exploring the use of narrow beamwidths to allow guaranteed service to small cells is required to develop modern HetNets.
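As a numerical illustration of why NOMA can raise spectral efficiency, the sketch below computes the per-user rates of a textbook two-user power-domain NOMA setup with successive interference cancellation; the channel gains, power split, and noise level are invented values.

```python
import math

# Power-domain NOMA with two users sharing one resource block.
# The far (weak-channel) user gets more power; the near user removes
# the far user's signal with successive interference cancellation
# (SIC) before decoding its own.
P_TOTAL, NOISE = 1.0, 0.01
g_near, g_far = 1.0, 0.1        # channel gains
p_far = 0.8 * P_TOTAL           # more power to the weaker user
p_near = P_TOTAL - p_far

# Far user decodes its signal treating the near user's as interference.
r_far = math.log2(1 + p_far * g_far / (p_near * g_far + NOISE))
# Near user applies SIC, then sees only noise.
r_near = math.log2(1 + p_near * g_near / NOISE)

print(f"far-user rate  : {r_far:.2f} bit/s/Hz")
print(f"near-user rate : {r_near:.2f} bit/s/Hz")
```

Both users are served on the same time-frequency resource, which is the source of the spectral-efficiency gain; orthogonal access would instead have to split the resource between them.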
1.4.4 Energy and Power Efficiency of HetNets

Optimizing the energy and power efficiency of HetNets is one of the primary objectives of NG networks. While designing ML algorithms, the appropriate selection of objective functions is of utmost importance; energy and power consumption constraints can be added to the objective function to help design BS switch-off strategies. The learning strategies also need to consider the trade-off between energy/power efficiency and other performance parameters such as delay, data rate, and outage probability. This will help in designing more dynamic optimization functions and more effective BS switch-off strategies. To further improve the objective functions
utilized for learning, more information can be provided, such as backhaul overhead, transmit power selection strategies, popular contents requested, and high-density traffic areas. Another aspect of energy and power efficiency is utilizing information regarding BS load, since the overlapping of small and micro-cells affects energy consumption. Moreover, a more practical approach demands investigating the effect of traffic variation on MBSs and SBSs, which can help design better BS switch-off strategies.
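A toy version of such a switch-off search is sketched below: it enumerates on/off patterns for two small cells and picks the most energy-efficient feasible one. The load and power figures, and the assumption that the macro-cell absorbs offloaded traffic up to a fixed spare capacity, are all illustrative.

```python
from itertools import product

# Energy-efficiency objective sketch for a BS switch-off strategy:
# choose the on/off pattern of small cells that maximises throughput
# per watt while keeping all offered load served.
small_cells = [
    {"load_mbps": 30, "power_w": 100},
    {"load_mbps": 5,  "power_w": 100},
]
MACRO_POWER_W, MACRO_SPARE_MBPS = 1000, 20  # macro absorbs offloaded users

def efficiency(pattern):
    offloaded = sum(c["load_mbps"] for c, on in zip(small_cells, pattern) if not on)
    if offloaded > MACRO_SPARE_MBPS:
        return None                          # macro cannot absorb the traffic
    served = sum(c["load_mbps"] for c in small_cells)
    power = MACRO_POWER_W + sum(c["power_w"] for c, on in zip(small_cells, pattern) if on)
    return served / power                    # Mbps per watt

best = max((p for p in product([True, False], repeat=2)
            if efficiency(p) is not None), key=efficiency)
print("keep on:", best, "efficiency:", round(efficiency(best), 4))
```

With these numbers, the search keeps the heavily loaded cell on and switches the lightly loaded one off; an ML-based strategy would replace the exhaustive enumeration with a learned policy that also reacts to the traffic-variation and load information listed above.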
1.4.5 Machine Learning-Based HetNet Infrastructure Management

To enable the deployment of ML-based HetNets, the network infrastructure needs to evolve. NG networks are likely to have a mix of wired and wireless fronthaul and backhaul solutions. For these solutions to exist, it is important for wireless networks to be equipped with graphical processing units to implement ML techniques such as deep neural networks (DNNs). It should be noted that backhaul/fronthaul may have wired or wireless solutions, where each solution presents a different bandwidth utilization and power/energy consumption. Therefore, ML-enabled networks can extract traffic characteristics and anticipate user requirements to select appropriate backhaul/fronthaul solutions. Moreover, network functions such as network slicing and network virtualization are now part of modern networks; these functions help realize distributed hardware and software implementations that can be controlled by ML.
1.4.6 Multiband Cooperation for HetNets

ML can play a significant role in configuring a multiband cooperative network (MBCN) for HetNets. In an MBCN, the user and control planes are split up, and each is given a particular set of tasks [2]: the control plane is responsible for control signalling and mobility management, whereas the user plane caters to the user's data. This configuration helps extend the network functions to increase the efficiency of resource management. Moreover, MBCN can also help address the ever-increasing demand of users by increasing the number of data transfer entities in the user plane. Deep learning and reinforcement learning strategies can be embedded in MBCN to gain insights into user demands and behaviors and to optimize network performance.
1.5 Conclusion

HetNets have become an effective way of addressing cellular user densification. This chapter provided details on this expanding field while highlighting its advantages and discussing its inherent research challenges. The significance of applying ML to HetNets was discussed in detail, and it was shown that ML approaches are useful for optimizing the performance of the networks. State-of-the-art literature suggests that there are several unresolved issues in HetNets and that ML can be a significant tool in realizing the solution of many of them. HetNet-related studies that consider ML have been heavily reliant on reinforced learning; therefore, there is an opportunity to employ other machine learning techniques.
References

1. Forecast, G. (2019). Cisco visual networking index: Global mobile data traffic forecast update, 2017–2022. Update, 2017, 2022.
2. Zhao, J., Ni, S., Yang, L., Zhang, Z., Gong, Y., & You, X. (2019). Multiband cooperation for 5G HetNets: A promising network paradigm. IEEE Vehicular Technology Magazine, 14(4), 85–93.
3. Huang, Y. (2017). Network management and decision making for 5G heterogeneous networks. The Australian National University (Australia).
4. Lindbom, L., Love, R., Krishnamurthy, S., Yao, C., Miki, N., & Chandrasekhar, V. (2011). Enhanced inter-cell interference coordination for heterogeneous networks in LTE-advanced: A survey. arXiv preprint arXiv:1112.1344.
5. Andrews, J. G., et al. (2014). What will 5G be? IEEE Journal on Selected Areas in Communications, 32(6), 1065–1082.
6. Bhushan, N., et al. (2014). Network densification: The dominant theme for wireless evolution into 5G. IEEE Communications Magazine, 52(2), 82–89.
7. Cimmino, A., et al. (2014). The role of small cell technology in future smart city applications. Transactions on Emerging Telecommunications Technologies, 25(1), 11–20.
8. Dandachi, G. (2017). Multihoming in heterogeneous wireless networks. Evry.
9. Astely, D., Dahlman, E., Fodor, G., Parkvall, S., & Sachs, J. (2013). LTE release 12 and beyond [accepted from open call]. IEEE Communications Magazine, 51(7), 154–160.
10. Simsek, M., Bennis, M., Debbah, M., & Czylwik, A. (2013). Rethinking offload: How to intelligently combine WiFi and small cells? In 2013 IEEE international conference on communications (ICC) (pp. 5204–5208). IEEE.
11. Yoon, W., & Jang, B. (2013). Enhanced non-seamless offload for LTE and WLAN networks. IEEE Communications Letters, 17(10), 1960–1963.
12. Fani, A. S. M. (2018). Capacity dimensioning for 5G mobile heterogeneous networks. Ph.D. dissertation.
13. Han, J. (2009). Mining heterogeneous information networks by exploring the power of links. In International conference on discovery science (pp. 13–30). Springer.
14. Sun, Y., & Han, J. (2013). Mining heterogeneous information networks: A structural analysis approach. ACM SIGKDD Explorations Newsletter, 14(2), 20–28.
15. Shi, C., Li, Y., Zhang, J., Sun, Y., & Philip, S. Y. (2016). A survey of heterogeneous information network analysis. IEEE Transactions on Knowledge and Data Engineering, 29(1), 17–37.
16. Zhan, Q., Zhang, J., & Philip, S. Y. (2019). Integrated anchor and social link predictions across multiple social networks. Knowledge and Information Systems, 60(1), 303–326.
17. Zhang, J., & Philip, S. Y. (2015). Integrated anchor and social link predictions across social networks. In Twenty-fourth international joint conference on artificial intelligence.
18. Melnik, S., Garcia-Molina, H., & Rahm, E. (2002). Similarity flooding: A versatile graph matching algorithm and its application to schema matching. In Proceedings of the 18th international conference on data engineering (pp. 117–128). IEEE.
19. Shih, Y.-K., & Parthasarathy, S. (2012). Scalable global alignment for multiple biological networks. In BMC bioinformatics (Vol. 13, No. 3, pp. 1–13). Springer.
20. Doan, A., Madhavan, J., Domingos, P., & Halevy, A. (2004). Ontology matching: A machine learning approach. In Handbook on ontologies (pp. 385–403). Springer.
21. Philip, S. Y., & Zhang, J. (2015). MCD: Mutual clustering across multiple social networks. In 2015 IEEE international congress on big data (pp. 762–771). IEEE.
22. Zhan, Q., Zhang, J., Wang, S., Philip, S. Y., & Xie, J. (2015). Influence maximization across partially aligned heterogenous social networks. In Pacific-Asia conference on knowledge discovery and data mining (pp. 58–69). Springer.
23. Joo, E. M., & Zhou, Y. (2009). Theory and novel applications of machine learning. BoD – Books on Demand.
24. Jiang, C., Zhang, H., Ren, Y., Han, Z., Chen, K.-C., & Hanzo, L. (2016). Machine learning paradigms for next-generation wireless networks. IEEE Wireless Communications, 24(2), 98–105.
25. Sun, Y., Peng, M., Zhou, Y., Huang, Y., & Mao, S. (2019). Application of machine learning in wireless networks: Key techniques and open issues. IEEE Communications Surveys & Tutorials, 21(4), 3072–3108.
26. Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
27. Long, B., Zhang, Z., Wu, X., & Yu, P. S. (2006). Spectral clustering for multi-type relational data. In Proceedings of the 23rd international conference on machine learning (pp. 585–592).
28. Liu, J., Wang, C., Gao, J., & Han, J. (2013). Multi-view clustering via joint nonnegative matrix factorization. In Proceedings of the 2013 SIAM international conference on data mining (pp. 252–260). SIAM.
29. Kosmides, P., Adamopoulou, E., Demestichas, K., Theologou, M., Anagnostou, M., & Rouskas, A. (2015). Socially aware heterogeneous wireless networks. Sensors, 15(6), 13705–13724.
30. Zhang, C. (2020). Learning from heterogeneous networks: Methods and applications. In Proceedings of the 13th international conference on web search and data mining (pp. 927–928).
31. Simsek, M., Bennis, M., & Guvenc, I. (2015). Mobility management in HetNets: A learning-based perspective. EURASIP Journal on Wireless Communications and Networking, 2015(1), 1–13.
32. Lee, C. H., Lee, S. H., Go, K. C., Oh, S. M., Shin, J. S., & Kim, J. H. (2015). Mobile small cells for further enhanced 5G heterogeneous networks. ETRI Journal, 37(5), 856–866.
33. Barbera, S., Michaelsen, P. H., Säily, M., & Pedersen, K. (2012). Mobility performance of LTE co-channel deployment of macro and pico cells. In 2012 IEEE wireless communications and networking conference (WCNC) (pp. 2863–2868). IEEE.
34. Muñoz, P., Barco, R., & de la Bandera, I. (2013). On the potential of handover parameter optimization for self-organizing networks. IEEE Transactions on Vehicular Technology, 62(5), 1895–1905.
35. Lopez-Perez, D., Guvenc, I., & Chu, X. (2012). Mobility management challenges in 3GPP heterogeneous networks. IEEE Communications Magazine, 50(12), 70–78.
36. Pedersen, K. I., Michaelsen, P. H., Rosa, C., & Barbera, S. (2013). Mobility enhancements for LTE-advanced multilayer networks with inter-site carrier aggregation. IEEE Communications Magazine, 51(5), 64–71.
37. Feki, A., Capdevielle, V., Roullet, L., & Sanchez, A. G. (2013). Handover aware interference management in LTE small cells networks. In 2013 11th international symposium and workshops on modeling and optimization in mobile, ad hoc and wireless networks (WiOpt) (pp. 49–53). IEEE.
38. Su, Q., Li, B., Wang, C., Qin, C., & Wang, W. (2020). A power allocation scheme based on deep reinforcement learning in HetNets. In 2020 international conference on computing, networking and communications (ICNC) (pp. 245–250). IEEE.
39. Niu, C., Li, Y., Hu, R. Q., & Ye, F. (2017). Fast and efficient radio resource allocation in dynamic ultra-dense heterogeneous networks. IEEE Access, 5, 1911–1924.
40. Ha, V. N., & Le, L. B. (2013). Fair resource allocation for OFDMA femtocell networks with macrocell protection. IEEE Transactions on Vehicular Technology, 63(3), 1388–1401.
41. Chu, J.-H., Feng, K.-T., & Chang, T.-S. (2014). Energy-efficient cell selection and resource allocation in LTE-A heterogeneous networks. In 2014 IEEE 25th annual international symposium on personal, indoor, and mobile radio communication (PIMRC) (pp. 976–980). IEEE.
42. Guvenc, I. (2011). Capacity and fairness analysis of heterogeneous networks with range expansion and interference coordination. IEEE Communications Letters, 15(10), 1084–1087.
43. Okino, K., Nakayama, T., Yamazaki, C., Sato, H., & Kusano, Y. (2011). Pico cell range expansion with interference mitigation toward LTE-advanced heterogeneous networks. In 2011 IEEE international conference on communications workshops (ICC) (pp. 1–5). IEEE.
44. Tefft, J. R., & Kirsch, N. J. (2013). A proximity-based Q-learning reward function for femtocell networks. In 2013 IEEE 78th vehicular technology conference (VTC Fall) (pp. 1–5). IEEE.
45. Saad, H., Mohamed, A., & ElBatt, T. (2012). Distributed cooperative Q-learning for power allocation in cognitive femtocell networks. In 2012 IEEE vehicular technology conference (VTC Fall) (pp. 1–5). IEEE.
46. Wen, B., Gao, Z., Huang, L., Tang, Y., & Cai, H. (2014). A Q-learning-based downlink resource scheduling method for capacity optimization in LTE femtocells. In 2014 9th international conference on computer science & education (pp. 625–628). IEEE.
47. Galindo-Serrano, A., & Giupponi, L. (2010). Distributed Q-learning for interference control in OFDMA-based femtocell networks. In 2010 IEEE 71st vehicular technology conference (pp. 1–5). IEEE.
48. Guo, D., Tang, L., Zhang, X., & Liang, Y.-C. (2020). Joint optimization of handover control and power allocation based on multi-agent deep reinforcement learning. IEEE Transactions on Vehicular Technology, 69(11), 13124–13138.
49. Alnwaimi, G., Vahid, S., & Moessner, K. (2014). Dynamic heterogeneous learning games for opportunistic access in LTE-based macro/femtocell deployments. IEEE Transactions on Wireless Communications, 14(4), 2294–2308.
50. Onireti, O., et al. (2015). A cell outage management framework for dense heterogeneous networks. IEEE Transactions on Vehicular Technology, 65(4), 2097–2113.
51. Behjati, M., & Cosmas, J. (2013). Self-organizing network interference coordination for future LTE-advanced networks. In 2013 IEEE international symposium on broadband multimedia systems and broadcasting (BMSB) (pp. 1–5). IEEE.
52. Aguilar-Garcia, A., et al. (2015). Location-aware self-organizing methods in femtocell networks. Computer Networks, 93, 125–140.
53. Kudo, T., & Ohtsuki, T. (2013). Cell range expansion using distributed Q-learning in heterogeneous networks. EURASIP Journal on Wireless Communications and Networking, 2013(1), 1–10.
54. Gomez, C. A., Shami, A., & Wang, X. (2018). Machine learning aided scheme for load balancing in dense IoT networks. Sensors, 18(11), 3779.
55. Ye, Q., Rong, B., Chen, Y., Al-Shalash, M., Caramanis, C., & Andrews, J. G. (2013). User association for load balancing in heterogeneous cellular networks. IEEE Transactions on Wireless Communications, 12(6), 2706–2716.
56. Jiang, H., Pan, Z., Liu, N., You, X., & Deng, T. (2016). Gibbs-sampling-based CRE bias optimization algorithm for ultradense networks. IEEE Transactions on Vehicular Technology, 66(2), 1334–1350.
57. Park, J.-B., & Kim, K. S. (2017). Load-balancing scheme with small-cell cooperation for clustered heterogeneous cellular networks. IEEE Transactions on Vehicular Technology, 67(1), 633–649.
58. Afshang, M., & Dhillon, H. S. (2018). Poisson cluster process based analysis of HetNets with correlated user and base station locations. IEEE Transactions on Wireless Communications, 17(4), 2417–2431.
59. Musleh, S., Ismail, M., & Nordin, R. (2017). Load balancing models based on reinforcement learning for self-optimized macro-femto LTE-advanced heterogeneous network. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 9(1), 47–54.
60. Jaber, M., Imran, M., Tafazolli, R., & Tukmanov, A. (2015). An adaptive backhaul-aware cell range extension approach. In 2015 IEEE international conference on communication workshop (ICCW) (pp. 74–79). IEEE.
61. Hamidouche, K., Saad, W., Debbah, M., Song, J. B., & Hong, C. S. (2017). The 5G cellular backhaul management dilemma: To cache or to serve. IEEE Transactions on Wireless Communications, 16(8), 4866–4879.
62. Samarakoon, S., Bennis, M., Saad, W., & Latva-aho, M. (2013). Backhaul-aware interference management in the uplink of wireless small cell networks. IEEE Transactions on Wireless Communications, 12(11), 5813–5825.
Chapter 2
Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing

Altan Koçyiğit and Enver Ever
2.1 Introduction

The ongoing technological advancements and the evolution of communication systems have significantly changed the ways distributed computing is performed. One of the driving factors for the rapid rate of technological development is the drastic increase in the number of connected devices. Heterogeneous networks (HetNets) typically have a configuration where small cells are distributed around macrocells, and they are known to be one of the solutions for dealing with large volumes of wireless traffic [1]. Various mechanisms have been introduced in the literature to address issues such as interference, physical design, and medium access control [2] in HetNets. Furthermore, some studies have presented models for the interaction between various components of HetNets [3, 4].

With efficient wireless communication infrastructures in place, another important distributed computing paradigm, the Internet of Things (IoT), became an important research area alongside the vision of the Internet extending into everyday objects. The realisation of efficient wireless communication is a must for IoT since it heavily relies on the connectivity of devices, systems, and services across a variety of protocols, domains, and applications. Fully realising the IoT phenomenon means even further increased quantities of connected systems such as vehicles, environment monitoring and surveillance systems, wearables, health monitoring devices, tablets, computers, and smartphones.

A. Koçyiğit
Department of Information Systems, Middle East Technical University, Ankara, Turkey
e-mail: [email protected]

E. Ever (*)
Computer Engineering, Middle East Technical University, Northern Cyprus Campus, 99738, Mersin 10, Guzelyurt, Turkey
e-mail: [email protected]
Furthermore, in IoT, large numbers of heterogeneous objects are expected to interact seamlessly to enable various ubiquitous and pervasive applications. The heterogeneous objects of interest are expected to be equipped with sensors or other similar components, collecting information and supporting the development of various smart services. Therefore, IoT can be considered an important source of data with enormous volumes, which also makes it an important part of big data research.

Traditionally, it is possible to classify big data according to five main characteristics [5]. The first characteristic is volume, the extensive amount of information generated by smart systems; distributed systems such as cloud structures are generally used to store data in multiple locations. The second characteristic is variety: in modern smart applications, in addition to the more traditional scalar data, data in the form of photos, videos, and audio are also stored and used for different purposes. The third characteristic is veracity, which is related to the reliability that the data should offer; it is necessary to filter the existing unstructured and irrelevant data and translate it into the format needed for specific applications. Value is the fourth characteristic, related to the amount of valuable, reliable, and trustworthy information that should be stored and focussed on for a specific application. Finally, the last characteristic is velocity, which is related to providing data on demand and at a fast enough pace. All these characteristics require good connectivity in HetNets and new methods for processing the available data correctly and timely.

The availability of large datasets and the exponential growth in computing power have also spurred interest in developing and improving algorithms in the field of machine learning [6]. It is possible to use machine learning for classification, regression, clustering, or dimensionality reduction tasks. As the training-related activities of machine learning applications are still known to be computation-intensive, using distributed computing facilities such as cloud, fog, and edge computing is also becoming quite popular.

In this chapter, federated learning mechanisms, which are relatively recent distributed computing applications for machine learning tasks, are considered. The basics of machine learning and federated learning algorithms are discussed together with available cloud, fog, and edge computing mechanisms. The applications of federated learning mechanisms are considered with a particular focus on IoT, and the existing challenges and opportunities in the field are discussed.

The remainder of the chapter is organised as follows: Section 2.2 explains the basics of machine learning, which is needed to understand federated learning and its possible applications. In Section 2.3, the basics of federated learning mechanisms are presented. Since distributed computing mechanisms are important facilitators for federated learning, mobile cloud, edge, and fog computing mechanisms are considered in Section 2.4. Section 2.5 considers existing federated learning architectures with a focus on the heterogeneous nature of computation. Section 2.6 provides some examples of popular application areas where federated learning can be used effectively. The current challenges and opportunities are discussed in Section 2.7, and a summary is provided in the final section to conclude the chapter.
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
21
2.2 Machine Learning Decision-making can be considered as a problem-solving activity. Rational decisions play a crucial role in the maximisation of utilities and the minimisation of costs of real-life processes. In the absence of a theoretical basis with reasonable assumptions, rational decision-making should rely on prior experiences rather than intuitions or beliefs. Essentially, insights gained by processing data from prior experiences can be used to objectively evaluate alternatives and to make good choices. This leads us to data-driven decision-making. Data-driven decision-making offers enormous potential for improving quality and efficiency in many domains. For instance, in a HetNet, it is not wise to randomly connect a user to one of the available networks and maintain such associations arbitrarily. In order to guarantee the desired level of quality of service, a decision-maker may consider several factors such as the user’s requirements, the current traffic demand, and the movement pattern as well as the current situations of the available networks and perhaps other users connected to these networks in assigning the user to the most appropriate network. This necessitates analysing the current situation and making a justifiable decision based on available data and prior experiences. As a result, not only do we provide a high-quality service to the user, but we also use the resources efficiently to serve more users with the available capacity. It may be possible to collect massive amounts of data relevant to a real-life process from various sources. However, timely processing and analysis of such a large and variety of data may not be easy especially when the data is being produced at a very high rate at the sources and the data is not in a proper form that facilitates processing. Hence, the real challenge is to analyse available big data to extract non- obvious and useful patterns that give us valuable insights to guide the decisions and solve a diverse set of problems that do not have obvious solutions. This encompasses several activities such as collecting data from sources, integrating and storing data to make it available when it is needed, preprocessing data to facilitate analysis, and analysing data to get actionable insights. All these activities are addressed by the field of study called data analytics [7]. In a problem domain, data analytics is carried out to answer questions about the current situations (a.k.a. descriptive analytics), to determine the causes of some events (a.k.a. diagnostic analytics), to predict what may happen in the future (a.k.a. predictive analytics), and to determine what must be done to fulfil some goal (a.k.a. prescriptive analytics). To this end, it is possible to use some algorithms or functions which can transform available data to an output that serves as an answer a decision-maker is interested in. These algorithms or functions may be designed and hardcoded by humans, or preferably computers can learn by themselves how to transform input data to desired outputs without being explicitly programmed as suggested by Arthur Samuel in 1959. The latter is a special form of artificial intelligence known as machine learning. Machine learning is concerned with employing learning algorithms to solve a given task by improving solution performance with experience. Therefore, there are
22
A. Koçyiğit and E. Ever
essentially three components of a machine learning problem [8]: a task (T), a performance measure (P), and sufficient experience (E) to achieve a certain level of performance on the task. For example, in a HetNet environment, we may want to predict whether a user moves beyond the coverage area of the currently associated cell and join some other cell (i.e. handover) in a given time interval. In this case, the task (T) corresponds to predicting whether a handover will occur in the given time interval.The performance measure (P) may be determined as the accuracy of the handover predictions made about a set of cases compared to the actual handovers occurred in those cases. The experience (E) may comprise data collected from previous user associations and handover events. Hence, in order to solve task T, an algorithm can be used to analyse the cases in E to produce a model that relates a given case to a prediction. When this model is used in real-life cases, some of the predictions may come true, and some of them may come false. As the number of cases in E increases, the algorithm can produce a better model that makes more accurate handover predictions. In other words, the performance of the algorithm can be improved with more experience. Each case (or example) in the experience set E is typically described by some attributes (or features) that may be useful to fulfil a given task T. For example, in the handover prediction example, the features that may be useful for the solution of prediction task could be the latest received signal strength indication (RSSI) measurements taken by the user, the user’s current speed, and other relevant data about the user known at the time when the prediction is made. The desired output (a.k.a. label) can be a Boolean which is equal to true if the user is predicted to move beyond the coverage area and has to join some other cell and false otherwise. The most common task in machine learning is to relate features to a label. Hence, the goal is to approximate an unknown target function f that perfectly maps input features to output labels for all cases in E. Suppose there is a function f such that:
y = f ( x1 ,…,xn )
where y is the label of a given case with features x1, …, xn. The task T can be defined as finding a function fˆ , that is, an approximation of f, such that performance P improves with experience E. Consequently, the value returned by fˆ will be the prediction made for a given case, that is,
yˆ = fˆ ( x1 ,…,xn )
where yˆ is the prediction for a given case with features x1, …, xn. In this setting, performance P may be defined as a function of fˆ that is applied to each case in E to find yˆ , and the actual y values (if they are known) for all cases in E, or some other relationship between the examples in E and the outputs returned by fˆ for the examples in E. Hence, a general definition for P is:
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
23
( )
P = J E,fˆ
A suitable performance measure must be defined before solving a machine learning task. For example, a proper performance measure for a problem, such as the handover prediction, could be the ratio of the number of predictions that come out to be correct to the number of predictions made. Hence, the performance can be improved by choosing a function fˆ that maximises P with respect to the examples in E. Equivalently, it is possible to define the performance P as the ratio of the number of predictions that come out to be wrong to the number of predictions made, and the goal becomes minimising P by using the examples in E to fit a better function. The latter, reducing the prediction error, is the most commonly used objective in prediction-type machine learning problems. Hence, a machine learning problem can be codified as an optimisation problem that searches the best approximation fˆ minimising the cost J E,fˆ . In some cases, fˆ can be represented by a finite set of parameters wi, i = 1, . . , d. Hence, the problem reduces to finding the parameters wi for a given E that minimises the function J. For example, in the handover prediction problem, we may employ a very simple approach like comparing the weighted moving average of the most recent n RSSI measurements taken by the user to a threshold value. If the weighted moving average of the latest n RSSI measurements is smaller than the threshold, we predict true and otherwise predict false, that is, xi variables represent recent RSSI measurements taken, wi values correspond to the weights assigned to these measurements, and w0 corresponds to the threshold value used to decide on the final result. Hence, the search for a handover prediction model can be formulated as:
( )
true w0 < w1 x1 +…+ wn xn yˆ = fˆ ( x1 ,…,xn ) = otherwise false
y ≠ yˆ 1 error for an example = e = otherwise 0
1 m P = J E,fˆ = ∑ei m i =1
( )
{ ( )}
fˆ ∗ = argmin J E,fˆ
fˆ
where ei is the error associated with ith case, i = 1, . . , m, in E, and fˆ ∗ is the resulting model. This problem can be expressed as finding the best function defined by the parameters wi, i = 0, 1, . . , n, that minimises the average error for the cases in E. The moving average approach is just one of the possible approaches that can be employed in the handover prediction problem. Essentially, there is a multitude of
24
A. Koçyiğit and E. Ever
possible approaches to choose from, and each approach may involve a different number and kinds of parameters to be optimised. A search for the best approximate function starts with deciding on the type (or form) of the approximate function. A linear or nonlinear function can be fitted to the examples in the E. There are many widely used forms to choose from such as linear regression, logistic regression, support vector machine (SVM), neural networks (NN), decision tree, and probabilistic models. Each of these forms defines a different set of parameters that must be tuned to optimise the performance on given examples. Consequently, the training process consists of two main steps: choose a suitable form of the target function and perform an optimisation to determine the best parameters for the target function so that the performance is the best when the function is applied to the examples in E. In this context, the experience set E is called the training set. Parameter optimisations can be based on some characteristics of records in the training set (e.g. rule-based approaches). In some cases, closed-form solutions that take training data as a parameter are available (e.g. linear regression). For some target functions, like regressions or neural networks, iterative optimisation methods such as gradient descent can be used. In gradient descent, a set of initial values are chosen for the parameters, and the parameters are updated iteratively according to the rate of change in J E,fˆ with respect to changes in individual parameters. The partial derivatives of J E,fˆ are used for this purpose. The partial derivatives with respect to the parameters constitute the gradient of the function at the current values of these parameters, and the negative gradient points to the direction in which the function decreases most quickly. Hence, at each iteration, the parameters are updated according to:
( ) ( )
w t +1 = w t − γ∇ J ( E, w t )
where wt denotes the vector of parameters of function fˆt at iteration t, ∇ J ( E, w t ) is the gradient with respect to the weights, and γ is the step size, usually a small constant which is also allowed to change in each iteration. In some implementations, instead of using all examples in the training set, a sample taken from E is used to approximate the gradient in each iteration. This approach is known as stochastic gradient descent. Normally, having more examples in the experience set E to train a model improves the model performance. However, making use of all examples in E as the training set may not give us a model that is eligible to be used in real-life scenarios. This is because a model that fits well with the training set examples may not generalise well to produce credible output for unseen cases. In order to address this issue, the experience set E is usually divided into two disjoint subsets: a training set and a testing set. The examples in the training set are used to generate a model that maximises the performance with respect to performance measure P. Then, the resulting model is evaluated with respect to the same performance measure but on the testing set. In practice, the majority (e.g. 60% or 70%) of examples in E are used for model
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
25
training, and the remaining are used to evaluate the performance of the model on unseen cases. Machine learning can be used for different purposes, and different kinds of feedback can be provided in training a machine learning model. Accordingly, three main categories of machine learning can be defined: supervised learning, unsupervised learning, and reinforcement learning. In the supervised learning category, examples in the experience set E have both features x1, …, xn that serve as inputs to the function f that will be approximated and the associated labels y that are the outputs of f (i.e. y = f(x1, …, xn)). Hence, the goal is to find the best function yˆ = fˆ ( x1 ,…,xn ) that produces outputs yˆ that are as close as possible to the actual output values y with respect to a given performance metric. According to the type of output, a supervised learning problem can be formulated as a regression problem or a classification problem. In regression problems, the output is a scalar value. Therefore, the task is to find a model that captures the relationship between output scalars and input features for the cases in E. For example, load prediction, which is a very important issue in network planning and management, can be formulated as a regression problem [9, 10]. In order to optimally distribute user terminals to available networks and manage vertical handovers in a HetNet, the amount of traffic that is likely to be generated by the user terminals may be considered as well as the number of user terminals in the area covered and the physical locations of these terminals. The amount of traffic arriving at the network can be predicted in terms of the types of the user terminals currently attached to the network, traffic produced by these terminals in the last N time slots, and other relevant information. Data collected from the network can be used as prior experience to fit a suitable regression model. In classification problems, the output is a categorical label assigned to a case with known features. Therefore, the task is to find a model that maps input features to a label selected from a set of output categories. There may be just two categories, such as true/false and yes/no, or more than two categories to choose from. For example, vertical/horizontal handover prediction in HetNets can be considered as a classification problem. Predicting whether a specific user will hand over in the next time slot is a binary classification problem with two classes true/false (or yes/no). It is also possible to predict the target cell, selected among a set of neighbouring cells, to which the user will hand over [11]. This is a multiclass classification problem if there are more than two possible target cells. In the unsupervised learning category, the examples in an experience set E have features but lack labels. The goal is to discover interesting patterns and relationships between these examples. For example, user densities, mobility patterns, traffic characteristics, and channel characteristics can be considered in determining where to install the cells with suitable coverage planning and performance/efficiency optimisations by guiding user associations in a HetNet [12]. There are several unsupervised learning methods, such as clustering and anomaly detection. In clustering, the goal is to segment examples in E according to a given similarity measure and a given clustering performance measure P. 
As an example, the cells in a HetNet may be clustered according to their traffic patterns so that the cells that have similar
26
A. Koçyiğit and E. Ever
traffic patterns can be analysed together. Hence, the analysis results may be used for further planning and management activities. In clustering, the number of clusters can be known beforehand or can be determined according to the selected performance measure. Another widely used unsupervised learning method is anomaly detection. The goal is to distinguish examples similar to each other from other unusual and outlier examples. For example, it is possible to detect network misuse or attacks based on unusual behaviour [13] by means of anomaly detection. In the reinforcement learning category, there is an agent that generates its own experience during training. In a general reinforcement learning setting, the agent interacts with an environment via observing the environment and performing some actions accordingly. These actions may yield rewards or penalties depending on the desirability of the situation of the environment after performing the actions. Hence, the goal of the agent can be to learn a policy that provides a series of actions for changing states of the environment so that the cumulative reward is maximised. As an example scenario in HetNets, reinforcement learning methods can be used to learn dynamically allocating radio resources to improve the performance of the network [14]. Models generated by machine learning can be used in various ways for different analytics purposes in real-life settings. For example, a model can be used to make relevant predictions (a.k.a. inference) based on the currently known information about a case. Hence, these predictions guide the decisions, and with proper planning, service quality may be improved, benefits may be maximised, costs may be minimised, or resource efficiencies can be improved. As long as the model structure allows, the parameters of a generated model may be used for prescriptive purposes to suggest ways to improve some objectives by making some adjustments in the real-life processes as well as for diagnostic purposes to determine the factors that have a significant effect on the occurrence of pertinent events. Making an inference by using a trained model is generally not computation- intensive. However, training a model is a computation-intensive task. Typically, a large amount of training data is necessary to train a reliable model. Moreover, complex models may require a large number of iterations to achieve adequate performance. Hence, training times and processing resources needed generally grow exponentially fast with the complexities of models. Therefore, machine learning can be done in a distributed fashion to complete the training in a reasonable time period. Distributed machine learning may employ an approach to partition the learning problem, such as data partitioning and model partitioning [15]. In the data partitioning approach, the training set is distributed to a group of worker nodes, and each node has only a subset of the training set. Usually, examples are distributed in an independently and identically distributed (i.i.d.) manner. Then, each worker node executes a learning process (e.g. stochastic gradient descent) by collaborating with a centralised aggregator and/or other worker nodes. Hence, an aggregate model can be generated collectively. In the model partitioning approach, the training set is completely copied to all worker nodes. The worker nodes may generate a single model by working on different parts of the model, or they may generate similar
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
27
models that can be used to serve as a single model by using ensemble or other model aggregation approaches.
2.3 Federated Learning Distributed machine learning can help in reducing training times. However, there is a need for a training set accessible by worker nodes. Moreover, the training set must comprise cases that are representative enough to fit a model that generalises well to real-life cases. These requirements may not be satisfied if the data sources/owners are not willing to share their data due to privacy or other concerns, or it is not feasible to collect data that is representative enough for all possible real-life cases. The distributed machine learning approach can be adapted to this situation by relaxing the centralised data availability requirement. To this end, it is possible to encourage data owners to participate in training a common machine learning model without requiring them to share their data. For example, the training process can be performed on participants’ devices where training data are stored without disclosing the training data. This approach is called federated learning. Federated learning is very similar to the distributed machine learning. However, there are some significant differences [16] that potentially complicate the training process, such as: • Typically, the number of participants in federated learning is much larger compared to the number of workers in a distributed machine learning case. • Processing and communication resources of the participants involved in federated learning may vary significantly. Participants may not always be available. There may be stragglers that slow down the rounds in the training process. • Incentive mechanisms are needed to convince participants to contribute to the learning process. • There may be malicious participants that undermine the learning process. • The training data of the participants may not be i.i.d. as the participants may have different prior experiences that may not be representative of the entire set of participants. Moreover, the sizes of their training data may also vary. Hence, converging to a common model that generalises well to all possible cases may not be easy or possible. Model parameter optimisation based training approaches can be adapted to federated learning [17] with the collaboration of a group of participants under the supervision of a centralised aggregator. This may be possible with training tasks that can be defined as optimisation problems with additive performance metrics, such as:
{ ( )} argmin ∑ mm J ( E , fˆ )
fˆ ∗ = argmin J E, fˆ
fˆ
k
i
fˆ
i =1
(i )
28
A. Koçyiğit and E. Ever
where k is the number of participants, E(i) is the training set available at participant i, mi is the number of examples in E(i), m is the number of examples in E, and the training set E is the union of the learning sets of all participants: k
E = E (i )
i =1
Such a formulation is essentially applicable to several types of learning models such as linear/logistic regressions, support vector machines, and neural networks. For instance, a set of participants may collectively learn a neural network using the federated averaging (FedAvg) or split learning. FedAvg [16] is an iterative algorithm executed by a central aggregator and a group of participants having training data. At each round of the algorithm, a subset of available participants is involved in the training process as outlined below: 1. The aggregator selects a model with a randomly initialised parameter vector (or matrix) w. 2. The aggregator selects a subset of participants and broadcasts the model (i.e. the parameters w) to these participants. 3. Each participant i updates (i.e. improves by training) the given model by using its own training data and sends the resulting model (i.e. the parameters wi) to the aggregator. 4. The aggregator receives the updated models (i.e. wis) from the participants and m aggregates the updated models as w = ∑ i w i where mi is the number of examples in the training set of participant i iandmm is the total number of examples in the training sets of the participants from which updated models are received. 5. The aggregator evaluates the adequacy of the model. If more training is needed, execution continues from step 2. 6. The aggregator deploys the final model to the users. In this process, selected participants receive the current model (i.e. the parameters of the model) and improve the model by employing gradient descent with their own training data for a predefined number of iterations. The aggregator may request participants to divide their training data into mini-batches of a given size and perform stochastic gradient descent using each mini-batch once at each iteration. The aggregator determines the number of rounds, the number of participants selected to update the model in each round, the number of iterations performed by the selected participants at each round, and the size of mini-batches (in the case of stochastic gradient descent) by considering several factors such as global convergence of the algorithm, processing/communication loads on participants, and available data. These kinds of adjustments are very important when the participants have limited storage/processing resources [18]. In split learning [19], a centralised server and set of participants collaborate to train a single but distributed deep learning model without requiring sharing of participants’ data or model parameters. In this approach, the model is partitioned into
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing Parcipant 1
Parcipant 2
29
Parcipant k iinputs t ((x))
inputs (x)
inputs (x)
z=f1(x)
z=f2(x)
z=fm(x)
Intermediate outputs (z)
Intermediate outputs (z)
Intermediate outputs (z)
Server inputs (z) y=fs(x)
output (y) Server
Fig. 2.1 Split learning
at least two neural network layer partitions that are cascaded to each other (Fig. 2.1). In the simplest form, the neural network in the server is commonly used by all participants. The participants also have their own neural networks that produce interim outputs that are consumed by the neural network in the server to produce desired outputs. In the training process, only intermediate results and labels may be sent by participants to the server. Hence, the server computes the error and gradients. These gradients are broadcast by the server to the participants to update the participants’ neural networks. Split learning is a flexible approach that allows more complex model decompositions to enable additional features. For example, by using an additional neural network in the participants, it is also possible to avoid sharing the labels. So far, it is tacitly assumed that the examples in the training set of a participant are independent of the examples in other participants’ training sets, but the same set of attributes applies to the examples in all participants’ training sets. However, there are other possibilities. According to the distribution characteristics of data, three major categories of federated learning can be defined [20]: horizontal federated learning, vertical federated learning, and federated transfer learning. Horizontal federated learning assumes that the training sets in participants contain examples with the same set of features, but the examples belong to different subjects. This is the most common case as participants collect data pertinent to their own activities, and there is no significant overlap between the activities of different participants. In vertical federated learning, it is assumed that participants have observations regarding different aspects of a group of subjects. In other words, participants know different features of the same set of subjects. Therefore, in order to generate a common model, the different features in different participants should be consolidated
30
A. Koçyiğit and E. Ever
(perhaps by using split learning or a similar approach). In federated transfer learning, the examples in different participants’ training sets are not related to each other, and the features of the examples in the training sets of different participants are different. Again, the collaboration of participants may be possible by discovering a common representation for different feature sets of different participants.
2.4 Mobile Cloud Computing for Heterogeneous Networks The publicly available global cloud computing market is expected to exceed $360 billion by 2022 [21]. The main causes of attracting global investments in these scales are related to the facilities provided in terms of on-demand availability of data storage and computing resources over the Internet. Since it is possible to use third-party online servers instead of saving information to the local hard drives, cloud computing is also becoming a standard for IoT data storage. Furthermore, cloud computing infrastructures are able to serve as valuable facilitators for high- speed computation which is particularly important for application areas such as machine learning and deep learning [22]. Mobile cloud computing makes use of cloud computing facilities with an attempt to deliver various services and applications to mobile devices. The cloud services can be used to build or update/revise mobile applications following their deployment. The services are usually made available to mobile stations with various operating systems, computing tasks, data storage, and computational facilities. In other words, extensive heterogeneous collections of devices which could not be otherwise supported receive services thanks to the mobile cloud computing facilities [23]. Various areas such as smart cities, healthcare, smart homes, and high-precision agriculture use services of mobile cloud computing. Many of these application areas require autonomous devices which collect application-related information. Therefore, they can be considered as part of the Internet of Things (IoT) paradigm. The data collected using IoT and the sensors available are transferred to cloud data centres and/or computing facilities which are usually at remote locations for analysis, fusion, storage, and processing. In that sense, mobile cloud computing can be considered as an extension of cloud computing; however, its main target is mobile smart devices [24]. Transferring the data acquired to mobile cloud computing centres introduces some limitations since cloud data centres are most of the time located at remote locations from the place of request. The distance between the origin of the request (e.g. mobile station) and the server (i.e. cloud centres) introduces some network- related issues such as delay, network resilience, and higher probabilities of link failures. These issues have the potential to become sufficiently severe to cause the failure of the main objectives of mobile cloud computing. Although developments such as edge and fog computing attempt to address these issues similarly by bringing intelligence and processing closer to the users (resource of data), they have different usages and infrastructural requirements.
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
31
The platform, called fog computing, was first introduced by Cisco to serve the IoT applications which require higher efficiency services than the services provided by cloud computing. The main target was the applications running on billions of heterogeneous connected devices [25]. The strategies used by fog and edge computing have the same aim of improving efficiency by performing the computing closer to the end users in the network. The main difference between edge computing and fog computing can be explained by considering the location where the processing of the acquired data is performed. Most of the time, the edge computing takes place directly on the end devices with the sensors or a gateway device that is in close proximity to the sensors. On the other hand, fog computing moves the computing- related activities to alternative processors connected to the local area network (LAN) or into the LAN hardware. In other words, in fog computing, the processing can take place at physically more distant locations from the devices with sensors and actuators. Cloudlets which are mobility-enhanced small-scale cloud data centres are commonly used in fog computing to reduce the distance between end users and servers and to provide distributed load balancing with an attempt to minimise the latency as well as energy consumption on the mobile phones [26, 27] for resource-intensive and interactive mobile applications. Both edge and fog computing are becoming important parts of the evolution of distributed computing in IoT, cyber-physical systems, machine learning, robotics, AR/VR, vehicular networks, and other similar network applications with functions that may require service provisioning to be closer to the users [28]. In Fig. 2.2, an example of system is presented where cloud computing, fog nodes, and edge devices are in interaction with each other. Edge computing is an important part of the infrastructure for assisting the communication of distributed heterogeneous IoT devices to facilitate federated learning, as well as real-time training. When federated learning and related applications are considered, although the training process with local data is handled at the mobile station, building a common and robust machine learning model can still be computationally intensive. The mobile cloud computing (MCC) platforms are commonly used to serve these kinds of demands of computations [29]. Mobile stations are able to offload the computation-intensive tasks to the centralised cloud server, which is usually located remotely, in an attempt to maintain the quality of services (QoS) required by a specific application. When federated learning is considered, the results of the local training may be of large sizes. This, in turn, may cause significant traffic loads towards the cloud depending on the application. Although MCCs are able to assist with computationally intensive tasks centrally, in case the traffic from mobile stations to the central cloud is relatively high, delivering large amounts of data from mobile stations to the cloud can cause a heavy link burden on the core network as well as undesirable delays. For federated learning applications, it is possible to use mobile edge computing (MEC), which is an implementation of fog computing where a small-scale cloud platform is used to provide computing resources and storage services at the edge of the radio access networks. Of course, with different applications and especially the heterogeneity of the mobile networks, the
32
A. Koçyiğit and E. Ever
Fig. 2.2 Cloud, fog, and edge computing integration
complexity of offloading algorithms also increases because of the need to make the correct decisions for optimum performance [30].
2.5 Federated Learning Architectures and Heterogeneity A typical federated learning application consists of a centralised aggregator communicating with a group of participants. The aggregator can be hosted by cloud infrastructure, and participants are usually smart devices that have their own processing, communication, and storage resources (Fig. 2.3). Each device contributes to the learning process by using its locally available data and sharing locally trained models or some other information that can be useful to improve the global model. However, its local data is never shared with the aggregator or any other device. The aggregator is solely responsible for coordinating the devices taking part in the learning process and iteratively improving a global model. The aggregator determines the devices to participate in the entire learning process and determines which device will be active at each iteration. The aggregator broadcasts the starting point, such as
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
Model Aggregaon
Aggregator
33
Σ Global model
w1
w2
w3
w4
Devices
Fig. 2.3 A typical federated learning scenario
the current global model, to initiate local learning and collects and aggregates local models or other relevant information sent by participants. This setup, particularly the information exchanges between the aggregator and the participants, introduces a high demand on network bandwidth. Furthermore, the central aggregator represents a potential bottleneck and a single point of failure [31]. On top of the network traffic-related issues, in terms of privacy and security as well, centralised federated learning can be problematic. For example, malicious aggregators may be able to intentionally manipulate the model or collect the clients’ private information from the updates. Alternatively, a new flow of distribution is introduced in studies such as [32, 33], which defines fully decentralised or peer-to- peer distributed learning. In this category, instead of using centralised aggregators, the tasks are distributed among participants. Mechanisms such as Blockchain are employed effectively for the organisation and synchronisation of the local learning processes without a centralised aggregator. A federated learning application may only involve participants that are in the same geographical area. In this case, the aggregator may be running in a mobile cloud infrastructure that is close to participants. This approach offers some benefits in terms of communication efficiency and latency. Mobile cloud computing at the edge also enables some alternative architectures that potentially reduce the load on networks and participant's devices. For instance, if the participant’s devices have limited resources or limited connectivity, the learning tasks can be offloaded to a nearby mobile cloud infrastructure operated by a trusted entity. Another improvement opportunity could be to define mobile cloud infrastructures as an intermediate processing layer between the central aggregator and the participants. For example, in order to reduce the load on the aggregator and its access network, intermediaries that are close to participants may perform local aggregation (Fig. 2.4) [34, 35]. Moreover, such intermediaries can enable mutual authentications to prevent
34
A. Koçyiğit and E. Ever
Model Aggregaon
Aggregator
w1
w2
Σ
Global model
w3+4
Local Aggregator(s)
Model Aggregaon
w3
Σ
Agg. model
w4
Devices
Fig. 2.4 Local aggregation
malicious intrusions. It is also possible to consider the current context to guide participant selection, enable performance-enhancing operations such as proactive vertical handovers, and implement more efficient data exchange mechanisms. In many of the application areas, local computation power comes from a large number of heterogeneous nodes such as smart devices, mobile phones, and IoT devices. The heterogeneity of the devices with different computational, communication, and storage capabilities can be considered in adjusting the computation load on the participants by adjusting federated learning parameters such as the number of rounds to improve the global model, the type and number of participants involved in each round, and the amount of computation to be carried out by participants on the received global model in each round [16]. It is also possible to employ models with different complexities in order to address the heterogeneity of the devices by the heterogeneity of models as suggested in HeteroFL [36]. For example, it is possible to vary the width and depth of neural networks to reduce the computation complexity of local models. The local models do not have to be the same as the central model, but model aggregation is still possible as the local models are of the same model class.
2.6 Federated Learning Applications for IoT and HetNets Federated learning approaches can be used for various types of applications where broader data coverage (i.e. statistical heterogeneity) and distributed computational power are required, while the privacy-related issues prevent the sharing or
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
35
distribution of local data. It is expected that by 2025 there will be more than 25 billion interconnected IoT devices, and each one of these devices will be collecting data through multiple integrated sensors [37]. Parallel to this, the size of data generated by IoT devices is expected to reach 79.4 zettabytes by 2025 [38]. With data collection in these extensive scales, the machine learning schemes can introduce great potential enabling various IoT applications. Depending on the application, the datasets of interest may include sensitive data making the privacy-related concerns critical for specific IoT applications. In turn, these applications can benefit from federated learning approaches. Similar to IoT, the Industrial Internet of Things (IIoT) paradigm has also been growing and generating large amounts of data [39]. IIoT applications are pervading in manufacturing industry as they suggest significant improvements in several aspects such as quality, productivity, and efficiency through the use of data analytics. In order to improve the benefits further, the collaboration of organisations having mutual interests is expected to grow. How to use available data in multiple organisations to carry out business analytics, without compromising organisational as well as individual privacy, is an important challenge in this context. Moreover, IIoT infrastructures are susceptible to security threats such as device hijacking, denial of service, data breaches, device theft, and data siphoning. Therefore, particularly the privacy-related advantages of using federated learning can be beneficial for IIoT applications [40]. From the participants’ perspectives, it is possible to categorise federated learning applications into two broad categories: cross-individual and cross-organisation. These categories can also be seen as cross-device and cross-silo, as suggested by [41]. In the cross-individual category, a multitude of individuals (in particular, their computing devices) take part in a federated learning task by using their own experiences (i.e. training data) to contribute to the development of a common machine learning model. Participants with different characteristics usually have different experiences; hence, they have non-i.i.d. data. Moreover, these participants may not always be available or willing to contribute to learning. The number of participants may be millions; so a participant rarely takes part in more than a few iterations of a federated learning task. Nevertheless, incentive mechanisms must be considered, and the processing and communication overhead caused by learning should be adjusted by considering the conveniences of participants. In a HetNet environment, a network operator may offer complimentary services to relieve the processing and communication load on participants as incentive mechanisms and to improve quality of experience during the training process. In the cross-organisation category, a group of organisations with mutual interests may collaborate with their own data for data analytics purposes. In particular, organisations may contribute to the development of several machine learning models that may improve their businesses without disclosing their data. These organisations may be operating in different domains; hence, they may have different experiences. However, their data may complement each other if they are operating in the same region and serving the same customer base. 
Therefore, it is possible to boost the performances of the machine learning models by incorporating data that is relevant
36
A. Koçyiğit and E. Ever
to the same subjects but possibly different aspects (i.e. different features). Hence, vertical federated learning and federated transfer learning are very common in the organisation-wide application of federated learning. Sure enough, these organisations would be willing to increase mutual benefits, but they do this without sharing their data as organisational privacy is of paramount importance. The learning process involves considerably reliable and powerful on-premises infrastructures such as private clouds and broadband networks. Hence, processing and communication overheads are not the most important issues. Nevertheless, efficient use of network and processing resources cannot be ignored as all participants usually take part in all iterations of a federated learning task and a large amount of data might be processed in each iteration. Applications such as intelligent transportation, smart industries, healthcare, and smart surveillance are good examples where federated learning can assist, complement, and maximise the benefits obtained from IoT and IIoT implementations. Heterogeneity is the common theme in such settings and should be taken into account in all collaborative tasks, such as federated learning, that require extensive use of resources. On the other hand, security and privacy concerns should be properly addressed. Hence, smart homes [42] and Smart City Sensing [43] are interesting application areas for both HetNets and federated learning. The emergence of autonomous moving platforms such as self-driving cars and autonomous flying vehicles also increases the need for HetNets as well as relatively complex machine learning technologies. These platforms require orchestration of multiple subsystems which include computer vision and machine learning with real-time decisions. Moreover, such systems need to integrate with intelligent infrastructures and other similar systems to better support their operations while improving themselves. Therefore, federated learning can be a powerful tool in this domain, as it enables collaboration without sacrificing privacy. The potential large-scale use of such autonomous systems with stringent delay requirements also brings significant restrictions in terms of the quality of service. Federated learning-based approaches supported by mobile cloud computing and optimal use of HetNets would be enablers to limit data transfer volumes and accelerate the learning process without disturbing real-time operations. E-Health applications, which are considered to be a part of IoT, are also good examples where federated learning can offer ideal solutions mainly because of the sensitivity of data that is usually used in these applications [44]. Federated learning is particularly designed to offer solutions for the problem associated with data governance and privacy. The sensitive data can be used to train algorithms collaboratively, and it is never exchanged. For example, instead of sending sensitive patient information to a central cloud infrastructure to establish a deep learning model, the locally trained models are exchanged and aggregated to protect the private data from possible leakages. Federated learning can also be an enabler for improving the performances and efficiencies of HetNets as well. For example, as HetNets are vulnerable to different kinds of attacks, the cooperation of edge devices and the cloud can enable enhanced attack detection [45]. In order to improve the quality of service for mobile users,
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
37
proactive handover mechanisms can utilise federated learning to enable the participation of limited resource nodes to improve prediction performance [46]. Federated learning can also be used to enable spectrum management based on high-resolution spectrum utilisation data of all devices without causing any privacy issue [47]. Federated learning paradigm is a promising approach for on-device machine learning; however, it also has some issues, especially in terms of security, robustness, and optimisation of available resources. Open issues and challenges should be carefully considered and addressed in order to maximise the benefits of federated learning.
2.7 Challenges and Opportunities Digital transformation has been shaping society and industry with unprecedented applications. The proliferation of wireless networks providing widespread and heterogeneous connectivity and the advances in several technologies, such as processing, storage, communication, big data, IoT, cyber-physical systems, and artificial intelligence, are the main drivers of transformation. In contemporary organisations, data-driven decision-making that involves machine learning on big data has gained momentum in the last decade. Besides, individuals are making use of machine learning models in their daily lives. In general, more data means much better machine learning models. Hence, in order to maximise the benefits, not only do organisations and individuals depend on their own data, but also they use publicly available data (e.g. web and social media), and they cooperate with others with similar interests to share experiences. However, consolidating relevant big data in a central storage to make it accessible to a modeller and processing it in a reasonable amount of time may not be possible. Moreover, privacy concerns and legislation such as General Data Protection Regulation (GDPR) in the EU prohibit individuals and organisations from sharing their data with others. In this context, federated learning is a viable option for applying large-scale machine learning on distributed data owned by different entities. Hence, it is possible to generate machine learning models that generalise well beyond individual experiences while preserving data privacy. With federated learning, machine learning models can be collectively trained by a large group of individuals with their connected devices (i.e. cross-individual federated learning) or a group of organisations with their on-premises processing and storage infrastructures (i.e. cross-organisation learning). Hence, federated learning cannot be thought independently of edge computing and access networks which are both the integral components and the potential bottlenecks in a typical cross- individual federated learning application. Similarly, cross-organisation federated learning involves on-premises cloud resources that host the learning applications on the participant side. Moreover, distributed operations that are common in the services industry and also the case in some manufacturers usually depend on mobile communications and mobile clouds. Therefore, in a federated learning application,
38
A. Koçyiğit and E. Ever
the efficient use of processing and communication resources is an important concern as well as improving the performance of the trained models. Participants taking part in a federated learning application use training data captured from their prior experiences to improve a global model. These participants may have diverse experiences relevant to the learning task. Hence, non-i.i.d. data is an important issue that must be taken into account in the federated learning application design. On the other hand, this diversity is essential to have training data that is representative enough to cover all possible cases and hence to generate generalisable models. However, achieving broader data coverage by involving a large number of participants is not easy. Usually, incentive mechanisms are employed to increase the number of participants. In this regard, mobile network operators and service providers may play important roles. For instance, they can offer some free communication and mobile cloud services to their subscribers to attract them to take part in federated learning applications. Heterogeneity of the devices and access networks is another important issue that has to be taken into account, especially in cross-individual federated learning. Typically, mobile devices performing local training have limited processing powers, memories, storage capacities, power sources, and communication bandwidths. Moreover, participants may not want to take part in learning rounds when they are currently using their devices for other tasks, they are accessing the Internet via metered connections, or their devices are not plugged into a power outlet. Heterogeneity of devices results in variable delays in exchanging model parameters and completing local training iterations for different participants. There may also be excessive congestions in certain parts of the network affecting some participants. Moreover, participants may be mobile, and there may be temporary connectivity problems. Therefore, the aggregator has to deal with straggling and dropped-out participants in every round. Hence, in order to execute the learning process efficiently, several factors such as device capabilities, current network state, and user contexts should be considered in selecting participants that will take part in each round, and learning parameters should be tuned accordingly. In this context, network operators may assist aggregators as they know the current situation of their networks and they have more information regarding the current states of the participants. They can streamline the learning process by managing the network resources to improve communication quality and by performing proactive handovers to provide more reliable connections. They can even offer free services to active participants so that they enjoy higher bandwidth, vertical handovers to higher-quality network connections, or computation offloading facilities available at their current locations. Cloud infrastructures at the edge are also potential facilitators for federated learning. In cross-organisational applications, on-premises private clouds host applications that implement the local learning processes. In cross-individual applications, trusted mobile clouds in the vicinity of a participant can be used for computation offloading to relieve the processing and communication loads on the participant’s device. Cloud infrastructures at the edge may also serve as intermediaries between the participants and the aggregator. These intermediaries can receive
2 Federated Learning with Support of HetNets, Cloud Computing, and Edge Computing
39
the global model (or initial model parameters) from the aggregator and efficiently broadcast it to the participants at the beginning of each round, collect updates from participants at the end of the round, and perform local aggregations before forwarding the updates to the aggregator. Therefore, the communication and processing load on the aggregator may be relieved as it needs to exchange data with a smaller number of entities in each round. Moreover, this structure may improve the communication (and hence energy) efficiency of participants as they communicate with a nearby local aggregator instead of a distant aggregator. In split learning or similar approaches, the intermediary located at an edge node may implement some intermediate layers of the global model. Therefore, context information and local dynamics may be incorporated into the global model by this intermediary. Hence, model performance can be improved. In fully decentralised or peer-to-peer federated learning applications, cloud infrastructures at the edge and other similar intermediaries may simplify the implementation by serving as reliable proxies of participants and data exchange points. It is also possible to implement a mobile aggregator (i.e. migrating across edge clouds) by choosing participants that are close to each other. In each round, the aggregator may move closer to the participants to improve communication efficiency. Issues such as planning rounds based on the availability of participants, ensuring convergence of the global model, and proper migration of the aggregator must be considered in such implementations. In federated learning applications, malicious participants, malicious aggregators, compromised devices, and intruders may pose serious security and privacy risks. Countermeasures such as mutual authentications, encryption, and integrity checking can be employed against such threats. These issues can also be considered at the network level. Cooperation of the network operators and federated learning application designers potentially offers much more straightforward, efficient, and secure solutions. For instance, mutual authentications can be achieved via the aggregator- network operator and network operator-subscriber authentications. Intermediaries such as local aggregators or proxies of participants in an edge cloud operated by the network operator may contribute to participants’ privacy as the intermediary can limit the information accessible by the aggregator regarding the individuals contributing to the model updates.
2.8 Conclusion The effective use of existing distributed computing paradigms is gaining importance with the improvements in the ways the data is managed and processed. The advancements in computing facilities and the ways various systems are interconnected continue to open new opportunities and introduce new challenges for the researchers in the fields of distributed computing, computer networks, Internet of Things, and machine learning. Federated learning is becoming a critical machine learning technique for researchers and engineers since it can provide solutions for
40
A. Koçyiğit and E. Ever
applications with minimised delay requirements. Furthermore, one of the main reasons that federated learning is gaining popularity is the services it can provide in terms of data privacy. In this chapter, the heterogeneous networks, as well as heterogeneous objects, are considered as enablers of federated learning approaches. The machine learning concepts are discussed together with the important properties of federated learning. Following that, the existing federated learning architectures are considered together with enabling technologies such as cloud computing, fog computing, edge computing, and mobile edge computing. The heterogeneous structure of federated learning is further exposed through examples, particularly from the field of IoT. Finally, existing challenges and open research issues are considered critically.
Chapter 3
Cooperative Mobile Traffic Offloading in Mobile Edge Computing for 5G HetNet IoT Applications

B. D. Deebak
3.1 Introduction

IoT applications have proliferated the use of smart computing devices to build a constructive infrastructure. These devices serve communities of widely differing scope by delivering ubiquitously accessible data. Such data accessibility is envisioned to fulfill the demands of 5G/6G networks, which connect smart things to the service infrastructure. Computation-intensive applications running on smart mobile devices rely on mobile cloud computing to mitigate the computation cost of mobile devices and to extend the concepts of multi-access edge computing environments, including partial and full traffic offloading. Mobile cloud computing reserves powerful computing resources to accelerate the execution of real-time tasks and to conserve energy sources. However, it may evoke several technical challenges such as computation overhead, traffic offloading, and reliability among end device, edge server, and cloud. To address these issues effectively, mobile edge computing brings the capabilities of cloud computing to the network edge. This strategy exploits the proximity of edge server and end device to obtain lower communication latency and better service reliability [1]. Lately, the European Telecommunication Standard Institute (ETSI) has revised the term "mobile edge computing" into "multi-access edge computing" to reflect its applicability to heterogeneous networks, including fixed access technologies.

The traditional cloud supplies storage and computation capabilities to distant end devices, which may therefore experience high communication latency. On the other hand, edge computing can mitigate the excessive communication latency, but it has
a possible processing latency due to restricted computation capabilities. To address both processing and communication latency, the edge server and the remote cloud coexist. This coexistence broadens the vision of cooperative mobile traffic offloading to offer ultralow response times. Traffic offloading connects various mobile devices to collect and exchange data over a dedicated wireless channel. These devices rely on mutual authentication to improve the service features of machine-to-machine (M2M) communication.

Most real-time applications can expand the scope of ubiquitous connectivity to solve computational problems. The voluminous data demands a heterogeneous technology to meet the standard requirements of 5G/6G communication systems. The evolving technologies use a large number of small cells to address key issues such as data streaming over the IoT-enabled 5G network. The smart computing device periodically changes its mode of interaction between the 5G core and the HetNet to discover a reservation scheme dynamically. In addition, it tries to forecast the necessary resources to optimize the core functionalities. The intelligent device applies a congestion admission control mechanism to reserve the essential resources of the mobility network.

The advancement of fifth-generation (5G) development has enabled mobile communication for the convergence of mobile devices and the Internet of Things (IoT), specifically for large-scale wireless sensor networks. 5G cellular technology aims to provide high peak data rates with low latency, reliability, and substantial network capacity [1]. With the expansion of 5G networks, people can connect and share information not only between terminals but also among most daily used objects. As per the statistical reports presented in [2], it is estimated that more than 50 billion sensor devices will be connected to the Internet across the globe by the end of 2020. From the user application view, 5G is capable of providing numerous advancements such as 10 Gbps bandwidth, 1 ms latency, 2.16 GHz channel width, ultrahigh mobility support, and low operational costs. These advancements help to deliver extensive quality of service (QoS) and quality of experience (QoE). A detailed comparison of 5G cellular network properties with LTE and LTE-A is depicted in Table 3.1.

Table 3.1 Properties of advanced cellular technologies

Properties       | LTE                       | LTE-A            | 5G
Range            | 30 km                     | 30 km            | 500 m (approx.)
Frequency band   | 700–2600 MHz              | 450 MHz–4.99 GHz | 57.05–64 GHz
Channel width    | 1.4, 3, 5, 10, 15, 20 MHz | Up to 100 MHz    | 2.16 GHz
Latency          | 10 ms                     | –                | 1 ms
Mobility support | Very high                 | Very high        | Ultrahigh
Bandwidth        | 50 Mbps                   | 1 Gbps           | 10 Gbps
Operational cost | Low                       | Low              | Very low

Of late, IoT has become an integral part of human lives and is widely used in various complex domains such as production, supply chain,
transportation, buildings, hospitals, large-scale industrial systems, etc. IoT deployments can comprise a large number of electronic devices, including sensors, actuators, and other related network systems. These electronic devices are capable of connecting, computing, and communicating with other nodes, and mostly they are deployed in hostile environments where user intervention is not desired. Further, IoT devices can be connected through wireless sensor networks (WSN) that are capable of sensing and sending the gathered information to the nearest base station. However, IoT devices are implemented to facilitate a variety of services with constrained resources, including limited battery, processing, and storage capabilities [3]. Therefore, intermediate data processing and storage mechanisms are in high demand to handle these large volumes of data.

Cloud computing (CC) has emerged to facilitate on-demand self-service access to advanced computing systems that process large volumes of data and enable permanent massive storage. Since its inception in 2005, CC has changed the way solutions and services are provided: (1) software as a service (SaaS) provides numerous end-user applications, for instance, Facebook, Flickr, and Twitter, that have become part of individuals' daily lives; (2) platform as a service (PaaS) provides essential services for runtime development environments; and (3) infrastructure as a service (IaaS) ensures scalable infrastructure services and processing mechanisms to run applications. Together, these models have significantly influenced how businesses succeed. Besides, current IoT systems place heavy demands on data and access control, computing resources, networking infrastructures, and storage to accommodate the requirements of connected devices. In some cases, IoT applications might necessitate low response times; some might generate a massive amount of data; some might involve sensitive private data; and some might require long-term storage that could heavily burden IoT network systems. Traditional cloud computing environments are therefore not sufficient to provide essential services to these IoT applications, and various existing architectures are incapable of meeting the advanced requirements of high-level processing, communication, and computation. Meeting these requirements necessitates the modification of existing architectures and cloud services.

Furthermore, edge computing has become one of the emerging solutions for providing substantial computing and storage resources [4], such as cloudlets, mini local data centers, and fog nodes, which can be deployed at the edge in close vicinity to the sensor nodes. In the late 1990s, Akamai introduced content delivery networks (CDN) to enhance web performance. A CDN utilizes nodes deployed at the edge of networks, closer to the application users, to fetch and cache web content. Thereafter, edge computing generalized and extended the CDN concept in order to strengthen the cloud infrastructure [5]. Moreover, the European Telecommunication Standard Institute (ETSI) and its Industry Specification Group (ISG) coined the term mobile edge computing [6] with provisional services such as higher processing, seamless data access, and
storage capabilities. The cloud computing model uses the conceptual idea of edge computing to offer massive data processing, which enhances network features such as mobility, localization, minimized latency, and user experience [7]. Advanced technologies, including IoT, cyber-physical systems (CPS), and cloud infrastructure, are primarily integrated to ease the complexities of real-time computing systems. These developments strongly favor edge computing to connect various devices, such as computer systems, laptops, access points, wearable devices, and mobile devices, and to offer seamless connectivity.

Generally speaking, edge computing devices access users' locations with limited security properties to prevent privacy threats. However, security and privacy issues remain challenging in these evolving technologies. While providing centralized services [8], the decentralized network architecture experiences various security vulnerabilities. As a consequence, the centralized core network has given rise to the technique of multi-access edge computing (MEC) to enable effective data transmission from the core network to the edge network. The core network controls the computation and processing tasks performed by the centralized servers coexisting with the edge network architecture. The edge networks deliver high flexibility to overlay the development of new applications and customized services. Many real-time applications exercise the traditional mechanisms, which practically exposes the technological difficulties. With the integration of edge computing, mobile application users can provisionally gain effective access to resources. It even enables a strategy to offload mobile traffic; as a result, network traffic in the core network can be minimized. Of late, artificial intelligence (AI) has introduced intelligent decision-making processes for edge computing technologies, which adapt to conditional and continually varying environments to analyze environmental features [9]. Such intelligent systems apply machine learning (ML) algorithms spanning supervised/unsupervised learning, reinforcement learning, deep learning, and deep reinforcement learning to handle complex problems [10].

Internet connectivity has risen manyfold in the past decade, and the number of network components is expected to rise at a breakneck pace. As reported by IHS Markit, a world leader in critical information, analytics, and solutions, the number of connected Internet of Things (IoT) devices worldwide will jump 12% on average per annum, from almost 27 billion in 2017 to 125 billion in 2030 [11]. The present network cannot cater to such a large number of devices, and a faster network will be required. Many IoT devices require very rapid data transmission and high throughput. Today, IoT is finding many applications in critical systems that require secure, near-instantaneous data transmission [12, 13]. Hence, MEC, along with network function virtualization (NFV), would allow such a large number of devices to be served with a better QoE. 4G was a result of the collaboration between Charmed Technologies, Inc., from Beverly Hills, California, and Software Technology Parks, Karnataka, India [14].
3.2 5G-Enabled Edge Computing Networks

Recent developments in the 4G evolution have been carried out in R&D departments in Korea, where the hyper-connected IT infrastructure "Giga Korea" was developed from 2013 to 2020. It uses a broader spectrum of special-purpose networks, greener networks and devices, and other state-of-the-art network topologies [13]. Many research contributions towards the 3GPP standards have been made by Nokia Siemens Networks (NSN), who believe that radio evolution will continue with improvements in digital processing power, efficient use of optical fiber, and advances in the radio implementation of the bandwidth [14]. Full-scale small cell heterogeneous deployments, cognitive radio networks (CRN), self-organizing networks (SON), and fast interference coordination and cancellation lead to drastic improvements in network throughput and reductions in latency [15]. The availability of sizeable untapped spectrum resources in the millimeter-wave (mm-wave) band is suitable for providing a gigabit experience with the right local feel using high-capacity small cells [16].

Far-reaching research has been done in the field of computer science on system architectures such as MAUI [17], power management [18], mobile computation offloading (MCO) [19], and machine migration [20]. Mobile edge computation offloading (MECO) provides an energy-efficient structure for offloading core network traffic by combining the design of MCO with wireless communication techniques. Some research addresses multi-user and single-user [21] offloading systems. Link failure can be widespread for high-mobility clients [22]. The energy consumption of localized computing is used as the deciding factor for optimal offloading in a single-user MECO system [23]. Dynamic offloading using adaptive LTE and Wi-Fi was discussed in [24] to achieve higher energy efficiency, and computational offloading was designed using game theory for energy and latency minimization in multi-user MECO [24, 25].
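As an illustration of the energy-based decision rule mentioned above, the sketch below compares the energy of executing a task locally against the energy of transmitting its input for remote execution. The linear energy models and all parameter values are assumptions for illustration, not figures from the cited works.

```python
def local_energy(cycles, energy_per_cycle):
    """Energy (J) to execute the task on the device CPU."""
    return cycles * energy_per_cycle

def offload_energy(data_bits, tx_power, uplink_rate):
    """Energy (J) to transmit the task input over the wireless link."""
    return tx_power * (data_bits / uplink_rate)

def should_offload(cycles, energy_per_cycle, data_bits, tx_power, uplink_rate):
    """Offload only when transmission costs less energy than local execution."""
    return (offload_energy(data_bits, tx_power, uplink_rate)
            < local_energy(cycles, energy_per_cycle))

# Hypothetical task: 1e9 CPU cycles at 1 nJ/cycle, 2 MB of input data,
# 0.5 W transmit power, 10 Mbit/s uplink.
print(should_offload(1e9, 1e-9, 2 * 8e6, 0.5, 10e6))  # -> True (0.8 J < 1.0 J)
```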
3.2.1 Multimedia Traffic: Analysis, Offloading, and Tools

Mobile communications have seen tremendous growth in the last decade. Moreover, popular tools such as social networks and multimedia streaming primarily drive the demand for smartphones and tablets. Network traffic will grow exponentially in the coming years due to the convergence of IoT technologies. Edge computing may be associated with the core networks to mitigate traffic bottlenecks [26]. The computing mechanism demands the support of the underlying network, since the network sustains application services across devices with extended connectivity [27]. Therefore, the computing devices favor low power consumption to improve service connectivity and network performance using a smart charging strategy [28].
Real-time services, such as augmented reality, interactive gaming, smart traffic control systems, and face recognition, benefit significantly from the state-of-the-art technology of MEC [29]. These computing systems aim to address network challenges such as traffic offloading, service latency, and data processing. An edge network can extend network capacity while providing fast computation and easy access [30]. The fast computation and easy access provided by MEC are attracting considerable attention. Of late, this technological collaboration has scaled up automated artificial intelligence (AI) services, which involve heavy computation to solve complex problems. Conversely, the edge computing architecture has systemic issues, such as resource shortages and software breakdowns, that affect the quality of experience (QoE). Reducing software breakdowns can significantly improve the performance of devices employed in the healthcare sector [31]. Also, the advent of unmanned aerial vehicles (UAV) has increased utility and data rates manyfold [32]. As a consequence, commercial networks prefer a convergence with AI to optimize network traffic [33] (Fig. 3.1).

Traffic analysis in MEC can be done on the edge nodes by using a software-defined radio (SDR) kit for emulation purposes. Live data streams from YouTube live channels or other broadcast streams can be used for collecting real-time traffic, along with the Steel Enclosure Accessory for USRP B200/B210 (Green PCB Only) that can be used for setting up the emulation [34]. The USRP B210 is a single-board, low-cost kit that provides 56 MHz of real-time bandwidth and combines it with AD9361 RFIC direct conversion. It can be employed across a wide range of spectrum, from FM radio to GPS and Wi-Fi [35, 36].
Fig. 3.1 Measurement setup for performing traffic analysis in the mobile edge computing (MEC) architecture
3.2.2 Quality of Service (QoS) Parameters

Quality of service (QoS) parameters describe the characteristics of transmission between two users. In a data link service (DLS), these characteristics depend solely on the DLS provider. The linking services aim to establish a better scope for improving the quality of the application services. The different types of requirements are expressed using the QoS parameters. The major factors are as follows:

(a) Latency: the delay between an input into the system and the desired outcome; it refers to the time taken for a single packet of data to travel from the source to the destination.
(b) Loss: the percentage of data lost during the transmission of packets from the source to the destination.
(c) Bandwidth: the amount of data transmitted in a fixed amount of time, typically expressed in bits per second (bps).
(d) Jitter: the variation in packet delay; it can also be articulated as the time difference between inter-packet arrivals.
(e) Throughput: the total number of bits successfully transferred per unit time.

The MEC architecture applies the QoS parameters to examine the rate of network traffic, which may utilize an efficient strategy to claim a higher service quality.
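To make these definitions concrete, the following sketch computes loss, mean latency, jitter, and throughput from a hypothetical packet trace; the trace format and values are assumptions for illustration only.

```python
# Each record: (send_time_s, recv_time_s or None if lost, size_bits)
trace = [(0.00, 0.050, 12000), (0.02, 0.075, 12000),
         (0.04, None, 12000), (0.06, 0.130, 12000)]

received = [(s, r, b) for s, r, b in trace if r is not None]
delays = [r - s for s, r, _ in received]

loss = 1 - len(received) / len(trace)          # fraction of packets lost
latency = sum(delays) / len(delays)            # mean one-way delay (s)
# Jitter as the mean absolute difference between consecutive packet delays.
jitter = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
span = max(r for _, r, _ in received) - min(s for s, _, _ in trace)
throughput = sum(b for _, _, b in received) / span   # bits per second

print(loss, latency, jitter, throughput)
```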
3.3 Traffic Offloading

Traffic offloading addresses the issues of high latency and excessive load on congested networks using machine learning approaches [37]. Congested networks incorporate edge computing to offload network traffic to nearby computing devices, using an edge network as a substitute for a centralized core. The centralized core permits the computing devices to exploit the proximity of nearby end users to mitigate the traffic. This approach may ease the delay factor to meet the standard requirements of 5G/6G networks. Critical metrics such as delay, latency, network load, and peak time, tied to the activities of the end users, categorize the process of traffic offloading. The major delay components are as follows:

Processing delay: the time taken by the router to process the packet header.
Queuing delay: the time spent by data packets in router queues.
Transmission delay: the time consumed in pushing the packet bits onto the data link.
Propagation delay: the time taken by the transmission signal to arrive at the destination.
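These four components add up to the per-hop (nodal) delay. The sketch below evaluates the standard formula d = d_proc + d_queue + L/R + d/s; the numbers used are hypothetical.

```python
def nodal_delay(d_proc, d_queue, packet_bits, link_rate_bps,
                distance_m, prop_speed_mps=2e8):
    """Per-hop delay: processing + queuing + transmission (L/R) + propagation (d/s)."""
    d_trans = packet_bits / link_rate_bps
    d_prop = distance_m / prop_speed_mps
    return d_proc + d_queue + d_trans + d_prop

# Hypothetical hop: 12,000-bit packet on a 100 Mbit/s link over 50 km of fiber,
# with 20 us processing and 1 ms queuing delay.
print(nodal_delay(20e-6, 1e-3, 12000, 100e6, 50e3))  # ~1.39 ms
```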
Fig. 3.2 MEC architecture and its utilization
Traffic offloading is performed upon the request of a computing device to cover a transmission distance; thus fewer nodes may cautiously route the data packets to reduce the transmission delay. Most computing services utilize edge computing to handle the pre-processing of massive data, performing intensive computation at the edge server, which may optimize resource usage through various context-aware services. The massive data demands proper pre-processing to mitigate the network load, reducing network congestion towards the core network, as shown in Figs. 3.2 and 3.3.

Separating the control and user plane allows various radio technologies to coexist and provides an efficient strategy to handle the issues of the network control architecture. A high capacity of small cells offloads the network traffic, specifically in the edge computing architecture [38]. The core functions include splitting the control plane and user plane (the C/U split) to optimize the operational process of real-time applications. This provides a means to examine the network performance and supports various use cases for managing network loads. In the network domain, efficient management exploits the real-time resources through the macrocells to analyze the difficulties of the network control plane. With the logical/physical decoupling of the user plane and the control plane, a large number of heterogeneous small cells are integrated.

In wireless networks, the use of dense low-power small cells is expected to handle the increasing data demand from mobile users, unlocking high spatial reuse. With small access points closer to the users, the capacity for spatially localized
Fig. 3.3 C/U split architecture
transmission is increased by the small cell network, allowing more efficient spectrum utilization with multiple concurrent connections [39]. This also enhances mobility robustness, and the handover failure rate is reduced by maintaining a macrocell C-plane connection [40]. In conclusion, the C/U split significantly reduces the signaling overhead towards the core network. A logical decoupling between the control and user plane is created using software-defined networking (SDN) [41]. The C/U split helps SDN ease the development of new network functionalities and also isolates complexity by providing a programmable hardware framework. In the MEC architecture, SDN provides the required flexibility in network monitoring, network management, and policy installation, acting as a catalyst for a better architecture [42–44].
3.4 Intelligent Mechanisms for Offloading in MEC

The need for faster data transmission is greater than ever, considering the increasing number of things that require faster and better Internet. The number of self-driving vehicles is on the rise, and so is the number of devices connected to the Internet (IoT). The benefits of MEC, in the form of reduced latency and increased throughput with a faster transmission time, are crucial for such devices to function correctly, as a split-second delay could claim someone's life. The need for a more reliable and faster Internet could be fulfilled by employing this architecture [45]. In real-time MEC, offloading decision-making must be optimized in the face of influential factors such as multidimensional, random, and time-varying data collection. Artificial intelligence-based ML and DL algorithms can extract
functional information from large volumes of data and can learn to optimize, predict, and make decisions in conditional environments. Besides, the integration of AI with MEC provides massive computing resources and reliable low-latency services for intelligent computing. As a result, the MEC edge network becomes self-optimizing and self-adaptive, embedding AI decision-making within it. Furthermore, the convergence of AI with MEC does not merely apply artificial intelligence mechanisms to design algorithms and protocols; it acts as an intelligent system framework for designing smart MEC decision-making practices [46]. Of late, researchers have implemented various AI-based solutions for offloading in edge networks with enhanced infrastructures for their ever-changing requirements [47]. Edge intelligence is discussed by various authors in order to explore the drawbacks of the collaboration between the edge and the cloud, with AI mechanisms deployed on the edge servers as a countermeasure [48].
3.4.1 Reinforcement Learning

Reinforcement learning examines how a decision-maker should continuously select the optimal action in an ever-changing environment. The main aim of this learning system is to select an action for every state of the system so as to maximize the accumulated benefit in the long term. It is particularly suitable for automatic control and decision-making concerns in dynamic environments. Compared with other AI-based algorithms, this mechanism does not learn from available data; rather, it learns from its own experience. It builds on a mathematical framework known as the Markov decision process (MDP) to model the decision-making practice. A decision-maker selects an action at in a specific state st and then moves on to the next feasible state with transition probability P(st+1 | st, at) and expected reward r(st, at). This procedure continues over time, and the decision-maker collects a series of rewards, as shown in Fig. 3.4.
Fig. 3.4 Reinforcement learning
From the application point of view, offloading can save the user's energy and enhance computing capability. However, it requires additional transmission and computational resources. The system therefore needs to decide whether to offload or not; if yes, the corresponding MEC server should be selected, and the workload to be offloaded must be determined accordingly. Many researchers have implemented the MDP as an optimization mechanism [49]; the MDP can be solved using dynamic programming without a learning action. The limitation of these methods is a computational complexity that grows multiplicatively with the number of states, causing the dimensionality problem [50].
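Since such an MDP can be solved by dynamic programming, a minimal value-iteration sketch follows. The tiny two-state, two-action transition model is a made-up example; the rapid growth of the tables with the state count is exactly the dimensionality problem noted in [50].

```python
import numpy as np

# P[a][s][s2]: probability of moving from state s to s2 under action a.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
# R[a][s]: expected immediate reward r(s, a).
R = np.array([[1.0, 0.0], [2.0, -1.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(500):                    # dynamic-programming sweeps to convergence
    V = np.max(R + gamma * P @ V, axis=0)
policy = np.argmax(R + gamma * P @ V, axis=0)  # best action per state
print(V, policy)
```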
3.4.2 Q-Learning

Q-learning is an essential kind of reinforcement learning algorithm that seeks to find the optimal selection policy using a Q-function. The Q-function follows the Bellman equation and takes two inputs, a state (s) and an action (a):

Qπ(st, at) = E[Rt+1 + γRt+2 + γ²Rt+3 + … | st, at],

which gives the values of Q in the table. This algorithm can be used, for example, for resource scheduling in spectrum management and for security in IoT. It estimates the reward after every action and correspondingly moves to the new state; it is then rewarded for good actions and penalized for bad ones. In a dynamic MEC environment, it is not possible to acquire perfect knowledge of the edge network. Therefore, Q-learning can be a suitable mechanism for decision-making with limited information in a dynamic environment. First, the actions, states, and profit functions are identified in order to apply Q-learning in the MEC environment. Q-learning then updates the action-value function from the feedback produced by states and actions.
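A minimal tabular sketch of the update just described follows; the framing of states as hypothetical edge-server load levels and actions as {local, offload} is illustrative, not taken from the cited works.

```python
import random

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def epsilon_greedy(Q, s, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best action."""
    if random.random() < epsilon:
        return random.randrange(len(Q[s]))
    return max(range(len(Q[s])), key=lambda a: Q[s][a])

# Toy table: 3 states (edge-server load levels), 2 actions (0 = local, 1 = offload).
Q = [[0.0, 0.0] for _ in range(3)]
q_learning_step(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)
```

Note that the update needs no transition model P, which is why Q-learning suits the limited-information setting described above.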
3.4.3 Supervised and Unsupervised Learning

Supervised learning is an essential family of ML mechanisms that learn an analytical model capable of predicting the output for a given input dataset. It infers a function by analyzing labeled training data using statistical methods. The support vector machine is a typical instance of a supervised learning mechanism, mainly used for discrete-valued classification [51].

Support Vector Machine (SVM): The SVM is a versatile supervised learning method widely used for both classification and regression problems, though it is mostly used in classification. Support vectors are simply the coordinates of individual observations. The SVM algorithm plots every data item as a point in n-dimensional space, with the value of each feature constituting the
value of a particular coordinate. It then performs classification by finding the hyperplane that best separates the classes [52]. On the other hand, in unsupervised learning the labels of the input data are not known, and the aim is to discover hidden properties and structures in the data in order to make predictions. K-means is one of the most widely used unsupervised learning algorithms; it attempts to divide data whose categories are unknown into several disjoint groups.

K-Means: K-means is a well-recognized and commonly used clustering algorithm. Clustering can be defined as the task of identifying subgroups in the data: data points in the same cluster are very similar, while points in different clusters differ. K-means identifies K centroids of non-overlapping clusters and then allocates every data point to the nearest cluster while keeping the clusters as compact as possible.

As the server requirements in MEC are dynamic and time-varying, it is constructive to customize the offloading decision for multiple services using classification and clustering mechanisms. These can also be widely used for predicting radio characteristics, policy management in HetNets, anomaly detection in IoT environments, etc. Of late, various researchers [53] have implemented supervised learning mechanisms to optimize power consumption and computational efficiency for smart mobile devices, building the dataset by predicting power consumption in multiple contexts. Besides, to diminish interference in mobile HetNets, a K-means-based context-aware mechanism was implemented [54] for small cell clustering. Moreover, making the offloading and server choice in MEC per cluster rather than per individual can effectively reduce the number of decisions across a significant number of participants, as shown in Fig. 3.5.
Fig. 3.5 Supervised and unsupervised learning
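Building on the per-cluster idea above, a compact K-means sketch follows; it groups hypothetical per-user demand features so that one offloading decision can serve a whole cluster. The feature choice is an assumption for illustration.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical per-user features: (CPU demand, uplink quality).
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
labels, centroids = kmeans(X, k=2)
print(labels, centroids)
```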
3.4.4 Deep Learning

Deep learning (DL) is a widely known form of ML that allows systems to learn from experience and to understand data according to a hierarchy of concepts. DL enables computational models comprising multiple processing layers to learn representations of data at multiple levels of abstraction. These representations are based on multi-layer neural networks, which enable the computational model to automatically obtain the features required for classification from huge amounts of data [55]. Many researchers have observed that the main advantage of DL is that features can be learned without any manual setup, which can make it more efficient than reinforcement learning and Q-learning for such tasks. Since its inception, DL has been widely used in various complex domains to mitigate numerous issues, including fault detection, traffic behavior prediction, and dynamic routing. Using DL mechanisms in MEC can help forecast conditional changes for optimized offloading decision-making, as shown in Fig. 3.6.

Fig. 3.6 Deep learning
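As a minimal illustration of the multi-layer representations just described, the sketch below runs a forward pass through a tiny fully connected network with random weights; the input features and the {local, offload} output framing are assumptions, and a real DL-based predictor would of course be trained on traffic data.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b, activation=np.tanh):
    """One fully connected layer: affine transform followed by a nonlinearity."""
    return activation(x @ w + b)

# Hypothetical input: 4 network features (load, latency, signal quality, queue length).
x = rng.random(4)
h = layer(x, rng.standard_normal((4, 8)), np.zeros(8))   # hidden representation
out = layer(h, rng.standard_normal((8, 2)), np.zeros(2),
            activation=lambda z: np.exp(z) / np.exp(z).sum())  # softmax over {local, offload}
print(out)
```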
3.5 Conclusions

This chapter has studied the architecture of mobile edge computing for 5G networks. Computation-intensive applications have emerged from smart mobile devices, and mobile cloud computing mitigates the computation cost of mobile devices. The concepts of multi-access edge computing environments, including partial and full traffic offloading, leverage the design of edge IoT systems. As a result, voluminous data demands a heterogeneous technology to meet the standard requirements of 5G/6G communication systems. In some cases, IoT applications
might necessitate low response times; some might generate a massive amount of data; some might involve sensitive private data; and some might require long-term storage that could heavily burden the IoT network systems. Moreover, this technology applies intelligent mechanisms to develop a suitable traffic offloading scheme. It may combine converging technologies, such as software-defined networking, network function virtualization, cloud, and open-source software, to explore the core functionality of multi-access edge computing. It can shift computation to the edge network, which reduces the complexity of the core network. Moreover, open-source platforms can enrich the state-of-the-art technologies to improve quality metrics such as cost, time, and performance.
References

1. Gupta, A., & Jha, R. K. (2015). A survey of 5G network: Architecture and emerging technologies. IEEE Access, 3, 1206–1232.
2. Akpakwu, G. A., Silva, B. J., Hancke, G. P., & Abu-Mahfouz, A. M. (2017). A survey on 5G networks for the Internet of Things: Communication technologies and challenges. IEEE Access, 6, 3619–3647.
3. Satyanarayanan, M. (2017). The emergence of edge computing. Computer, 50(1), 30–39. Deebak, B. D. (2020). Lightweight authentication and key management in mobile-sink for smart IoT-assisted systems. Sustainable Cities and Society, 102416.
4. Dilley, J., Maggs, B., Parikh, J., Prokop, H., Sitaraman, R., & Weihl, B. (2002). Globally distributed content delivery. IEEE Internet Computing, 6(5), 50–58.
5. Hu, Y. C., Patel, M., Sabella, D., Sprecher, N., & Young, V. (2015). Mobile edge computing: A key technology towards 5G. ETSI White Paper, 11(11), 1–16.
6. Cao, B., Zhang, L., Li, Y., Feng, D., & Cao, W. (2019). Intelligent offloading in multi-access edge computing: A state-of-the-art review and framework. IEEE Communications Magazine, 57(3), 56–62.
7. Behmann, F., & Wu, K. (2015). Collaborative internet of things (C-IoT): For future smart connected life and business. Wiley.
8. Date, E. T. (2019). The road to 5G networks.
9. Oueis, J. (2016). Joint communication and computation resources allocation for cloud-empowered future wireless networks. Doctoral dissertation.
10. Petri, I., Diaz-Montes, J., Rana, O., Punceva, M., Rodero, I., & Parashar, M. (2015). Modelling and implementing social community clouds. IEEE Transactions on Services Computing, 10(3), 410–422.
11. Hamza, A., & Tripp, T. (2020). Optical wireless communication for the Internet of Things: Advances, challenges, and opportunities.
12. Crossler, R. E., Johnston, A. C., Lowry, P. B., Hu, Q., Warkentin, M., & Baskerville, R. (2013). Future directions for behavioral information security research. Computers & Security, 32, 90–101.
13. Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview of e-learning and enabling technology. Information Systems Frontiers, 5(2), 207–218.
14. Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview of e-learning and enabling technology. Information Systems Frontiers, 5(2), 207–218.
15. Ko, S. W., Huang, K., Kim, S. L., & Chae, H. (2017). Live prefetching for mobile computation offloading. IEEE Transactions on Wireless Communications, 16(5), 3057–3071.
16. Gu, T., Yang, Z., Basu, D., & Mohapatra, P. (2019, May). BeamSniff: Enabling seamless communication under mobility and blockage in 60 GHz networks. In 2019 IFIP Networking Conference (pp. 1–9). IEEE.
17. Nouri, N., Abouei, J., Jaseemuddin, M., & Anpalagan, A. (2019). Joint access and resource allocation in ultradense mmWave NOMA networks with mobile edge computing. IEEE Internet of Things Journal, 7(2), 1531–1547.
18. Nawrocki, P., & Sniezynski, B. (2020). Adaptive context-aware energy optimization for services on mobile devices with use of machine learning. Wireless Personal Communications, 1–29.
19. Gu, X., Ji, C., & Zhang, G. (2020). Energy-optimal latency-constrained application offloading in mobile-edge computing. Sensors, 20(11), 3064.
20. Ren, J., Zhang, D., He, S., Zhang, Y., & Li, T. (2019). A survey on end-edge-cloud orchestrated network computing paradigms: Transparent computing, mobile edge computing, fog computing, and cloudlet. ACM Computing Surveys (CSUR), 52(6), 1–36.
21. Zeadally, S., & Bello, O. (2019). Harnessing the power of Internet of Things based connectivity to improve healthcare. Internet of Things, 100074.
22. Lockhart, S. L., Duggan, L. V., Wax, R. S., Saad, S., & Grocott, H. P. (2020). Personal protective equipment (PPE) for both anesthesiologists and other airway managers: Principles and practice during the COVID-19 pandemic. Canadian Journal of Anesthesia, 67, 1–11.
23. Lin, L., Liao, X., Jin, H., & Li, P. (2019). Computation offloading toward edge computing. Proceedings of the IEEE, 107(8), 1584–1607.
24. Carvalho, G., Cabral, B., Pereira, V., & Bernardino, J. (2020). Computation offloading in edge computing environments using artificial intelligence techniques. Engineering Applications of Artificial Intelligence, 95, 103840.
25. Jha, K., Doshi, A., Patel, P., & Shah, M. (2019). A comprehensive review on automation in agriculture using artificial intelligence. Artificial Intelligence in Agriculture, 2, 1–12.
26. Toth, J., & Gilpin-Jackson, A. (2010, October). Smart view for a smart grid: Unmanned aerial vehicles for transmission lines. In 2010 1st International Conference on Applied Robotics for the Power Industry (pp. 1–6). IEEE.
27. Gember, A., Krishnamurthy, A., John, S. S., Grandl, R., Gao, X., Anand, A., ... & Sekar, V. (2013). Stratos: A network-aware orchestration layer for middleboxes in the cloud. Technical report.
28. Ouaddah, A., Mousannif, H., Abou Elkalam, A., & Ouahman, A. A. (2017). Access control in the internet of things: Big challenges and new opportunities. Computer Networks, 112, 237–262.
29. Taleb, T., Samdanis, K., Mada, B., Flinck, H., Dutta, S., & Sabella, D. (2017). On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration. IEEE Communications Surveys & Tutorials, 19(3), 1657–1681.
30. Wichtlhuber, M., Reinecke, R., & Hausheer, D. (2015). An SDN-based CDN/ISP collaboration architecture for managing high-volume flows. IEEE Transactions on Network and Service Management, 12(1), 48–60.
31. Han, B., Hui, P., Kumar, V. A., Marathe, M. V., Pei, G., & Srinivasan, A. (2010, September). Cellular traffic offloading through opportunistic communications: A case study. In Proceedings of the 5th ACM Workshop on Challenged Networks (pp. 31–38).
32. Monserrat, J. F., Mange, G., Braun, V., Tullberg, H., Zimmermann, G., & Bulakci, Ö. (2015). METIS research advances towards the 5G mobile and wireless system definition. EURASIP Journal on Wireless Communications and Networking, 2015(1), 53.
33. Zhang, H., Li, X., Senarath, N. G., Vrzic, S., Ngoc-Dung, D. A. O., & Farmanbar, H. (2018). U.S. Patent No. 10,084,643. U.S. Patent and Trademark Office.
34. Yan, L., Fang, X., & Fang, Y. (2017). A novel network architecture for C/U-plane staggered handover in 5G decoupled heterogeneous railway wireless systems. IEEE Transactions on Intelligent Transportation Systems, 18(12), 3350–3362.
35. Duan, Q., Ansari, N., & Toy, M. (2016). Software-defined network virtualization: An architectural framework for integrating SDN and NFV for service provisioning in future networks. IEEE Network, 30(5), 10–16.
36. Taleb, T., Samdanis, K., Mada, B., Flinck, H., Dutta, S., & Sabella, D. (2017). On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration. IEEE Communications Surveys & Tutorials, 19(3), 1657–1681.
37. Dinh, H. T., Lee, C., Niyato, D., & Wang, P. (2013). A survey of mobile cloud computing: Architecture, applications, and approaches. Wireless Communications and Mobile Computing, 13(18), 1587–1611.
38. Akyildiz, I. F., Lin, S. C., & Wang, P. (2015). Wireless software-defined networks (W-SDNs) and network function virtualization (NFV) for 5G cellular systems: An overview and qualitative evaluation. Computer Networks, 93, 66–79.
39. Orange, J. S. B., Armada, A. G., Evans, B., Galis, A., & Karl, H. (2015). White paper for research beyond 5G. Accessed 23, 2016.
40. Hegmann, G., & Yarranton, G. T. (2011). Alchemy to reason: Effective use of cumulative effects assessment in resource management. Environmental Impact Assessment Review, 31(5), 484–490.
41. Schiller, E., Nikaein, N., Kalogeiton, E., Gasparyan, M., & Braun, T. (2018). CDS-MEC: NFV/SDN-based application management for MEC in 5G systems. Computer Networks, 135, 96–107.
42. Bradai, A., Rehmani, M. H., Haque, I., Nogueira, M., & Bukhari, S. H. R. (2020). Software-defined networking (SDN) and network function virtualization (NFV) for a hyperconnected world: Challenges, applications, and major advancements. Journal of Network and Systems Management, 28(3), 433–435.
43. Papavassiliou, S. (2020). Software defined networking (SDN) and network function virtualization (NFV).
44. Cao, B., Zhang, L., Li, Y., Feng, D., & Cao, W. (2019). Intelligent offloading in multi-access edge computing: A state-of-the-art review and framework. IEEE Communications Magazine, 57(3), 56–62.
45. Zhang, Y., Niyato, D., & Wang, P. (2015). Offloading in mobile cloudlet systems with intermittent connectivity. IEEE Transactions on Mobile Computing, 14(12), 2516–2529.
46. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
47. Karampatsis, D., Shaheen, K. M., & Patel, M. (2017). U.S. Patent No. 9,826,389. U.S. Patent and Trademark Office.
48. Bonfim, M. S., Dias, K. L., & Fernandes, S. F. (2019). Integrated NFV/SDN architectures: A systematic literature review. ACM Computing Surveys (CSUR), 51(6), 1–39.
49. Moradi, M., Lin, Y., Mao, Z. M., Sen, S., & Spatscheck, O. (2018). SoftBox: A customizable, low-latency, and scalable 5G core network architecture. IEEE Journal on Selected Areas in Communications, 36(3), 438–456.
50. Ford, R. D. (2017). Low latency fifth-generation cellular networks. Doctoral dissertation, Polytechnic Institute of New York University.
51. Vermesan, O., & Friess, P. (Eds.). (2014). Internet of things: From research and innovation to market deployment (Vol. 29). River Publishers.
52. Osisanwo, F. Y., Akinsola, J. E. T., Awodele, O., Hinmikaiye, J. O., Olakanmi, O., & Akinjobi, J. (2017). Supervised machine learning algorithms: Classification and comparison. International Journal of Computer Trends and Technology (IJCTT), 48(3), 128–138.
53. Moradi, M., Lin, Y., Mao, Z. M., Sen, S., & Spatscheck, O. (2018). SoftBox: A customizable, low-latency, and scalable 5G core network architecture. IEEE Journal on Selected Areas in Communications, 36(3), 438–456.
54. Ford, R. D. (2017). Low latency fifth-generation cellular networks. Doctoral dissertation, Polytechnic Institute of New York University.
55. Vermesan, O., & Friess, P. (Eds.). (2014). Internet of things: From research and innovation to market deployment (Vol. 29). Aalborg: River Publishers.
Chapter 4
Nuclear Radiation Monitoring in the Heterogeneous Internet of Things Era

Akbar Abbasi, Fatemeh Mirekhtiary, and Hesham M.H. Zakalya
4.1 Introduction

The Internet of Things (IoT) model has become one of the biggest attractions for academics and businesses amid the exponential growth of information technology, and many researchers feel it will be the next major technological breakthrough. The IoT is focused on the capacity of various objects, including sensors, RFIDs, actuators, and mobile phones, to communicate and collaborate to achieve shared aims with limited human intervention [1]. IoT plays a vital role in environmental, infrastructural, industrial, disaster, and threat monitoring applications. For example, radiological contamination in nuclear mill tailings is constantly monitored using a random deployment of several small IoT sensors in the area of interest (AoI).

The concept of "target localization" is crucial in environmental monitoring. It operates by deploying sensing nodes to locate a specific target in a particular AoI. Although a few strong, powerful sensors may be sufficient for detecting the target, this is not always the case for pinpointing its location. It has been shown in
the literature that using a large number of small sensors improves efficiency in localization tasks, demonstrating the utility of IoT nodes in such applications. However, using IoT sensors in such activities is difficult because of the design of the sensors, which have limited resources such as energy.

Our natural environment contains, in addition to air, various forms of radiation, both ionizing and non-ionizing. Radiation is a kind of nuclear exposure that has both synthetic and natural origins; some sources are dangerous and others are not, depending on the radiation dose and the length of exposure of the human body. During World War II, America dropped the world's first deployed atomic bomb over Hiroshima, Japan, destroying 90% of the city and killing an estimated 90,000–146,000 inhabitants, many through radiation poisoning. Three days later, a second bomb was dropped on Nagasaki, killing an estimated 39,000–80,000 civilians through overexposure to radiation. Ionizing radiation has always existed in the environment and can be present in various places, including water, air, soil, and man-made structures. It lies outside the visible range of the electromagnetic spectrum and has no odor; as a result, human senses are unable to detect it. When ionizing radiation exceeds its permissible limits, it poses a risk to human health. It is difficult to detect and has a high ionizing force and penetration strength. The harmful effects of ionizing radiation on human health must be closely controlled to avoid injury, overexposure, or even death. Environmental safety, radiation control, and the implementation of security protocols all require the ability to classify radiation sources, particularly radioisotopes, and to quantify radiation quantities [2].

Environmental radiation monitoring is an important area for the International Commission on Radiological Protection (ICRP) and for the radiation monitoring activities of the International Atomic Energy Agency (IAEA) [3]. It accompanies the Safety Guide on Regulatory Control of Radioactive Discharges (SGRCRD) to the environment. Many researchers have reported environmental radiation monitoring of terrestrial materials by passive methods [4, 5], and the monitoring of environmental radiation in water has been done by the same authors [6–22].

Wireless sensor networks (WSNs) serve as the foundation of Internet of Things systems and are critical to their growth. The hierarchical IoT system combines various technologies, such as the sensing layer, networking layer, and computing layer, to integrate monitoring and control applications. IoT systems can help accelerate real-time monitoring and control applications by gathering precise sensing data [7]. The abbreviations used in this chapter are listed in Table 4.1.
Table 4.1 List of abbreviations

Term | Abbreviation
Internet of Things | IoT
Early warning system | EWS
Radiofrequency identification | RFID
Area of interest | AoI
International Atomic Energy Agency | IAEA
Safety Guide on Regulatory Control of Radioactive Discharges | SGRCRD
Wireless sensor networks | WSNs
Naturally occurring radioactive materials | NORM
High-level natural radiation area | HLNRA
International Commission on Radiological Protection | ICRP
Wireless sensor and actuator network | WSAN
International Nuclear and Radiological Event Scale | INES
Radiation online monitoring | OLM
Mean time between failures | MTBF
Mean time to repair | MTTR
Decision support system | DSS
4.2 Background Radiation System

Natural radioactive materials of the 226Ra and 232Th series, along with their decay products and the individual radionuclide 40K, are heterogeneously distributed in the Earth's crust. These radionuclides are adsorbed by suspended solids and then deposited directly on surfaces as sediment or dust. Natural radiation doses can damage living cells and cause chromosome aberrations, mutation, and cell transformation due to DNA damage. The type of cell damage depends on the magnitude and duration of the exposure; for example, visible harm to organs or tissues occurs at dose levels of more than 1 mSv/year [8]. The average effective dose received by the human population is 2.8 mSv·y−1, of which over 85% is from natural radiation sources, with about half coming from radon (222Rn) decay products (UNSCEAR, 2008). An HLNRA is an area or complex of dwellings where the sum of natural radioactivity in soil, indoor and outdoor air, water, food, etc. leads to chronic exposure situations, from internal and external exposures, that result in an annual effective dose to the public above a defined level [10]. The HLNRA classification divides the annual effective dose rate into four levels (low, medium, high, and very high) [11]. This classification also considers the International Basic Safety Standards [12] and ICRP 60 [13]. The global average activity concentrations of 226Ra, 232Th, and 40K in soil are 32, 45, and 412 Bq·kg−1, respectively. The maximum activities in soil have been reported as 1000 Bq·kg−1 for 226Ra (Sweden), 360 Bq·kg−1 for 232Th (China), and 3200 Bq·kg−1 for 40K (United Kingdom) [9].
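As a worked example of how such soil activity concentrations translate into dose, the following sketch (ours, not from the chapter) applies the widely quoted UNSCEAR dose-rate conversion coefficients; the coefficients, occupancy factor, and Sv/Gy conversion factor are assumed standard values, not figures taken from this text:

```python
def absorbed_dose_rate_nGy_h(c_ra: float, c_th: float, c_k: float) -> float:
    # Outdoor absorbed dose rate in air from soil activity (Bq/kg in, nGy/h out),
    # using the widely quoted UNSCEAR conversion coefficients (assumed here).
    return 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

def annual_effective_dose_mSv(dose_rate_nGy_h: float,
                              occupancy: float = 0.2,
                              sv_per_gy: float = 0.7) -> float:
    # 8760 h/year, assumed 0.2 outdoor occupancy and 0.7 Sv/Gy conversion factor.
    return dose_rate_nGy_h * 8760 * occupancy * sv_per_gy * 1e-6

# Global average soil activities quoted above: 32, 45, and 412 Bq/kg.
d = absorbed_dose_rate_nGy_h(32, 45, 412)  # about 59 nGy/h
print(annual_effective_dose_mSv(d))        # about 0.07 mSv/year outdoors
```

Under these assumptions, the global average soil concentrations correspond to roughly 0.07 mSv/year of outdoor external exposure, well below the 1 mSv/year level mentioned above.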
Fig. 4.1 A typical WSAN with distributed sensor and actuator nodes for environmental background monitoring
Research has been conducted throughout the world in recent decades to determine the level of radioactivity in the marine environment by studying specific environmental components, such as sediments, coastal sands, and marine contamination [5, 14–20]. One of the most important criteria for improving the efficiency of IoT applications is the design of the data collection system, which aims to reduce energy consumption, extend the network lifespan, and cope with system failures. Figure 4.1 depicts a standard wireless sensor and actuator network (WSAN) architecture with sensors dispersed over the monitored region. These sensors work together to gather data so that the actuator and remote operator can make appropriate decisions [21]. Such a system can evaluate ambient gamma radiation in an area online, which makes it effective for areas that need continuous monitoring.
4.3 Emergency Radiation Situations

To properly manage measurements during a disaster, emergency monitoring procedures must be in place ahead of time. Developing a robust and practical disaster monitoring plan is no simple task. The approach must begin with the assessment and review of possible threat scenarios in advance and must meet the needs of the decision-making process. Furthermore, it is influenced by several variables whose status or contents are known or accessible before any nuclear or radiological emergency. Population distribution, geography and topography, land use, regulations and official arrangements, and the set of potential sources (nuclear plants, medical facilities
Fig. 4.2 The radial distribution of nuclear radioactivity as a function of distance from the accident core
using radioactive chemicals, and nuclear fuel transmutation [22, 23]) are only a few examples [24, 25]. The radial distribution of nuclear radioactivity as a function of distance from the accident center is shown in Fig. 4.2. As the figure shows, the distribution of radioactive pollution is radial in the absence of wind. Radioactive sources are used in various applications, which can create toxic working conditions for humans and side effects that can lead to cancer and other diseases. As a result, certain radioactive elements must be monitored by calculating the signal-to-noise ratio in distributed networks. With the advancement of wireless sensor networks, radiation monitoring is growing rapidly, including methods to trace a travelling radioactive source. Such monitoring enables a comparative study of results in the distributed network in terms of efficiency and false detection. A series of detectors is available for detecting man-made radiation sources such as Cs, Mn, Co-58, Co-60, and Ir-192. The portable ambient radiation detection device is one such detector, used to
Fig. 4.3 INES classification pyramid for nuclear accident level [27]
assess the exposure-dose concentration at environmental levels. While such a device helps measure radiation in the atmosphere, there is often no early warning or alarm system in place for radioactive leakage in a radiation therapy space, a manufacturing facility, or a nuclear power plant. Leakage radiation can also be calculated using an intelligent, optimized disaster-information tool based on the Internet of Things (IoT). Such a platform integrates wireless sensor networks, online services, and smart mobile devices for early notification [21]. Owing to industrial and economic growth, nuclear energy is accepted as one of the most significant energy sources for electricity generation in the twenty-first century. Although fission energy has many benefits, it can cause enormous and irrevocable damage to living organisms and the environment at the time of an accident [26]. The classification of severe nuclear accidents on the International Nuclear and Radiological Event Scale (INES) [27] is shown in Fig. 4.3. Figure 4.4 shows an IoT deployment of a WSN for environmental radiological risk control, focusing on a nuclear plant incident. A first hybrid prototype is presented, which is used to identify and validate nodes and any problems in the final network.
4.4 Early Warning System (EWS)

Ambient radiation online detection and alarm services are part of the early warning system. The device was created to continuously track the gamma air dose rate in the facility's atmosphere.
Fig. 4.4 The WSN configuration system for emergency alarm, with a focus on the nuclear plant incident
Sensor stability, data storage facilities, and data processing mechanisms at the server together constitute environmental radiation online monitoring (OLM). The difficulties of OLM center on the different types of radiation sensors, and real-time data processing is limited not only by power but also by connectivity constraints. Environmental radiation in nuclear facilities is detected either by continuous measurement (OLM) or by collecting samples at the monitored field (offline). Offline screening for discharge detection involves taking samples at points of significance according to a fixed sampling schedule and measuring activity concentrations in the facility [28]. An OLM system measures radiation levels around nuclear plants in real time, including discharges. OLM indicates to the operator the radioactive material concentrations at the point of release from nuclear facilities. The OLM sensors can be used as early warning devices in some nuclear plants if there is a deviation from the usual threshold. Data from real-time observations on the server can be fed into the device for fast corrective action. The automatic activation of an early warning system increases the system's efficiency in a nuclear emergency. To develop a nuclear preparedness early warning system, the system is combined with meteorological data, software in the central control system that alerts the operator in the event of an unexpected rise in the dose rate, and an SMS gateway. The early warning system, based on online ambient radioactivity tracking and meteorological data, sends out warnings of potentially dangerous conditions to the operator. The early warning system's key idea is to issue information when the radiation dose exceeds a certain reference threshold and to take prompt nuclear preparedness measures.
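To make the threshold idea concrete, here is a minimal, hypothetical sketch (ours, not from the chapter) of the alarm logic such an OLM server might run; the class name, window size, and k-sigma rule are illustrative assumptions rather than the system described above:

```python
from collections import deque
from statistics import mean, stdev

class DoseRateAlarm:
    """Toy alarm check: flag a reading that exceeds the rolling background
    mean by k standard deviations. All names and defaults are illustrative."""

    def __init__(self, window: int = 60, k: float = 3.0):
        self.history = deque(maxlen=window)  # recent background readings
        self.k = k

    def update(self, dose_rate: float) -> bool:
        alarm = False
        if len(self.history) >= 30:  # need enough samples for a stable estimate
            threshold = mean(self.history) + self.k * stdev(self.history)
            alarm = dose_rate > threshold  # notify operator / SMS gateway here
        if not alarm:
            self.history.append(dose_rate)  # only quiet readings update background
        return alarm
```

In a real deployment the threshold would come from the regulatory reference level rather than purely from background statistics, and alarms would be routed to the operator and the SMS gateway described above.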
The following are some of the most important characteristics of early warning systems [29]:
• Sensors and other components of the system are installed at various positions in the monitored environment. The criteria for selecting measurement locations are based on the representative information that the system requires.
• Data must be provided by the system in real time so that it can activate environmental warnings (e.g., a radiological early warning system). The system is primarily intended to provide historical data from sensors at predetermined intervals, such as every minute, half hour, or hour [29]. Furthermore, any lack of data transfer from a node to the server impacts system output.
• Environmental surveillance systems are designed to provide long-term environmental data (years or decades). As a result, the MTBF (mean time between failures) of repairable equipment should be as long as possible, and the MTTR (mean time to repair) as short as possible; a numerical availability example is sketched after this list. Proper equipment selection and maintenance by trained technicians are needed in the system design to achieve this aim.
• Early alert devices are, in general, very sophisticated measuring devices. Regular servicing and calibration of the measurement instruments is the best way to ensure that all requirements on measuring precision and repeatability are met.
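The MTBF and MTTR figures above determine the station's steady-state availability, A = MTBF / (MTBF + MTTR). A small illustrative calculation (the figures are assumed, not from the chapter):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    # Steady-state availability of a repairable monitoring station.
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assumed example: one failure per year (MTBF = 8760 h), one-day repair.
print(availability(8760.0, 24.0))  # ~0.9973, i.e., about 24 h downtime/year
```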
Fig. 4.5 The meteorological and environmental radiation monitoring system in detail
Figure 4.5 depicts a conventional ambient radiation and meteorological monitoring scheme. The infrastructure for remote meteorological and radiation monitoring consists of data producers, power supplies, data communication systems, data storage, early warning system (EWS) components, and data presentation devices. Modelling or simulation tools and a decision support system (DSS) can be used as part of the system for more advanced usage scenarios. The air dose from gamma rays, expressed in sieverts (Sv), is normally measured by a radiation detection unit. This category of radiation detection instrument is divided into two types: (1) a device that merely measures gross gamma radiation without identifying the radionuclide and (2) a gamma-ray spectrometer that can determine the spectrum (distribution) of gamma radiation intensity versus energy. Since each radionuclide's photon energy is distinct, a spectrometer can be used to identify both the radiation source and its intensity. Wind speed and direction, air temperature, solar radiation, relative humidity, and air pressure are all measured by a meteorological monitoring instrument. These parameters are critical when evaluating the distribution of radioactive particulates in the air, assessing the route with the lowest radiation exposure risk, and preparing an evacuation procedure in an emergency. The communication module, which links the remote monitoring station and the centralized monitoring unit, is the next component. A monitoring station that takes remote radiation and meteorological measurements must send the data to a storage system for visualization, study, and interpretation. The mode of communication chosen is determined by the situation on the ground as well as infrastructure availability. The measurement data are stored in data storage after transmission via a communication channel. A database is typically required to store the large amounts of measurement data obtained continuously from multiple instruments over a long time. The visualization module interprets and displays the stored measurement data to the user. If a measured value is outside the standard range, the early warning module triggers an alarm and sends it to the operator and other relevant parties [30].
4.5 Conclusion and Future Directions

People are constantly exposed to both natural and man-made radiation sources. Natural radiation comes from sources including radioactive materials present in the soil, water, and atmosphere. Excessive radiation is dangerous to the human body, and the symptoms vary depending on the situation. Once implemented, IoT technology can detect ionizing radiation. In this chapter, we reviewed the application of IoT technology for monitoring natural background radiation and nuclear emergency radiation, and we emphasized the use of the EWS in nuclear accident areas. The EWS is a suitable technique for alerting populated areas. We predict that, with advances in IoT technology, radiation monitoring and early warning systems will become even more effective for monitoring nuclear facility and urban areas.
References

1. Alagha, A., Singh, S., Mizouni, R., Ouali, A., & Otrok, H. (2019). Data-driven dynamic active node selection for event localization in IoT applications: A case study of radiation localization. IEEE Access, 7, 16168–16183.
2. Mahatab, T. A., Muradi, M. H., Ahmed, S., & Kafi, A. (2018). Design and analysis of IoT based ionizing radiation monitoring system. In 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 432–436.
3. International Atomic Energy Agency. (2005). Environmental and source monitoring for purposes of radiation protection. Safety Guide RS-G-1.8. IAEA.
4. Abbasi, A., & Mirekhtiary, F. (2011). Survey gamma radiation measurements in commercially-used natural tiling rocks in Iran. World Academy of Science, Engineering and Technology, 5, 561–567.
5. Abbasi, A., Kurnaz, A., Turhan, Ş., & Mirekhtiary, F. (2020). Radiation hazards and natural radioactivity levels in surface soil samples from dwelling areas of North Cyprus. Journal of Radioanalytical and Nuclear Chemistry, 1–8.
6. Abbasi, A., & Bashiry, V. (2016). Measurement of radium-226 concentration and dose calculation of drinking water samples in Guilan province of Iran. International Journal of Radiation Research, 14(4). https://doi.org/10.18869/acadpub.ijrr.14.4.361
7. Gaber, M. I., Khalaf, A. A. M., Mahmoud, I. I., & El-Tokhy, M. S. (2021). Development of information collection scheme in Internet of Things environment for intelligent radiation monitoring systems. Telecommunication Systems, 76(1), 33–48.
8. Ziajahromi, S., Khanizadeh, M., & Nejadkoorki, F. (2015). Using the RESRAD code to assess human exposure risk to 226Ra, 232Th, and 40K in soil. Human and Ecological Risk Assessment: An International Journal, 21, 250–264.
9. UNSCEAR. (2008). Report of the United Nations Scientific Committee on the Effects of Atomic Radiation: Fifty-sixth session (10–18 July 2008). United Nations Publications.
10. Hendry, J. H., et al. (2009). Human exposure to high natural background radiation: What can it teach us about radiation risks? Journal of Radiological Protection, 29, A29.
11. Abbasi, A., & Mirekhtiary, S. F. (2020). Radiological impacts in the high-level natural radiation exposure area residents in Ramsar, Iran. European Physical Journal Plus, 135(3). https://doi.org/10.1140/epjp/s13360-020-00306-x
12. World Health Organization. (1996). International basic safety standards for protecting against ionizing radiation and for the safety of radiation sources.
13. ICRP. (1991). 1990 Recommendations of the International Commission on Radiological Protection. ICRP Publication 60. Annals of the ICRP, 21.
14. Abbasi, A., Mirekhtiary, F., & Mirekhtiary, S. F. (2018). Risk assessment due to various terrestrial radionuclides concentrations scenarios. International Journal of Radiation Biology, 1–22. https://doi.org/10.1080/09553002.2019.1539881
15. Birami, F. A., Moore, F., Faghihi, R., & Keshavarzi, B. (2020). Assessment of spring water quality and associated health risks in a high-level natural radiation area, North Iran. Environmental Science and Pollution Research, 27(6), 6589–6602.
16. Birami, F. A., Moore, F., Faghihi, R., & Keshavarzi, B. (2019). Distribution of natural radionuclides and assessment of the associated radiological hazards in the rock and soil samples from a high-level natural radiation area, Northern Iran. Journal of Radioanalytical and Nuclear Chemistry, 322(3), 2091–2103.
17. Mubarak, F., Fayez-Hassan, M., Mansour, N. A., Ahmed, T. S., & Ali, A. (2017). Radiological investigation of high background radiation areas. Scientific Reports, 7, 15223.
18. Abbasi, A., Zakaly, H. M. H., & Badawi, A. (2021). The anthropogenic radiotoxic element of 137Cs accumulate to biota in the Mediterranean Sea. Marine Pollution Bulletin, 164, 112043.
19. Tawfic, A. F., et al. (2021). Natural radioactivity levels and radiological implications in the high natural radiation area of Wadi El Reddah, Egypt. Journal of Radioanalytical and Nuclear Chemistry, 1–10.
20. Abbasi, A., Zakaly, H. M. H., & Mirekhtiary, F. (2020). Baseline levels of natural radionuclides concentration in sediments East coastline of North Cyprus. Marine Pollution Bulletin, 161, 111793.
21. Gomaa, R., Adly, I., Sharshar, K., Safwat, A., & Ragai, H. (2013). ZigBee wireless sensor network for radiation monitoring at nuclear facilities. In 6th Joint IFIP Wireless and Mobile Networking Conference (WMNC), pp. 1–4.
22. Abbasi, A. (2021). Nuclear fuel transmutation. In Nuclear power plants: The processes from the cradle to the grave. IntechOpen.
23. Mirekhtiary, S. F., & Abbasi, A. (2018). Uranium oxide fuel cycle analysis in VVER-1000 with VISTA simulation code. AIP Conference Proceedings, 1935. https://doi.org/10.1063/1.5025993
24. Lahtinen, J. (2006). Emergency monitoring strategy and radiation measurements. Working document of the NKS project Emergency Management and Radiation Monitoring in Nuclear and Radiological Accidents (EMARAD). Radiation and Nuclear Safety Authority (STUK), Finland.
25. Dragusin, M., Stanga, D., Gurau, D., & Ionescu, E. (2014). Radiation monitoring under emergency conditions. Romanian Journal of Physics, 59(9–10), 891–903.
26. Aghaie, M., Minuchehr, A., & Alahyarizadeh, G. (2019). Evaluation of atmospheric dispersion of radioactive materials in a severe accident of the BNPP based on Gaussian model. Progress in Nuclear Energy, 113, 114–127.
27. IAEA. (2008). INES: The International Nuclear and Radiological Event Scale user's manual, 2008 edition. IAEA/OECD-NEA.
28. Gomaa, R. I., Shohdy, I. A., Sharshar, K. A., Al-Kabbani, A. S., & Ragai, H. F. (2014). Real-time radiological monitoring of nuclear facilities using ZigBee technology. IEEE Sensors Journal, 14(11), 4007–4013.
29. Farid, M. M., Prawito, Susila, I. P., & Yuniarto, A. (2017). Design of early warning system for nuclear preparedness: Case study at Serpong. AIP Conference Proceedings, 1862(1), 30067.
30. Susila, I. P., Istofa, Kusuma, G., Sukandar, & Isnaini, I. (2018). Development of IoT based meteorological and environmental gamma radiation monitoring system. AIP Conference Proceedings, 60004.
Chapter 5
Convergence of Blockchain in IoT Applications for Heterogeneous Networks Firdous Kausar, Mohamed Abdul Karim Sadiq, and Hafiz Muhammad Asif
5.1 Introduction

The Internet of Things (IoT) is a network of devices, involving sensors and actuators, that is extensively used in many applications. IoT devices are interconnected and communicate through the Internet [1]. The interacting devices produce many types of data, such as text and images. The resulting personal information is vulnerable to unauthorized access and has to be protected. Once the security issues are addressed appropriately, widespread IoT applications will evolve into ubiquitous and secure systems. With recent advancements in 5G technology, IoT finds applications in heterogeneous networks, owing to the fact that the IoT architecture is itself heterogeneous in nature [2]. Therefore, better connectivity can be implemented in the different cell structures (pico, femto, micro) of 5G systems, with IoT concepts ranging from structure all the way to network security. The network architectures of IoT are intrinsically heterogeneous, including wireless sensor networks (WSN), wireless fidelity networks (Wi-Fi), wireless mesh networks (WMN), mobile communication networks (MCN), and vehicular networks. The primary functioning of IoT devices is driven by inputs from their sensors. This has resulted in the automation of continuous data collection. The enormous amounts of generated data are considered a boon for data analytics, statistics, and
decision support systems in diverse applications. However, the availability of decentralized information poses security issues. In recent years, as IoT has gained popularity, safeguarding the associated data has become a major concern. The data is gathered on the pretext of providing personalization to consumers. In pursuit of enhancing the accuracy of the services offered, the collected data may involve sensitive content such as biometrics, location, and textual and visual traits, which could lead to social profiling and targeted surveillance by unauthorized miscreants. Such profiling or surveillance is undesirable even if done on a large scale by a government. In many regions of the world, consent from users and authoritative regulatory bodies is mandatory for the utilization of sensitive data. IoT devices play a vital role in gathering reliable data from the surrounding environment for the efficacious operation of different IoT applications. This requires implementing mechanisms that ensure the reliability, immutability, and transparency of the data. Blockchain technology is one such potential mechanism: it has intrinsic characteristics of security and irreversibility and can offer a solution to the data reliability challenge in IoT. Apart from blockchain, other techniques based on machine learning and big data analytics are considered for ensuring the security of data assimilated with IoT. Machine learning is being applied to the analysis of voluminous data from IoT devices and also shows promise for fraud detection [3]. In this chapter, we first introduce blockchain technology, its architecture, and different consensus algorithms. Next, the chapter elaborates on some use cases for the application of blockchain in IoT. It further explores the consensus mechanisms and the data stored in the blockchain of different IoT applications, and how these differ from the traditional Bitcoin blockchain. In the end, it lists a few recommendations and open research problems in this area.
5.2 Blockchain Technology

Blockchain technology has created quite a buzz over the past few years, along with distributed ledger technology (DLT). It is believed that it can promote an exceptional degree of digital innovation that has not been seen since the advent of the Internet. Blockchain technology facilitates a profound shift from the traditional centralized transaction model to a decentralized one. Blockchain is a decentralized and public distributed ledger technology that provides enhanced security, easier traceability, and greater transparency [5]. Blockchain is a trustless system with pseudonymous participants. It is a peer-to-peer (P2P) network of nodes in which multiple copies of the blockchain database exist. The computer systems that participate in the blockchain are called nodes, and they form the P2P network. Each node in the network has access to a copy of the blockchain database. These nodes are capable of initiating, receiving, and verifying transactions and of generating new blocks to be added to the blockchain.
There are primarily two types of blockchain: (1) public blockchains and (2) permissioned/private blockchains [6]. Some other variants are also available, e.g., hybrid and consortium blockchains. A public blockchain is permissionless and non-restrictive, which means that anyone with a computer system connected to the Internet can access the public blockchain and work as a node in the blockchain network. These nodes can verify transactions and add new blocks to the blockchain. Examples of public blockchains are Bitcoin and Ethereum. A private blockchain is permissioned and restrictive: nodes need permission to participate in the blockchain network. These blockchains are governed by a single entity or organization that wants to keep its data private within the organization and does not want to disclose it to everyone. Examples of private blockchains are Hyperledger Fabric and Ripple. In both public and private blockchains, the participating nodes have a copy of the ledger, can verify transactions, and can add blocks to the blockchain.
5.2.1 Blockchain Architecture

Blockchain stores information in specialized blocks. These blocks are chained together in chronological order with the help of a cryptographic hash algorithm. Each block consists of a set of transactions and a block header that stores the hash of the previous block. When new information is generated, or an event occurs, it creates a transaction that is added to the memory pool of transactions. A new block is created by selecting transactions from the memory pool, and a block header is added to it. The hash of the previous block is also added to the block header of the current block. A cryptographic hash is calculated over the block header of each block in the blockchain and acts as the block's identifier. All blocks in the blockchain are linked together in chronological order because each block header also contains the hash of the previous block, which in turn is used during the hash calculation of the current block. These blocks are stacked on top of each other in order, creating a chain-like structure in which each block contains the hash of its parent (previous) block, going back to the first block ever created, known as the genesis block. Changes in the blockchain are visible to all nodes in the network, and transactions cannot be altered or deleted from blocks once they have been added to the blockchain. Suppose a blockchain consists of m blocks (B1, B2, …, Bn−1, Bn, Bn+1, …, Bm), as shown in Fig. 5.1. The "previous block hash" field within the block header affects the hash of the current block Bn. If the previous block, Bn−1, is altered in some way, its hash value changes. This modification of the hash of the previous block, Bn−1, requires changing the "previous block hash" field in the current block header, which changes the hash of the current block, Bn. This in turn requires changing the hashes of all subsequent blocks Bn+1, Bn+2, …, Bm. Thus, in a lengthy blockchain, recalculating valid block hashes requires massive computation, which makes the chain immutable and more secure.
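As a concrete illustration of this chaining property, the following minimal sketch (ours, not from the chapter; the header fields are simplified and the Merkle root is a stand-in hash) builds a small chain and shows that tampering with an early block invalidates every later link:

```python
import hashlib
import json
import time

def block_hash(header: dict) -> str:
    # The block identifier is the hash of its serialized header.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash: str, transactions: list) -> dict:
    header = {
        "version": 1,
        "previous_block_hash": prev_hash,
        # Simplified stand-in for the Merkle root of the transactions:
        "merkle_root": hashlib.sha256(json.dumps(transactions).encode()).hexdigest(),
        "time": int(time.time()),
    }
    return {"header": header, "transactions": transactions}

def verify_chain(chain: list) -> bool:
    # Each header must reference the hash of its predecessor's header.
    return all(cur["header"]["previous_block_hash"] == block_hash(prev["header"])
               for prev, cur in zip(chain, chain[1:]))

genesis = new_block("0" * 64, ["coinbase"])
b1 = new_block(block_hash(genesis["header"]), ["tx1", "tx2"])
b2 = new_block(block_hash(b1["header"]), ["tx3"])
chain = [genesis, b1, b2]
assert verify_chain(chain)

# Tampering with any earlier header breaks every later link:
chain[0]["header"]["version"] = 2
assert not verify_chain(chain)
```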
Fig. 5.1 Blockchain architecture (blocks from the genesis block B1 through Bm; each block header carries a version, the previous block hash, the Merkle root, a timestamp, bits, and a nonce; the block header hash identifies each block of transactions Tx1, Tx2, …, Txn)
Blockchain guarantees the data integrity of each block by leveraging two cryptographic mechanisms. The first is the chain-like structure of blocks, where each newly added block carries the hash of the previous block; any fabrication in a previous block invalidates all subsequent blocks. The second is the inclusion of the Merkle root in the block header of each block. This Merkle root is calculated by applying the Merkle tree structure to all the transactions in the block. A Merkle tree is a binary tree-like structure in which the transactions act as the leaf nodes, as shown in Fig. 5.2. First, the hash is calculated for each transaction; then the hash is calculated iteratively over pairs of non-leaf nodes until the root of the Merkle tree is obtained, which summarizes all the transactions in the block. If any transaction is falsified, this can easily be detected by recalculating the Merkle root and comparing it with the Merkle root value stored in the block header.
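A minimal sketch of the Merkle root computation described above (our illustration; it uses a single SHA-256 per step, whereas production systems such as Bitcoin use double SHA-256 and other conventions):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    # Leaf nodes are the hashes of the individual transactions.
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash if the count is odd
        # Hash each adjacent pair to form the next level of the tree.
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)

# Any falsified transaction changes the recomputed root:
assert merkle_root([b"tx1", b"txX", b"tx3", b"tx4"]) != root
```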
5.2.2 Consensus Algorithms

All nodes in the distributed peer-to-peer network agree on the same copy of the blockchain through a mechanism called a consensus algorithm, which provides fault tolerance even if some nodes in the network fail. When a new block is created, it needs to be validated by the nodes in the network before it is added to the existing blockchain. This validation is performed by applying the consensus algorithm. Different types of consensus algorithms are used, e.g., proof of work (PoW), proof of stake (PoS), proof of authority (PoA), proof of elapsed time (PoET), proof of burn (PoB), and practical Byzantine fault tolerance (PBFT) [7]. The selection of the most suitable consensus algorithm depends on the nature of the IoT application and the type of blockchain technology selected for implementation. In the PoW algorithm, all the nodes in the network compete with each other to solve a complex and computationally difficult mathematical problem [8]. The first node that solves this problem adds its candidate block to the existing blockchain as
Fig. 5.2 Merkle tree structure (transaction hashes H(Tx1), H(Tx2), …, H(Txn) are paired and hashed repeatedly, e.g., H(Tx1, Tx2), up to the Merkle root H(Tx1, Tx2, …, Txn−1, Txn))
a valid block. All other nodes in the network can verify the validity of this newly added block without difficulty [9]. Two nodes in the distributed system may solve the puzzle at the same time and broadcast their candidate blocks as valid blocks for verification by the other nodes in the network. In that case, not all nodes will add the same new block to the blockchain, because of latency in the peer-to-peer network, which results in a discrepancy in the blockchain. This is resolved by following the longest-chain rule, which accepts the longest blockchain as the valid one. The shorter chains become deserted automatically, and future blocks are added on top of the longest chain [5]. The PoW consensus algorithm is susceptible to a 51% attack: if an adversary can get hold of 51% of the network's computing power or hash rate, it can manipulate the blockchain by blocking the confirmation of new transactions or reversing past transactions, making double-spending of coins possible. Therefore, the trustworthiness of PoW rests on the assumption that the majority of nodes are honest and will not pool their computation power to mount a 51% attack [8]. The limitations of PoW are addressed by PoS, which was proposed as a new consensus algorithm in 2011. The PoS algorithm is more energy-efficient and validates transactions in much less time than the PoW algorithm. The validation of the next block is performed by a node selected through a random process based on different parameters, including a sufficient stake in the system, the node's wealth, and the node's age. The process of block validation is called forging, and the nodes participating in this process are called forgers. The forger nodes need to lock
a certain amount of coins as their stake in the network. Before adding a new block to the blockchain, the elected forger node validates all the transactions included in that block and then signs the block with its private key. The transaction fee associated with the newly added block is paid as a reward to the designated forger node. The security of the PoS algorithm depends on the stake of the forger nodes. If a fraudulent transaction is detected in any block, the forger loses coins from its stake and is barred from participating in the forging process in the future. A 51% attack on the PoS algorithm is impractical because it is not feasible to acquire 51% of the total stake in the network. The disadvantage of the PoS algorithm is that wealthier nodes may come to dominate the network by getting more chances to validate, which results in a biased distribution. This is addressed by an alternative random selection method that depends on the age of the forger node in the network instead of its stake [10].

The PoA algorithm is based on the PoS algorithm and is used especially in private blockchains, where the reputation of nodes is at stake instead of coins as in the PoS algorithm [11]. Preselected validator nodes participate in the consensus process and can generate new blocks. Nodes that want to become validators need to reveal their real identities and pass preliminary authentication. PoA does not require high computational power, and transactions can be validated quickly, but it relinquishes the decentralization concept of blockchain. An attacker can compromise the system by influencing the publicly known validator nodes to act corruptly.

The PoB algorithm has been proposed to overcome the high energy consumption of the PoW algorithm and to avoid the problem of double-spending in cryptocurrencies. It is based on the concept of "burning" coins by miners to participate in the mining process [12]. To burn coins, a node sends them to a verifiable, unspendable, public address. The node that burns the highest amount of coins gets the right to mine the next block in the blockchain. The advantage of the PoB algorithm is that it reduces the circulating supply of coins by burning them, resulting in a gradual increase in coin value. It is not considered eco-friendly, as it destroys coins generated with high energy consumption by the PoW algorithm. It is not scalable, and the verification of blocks takes more time than in the PoW algorithm.

The PoET algorithm was developed by Intel [13] to allow the construction of permissioned blockchain networks based on a rational draw system in which the winning mining node is elected based on a randomly generated elapsed time. The use of a trusted computing platform makes the draw results verifiable by the other participants in the network.

The PBFT algorithm presented in [14] was intended to perform resourcefully in low-scale IoT applications. Its main goal is to reach consensus in the network even if a few nodes malfunction or provide erroneous information. All nodes participate in the consensus process and acquire rewards, resulting in low reward variance. It has low computational overhead, high throughput, and low latency, making it suitable for many IoT applications.
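To illustrate the PoW puzzle discussed above, the following toy sketch (ours; the header string and difficulty are arbitrary) searches for a nonce whose hash falls below a target. Finding the nonce is expensive, while verification is a single hash, which is why other nodes can check a candidate block cheaply:

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    # Search for a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # puzzle solved; the candidate block can be broadcast
        nonce += 1

nonce = mine("toy-block-header", difficulty_bits=20)  # ~10^6 hashes on average
# Verification by other nodes repeats just one hash with the found nonce.
```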
5.3 Use Cases for the Application of Blockchain in IoT

In this section, use cases are presented to elaborate on the various scenarios in which blockchain technology has been applied in IoT. The application areas covered here are broadly relevant to the smart grid, healthcare, supply chain management, and vehicular networks.
5.3.1 Industrial IoT

The industrial Internet of Things (IIoT) aims to utilize the Internet to connect things in industry and thereby realize the Fourth Industrial Revolution (4IR). Some mature examples of IIoT platforms serving smart factories are GE Predix, Schneider EcoStruxure, and Siemens MindSphere. The shortcomings of the IIoT architecture are predominantly centered on the security of private data. If these issues are not addressed, such vulnerabilities could lead to large-scale economic losses and have a negative impact on society. Blockchain technology has been applied to IIoT to improve its architecture and solve security issues [15]. The authors of [15] introduce a Bitcoin-inspired design for building a privatized, expandable, lightweight, and partially decentralized IIoT architecture in a smart factory. The key aspects of this architecture are analyzed by introducing a security and privacy model. Whitelists, asymmetric encryption, and other mechanisms were introduced to improve the security and privacy of the IIoT architecture. The authors also transformed an automated production platform based on this architecture and discussed the relevant defense measures specific to that platform. Before applying blockchain, many smart factories were based on a cloud manufacturing architecture that allowed the sharing of resources. Though the shared pool of manufacturing resources was accessible anywhere and anytime on demand, the centralized system was fragile. The blockchain-based IIoT system decentralizes the architecture, enabling mutual supervision among nodes.

Blockchain guarantees the reliability of transactions in the food supply chain by providing verification of authenticity without a centralized authority [16]. It also ensures traceability, as each transaction needs verification of its previous transaction. The application of blockchain satisfies the demands of governments, enterprises, and consumers: governments can establish regulatory frameworks, enterprises can ensure quality, and consumers' rights can be protected. Tse et al. employ PEST analysis to examine blockchain application in terms of political, economic, social, and technical factors. All these are external factors affecting the industry and contribute to its macro-environment position. The political factors increasingly deal with the requirements of government regulatory authorities, statistical analysis, and information release. The economic factors deal with changes due to market demand and climate change. The social factors analyze purchase behavior and trust in government. The technology factor deals with
supporting research on blockchain technology and its suitability for the food supply chain. The analysis discusses the traditional food management model in China, the theory of blockchain application, and the decentralized food supply chain authentication model.
5.3.2 Healthcare

Preserving privacy while sharing electronic health records (EHRs) is essential, especially across organizations. The legitimacy of EHRs must be verifiable, and the identity of signers has to be protected. Both of these objectives are achieved by a healthcare blockchain that employs a decentralized attribute-based signature (DABS) scheme [17]. The system facilitates verification of the signer's attributes without disclosing the user's identity. The blockchain ensures secure sharing of EHR data through a combination of on-chain and off-chain storage, as shown in Fig. 5.3. Once an EHR is created and signed with private keys, it is stored in a private database. The address of this stored data is signed with DABS in ProposalRecord (a smart contract) and shared by publishing it on the blockchain.

Patients can be monitored remotely using IoT sensors that collect data such as blood pressure and insulin level. The raw data is aggregated and formatted by a mobile application that sends it to a relevant smart contract in a blockchain [18]. A complete analysis is performed, and the smart contracts trigger alerts to healthcare providers at threshold values. The actuator nodes in IoT could be activated with instructions for the patient's treatment. Confidential information is not stored on the blockchain, though it is private and consortium-led. Despite the use of IoT and blockchain, certain limitations exist in data acquisition among healthcare providers. A patient-centric system architecture has been proposed for a cryptography-based access control policy in [19]. This approach uses blockchain to implement a permission-based EHR sharing system. The
Fig. 5.3 On-chain and off-chain storage model (the healthcare worker creates and signs EHR data, stores it in an off-chain database, and publishes the signed address of the data via a ProposalRecord in an on-chain block)
stakeholders register their request with the certificate authority, which provides the certificate and private key for enrolling the participant. Access is granted exclusively to the requested records. New records can be added by committing a transaction to the blockchain and distributing it over the network. Hence, unauthorized deletion or modification is prevented. Apart from the above examples, blockchain technology has been applied to many other healthcare systems. Detection of overdose by tracking prescriptions, managing patient data, sharing critical records, and adjudication of health insurance claims are some use cases described in [20].
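The on-chain/off-chain pattern of Fig. 5.3 can be sketched in a few lines; this is an illustrative toy of ours (encryption and DABS signatures are omitted, and all names are hypothetical), not the scheme of [17] or [19]:

```python
import hashlib
import json

off_chain_db = {}   # stand-in for the private (off-chain) EHR database
chain = []          # stand-in for the on-chain records (pointers only)

def publish_ehr(ehr: dict) -> str:
    """Store the (ideally already encrypted) record off-chain and publish
    only its content-derived address on the chain, as in the Fig. 5.3 model."""
    blob = json.dumps(ehr, sort_keys=True).encode()
    address = hashlib.sha256(blob).hexdigest()  # content-addressed pointer
    off_chain_db[address] = blob                # sensitive data stays off-chain
    chain.append({"address": address})          # only the pointer goes on-chain
    return address

def verify_ehr(address: str) -> bool:
    # Integrity check: recomputing the address must reproduce the stored one.
    return hashlib.sha256(off_chain_db[address]).hexdigest() == address

addr = publish_ehr({"patient": "anonymized-id", "reading": "120/80"})
assert verify_ehr(addr)
```

Keeping only the content-derived address on-chain gives tamper evidence without exposing the record itself, which is the essence of the storage model described above.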
5.3.3 Vehicular Networks

A vehicular blockchain is a secure peer-to-peer data sharing system based on consortium blockchain technology [21]. The blockchain is established on roadside units (RSUs), which serve as preselected nodes for publicly auditing and storing shared data. The consortium blockchain is utilized as a secure and distributed vehicular network for data management in vehicular edge computing and networks (VECON). Smart contracts are deployed on the vehicular blockchain to achieve secure and efficient data storage on RSUs and data sharing among vehicles. The authors of [21] developed a reputation-based data sharing scheme with a three-weight subjective logic (TWSL) model for choosing reliable data sources to improve data credibility.

A vehicular blockchain-based framework has been proposed for correcting vehicle positions by error sharing, ensuring the security and trustworthiness of the data [22]. The scenario involves two types of vehicles: sensor-rich vehicles, which have a camera, Lidar, etc., and common vehicles, which rely merely on GPS for their position. As shown in Fig. 5.4, both types of vehicles access the mobile edge computing nodes (MECNs) to request service within the communication range. The locations of landmarks in the area are stored in MECNs, which have sufficient space and computing power. The GPS measurements in standard vehicles are prone to a variety of errors in determining their positions. The accuracy is improved using a deep neural network (DNN)-based error correction algorithm running on an edge server. The collective learning strategy is adaptable to changing environments and has been validated with many simulations. The positioning framework aids in error prediction, correction, and sharing of the data.

A variety of sensors on vehicles contribute to collecting data in vehicular ad hoc networks (VANETs). The on-board sensors, such as accelerometers or vibration sensors, send data to roadside units (RSUs) deployed for the purpose. The two main challenges are preserving privacy and reliability [23]. As the sensory data is location-dependent, its disclosure entails a breach of privacy. Furthermore, the data captured from multiple vehicles must be efficiently aggregated. While sharing the data, reliability must be ensured, especially since RSUs are deployed in a pervasive manner. Blockchain technology has been proposed for reliable data sharing in decentralized vehicular sensor networks.
Fig. 5.4 Architecture of the vehicular blockchain for GPS error correction (MECNs hold the blocks and data store; a sensor-rich vehicle acting as data provider measures accurate distances to landmarks, while a common vehicle acting as data requester has only approximate GPS distances)
The system model in [24] constitutes a vehicular air pollution monitoring system with a server, RSUs, and sensors on vehicles. The communication between vehicles and RSUs is through wireless access, while the RSUs are connected to the server, preferably using a wired link with high bandwidth. The security requirements are location privacy, verifiability, and immutability. A permissioned blockchain framework is realized with restricted access rights to read, write, and verify. The design details are described in terms of transactions, the ledger, and consensus.
5.3.4 Smart Cities

With the advent of digitalization, cutting-edge technologies have started playing a major role in development. As the digital economy booms, a large number of megacities are being transformed into smart cities [4]. These metropolitan areas comprise increasingly dense populations with a predominant use of smart cars, smartphones, and wearable devices such as smartwatches and other tracking equipment. An ensemble of complementary smart gadgets coalesces into an intelligent network. Apart from the individuals generating enormous amounts of data, many private and government entities deploy surveillance cameras and sensors to monitor people, movement, and the environment. Consequently, useful information is extracted on a large scale in a continuous manner. Integrity-based attacks may be carried out not only by intruders but also by malicious insiders.
In this regard, certain relevant issues that need to be overcome are data transparency, real-time data exchange, IoT data analytics, and network issues. The following design principles [4] have been introduced to develop sustainable smart cities:
– Resilient environment
– Interoperable and flexible infrastructure
– Decision support system
– Behavior monitoring
– Energy sources and distribution
– Intelligent infrastructure
– Scalability
– Smart healthcare
– Secure infrastructure
Case studies of three real-world applications of blockchain technology in smart cities are presented in [25]. These are being implemented in Dubai, France, and Cambodia. Smart Dubai is a current research project to transform Dubai into a smart city. The aim of the project is to secure transactions related to real estate, banks, utilities, passports, and visas. Blockchain-based solutions are used to accomplish traceability, trust, and transferability. The vision is to make Dubai the happiest city on earth by leveraging blockchain, AI, and data science [26]. French City Brain is a project based on IoT and AI offering management services for storing energy and monitoring air quality and safety in the city. Blockchain technology deals with the security issues emanating from the interoperability of data from high-end sensors. It includes the development of a green blockchain for smart energy contracts. The project combines applications that minimize the carbon footprint and apply machine learning for real-time decisions regarding energy consumption constraints [27]. Limestone Network is a Singaporean startup building a smart city in Phnom Penh, Cambodia. A hybrid blockchain infrastructure is proposed to assimilate residents' information using many touchpoints. The Limestone mobile app serves as an interface to the services offered with blockchain-powered digital passports. The users could be residents, workers, or commuters. They can digitally pay for travel or purchases using their digital persona stored securely on the blockchain [28].
5.4 Mining Algorithms and Data Storage in IoT Applications

Table 5.1 provides details of the mining algorithms used in different IoT applications and the type of data stored in each application's blockchain. The proliferation of renewable and distributed energy has made conventional power systems, based on centralized grid management, more complex. In addition, customers have become more empowered by installing their own distributed energy sources (e.g., solar panels) and are capable of managing their electricity
Table 5.1 Mining algorithms and data stored in the blockchain in different IoT applications

Paper | Type of blockchain | Data in the blockchain | Consensus algorithm
[15] | Private | Node address, access request (store, read, control), and corresponding response by the management hub | Statistical process control (SPC)
[17] | Private | Addresses of electronic health record data | Practical Byzantine fault tolerance (PBFT)
[18] | Private and consortium | Medical record generation event information | Practical Byzantine fault tolerance (PBFT)
[30] | Public (not mentioned clearly) | Encrypted healthcare data | Not mentioned
[19] | Consortium | Medical records of patients | Crash fault tolerance (CFT) and Byzantine fault tolerance (BFT)
[21] | Consortium | Encrypted and digitally signed traffic-related information | Proof of storage
[22] | Consortium | GPS positioning error prediction model and the resulting error evolution | Practical Byzantine fault tolerance (PBFT)
[23] | Consortium/private | Encrypted and digitally signed vehicular sensory data | Practical Byzantine fault tolerance (PBFT)
consumption without the help of a centralized grid management authority. Blockchain technology has changed the traditional way of conducting business transactions. One possible blockchain application in the electric power sector is trading electricity between consumers and prosumers by changing the centralized electricity grid into a distributed peer-to-peer trading network. The authors of [29] developed a blockchain-based peer-to-peer transaction network for trading renewable energy without the need for a central intermediary. They also proposed the design of a credit-based payment scheme to counter the problem of transaction confirmation delay. It uses an energy coin as the cryptocurrency for trading energy between buyer and seller nodes. The local transaction records of trading are collected from the memory pool to create a block, which is then verified by selected nodes. The verification is done using a proof-of-flow consensus algorithm based on the overall quantity of traded energy.

J. Wan et al. discuss a blockchain-based security architecture for the industrial IoT (IIoT) that provides security and privacy services in the smart factory [15]. It consists of five layers: the sensing layer, the management hub layer, the storage layer, the firmware layer, and the application layer. The sensing layer consists of sensing devices that produce data, which is sent to the management hub layer for storage in the blockchain and database. The authors consider a private blockchain with statistical process control (SPC) as the PoW for the consensus process in the blockchain network. Each block's transaction data consists of the sensing node address, the access request (store, read, control), and the corresponding response by the management hub.

Y. Sun et al. proposed a secure and authenticated private blockchain-based electronic health record storage system in which the addresses of electronic health records
are stored in the blocks of the blockchain, while the actual data is encrypted and stored in a database rather than in the blockchain [17]. This system helps in securely sharing authenticated healthcare data of patients among different healthcare-providing organizations. PBFT [14] is used as the consensus algorithm. C. Esposito et al. proposed that when new patient healthcare data is generated, it creates a new block in the blockchain that is disseminated in the network for validation through a consensus mechanism [30]. If this newly generated block is validated, it can be added to the blockchain. K. N. Griggs et al. provide a generic architecture for implementing a private blockchain to authenticate patients' health records among many healthcare provider organizations [18]. S. Tanwar et al. investigated the implementation of blockchain technology in healthcare systems for access control in the sharing of patients' health records between different healthcare providers after patient authorization [19]. A. Ekblaw et al. presented a case study using blockchain technology to share electronic health record data [31]. The participating nodes in the blockchain network are patients and healthcare providers, where patients have full control over their medical records and can share them with healthcare providers after authorization.

J. Kang et al. employed a consortium blockchain to create a vehicular blockchain for secure data storage and sharing in vehicular edge computing and networks [21]. The vehicular network consists of vehicles and roadside units that collect data from vehicles and send it to the cloud for permanent storage. A vehicle sends encrypted and digitally signed traffic-related data to the roadside units, which then create data blocks from it. Preselected roadside units participate in the consensus process to validate each new block of data before inserting it into the blockchain through the proof-of-storage algorithm. C. Li et al. proposed the use of consortium blockchain technology to enhance the positioning accuracy of vehicles [22]. They make use of standard vehicles and sensor-rich vehicles to find the GPS positioning error and develop a deep neural network-based error prediction model. The blocks in the blockchain contain information about the positioning error prediction model and the resulting error evolution. This information can be used by other vehicles to correct their positioning errors. Designated mobile edge computing nodes participate in the PBFT-based [14] consensus algorithm for block corroboration. Q. Kong et al. proposed an anonymous and provable data sharing model for vehicular networks based on blockchain [23]. The vehicles send sensory information to the roadside units, which create transactions from it and broadcast them in the network. Roadside units are distributed to form subgroups, and each subgroup generates its own block consisting of the transactions broadcast in that subgroup. At a given time t, one roadside unit is chosen randomly from each subgroup to participate in the PBFT-based consensus process for the verification of the block.

Almost all IoT applications use a private or consortium blockchain, in contrast to the public blockchain used in Bitcoin and other cryptocurrency networks. These are permissioned blockchains, which control which nodes can join the network, thereby limiting access to users' sensitive information stored in the
84
F. Kausar et al.
blockchain. As for the data stored in the blockchain, Table 5.1 shows that users' private data is generally kept off the blockchain to provide privacy and anonymity to users. The traditional Bitcoin network uses PoW as the consensus algorithm, while many IoT applications of blockchain use PBFT, because it is more energy-efficient and takes far less time than PoW to validate transactions.
5.5 Challenges and Open Issues

There are many technical and implementation challenges associated with blockchain applications in IoT. One of the main issues is the privacy of blockchain users. The public access feature of blockchain authorizes anyone to view the data stored in the blockchain, which may lead to personal and legal issues and is unacceptable to users. This problem can be mitigated by storing the minimum of personal data on the blockchain and ensuring the anonymity of users.

Lack of regulation is another critical issue in the effective deployment and adoption of blockchain technology. Distinct regulations are required for different application areas of blockchain technology, and all stakeholders need to agree on international regulations for the different applications of blockchain technology. The public blockchain is especially susceptible to scalability problems, which is not the case for a private blockchain.

Each blockchain network leverages its own type of blockchain, consensus algorithm, implementation platform, coding language, and privacy measures, resulting in a lack of interoperability between different blockchain networks. There is a need to define application-specific standard protocols and procedures to make communication possible between different types of blockchain networks.

The implementation of blockchain in any IoT application requires restructuring of the existing systems, so the integration of blockchain into legacy systems is a challenging task. The potential risk of data loss during the integration process makes companies reluctant to transition to blockchain. It is necessary to develop integration mechanisms and procedures for legacy systems.

The lack of awareness and understanding of how blockchain technology works is also a hindrance to the widespread implementation and public acceptance of blockchain in different IoT applications. It is essential to educate users at all levels to attain their trust in blockchain technology.

The well-known blockchain consensus algorithms, e.g., PoW, PoS, and PoB, were developed especially for financial applications and are not well suited to IoT applications of blockchain. PBFT is one of the most commonly used consensus algorithms for IoT applications, but it is computationally intensive, complicated, and not scalable [31]. Further, it is also susceptible to denial-of-service attacks and is not fault-tolerant [32]. It is, therefore, necessary to design a new consensus algorithm that
satisfies the requirements of IoT applications, including scalability, fault tolerance, and near-real-time transaction confirmation. Blockchain-based IoT applications have to manage complex operations such as an efficient consensus process, execution and management of a large number of transactions, smart contract development, and economical processing of a variety of data. There is a need to propose secure, efficient, privacy-preserving, and robust blockchain architectures for different IoT applications, tailored to their specific requirements.
5.5.1 Conclusions

IoT is susceptible to security threats, and sometimes personal data may be compromised. Blockchain is appropriate for protecting this data, as it is decentralized and employs a public distributed ledger, which ensures transparency and traceability. The different types of blockchain have been described along with the general architecture. Various consensus algorithms such as PoW, PoS, PoB, PoET, and PBFT have been compared. Use cases for the application of blockchain in IoT have been covered in industrial IoT, healthcare, vehicular networks, and smart cities. The mining algorithms and the data stored in the blockchain of eight different IoT applications are summarized in a systematic manner. The research challenges and open issues related to privacy, regulatory authority, scalability, interoperability, integrability, and acceptability are discussed as future work in this promising field.
References

1. Conoscenti, M., Vetrò, A., & De Martin, J. C. (2016). Blockchain for the Internet of Things: A systematic literature review. In 13th Int. Conf. Comput. Syst. Appl. (AICCSA), Agadir, Morocco.
2. Qiu, T., Chen, N., Li, K., Atiquzzaman, M., & Zhao, W. (2018). How can heterogeneous Internet of Things build our future: A survey. IEEE Communications Surveys & Tutorials, 20(3), 2011–2027.
3. Rabah, K. (2018). Convergence of AI, IoT, big data and blockchain: A review. The Lake Institute Journal, 1(1), 1–18.
4. Singh, S., Sharma, P. K., Yoon, B., Shojafar, M., Cho, G. H., & Ra, I.-H. (2020). Convergence of blockchain and artificial intelligence in IoT network for the sustainable smart city. Sustainable Cities and Society, 63.
5. Yaga, D., Mell, P., Roby, N., & Scarfone, K. (2018). Blockchain technology overview. National Institute of Standards and Technology.
6. Laurence, T. (2019). Introduction to blockchain technology: The many faces of blockchain technology in the 21st century. Van Haren Publishing.
7. Du, M., Ma, X., Zhang, Z., Wang, X., & Chen, Q. (2017). A review on consensus algorithm of blockchain. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB.
8. Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system.
9. Li, X., Jiang, P., Chen, T., Luo, X., & Wen, Q. (2020). A survey on the security of blockchain systems. Future Generation Computer Systems, 107, 841–853.
10. Li, W., Andreina, S., Bohli, J.-M., & Karame, G. (2017). Securing proof-of-stake blockchain protocols. In International Workshop on Data Privacy Management, Cryptocurrencies and Blockchain Technology.
11. "Proof of Authority – PoA," August 2020. [Online]. Available: https://www.poa.network/for-users/whitepaper/poadao-v1/proof-of-authority. Accessed 28 Jan 2021.
12. Karantias, K., Kiayias, A., & Zindros, D. (2020). Proof-of-burn. In International Conference on Financial Cryptography and Data Security.
13. "Intel: Sawtooth Lake (2017)," [Online].
14. Castro, M., & Liskov, B. (1999). Practical Byzantine fault tolerance. In Third Symposium on Operating Systems Design and Implementation (OSDI '99), USENIX Association.
15. Wan, J., Li, J., Imran, M., Li, D., & Fazal-e-Amin. (2019). A blockchain-based solution for enhancing security and privacy in smart factory. IEEE Transactions on Industrial Informatics, 15(6), 3652–3660.
16. Tse, D., Zhang, B., Yang, Y., Cheng, C., & Mu, H. (2017). Blockchain application in food supply information security. In IEEE International Conference on Industrial Engineering and Engineering Management, Singapore.
17. Sun, Y., Zhang, R., Wang, X., Gao, K., & Liu, L. (2018). A decentralizing attribute-based signature for healthcare blockchain. In 27th International Conference on Computer Communication and Networks (ICCCN), Hangzhou.
18. Griggs, K. N., Ossipova, O., Kohlios, C. P., Baccarini, A. N., Howson, E. A., & Hayajneh, T. (2018). Healthcare blockchain system using smart contracts for secure automated remote patient monitoring. Journal of Medical Systems, 42(7).
19. Tanwar, S., Parekh, K., & Evans, R. (2020). Blockchain-based electronic healthcare record system for healthcare 4.0 applications. Journal of Information Security and Applications, 50.
20. Zhang, P., Schmidt, D. C., & White, J. (2018). Blockchain technology use cases in healthcare. Advances in Computers, 1–41.
21. Kang, J., Yu, R., Huang, X., Wu, M., Maharjan, S., Xie, S., & Zhang, Y. (2019). Blockchain for secure and efficient data sharing in vehicular edge computing and networks. IEEE Internet of Things Journal, 6(3), 4660–4670.
22. Li, C., Fu, Y., Yu, F. R., Luan, T. H., & Zhang, Y. (2020). Vehicle position correction: A vehicular blockchain networks-based GPS error sharing framework. IEEE Transactions on Intelligent Transportation Systems, 1–15.
23. Kong, Q., Su, L., & Ma, M. (2020). Achieving privacy-preserving and verifiable data sharing in vehicular fog with blockchain. IEEE Transactions on Intelligent Transportation Systems, 1–10.
24. Hakak, S., Khan, W. Z., Gilkar, G. A., Imran, M., & Guizani, N. (2020). Securing smart cities through blockchain technology: Architecture, requirements and challenges. IEEE Network, 34(1), 8–14.
25. "Smart Dubai," [Online]. Available: https://www.smartdubai.ae/.
26. "French City Brain," [Online]. Available: https://frenchcitybrain.com/en/.
27. "Blockchain Land," [Online]. Available: https://theblockchainland.com/2019/08/16/building-smart-city-heart-cambodia-capital-overview/.
28. Li, Z., Kang, J., Yu, R., Ye, D., Deng, Q., & Zhang, Y. (2018). Consortium blockchain for secure energy trading in industrial Internet of Things. IEEE Transactions on Industrial Informatics, 14(8), 3690–3700.
29. Esposito, C., Santis, A. D., Tortora, G., Chang, H., & Choo, K.-K. R. (2018). Blockchain: A panacea for healthcare cloud-based data security and privacy? IEEE Cloud Computing, 5(1), 31–37.
30. Ekblaw, A., Azaria, A., Halamka, J. D., & Lippman, A. (2016). A case study for blockchain in healthcare: "MedRec" prototype for electronic health records and medical research data. In IEEE Open & Big Data Conference, Vienna, Austria.
31. Vukolić, M. (2015). The quest for scalable blockchain fabric: Proof-of-work vs. BFT replication. In Proceedings of the International Workshop on Open Problems in Network Security. Springer.
32. Miller, A., Xia, Y., Croman, K., Shi, E., & Song, D. (2016). The honey badger of BFT protocols. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.
Chapter 6
Measuring Clock Reliability in Cloud Virtual Machines

Aarush Ahuja, Vanita Jain, and Dharmender Saini
6.1 Introduction

Synchronized clocks in a distributed system can increase its efficiency and help ensure correctness by decreasing the coordination overhead among the nodes of the cluster. Liskov [1] lists various techniques made possible by the assumption of synchronized clocks, such as at-most-once messages, reliable authentication schemes, distributed cache consistency, etc. External consistency is one such application made possible by accurately synchronized clocks and used in practical systems of considerable value; Google's Spanner [2] and CockroachDB [3] are practical databases that depend on the assumption of synchronized clocks in their operational models. Advancements and investments in virtualization, especially x86 hardware virtualization extensions [5], have enabled cloud service providers to provide compute resources at an affordable cost. Cloud providers have built hypervisor technology upon the x86 hardware virtualization extensions; these hypervisors provide the same high-level value of enabling virtual machines to be executed over a single physical CPU but have subtle differences in their implementations. This paper aims to study the implementation differences in virtualized clocks across three major cloud providers using three different hypervisor implementations for x86 CPUs. We chose Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure as the target providers for our study; while being the major competitors in the public cloud market, they use three different hypervisors, Xen for AWS EC2 [7], KVM for Google Compute Engine [6], and Hyper-V for Azure virtual machines [8], respectively, as the foundation of their cloud compute platforms. AWS also offers instances running on their Nitro hypervisor [7] for certain machine types built on top of KVM, which is out
of the scope of this research. Our study conducted experiments on a cluster of three nodes on each of the three providers, measuring network latency and clock skew in the nodes of the cluster through the lifetime of the experiment, and provides a comparison of the three providers' clock reliability. Section 6.2 reviews the literature on clock synchronization and applications of systems with synchronized clocks, virtualized clocks, and hypervisors. Section 6.3 discusses how time is maintained and synchronized on a physical machine. Section 6.4 discusses how clocks are virtualized and synchronized in the three chosen cloud providers. Section 6.5 delves into the possible sources of clock drift in physical and virtual machines. Section 6.6 discusses possible solutions for mitigating clock drift in cloud virtual machines. Section 6.7 presents the results of our experiments conducted on AWS, GCP, and Azure. Section 6.8 discusses future work that could be pursued in the context of monitoring clock skew. Section 6.9 summarizes the paper's contribution.
6.2 Literature Review

Clock synchronization in systems has become ubiquitous with the adoption of the Network Time Protocol (NTP) [9], a set of algorithms allowing clocks to be synchronized by distributing time references over packet-based networks. Network clock synchronization algorithms transmit time references over the network and accommodate the unpredictability of networks and latency. IEEE Standard 1588 [10] establishes the Precision Time Protocol (PTP) for clock synchronization in a local network of nodes. Liskov [1] and Adya et al. [11, 12] highlight the applications of network-synchronized clocks in distributed systems such as at-most-once messages and atomicity in distributed transactions. Liskov [1] suggests that many applications only require synchronization of clock rates rather than synchronization of the clocks themselves. Some listed applications, like the commit window, have powered practical systems such as Google's Spanner [2]. Spanner [2] is a database developed and used at Google which presented a novel approach to synchronization and transactions using the TrueTime API, implementing the commit window using atomic clocks and GPS receivers as clocksources deployed in data centers with a synchronization accuracy under 10 ms. Following in the footsteps of Spanner, CockroachDB [3] is a database designed to be deployed at geographical scale in data centers under the assumption of synchronized clocks; however, instead of using specialized hardware, CockroachDB assumes the presence of only software-level clock synchronization (i.e., NTP and other protocols), implementing a hybrid logical clock model comprising logical clocks and wall-clock references distributed across the cluster.
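The hybrid logical clock idea can be made concrete with a short sketch. The update rules below follow the published hybrid logical clock (HLC) algorithm; the Java class, its method names, and the millisecond granularity are our own illustrative choices, not CockroachDB's actual implementation.

import java.time.Clock;

// Minimal sketch of a hybrid logical clock: a wall-clock reading paired
// with a logical counter that breaks ties when physical clocks stall.
public class HybridLogicalClock {
    private long l = 0; // largest physical time observed so far (ms)
    private long c = 0; // logical counter within one physical tick
    private final Clock wall = Clock.systemUTC();

    // Called for a local event or before sending a message.
    public synchronized long[] now() {
        long pt = wall.millis();
        if (pt > l) { l = pt; c = 0; } else { c++; }
        return new long[] { l, c };
    }

    // Called on receiving a message carrying the sender's (lm, cm).
    public synchronized long[] update(long lm, long cm) {
        long pt = wall.millis();
        long lOld = l;
        l = Math.max(Math.max(lOld, lm), pt);
        if (l == lOld && l == lm)  c = Math.max(c, cm) + 1;
        else if (l == lOld)        c = c + 1;
        else if (l == lm)          c = cm + 1;
        else                       c = 0; // the wall clock dominated
        return new long[] { l, c };
    }
}

Comparing two HLC timestamps lexicographically (first l, then c) yields an ordering consistent with causality while staying close to wall-clock time, which is exactly what lets CockroachDB avoid specialized timing hardware.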
6.3 Timekeeping

A system implements timekeeping using the physical clocks and timers available to it. A system might have two kinds of clocks or timers available, a wall-clock timer and a monotonic clock [13]. A wall-clock timer represents the time as it would appear on a wall clock. Usually, it is counted as the number of time units since January 1, 1970, 00:00:00 UTC. This clock can go forward or backward in time as adjusted by a synchronization algorithm such as NTP or by the user to represent the current time; for this reason, wall clocks are not recommended for measuring the duration of an event. A monotonic clock is a constantly increasing timer with the guarantee that it will never go backward in time. The frequency of the timer might not be constant and may be adjusted by synchronization algorithms. This implies that one measured second might actually be a different amount of time from another measured second. The absolute value of the timer is not useful, as it counts time from an arbitrary point in the past, but it is useful for measuring durations with high certainty and often offers a higher resolution than wall-clock timers.

The underlying hardware layer used to implement the different timers and clocks uses counters. There can be two types of counters: tick counters and tickless counters. Tick counters involve setting up a hardware device to send "ticks" at a fixed interval known to the OS, and the OS keeps track of the time based on the received ticks. Tickless counters involve a hardware counter keeping the count of time units passed since system boot. The operating system only reads the counter when required; the CPU does not spend any time maintaining this timer, and it provides higher resolution. The x86 time stamp counter (TSC) is a tickless counter present on x86 CPUs which can be read using the RDTSC or RDTSCP x86 instructions. In an x86 system, there might be various physical devices [15] available to provide timekeeping functionality:

• Programmable Interval Timer (PIT): A hardware device which sends a signal when a certain programmed count is reached [14]. Depending on one-shot or periodic mode, it might send the signal only once or keep sending it periodically.
• CMOS Real-Time Clock (RTC) [15]: A battery-backed clock which stores the wall-clock time to the nearest second and maintains it while the system is off.
• Per-Processor x86 Advanced Programmable Interrupt Controller (APIC) Timer: A per-processor timer integrated onto the processor itself (whereas the PIT is an external device). It oscillates at the same frequency as the CPU, which can vary from machine to machine.
• Advanced Configuration and Power Interface (ACPI) Timer [15]: Required by the ACPI specification, it is a 24-bit timer oscillating at three times the PIT frequency, used for power management in the system.
• High Precision Event Timer (HPET) [15]: An external hardware timer available on some newer systems. An HPET could be 32 or 64 bits and can provide higher resolution than the PIT or CMOS RTC. Due to lax specifications, HPET
implementations are not guaranteed to have a high resolution or low drift. It is designed to replace the PIT and CMOS RTC in newer systems.
• Time Stamp Counter (TSC): A tickless counter available on modern x86 processors which provides the apparent time in the system. It is the highest-resolution clock available on the system. It is a 64-bit timer oscillating at the CPU frequency, counting CPU cycles. Unlike the PIT or APIC, it does not generate any interrupts; instead, it is read when required. A model-specific register exposes the TSC, which is accessible in both user mode and kernel mode [14]. The TSC is known to be unreliable [14] due to varying tick rates, multicore issues, variable CPU frequency, etc. In newer CPUs, a constant-rate TSC is available which oscillates independently of the current CPU frequency, counting the passage of time rather than CPU clock cycles.

The devices listed above are responsible for maintaining a constant rate of movement of time and thus operate at fixed frequencies; however, due to degradation over time, the fixed frequencies can change, causing the abstracted clock to skew from its real value. In the Linux kernel, the different hardware timers and clocks are implemented via the lower-level clocksource framework. It abstracts the physical hardware's differences into a generic API for accessing the clocks through the user-mode functions gettimeofday and clock_gettime, exposing different clocks with different features, such as CLOCK_REALTIME, a wall clock, and CLOCK_MONOTONIC, a monotonic clock, both dependent on the underlying physical timekeeping devices. In the Linux kernel, time is usually maintained by combining one timer like the PIT counter with another timer like the x86 TSC to interpolate between ticks of the first counter [15, 26, 27] (Figs. 6.1 and 6.2).
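The wall-clock versus monotonic distinction maps directly onto standard library APIs. In Java, for instance, System.currentTimeMillis() reads the wall clock (and can jump when NTP steps the time), while System.nanoTime() is the monotonic timer intended for measuring durations. A minimal sketch:

public class ClockDemo {
    public static void main(String[] args) throws InterruptedException {
        // Wall clock: milliseconds since the Unix epoch; may jump forward
        // or backward if NTP or the user adjusts the system time.
        long wallStart = System.currentTimeMillis();

        // Monotonic clock: arbitrary origin, never goes backward;
        // the right tool for measuring how long something took.
        long monoStart = System.nanoTime();

        Thread.sleep(1000); // the event being timed

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;

        // The two measurements normally agree, but only the monotonic one
        // is trustworthy if the wall clock was stepped in the meantime.
        System.out.println("wall: " + wallElapsedMs + " ms, monotonic: " + monoElapsedMs + " ms");
    }
}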
Fig. 6.1 Clock skew in AWS t2.micro nodes
Fig. 6.2 RPC latency in AWS t2.micro nodes
6.4 Virtualized Timekeeping

For full virtualization, hypervisors need to emulate the physical devices powering the OS clocks, i.e., the different devices listed in Section 6.3. Implementing clocks in hypervisors presents the same kind of design choices as network cards: hypervisor designers can choose to emulate the interface of physical timekeeping devices, such as emulating an APIC timer using a memory-mapped input/output interface, or use paravirtualization to integrate with the OS kernel, using hypercalls to implement the clocksource interface as discussed before. VMware [15] uses the former approach, implementing the x86 TSC in the hypervisor, whereas Hyper-V uses the paravirtualization approach, implementing hypercalls for exposing the physical x86 machine's time stamp counter to the virtual machine. There are several challenges in accurate timekeeping on guest virtual machines. Guests do not receive true interrupts like a physical system; rather, interrupts are first received by the hypervisor and then injected into a guest. Thus, the precise timing typically required by interrupt-based timekeeping is not possible in virtual machines (Table 6.1).

A. AWS

The AWS security white paper [7] highlights that AWS uses the Xen hypervisor [4] to provision EC2 virtual machines. Xen was one of the first hypervisors to popularize paravirtualization on x86 CPUs to improve the performance of virtualization. In the paravirtualized approach to virtualization, introduced by Denali [17] in 2002, sacrifices are made in terms of the equivalence property of the Popek-Goldberg [18] theorem for higher efficiency or performance. Paravirtualization modifies the running operating system to use interfaces provided by the hypervisor to improve virtualization efficiency. In this case, guests are exposed to a notion of "virtual" time, which only proceeds during the machine's execution, and "real" time, maintained via the processor's cycle counter and wall-clock time [4]. Linux guests can make use of these timers via the paravirtualized interface used to implement a Linux clocksource. The Xen clocksource works by combining the readings of the HPET and the
Table 6.1 Cloud providers and underlying hypervisors

Cloud   Hypervisor   Clocksource
AWS     Xen/Nitro    xen_tsc
GCP     KVM          kvm-clock
Azure   Hyper-V      hyperv_clocksource_tsc_page
TSC, depending on the reliability of the HPET and the low access latency of the TSC. The TSC is used to interpolate readings between ticks of the HPET [20]. Without a constant-rate TSC, the Xen clocksource is susceptible to changes in clock rate due to frequency changes or lost ticks. AWS provides the Time Sync Service delivered over NTP. Time Sync is backed by satellite-connected and atomic clocks inside data centers to provide clock synchronization.

B. GCP

GCP uses KVM to power its Compute Engine [6]. KVM virtualizes the time stamp counter on x86 CPUs, which can then be used by the guest to implement clock functionality as it would in a physical system. KVM offers the ability to both emulate the TSC and allow passthrough of the physical TSC to the guest [14]. GCP utilizes the paravirtualized clock kvm-clock [19], which is a Linux clocksource available in Compute Engine instances. Guests access kvm-clock by registering a memory page in their address space to contain the kvm-clock data. The hypervisor provides guests access to the most recent host time together with the TSC timestamp taken at the moment that host time was recorded. The TSC is visible to the guests; thus they can read the current TSC value and compute the current time using the hypervisor-provided host timestamp and TSC timestamp.

C. Azure

Azure uses Microsoft's Hyper-V hypervisor [8] for provisioning virtual machines. Hyper-V also follows the paravirtualization approach used by Xen and KVM. Similar to KVM, it exposes the per-processor TSC to the guest [16]. The timing services provided by the hypervisor depend on a constant-rate reference time source; this can be the constant-rate TSC in newer x86 processors. This single time source is used to virtualize other timer services for the guest, such as the virtual APIC, per-processor one-shot and periodic timing, the virtual ACPI timer, etc. Hyper-V implements the paravirtualized Linux clocksource hyperv_clocksource_tsc_page as a variant of kvm-clock, providing a TSC timestamp, scale, and offset to the guest for maintaining time. The TSC page in a Hyper-V virtual machine is a single structure shared by all processors in the VM.
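Conceptually, a kvm-clock-style paravirtualized clocksource lets the guest compute the current time as the host time plus a scaled TSC delta, using the (system time, TSC timestamp, multiplier, shift) tuple the hypervisor writes into the shared memory page. The sketch below shows only the arithmetic; the field names loosely follow the Linux pvclock structure, but reading the real shared page is not possible from Java, so the values are assumed to be given.

// Sketch of the kvm-clock/pvclock computation a guest kernel performs:
// current time = host system_time + (tsc_now - tsc_timestamp) scaled to ns.
public final class PvClockSketch {
    long systemTimeNs;   // host time written by the hypervisor (ns)
    long tscTimestamp;   // TSC reading taken when systemTimeNs was recorded
    long tscToSystemMul; // 32-bit fixed-point multiplier (scaled by 2^32)
    int  tscShift;       // pre-scaling shift; may be negative

    long guestTimeNs(long tscNow) {
        long delta = tscNow - tscTimestamp;
        if (tscShift >= 0) delta <<= tscShift; else delta >>= -tscShift;
        // Fixed-point multiply: ns = (delta * mul) >> 32, split into high
        // and low 32-bit halves of delta to avoid 64-bit overflow.
        long hi = (delta >>> 32) * tscToSystemMul;
        long lo = ((delta & 0xffffffffL) * tscToSystemMul) >>> 32;
        return systemTimeNs + hi + lo;
    }
}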
6.5 Sources of Clock Drift

Clock drift can happen in both physical and virtual machines for various reasons concerning the underlying hardware, time sharing among multiple virtual machines, and different hypervisor implementations [15]. Physical x86 machines use a variety of timing devices, as discussed in Section 6.3, and these tend to drift over the lifetime of the hardware. Thus, the virtualized timers drift with their underlying physical timers. Other factors include the network synchronization servers drifting themselves, broken networks preventing clock synchronization, etc. Unreliable networks and high latency to time servers can also cause clocks to drift.

• Hardware Degradation: Over time, the frequency of the underlying oscillator on the system will decrease [15].
• VM Suspend: Suspending a VM causes the hypervisor to save the state of the clock and restore it from that exact point on resumption, possibly hours later. This can cause clock drift during suspend-resume cycles.
• VM Time Sharing: Because it runs as a virtual machine, the guest does not have full control of the CPU, despite making that assumption. It might disable interrupt sources in the OS; however, the physical machine might still be pre-empted to handle interrupts. Such a virtual machine will eventually drift, as the host is not able to keep up with injecting interrupts and synchronizing the clocksources [14].
• VM Migration: The migration process might take a long time, and during the process, interrupts cannot be delivered to the guest. The guest has to be caught up with real time post-migration. Depending on how the hypervisor virtualizes the physical timers, kernel algorithms using devices like the TSC or HPET may end up running at different frequencies due to migration to a different physical system [14] (Figs. 6.3 and 6.4).
Fig. 6.3 Clock skew in GCP e2-micro nodes
Fig. 6.4 RPC latency in GCP e2-micro nodes
6.6 Mitigating Clock Drift

HUYGENS [21] presents a novel software-level clock synchronization solution for network interface cards, using network effects and support vector machines to improve the accuracy of clock synchronization to within hundreds of nanoseconds. Clock drift during migration can be mitigated by storing multipliers and offsets against the TSC for the guest [14] instead of just the TSC timestamp. kvm-clock [19] also offers the KVM_GET_CLOCK and KVM_SET_CLOCK ioctls to aid the host during migration, allowing the source host to get the last valid timestamp and letting the destination set it during the migration. CockroachDB, as discussed in Section 6.2, depends on cluster node clocks being synchronized and recommends [22] using Google's NTP services when possible and using the chrony daemon instead of ntpd on Linux guests where Google's NTP services cannot be used. It implements forward time jump checking to handle cases where the hypervisor might migrate a VM to a different machine with a different time. Broomhead et al. [20] evaluate the Xen clocksource and present an improved virtualized timekeeping architecture for Xen based on the RADclock [23] algorithm.
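Forward time jump checking of the kind CockroachDB performs can be approximated by comparing elapsed wall-clock time against elapsed monotonic time: if the wall clock advanced much more than the monotonic clock between two samples, the OS clock was likely stepped (e.g., after a VM migration). The sketch below illustrates this idea only; the threshold is our own illustrative value, not CockroachDB's.

// Detects forward wall-clock jumps by comparing wall-clock progress with
// monotonic-clock progress between successive samples.
public class ForwardJumpDetector {
    private static final long MAX_JUMP_MS = 250; // illustrative threshold
    private long lastWallMs = System.currentTimeMillis();
    private long lastMonoNs = System.nanoTime();

    // Call periodically; returns true if the wall clock jumped forward.
    public synchronized boolean check() {
        long wallMs = System.currentTimeMillis();
        long monoNs = System.nanoTime();
        long wallDeltaMs = wallMs - lastWallMs;
        long monoDeltaMs = (monoNs - lastMonoNs) / 1_000_000;
        lastWallMs = wallMs;
        lastMonoNs = monoNs;
        // A wall-clock advance far exceeding the monotonic advance means
        // the OS clock was stepped forward while we were not looking.
        return wallDeltaMs - monoDeltaMs > MAX_JUMP_MS;
    }
}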
6.7 Measurements

A. Experimental Methodology

We developed a server and client in Golang to be deployed on cloud virtual machines. The clients communicated with the server via a gRPC channel. We selected three cloud providers for the measurements: Amazon Web Services, Microsoft Azure, and Google Cloud Platform. For AWS, we used three t2.micro instances running Ubuntu 20.04 in the ap-south-1 region. For Azure, we used B1s instances running Ubuntu 18.04 in the us-east region.
For GCP, we used e2-micro instances running Ubuntu 20.04 in the ap-south region. The instances downloaded precompiled x86 binaries for the coordinator and followers, with the deployment orchestrated via Terraform. For each experiment, two follower nodes are configured to connect to a single coordinator node, where all nodes are of the same instance type inside a virtual private network. The followers transmit the current timestamp, the latency of the last sent message, and a unique message ID to the coordinator over a gRPC channel. The coordinator logs the follower's address, the current timestamp on the host, and the currently known timestamps and latencies of the two followers. The followers execute a blocking RPC call every 50 ms, transmitting the data and measuring its latency. The coordinator receives the RPC, atomically sets the follower's timestamp and latency, and finally logs them to stdout. Each experiment is run for 3 h (Figs. 6.5 and 6.6). In this setup, the clock of the coordinator is assumed to be the source of truth and is used to measure clock skew in the nodes of the cluster. In each cloud provider, the physical source of the Linux clocksource varies depending on the underlying hypervisor.
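The authors' coordinator and followers are written in Go with gRPC; purely to make the measurement logic concrete, the sketch below restates the follower's 50 ms reporting loop and a simple coordinator-side skew estimate in Java. The half-RTT correction is our own assumption about how skew could be derived from the logged timestamps, and the Reporter interface stands in for the real gRPC stub.

// Illustrative restatement of the experiment's measurement logic (the real
// implementation is a Go gRPC client/server). A follower reports its local
// timestamp every 50 ms; the coordinator estimates skew by comparing that
// timestamp, shifted by half the measured round-trip time, with its own clock.
public class SkewEstimator {

    // Coordinator side: estimate follower skew from one report.
    static long estimateSkewMs(long followerTsMs, long rpcLatencyMs, long coordinatorNowMs) {
        // Assume the report spent ~latency/2 in flight (symmetric-network assumption).
        return coordinatorNowMs - (followerTsMs + rpcLatencyMs / 2);
    }

    // Follower side: blocking report loop, one RPC every 50 ms.
    static void followerLoop(Reporter rpc) throws InterruptedException {
        long lastLatencyMs = 0;
        long messageId = 0;
        while (true) {
            long start = System.nanoTime();
            rpc.report(System.currentTimeMillis(), lastLatencyMs, messageId++); // blocking RPC
            lastLatencyMs = (System.nanoTime() - start) / 1_000_000;
            Thread.sleep(50);
        }
    }

    interface Reporter { // stand-in for the gRPC stub
        void report(long timestampMs, long latencyMs, long messageId);
    }
}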
Fig. 6.5 Clock skew in Azure B1s nodes
Fig. 6.6 RPC latency in Azure B1s nodes
In an experiment like this, the unpredictable nature of network latency is clearly visible: an anomalous RPC call with high latency could heavily affect the overall results by making the coordinator's worldview stale by a considerable amount, and could also cause false alerts of clock skew in a real system. For all three providers, we plot the clock skew from the host clock for both followers and the latency experienced by the followers during the experiment.

B. Clock Skew and Latency Measurements

Comparing clock skew across the three providers based on Figs. 6.1, 6.3, and 6.5, AWS nodes displayed a 99th percentile clock skew up to 60 ms, GCP nodes up to 52 ms, and Azure nodes up to 53 ms. Comparing 50th percentile clock skew across the providers, AWS nodes skewed 9 ms, GCP nodes skewed a minimal 1 ms, and Azure nodes skewed up to 2 ms. The skew values are taken as the average of the two followers. Latency plays an important role in maintaining an updated global view of the cluster. AWS displayed a 99th percentile tail latency of 1.7 ms, GCP up to 2 ms, and Azure up to 10 ms. Comparing 50th percentile latency across the providers, AWS nodes experienced latencies up to 1 ms, GCP up to 1 ms, and Azure up to 1.5 ms, as can be seen in the plots in Figs. 6.2, 6.4, and 6.6. Based on the results in Tables 6.2 and 6.3, AWS displayed a reliable network with more predictable latency but a large clock skew for most requests, while Azure, despite more anomalous and unreliable network latency, synchronized clocks more accurately than AWS. GCP nodes displayed reliable latency and clock skew on par with Azure nodes. GCP was the most reliable of the three providers, showing stable latency as well as the smallest clock skew spread, with synchronization under 1 ms.

Table 6.2 Average clock skew percentiles in AWS, GCP, and Azure

Cloud   50th percentile   90th percentile   99th percentile
AWS     8.47 ms           47.45 ms          58.35 ms
GCP     0.80 ms           43.65 ms          52.00 ms
Azure   1.90 ms           49.84 ms          55.53 ms
Table 6.3 Average RPC latency percentiles in AWS, GCP, and Azure

Cloud   50th percentile   90th percentile   99th percentile
AWS     0.89 ms           1.08 ms           1.72 ms
GCP     0.98 ms           1.26 ms           1.83 ms
Azure   1.47 ms           2.38 ms           9.92 ms
6.8 Future Work

We suggest pursuing future work in the area of measuring and monitoring clock skew in cloud compute clusters:

• Other hypervisors: measuring the clock reliability of platforms like VMware vSphere, comparing different hardware deployments, as well as other implementations of KVM used by cloud providers like DigitalOcean.
• Development of a system to continuously monitor clock skew in a cluster and alert when a node's clock skews too far or behaves anomalously. This can provide value to users of orchestrators like Kubernetes [24] or Apache Mesos [25] running time-critical applications such as CockroachDB [3].
6.9 Conclusion

We covered the background of how physical and virtual machines implement clocks and timers. We evaluated the different clock virtualization implementations in the three major cloud providers, AWS, GCP, and Azure, which use Xen, KVM, and Hyper-V as their underlying hypervisors, respectively. We discussed various sources of clock drift, especially in virtual machines, and various techniques that have been used to mitigate clock drift. We evaluated the three cloud providers AWS, GCP, and Azure with an experiment measuring clock skew across a three-node cluster. Our evaluation shows that GCP's KVM implementation running kvm-clock, synchronized via Google's internal NTP services, displays the most reliable clock among the three cloud providers, with the lowest tail clock skew and network latencies.
References

1. Liskov, B. (1991). Practical uses of synchronized clocks in distributed systems. In Proceedings of the Tenth Annual ACM Symposium on Principles of Distributed Computing – PODC '91, pp. 1–9. https://doi.org/10.1145/112600.112601
2. Corbett, J. C., Dean, J., Epstein, M., Fikes, A., Frost, C., Furman, J. J., Ghemawat, S., et al. (2013). Spanner: Google's globally distributed database. ACM Transactions on Computer Systems (TOCS), 31(3), 1–22.
3. Taft, R., Sharif, I., Matei, A., Vanbenschoten, N., Lewis, J., Grieger, T., Niemi, K., Woods, A., Birzin, A., Poss, R., Bardea, P., Ranade, A., Darnell, B., Gruneir, B., Jaffray, J., Zhang, L., & Mattis, P. (2020). CockroachDB: The resilient geo-distributed SQL database. https://doi.org/10.1145/3318464.3386134
4. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., & Warfield, A. (2003). Xen and the art of virtualization. ACM SIGOPS Operating Systems Review, 37(5), 164–177.
5. Adams, K., & Agesen, O. (2006). A comparison of software and hardware techniques for x86 virtualization. ACM SIGPLAN Notices, 41(11), 2–13.
6. 7 ways we harden our KVM hypervisor at Google Cloud: security in plaintext. (n.d.). https://cloud.google.com/blog/products/gcp/7-ways-we-harden-our-kvm-hypervisor-at-google-cloud-security-in-plaintext
7. Amazon Web Services: Overview of security processes. (n.d.). https://docs.aws.amazon.com/whitepapers/latest/aws-overview-security-processes/hypervisor.html
8. How does Azure work? (n.d.). https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/get-started/what-is-azure
9. Mills, D. (1992). RFC1305: Network Time Protocol (Version 3) specification, implementation. RFC Editor.
10. Eidson, J. C., & Lee, K. (2003). Sharing a common sense of time. IEEE Instrumentation & Measurement Magazine, 6(1), 26–32.
11. Adya, A., & Liskov, B. (1997). Lazy consistency using loosely synchronized clocks. In Proceedings of the Sixteenth Annual ACM Symposium on Principles of Distributed Computing (PODC '97). Association for Computing Machinery, New York, NY, USA, pp. 73–82. https://doi.org/10.1145/259380.259425
12. Adya, A., Gruber, R., Liskov, B., & Maheshwari, U. (1995). Efficient optimistic concurrency control using loosely synchronized clocks.
13. Kleppmann, M. (2015). Designing data-intensive applications.
14. Timekeeping virtualization for x86-based architectures. (n.d.). https://www.kernel.org/doc/Documentation/virtual/kvm/timekeeping.txt
15. VMware ESX. (2008). Timekeeping in VMware virtual machines.
16. Hypervisor top level functional specification. (n.d.). https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs
17. Whitaker, A., Shaw, M., & Gribble, S. D. (2002). Scale and performance in the Denali isolation kernel. ACM SIGOPS Operating Systems Review, 36(SI), 195–209.
18. Popek, G. J., & Goldberg, R. P. (1974). Formal requirements for virtualizable third generation architectures. Communications of the ACM, 17(7), 412–421.
19. kvm-clock. (n.d.). https://lkml.org/lkml/2010/4/15/355
20. Broomhead, T., Cremean, L., Ridoux, J., & Veitch, D. (2010). Virtualize everything but time. OSDI, 10, 1–6.
21. Geng, Y. (2018). Exploiting a natural network effect for scalable, fine-grained clock synchronization, 81–94. https://www.usenix.org/conference/nsdi18/presentation/geng
22. CockroachDB operational FAQs. (n.d.). https://www.cockroachlabs.com/docs/stable/operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized
23. Ridoux, J., & Veitch, D. (2010). Principles of robust timing over the internet. Communications of the ACM, 53(5), 54–61.
24. Burns, B., Grant, B., Oppenheimer, D., Brewer, E., & Wilkes, J. (2016). Borg, Omega, and Kubernetes. Queue, 14(1), 70–93.
25. Apache Mesos. (n.d.). http://mesos.apache.org/
26. Afzal, H. M. R., Luo, S., Afzal, M. K., Chaudhary, G., Khari, M., & Kumar, S. A. P. (2020). 3D face reconstruction from single 2D image using distinctive features. IEEE Access, 8, 180681–180689.
27. Srivastava, V., Srivastava, S., Chaudhary, G., & Al-Turjman, F. (2020). A systematic approach for COVID-19 predictions and parameter estimation. Personal and Ubiquitous Computing, 1–13.
Chapter 7
Analysis of LTE Downlink Performance Under Heterogeneous Mobility Conditions via a Mobile App

Adeosun Nehemiah Olufemi and Fadi Al-Turjman
7.1 Introduction

With the rapid growth of smartphone devices, the number of users is also increasing as these devices become a significant source of data. This clearly shows users' interest in mobile applications that provide storage, analysis, and visualization of information [1, 2]. Most modern phones are equipped with computer-grade processing capability and are used to access networks and run various applications developed to meet user requirements over mobile communication networks [3]. Several mobile apps have been developed and integrated with the Global Positioning System (GPS) [4]. Recent years have seen growing momentum for mobile applications that can adapt their behavior to their users. Such adaptation requires access to the user's profile, location, and preferences. There are several platforms for developing smartphone applications; the majority of phones and portable devices support the Android operating system (OS) (Fig. 7.1). The aim of this work is to develop a mobile app on the Android platform to analyze the performance of the connected Wi-Fi and LTE under different mobility conditions. We have used LTE and Wi-Fi for this comparison and ensured that the proposed work has the best performance. After rigorous analysis of the response times of existing similar apps, we found that the existing apps have high response times, and we address this limitation with the following novelties:
Fig. 7.1 Different features on the app user interface
(a) Implementation of a splash screen: Introducing a welcome screen activity (also known as the splash activity) whenever the application launches is one way to improve the mobile user's experience by masking the actual application response time. We have implemented a splash screen which shows up when the app is launched; a minimal sketch follows this list. This feature is not available in some of the existing mobile apps.
(b) Removal of unnecessary data and files: Large amounts of data and files in an app take a long time to download. The response time of the proposed mobile app has been fully optimized: by reducing the size of the payload sent from the server to the phone, the time a user spends waiting can be decreased. Data that users do not need has been removed so as to reduce the app response time. Sluggish response times can slow adoption and usage, lowering the value of the app and any ensuing return on investment (ROI). Faster response times can enhance an app's success in terms of both employee and customer adoption.
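A splash screen of the kind described in (a) can be implemented as a lightweight launcher activity that shows the logo for a moment and then hands off to the main activity. The following is a minimal sketch; the layout resource, delay, and activity names are illustrative, not the app's actual source.

import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import androidx.appcompat.app.AppCompatActivity;

// Launcher activity that masks startup latency behind a short splash screen.
public class SplashActivity extends AppCompatActivity {
    private static final long SPLASH_DELAY_MS = 1500; // illustrative delay

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_splash); // logo + short description

        // After a short delay, move on to the main screen and drop the
        // splash activity from the back stack.
        new Handler(Looper.getMainLooper()).postDelayed(() -> {
            startActivity(new Intent(SplashActivity.this, MainActivity.class));
            finish();
        }, SPLASH_DELAY_MS);
    }
}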
7.2 Literature Review

In the Internet of Things (IoT), various sorts of objects, such as cell phones, sensors, and wearable devices, communicate with one another over the web. As human mobility becomes a significant aspect of many smart city applications,
data from cell phones has received a lot of attention. IoT encompasses a wide assortment of heterogeneous devices that connect, communicate, and produce a huge amount of data. The technology is advancing in such a way that everything is now getting connected to the Internet, which enables people to easily communicate and share information. Today, cities face complex challenges in improving their residents' quality of life. Because of urban concentration, people's everyday environments have been affected by increased traffic congestion, carbon dioxide and greenhouse gas emissions, and garbage disposal. The idea of a smart city has become popular over recent years. It takes on several dimensions depending on the meaning of "smart" and benefits from innovative applications of new kinds of information and communication technology to support public sharing. The idea of the digital/smart city is a response to all these emerging issues. Many cities characterize themselves as "smart" when they recognize some of these attributes in themselves. Optimization design concerns the activities and procedures that have to be followed during the design and operation of communication networks, thereby contributing fairness and throughput.
7.2.1 Arduino Wi-Fi Network Analyzer

Nowadays, communication between people through mobiles or computers is the most important way to keep distant people close to each other. People use the Internet to check on their distant relatives and see their updates. The most important way to connect to the Internet worldwide is by using a Wi-Fi network. The project in this paper is a Wi-Fi network analyzer. Its purpose is to find the Wi-Fi networks surrounding the user and show the name of each Wi-Fi network with its signal strength in dBm and type of encryption. The system was built by connecting a Wi-Fi Shield to a single-board Arduino and shows the numerical results on an LCD screen. A mobile application was developed for the same purpose. Processing data and displaying numerical values are some of the most important engineering aspects. Microcontrollers, such as the Arduino, are important tools for analyzing data wirelessly with the help of a Wi-Fi Shield. This project focused on designing a simple Wi-Fi communication analyzer using an Arduino microcontroller so that it interacts with the Wi-Fi Shield and displays numerical results for Wi-Fi networks on an LCD monitor. The goals of this work are to integrate programming with communication system design, to apply the knowledge of wireless protocols, and to understand the basics of the Arduino microcontroller. Using the information given by the program, we can connect our laptops or mobile phones to the best available Wi-Fi signal. Due to its numerical value display and external power supply, this model can be useful for distributing Wi-Fi networks in a wide area where no Wi-Fi network is available. The idea of this project is to use the Arduino chip and the Wi-Fi Shield to detect Wi-Fi networks and show results on a small LCD screen. The system also shows the strength of each Wi-Fi channel. To build this program, the design components consist of:
• Arduino microcontroller
• Wi-Fi Shield
• LCD/LED screen
• DC power supply
The program flowchart implemented and shown in this paper can be used to estimate the signal strength of the surrounding Wi-Fi networks. Using such information, we can connect our laptops or mobile phones to the best available Wi-Fi signal.
7.3 Methodology

NEU Internet Analyzer provides useful information about wireless signals in a particular environment. It supports both 2.4 GHz and 5 GHz Wi-Fi networks. We have developed the proposed app using the Android Studio software, which is software for developers to build Android apps. The app has been tested on a Samsung Galaxy S8 (an A-GPS mobile device) running Android 7.0 (Nougat). To perform network operations in our application, we have included android.permission.INTERNET in our manifest file. Without this permission, the mobile app would not be able to connect to the Internet. The other permissions offered by Android which have been used in this app are ACCESS_COARSE_LOCATION and RECEIVE_BOOT_COMPLETED. We have used these two permissions to determine the Internet service provider (ISP) and to execute the Internet analysis. The NEU Wi-Fi Analyzer mobile app has been built with Java using Android Studio and SQL. The project is kept as simple as possible for better understanding. It contains a few Java and XML files (Fig. 7.2). There are three key loops to be aware of within activities:

(a) The entire lifetime of an activity: this occurs between the initial call to onCreate(Bundle) and a single final call to onDestroy(). An activity does all setup of "global" state in onCreate() and releases all remaining resources in onDestroy(). For example, if it has a thread running in the background to download data from the network, it may create that thread in onCreate() and then stop the thread in onDestroy().
(b) The visible lifetime of an activity: it occurs between a call to onStart() and a corresponding call to onStop(). During this time, the user can see the activity on-screen, though it may not be in the foreground and interacting with the user. Between these two methods, you can maintain resources that are needed to show the activity to the user. For example, you can register a BroadcastReceiver in onStart() to monitor changes that affect your UI and unregister it in onStop() when the user no longer sees what you are displaying. The onStart() and onStop() methods can be called multiple times, as the activity becomes visible or hidden to the user.
(c) The foreground lifetime of an activity: it occurs between a call to onResume() and a corresponding call to onPause(). During this time, the activity is visible,
Fig. 7.2 Android mobile application activity life cycle [10]
active, and interacting with the user. An activity can frequently transition between the resumed and paused states – for instance, when the device sleeps, when an activity result is delivered, or when a new intent is delivered – so the code in these methods should be fairly lightweight.
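Putting the three nested loops together, the lifecycle hooks pair up as onCreate/onDestroy, onStart/onStop, and onResume/onPause. Below is a minimal sketch of an activity overriding all six; the logging is purely illustrative.

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

// Skeleton activity showing the three nested lifecycle loops:
// entire lifetime (onCreate/onDestroy), visible lifetime (onStart/onStop),
// and foreground lifetime (onResume/onPause).
public class LifecycleDemoActivity extends Activity {
    private static final String TAG = "Lifecycle";

    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.d(TAG, "onCreate: one-time global setup (threads, views)");
    }
    @Override protected void onStart()   { super.onStart();   Log.d(TAG, "onStart: now visible; register receivers"); }
    @Override protected void onResume()  { super.onResume();  Log.d(TAG, "onResume: in foreground; keep this lightweight"); }
    @Override protected void onPause()   { super.onPause();   Log.d(TAG, "onPause: commit unsaved changes"); }
    @Override protected void onStop()    { super.onStop();    Log.d(TAG, "onStop: no longer visible; unregister receivers"); }
    @Override protected void onDestroy() { super.onDestroy(); Log.d(TAG, "onDestroy: release remaining resources"); }
}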
As the sketch above illustrates, the whole lifecycle of an activity is defined by these activity methods. They are hooks that we override to do appropriate work when the activity changes state. All activities will implement onCreate(Bundle) to do their initial setup; many will also implement onPause() to commit changes to data and prepare to stop interacting with the user, and onStop() to handle no longer being visible on screen.

A. Material Design and Optimization

The first thing a developer should consider when working on a project is the project structure, including the UI/UX design for the system. Material Design is a visual language that integrates the classic principles of good design with the innovation of technology and science. It is very important to have a presentable user interface with a pleasing design [5]. To identify a set of UI component categories, we referred to popular design tools and languages that expose component libraries, such as Balsamiq [6] and Google's Material Design [7]. Material Design has been adopted for the Location Finder mobile application to decorate the user interface and to ensure the app is mobile-friendly and fully optimized. All basic components are nicely decorated with a unique-looking and attractive color combination. Code optimization is essential to make the app run smoothly without lagging, which is why the app has been developed with fully optimized code. Every single implementation has been optimized for the highest performance. Also, the code has been carefully crafted and modularized to enable other developers to easily understand it. Comments are used where necessary to describe certain lines of code.

B. Android Studio Setup

Android Studio is free software built specifically to develop Android apps. After installing this software, the next thing is to download the necessary plug-ins, including the Java IDE and SDK. The next step is to launch the IDE and start a new project as illustrated below (Fig. 7.3). If a project has been created already, it will be listed at the left corner of the screen, and the project can be easily selected. In this case, the project folder is named NEAR EAST UNIVERSITY PLACES. Once that has been selected, the project will be synced by Gradle, and the screenshot below will be displayed (Fig. 7.4). The files in the folder contain the Java and XML files. Each file has its own functionality. After testing our app on the emulator, the apk file is generated by selecting Build > Generate Signed Bundle/APK, as illustrated below (Fig. 7.5). After generating the apk file, the app can be either installed on any Android device or published on Google Play Store for people to install.
Fig. 7.3 Android Studio interface
Fig. 7.4 Android Studio interface after opening a project
7.4 Results and Discussions

The NEU Internet Analyzer app interface is designed to allow users of small-screen devices to test the speed of the available Wi-Fi networks and connect to the best one. The features of the NEU Internet Analyzer app are:
• The app lists the available Internet connections.
• The user can connect to the different available networks to analyze them.
• It displays the speed of the available Wi-Fi.
• It shows the ping time of the analyzed Wi-Fi, displaying the result in ms.
• It shows the download speed of the analyzed Wi-Fi, displaying the result in Mbps.
• It shows the upload speed of the analyzed Wi-Fi, displaying the result in Mbps.

The screenshots of the user interface are listed below. Using the NEU Internet Analyzer mobile app to analyze the connected Wi-Fi (Fig. 7.6), the user first opens the NEU Internet Analyzer app, and the first screen that shows up is the splash screen with the app logo and a short description (Fig. 7.6a); the user then gets ready to run a test (Fig. 7.6b); the result is displayed (Fig. 7.6c); and the result is stored in a neatly designed table to compare with the other analyses made on different Wi-Fi connections (Fig. 7.6d).

A. Installation

The app installation process is designed to be straightforward and should not demand much time or effort from the user. The mobile app should initially be downloaded and installed from the Google Play Store and requires under 8 MB of space. Immediately after installing the app, it can be launched for use. It permits app users to see how the mobile application functions, approve the appropriate permissions, and confirm that data collection can start.
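To make the ping and download figures in the feature list concrete, a speed test of this kind can be approximated by timing a blocking HTTP transfer: "ping" as the time to open a connection and receive the response headers (not an ICMP ping), and download speed as bytes transferred divided by elapsed time. The sketch below is our own illustration, not the app's source; the test URL is a placeholder.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Rough illustration of how ping time and download speed can be measured.
public class SpeedTestSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/testfile"); // placeholder URL

        // "Ping": time to connect and receive the response headers.
        long t0 = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.getResponseCode();
        double pingMs = (System.nanoTime() - t0) / 1e6;

        // Download speed: bytes read divided by elapsed time.
        long bytes = 0;
        long t1 = System.nanoTime();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) bytes += n;
        }
        double seconds = (System.nanoTime() - t1) / 1e9;
        double mbps = (bytes * 8) / (seconds * 1e6); // megabits per second

        System.out.printf("ping: %.2f ms, download: %.2f Mbps%n", pingMs, mbps);
    }
}

An upload measurement follows the same pattern with a timed POST of a fixed-size body.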
Fig. 7.5 Generating the .apk file
Fig. 7.6 Six screens that explain how the app works. (a) The first screen that the user will see is the splash screen activity with a beautifully designed logo and a short description. (b) The user is ready to analyze the connected Wi-Fi. (c) The result after running a test. (d) Results stored in a neatly designed table to compare with the other analyses made on different Wi-Fi networks. (e) Settings screen. (f) Choose the data rate units
Fig. 7.6 (continued)
B. Background Operations

The NEU Places mobile application relies on the ACCESS_NETWORK_STATE, ACCESS_WIFI_STATE, and ACCESS_COARSE_LOCATION permissions. These give access to GPS, Wi-Fi, and network inspection to retrieve the current ISP and network state. The application is treated as high priority, which means that the most precise reading available is provided, regardless of battery use.

C. Resilience of the NEU Places Application

Data collection can be stopped inadvertently by the app users in the following ways: (i) switching off their mobile device, (ii) exiting or shutting down the app, (iii) stopping the foreground tasks which are running, (iv) forcing the shutdown of all active applications, (v) disabling the location services, (vi) activating the power-saving modes, and (vii) uninstalling/removing the mobile app completely from the device. Additionally, if the foreground component of the application is closed, the background service will keep running. Even if all foreground applications are cleared, background services will not be interrupted. However, if a forced shutdown of all applications occurs, the participant will need to open the application again to proceed with data collection. In the event that a user does not have location permissions enabled, the app will send the user a notification. This reminds the user that location permissions ought to be enabled. Users can tap on the notification, which
will guide them to the relevant settings through which the applicable permissions can be re-enabled. Furthermore, the power-saving modes present in some Android devices may limit the number of location points recorded by a device if it has not been used for a prolonged time frame [9]. However, this can partially be mitigated by ensuring that users manually whitelist the application, which increases the number of available data-logging windows. Lastly, removing/uninstalling the application is interpreted as a desire to withdraw from usage; this will stop the collection of data and delete all the files that are associated with the app.

D. Response Time

The mobile API response time matters when it comes to mobile app development. Slow response times can hinder the adoption of mobile apps. A delay of just a few seconds is sufficient to make some users abandon an application. Application response time is a significant factor that can impact the adoption and usage of mobile apps. The cost of mobile application development should be offset by the value it creates for the business, to help produce a measurable return on investment. With mobile applications that employees rely upon to do their jobs, slow load times can decrease efficiency and lead to employee and customer dissatisfaction. Mobile applications, in contrast to desktop applications, have limited resources, such as battery life, processing speed, and bandwidth, and UI/UX design choices need to account for this. Fortunately, a few tricks and adjustments to the RESTful application programming interfaces (APIs) that regularly supply data to mobile applications can help improve load times while remaining mindful of resource utilization. The NEU Internet Analyzer mobile app has been fully optimized in order to get a better and faster response time; some of the methods used are listed below:

i. Implementation of a splash screen: A significant part of mobile UI/UX design is the perceived response time of mobile APIs in loading data. Introducing a splash screen when the application is launched is one trick to help improve the user experience by masking the actual application load time.
ii. Mobile-centric APIs to connect to the back end: Many organizations still depend on legacy systems that are not mobile-friendly. Data coming from legacy APIs may not be optimized for display or processing on a mobile device and can degrade the user's experience. However, many enterprise mobile solutions need the data and information stored on legacy systems. Mobile Backend as a Service (MBaaS) offerings provide the tools and environment necessary to create mobile-centric RESTful APIs that are designed to integrate with legacy systems, offer faster load times, and produce mobile-friendly data.
iii. Removal of unnecessary data and files: Huge volumes of data take a long time to download. By limiting the size of a payload sent from the server to a mobile device, the time a user spends waiting can be reduced. Many
methods exist to reduce payloads. One technique is to eliminate data that the mobile device and user do not need. While this may appear obvious, it can easily be overlooked during development. Current MBaaS solutions frequently support the Node.js JavaScript runtime, which can simplify requesting and marshaling legacy data to eliminate superfluous fields and convert it to device-friendly JSON formats. This can reduce the payload and consequently cut the time a mobile device spends downloading data. Slow response time, or anything that can hamper adoption or use, can lower the value of the application and any subsequent ROI. Faster response time can improve mobile success in terms of both employee and customer adoption. Mobile-centric APIs, caching mechanisms, and other good UI/UX practices can help to accomplish this.

For any mobile app, performance is very critical. We ensure that the performance of the proposed mobile app is fully optimized, because if the mobile app is slow, the end user will uninstall it and find another related application that performs better. Our system is compatible with and able to take advantage of existing cloud services. We have chosen this approach because we are interested in the communication between the mobile application and the remote server.

E. Test Environment and Test Plan

The environment where the tests will run, together with the tools and required hardware, constitutes the test environment. The test environment in this research consists of:

• Mobile devices for testing
• Tools for monitoring the utilization of hardware resources in the mobile devices
• Tools to analyze and visualize the data

The list of available wireless networks and their speeds is displayed in Table 7.1. For each analysis we made on the different available wireless networks, we generated different results. The ping time is in milliseconds, the download speed is in megabits per second, and the upload speed is also in megabits per second. We analyzed three available Wi-Fi networks, and the results are listed in Table 7.1. The first available Wi-Fi that was scanned is BIGMOO, which gives ping time, download speed, and upload speed results of 79.18 ms, 3.55 Mbps, and 0.66 Mbps, respectively. The second best available Wi-Fi is Springfield, which has a ping time of 65.97 ms, a download speed of 6.44 Mbps, and an upload speed of 1.23 Mbps. Finally, NEU Student Wi-Fi has a ping time of 71.34 ms, a download speed of 2.89 Mbps, and an upload speed of 4.56 Mbps.

F. Experimental Setups

There are a couple of guidelines concerning application response time and the user experience. A response time below 100 ms will feel very fast on a user's device. Any response time up to 1 second (s) can be acceptable to users.
Mobile application testing is a sort of A/B testing wherein different user segments are given different variants of an in-app experience to determine which one elicits the desired action from them. Mobile application testing covers all capability tests that run on the server side, as well as allowing marketers and product managers to optimize the end-to-end user experience. A few criteria are considered when selecting the cross-platform tools for this comparison. Most importantly, there must be a PropertyCross implementation available for the tool. This does not restrict the research, as the most notable and commonly used cross-platform tools are supported [11].

G. Launch Time

In Table 7.2, we have compared the launch time of the proposed app when developed on different platforms. The results show that cross-platform tools exhibit a slower launch time compared to native development. Overall, the implementation with Sencha Touch 2 is the slowest to start. The relative order of the other JavaScript frameworks depends on the platform on which the application is running; for example, jQuery Mobile has the best launch time on iOS, while it performs worse on Android and Windows Phone. Although less noticeable compared to the WebView overhead of the JavaScript framework implementations, the runtime is for the largest part responsible for the launch-time overhead of the cross-platform tools in the runtime category. NeoMAD is a source code translator that does not use a runtime but translates the application's source code into dedicated source code for the various platforms. Consequently, the launch time of NeoMAD is expected to closely resemble the native launch time. The proposed application is a native application that runs on the Android platform; we have used the native implementation to develop our proposed app.

H. Time to Navigate Between Pages of the Application

In this section, we investigate the response times for navigating to an activity and then returning to another activity when the application is created on various programming platforms. As Table 7.3 shows, the JavaScript implementation frameworks (such as Famo.us and Mgwt) for the most part achieve good response times of under 100 ms. When we open an activity, it is held in the phone memory, and when we switch to the next activity, the response time is measured. The proposed application is a native application that runs on Android devices.
Table 7.1 Testing results

Available Wi-Fi   Ping time (ms)   Download time (Mbps)   Upload time (Mbps)
BIGMOO            79.18            3.55                   0.66
Springfield       65.97            6.44                   1.23
NEU Student       71.34            2.89                   4.56
Table 7.2 Launch times in ms

                      Android         iOS             Windows
                      HE      LE      HE      LE      HE
Native implementation for each platform
Native                293     460     191     611     876
JavaScript frameworks
Famo.us               1282    1980    438     1495    2252
Intel App Framework   1009    1383    537     1806    1500
Ionic                 1225    1810    731     2762    1750
jQuery Mobile         1790    2515    424     1223    2501
Mgwt                  1186    1433    503     1789    2000
Sencha Touch 2        2434    2858    758     2967    2516
Source code translation and runtimes
Adobe AIR             1364    2782    1191    5568    n/a
NeoMAD                392     500     285     805     3144
Titanium              820     1547    331     1152    n/a
Xamarin               890     1177    347     1383    1001
The response time of a few programming platforms was better than that of the native variant. This can be attributed to the quick rendering of an activity on the screen, the transition that happens when switching between pages in native apps. As Table 7.4 shows, response delays for returning to a previous activity are generally better than those for opening a new activity. We have used the native implementation to build our proposed application, which runs on Android devices. Apart from the seventh framework in the table (Sencha Touch 2) on the low-end devices, every response delay is below 100 milliseconds; hence, users will perceive this transition as instantaneous. The presented tables also show differences in memory used by the different platforms: Android allocates substantially more random access memory (RAM) for every implementation compared to iOS. We have used the native implementation to develop our proposed application, which runs on Android devices.

I. The Usage of the Central Processing Unit (CPU)

CPU usage is the share of the total available CPU time spent by an application during a specific time span (Table 7.5). We have compared all these results with the proposed research. The mobile app has been developed on the Android platform, and we ensure that the response time of the proposed research is lower than the other results. This has been achieved by fully optimizing our mobile app: removing code that does not add value to the app, checking the app's efficiency, testing the code, using profiling tools for monitoring, emphasizing app usability, and focusing on the user interface. Taking these into consideration, we achieve a better response time and make our app faster than other existing apps. The response time for the proposed research is 5.28 s over Wi-Fi and 5.71 s over the 4G network. We
Table 7.3 The time taken to open a different activity (in ms)

                      Android         iOS             Windows
                      HE      LE      HE      LE      HE
Native implementation for each platform
Native                91      109     28      43      129
JavaScript frameworks
Famo.us               26      20      4       12      31
Intel App Framework   49      50      32      50      75
Ionic                 80      90      39      119     144
jQuery Mobile         100     113     71      295     296
Mgwt                  38      40      15      26      53
Sencha Touch 2        267     425     135     492     821
Source code translation and runtimes
Adobe AIR             4       5       108     111     n/a
NeoMAD                126     21      23      113     76
Titanium              198     45      163     207     n/a
Xamarin               216     36      54      –       75
Table 7.4 The time taken to return to the previous activity (in ms)

                            Android         iOS             Windows
                            HE      LE      HE      LE      HE
Native implementation for each platform
Native (proposed research)  12      20      2       8       12
JavaScript frameworks
Famo.us                     12      20      19      26      16
Intel App Framework         29      30      16      31      52
Ionic                       43      60      4       4       63
jQuery Mobile               1       3       13      25      77
Mgwt                        21      20      19      1       27
Sencha Touch 2              1       1       1       –       337
Source code translation and runtimes
Adobe AIR                   1       1       8       7       n/a
NeoMAD                      8       10      1       5       10
Titanium                    1       3       1       7       n/a
Xamarin                     40      244     3       10      12
have used the Native implementation to develop our proposed app which runs on Android devices.
Table 7.5 The consumption of the phone memory after launching the app (in MB)

                            Android           iOS               Windows
                            HE       LE       HE       LE       HE
Native implementation for each platform
Native (proposed research)  72.62    9.80     11.62    2.88     19.20
JavaScript frameworks
Famo.us                     113.65   33.68    7.65     10.54    43.84
Intel App Framework         114.61   28.33    7.23     8.48     39.25
Ionic                       119.45   32.38    7.18     12.82    48.35
jQuery Mobile               141.11   34.74    7.24     13.28    44.13
Mgwt                        117.61   32.15    6.61     10.70    46.44
Sencha Touch 2              161.61   42.73    7.20     17.32    52.14
Source code translation and runtimes
Adobe AIR                   57.55    43.87    199.4    68.60    n/a
NeoMAD                      79.22    11.77    8.10     2.95     19.80
Titanium                    89.58    26.66    12.9     26.45    n/a
Xamarin                     81.99    17.77    9.29     4.16     19.84
7.5 Conclusion

Mobile phone applications have changed the way we live like nothing else, particularly Android mobile applications. In more than 190 countries around the world, a great many Android mobile phones are in use, and these numbers are growing daily (Meier, 2012). The NEU Internet Analyzer mobile application has been developed and discussed in this chapter for easy usage on Android devices. With this application, different Wi-Fi connections within the campus vicinity can be analyzed to find the best-performing connection. NEU Internet Analyzer helps create a flawless wireless network: it provides advanced mapping features that show different places of interest to app users, helps find the optimal placement for Wi-Fi receivers, and provides per-channel Wi-Fi information. The performance was assessed on three different Wi-Fi connections. In light of the performance analysis, general conclusions that application developers should be aware of when choosing a particular cross-platform tool are drawn. Notably, the use of a cross-platform tool can affect specific functional aspects of the applications developed for mobile.
7.6 Declarations

Funding The authors declare no funding.
Conflicts of Interest The authors declare no conflict of interest.
Data Availability The authors declare no data availability.
Supplementary Materials The authors declare no supplementary material.
References

1. Alienman Technology LLC. (2015). Where My Droid. Retrieved from https://play.google.com/store/apps/details?id=com.alienmanfc6.wheresmyandroid&hl=en
2. Hardy, B. (2014). The Big Nerd Ranch Guide.
3. Salman, K., Ali, H., & Saleem, S. (2015). Research on auto mobile-PC upload images application through Bluetooth using Java. VAWKUM Transactions on Computer Sciences. Retrieved from http://vfast.org/journals/index.php/VTCS/article/view/356
4. Saleem, S., Khan, I., Khan, S., & Rahman, A. (2015). Comparison of cooperative diversity protocols in various relay locations through network coding. VFAST Transactions on Software Engineering. Retrieved from http://vfast.org/journals/index.php/VTSE/article/view/357
5. Material Design. https://material.io/design/
6. Balsamiq Studios. (2018). Balsamiq. https://balsamiq.com/
7. Call-Em-All. (2018). Material-UI. https://material-ui-next.com/
8. Canzian, L., & Musolesi, M. (2015). Trajectories of depression: Unobtrusive monitoring of depressive states by means of smartphone mobility traces analysis. In Proceedings of the 2015 ACM international joint conference on pervasive and ubiquitous computing (pp. 1293–1304). ACM Press.
9. Android. (2018). Services overview. Retrieved from https://developer.android.com/guide/components/services
10. Android mobile application activities lifecycle. https://developer.android.com/reference/android/app/Activity
11. Android SDK. http://developer.android.com/sdk/index.html. Accessed Feb 2013.
Chapter 8
Classification of Solid Wastes Using CNN Method in IoT Het-Net Era Fadi Al-Turjman, Rahib H. Abiyev, Hamit Altıparmak, Meliz Yuvalı, and Şerife Kaba
F. Al-Turjman
Artificial Intelligence Engineering Department, Research Center for AI and IoT, Near East University, Mersin 10, Turkey
e-mail: [email protected]
R. H. Abiyev · H. Altıparmak (*)
Department of Computer Engineering, Near East University, Mersin, Turkey
e-mail: [email protected]; [email protected]
M. Yuvalı
Faculty of Medicine, Department of Biostatistics, Near East University, Mersin, Turkey
e-mail: [email protected]
Ş. Kaba
Department of Biomedical Engineering, Near East University, Mersin, Turkey
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
F. Al-Turjman (ed.), Real-Time Intelligence for Heterogeneous Networks, https://doi.org/10.1007/978-3-030-75614-7_8

8.1 Introduction

Throughout history, crowded cities and countries have developed approaches to dispose of human-generated waste. Environmental pollution is one of the biggest problems of today, and many countries are working to find solutions to it. Recycling, one of those solutions, is becoming a big industry [3]. Everything in nature is discarded and recycled, ensuring that there is no waste: plants and animals die and rot in the soil, supplying food to the plants and animals that succeed them. Both the reality of waste and the words describing it have grown together for centuries; several types of word structures define waste in many languages, including common semantic relationships [1]. Substantial waste accumulation in urban areas is a significant problem in the world and, if not adequately controlled, causes environmental pollution, which is a significant danger to human health. It is crucial to have an advanced or intelligent
waste management system to control various waste materials. One stage of waste management is separating waste into its various components, which is usually done through manual collection [1]. For a sustainable society, recycling is essential. The recycling process requires recycling facilities to sort garbage by hand and use a series of large filters to isolate more specific items [18]. The harmful substances in the trash may infect the person separating the waste. Thus, it is essential to develop an automatic system that can sort the waste and protect human health. With such an intelligent system, waste can be separated quickly, which gives more accurate results than the manual method. When the system is in operation, properly separated wastes can be recycled and converted into energy and fuel for sustainable economic growth. Such a system provides a significant advantage by automatically classifying wastes and reducing human intervention, preventing infection and contamination [1].

The authors in [1] presented an intelligent system for the classification of waste materials, built on the 50-layer pre-trained residual network (ResNet-50) convolutional neural network model, a well-known machine learning tool, together with a support vector machine (SVM) used for classifying waste into groups such as metal, glass, plastic, and paper. In their study, the trash image dataset developed by Gary Thung and Mindy Yang was used, and the experiments obtained 87% accuracy on the dataset. The authors in [6] presented a system to classify plastic waste using image processing methods and convolutional neural networks. Plastic waste images were collected from the WaDaBa database and classified as polystyrene, high-density polyethylene, polyethylene terephthalate, and polypropylene. The experimental studies consisted of training two CNN structures and measuring classification accuracy for training and test data on different parts of the input data. In the experiments, it was observed that the proposed 15-layer network is highly efficient for images with a resolution twice as low as that of the existing 23-layer network, which uses a specific resolution of 227 × 227 pixels. The classification of segregated waste into four basic classes is carried out in most cases without error. The author in [4] presented a system for classifying solid waste using deep learning methods, using the TrashNet dataset to automate the recycling of solid waste images. Two approaches were used in the study: the use of transfer learning with the DenseNet architecture, instead of training from scratch, yielded the highest accuracy on the TrashNet dataset, and, beyond that, the main innovation of the research is the proposed model, RecycleNet. The author in [17] presented a system to classify garbage into six classes involving paper, glass, trash, metal, cardboard, and plastic by using support vector machine (SVM) and convolutional neural network (CNN) approaches. Additionally, a dataset containing approximately 400–500 manually collected images was created for each class. As a result, SVM performed better than CNN, achieving 63% test accuracy.

Our motivation was to find an automated method of sorting garbage. This method will have the potential to make separation and recycling facilities more efficient and
help reduce waste because it is not possible for employees to do everything with 100% accuracy.
8.2 Methodology

Convolutional neural networks are a kind of neural network, generally consisting of four layer types: input image, convolution, pooling, and full connection. In this article, we classify solid wastes divided into six classes using convolutional neural networks. The classes used in the article are cardboard, glass, metal, paper, plastic, and garbage. First of all, to increase the number of pictures in our database beyond the original pictures by image manipulation, we doubled the amount of data by flipping the pictures upside down, rotating them to the right, and zooming. Our dataset consists of 3198 different pictures from 6 classes. A minimal model with these four layer types is sketched after this paragraph.
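The chapter does not specify the exact network configuration, so the following is only a minimal Keras sketch of a CNN with the four layer types named above; the filter counts, kernel sizes, and dense-layer width are illustrative assumptions, not the authors' architecture.

    # Minimal sketch of a CNN with input, convolution, pooling, and fully
    # connected layers for six waste classes. Layer sizes are assumptions;
    # the chapter does not specify the actual architecture used.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(384, 512, 3)),      # TrashNet images: 512 x 384 RGB
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(6, activation="softmax"),  # cardboard, glass, metal,
    ])                                          # paper, plastic, trash
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])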
8.2.1 Two-Dimensional Convolution

The first stage of a CNN begins with two-dimensional convolution [7]. The filter is slid over the input along its two main axes; at each position, the overlapping values are multiplied element-wise, and the sum of the products is stored as one element of the output matrix. Two-dimensional convolution is illustrated in Fig. 8.1 [8]. The colored images in Fig. 8.2 consist of three different channels: red, green, and blue. Consequently, convolution is performed across all three channels.
Fig. 8.1 Two-dimensional convolution representation [13]
Fig. 8.2 Applying convolution to three color layers in the image [13]
The number of channels of the applied filter is always taken to be equal to the number of channels of the input signal. A minimal numeric sketch of the convolution operation follows.
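As a rough illustration of the operation described above, the sketch below slides a filter over a single-channel input and sums the element-wise products (as is conventional in CNN libraries, the filter is applied without flipping, i.e., cross-correlation); the input and filter values are arbitrary.

    # Minimal sketch of 2-D convolution on one channel: the filter slides
    # over the input, element-wise products are summed, and each sum
    # becomes one element of the output matrix.
    import numpy as np

    def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        n, f = image.shape[0], kernel.shape[0]  # square input and filter assumed
        out = np.zeros((n - f + 1, n - f + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
        return out

    image = np.arange(25, dtype=float).reshape(5, 5)   # arbitrary 5 x 5 input
    kernel = np.array([[1.0, 0.0, -1.0]] * 3)          # simple edge filter
    print(conv2d(image, kernel).shape)                 # (3, 3) = (n - f + 1)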
8.2.2 Horizontal and Vertical Filtering

Horizontal and vertical filtering can be considered a layer in the artificial neural network. Here the filter applied to the input image is the weight matrix, which is updated continuously by the backpropagation algorithm. At this stage, a bias value is added to the output matrix before activation. Figure 8.3 shows the flow of the convolution process [12].
8.2.3 Padding Process

Convolution changes the size of the output relative to the input. This size difference between the input and the output signal can be managed by adding extra pixels around the input matrix [14] (Fig. 8.4). Adding pixels in this way is called padding. If the input matrix is n × n and the filter (weight) matrix is f × f, the output matrix has size

(n + 2p − f + 1) × (n + 2p − f + 1)

where p is the padding size in pixels. For the output matrix to have the same size as the input, we can use the equation p = (f − 1)/2 (Fig. 8.5).
Fig. 8.3 Flow of convolution [4]
8.2.4 Stride Process

The stride specifies by how many pixels the filter (the weight matrix for convolution) is shifted across the image at each step, one pixel or more. It is another parameter that directly affects the output size [2] (Fig. 8.6). For example, with padding p = 1 and stride s = 2, the size of the output matrix is

(((n + 2p − f) / s) + 1) × (((n + 2p − f) / s) + 1)

If n = 5 and f = 3, the output size is 3 × 3. The pixels added in the padding process can consist of 0s, as in the example in Fig. 8.4, or can be filled by copying the value of the nearest pixel [11].
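A small helper, shown below, implements the output-size formula above and reproduces the worked example from the text.

    # Output size along one axis after convolution:
    # floor((n + 2p - f) / s) + 1.
    def conv_output_size(n: int, f: int, p: int = 0, s: int = 1) -> int:
        return (n + 2 * p - f) // s + 1

    # Worked example from the text: n = 5, f = 3, p = 1, s = 2 -> 3 x 3.
    print(conv_output_size(5, 3, p=1, s=2))             # 3
    # "Same" output size for stride 1 uses p = (f - 1) / 2:
    print(conv_output_size(5, 3, p=(3 - 1) // 2, s=1))  # 5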
Fig. 8.4 Padding process [10]
8.2.5 Pooling Process

The maximum pooling method is used in this layer. There is no learning in this layer of the network; the height and width can be reduced while keeping the number of channels of the input matrix constant. It is a useful step to make the computation easier [16] (Fig. 8.7), and it gives good results even in problems where location information is not very important. In the pooling process, a new output is obtained by selecting the highest value from each section (the regions shown with the same color in the figure) [9]. In the example in Fig. 8.7, 2 × 2 max pooling was performed by shifting two steps (pixels) at a time: the maximum value in each four-element region is transferred to the output, so the output contains one quarter of the input's elements. A minimal numeric sketch follows.
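The sketch below illustrates 2 × 2 max pooling with a stride of two on an arbitrary example matrix.

    # Minimal sketch of 2 x 2 max pooling with stride 2: the largest value
    # in each non-overlapping 2 x 2 block is kept, so the output has one
    # quarter of the input's elements.
    import numpy as np

    def max_pool_2x2(x: np.ndarray) -> np.ndarray:
        h, w = x.shape
        trimmed = x[:h - h % 2, :w - w % 2]  # drop odd edge rows/columns
        return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    x = np.array([[1, 3, 2, 4],
                  [5, 6, 1, 2],
                  [7, 2, 8, 1],
                  [0, 1, 3, 4]], dtype=float)
    print(max_pool_2x2(x))  # [[6. 4.]
                            #  [7. 8.]]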
Fig. 8.5 Padding result [10]

Fig. 8.6 Stride formula

8.2.6 Dataset

In this study, the TrashNet dataset was used [15]. This dataset has open-source access and consists of six different classes: cardboard, metal, plastic, glass, paper, and trash. In this study, 3198 different pictures were used. A different number of samples was used for each class, but attention was paid to keeping these numbers close to each other. In this paper, we used 482 cardboard images, 430 metal images, 474 plastic images, 979 glass images, 559 paper images, and 274 trash images. The background is white in the visuals. All images used in the dataset have different exposure and light values, and all images in the TrashNet dataset have dimensions of 512 × 384 pixels. Our dataset's visuals were inverted and shifted to the right by the data augmentation method, which is used to make the artificial neural network predict better on visuals coming from different angles. The dataset size is 4 gigabytes. Our dataset consists of 3198 different pictures from 6 classes (Fig. 8.8 and Table 8.1). A minimal augmentation sketch follows.
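A minimal sketch of such augmentation using Keras is shown below; the directory path, the zoom magnitude, and the use of flow_from_directory are illustrative assumptions, as the chapter does not give exact augmentation parameters.

    # Minimal sketch of the augmentation described above (flips, zoom)
    # using Keras. The directory path and parameter values are assumed;
    # TrashNet images are 512 x 384 pixels across six classes.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rescale=1.0 / 255,      # normalize pixel values
        horizontal_flip=True,   # mirror images
        vertical_flip=True,     # turn images upside down
        zoom_range=0.2,         # random zoom; magnitude is an assumption
    )
    train = datagen.flow_from_directory(
        "trashnet/",            # hypothetical path to the class folders
        target_size=(384, 512),
        class_mode="categorical",  # cardboard, glass, metal, paper,
    )                              # plastic, trash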
Fig. 8.7 Representation of pooling process [5]
Fig. 8.8 Sample images selected from dataset [4]
8.3 Result and Discussion

It has been found that manual waste separation by humans is not desirable in terms of time, economy, and health. The separation of wastes has positive as well as negative aspects, of which hygiene is the most critical; from an economic point of view, it is also more costly for wastes to be separated by people. In this article, an artificial intelligence-based system is proposed to separate solid wastes, and 69.7% accuracy is achieved using the TrashNet dataset. Based on this result, it was observed that artificial intelligence can separate solid wastes much faster and more accurately than humans. In this article, the convolutional neural network method, one of the artificial intelligence methods, is used, and the success achieved is promising for the use of artificial intelligence in the separation of solid wastes. Overall, for the solid wastes divided into 6 classes consisting of 3198 images, 69.7% accuracy was achieved (Fig. 8.9). As shown in Fig. 8.9, the 482 cardboard images in our dataset were classified with 68.3% accuracy. Next, we have 979 glass images, and the accuracy achieved for the glass category is 35.4%. The lowest per-category accuracy was seen in the glass category, even though it has the most images, because bias or variance may have
Table 8.1 Number of materials belonging to six classes in the database [15]

Material    Total (3198)
Cardboard   482
Glass       979
Metal       430
Paper       559
Plastic     474
Trash       274
Fig. 8.9 Accuracy graph
occurred in our data for this category. There are 430 images in our metal category, for which 59.7% accuracy was achieved. In the paper category, 88.0% accuracy was achieved with 559 images, and in the plastic category, 83.8% accuracy was achieved with 474 images. Finally, 87.8% accuracy was achieved with 274 images in our trash category (Fig. 8.10).
8.4 Conclusion

In this study, 69.7% accuracy was achieved with the convolutional neural network method using the TrashNet database. If more data is used, the accuracy will increase much more. The system we trained using the convolutional neural network method achieved its lowest accuracy in the glass category, even though that category has the highest number of images: although 979 images were present in our glass category, only 35.4% accuracy was achieved for it. In the future, we plan to increase the system's success by adding more features to our study. Mainly because there is much more data in the glass category than in the other categories, this category underperformed during training. We plan to increase the system's overall and per-category success, primarily by increasing the number of samples in the other categories.

Compliance with Ethical Standards
Fig. 8.10 Confusion matrix
Conflict of Interest The authors declare that they have no conflict of interest.
Funding The authors did not receive support from any organization for the submitted work.
Ethical Approval "Not applicable"
Consent to Participate "Not applicable"
Consent to Publish "Not applicable"
Authors Contributions All authors have read and agreed to the published version of the manuscript.
Availability of Data and Materials Open access link of the dataset: https://github.com/garythung/trashnet
References

1. Adedeji, O., & Wang, Z. (2019). Intelligent waste classification system using deep learning convolutional neural network. Procedia Manufacturing, 35, 607–612.
2. Akhtar, N., & Ragavendran, U. (2019). Interpretation of intelligence in CNN-pooling processes: A methodological survey. Neural Computing and Applications, 1–20.
3. Arayakandy, A. (2019). Design and development of classification model for recyclability status of trash using SVM. International Journal for Research in Applied Science and Engineering Technology, 7(3), 2146–2150.
4. Bircanoğlu, C., Atay, M., Beşer, F., Genç, Ö., & Kızrak, M. A. (2018, July). RecycleNet: Intelligent waste sorting using deep neural networks. In 2018 Innovations in intelligent systems and applications (INISTA), pp. 1–7. IEEE.
5. Biswas, D., Su, H., Wang, C., Blankenship, J., & Stevanovic, A. (2017). An automatic car counting system using OverFeat framework. Sensors, 17(7), 1535.
6. Bobulski, J., & Kubanek, M. (2019, June). Waste classification system using image processing and convolutional neural networks. In International work-conference on artificial neural networks, pp. 350–361. Springer.
7. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences, Vol. 16, p. 02004. EDP Sciences.
8. Chen, H., Liang, D., & Zhao, L. L. (2019, April). A quantitative trading method using deep convolution neural network. In IOP Conference Series: Materials Science and Engineering, Vol. 490, No. 4, p. 042018. IOP Publishing.
9. Chintala, S. (2020). Convolutional neural networks for visual recognition. https://cs231n.github.io/convolutional-networks/#add. Accessed 10 Apr 2019.
10. Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
11. Ghatak, A. (2017). Introduction to machine learning. In Machine learning with R (pp. 57–78). Springer.
12. Gorelick, L., & Veksler, O. (2019). Simulating CRF with CNN for CNN. arXiv preprint arXiv:1905.02163.
13. Kızrak, A. (2018). Derine daha derine: Evrişimli Sinir Ağları [Deeper and deeper: Convolutional neural networks]. https://medium.com/@ayyucekizrak/deri%CC%87ne-daha-deri%CC%87ne-evri%C5%9Fimli-sinir-a%C4%9Flar%C4%B1-2813a2c8b2a9. Accessed 9 June 2019.
14. Nam, N. T., & Hung, P. D. (2019, January). Padding methods in convolutional sequence model: An application in Japanese handwriting recognition. In Proceedings of the 3rd international conference on machine learning and soft computing, pp. 138–142. ACM.
15. Thung, G. (2016). Trashnet. GitHub repository.
16. Xu, C., Yang, J., Lai, H., Gao, J., Shen, L., & Yan, S. (2019). UP-CNN: Un-pooling augmented convolutional neural network. Pattern Recognition Letters, 119, 34–40.
17. Yang, M., & Thung, G. (2016). Classification of trash for recyclability status. CS229 Project Report.
18. Zikmund, W., & Stanton, W. (1971). Recycling solid wastes: A channels-of-distribution problem. Journal of Marketing, 35(3), 34–39.
Chapter 9
Characterization and Benchmarking of Message-Oriented Middleware Aarush Ahuja, Vanita Jain, and Dharmender Saini
9.1 Introduction

Message-oriented middleware (MoM) allows independent applications on a distributed system to communicate with each other using messages. It is useful for processing streaming data for analytics, IoT sensor networks [1], microservice communication [2], telemetry, etc. The design of message-oriented middleware consists of named message queues transmitting messages over binary- and text-based protocols to transfer data efficiently. Message-oriented middleware offer various messaging semantics, like at-least-once and at-most-once delivery, making them reliable for critical systems. They are often required to be deployed as a distributed system: a cluster of highly available, reliable, and fault-tolerant nodes. Message-oriented middleware have proved very impactful in the financial [3] and telecommunication industries, where the requirement for fault-tolerant and reliable message buses has driven their evolution and the research around them; the technical infrastructure of stock markets [4] and high-frequency trading is powered by message buses. Research is concentrated on building message-oriented middleware with very low latencies, large-scale distribution, high throughput, and cloud-first deployment [5] to handle the massive volumes of the stock markets. Apart from the financial industry, message-oriented middleware are found in supply chains [6] as the enterprise message bus, in Internet of Things devices, and in overlay networks [7]. The publish/subscribe messaging pattern is one of the most crucial technologies to emerge from the development of messaging systems over time; it has proved of value to users like financial institutions [3], cloud providers [8], and the Internet of Things [1]. Our survey focuses on the evaluation of messaging systems, understanding
A. Ahuja · V. Jain (*) · D. Saini
Bharati Vidyapeeth's College of Engineering, New Delhi, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
F. Al-Turjman (ed.), Real-Time Intelligence for Heterogeneous Networks, https://doi.org/10.1007/978-3-030-75614-7_9
the evaluation methodologies for message-oriented middleware over time, their evolution, and the ideas popularized by them. Section 9.2 describes our survey methodology, how we collected our sources, and the questions we aim to answer with this paper. Section 9.3 presents an overview of the architecture of message-oriented middleware in terms of design, distribution, reliability, protocols, and compatibility. Section 9.4 describes three popular benchmarks for message-oriented middleware, SPECjms2007, jms2009-PS, and OpenMessaging, and how they compare with each other. Section 9.5 describes the metrics and indicators important for the evaluation of message-oriented middleware. Section 9.6 discusses our conclusion.
9.2 Survey Methodology

9.2.1 Survey Sources

To locate sources concerning the evaluation of message-oriented middleware, we used digital libraries to collect books and published articles. The following were instrumental in executing the research:

1. arXiv
2. IEEE Xplore
3. ACM Digital Library
4. Google Scholar Search Engine

Our research is focused on the evaluation of message-oriented middleware and also considers the evolution of the systems and their designs; our sources range from the years 2003 to 2017.
9.2.2 Survey Questions

Our survey aims to answer these questions:

1. What methodologies have been used for qualitative and quantitative evaluation of message-oriented middleware?
2. How is benchmarking performed for message-oriented middleware, and which quantitative metrics are impactful?
3. What is the difference in the methodology for evaluating and benchmarking cloud-based message-oriented middleware?
9.3 Architecture

We study the architecture of message-oriented middleware in terms of its design, distribution of the middleware, reliability of the middleware, protocols supported by the middleware, and client compatibility.
9.3.1 Design

Message-oriented middleware can be separated into daemon-based and broker-based designs. TIBCO Rendezvous [9] is a daemon-based message-oriented middleware: it uses a distributed design where a daemon (the RV daemon) resides on each host in the network and handles messaging for programs using the RV API. SonicMQ is a broker-based, JMS-compliant message-oriented middleware that runs on a centralized server acting as the "message broker": all applications wishing to communicate send their messages to the SonicMQ broker [9], which routes and delivers them to the receivers. The design approaches of daemon-based and broker-based middleware are represented in Fig. 9.1. Modern message-oriented middleware designs have shifted exclusively toward broker-based designs, as can be seen with Apache Kafka and RabbitMQ [10, 11], which along with being high-performance message-oriented middleware provide horizontal scalability [11]. In broker-based middleware, the message queue model [10] is a necessary abstraction used to facilitate communication between clients, where a client can access a named "queue" either as a producer or as a consumer of messages. Message brokers like RabbitMQ, Apache Kafka, and ActiveMQ work on the message queue model [12]. A brief history of message-oriented middleware can be found in Table 9.1.

Fig. 9.1 Daemon-based and broker-based message-oriented middleware [9]

Message-oriented middleware usually provide two types of messaging models: point-to-point or publish-subscribe [13] communication. The point-to-point model allows delivery of a message to a single specific receiver; named queues are used to implement point-to-point communication, where a receiver can create a unique queue [10, 12], an "inbox" that exclusively transfers messages to a particular client. The publish-subscribe model uses message queues to implement a one-to-many or many-to-many messaging model, where publishers publish messages to a named queue and subscribers receive these messages as they are published [9, 11]. Different variations of the publish-subscribe pattern are classified as topic-based, content-based, and type-based publish-subscribe [13]. Publish-subscribe communication in distributed systems is decoupled in space, time, and synchronization [13]; this has led to various features and implementation issues being studied, like message dissemination, message persistence, message prioritization, and reliability of the services. In the publish-subscribe model, alternate terminologies for "named queues" include "topics," "channels," "subjects," etc., with publishers called "producers" and subscribers called "consumers" [10, 12]. Many publish-subscribe message-oriented middleware like Apache Kafka [11] and RabbitMQ [10, 14] provide durability of message queues, where a durable queue is a queue whose messages are persisted to disk (or a non-volatile store) and can be read later, instead of being a fire-and-forget system, as is the case with NATS [15]. A minimal code sketch of the publish-subscribe model follows.
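As an illustration of topic-based publish/subscribe, the sketch below uses RabbitMQ's Python client (pika), assuming a broker on localhost; the exchange name and routing keys are hypothetical placeholders.

    # Minimal topic-based publish/subscribe sketch with RabbitMQ's Python
    # client (pika). Assumes a broker on localhost; exchange and routing
    # keys are hypothetical placeholders.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.exchange_declare(exchange="sensor-updates", exchange_type="topic")

    # Producer side: publish a message under a topic.
    ch.basic_publish(exchange="sensor-updates",
                     routing_key="sensors.temperature",
                     body=b"21.5")

    # Consumer side: bind an exclusive queue to the topic and receive.
    q = ch.queue_declare(queue="", exclusive=True).method.queue
    ch.queue_bind(exchange="sensor-updates", queue=q,
                  routing_key="sensors.#")
    ch.basic_consume(queue=q,
                     on_message_callback=lambda c, m, p, body: print(body),
                     auto_ack=True)
    ch.start_consuming()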
Table 9.1 History of message-oriented middleware and protocols

First released   Broker             Protocol
1993             IBM MQSeries       JMS, IBM MQI [24]
2000             TIBCO Rendezvous   Proprietary protocol [9]
2004             Apache ActiveMQ    JMS, AMQP [12]
2005             Apache Qpid        AMQP [3]
2007             RabbitMQ           AMQP, STOMP [10, 12]
2009             Mosquitto          MQTT [25]
2011             Apache Kafka       Custom binary protocol [10]
2011             NATS               Custom text protocol [15]
9.3.2 Distribution and Reliability

Earlier message brokers like TIBCO Rendezvous [9], which used background daemons deployed on each client, presented a decentralized distributed design that provided scalability and fault tolerance. TIBCO Rendezvous was designed to handle workloads inside an organizational LAN, and this architecture does not meet the requirements of the Internet, where clients are widely distributed and often behind NATs (network address translators). Scalability and resiliency [14] have been the driving factors for the distribution of message-oriented middleware. By scalability, the focus is on the improvement of performance in terms of messaging throughput, latency, and concurrency, as we will discuss in Section 9.4. Vertical scalability [5] is the process of adding more resources to the system, whereas with horizontal scalability [10, 14], resources in the deployed system are distributed across more machines working together to form a cohesive system. Vertical scaling in a cloud-based as well as an on-premises environment is relatively easy to perform; still, it is prohibitively expensive, and hardware resources like the amount of RAM, the capability of the network interface, and disk performance are of prime importance [5]. Horizontal scaling is the task of distributing the message-oriented middleware across more than one machine [14], combining their resources to perform as one cohesive distributed system; compared to vertical scaling, it is a technically challenging, non-trivial task. The task of clustering in message-oriented middleware is to create a process group [10] of middleware instances across which the load of the system can be shared. RabbitMQ [14] contains features to scale the middleware into high availability (HA) configurations, achieved by using a multi-node, multi-queue system, and fault-tolerant (FT) configurations, achieved by using mirrored queues across the cluster. Message-oriented middleware use the concept of partitioning to separate and balance load across a cluster, where a system can be partitioned [16] in such a way that particular machines only handle a specific set of named queues. Replication has also become an essential part of message brokers like Apache Kafka [10] and Apache Pulsar, where persistence is actively used in practical systems; following the CAP theorem, either HA or FT [14] can be implemented based on the requirements, considering trade-offs in performance and resiliency. Figure 9.2 is a general representation of HA and FT queue configurations in
Fig. 9.2 High availability and fault-tolerance configurations in a message-oriented middleware [14]
message-oriented middleware, inspired by mirrored queues and partitions in RabbitMQ [14]. More configurations combining the partitioning and mirroring approaches are possible, providing a middle ground between high availability and fault tolerance. For example, in the case of two queues and four nodes, two partitions could be created with each partition hosting one queue; this provides availability to the system, and each individual partition can be composed of two nodes mirroring the queues between themselves, providing fault tolerance [14]. However, in line with the CAP theorem, in the case of replicated queues, consistency will remain an issue. A small sketch of this two-queue, four-node layout follows. Table 9.2 highlights support for distribution in various message-oriented middleware.
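The sketch below makes the worked example concrete: two queues spread across four nodes, combining partitioning for availability with mirroring for fault tolerance; node and queue names are illustrative.

    # Sketch of the example above: two queues across four nodes, combining
    # partitioning (availability) with mirroring (fault tolerance).
    # Node and queue names are illustrative placeholders.
    nodes = ["node1", "node2", "node3", "node4"]
    queues = ["orders", "payments"]

    # Partition the cluster: each partition of two nodes hosts one queue,
    # and the two nodes in a partition mirror that queue between them.
    partitions = {
        queue: nodes[2 * i:2 * i + 2] for i, queue in enumerate(queues)
    }
    for queue, replicas in partitions.items():
        print(f"{queue}: primary={replicas[0]}, mirror={replicas[1]}")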
Table 9.2 Support for distribution in message-oriented middleware

Middleware        Clustering
Apache ActiveMQ   Leader-follower replication, no built-in clustering [18]
RabbitMQ          Replicated queues, safe quorum-based queues, cluster of nodes transparently forming a single logical broker [17]
Apache Kafka      Supports queue partitioning and log replication across partitions via state machine replication [10]
Apache Pulsar     Built-in partitioning, clustering, and replication using reliable components like ZooKeeper and BookKeeper [16]
NATS Streaming    Data replication implemented with Raft, limited to small cluster sizes only [15]

9.3.3 Protocols and Compatibility

The purpose of a message-oriented middleware is to connect many clients and applications together, and there are various standard and customized protocols used by message-oriented middleware [19]. JMS, or Java Messaging Service, has been the most recurring name throughout our survey, which can be attributed to the popularity of Java in the enterprise software industry, where message-oriented middleware has many use cases and is abundantly used. Java Messaging Service [19] is a Java-specific protocol for messaging systems, and various message-oriented middleware support JMS, including early brokers like IBM MQSeries [13, 20] and Apache ActiveMQ [12]. The JMS protocol defines the abstract interfaces to be used by interacting applications but leaves the implementation and the definition of brokers and the wire protocol to message-oriented middleware developers. AMQP, or Advanced Message Queuing Protocol [21], is another extensively used protocol for message-oriented middleware; it was developed by the financial industry to fulfil the requirement of a standard messaging system across functional entities in the financial sector [3]. AMQP provides the binary wire protocol [22, 23] for the transmission of messages and also gives hints for the design of the message broker, thus providing wide compatibility and system-agnostic behavior. Apache ActiveMQ and RabbitMQ [12] are two popular message brokers compatible with AMQP. Message-oriented middleware usually support more than one messaging protocol, as can be seen in Table 9.1; RabbitMQ supports both AMQP and STOMP, and ActiveMQ supports both JMS and AMQP [12]. OpenMessaging [8] is a joint effort by Amazon and Alibaba to build a vendor-neutral specification for distributed messaging interoperable across clouds, intended to supersede the JMS specification. It focuses on the requirements of messaging in the cloud, providing wide client compatibility, usability with various wire-level protocols like AMQP and MQTT [22], a standardized benchmark, and Java APIs for implementations. It aims to be instrumental in building hybrid clouds across vendors using OpenMessaging as the messaging system specification. Apache RocketMQ, Apache Pulsar, and Apache Kafka are a few message brokers that are part of the OpenMessaging API.
9.4 Benchmarking

The importance of message-oriented middleware and the challenge of building performant middleware for asynchronous, decoupled communication patterns are paramount. The widespread use and application of messaging technology also demand performance evaluation tools: benchmarks are useful tools for gauging the performance of message-oriented middleware for maximum scalability and flexibility [26]. We aim to survey and understand the performance modelling techniques and the design of benchmarks for messaging systems, along with the useful metrics generated by them. Previous work [11, 12] has often developed custom benchmarks or used open-source benchmarks to facilitate message-oriented middleware benchmarking; the design of the benchmark itself is not discussed enough, and not much insight can be gained from it. SPECjms2007 [6] and jms2009-PS [27] are two benchmarks for message-oriented middleware developed to provide a standardized and formally verified way of evaluating message-oriented middleware. They focus on evaluating a JMS-based message-oriented middleware via a suite of macro- and microbenchmarks for point-to-point and publish-subscribe messaging [6]. There have also been efforts to support more protocols in the SPECjms2007 benchmark, including work on an AMQP-based SPECjms2007 [21]. Microbenchmarks for message-oriented middleware are focused on evaluating the performance of individual subsystems or features separately from the larger system of the message-oriented middleware. This can include determining messaging throughput and latency [28] by varying a single factor, like the number of publishers or subscribers, and choosing point-to-point or publish-subscribe communication; the results of microbenchmarks are only representative of the subsystem under test. Macrobenchmarks for message-oriented middleware are focused on evaluating the performance of the system as a whole. In SPECjms2007, this is present as the BASE [28] parameter, which is varied to evaluate metrics like message throughput and latency across the middleware. The variation of BASE affects both vertical and
horizontal topologies, varying the number of publishers, the number of consumers, the message size, the desired messaging rate, and the ratio of messages sent via publish-subscribe or point-to-point modes, all through the single BASE parameter. SPECjms2007 is a credible and formally verified benchmark, but its publish-subscribe benchmarks were found to be limiting; thus, a new benchmark, jms2009-PS, was designed on top of SPECjms2007 to rigorously evaluate publish-subscribe messaging in message-oriented middleware. Earlier benchmarks [9] were run on local machines with limited resources; the shift toward cloud infrastructure [5] and deployments where Kubernetes orchestrates middleware running in containers requires benchmarks to be updated and designed for the abilities and scale of cloud infrastructure. In modern organizations expecting the ingestion and processing of millions of messages, it has been suggested that factors like portability, fairness, configurability, real-world representativeness, and scalability are essential for the cloud-centric [5] evaluation of message-oriented middleware. The OpenMessaging benchmark [8] is a joint effort by Alibaba and Amazon to build a standardized benchmark for cloud-centric message brokers; it focuses on evaluating large-scale server-to-server messaging applications, defines useful metrics, and provides an easy-to-extend, highly automated implementation of the benchmark. Table 9.3 presents a comparison of SPECjms2007, jms2009-PS, and OpenMessaging on various factors, including cost, portability, scalability, and workload.
Table 9.3 Comparison of benchmarks for message-oriented middleware

Parameter        SPECjms2007 [6]   jms2009-PS [30]   OpenMessaging [31]
Year             2007              2009              2017
Standardized     Yes               No                Yes
Cost             Commercial        Research-only     Free
Portability      No                No                Yes, automated
Cloud-based      No                No                Yes
Scalability      Yes               Yes               Yes
Workload         Real-world        Real-world        Synthetic
Configurability  High              Highest           Low
Compatibility    JMS               JMS               Many brokers
Implementation   Java              Java              Java
Focus            P2P               Pub-Sub           Cloud
Pub/Sub support  Minimal           Yes               Yes

9.4.1 SPECjms2007

SPECjms2007 [6] is an industry-standard benchmark for JMS-compliant message-oriented middleware, providing the workload and metric definitions for a message-oriented middleware under test. The benchmark evaluates both point-to-point and publish-subscribe patterns [21], creating workloads
that represent a real-world application while being able to stress selected features for performance, scalability, and reliability measurements. A workload [28] should be designed to make the benchmark configurable and specific for the various features available in a messaging system. SPECjms2007 implements a supermarket supply chain (described in Fig. 9.3), where RFID tags are used to track the flow of goods via various interactions including price updates, sales statistics, and order/shipment events [6]. The benchmark allows customization in terms of message sizes, message throughput, and configurable topologies of the interacting clients. The benchmark also considers horizontal scalability of the messaging system by scaling the interactions across P2P and Pub/Sub clients through a parameter called BASE [28]: the different entities in the benchmark, like producers, consumers, and topics, scale as a function of the BASE parameter. Various dimensions are defined for the payload, including message types, destinations, message sizes, message throughput, and message delivery modes. The benchmark can be customized by varying the different payload dimensions so as to stress only certain features of a middleware [29]; thus microbenchmarks are also enabled by SPECjms2007, and this is formally proved by the authors.
9.4.2 jms2009-PS

The jms2009-PS [30] benchmark was built by the authors of SPECjms2007 to evaluate publish/subscribe systems effectively, using SPECjms2007 as a baseline for a publish-subscribe benchmark.
Fig. 9.3 SPECjms2007 supply chain design [6]
The benchmark models a supply chain stream where goods are tracked and messages are sent throughout the organization. jms2009-PS is built on top of SPECjms2007 and extends [26] it for publish/subscribe communication, providing 80 new configuration parameters [27] over SPECjms2007 and allowing configuration of the number of topics, subscriptions, message delivery modes, etc. The jms2009-PS benchmark focuses on six critical factors: the number of topics and queues, the number of transactional and non-transactional messages, the number of persistent and non-persistent messages, the total traffic per topic and queue, the number of subscribers per topic, and filter statements [30]. It has been tested on the ActiveMQ message broker, used as a JMS server connecting clients distributed across four different machines. Results [30] of the benchmark led to the conclusion that the most important factor affecting system performance is the number of topics or queues used for communication, suggesting that keeping this number low provides maximum flexibility.
9.4.3 OpenMessaging Benchmark

The OpenMessaging benchmark [8] is a cloud-centric benchmark designed by Alibaba and Amazon. It aims to be an easy-to-use, transparent, and realistic benchmark: it provides transparency by being completely open source, and it uses realistic use cases for benchmarking instead of focusing on unrealistic edge cases. Currently, the open-source implementation is in Java and supports benchmarking of Apache Kafka, Apache RocketMQ, RabbitMQ, Apache Pulsar, and NATS Streaming [8]. Similar to SPECjms2007, OpenMessaging also provides various dimensions for the workloads; parameters like partitions per topic and consumer backlogs are new additions, and the recommended defaults for the parameters are relatively large compared to parameters in previous works (like SPECjms2007 [6]). The open-source implementation is highly extensible and automated, providing Terraform configurations and Ansible playbooks to deploy infrastructure and automate the benchmark on AliCloud and AWS.
9.5 Metrics and Indicators

Message-oriented middleware performance measured by running benchmarks generates various metrics. It is vital to specify a formal definition of these metrics as well as understand their impact. Along with quantitative metrics, we also focus on two indicators, cost and portability, suggested as indicators of a successful benchmark for message-oriented middleware [5].
9.5.1 Hardware Resources

The CPU, RAM, disk, and network are essential parts of the hardware to be considered carefully for message-oriented middleware. The overall CPU utilization [26] of the system by the message-oriented middleware program and the CPU time per message handled are two crucial factors related to the CPU. Usually, queues and messages are stored in memory; thus vertical scalability of queues and messages is very dependent on the available RAM. In the case of log-based systems like Apache Kafka, where durable queues can be formed, it is necessary to use high-performance disks like NVMe SSDs to provide efficient performance [5]. The network hardware should provide sufficient bandwidth to the system, such that the potential of the CPU is fully utilized; minimal time should be spent blocked on network I/O [9].
9.5.2 Throughput

Throughput can be defined in terms of a publisher and subscriber pair: the number of messages published by a client in a time interval, and the number of messages received by a subscriber in the same time interval. Message-oriented middleware like Apache Kafka [10] can achieve publisher and subscriber throughput of more than 100,000 messages per second [11]. Throughput should be tested along various dimensions: by varying the number of publishers, by varying the number of subscribers for a given message, and by varying cluster configurations. Publisher throughput can be directly increased using parallel publishers, and subscriber throughput can be increased by employing more subscribers on a single topic in a configuration where the message-oriented middleware balances the load between the subscribers instead of transmitting the same message to each subscriber. Highly available [14] configurations have been found to have higher throughput than fault-tolerant configurations; this stems from the fact that a fault-tolerant configuration has to provide consistency and replication of information, leading to a performance trade-off. Figure 9.4 provides a general overview of how publisher and subscriber throughput is determined.

Fig. 9.4 Publisher and subscriber throughput measurement in a message-oriented middleware
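The sketch below captures this throughput definition in a broker-agnostic way; the publish and receive calls are hypothetical stand-ins for whichever client library is under test.

    # Broker-agnostic sketch of the throughput definition above: count
    # how many publish (or receive) operations complete in a fixed
    # window. The callable passed in is a hypothetical client operation.
    import time

    def measure_throughput(do_one_message, window_seconds: float = 10.0) -> float:
        """Return messages per second over the given window."""
        count = 0
        deadline = time.perf_counter() + window_seconds
        while time.perf_counter() < deadline:
            do_one_message()
            count += 1
        return count / window_seconds

    # Hypothetical usage with some client object, measured on each side:
    # pub_rate = measure_throughput(lambda: client.publish("topic", b"x"))
    # sub_rate = measure_throughput(lambda: client.receive("topic"))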
9.5.3 Latency

Latency is the time interval between the moment a message is published by a publisher and the moment it is delivered to and acknowledged by a subscriber. Similar to throughput, latency can be severely affected by the number of publishers, subscribers, and queues in the system [10]; metrics should be generated by varying the number of publishers and subscribers. Along with immediate latency measures like end-to-end
publish-subscribe latency [9], tail latency measurements are also impactful metrics for message-oriented middleware: 99th percentile latency, 99th percentile wait times, and 99th percentile dropped-request counts [5], especially in the case of cloud deployments with a large number of clients. It has been seen that an increase in
the number of publishers publishing messages at the same time can lead to severe latency degradation, as has been studied in the case of RabbitMQ [11]; this can be mitigated by clustering message-oriented middleware and creating high availability [14] configurations to handle multiple producers and maintain low latency. Modern message-oriented middleware like RabbitMQ, Kafka, and Apache Pulsar [16], which provide queue replication support for high availability configurations, also suffer from the disadvantages of replication in terms of latency, especially when reading from the queue [10]. Table 9.5 highlights how replication affects latency in the case of Kafka and RabbitMQ [10]. Figure 9.5 provides a general overview of how messaging latency is measured in a message-oriented middleware.
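A minimal sketch of this measurement, assuming publisher and subscriber clocks are synchronized: the publisher embeds a send timestamp in each message, and the subscriber computes per-message latency plus the 99th percentile; the consumer loop is a hypothetical stand-in for a real client library.

    # Sketch of end-to-end latency measurement: the publisher stamps each
    # message with its send time, and the subscriber computes the delay.
    # Assumes synchronized clocks; the consumer loop is hypothetical.
    import json
    import statistics
    import time

    def make_message(payload: str) -> bytes:
        return json.dumps({"sent_at": time.time(), "data": payload}).encode()

    def latency_of(message: bytes) -> float:
        return time.time() - json.loads(message)["sent_at"]

    samples = []
    # for raw in client.consume("topic"):   # hypothetical consumer loop
    #     samples.append(latency_of(raw))
    samples = [latency_of(make_message("x")) for _ in range(1000)]  # self-test

    samples.sort()
    p99 = samples[int(0.99 * len(samples)) - 1]  # 99th percentile sample
    print(f"mean = {statistics.mean(samples) * 1000:.3f} ms, "
          f"p99 = {p99 * 1000:.3f} ms")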
Fig. 9.5 End-to-end publish-subscribe latency in a message-oriented middleware

Table 9.5 Kafka and RabbitMQ in-memory read latency with and without replication [10]

Middleware     Without replication   With replication
RabbitMQ       2 ms                  17 ms
Apache Kafka   15 ms                 30 ms

9.5.4 Workload

The workload is the complete configuration of elements in a benchmark that produce, consume, or process messages. The workload dimensions are composed of all the elements that are part of the benchmark: they can include customization in terms of message sizes, message throughput, and configurable topologies [28] of the interacting clients. The workload can be defined to be realistic [5], as is the case with the SPECjms2007 and jms2009-PS benchmarks, where a supermarket supply chain is modelled via the messaging system for tracking assets throughout the organization. SPECjms2007 also allows efficient horizontal scalability in the benchmark by defining BASE: the interactions across P2P and Pub/Sub clients can be dynamically scaled as a function of the BASE parameter [28]. While the workload in SPECjms2007 is designed around the supermarket supply chain system, the OpenMessaging benchmark takes a more synthetic approach to workload while giving more variability, which is useful for testing cloud-based brokers; parameters that can be varied in the OpenMessaging benchmark include partitions, message size, producers, consumers, desired producer message rate, desired consumer message rate, and the queue backlog size. In SPECjms2007, parameters like producer rate and consumer rate were in the order of thousands of messages [6]; the OpenMessaging benchmark has producer and consumer message rates in the order of hundreds of thousands of messages per second [31].

Table 9.4 Metrics and indicators for benchmarking message-oriented middleware

Metric              Characteristics
Hardware resources  CPU, RAM, and disk performance [26]
Throughput          Publisher and subscriber message rate [10]
Latency             Pub/Sub latency, 99%ile measurements [5]
Workload            Highly configurable and realistic [28, 31]
Cost                Low cost of hardware resources [8]
Portability         Easy-to-deploy, platform agnostic [5, 8]
9.5.5 Cost and Portability

The cost of infrastructure should also be considered while building a benchmark; this factor was not considered in earlier studies, as equipment was often physical and had a one-time cost. In the case of cloud deployments [5], the cost is defined by usage and the hardware resources consumed; thus, benchmarks should factor in the cost of infrastructure and design workloads and benchmark durations accordingly. In the case of the OpenMessaging benchmark, the given configuration uses high-performance compute instances from AliCloud and AWS [8], which could turn out to be extremely expensive if deployed for an extended period. Portability for a benchmark is related to its ease of deployment, and automation is vital: a benchmark should automate as many of its parts as possible and be able to run irrespective of platform or system as long as its dependencies are fulfilled. The OpenMessaging benchmark [8] is highly portable, as it uses Java, which can run on a variety of operating systems using the JVM, and contains Terraform templates and Ansible playbooks for infrastructure deployment on AliCloud and AWS. Figure 9.6 shows a snippet of the Terraform script for automating the deployment of Apache Pulsar and OpenMessaging to AWS; using tools like Terraform also enables resources to be easily ported to other cloud platforms and makes resource management straightforward and automatic.
9.6 Conclusion

We have answered our survey questions as listed in Section 9.2. Our survey presents a holistic view of message-oriented middleware and how it has been evaluated in the past. The architecture of message-oriented middleware has evolved from daemon-based designs to message brokers, alongside the emergence of the publish-subscribe messaging model. These message brokers focus on providing high performance, low latency, and reliability, as is the case with the battle-tested brokers Apache Kafka and RabbitMQ. The design of benchmarks has led to the identification and generation of useful metrics to gauge the performance of middleware, and the workload put on a message broker reveals its performance via the measurement of throughput and latency at publishers and subscribers.
Fig. 9.6 Cluster from OpenMessaging benchmark on AWS [31]
We also considered the deployment of message-oriented middleware to the cloud, identifying differences from traditional approaches such as SPECjms2007 through comparison with the OpenMessaging benchmark, which highlights how persistence has become an essential part of message queues. Along with quantitative metrics, we examined the cost and portability indicators of a benchmark and why they matter, especially in the cloud. Continuous evaluation of message-oriented middleware on qualitative and quantitative grounds is a crucial and challenging task, especially as new paradigms like cloud-based services see active use; we suggest that future research evaluate feature-packed, cloud-centric message brokers such as Apache Pulsar and NATS.
References

1. Souto, E., Guimaraes, G., Vasconcelos, G., Vieira, M., Rosa, N., & Ferraz, C. (2004). A message-oriented middleware for sensor networks. In Proceedings of the 2nd workshop on middleware for pervasive and ad-hoc computing, pp. 127–134.
2. Bao, K., Mauser, I., Kochanneck, S., Xu, H., & Schmeck, H. (2016). A microservice architecture for the intranet of things and energy in smart buildings. In Proceedings of the 1st international workshop on mashups of things and APIs, pp. 1–6.
3. Subramoni, H., Marsh, G., Narravula, S., Lai, P., & Panda, D. K. (2008). Design and evaluation of benchmarks for financial applications using Advanced Message Queuing Protocol (AMQP) over InfiniBand. In 2008 workshop on high performance computational finance, pp. 1–8. IEEE.
4. Oliveira, J. P., & Pereira, J. (2013). Experience with a middleware infrastructure for service-oriented financial applications. In Proceedings of the 28th annual ACM symposium on applied computing, pp. 479–484.
5. Folkerts, E., Alexandrov, A., Sachs, K., Iosup, A., Markl, V., & Tosun, C. (2012). Benchmarking in the cloud: What it should, can, and cannot be. In Technology conference on performance evaluation and benchmarking, pp. 173–188. Springer.
6. Sachs, K., Kounev, S., Bacon, J., & Buchmann, A. (2009). Performance evaluation of message-oriented middleware using the SPECjms2007 benchmark. Performance Evaluation, 66(8), 410–434.
7. Hakiri, A., & Gokhale, A. (2016). Data-centric publish/subscribe routing middleware for realizing proactive overlay software-defined networking. In Proceedings of the 10th ACM international conference on distributed and event-based systems, pp. 246–257.
8. OpenMessaging. (n.d.). Retrieved from http://openmessaging.cloud/
9. Maheshwari, P., & Pang, M. (2005). Benchmarking message-oriented middleware: TIB/RV versus SonicMQ. Concurrency and Computation: Practice and Experience, 17(12), 1507–1526.
10. Dobbelaere, P., & Esmaili, K. S. (2017). Kafka versus RabbitMQ: A comparative study of two industry reference publish/subscribe implementations: Industry paper. In Proceedings of the 11th ACM international conference on distributed and event-based systems, pp. 227–238.
11. John, V., & Liu, X. (2017). A survey of distributed message broker queues. arXiv preprint arXiv:1704.00411.
12. Ionescu, V. M. (2015). The analysis of the performance of RabbitMQ and ActiveMQ. In 2015 14th RoEduNet international conference networking in education and research (RoEduNet NER), pp. 132–137. IEEE.
13. Eugster, P. T., Felber, P. A., Guerraoui, R., & Kermarrec, A. M. (2003). The many faces of publish/subscribe. ACM Computing Surveys (CSUR), 35(2), 114–131.
14. Rostanski, M., Grochla, K., & Seman, A. (2014). Evaluation of highly available and fault-tolerant middleware clustered architectures using RabbitMQ. In 2014 federated conference on computer science and information systems, pp. 879–884. IEEE.
15. Docs.nats.io. (2020). Introduction [online]. Available at: https://docs.nats.io/
16. Pulsar.apache.org. (2020). Pulsar overview · Apache Pulsar [online]. Available at: https://pulsar.apache.org/docs/en/concepts-overview/
17. Documentation: Table of contents – RabbitMQ [online]. Available at: https://www.rabbitmq.com/documentation.html
18. ActiveMQ [online]. Available at: https://activemq.apache.org/clustering.html
19. Ahuja, S. P., & Mupparaju, N. (2014). Performance evaluation and comparison of distributed messaging using message oriented middleware. Computer and Information Science, 7(4), 9.
20. Tran, P., Greenfield, P., & Gorton, I. (2002). Behavior and performance of message-oriented middleware systems. In Proceedings 22nd international conference on distributed computing systems workshops, pp. 645–650. IEEE.
21. Appel, S., Sachs, K., & Buchmann, A. (2010). Towards benchmarking of AMQP. In Proceedings of the fourth ACM international conference on distributed event-based systems, pp. 99–100.
22. Luzuriaga, J. E., Perez, M., Boronat, P., Cano, J. C., Calafate, C., & Manzoni, P. (2015). A comparative evaluation of AMQP and MQTT protocols over unstable and mobile networks. In 12th annual IEEE consumer communications and networking conference (CCNC), pp. 931–936. IEEE.
23. van Steen, M., & Tanenbaum, A. S. (2017). Distributed systems (3rd ed.). distributed-systems.net
24. Perry, M., Delporte, C., Demi, F., Ghosh, A., & Luong, M. (2001). MQSeries publish/subscribe applications. IBM Redbook.
25. Mosquitto.org. (2020). ChangeLog [online]. Available at: https://mosquitto.org/ChangeLog.txt
26. Sachs, K., Kounev, S., Appel, S., & Buchmann, A. (2009). Benchmarking of message-oriented middleware. In Proceedings of the third ACM international conference on distributed event-based systems, pp. 1–2.
27. Sachs, K., Kounev, S., Appel, S., & Buchmann, A. (2009). A performance test harness for publish/subscribe middleware. In SIGMETRICS/Performance.
28. Sachs, K., Kounev, S., Bacon, J., & Buchmann, A. (2007). Workload characterization of the SPECjms2007 benchmark. In European performance engineering workshop, pp. 228–244. Springer.
29. Sachs, K., Kounev, S., & Buchmann, A. (2013). Performance modeling and analysis of message-oriented event-driven systems. Software & Systems Modeling, 12(4), 705–729.
30. Sachs, K., Appel, S., Kounev, S., & Buchmann, A. (2010). Benchmarking publish/subscribe-based messaging systems. In International conference on database systems for advanced applications, pp. 203–214. Springer.
31. The OpenMessaging Benchmark. (n.d.). Retrieved from http://openmessaging.cloud/docs/benchmarks/
Chapter 10
Intelligent Mobile Application for Crime Reporting in the Heterogenous IoT Era Father Phuthego and Fadi Al-Turjman
10.1 Introduction

The use of smartphones beyond making calls and texting has grown drastically over the years, and the functionalities included on smartphones are aimed at helping users with their daily activities [1]. Another thing that seems to be escalating is crime: recent crime statistics from the Botswana Police Service indicate continued high rates of robbery and petty crimes taking place within populated regions. The reports periodically produced by the Police Service are usually accompanied by a plea to aid in the apprehension of the suspects displayed in the images. This raises the question: how significant is public input in crime prevention? Clearly, the public is an equally valuable resource. This application aims to bridge the gap left by the current way of making reports, which allows suspects to flee and evidence to be lost. The application will make it possible for victims to report an occurring crime to officials, and the report will include an accurate current location and media captured by the victim at the scene. The main goal of the application is for users to send reports with details that ensure easy and fast access by the police and possibly provide evidence for prosecution. The availability of the Google Maps API [2] will assist in building this location-aware application by capturing the location of a citizen and sending it to be used by the application.
F. Phuthego (*) Research Center for AI and IoT, Near East University, Mersin, Turkey F. Al-Turjman Artificial Intelligence Engineering Department, Research Center for AI and IoT, Near East University, Mersin 10, Turkey e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 F. Al-Turjman (ed.), Real-Time Intelligence for Heterogeneous Networks, https://doi.org/10.1007/978-3-030-75614-7_10
10.1.1 Heterogenous Networks and Their Importance to the Proposed Solution

As stated earlier, this application will rely on sensory data from mobiles regardless of brand, which calls for joining multiple devices in a cloud. The authors of [3] have indicated that the key idea enabling high-quality services from these sensors is the cross-correlation of sensed data received from different sensors, analyzed using sophisticated algorithms. This helps address issues of speed in service provisioning and the use of data in the absence of Wi-Fi. In similar research, [4] noted that a heterogenous network is a promising way to achieve quality of service and satisfy traffic demand when using mobile devices, because a heterogenous network has multiple types of low-power radio access nodes; subsequently, it addresses issues of battery power and communication overheads.

Preventing and solving crime has its limitations, the underlying cause being that police are trusted as the key personnel in assuring safety and preventing crime, yet they cannot have eyes and ears everywhere, which makes their job hard. Currently, victims of crime report crime either through a telephone call or a walk-in at a police station; both methods have proven to provide little or no chance of success in solving the reported crime. In most instances, victims cannot give a helpful description of their location, resulting in late responses by officers, and the narration of the crime usually does not give a clear enough picture to narrow down suspects. Hence, the main motivations behind this work can be summarized as follows:

• The trending use of mobile phones beyond making calls and text messaging.
• The increasing crime rates in Botswana.
• Police Service reports indicating that robbery and petty crimes mostly take place within populated regions.
• Vague descriptions of location during crime reports.
• Lack of evidence for reported crimes.

This development is a phone application, specifically for mobile phones running the Android OS. The application will capture the location of the user through integration with the Google Maps API, and the user will send a crime alert message or upload additional media captured directly with the mobile's inbuilt components, i.e., images and video documenting the crime being committed. The media, along with the location, will be sent to the respective emergency assistance; e.g., a message reporting theft will be sent to the police. Sharing the victim's location ensures that officials do not take long to arrive at the scene, and images or videos shared in the report also serve as evidence that speeds up the investigation process. The main objective is to build a mobile-based application in which end users (in our case, victims) can instantly report crime to security officials using their smartphones. The application addresses four major objectives, being to improve the process of reporting crime as done by victims, to reduce the time taken by officers
to arrive at the crime scene by providing an accurate crime location, to provide clear and accurate evidence of crimes committed by capturing and adding media to reports, and to sensitize users with crime prevention tips. The development of this application requires software and hardware resources. The main environment is Android Studio, which is used for coding the program; Microsoft Visio is used for design and application modeling, Pencil for prototyping, and Microsoft Office for documenting the project. A laptop or desktop with an Intel processor is needed, as some Android Studio components require an Intel-series processor, and lastly a physical Android device (phone) is needed for testing the application.
10.2 Literature Review

Nowadays, mobile devices are commonly equipped with GPS receivers, which can find a location easily using the many satellites orbiting the earth [5]. This has driven the explosive growth of mobile app development in recent years, with apps exploiting these device capabilities. According to [6], one very popular category of apps is location-based services, commonly known as LBS. LBS apps track your location and may offer additional services, such as locating nearby amenities or suggesting route plans (car navigation systems). Of course, one of the key ingredients in an LBS app is maps, which present a visual representation of your location using GPS. Possible uses that follow from this ability include displaying Google Maps in applications with zoom controls; switching between different map views; adding markers to maps; obtaining geographical data using GPS, cell ID, and Wi-Fi triangulation; and monitoring a location.

Our project seeks to use these functionalities to aid crime prevention, and similar studies and developments already exist. Crime Watch, Crime Spotter, and Risk Ahead are among the applications available in the Play Store which we observed; they have the same mandate but differ in functionality. Crime Watch allows the residents of Orlando, FL, to see the crime activity in the city or neighborhood in real time by displaying police activity on an interactive map along with a short description, address, and the date and time each incident was reported. The advantage of this application is simply to make residents aware of occurring crimes; it does not specifically address the problem of crime prevention and is similar to getting crime news in the evening news bulletin. Crime Spotter is a free crime reporting and local alert system which plots all reported crimes on a map; each user's map view displays crimes within their radius. The main disadvantage of Crime Spotter is that reports are not made to security or law enforcement officers (as indicated by the user review in Fig. 10.1); they are just plotted onto the map, making it a good neighborhood watch application because fellow residents will be the ones to act or offer assistance once they read an alert. Lastly, the Risk Ahead application sends warnings to users when crime happens within their vicinity; as crimes are reported by users, they manually type/fill in the location of the crime.
Fig. 10.1 Play Store user review for Crime Spotter
Fig. 10.2 User review of Risk Ahead from Play Store
This manual location capture leaves room for human error: a wrong location may be typed, and users who live near the actual crime occurring in the neighborhood may not receive alerts, as indicated in Fig. 10.2, a user review from Play Store [7]. With no similar developments in Botswana and the abovementioned shortfalls, our development aims to make improvements. Instead of residents just being aware of the crimes committed in their location, they themselves can make reports which are sent directly to security or law enforcement officers. The application ensures that an accurate location is auto-reported by using the user's mobile phone GPS to record their current location. This curbs the human error of recording the wrong location, and the situation where users are not familiar with the location they are currently at. It also addresses information security: users will be more trusting knowing their data resides in a database managed by police officials instead of unknown and unverified hosts, a concern articulated in [8]. In Table 10.1, we put forth a comparison of the proposed development with the above-identified crime applications.
10.3 The Proposed Approach

The method proposed to solve the problem is the prototyping approach, whose key idea is not to freeze requirements before design but to refine them as coding proceeds. We opted for this approach because the design was complicated and relatively large; phased prototypes ensured we understood the requirements of the desired application.
Table 10.1 Comparison to other crime applications

Application    Make report  Accurate location  Reports to officials  Share media
Crime Report   Yes          Yes                Yes                   Yes
Crime Spotter  Yes          No                 No                    No
Risk Ahead     Yes          No                 No                    No
Crime Watch    No           Yes                No                    No
Fig. 10.3 Diagram description of the prototype model
The software environment used to deploy this approach is Android Studio with the Java language. Objects in object-oriented (OO) software are reusable components that may often be used across different applications; in OO environments, object libraries (e.g., a chart engine) may be available for the programmer to build into solutions, and the platform has a huge support community. Figure 10.3 illustrates the proposed approach of development as discussed above.
10.3.1 Functional/Processing Requirement Analysis

The application "Crime Report" should meet the following functional requirements. A user will be able to create an account if they don't already have one and to log in using the created credentials. Once logged in, they should be able to make crime reports by filling in or capturing all necessary information, and they should be able to navigate to and view crime tips. Following the user's interaction, the application should be able to capture media once granted permission to the device hardware, to capture the current location of the user once the user makes the report, and lastly to load the report data to the linked cloud database.
There are also non-functional requirements: the application should always be available; it should be user-friendly, reliable, and accurate; and it should be scalable and designed for easy maintenance and upgradability. It shall be available for a wide range of Android versions.
10.3.2 System Models

Use Cases

The application has two main actors. The first actor is the victim/user, representing a civilian who downloaded the application for use; this public user is the only human actor who populates the database with reports. The second actor is the database, which receives the reports loaded/made by the user.

Use Case Diagram

The general use case of the application is illustrated in Fig. 10.4; the two actors, victim and database, are represented by stick figures. The diagram shows the sequence of events in operating the application. The victim actor interacts with the application to either view crime tips or make a report. If the latter option is selected, a service extension from the Google Maps API provides the user location, which forms part of the report details input by the user before being sent to the database actor.
Fig. 10.4 Use case diagram for the application
Detailed Specification

In this section, we look at each use case, i.e., the written description of performing a particular task in the application. The description of each task is represented by a sequence of steps, starting with the user's goal and the application's intended response and ending with the fulfillment of the goal.

Crime Reporting

The sequence description of reporting a crime is listed in Table 10.2, which shows the steps involved in reporting a crime from start to finish.

Reading Crime Tips

The sequence description of reading crime tips is listed in Table 10.3, which shows the steps involved in viewing crime tips from start to finish.
Table 10.2 Use case description for use case report crime

Name: Report crime
Participating actors: User, database
Entry conditions: This use case starts when a user wishes to report an occurring crime
Flow of events:
1. Log in
2. The retrieval of the currently logged-in user's details
3. The user either takes an image using the mobile camera or loads one from the gallery
4. The user types details of the crime, i.e., type and description
5. The user uploads the report
6. The application captures the current location of the user
7. The application submits the report as user details, crime details, and location to the database
Exit conditions: The use case exits when the user selects close
Exceptions: No location found; no media
Special requirements: Internet connection to use Google Maps services and database connection

10.3.3 System Architecture

Figure 10.5 illustrates the system architecture; it shows that the user accesses the application to make a report, which extends through integration with the Google API services to provide the current user location as coordinates. The application is linked to the cloud database so that it can write data to it.
Table 10.3 Use case description for use case crime tips

Name: Read crime tips
Participating actors: Public user
Entry conditions: This use case starts when a public user wishes to read crime tips, which provide information on how to act when one is a victim of crime or what to do to prevent becoming a victim
Flow of events:
1. User selects the View Tips button at the home screen
2. The application displays a drop-down list view
3. Once a drop-down is selected, the specific tips are displayed in point form, with page breaks to categorize the types of crimes
4. User may scroll up and down to read the displayed tips
Exit conditions: The use case exits when the user selects close or back after browsing the page
Exceptions: None
Special requirements: None
Fig. 10.5 Application architecture
The Firestore database, through its cloud functions, allows for user creation, user authentication, and storage of application data.
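As a sketch of the write path this architecture implies, the snippet below uses the Firebase Android SDK to store one report document in the "Crime Reports" collection shown later in the console screenshots. The field names are assumptions reconstructed from the data structures in Sect. 10.4.2, not the authors' exact source code, and the class assumes Firebase has already been initialized in the app.

```java
import com.google.firebase.firestore.FirebaseFirestore;
import com.google.firebase.firestore.GeoPoint;
import java.util.HashMap;
import java.util.Map;

/** Writes crime reports to the cloud database described in Sect. 10.3.3. */
public class ReportUploader {
    private final FirebaseFirestore db = FirebaseFirestore.getInstance();

    public void submitReport(String crimeType, String description,
                             String fullName, String cellNumber, String email,
                             double latitude, double longitude) {
        Map<String, Object> report = new HashMap<>();
        report.put("crimeType", crimeType);            // illustrative field names
        report.put("crimeDescription", description);
        report.put("fullName", fullName);
        report.put("cellNumber", cellNumber);
        report.put("email", email);
        report.put("location", new GeoPoint(latitude, longitude)); // stored as a geopoint

        // add() lets Firestore autogenerate the document ID, i.e., the unique caseID.
        db.collection("Crime Reports").add(report)
          .addOnSuccessListener(ref -> android.util.Log.d("Report", "Uploaded " + ref.getId()))
          .addOnFailureListener(e -> android.util.Log.e("Report", "Upload failed", e));
    }
}
```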
10.3.4 Input Requirement Analysis

The vital input required for efficient use of the application is the user's current location, which we obtain using the LocationManager class provided by the Android SDK. The first input passed to the system when loading the application is the user details; if the user is not registered, they will be required to register. The second input may be the crime type and description. We say "may" because a report can be made by simply clicking the upload button, in which case the currently logged-in user's details and their location are submitted as a report without crime type and description. This is for cases where a user is unable
to manually type the descriptions required by the application because the crime scenario may not afford them that. The third input may be media to be attached to the report; as with the descriptions, the report can still be made without media. For the other functionality, viewing tips, not much input is needed from the user: the tips simply display text (paragraphs) using list views, and the displayed content is embedded within the application code as arrays.
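A minimal sketch of obtaining this location input with the LocationManager class mentioned above follows; it assumes the ACCESS_FINE_LOCATION permission has already been requested and granted at runtime, and the class and method names are illustrative rather than taken from the application's source.

```java
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.location.Location;
import android.location.LocationManager;
import androidx.core.content.ContextCompat;

public class LocationInput {
    /** Returns the last known GPS fix as "lat,lng", or null if unavailable. */
    public static String currentCoordinates(Context context) {
        if (ContextCompat.checkSelfPermission(context,
                Manifest.permission.ACCESS_FINE_LOCATION)
                != PackageManager.PERMISSION_GRANTED) {
            return null; // the caller must request the runtime permission first
        }
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        Location last = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER);
        return (last == null) ? null
                : last.getLatitude() + "," + last.getLongitude();
    }
}
```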
10.3.5 Output Requirement Analysis

The output will be crime report data residing in the Firestore cloud database for the officials to view. Crime tips can also be considered output of the application; they give users instructions or suggestions, in text form (paragraphs), on how to react when they find themselves in the midst of a crime. The tips displayed include, inter alia, protecting yourself, reporting a crime, and protecting property.
10.3.6 Interface Requirement Analysis

The project concentrates heavily on the client side, which is the Android mobile application. The environment supports all the roles played by the user, namely reporting crime and viewing tips. The proposed interfaces are the login/registration screens, which the user first uses to either create an account or authenticate to access the application. The second is the home screen, which gives the user the options of either reporting a crime (report crime) or reading crime tips (crime tips). If a user selects report crime, they are directed to the reporting page, where they have the option of opening the camera to capture images or loading an existing one from the gallery; there are also two text fields, crime type and crime description, for the user to type in those details manually. This page also has an upload button which makes the actual report by prompting the loading of the report details to the cloud. When crime tips is selected at the home screen instead of report crime, a text view page is loaded, with paragraphs listing tips for different crime scenarios.
10.3.7 Dynamic Model Specification of the System

Sequence Diagram for Reporting Crime

Figure 10.6 represents a sequence diagram of reporting a crime, depicting the interaction that takes place between objects in the process. Also known as an event scenario, the diagram shows that first the user
wishes to make a report, and the event that occurs is the input of crime details once the user has successfully logged in to the application. Once the user triggers the upload of the typed details, the Google API objects provide the user's current location as coordinates, and both sets of details are sent together as a report to the cloud database.
10.4 Design Aspects

This section serves as the design specification of Crime Report and provides a detailed description of the system to be developed. The description aims to be thorough enough to provide all the specifications required for implementation, which it achieves by describing how the system will fulfill the information requirements determined during system analysis.
Fig. 10.6 Sequence diagram of crime report
10.4.1 Functional Design Specification

Hardware/Software Mapping/Deployment Diagram

A description of the deployment view of the architecture describes the various physical nodes for the most typical platform configurations, as well as the allocation of tasks (from the process view) to those physical nodes. This section is organized by physical network configuration; each configuration is illustrated by a deployment diagram, followed by a mapping of processes to each processor. Figure 10.7 is a brief illustration of how the application interacts with already existing platforms, namely Firestore and the Google API; each platform is annotated with the service it renders to the application. It further depicts the need for a graphical interface through which the user interacts with the application.
10.4.2 Database and Data Structure Design Specification

The Crime Report application, as indicated in the input requirement specification, critically depends on a cloud database to perform some of its major operations, such as reporting crime. Users' details are stored in the database for authentication, and the reports made by victims are also stored there. Data is stored in the Firestore online database. The database subsystem contains all necessary queries accessible by the rest of the subsystems; this commercial database already has inbuilt functionality accessible through libraries in the SDK, as indicated. User record: Table 10.4 lists the data structure of a user's details as recorded in the database. All fields are of text data type. The userID is listed as the primary key
Fig. 10.7 Deployment diagram for crime reporting
Fig. 10.8 Application launch screen
Table 10.4 Data structure of the user's record

Field name            Type  Length  Index
userID                Text  10      Primary key
Fullname              Text  10
Email                 Text  100
phone                 Text  50
Password (encrypted)  Text  50
because it will be used as the unique identifier of a user; it is for this reason that the database autogenerates this data so that no duplicates exist. Report record: Table 10.5 shows the data structure of a crime report as stored in the database. Six fields (caseID, crime type, crime description, the victim's full name, the victim's cellphone number, and the victim's email) are of text data type. Location is a float data type because it holds user location coordinates. Note
that because the caseID is the unique identifier of each reported incident, this data is autogenerated by the database each time a new record is made.

Access Control and Security

The application's participants and platforms need to follow an access control matrix: security measures put in place to regulate components based on their right to view, use, and access an environment. Table 10.6 lists the participants of the Crime Report application and their respective rights based on the roles they play; for example, the user can create both crime type and description through typing but has no access to editing or inputting their location, as it is automatically written by the Google Maps API as a geopoint received from the GPS.
Fig. 10.9 User registration screen
Table 10.5 Data structure of a crime report record

Field name         Type   Length          Index
caseID             Text   10              Primary key
Crime type         Text   10
Crime description  Text   100
Full name          Text   50
Cell number        Text   50
Email              Text   50
Location           Float  10,6 and 10,6
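The report record in Table 10.5 maps naturally onto a plain Java model class that the Firestore SDK can serialize and deserialize directly; the sketch below is an illustrative reconstruction from the table, not the authors' source, and uses public fields plus the no-argument constructor that Firestore's object mapper requires.

```java
import com.google.firebase.firestore.DocumentId;
import com.google.firebase.firestore.GeoPoint;

/** Mirrors the crime report record of Table 10.5. */
public class CrimeReport {
    @DocumentId public String caseId;  // autogenerated unique identifier (document ID)
    public String crimeType;
    public String crimeDescription;
    public String fullName;
    public String cellNumber;
    public String email;
    public GeoPoint location;          // the float coordinate pair, stored as a geopoint

    public CrimeReport() {}            // no-arg constructor required by Firestore
}
```

A stored report could then be read back with, e.g., snapshot.toObject(CrimeReport.class) when officials query the "Crime Reports" collection.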
Fig. 10.10 Screenshot of created user’s record
10.5 Implementation and Evaluation

This section details the specifics of the proposed application development and how it was tested.
10.5.1 System Testing System testing is one of the fundamental phases of software development. This stage of the development process exists to identify, investigate, and report the situations in which the product may fail. The system was tested for each requirement stated in the requirement section. The system was also tested in different mobiles to see how it performs in different android versions. This will be a continuous process to ensure the application’s compartments do what they are supposed to do. We used an android physical device (phone) for this
Table 10.6 Access control matrix of application components

Component             Report                                                  Tips
GPS                   getLocation()
Google Maps API       makePoint()
Database (Firestore)  getUser(), setLocation(), setDescription(), setType()
User                  createType(), createDescription()                       view

Fig. 10.11 Home screen
development testing because the Android emulator cannot effectively run applications that require Google Maps services. Through the use of the device, we are able to experience the actual feel of the desired final use of the application, so any errors detected on the device during testing call for code debugging to fix them. Table 10.7 summarizes the tests to be carried out: the first field is the test ID, where ts1 means test 1 and so on; the second field is the particular functionality being tested, e.g., create a user account; and the third field details how the identified functionality is tested, e.g., for ts1, by opening the application and registering a new user. Following the execution of the test plan, a report of test results is generated. The recording maintains the test plan format, i.e., test IDs are retained so that the respective results are recorded against each test, with Pass and Fail representing success and failure, respectively, in carrying out the function. Table 10.8 displays the results attained from carrying out the Table 10.7 test plan. All tasks were a success, as indicated by the "Pass" in all rows.
Table 10.7 Application test plan

Test ID  Function                                   Description
ts1      Create user account                        Create and populate the database with user details to create a new user
ts2      Log in                                     Log into the application, checking authentication of the user
ts3      Log off                                    Log off from the application
ts4      Capture image by camera                    Open the mobile inbuilt camera to take an image from the application
ts5      Load image from the gallery                Select an already existing image from the gallery and load it to the frame
ts6      Fill type and description details          Select the text views of both crime type and crime description and type details as required
ts7      Submit report                              Populate the database with report details (location and user details should automatically be submitted as part of the report)
ts8      Viewing tips                               Select the view crime tips button at the home screen to view crime tips presented in a list view
ts9      Navigating the list view                   Interact with the list view, dropping down the list view to read the tips
ts10     Navigating through application activities  Go from the reporting option to crime tips
Fig. 10.12 Report making window
Table 10.8 Test results of the executed plan

Test ID  Function                                   Result
ts1      Create user account                        Pass
ts2      Log in                                     Pass
ts3      Log off                                    Pass
ts4      Capture image by camera                    Pass
ts5      Load image from the gallery                Pass
ts6      Fill type and description details          Pass
ts7      Submit report                              Pass
ts8      Viewing tips                               Pass
ts9      Navigating the list view                   Pass
ts10     Navigating through application activities  Pass
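These tests were executed manually on the device; as a direction for automating them, the sketch below shows what test ts2 (log in) could look like as an Espresso instrumented test. The activity name and view IDs (LoginActivity, R.id.email, R.id.password, R.id.loginButton, R.id.reportCrimeButton) are hypothetical stand-ins, since the chapter does not list the actual layout identifiers.

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import org.junit.Rule;
import org.junit.Test;

public class LoginTest {
    // LoginActivity and the R.id.* identifiers below are hypothetical.
    @Rule
    public ActivityScenarioRule<LoginActivity> rule =
            new ActivityScenarioRule<>(LoginActivity.class);

    @Test
    public void ts2_validCredentialsReachHomeScreen() {
        onView(withId(R.id.email)).perform(typeText("user@example.com"), closeSoftKeyboard());
        onView(withId(R.id.password)).perform(typeText("secret123"), closeSoftKeyboard());
        onView(withId(R.id.loginButton)).perform(click());
        // The home screen offers the two options described in Sect. 10.3.6.
        onView(withId(R.id.reportCrimeButton)).check(matches(isDisplayed()));
    }
}
```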
Fig. 10.13 Screenshot of created crime report
10.5.2 System Process Scenarios
• Figure 10.8 is the launch screen, the first screen displayed when the application is opened.
• Figure 10.9 is the user registration window, which is displayed when the user presses the "SIGN UP" button on the launch screen.
• Figure 10.10 is a screenshot of the database console displaying the created user under the users file. Each user has a system-generated user ID, as seen in the image.
• Figure 10.11 shows the home screen, which loads after successfully creating an account or logging in.
• Figure 10.12 displays the report-making screen, where a user can take a picture and fill in the text fields manually; the user details and location are filled in automatically once the user clicks the UPLOAD button.
Fig. 10.14 Crime tips screen
• Figure 10.13 shows the report created in Fig. 10.12 after clicking UPLOAD. The report is listed under the "Crime Reports" table on the cloud database. The user's details are automatically populated (sourced from the "users" table using the currently logged-in userID), and the location is recorded as geo coordinates in the background.
• If the user clicks CRIME TIPS while at the home screen shown in Fig. 10.11, they are directed to the crime tips screen shown in Fig. 10.14, where they can click a drop-down option to read the respective tips.
• By clicking one of the available drop-down options on the crime tips screen, for example the PROTECTING YOURSELF option, Fig. 10.15 is displayed, detailing how the user can protect themselves from potential harm.
Fig. 10.15 Tips of protecting yourself
10.5.3 System Evaluation

The system meets the requirements stated in the requirements specification and performs on all tested Android versions. However, application performance is poor when the Internet connection is slow or unavailable, and the application malfunctions (crashes); this is because all data used by the application resides on the cloud database, which is accessible only through an Internet connection.
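One possible mitigation for this offline limitation, not implemented in this work, is Firestore's built-in offline persistence, which caches documents locally and queues writes until connectivity returns; depending on the SDK version, this cache may already be enabled by default on Android. A minimal sketch:

```java
import com.google.firebase.firestore.FirebaseFirestore;
import com.google.firebase.firestore.FirebaseFirestoreSettings;

/** Illustrative mitigation: enable Firestore's local cache for offline operation. */
public class FirestoreConfig {
    /** Call once at startup, before any other Firestore usage. */
    public static void enableOfflineCache() {
        FirebaseFirestore db = FirebaseFirestore.getInstance();
        FirebaseFirestoreSettings settings = new FirebaseFirestoreSettings.Builder()
                .setPersistenceEnabled(true) // queued writes are synced once back online
                .build();
        db.setFirestoreSettings(settings);
    }
}
```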
The application does not have a clear officer-based interface due to limitations that followed from integrating the application with the proposed Firestore database. As indicated earlier, the database uses inbuilt methods accessed through its libraries; its authentication has no way of assigning different roles to listed users, which hindered the implementation of separate admin (officer) and user (victim) functionality. Until such functionality is available, the police will access the reports directly from the Firebase console.
10.6 Conclusion

The main aim of the application was met: the developed application allows users to make real-time crime reports with an accurate location of the crime, and it furthermore allows the submission of captured media related to the occurring crime. This will help in crime prevention and case solving, since officials are likely to arrive at the crime scene sooner than with the traditional way of receiving vague directions, and it will also aid prosecution through the availability of evidence. Should further work be carried out on this project, we would hope for the application to gain a few more features: a detailed map indicating crime-prone areas through color coding, for example by highlighting an area in red or via a dialog message, using the past reports residing in the database. Furthermore, admin (police) and user (victim) accounts could be created according to their roles, and hence the functionality and interface may differ. Lastly, the application would serve as an even more effective crime prevention tool if we implemented functionality in which reports are automatically sent when unusual readings are captured by the user's device sensors, i.e., sudden drops, accelerated heart rate, and shaking, which are usually signs of distress.
References

1. Wongwiwatchai, N., Pongkham, P., & Sripanidkulchai, K. (2020). Detecting personally identifiable information transmission in android applications using light-weight static analysis. Computers & Security, 99, 23.
2. G. Inc. Google Map API [Online]. Available: http://maps.google.com/maps/api_signup?url=http%3A%2F%2Fbtwmdh.rasip.fer.hr
3. Al-Fagih, A., Al-Turjman, F., & Hassanein, H. (2013, October). A priced public sensing framework for heterogeneous IoT architectures. IEEE Transactions on Emerging Topics in Computing, 1(1).
4. Lei, L., Zheng, K., Chen, J., & Meng, H. (2013, June). Challenges on wireless heterogeneous networks for mobile cloud computing. IEEE Wireless Communications, 20(3), 34–44.
5. Kaasinen, E. (2003, May). User needs for location-aware mobile services. Personal and Ubiquitous Computing, 7(1), 1.
6. Lee, W.-M. (2012). Beginning Android 4 application development. Wiley.
7. G. Inc. Google Play, Google Inc [Online]. Available: https://play.google.com/store/apps/details?id=com.crimespotter&hl=en_AU&showAllReviews=true. Accessed Dec 2020.
8. Furini, M., & Yu, Z. (2014). Users behavior in location-aware services: Digital natives versus digital immigrants. Advances in Human-Computer Interaction, 19, 03.