Edge Intelligent Computing Systems in Different Domains (SpringerBriefs in Computer Science) 3031494717, 9783031494710

This book provides a comprehensive and systematic exploration of next-generation Edge Intelligence (EI) Networks.


English Pages 104 [98] Year 2024


Table of contents :
Preface
Contents
List of Figures
1 Emerging Technologies for Edge Intelligent Computing Systems
1.1 Introduction
1.2 Edge Intelligence Paradigm
1.3 Federated Learning Framework
1.4 Semantic Communications
2 Offloading Methodologies for Air-Ground Edge Intelligent Computing Systems
2.1 Introduction
2.2 Problem Statement
2.2.1 System Scenario
2.2.2 Problem Formulation
2.3 Stochastic Network Calculus Principles
2.4 End-to-End Stochastic Bound
2.5 Tasks Offloading Scheme
2.5.1 Flows Preference List
2.5.2 Computational Nodes Preference List
2.5.3 Stability Analysis
2.6 Performance Analysis
2.7 Summary
Appendix
Appendix
3 Edge Intelligent Computing Enabled Federated Learning in 6G Wireless Systems
3.1 Introduction
3.2 Federated Learning with an Ideal Channel
3.3 Federation Learning in Actual Scenarios
3.3.1 Users' Revenue Model
3.3.2 Problem Formulation
3.4 Channel Performance Prediction Empowered by Artificial Intelligence
3.5 Matching Theory for Devices Selection
3.6 Performance Analysis
3.7 Summary
4 Edge Intelligent Computing in Aqua Environments
4.1 Introduction
4.2 Intelligent Data Communication Framework
4.3 Federated Learning for Ground–Aqua Environments
4.3.1 Channels and Computation Modeling
4.3.2 Problem Formulation
4.4 Underwater Semantic Communications
4.5 Performance Analysis
4.6 Summary
5 Application of the Digital Twin Technology in Novel Edge Intelligent Computing Systems
5.1 Introduction
5.2 Problem Outline
5.2.1 Physical System Model
5.2.2 Digital Layer Integration
5.2.3 Problem Formulation
5.3 DT UAV-MEC Offloading Framework
5.3.1 AI-Empowered DTs Congestion Monitoring
5.3.2 Three-Dimensional Matching Game for Tasks Offloading
5.4 Performance Analysis
5.5 Democratized Digital Twin Technology for Industrial Edge Intelligent Computing Systems
5.5.1 Motivation
5.5.2 The Dem-AI Framework
5.5.3 DT Architecture and Functional View
5.5.4 Experimental Results
5.6 Summary
Bibliography

SpringerBriefs in Computer Science Benedetta Picano · Romano Fantacci

Edge Intelligent Computing Systems in Different Domains

SpringerBriefs in Computer Science

SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic. Typical topics might include:

• A timely report of state-of-the art analytical techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• An in-depth case study or clinical example
• A presentation of core concepts that students must understand in order to make independent contributions

Briefs allow authors to present their ideas and readers to absorb them with minimal time investment. Briefs will be published as part of Springer's eBook collection, with millions of users worldwide. In addition, Briefs will be available for individual print and electronic purchase. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, easy-to-use manuscript preparation and formatting guidelines, and expedited production schedules. We aim for publication 8–12 weeks after acceptance. Both solicited and unsolicited manuscripts are considered for publication in this series.

Indexing: This series is indexed in Scopus, Ei-Compendex, and zbMATH.

Benedetta Picano • Romano Fantacci

Edge Intelligent Computing Systems in Different Domains

Benedetta Picano University of Florence Information Engineering Department Florence, Italy

Romano Fantacci University of Florence Information Engineering Department Florence, Italy

ISSN 2191-5768 ISSN 2191-5776 (electronic) SpringerBriefs in Computer Science ISBN 978-3-031-49471-0 ISBN 978-3-031-49472-7 (eBook) https://doi.org/10.1007/978-3-031-49472-7 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Dedicated to Annamaria.

Preface

Nowadays, there is a remarkable interest in Edge Intelligent Computing systems as a key technology for next-generation intelligent applications. The emerging paradigms of distributed machine learning, digital twin, and semantic communications are considered in this book as enablers of Edge Intelligent Computing systems in novel domains. Since Federated Learning is recognized as one of the most promising and efficient distributed learning algorithms, we investigate its behavior under actual application conditions. In particular, although Federated Learning enables end-devices to train a shared machine learning model while keeping data locally, communications between end-devices and edge servers over wireless links are required. This makes the Federated Learning process dependent on the propagation conditions of the wireless channels. Hence, in investigating the behavior of the Federated Learning process in a sixth-generation (6G) environment, this book discusses efficient solutions to improve the resulting convergence time. Furthermore, even if Federated Learning is considered a promising approach, it struggles to adapt to the diversity of components within the same system. To overcome these limitations, the novel Democratized Learning paradigm is discussed in the book. Finally, the book discusses the Digital Twin technology as a manageable bridge between applications and physical assets, and the emerging semantic communication approach, to enable the deployment of an Edge Intelligent Computing ecosystem in novel scenarios. The book consists of five chapters. The first one provides a brief discussion of the fundamentals of Edge Computing, Federated Learning, and Semantic Communications. In the remaining chapters, we comprehensively discuss the design and analysis of Edge Intelligent Computing systems in different domains. Specifically, resource optimization, incentive mechanisms, and quality of service preservation are considered. Furthermore, we present several solutions based on matching theory and provide numerical results concerning the performance of the considered systems in different application scenarios.

Florence, Italy    Benedetta Picano
Florence, Italy    Romano Fantacci

Acknowledgment

This work was partially supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on “Telecommunications of the Future” (PE0000001—program “RESTART”).


Contents

1 Emerging Technologies for Edge Intelligent Computing Systems
  1.1 Introduction
  1.2 Edge Intelligence Paradigm
  1.3 Federated Learning Framework
  1.4 Semantic Communications
2 Offloading Methodologies for Air-Ground Edge Intelligent Computing Systems
  2.1 Introduction
  2.2 Problem Statement
    2.2.1 System Scenario
    2.2.2 Problem Formulation
  2.3 Stochastic Network Calculus Principles
  2.4 End-to-End Stochastic Bound
  2.5 Tasks Offloading Scheme
    2.5.1 Flows Preference List
    2.5.2 Computational Nodes Preference List
    2.5.3 Stability Analysis
  2.6 Performance Analysis
  2.7 Summary
  Appendix
3 Edge Intelligent Computing Enabled Federated Learning in 6G Wireless Systems
  3.1 Introduction
  3.2 Federated Learning with an Ideal Channel
  3.3 Federation Learning in Actual Scenarios
    3.3.1 Users' Revenue Model
    3.3.2 Problem Formulation
  3.4 Channel Performance Prediction Empowered by Artificial Intelligence
  3.5 Matching Theory for Devices Selection
  3.6 Performance Analysis
  3.7 Summary
4 Edge Intelligent Computing in Aqua Environments
  4.1 Introduction
  4.2 Intelligent Data Communication Framework
  4.3 Federated Learning for Ground–Aqua Environments
    4.3.1 Channels and Computation Modeling
    4.3.2 Problem Formulation
  4.4 Underwater Semantic Communications
  4.5 Performance Analysis
  4.6 Summary
5 Application of the Digital Twin Technology in Novel Edge Intelligent Computing Systems
  5.1 Introduction
  5.2 Problem Outline
    5.2.1 Physical System Model
    5.2.2 Digital Layer Integration
    5.2.3 Problem Formulation
  5.3 DT UAV-MEC Offloading Framework
    5.3.1 AI-Empowered DTs Congestion Monitoring
    5.3.2 Three-Dimensional Matching Game for Tasks Offloading
  5.4 Performance Analysis
  5.5 Democratized Digital Twin Technology for Industrial Edge Intelligent Computing Systems
    5.5.1 Motivation
    5.5.2 The Dem-AI Framework
    5.5.3 DT Architecture and Functional View
    5.5.4 Experimental Results
  5.6 Summary
Bibliography

List of Figures

Fig. 1.1 Federated learning system
Fig. 1.2 Semantic communication system
Fig. 2.1 System scenario
Fig. 2.2 SNC tandem queue models
Fig. 2.3 SNC model for allocation
Fig. 2.4 Reliability as a function of the molecular absorption coefficient
Fig. 2.5 Reliability as a function of the radius of the interfering region
Fig. 2.6 Reliability as a function of the bandwidth
Fig. 2.7 Reliability as a function of pH
Fig. 3.1 Federated learning system model
Fig. 3.2 ESN architecture
Fig. 3.3 Devices selection flow
Fig. 3.4 MSE as a function of the time horizon h
Fig. 3.5 Global FL delay as a function of the number of iterations
Fig. 3.6 Accuracy as a function of the number of iterations
Fig. 3.7 System energy consumption as a function of the transmission power
Fig. 3.8 Revenue as a function of the number of devices
Fig. 4.1 FL over a ground–aqua network architecture
Fig. 4.2 Loss function as a function of the number of epochs
Fig. 4.3 Loss function as a function of the encoder dimension
Fig. 4.4 Total amount of time needed to complete the FL as a function of the number of FL rounds
Fig. 4.5 Accuracy reached by the FL model as a function of the number of iterations
Fig. 5.1 Digital twin edge network architecture
Fig. 5.2 Functional view of the DT entities
Fig. 5.3 Class diagram
Fig. 5.4 Digital twin edge network architecture
Fig. 5.5 ESN forecasting error
Fig. 5.6 Worst completion time as a function of the number of users
Fig. 5.7 Worst completion time as a function of the number of users
Fig. 5.8 Outage probability as a function of the number of users
Fig. 5.9 Outage probability as a function of the number of users
Fig. 5.10 Outage probability as a function of the number of ECs
Fig. 5.11 Dem-AI sequence diagram
Fig. 5.12 Proposed approach
Fig. 5.13 Dem-AI UML
Fig. 5.14 Proposed DTEN-Dem-AI framework
Fig. 5.15 Test accuracy as a function of the number of global rounds applying the Dem-AI scheme
Fig. 5.16 Test accuracy as a function of the number of global rounds applying the FedAvg scheme
Fig. 5.17 MSE as a function of the lead time Δt
Fig. 5.18 |O| as a function of the number of industrial devices
Fig. 5.19 |O| as a function of the number of industrial devices

Chapter 1

Emerging Technologies for Edge Intelligent Computing Systems

1.1 Introduction

Future next-generation networks are expected to face a wide class of ambitious challenges, mainly concerning the handling and fulfillment of requirements imposed by novel disruptive human-oriented services and applications, for example, interactive/immersive gaming and the ultimate virtual or augmented reality [1–5], involving the solicitation of all five senses. The ever-growing need for strict requirements in terms of intelligence capabilities, communication latency, and real-time response has witnessed the birth of the new Edge Intelligence (EI) era, referring to a set of enabling technologies that allow computation to be performed at the edge of the network, close to the end users, according to the Edge Computing approach, in conjunction with Machine Learning (ML) capabilities. Nowadays, the novel EI paradigm is promoting the development of overarching cross-domain architectural ecosystems for the cost-effective integration of ML into new-generation networks, and contributing to the realization of hyperflexible architectures that bring human-like intelligence into every aspect of the novel EI systems, with the aim of properly supporting massive data gathering and processing. Nevertheless, in this context, when empowering Edge Computing systems with ML modules, we have to take into account that most existing ML algorithms require a large amount of data for learning and model training. Meanwhile, client datasets are typically non-independent and non-identically distributed, and users are often hesitant to send personal raw data to a third-party server, which are critical aspects for the practical realization of future intelligent environments. In this picture, the integration of Federated Learning (FL) in several application scenarios (e.g., anomaly detection, recommendation systems, next-word prediction, etc.) is becoming of paramount importance, since it favors users' privacy preservation by enabling a decentralized collaborative training process of a shared global model using locally collected datasets. The FL approach acts as an iterative data exchange process between end-devices, named clients, and the central server unit, which updates the global model


and merges the data processed on the client side. In this way, personal user data can be exploited while keeping sensitive information protected. The benefits and improvements due to the adoption of the EI-FL paradigm within new-generation networks are numerous, from enabling high-flying applications up to users' privacy preservation, by moving the training of statistical and mathematical models on board users' devices. Nevertheless, channel impairments related to the exploitation, in new sixth-generation (6G) networks, of high-frequency bands such as the [30, 300] GHz millimeter-wave (mmWave) band and free-space optical links in the [200, 385] THz range drastically impact the effective integration of FL within edge intelligence landscapes. The rapidity with which terahertz (THz) channel conditions vary over time, typically due to blockage events, i.e., the phenomenon in which the Line-of-Sight (LoS) signal between the considered user and its corresponding base station is obstructed, prevents the actual diffusion of pervasive and secure intelligent frameworks supported by 6G networks. This has given rise to the novel channel-aware FL approach, presented in the book, where proper channel condition forecasting can be exploited according to the EI paradigm, with the aim of selecting users able to guarantee seamless participation in the FL process and of preventing the failure of updating attempts. Finally, taking into account that next-generation use cases are expected to belong to diversified application environments, this book also discusses the application of channel-aware EI-FL solutions to different environments, such as air-ground and underwater ones [6–9].

1.2 Edge Intelligence Paradigm

During the last decade, we have observed a huge proliferation of novel computing paradigms. Among them, the most popular and widely used is surely the Cloud computing paradigm, which offers on-demand availability of computing resources to provide services through networks. Cloud computing attracted the research interest of industry and academia until, in more recent times, the ever-increasing use of smart devices and the demand for advanced applications in the context of ever new forms of the Internet of Things (IoT) highlighted all the drawbacks of such a centralized paradigm. This IoT evolutionary trend has generated new research perspectives, making the interest in decentralized solutions more and more attractive. In light of this, Edge computing [10–14] has emerged as an effective and efficient methodology to move cloud services to the network edge, close to the end users. Edge computing offers virtual computing platforms to provide computing, storage, and networking resources, tackling most of the challenges arising in the new services and applications era, such as strict real-time constraints, low latency, reduction of network congestion, and low energy consumption, which are difficult to address with Cloud solutions.


In such an Edge computing scenario, fog computing has been proposed as a further evolution of the Edge computing principles, having as its main goal that of offering a complete architecture to provide resources horizontally and vertically along the Cloud-to-Things continuum to enable Cloud and IoT interactions. In addition to this, Mobile Cloud Computing (MCC), Cloudlet Computing (CC), and Mobile Edge Computing (MEC) represent, nowadays, the most popular implementations of the classic Edge computing principle.

• MOBILE CLOUD COMPUTING / CLOUDLET COMPUTING: Mobile Cloud Computing allows mobile devices to offload computation to remote equipment, with the main goal of making mobile applications accessible to users without requiring powerful devices. The actual MCC is compliant with the Edge computing paradigm by looking to enable computation and storage capabilities at the edge of a network instead of in a remote Cloud. The most referred application of this approach is Cloudlet Computing, which offers computation and storage capabilities to mobile users located at the edge of the network, operating as a small Cloud with the possibility of being linked to a more powerful remote Cloud to handle complex tasks.
• MOBILE EDGE COMPUTING: Mobile Edge Computing is somewhat similar to the Mobile Cloud Computing or Cloudlet Computing paradigms, providing for an implementation of computational and storage resources at the edge of a network. However, the specific feature of a Mobile Edge Computing system lies in the implementation of the Cloud service inside the radio network controller or base stations of the wireless networks, allowing important benefits such as high data transmission rates, low latency, and context awareness.

Starting from the Edge computing paradigm, one step ahead is to look at the emerging concept of Edge Intelligence as a consequence of the hugely growing interest in Artificial Intelligence (AI)-based applications and services [10, 15]. Nowadays, edge intelligence represents an emerging research area, which is receiving more and more interest from the scientific community. It takes advantage of a clever combination of Edge computing and AI techniques to empower intelligent applications while lowering the need for a centralized Cloud. In particular, Edge intelligence provides for the connection between devices and smart objects for data gathering, caching, processing, and analysis close to the data source to enable AI methodologies, with the main goal of enhancing data processing while guaranteeing data privacy and security. The Edge Intelligence paradigm has to entail the following capabilities:

• Edge Caching: to collect and store data from edge devices and external sources, i.e., the Internet, as support to AI-based applications at the edge of the network.
• Edge Training: a distributed learning procedure performed on the data set cached at the edge.
• Edge Inference: based on the use of a trained model/algorithm to infer a testing instance.


• Edge Offloading: allows application tasks to be offloaded to other edge devices whenever an edge device does not have enough resources to support a given edge intelligence application (a toy sketch of such an offloading decision is given below).

In the next chapter we will discuss in detail the offloading techniques in edge intelligence systems, leaving the reader the opportunity to deepen the other capabilities through a careful reading of papers [11, 15–17] and the references therein.
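As a purely illustrative sketch of the offloading capability, the following snippet compares the estimated local execution time of a task against the estimated time needed to ship it to an edge node and process it there. The simple latency model and all numerical values are assumptions introduced here for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required to process the task
    input_bits: float  # size of the data that must be uploaded

def should_offload(task: Task,
                   local_cps: float,    # local CPU speed (cycles/s)
                   edge_cps: float,     # edge node CPU speed (cycles/s)
                   uplink_bps: float,   # estimated uplink rate (bit/s)
                   rtt_s: float = 0.01  # illustrative signalling round-trip time
                   ) -> bool:
    """Offload only if the estimated edge latency beats local execution."""
    local_latency = task.cycles / local_cps
    edge_latency = rtt_s + task.input_bits / uplink_bps + task.cycles / edge_cps
    return edge_latency < local_latency

# Example: a 0.5-Gcycle task with a 1-Mbit input on a slow handset.
task = Task(cycles=5e8, input_bits=1e6)
print(should_offload(task, local_cps=1e9, edge_cps=10e9, uplink_bps=50e6))
```

A real system would, of course, also weigh energy consumption, queueing at the edge node, and the reliability of the wireless link, which is exactly what the offloading schemes discussed in the following chapters address.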

1.3 Federated Learning Framework

Machine learning (ML) exploits both statistical and computer science knowledge [18–21], aiming at developing algorithms able to improve the performance of computers through experience gained from the analysis of a collection of samples about the phenomenon object of study. More in depth, ML builds a statistical model on the basis of the gathered data, devoted to solving a specific problem. Different classes of problems identify different ML types, namely supervised learning, unsupervised learning, and reinforcement learning. The supervised learning paradigm relies on labeled datasets, exploited to train or "supervise" algorithms to perform data classification or to predict outcomes accurately. On the basis of labeled inputs and outputs, the model is able to measure the accuracy of the learning process over time. Differently, unsupervised learning exploits raw or unlabeled data, catching hidden patterns in the data without the need for human intervention. Finally, in reinforcement learning the learning process is defined through trial-and-error interactions with a dynamic environment. With the unprecedented and ever-increasing proliferation of intelligent devices, such as Internet of Things (IoT) devices, massive amounts of data are generated, providing the means by which ML models can be trained to make applications and services smarter. Consequently, during the last decades, a strong impetus has been given to machine learning research and development. If, on the one hand, it is obvious that we are living in a data deluge era, evidenced by the enormous availability of large-scale amounts of data continually generated, on the other hand conventional learning solutions based on centralized training suffer from users' privacy leakage and are impractical, since large bandwidth is required to transmit massive amounts of data, which may cause network congestion. As a natural evolution of traditional centralized approaches, distributed machine learning training paradigms have emerged and been developed at the network edges, avoiding the upload of users' raw data to a central server, preserving privacy, and lowering traffic congestion. Recently, Federated Learning (FL) has gained momentum, since it enables the training of a shared machine learning model while keeping data locally on devices [15, 19, 22–24]. An example of a federated learning system is shown in Fig. 1.1. Although FL is considered one of the most promising development directions in the field of secure data exchange, the machine learning model training requires communication between wireless devices and edge nodes over wireless links. Consequently, impairments of the


Fig. 1.1 Federated learning system

wireless channel, such as interference, uncertainties in the wireless channel states, and noise, may significantly affect the FL performance; for example, the transmission delay directly impacts the convergence time of the FL. Therefore, proper optimization of the wireless network performance turns out to be mandatory. At the same time, FL can also constitute a useful support for solving typical wireless communication problems to optimize network performance. FL is an iterative process through which a set of end-devices perform training of local learning models and repeatedly interact with a central server having the role of aggregating the learning models. When a desired accuracy is reached on the local learning models, the corresponding parameters are sent to the central server, which is responsible for aggregating, following some rule, the local parameters received and for updating its model (i.e., the global model). Then, the aggregated weights (i.e., the global model) are sent back to the end-devices. This process continues in an iterative manner for a number of global FL rounds until convergence is achieved. It is important to put emphasis on the fact that FL involves iterative interaction between end-devices and the aggregation server. Within this context, efficient interaction between end-devices and the central aggregator has to be designed and developed to effectively perform FL. One of the most popular FL frameworks is the Federated Averaging (FedAvg) scheme, which is based on averaging the local learning model updates, obtained by applying some machine learning scheme. In this sense, the choice of the local learning model strictly depends on the target application. Convolutional neural networks, long short-term memory networks, support vector machines, and Naive Bayes [19] can be used. Furthermore, to update the local learning model weights, FedAvg uses stochastic gradient descent (SGD). In SGD, the weights are updated at the end-devices using the partial derivative of the loss function with respect to the weights, multiplied by the learning rate, which represents the step size of the gradient descent. The computed local


learning model weights are averaged by the aggregation server. Although federated learning offers several advantages, it also faces a few challenges. These include privacy issues, computing and communication resource optimization, incentive mechanisms, and statistical and system heterogeneity.
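To make the FedAvg description above concrete, here is a minimal sketch on a toy linear-regression task with synthetic data; the number of clients, the learning rate, and the number of rounds are illustrative assumptions and not settings used elsewhere in this book. Each client runs a few epochs of local SGD on its own data, and the server averages the resulting weights.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Synthetic, non-identical local datasets, one per client.
def make_client(n):
    X = rng.normal(loc=rng.uniform(-1, 1), size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(50) for _ in range(5)]

def local_sgd(w, X, y, lr=0.05, epochs=5):
    """Local update: SGD on the squared loss, one sample at a time."""
    w = w.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = 2 * (X[i] @ w - y[i]) * X[i]   # d/dw (x·w - y)^2
            w -= lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(10):                              # global FL rounds
    local_models = [local_sgd(w_global, X, y) for X, y in clients]
    # FedAvg weights clients by local dataset size; sizes are equal here,
    # so a plain mean suffices.
    w_global = np.mean(local_models, axis=0)
    print(f"round {rnd}: w = {w_global.round(3)}")
```

Running the sketch shows the global weights approaching the data-generating vector over the rounds, without any client ever sharing its raw data.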

1.4 Semantic Communications

As a consequence of the growing interest in the Edge intelligence paradigm, we are witnessing a deep transformation of next-generation wireless networks, from the goal of "connected things" to the new ambition of "connected intelligence." To enable this step to be completed, it is mandatory that smart devices be able to support more complex intelligent applications as well as more efficient and reliable communication capabilities. This trend has generated a new vision of what reliable communication means, giving rise to the new Semantic Communications paradigm [25–29]. This new approach, widely considered today as a breakthrough beyond the Shannon paradigm, switches the end user's need from an accurate reception of binary symbols regardless of their meaning to a successful transmission of the semantic information conveyed by the source. As a consequence, semantic communications, pursuing only the transmission of the information essential for a given task at the end user site, act as an actual intelligent system, giving rise to significant reductions in data traffic, network congestion, and energy consumption. This has motivated the recent advancements in terms of service diversity and service-level optimization solutions based on the content of the messages. In particular, next-generation wireless networks are moving toward a service-based architecture in order to provide different sets of applications and verticals, some of which are enabled only by partitioning network resources according to a semantic communication approach. In particular, the basic components of a semantic communication system (Fig. 1.2) are:

• Source Semantic Encoder: is in charge of detecting and extracting the semantic content from a source signal, and compresses or clears away any unnecessary information.

Fig. 1.2 Semantic communication system


• Semantic Decoder: has to understand the information sent by the source and retrieve the received signal in the appropriate format for the end user. The decoder also has to account for the satisfaction of the destination user in order to establish whether the semantic communication attempt was successful or not.
• Semantic Noise: the impairments suffered during the communication process. Its impact is to give rise to misunderstanding and incorrect reception of the semantic information. It can be due to shortcomings in the encoding, data transmission, and decoding processes.

Recent advancements in AI bring a powerful tool for handling crucial problems in semantic communications, in particular as a consequence of the intense research efforts to define efficient AI-based semantic communication schemes for next-generation wireless networks. In this scenario, Edge computing has a significant role in facilitating a fast pathway to semantic communications. Edge nodes can accomplish encoding and decoding operations by means of a shared knowledge base as well as data gathered from local devices. In particular, each edge node, according to the edge intelligence paradigm, may already have a suitable number of well-trained models for object identification and relation inference. Furthermore, the Federated learning approach can be applied to support multiple edge servers in training and updating common models, avoiding data exchange. Summarizing, the main advantages arising from the use of edge nodes in a semantic-communication-based system are: (i) reduced network latency, being close to the data and the end user; (ii) a balanced network load, by avoiding the upload of huge amounts of data to a central cloud; (iii) a significant improvement in network security, by lowering the data exposure to the network and, hence, the possibility of a data leakage. More in general, we can say that edge intelligence enhanced with semantic communications improves the generalization capabilities of intelligent agents at a lower computation cost and significantly lowers the communication overhead due to huge data exchanges. Semantic Communications will be the subject of Chap. 4. A toy illustration of the encoder–channel–decoder chain sketched above is given below.
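The following toy sketch mirrors the three components listed above: a semantic encoder maps a sentence to a compact symbol drawn from a shared knowledge base, the channel occasionally corrupts that symbol (semantic noise), and the decoder reconstructs a message that is adequate for the task even though the exact wording is never transmitted. The intent catalogue and the noise model are illustrative assumptions, not part of any scheme described in this book.

```python
import random

# Shared knowledge base: task-relevant intents known to encoder and decoder.
INTENTS = ["turn_on_light", "turn_off_light", "report_temperature"]

def semantic_encode(sentence: str) -> int:
    """Extract the task-relevant meaning and return its index (a few bits)."""
    s = sentence.lower()
    if "temperature" in s:
        return INTENTS.index("report_temperature")
    if "off" in s:
        return INTENTS.index("turn_off_light")
    return INTENTS.index("turn_on_light")

def channel(symbol: int, error_prob: float = 0.1) -> int:
    """Semantic noise: with small probability the symbol is corrupted."""
    if random.random() < error_prob:
        return random.randrange(len(INTENTS))
    return symbol

def semantic_decode(symbol: int) -> str:
    """Reconstruct a message adequate for the end user's task."""
    return INTENTS[symbol]

msg = "Please switch off the light in the lab"
received = semantic_decode(channel(semantic_encode(msg)))
print(f"raw text is {len(msg) * 8} bits, transmitted ~2 bits, decoded intent: {received}")
```

The point of the example is the compression of meaning rather than of symbols: only the information needed for the task crosses the channel, at the price of a residual probability of semantic misunderstanding.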

Chapter 2

Offloading Methodologies for Air-Ground Edge Intelligent Computing Systems

Nowadays, a new domain for next-generation Edge Intelligent computing systems, i.e., systems that combine edge computing with artificial intelligence (AI) capabilities, is that based on the functional integration of Unmanned Aerial Vehicles (UAVs), acting as flying computing nodes, with terrestrial networks. In particular, this chapter presents a UAV-Aided Mobile Edge Intelligent Computing system, in which heterogeneous traffic flows with different quality of service constraints have to be offloaded onto processing nodes consisting of terrestrial and flying edge computing nodes. Accordingly, we illustrate here the use of a matching algorithm to derive proper offloading strategies. The matching algorithm makes decisions on the basis of per-flow end-to-end delay bounds formulated by resorting to the combined application of stochastic network calculus and martingale envelope theory [11, 13, 17, 30, 31]. Furthermore, this chapter offers a theoretical discussion of the matching stability and provides some numerical results to highlight the validity of the proposed stochastic framework in fitting the actual network behavior, considering also different state-of-the-art offloading alternatives.

2.1 Introduction

With the advent of sixth-generation (6G) wireless networks, numerous high-flying challenges have emerged for Edge Intelligent systems in order to provide reliable, effective, and efficient solutions to handle the expected large volume of heterogeneous traffic flows with different quality of service (QoS) constraints and high network responsiveness guarantees, i.e., low delay. In particular, these requirements enable the application of novel intelligent technologies in many challenging scenarios. The connectivity needed in these contexts can be achieved in the case of 6G networks by exploiting bands such as the 30–300 GHz millimeter-wave (mmWave) band and free-space optical links in the 200–385 THz range [32]. In this way,


improvements in network performance can be gained, in perspective, in terms of network responsiveness and end-to-end delay, reliability, coverage, and both spectrum and energy efficiency. Nevertheless, 6G networks are expected to face a vast amount of data traffic and disruptive, computation-intensive applications, for which the exclusive usage of terrestrial networks appears inadequate to properly satisfy the far-reaching traffic demand and the service quality posed by the emerging applications. Therefore, it is through a carefully orchestrated synergy between the terrestrial and non-terrestrial segments of the same network to manage heterogeneous traffic flows that service delays can be mitigated [33]. Nowadays, as for the terrestrial networks, the emerging multi-access edge intelligent computing (MEC) paradigm exhibits important advantages due to a distributed computing architecture [34], in which edge nodes (ENs) with artificial intelligence (AI), computation, storage, and data transmission capabilities are deployed close to the end users [15, 34], at the network edges (e.g., at the small base stations (SBSs)). Due to its distributed nature, MEC aims at reducing the congestion levels and processing times exhibited by a cloud-based intelligent computing architecture. Therefore, MEC represents an effective paradigm to host the intelligent computation of task flows stemming from devices in need of task offloading. In this picture, the functional integration of a UAV able to host intelligent computation, although with reduced capacity in comparison with the terrestrial ENs, allows significant performance improvements. In particular, in such a UAV-Aided MEC network, it is possible to face computation request jams at the ENs and to manage heterogeneous traffic flows, meeting the corresponding variegated QoS constraints. Meanwhile, the design and analysis of proper offloading policies represent a key point in a UAV-Aided MEC network. In this regard, the time elapsed between the instant in which the device computation request is submitted for processing and the instant in which the device receives the computation outcome, usually referred to as the end-to-end (e2e) task flow delay, represents a key metric in a UAV-Aided MEC system, in particular in the case of heterogeneous traffic flows. The analysis of the e2e delay in UAV-Aided MEC networks has been largely considered, since it is recognized as an important methodology to stochastically forecast the network performance [35]. However, the validity of the e2e delay stochastic approximation deeply depends on the class of processes assumed to model arrivals and services. For example, Markovian analysis has been extensively applied, but it is sometimes based on too strong and restrictive assumptions about the nature of the stochastic processes involved. Nevertheless, some results exist for systems having a general characterization of both processes describing the system, i.e., G/G/1 systems. Unfortunately, these results generally involve advanced complex analysis theory and pose severe restrictions on the queuing scheduling policy. Hence, we present here a more accurate stochastic delay analysis based on the stochastic network calculus (SNC) approach [13, 36, 37], due to its ability to analyze non-trivial traffic models [38–40] and to fully capture the behavior of computer networks in terms of delay [36].
We consider as the key performance parameter the probability that the end-to-end (e2e) delay, i.e., the time elapsed from when the device sends out the computation request until when the computation outcome is received (the flow computation delay), exceeds a given threshold δ. Hence, the UAV-Aided MEC system will be able to guarantee a high reliability when this probability is low and close to 0.
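Before turning to the analytical bound, the following short sketch shows how this reliability metric can be estimated by Monte Carlo simulation; the delay samples are purely synthetic and illustrative, with a heavy-tailed computation time that loosely mimics the hyper-exponential service model adopted later in the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative e2e delay samples (seconds): uplink + computation + downlink.
n = 100_000
uplink = rng.exponential(0.002, n)
compute = np.where(rng.random(n) < 0.8,
                   rng.exponential(0.005, n),   # "fast" service phase
                   rng.exponential(0.030, n))   # "slow" service phase
downlink = rng.exponential(0.002, n)
delay = uplink + compute + downlink

delta = 0.05                                    # delay threshold (50 ms)
violation = np.mean(delay > delta)              # estimate of P(W > delta)
print(f"P(e2e delay > {delta*1000:.0f} ms) ≈ {violation:.4f}, "
      f"reliability ≈ {1 - violation:.4f}")
```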

2.2 Problem Statement

2.2.1 System Scenario

This chapter considers a UAV-Aided MEC scenario in which, as reported in Fig. 2.1, there is one terrestrial SBS that has to handle an unpredictable increase of flow computation requests from users in its service area. Hence, to lower the drawbacks arising from this congestion occurrence, the functional integration with a UAV having on-board processing capabilities, i.e., acting as a flying EN (UAV-EN), although with some computation and energy limitations, is considered. In particular, in our case, if the UAV is properly integrated with the terrestrial SBS (i.e., EN), it can provide an efficient and cost-effective flow computation solution by providing additional computation capacity to lower the congestion drawbacks, in terms of application constraint violations, in several real-world scenarios. In addition, we have a set of devices $\mathcal{U} = \{1, \ldots, U\}$, from which the task flows originate. For this reason, the terms device and flow will be used interchangeably hereafter. Furthermore, each flow has an associated delay constraint $d_u$, $u \in \mathcal{U}$. In order to receive computation, each flow accesses the network through the local SBS by means of an individual channel. Then, flow computation can be arranged, according to the proposed offloading scheme, on the most suitable SBS or UAV site. In this context, we have made the assumption that the Small Base Station (SBS) maintains a dedicated channel for communication with the Unmanned Aerial Vehicle (UAV)

Fig. 2.1 System scenario


to offload flow computations. The results of these flow computations are then disseminated to all devices within the UAV's service area using a broadcast mode. Similarly, the SBS shares the outcomes of flow computations with all devices within its service area through a dedicated channel. As mentioned in Sect. 2.1, our primary focus is on the stochastic analysis of the end-to-end flow computation delay, a critical metric for assessing the efficiency of a UAV-Aided Mobile Edge Computing (MEC) system, especially in scenarios involving diverse traffic flows. Our objective here is to overcome the limitations associated with relying solely on a Markovian analysis or concentrating solely on evaluating the average end-to-end flow computation delay, which often conceals the true system behavior. Specifically, we aim to achieve this by proposing an analytical approach based on the utilization of Stochastic Network Calculus (SNC) theory, as elaborated upon later.

Regarding the analysis to be performed, for the 6G channel capacity of terrestrial communications, i.e., from devices to SBSs and vice versa, under the assumption of an equal transmission power p for all SBSs/devices and, according to [1], LoS propagation conditions almost surely available, the data transmission rate is

$$R = W \log_2\!\left(1 + \frac{p A_0 d_0^{-2} e^{-K(f) d_0}}{N_0 + \sum_{i \in \mathcal{I}} p A_0 d_i^{-2}}\right), \tag{2.1}$$

where $W$ is the bandwidth of the communication channel, $d_0$ is the distance between the considered device and the appropriate SBS, and $\mathcal{I}$ is the random set of interfering signals generated at distances $d_i$, $i \in \mathcal{I}$. Hence, the only random factor in (2.1) is due to the second term in the denominator. Moreover, $N_0 = \frac{W \zeta}{4\pi} g_B T_0 + p A_0 d_0^{-2}\left(1 - e^{-K(f) d_0}\right)$ accounts for both the molecular absorption noise and the Johnson-Nyquist noise at the receiving device site, with $g_B$ the Boltzmann constant, $T_0$ the temperature in Kelvin, $\zeta$ the wavelength, $K(f)$ the global absorption coefficient of the medium, and $A_0 = \frac{c^2}{16 \pi^2 f^2}$ [41, 42]. Hence, the transmission time for a packet formed by $L$ bits over the ground channels (from devices to SBSs or vice versa) turns out to be a random variable having a probability density function defined in Lemma 2 of [1] as

$$f(\chi) = \frac{\xi}{\sqrt{2\pi}\, \sigma_I}\, e^{-\frac{(I - \mu_I)^2}{2 \sigma_I^2}}, \tag{2.2}$$

with

$$\xi = \frac{\ln(2)\, L\, p A_0 d_0^{-2} e^{-K(f) d_0}\, 2^{L/(W\chi)}}{W \chi^2 \left(2^{L/(W\chi)} - 1\right)^2}, \tag{2.3}$$

$$I = \sum_{i \in \mathcal{I}} p A_0 d_i^{-2}, \tag{2.4}$$

$$\mu_I = p A_0 \left(\frac{\ln(\delta) - \ln(d_0)}{\delta^2 - d_0^2}\right)\left(\frac{\pi \delta^2 \iota}{2}\right), \tag{2.5}$$

$$\sigma_I^2 = (p A_0)^2 \left(\frac{\pi \delta^2 \iota}{2}\right)\left(\frac{1}{2 \delta^2 d_0^2}\right), \tag{2.6}$$

where $\iota$ models the intensity of the isotropic homogeneous Matern hardcore point process that expresses the SBSs' spatial distribution.

Therefore, the end-to-end (e2e) delay for a task offloaded on an SBS is influenced by the following three delay components:

• The uplink computation request transmission time
• The computation system time, i.e., the time required to perform the task flow computation on board the local SBS, which follows a hyper-exponential distribution [43], plus the time spent waiting on the SBS to receive computation
• The downlink computation outcome transmission time

When assessing the end-to-end delay for a task flow offloaded onto a UAV, it is crucial to consider not only the delay associated with the uplink computation request transmission time but also two additional delay components. These include:

• Computation Request Transmission Time: This delay accounts for the time taken to transmit the computation request from the local Small Base Station (SBS) to the connected UAV.
• Outcome Data Packet Transmission Time: This delay encompasses the time it takes for the UAV to transmit the outcome data packet to the destination device.

Furthermore, it is important to acknowledge that the UAV typically possesses lower computation capabilities compared to the stationary devices in the network. This characteristic adds complexity to the overall computation process, as the UAV may require more time to perform the required computations. Therefore, when evaluating the end-to-end delay for a task flow offloaded to a UAV, it is essential to factor in these additional delays and account for the UAV's constrained computation capabilities. Assuming hereafter that the computation request packets and the outcome data packets are formed by the same number of bits, i.e., $L$, and neglecting the interference effects on the uplink/downlink dedicated channels, we have two equal constant delay contributions, each one given by

$$\chi_{uav} = L / r_{uav}, \tag{2.7}$$

where

$$r_{uav} = W \log_2\!\left(1 + \frac{p A_0 d_0^{-2} e^{-K(f) d_0}}{N_0}\right), \tag{2.8}$$

and where $d_0$ is here the distance between a given SBS and the linked UAV.
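A short numerical sketch of (2.1) and (2.7)–(2.8) follows. All parameter values (transmit power, bandwidth, absorption coefficient, noise power, packet size, and distances) are illustrative assumptions and not the settings used for the numerical results of this book; the noise power is taken as a given constant rather than computed from its closed-form expression.

```python
import numpy as np

c = 3e8                        # speed of light (m/s)
f = 300e9                      # carrier frequency (Hz), THz band
W = 5e9                        # channel bandwidth (Hz)
p = 0.1                        # transmit power (W)
K_f = 0.05                     # illustrative molecular absorption coefficient (1/m)
A0 = c**2 / (16 * np.pi**2 * f**2)   # free-space path-gain constant of (2.1)
N0 = 1e-12                     # illustrative total noise power (W)
L_bits = 8_000                 # packet size (bits)

def rate(d0, interferer_dists=()):
    """Data rate of (2.1); the dedicated SBS-UAV link (2.8) has no interferers."""
    signal = p * A0 * d0**-2 * np.exp(-K_f * d0)
    interference = sum(p * A0 * d**-2 for d in interferer_dists)
    return W * np.log2(1 + signal / (N0 + interference))

# Ground link with two interferers vs. dedicated SBS-UAV link.
r_ground = rate(20.0, interferer_dists=[60.0, 80.0])
r_uav = rate(50.0)                      # r_uav in (2.8)
chi_uav = L_bits / r_uav                # constant per-hop delay of (2.7)
print(f"ground rate ≈ {r_ground/1e9:.2f} Gbit/s, "
      f"UAV link rate ≈ {r_uav/1e6:.1f} Mbit/s, chi_uav ≈ {chi_uav*1e6:.1f} µs")
```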


2.2.2 Problem Formulation

The main objective of this chapter is the minimization of the outage probability of the flows belonging to $\mathcal{U}$, which require computation offloading on the terrestrial EN (T-EN) or on the UAV-EN depending on the congestion level and the e2e delay constraints. More in depth, we have to produce, through the matching game formulated in Sect. 2.5, an allocation matrix $\mathbf{A} \in \{0,1\}^{U \times (S+V)}$, whose generic element $\alpha_{u,c}$ is equal to 1 if flow u is offloaded on network node c, with $c \in \mathcal{C} = \{\text{T-EN}, \text{UAV-EN}\}$, and zero otherwise. In formal terms, we can define the following optimization problem:

$$\min_{\mathbf{A}} \; \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} \sum_{c \in \mathcal{C}} P_{c,u}\big(W(u) \geq t_{d,u}\big)\, \alpha_{u,c}, \tag{2.9}$$

where $P_{c,u}$ represents the probabilistic bound formulated as in (2.22), expressing the probability that flow u exceeds the corresponding deadline $t_{d,u}$ when offloaded on node c. Specifically, $P_{c,u}$ expresses the complementary cumulative distribution of the e2e delay suffered by the packets of flow u, i.e., $W(u)$, when offloaded on node c. Finally, we would like to stress that the analysis to derive $P_{c,u}$ is provided here in a general form and, hence, it is applicable to any (Markov or non-Markov) service time distribution [36, 37]. In this regard, in Sect. 2.6, we have considered the computation nodes' service time as hyper-exponentially distributed, according to a widely adopted non-Markov computation time distribution model, as discussed in [43].
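The sketch below shows how the objective in (2.9) is evaluated for a candidate allocation matrix; the $P_{c,u}$ entries are placeholder numbers, whereas in the framework of this chapter they come from the SNC/martingale bound (2.22) derived in Sect. 2.4.

```python
import numpy as np

U, C = 4, 2            # 4 flows, 2 computation nodes (T-EN, UAV-EN)

# P[u, c]: probabilistic bound that flow u misses its deadline on node c.
# Placeholder values; in this chapter they are obtained from the bound (2.22).
P = np.array([[0.02, 0.10],
              [0.05, 0.04],
              [0.20, 0.08],
              [0.01, 0.30]])

def objective(A):
    """Average deadline-violation bound of (2.9) for allocation matrix A."""
    assert np.all(A.sum(axis=1) == 1), "each flow must be offloaded to one node"
    return float((P * A).sum() / U)

# Each flow on its individually best node: a lower bound that ignores the
# congestion coupling captured by the cross-traffic terms of the full model.
best = np.zeros((U, C))
best[np.arange(U), P.argmin(axis=1)] = 1
print("objective for per-flow-best allocation:", objective(best))
```

The matching game of Sect. 2.5 searches for a stable allocation that keeps this average violation bound low while respecting the preferences of both the flows and the computation nodes.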

2.3 Stochastic Network Calculus Principles

In what follows, for the sake of readability, the primary principles of SNC are recalled, while the main definitions concerning the martingale envelopes are given in the Appendix. Note that these principles and the consequent analysis are instrumental to building the preference list utility functions involved in the matching game designed in Sect. 2.5. However, any interested reader is referred to [36, 37]. Let $T = [a, b]$ be the observation time interval of the system of interest. Intuitively, the cumulative amount of arrivals at the server node during T can be expressed by [35]

$$A_{a,b} = A(a,b) = \sum_{t=a+1}^{b} X_t, \tag{2.10}$$

where $X_t$ is the number of packets arrived at time t.


On the other hand, $S_{a,b} = S(a,b) = \sum_{t=a+1}^{b} S_t$ describes the corresponding services counted in T. According to [37], we can introduce the $(\min,+)$ convolution operator $\otimes$ and define the bivariate departure process $D(a,b)$ as

$$D(b) = D(0,b) \geq A \otimes S(0,b) := \inf_{0 \leq g \leq b}\{A(g) + S(g,b)\}. \tag{2.11}$$

Then, the delay process $W(b)$ is represented by [35, 38]

$$W(b) = W(0,b) = \inf\{c \geq 0 \,|\, A(b-c) \leq D(b)\}, \tag{2.12}$$

and the complementary cumulative distribution function of $W(b)$ results to be [35]

$$\mathbb{P}(W(b) > c) = \mathbb{P}(A(b-c) \geq D(b)). \tag{2.13}$$

A key point here is that SNC is a powerful and flexible tool to analyze the e2e delay in computer network systems. Nevertheless, when applying the standard SNC envelopes, the resulting bounds are given exclusively on the basis of the arrival processes. Differently, by involving the martingale envelopes, whose definitions are recalled in the Appendix for the convenience of the reader, a proper exponential transformation taking into account both the arrival and service processes can be formulated.
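To ground these definitions, the following sketch builds the cumulative processes of (2.10) from synthetic Poisson arrival and service increments (purely illustrative), applies the $(\min,+)$ convolution of (2.11) to obtain a lower bound on the departures, and extracts the virtual delay of (2.12) empirically.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
X = rng.poisson(3, T)            # arrivals per time slot (packets)
S = rng.poisson(4, T)            # service offered per time slot (packets)

# Cumulative processes of (2.10): A(0, t) and S(0, t), with A[0] = S[0] = 0.
A = np.concatenate(([0], np.cumsum(X)))
Scum = np.concatenate(([0], np.cumsum(S)))

def min_plus_conv(A, Scum, b):
    """(A ⊗ S)(0, b) = inf over 0 <= g <= b of A(0, g) + S(g, b), as in (2.11)."""
    return min(A[g] + (Scum[b] - Scum[g]) for g in range(b + 1))

# Lower bound on the departures D(0, b) for every b.
D = np.array([min_plus_conv(A, Scum, b) for b in range(T + 1)])

def virtual_delay(b):
    """W(b) = inf{c >= 0 : A(b - c) <= D(b)}, as in (2.12)."""
    return next(c for c in range(b + 1) if A[b - c] <= D[b])

delays = [virtual_delay(b) for b in range(1, T + 1)]
print("max virtual delay:", max(delays), "slots | mean:", float(np.mean(delays)))
```

The empirical tail of these virtual delays is exactly the quantity that (2.13), and later the martingale bounds of Sect. 2.4, upper-bound analytically.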

2.4 End-to-End Stochastic Bound

In this section, we formulate the per-flow probabilistic bound, which describes the likelihood that tasks within a flow will experience an end-to-end delay exceeding their respective deadline. It is important to note that this analysis is crucial for understanding the matching offloading scheme proposed in the following section. Our analysis assumes the availability of a Line-of-Sight (LoS) link with a very high probability. However, to validate this assumption in the specific application scenario under consideration, we will compare the analytical predictions obtained in this section with simulation results in Sect. 2.6, assuming actual propagation conditions, which means a non-zero probability of encountering a Non-LoS (NLoS) condition. To achieve this, we have examined the tandem systems illustrated in Fig. 2.2, where we consider the different choices for the computation site, either a Small Base Station (SBS) or an Unmanned Aerial Vehicle (UAV).

Fig. 2.2 SNC tandem queue models

Specifically, the tandem system modeling computation at an SBS consists of three main subsystems:

• The uplink transmission subsystem, which involves a specific (i.e., tagged) device and the most convenient access SBS, such as the nearest one
• The computation subsystem at a particular SBS¹
• The downlink transmission subsystem involving the access SBS and the tagged device

Likewise, the tandem system associated with the scenario where computation takes place on a UAV consists of the following subsystems:

• Uplink Transmission Subsystem (Tagged Device and Access SBS): This encompasses the transmission process involving a tagged device and the access Small Base Station (SBS).
• Air Uplink Transmission Subsystem from the Access SBS to the UAV: In this stage, we consider the transmission in the air from the access SBS that has a Line-of-Sight (LoS) link to the UAV.
• Computation Subsystem at the UAV Site: Here, we delve into the computation process carried out at the UAV site.
• Downlink Air Transmission Subsystem from the UAV to the Requesting Device: This component pertains to the air transmission from the UAV to the device making the request.

To ensure that the order of task flow offloading is properly considered, we have employed a static priority (SP) scheduling policy. Under this policy, packets are served based on their priority class, which depends on the order in which a flow is allocated to the computation nodes (T-EN or UAV-EN). In other words, packets from flows that were allocated first are served with higher priority. It is important to note that the analysis we have proposed here provides a probabilistic e2e delay bound on a per-flow basis. In this context, a "flow" refers to a sequence of data tasks sent out by a given device.

¹ Note that the computation SBS may differ from the access SBS.


In performing our analysis, we assume that all the devices have a data task to send out for computation with the same probability, $p_H$, per unit of time. Hence, by focusing on a given flow, the tagged flow, we define $A^1$ as the cumulative arrivals of the tagged flow and $A^2$ as the cumulative aggregated arrivals of a generic number U of cross-traffic flows, i.e., $A^2 = \sum_{g=1}^{U-1} C_g$, in which $C_g$ is the g-th traffic flow crossing $A^1$. Therefore, as detailed in [36], the SP scheduling for the tagged flow is defined as
$$S^{\mathrm{Type\,1}}(m, n) = \left[S^{\mathrm{Tot}}(m, n) - A^2(m, n)\right]^{+} \mathbf{1}_{n-m>x}, \qquad (2.14)$$

in which $S^{\mathrm{Tot}}$ is the service curve of the network, obtained by applying the min-plus convolution of the service curves of each server, x is a fixed parameter that can be freely chosen, and $\mathbf{1}$ is the indicator function, assuming value 1 if the condition $n - m > x$ is satisfied and zero otherwise, according to [35, 38]. Assuming that both the arrival and the service flows admit martingale envelopes, hereafter we refer to $M_{A^u}$, $u \in U$, for the arrival processes. Furthermore, we refer to $M_{S^i}$, $i \in \{2, 3\}$, for the service processes, where $M_{S^2}$ expresses the martingale service envelope of the computation subsystem, i.e., $S^2$, and $M_{S^3}$ refers to the martingale service envelope of the downlink transmission subsystem, i.e., $S^3$. Consequently, we can conclude that [35, 36, 38]²
$$P(W(n) \ge k) \le P\Big(\sup_{0 \le k \le n}\big\{A^1(k, n) + A^2(n) - S^1(n) \otimes S^2(n) \otimes S^3(n)\big\} \ge 0\Big), \qquad (2.15)$$
$$M_{A^1} \approx h_{A^1}(a_n^1)\, e^{\theta\left(A^1(k,n) - (n-k)K_{A^1}\right)}, \qquad (2.16)$$
$$M_{A^2} \approx h_{A^2}(a_n^2)\, e^{\theta\left(A^2(k,n) - (n-k)K_{A^2}\right)}, \qquad (2.17)$$
$$M_{S^i} \approx h_{S^i}(s_{\tau_i})\, e^{\theta\left(\tau_i K_s - S^i(\tau_i)\right)}. \qquad (2.18)$$

² The complete analysis is reported in [38].

Therefore, the supermartingale process, which follows from the product of (2.16), (2.17), and (2.18), is given by
$$M = \prod_{j \in \{A^1, A^2, S^1, S^2, S^3\}} M_j. \qquad (2.19)$$
After some algebraic manipulations, and taking into account that
$$E[M(k)] = E\Big[M_{A^1}(0)\, M_{A^2}(0) \prod_{j=1}^{3} M_{S^j}\Big], \qquad (2.20)$$
we have
$$E[M(k)] \le E[M_{A^1}(0)]\, E[M_{A^2}(0)] \prod_{j=1}^{3} E[M_{S^j}(0)]. \qquad (2.21)$$

Then, the martingale bound, considering the computation of the tagged flow on an SBS, results to be
$$P(W(n) \ge k) \le e^{-\theta^{*} kK_{A^1} - kK_{A^2}}\, B, \qquad (2.22)$$
where k coincides with the deadline associated with the tagged flow, $S^2$ represents the service envelope of the service curve of the selected SBS, and
$$B = \frac{E[M_{A^1}(0)]\, E[M_{A^2}(0)]\, E[M_{S^1}(0)]\, E[M_{S^2}(0)]\, E[M_{S^3}(0)]}{H}, \qquad (2.23)$$
where $H = \min\{h_{A^1}(a_n^1)\, h_{S^i}(s_{\tau_i}) : a_n - s_{\tau_i} > 0\}$ and $\theta^{*} = \sup\{\theta > 0 : K_a \le K_s\}$, in accordance with [38]. Differently, the bound considering the processing on a UAV, which hence takes into account the deterministic contributions due to the uplink and downlink air task transmission times, results to be
$$P(W(n) \ge \hat{k}) \le e^{-\theta^{*} \hat{k}K_{A^1} - \hat{k}K_{A^2}}\, B, \qquad (2.24)$$
in which $\hat{k} = d_u - 2\tau$, with $\tau$ expressing the unit of time, $S^2$ represents the service envelope of the service curve of the UAV, and
$$B = \frac{E[M_{A^1}(0)]\, E[M_{A^2}(0)]\, E[M_{S^1}(0)]\, E[M_{S^2}(0)]\, E[M_{S^3}(0)]}{H}. \qquad (2.25)$$
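To show how the bound in (2.22)–(2.23) could be evaluated numerically once the envelope parameters are available, the following Python sketch plugs them into the closed-form expression as written above; every numerical value in the example call is a placeholder, not a parameter taken from this chapter.

```python
import math

def martingale_delay_bound(k, theta_star, K_A1, K_A2, E_MA1, E_MA2, E_MS, H):
    """Violation-probability bound of (2.22)-(2.23): B * exp(-theta* k K_A1 - k K_A2),
    with B the ratio of envelope expectations to H. E_MS holds the three
    service-envelope expectations."""
    B = E_MA1 * E_MA2 * math.prod(E_MS) / H
    return min(1.0, B * math.exp(-theta_star * k * K_A1 - k * K_A2))

# Example call with purely illustrative envelope parameters
p_violation = martingale_delay_bound(k=30e-3, theta_star=0.8, K_A1=2.0, K_A2=1.5,
                                     E_MA1=1.1, E_MA2=1.2,
                                     E_MS=[1.05, 1.1, 1.05], H=0.9)
```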

2.5 Tasks Offloading Scheme

In this section, the proposed task offloading policy based on matching theory is described. Matching theory is a well-established mathematical framework that excels at establishing mutually beneficial relationships between elements belonging to two distinct sets. Matching algorithms are particularly valuable because, in contrast to more traditional greedy algorithms, they take into account the utility of both sets participating in a game. This feature enables them to strike a valuable balance between the interests promoted by both sets. Additionally, due to its decentralized structure, matching theory, as opposed to standard game theory and auction theory, which rely on complete knowledge of the players' utilities and actions, operates without requiring full knowledge; instead, it exclusively involves local utility metrics. Therefore, algorithms based on matching theory may offer effective strategies in offloading environments. As mentioned earlier, matching theory operates based on preference lists, where each element belonging to the sets involved in the matching game expresses its level of satisfaction in being matched with each element from the opposite set, and vice versa. In the following sections, we begin by defining the preference lists and then formulate the matching game between the set $C$ of the two computational node alternatives and the set $U$ of flows.

2.5.1 Flows Preference List

For each flow u, having deadline $t_{d,u}$, and for each $c \in C$, the preference list metric $V_u(c)$ of the u-th flow considering c as computation site is given by
$$V_u(c) = P_{c,u}(W(n) \ge t_{d,u}), \qquad (2.26)$$

where $P_{c,u}$ is the probabilistic bound formulated as in (2.22). The number of allocated flows, i.e., the flows for which the computation site has been selected, grows as the matching game proceeds. In particular, after the allocation of each flow, i.e., after each algorithm iteration, the preference lists of the unallocated flows have to be updated in order to take into account the impact of the previously allocated flows. From the SNC perspective, as also represented in Fig. 2.3, the presence of previously allocated flows on a node $c \in C$ is modeled as high-priority traffic flows, in which the priority is inversely proportional to the allocation order [44]. Therefore, the flows allocated earlier acquire higher priority than those allocated later during subsequent algorithm steps.

Fig. 2.3 SNC model for allocation


2.5.2 Computational Nodes Preference List

Each computational node $c \in C$, for each $u \in U$, creates its preference list $E_c(u)$ as
$$E_c(u) = \frac{1}{t_{d,u}}, \qquad (2.27)$$

expressing a higher preference for the flows having a lower deadline. A modified version of the Gale–Shapley algorithm (GSA) [44] is, therefore, defined as follows; a minimal sketch of this procedure is given after the list:

1. Each flow $u \in U$ creates its preference list in reference to (2.26).
2. Each computation node $c \in C$ builds its preference list in accordance with (2.27).
3. Each unallocated flow proposes to the most preferred node in its list; each $c \in C$ receiving more than one proposal accepts the most preferred one in accordance with its preference list (2.27), while the others are rejected.
4. Repeat (1)–(3) until all the flows have been allocated to one $c \in C$.

Since the preference list of each flow depends on the preferences of the other flows, the formulated matching game can be referred to as a matching game with externalities. In fact, a matching game with externalities is a matching in which there exist interdependencies and relations among the players' preference lists.
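The following Python sketch gives an illustrative, deferred-acceptance interpretation of steps (1)–(4); the functions bound() and deadline are placeholders standing for the SNC violation-probability computation of (2.26) and for the per-flow deadlines of (2.27), and they are not part of the chapter's formulation.

```python
def offload_matching(flows, nodes, bound, deadline):
    """Sketch of the modified GSA: unallocated flows repeatedly propose to the
    node with the lowest violation-probability bound (recomputed against the
    flows already allocated there, i.e., the externalities); each node accepts
    the proposer with the smallest deadline, per (2.27)."""
    allocation = {}                      # flow -> chosen computation node
    unallocated = set(flows)
    while unallocated:
        proposals = {}
        for u in unallocated:            # rebuild preference lists, propose to the best node
            best = min(nodes, key=lambda c: bound(u, c, allocation))
            proposals.setdefault(best, []).append(u)
        for c, candidates in proposals.items():
            winner = min(candidates, key=lambda u: deadline[u])   # lower deadline preferred
            allocation[winner] = c
            unallocated.discard(winner)  # rejected flows re-propose at the next iteration
    return allocation
```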

2.5.3 Stability Analysis

In contrast to standard matching games, games with externalities are very challenging to handle, since there exists no matching algorithm that surely converges to a stable matching. With the aim of proving the stability of the formulated matching game, the following strictly-two-sided exchange-stability (S2ES) definition is introduced, on the basis of the definition previously detailed in [45].

Definition 1 Let $Z$ be the outcome matching and let $Z(u)$ be the node $c$ matched with the u-th flow, in accordance with matching $Z$. Matching $Z$ satisfies the S2ES if there does not exist a pair of flows $(u_1, u_2)$ s.t.:

1. $V_{u_1}(Z(u_1)) \ge V_{u_1}(Z(u_2))$.
2. $V_{u_2}(Z(u_2)) \ge V_{u_2}(Z(u_1))$.
3. $E_{Z(u_1)}(u_2) \ge E_{Z(u_1)}(u_1)$.
4. $E_{Z(u_2)}(u_1) \ge E_{Z(u_2)}(u_2)$.
5. $\exists \psi \in \{u_1, u_2\}$ s.t. at least one of the conditions (1)–(2) is strictly verified.
6. $\exists \phi \in \{Z(u_1), Z(u_2)\}$ s.t. at least one of the conditions (3)–(4) is strictly verified.


The insight of Definition 1 is that a swap is allowed only if an improvement for at least one of the players involved in the game is achieved, while all the remaining elements do not get worse. In order to discuss the stability of the formulated game, we admit the existence of a pair of flows $(u_1, u_2)$ for which the conditions (1)–(2) of Definition 1 are satisfied. Supposing that $Z(u_1) = c_1$ and $Z(u_2) = c_2$, we obtain
$$V_{u_1}(c_1) \le V_{u_1}(c_2), \qquad (2.28)$$
$$V_{u_2}(c_2) \le V_{u_2}(c_1). \qquad (2.29)$$
Concerning the satisfaction of condition (5) of Definition 1 by (2.28) and (2.29), since the proposed offloading policy does not include any discard strategy, the delay suffered by the allocated flows cannot change after their assignment, i.e., the delay cannot decrease. Therefore, we have $V_{u_1}(c_1) = V_{u_1}(c_2)$ and $V_{u_2}(c_2) = V_{u_2}(c_1)$, and condition (5) is not satisfied. Vice versa, considering $d_1$ and $d_2$ the time deadlines corresponding to flows $u_1$ and $u_2$, respectively, if $c_1$ prefers $u_2$ to $u_1$, this necessarily means that $d_2 \le d_1$. In the same way, if $c_2$ prefers $u_1$ to $u_2$, it means that $d_1 \le d_2$. Therefore, we have $d_1 = d_2$. In conclusion, neither $c_1$ nor $c_2$ obtains any benefit from switching, and condition (6) is not verified, implying that the proposed matching game produces an outcome that satisfies the S2ES property.
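For completeness, the S2ES conditions of Definition 1 can also be checked programmatically on a candidate matching, as in the following sketch; the dictionaries V and E are assumed to hold the precomputed metrics (2.26) and (2.27), and the function names are illustrative.

```python
def is_s2es_blocking_pair(u1, u2, Z, V, E):
    """Check whether the pair (u1, u2) violates the S2ES conditions of
    Definition 1 for matching Z; V[u][c] is the flow-side metric (2.26) and
    E[c][u] the node-side metric (2.27)."""
    c1, c2 = Z[u1], Z[u2]
    flows_ok = V[u1][c1] >= V[u1][c2] and V[u2][c2] >= V[u2][c1]   # (1)-(2)
    nodes_ok = E[c1][u2] >= E[c1][u1] and E[c2][u1] >= E[c2][u2]   # (3)-(4)
    strict_flow = V[u1][c1] > V[u1][c2] or V[u2][c2] > V[u2][c1]   # (5)
    strict_node = E[c1][u2] > E[c1][u1] or E[c2][u1] > E[c2][u2]   # (6)
    return flows_ok and nodes_ok and strict_flow and strict_node

def satisfies_s2es(flows, Z, V, E):
    """A matching Z satisfies the S2ES property if no pair of flows is swap-blocking."""
    return not any(is_s2es_blocking_pair(u1, u2, Z, V, E)
                   for i, u1 in enumerate(flows) for u2 in flows[i + 1:])
```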

2.6 Performance Analysis

This section focuses on the performance evaluation of the proposed task offloading approach. Furthermore, it includes performance comparisons with various alternative offloading schemes to confirm the effectiveness of the proposed solution. In this context, the network scenario has been configured to align with the parameter values outlined in [1]. Consequently, simulation results have been generated under actual propagation conditions, resulting in a non-line-of-sight (NLoS) probability of $10^{-3}$. Furthermore, we have assumed the T-EN and the UAV-EN as two heterogeneous computation nodes with independent identically distributed (i.i.d.) computation times, assumed here to be hyperexponentially distributed. The mean computation time for the UAV-EN is 5.25 ms, whereas that of the T-EN is 3.5 ms. Reference deadlines have been set equal to 30 ms. Then, the device computation tasks have been assumed to be data packets with a fixed size equal to 10 Mbits, while $p_H$, unless otherwise stated, has been assumed the same for all the devices and equal to 0.01.

Fig. 2.4 Reliability as a function of the molecular absorption coefficient

Figure 2.4 illustrates the system's performance reliability, specifically the probability of achieving an end-to-end (e2e) delay lower than the flow's deadline, as the coefficient of molecular absorption increases. The figure presents a comparison between analytical predictions (MG) and simulation results (SR), which exhibit a high degree of agreement. Additionally, the figure clearly demonstrates the significant impact of molecular absorption on system performance. Similarly, Fig. 2.5 portrays the reliability as a function of the radius of the interfering region, which represents the distance within which Small Base Stations (SBSs) are considered to interfere with each other. Furthermore, Fig. 2.6 illustrates how the reliability behaves with respect to the bandwidth utilized by both the uplink and downlink channels connecting devices and SBSs. Remarkably, due to the high transmission rates facilitated by 6G connectivity, characterized by low transmission times, the reliability is not significantly affected by the channel bandwidth. Instead, the computation time, which exceeds the transmission time, emerges as the system bottleneck. Figure 2.7 demonstrates the reliability's behavior as a function of $p_H$. Across all these figures, we observe a strong alignment between analytical predictions and simulation results. This alignment underscores the accuracy of the proposed analytical framework, validating its suitability for efficient system design and parameter tuning, thus eliminating the need for computationally expensive simulations.
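As a side note on reproducing simulations of this kind, hyperexponential computation times can be generated as a probabilistic mixture of exponentials, as sketched below; the branch probabilities and rates are hypothetical values chosen only so that the mixture mean matches the 3.5 ms figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperexp_sample(probs, rates, size):
    """Draw hyperexponential service times: pick an exponential branch with
    probability probs[i], then sample Exp(rates[i])."""
    branch = rng.choice(len(probs), size=size, p=probs)
    return rng.exponential(1.0 / np.asarray(rates)[branch])

# Placeholder two-branch mixture whose mean is 0.6*2.0 ms + 0.4*5.75 ms = 3.5 ms
samples = hyperexp_sample(probs=[0.6, 0.4], rates=[1 / 2.0e-3, 1 / 5.75e-3], size=10_000)
```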


Fig. 2.5 Reliability as a function of the radius of the interfering region

2.7 Summary

This chapter has focused on the functional integration of an Unmanned Aerial Vehicle (UAV) equipped with on-board processing capabilities within a terrestrial Edge Node (EN). This integration is part of the emerging paradigm of a UAV-aided Mobile Edge Computing (MEC) system, designed to efficiently manage computation load congestion scenarios and meet stringent Quality of Service (QoS) requirements for novel applications supported by 6G networks. To achieve this objective, we discussed a stable matching algorithm based on a per-flow end-to-end stochastic bound analysis. This analysis was formulated by utilizing Stochastic Network Calculus (SNC) and martingale envelopes, as detailed in references [46] and [37]. These techniques were employed to enhance the accuracy of the SNC approach. Subsequently, we theoretically established the stability of the proposed matching algorithm. Moreover, we provided performance comparisons with alternative task offloading schemes, highlighting the superior performance of our proposed solution.

Fig. 2.6 Reliability as a function of the bandwidth

Finally, we demonstrated the strong alignment between our analytical predictions and simulation results. These simulations were conducted under real-world conditions, including a non-zero probability of non-line-of-sight (NLoS) scenarios. This alignment not only validated the accuracy of our approach but also affirmed its effectiveness in guiding system design and parameter configuration, obviating the need for extensive simulation campaigns.

Appendix

In this appendix, the main definitions concerning the martingale envelopes are reported as support to the analytical evaluations provided in the text.

Definition 2 (Submartingale Process) Let $\{F_n\}_n$ be a filtration such that the stochastic process $\{Y_n\}_n$ is $F_n$-measurable. $\{Y_n\}_n$ is a submartingale process if, for any time $n \ge 1$, $Y_1, Y_2, \ldots$ satisfy
$$E[|Y_n|] < \infty, \qquad E[Y_{n+1} \mid F_n] \ge Y_n. \qquad (2.30)$$


Fig. 2.7 Reliability as a function of $p_H$

Definition 3 (Martingale Process) The stochastic process $Y_1, Y_2, \ldots$ is a martingale process if, for any time $n \ge 1$, it satisfies
$$E[|Y_n|] < \infty, \qquad E[Y_{n+1} \mid F_n] = Y_n. \qquad (2.31)$$

Definition 4 (Supermartingale Process) The stochastic process $Y_1, Y_2, \ldots$ is a supermartingale process if, for any time $n \ge 1$, it satisfies
$$E[|Y_n|] < \infty, \qquad E[Y_{n+1} \mid F_n] \le Y_n. \qquad (2.32)$$

Definition 5 (Arrival Martingale) The arrival process A exhibits martingale arrivals if, for any $\theta > 0$, there exist $K_a \ge 0$ and $h_a : C(X) \to \mathbb{R}^{+}$ such that the process
$$h_a(X_b)\, e^{\theta(A(b) - bK_a)}, \quad b \ge 1, \qquad (2.33)$$
is a supermartingale.


Definition 6 (Service Martingale) The service process S exhibits a service martingale if, for any $\theta > 0$, there exist $K_s \ge 0$ and $h_s : C(X) \to \mathbb{R}^{+}$ such that the process
$$h_s(S_b)\, e^{\theta(bK_s - S(b))}, \quad b \ge 1, \qquad (2.34)$$
is a supermartingale.

Definition 7 (Arrival/Service Martingales) Let $R_1, R_2, \ldots$ be i.i.d. random variables with nonnegative distributions. By generically assuming $A(b) = S(b) = \sum_{g=1}^{b} R_g$, it follows that both A and S admit arrival and service martingales, respectively.

Chapter 3

Edge Intelligent Computing Enabled Federated Learning in 6G Wireless Systems

Federated Learning (FL) methodologies are expected to enable a huge number of applications according to intelligent distributed frameworks based on machine learning [19]. In such a context, the sixth-generation (6G) network technology appears as a promising opportunity to offer fast and reliable communications. This chapter illustrates an FL framework that takes into account both the hesitation of users to take part in the FL process without receiving compensation and the impact of the communication channel conditions. In particular, to support D2D communications among users so as to reduce the energy waste and improve the convergence time of the FL process, the use of an echo state network, running locally at each user site, is discussed [22]. Numerical results are provided to highlight the suitability of the considered approach for applications foreseen for the forthcoming 6G networks.

3.1 Introduction

In the current landscape, the emerging applications envisioned for the next-generation networks, commonly referred to as the sixth generation (6G), are poised to revolve around handling vast volumes of data to build intelligent distributed frameworks based on machine learning [47]. Specifically, the forthcoming 6G networks are anticipated to harness the capabilities of artificial intelligence (AI) technologies, offering efficient and effective approaches to transmit, collect, amalgamate, process, and decipher large datasets seamlessly and ubiquitously. This AI-driven evolution is intended to bolster transformative applications and intelligent services, catalyzing the shift from a paradigm of "connected things" to one of "connected intelligence" [23, 48]. However, two major issues have to be solved to make ML-oriented solutions a concrete reality [49]: (i) data usually need to be kept private and cannot be shared with others due to legal concerns or the General Data Protection Regulation (GDPR);


(ii) adverse behavior of communication channels in THz bands [1]. As a result of these challenges, decentralized learning paradigms are gaining traction as flexible solutions to mitigate or even circumvent these issues. These paradigms aim to keep private data localized on the original device, perform model training locally, and reduce the volume of information transmitted to central servers. In this context, FL has emerged as a promising decentralized learning framework. FL involves collaborative interaction between devices and a central server to train a shared machine learning model [50, 51]. Here is how it works: the shared model is initially downloaded from the central server by the participating devices, each of which then trains the model using its local data. After the local computations, these devices upload only the results of their model updates to the server, rather than transmitting raw data [50, 51]. This collaborative training process benefits the user group, and the resulting model can be utilized by any device within the network [52]. It is worth noting that devices are often constrained in terms of energy, making energy-efficient FL deployment a critical challenge. Furthermore, in real-world scenarios, it is not reasonable to assume that users will unconditionally participate in the FL process. Users may have varying levels of commitment in terms of resources and intentions to engage in FL activities.¹ As a consequence, an incentive mechanism to induce users to participate in the FL framework is discussed here. More specifically, this chapter focuses on a 6G network in which a combined ML and matching theory (MT) framework has been developed to incentivize users to take part in the learning process and to select a suitable set of them [15, 22]. In particular, the forecasting of the channel conditions enables an efficient selection of the set of users participating in the FL process.

3.2 Federated Learning with an Ideal Channel

As reported in Fig. 3.1, FL [47, 53] enables local training at the device level. In particular, the FL process resorts to an iterative approach articulated in global epochs, each of which consists of the following phases: (1) local computation; (2) model exchange; (3) central computation. During the first step, devices perform the local data training on the basis of a shared model downloaded from the aggregator $A$. Each device i belonging to the set of devices participating in the FL, $N = \{1, \ldots, N\}$, with $N \subset M$, has a dataset of individual data denoted with $D_i$, typically derived from the application usage of user i (for example, the response time of the device applications [54]). For each sample j in $D_i$, the objective is to find a model parameter w able to minimize the loss function $L_j(w)$. Therefore, each device i has to solve the following minimization [54]:

¹ In what follows, with the aim of simplifying the notation, users and devices will be used interchangeably.


Fig. 3.1 Federated learning system model

$$\min_{w} L_i(w) = \frac{1}{|D_i|} \sum_{j \in D_i} L_j(w). \qquad (3.1)$$

Hence, the corresponding learning model results to be the minimization of the following global loss function:

$$\min_{w \in \mathbb{R}^e} L(w) = \sum_{i=1}^{N} \frac{|D_i|}{\sum_{i=1}^{N} |D_i|} L_i(w), \qquad (3.2)$$

where e denotes the input size. In particular, device i has to solve the local problem [54] at each local computation round t of the FL process,
$$w_i^{(t)} = \arg\min_{w_i \in \mathbb{R}^e} F_i(w_i \mid w^{(t-1)}, \nabla L^{(t-1)}), \qquad (3.3)$$
where $F_i$ is the objective of user i, $w^{(t-1)}$ is the global parameter produced during the previous iteration, and $L^{(t-1)}$ is the global loss function at time $(t-1)$. Once the local model training is completed, each device i uploads $w_i^{(t)}$ to $A$ during the second step, in which the aggregator receives the device weights. On its side, $A$ possesses the global model that, during step (3), is improved by performing the weighted average of the local updates $w_i^{(t)}$ previously uploaded by the devices.


Hence, at phase (3), $A$ operates as follows:
$$w^{(t+1)} = \frac{1}{N} \sum_{i=1}^{N} w_i^{(t)}, \qquad (3.4)$$
and
$$\nabla L^{(t+1)} = \frac{1}{N} \sum_{i=1}^{N} \nabla L_i^{(t)}. \qquad (3.5)$$

Then, the iterative scheme is repeated until a desired accuracy or the maximum number of iterations is achieved. We stress again that, due to its distributed nature, the FL process presents several advantages in terms of device privacy, since the local training is performed exclusively at the device site.
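To make the three-phase iteration of (3.1)–(3.5) concrete, a minimal federated-averaging loop is sketched below in Python; the local training routine, the vector representation of the model, and the stopping threshold are simplifying assumptions and are not prescribed by the chapter.

```python
import numpy as np

def federated_averaging(local_datasets, local_train, init_w, max_rounds=100, tol=1e-4):
    """Minimal FedAvg-style loop mirroring phases (1)-(3): local computation,
    model exchange, and uniform central aggregation as in (3.4)."""
    w_global = np.asarray(init_w, dtype=float)
    for t in range(max_rounds):
        # phase (1): each device trains locally starting from the shared model
        local_models = [local_train(w_global, D) for D in local_datasets]
        # phases (2)-(3): devices upload updates; the aggregator averages them
        w_new = np.mean(local_models, axis=0)
        if np.linalg.norm(w_new - w_global) < tol:   # simple stopping criterion
            return w_new, t + 1
        w_global = w_new
    return w_global, max_rounds

# `local_train(w, D)` is a placeholder, e.g. a few SGD steps on the device dataset D.
```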

3.3 Federation Learning in Actual Scenarios

As a case of interest, we focus on a 6G network operating at THz frequencies [15, 22]. Here, we have a set $M$ of devices, all with the ability to be linked to small base stations (SBSs), and a central processing unit $A$, named the aggregator, which is in charge of merging, understanding, and/or storing the data stemming from the network. Without loss of generality, for ease of tractability, we assume here that $A$ coincides with the SBS to which the devices taking part in the FL process are connected. We assume that each device i is equipped with a CPU having working frequency $f_i$, i.e., a total computational capacity of $f_i$ given in the number of CPU cycles per unit time. Moreover, with $\lambda_i$ being the percentage of occupied processing capacity,² the available processing capacity is $(1 - \lambda_i) f_i$, as in [52]. From the above, the time spent by device i to perform the local model computation results in
$$t_i = \log\left(\frac{1}{\epsilon_i}\right) (1 - \lambda_i) f_i D_i, \qquad (3.6)$$
in which $\log\left(\frac{1}{\epsilon_i}\right)$ represents the number of local iterations needed to achieve the local accuracy $\epsilon_i$ that device i can provide [52, 55].

² $\lambda_i$ represents the computational capacity occupied by background processes or other activities, such as the ESN described in the next section.


Hence, with $p_i$ being the power consumption of the i-th CPU, we have
$$E_{t_i} = p_i \log\left(\frac{1}{\epsilon_i}\right) (1 - \lambda_i) f_i D_i = p_i t_i. \qquad (3.7)$$

Furthermore, we have to take into account the communication cost of both the local parameter uploading to $A$ and the global parameter broadcasting to the users. For simplicity, we discuss in what follows [56] only the uplink channel case and assume no mutual interference among users [15, 22]. Hence, under the assumption of an equal transmission power P for all users and an equal data rate toward $A$, we have [1, 57, 58]
$$R_A = W \log_2\left(1 + \mathrm{SNR}_t\right). \qquad (3.8)$$

In this context, the term $\mathrm{SNR}_t$ represents the minimum signal-to-noise ratio required at the Small Base Station (SBS) site to ensure reliable data detection. Given the inherent susceptibility of Terahertz (THz) channels to blockage, molecular absorption effects, and variations in communication range, the quality of the communication channel can fluctuate significantly over time. Specifically, when the instantaneous signal-to-noise ratio (SNR) at the SBS site falls below the threshold $\mathrm{SNR}_t$, it becomes impossible to achieve reliable data detection. To mitigate the adverse effects of these fluctuations on the FL process, we will discuss the advantages of employing a reliable prediction mechanism based on a local ESN for forecasting the communication channel conditions. Differently from previous solutions, if bad channel conditions are foreseen by the local ESN introduced in Sect. 3.4, the interested user i activates a device-to-device (D2D) link toward an appropriate neighboring device, properly selected as outlined in Sect. 3.5, using the same channel assigned to i to communicate with $A$. Moreover, a power level $P_D$ is used in order to allow a data rate equal³ to $R_A$. Consequently, denoting with $v_i$ the local parameter size expressed in bits for user i, each communication round exhibits a cost in time defined as
$$\tau_i = \frac{v_i}{R_A}\left(x_A + 2(1 - x_A)\right), \qquad (3.9)$$
in which $x_A$ is a binary variable equal to 1 if device i uses the direct link toward the aggregator $A$, i.e., the cellular link, and zero otherwise. From (3.9), it follows that the energy consumption needed to transmit $v_i$ is
$$E_{\tau_i} = P \frac{v_i}{R_A} x_A + \left(P_D \frac{v_i}{R_A} + P \frac{v_i}{R_A}\right)(1 - x_A). \qquad (3.10)$$

³ In order to simplify the discussion of the problem, we have assumed the same data rate for both the direct device-to-$A$ and the D2D communications.


Therefore, considering the i-th device, the total amount of time spent is
$$T_i = t_i + \tau_i. \qquad (3.11)$$
Similarly, the overall energy consumption results in
$$E_{T_i} = E_{t_i} + E_{\tau_i}. \qquad (3.12)$$
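A small sketch that evaluates the per-round cost of a device by following (3.6)–(3.12) exactly as stated is reported below; all input values are placeholders to be replaced with the actual system parameters.

```python
import math

def device_round_cost(eps, lam, f, D, p, v, W, snr_t, P, P_D, direct_link):
    """Per-round time and energy of one device, following (3.6)-(3.12) as stated
    in the text; every numerical argument is a placeholder."""
    t_i = math.log(1.0 / eps) * (1.0 - lam) * f * D          # local computation time (3.6)
    E_t = p * t_i                                            # computation energy (3.7)
    R_A = W * math.log2(1.0 + snr_t)                         # uplink rate (3.8)
    x_A = 1 if direct_link else 0
    tau_i = (v / R_A) * (x_A + 2 * (1 - x_A))                # communication time (3.9)
    E_tau = P * (v / R_A) * x_A + (P_D + P) * (v / R_A) * (1 - x_A)   # (3.10)
    return t_i + tau_i, E_t + E_tau                          # (3.11) and (3.12)
```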

3.3.1 Users' Revenue Model

In practice, it is often reasonable to assume that users may be hesitant to actively participate in the learning process without receiving some form of compensation or incentive. This underscores the importance of incorporating an effective device incentive mechanism and device selection approach within the FL framework. One illustrative example of this concept is provided in [22], while a real-world instance of such an approach can be seen in the case of the Google Keyboard app [56]. In this scenario, users generate a substantial amount of local data while interacting with the keyboard app on their smartphones. From Google's perspective, training a next-word prediction model using this user-generated data can be highly desirable. To encourage user participation, the server can employ advertisement pop-ups within the mobile app. When a user expresses interest in the learning project sponsored by the Google Keyboard, the app prompts the user to submit relevant parameters and calculates the expected utility. If the user confirms their interest in participating, they can download the dedicated app for the project, and their participation application is submitted. Once the aggregator receives all the proposals within a specified timeframe, it proceeds with device selection and initiates the training process by disseminating an initial global model to the selected users. Upon completion of the model training project, the aggregator rewards users, often in the form of monetary compensation, based on the bids it has won. This approach effectively addresses the challenge of incentivizing user participation in FL while ensuring that users are duly compensated for their contributions to the learning process. In [22], it is assumed that each device i has a given cost $\chi_i$ that varies on the basis of the availability of its hardware resources as well as the accuracy provided⁴
$$\chi_i = K t_i, \qquad (3.13)$$

⁴ From a theoretical perspective, a more accurate functional cost may include the quality and the type of the channel exploited. In the authors' opinion, this a priori knowledge is not a reasonable hypothesis considering practical applications.


in which $K$ is the parameter that rules the amount of the user cost and is the same for all the participants. As a consequence, the overall cost that the aggregator $A$ has to pay is $\sum_i \chi_i$. On the other hand, each user has a cost $\mu_i = E_{t_i}$ in taking part in the process, which implies that the utility of user i is
$$\zeta_i = \delta \chi_i - \mu_i. \qquad (3.14)$$
Hence, the corresponding overall utility results to be
$$U = \sum_{i=1}^{N} \zeta_i b_i, \qquad (3.15)$$

where $b_i = 1$ if user i is selected to take part in the FL framework, zero otherwise. In addition, $\delta$ (J/sec) is a weight modeling an additional energy cost as in [54].

3.3.2 Problem Formulation

In order to optimize the FL framework, we pursue here the joint minimization of both the energy consumption and the time spent to converge, i.e., the time needed to complete the model training. According to the approach outlined in [15, 22], we have to face the following optimization problem:
$$\min_{\Gamma, X} Z\left(\sum_{i=1}^{N} E_{T_i} \ \text{AND} \ T_i\right) \ \ \text{JOIN} \ \ \max_{\Gamma, X} U \qquad (3.16)$$
$$\text{s.t.} \quad \sum_{i} \chi_i b_i \le B_A, \qquad (3.17)$$
$$\sum_{j=1}^{N+1} x_{i,j} = 1, \quad \forall i \in N, \qquad (3.18)$$
$$\sum_{i=1}^{N} \gamma_i \le N, \qquad (3.19)$$

in which Z is the upper bound on the number of iterations needed to converge, which depends on the accuracy required, as detailed in [54], and $\Gamma \in \{0, 1\}^N$ is the vector whose generic element is 1 if device i participates in the FL process, and zero otherwise. Then, the matrix $X \in \{0, 1\}^{N \times (N+1)}$ represents the link selection matrix. Considering


$i, j \in \{1, \ldots, N\}$, $i \ne j$, the element $x_{i,j} = 1$ when device i exploits the D2D interface and transmits toward device j, and zero otherwise. Finally, $x_{i,N+1} = 1$ whenever device i communicates toward $A$ through the direct link. Moreover, $x_{i,N+1} = 1$ means $x_A = 1$ in (3.10) and (3.9). Constraint (3.17) imposes that the overall cost paid by the aggregator cannot exceed the maximum available aggregator resources $B_A$. Similarly, constraint (3.18) points out that each user can enable only one communication interface, i.e., either the direct link or the D2D interface. Finally, Eq. (3.19) denotes that the maximum number of users involved in the FL framework cannot exceed the actual number of users in the network.

3.4 Channel Performance Prediction Empowered by Artificial Intelligence

Fig. 3.2 ESN architecture

The ESN considered here is sketched in Fig. 3.2 and is detailed in [15]. In accordance with the literature [59], the ESN consists of three components: the input weight matrix I, the reservoir weight matrix R, and the output weight matrix W. We denote with $x^{q\times 1}$ the input vector, assuming as reservoir updating rule the following equation [59]:
$$u^{s\times 1}(q) = \tanh\left(W_{in}^{s\times q}\, x^{q\times 1}(q) + W_r\, u^{s\times 1}(q - 1)\right), \qquad (3.20)$$

in which $u^{s\times 1}$ represents the vector of internal units in the reservoir part, and $W_{in}^{s\times q}$ is the weight matrix associated with the connections existing between the input layer and the reservoir level. Then, $W_r\, u^{s\times 1}(q-1)$ is the recurrent term, where $W_r$ is the recurrent weight matrix. Denoting with $v(q)$ the output vector and with $W_{out}^{q\times s}$ the weight matrix associated with the connection between the reservoir and the output layer, the relationship between the reservoir and the output level can be described as


$$v(q) = W_{out}^{q\times s}\, u^{s\times 1}(q). \qquad (3.21)$$

Therefore, the ESN employed for our specific purpose utilizes historical channel samples denoted as I to predict the forthcoming channel conditions for each device within the set .N . Given the presence of temporal correlations between successive samples, the ESN proves to be a highly valuable tool for channel condition prediction. This is attributed to its capability to capture and model the temporal relationships among these samples effectively.
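A minimal echo state network in the spirit of (3.20)–(3.21) is sketched below; the reservoir size, the spectral-radius scaling, and the ridge-regression readout are standard ESN design choices assumed here for illustration and are not specified in the chapter.

```python
import numpy as np

class MiniESN:
    """Minimal echo state network: a fixed random reservoir with tanh updates,
    as in (3.20), and a linear readout, as in (3.21), trained by ridge regression."""
    def __init__(self, n_in, n_res, n_out, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W_r = W * (rho / max(abs(np.linalg.eigvals(W))))  # echo-state scaling
        self.W_out = np.zeros((n_out, n_res))
        self.u = np.zeros(n_res)

    def step(self, x):
        # reservoir update (3.20) followed by the linear readout (3.21)
        self.u = np.tanh(self.W_in @ x + self.W_r @ self.u)
        return self.W_out @ self.u

    def fit(self, X, Y, ridge=1e-6):
        # collect reservoir states for the training inputs, then solve the readout
        states = []
        for x in X:
            self.step(x)
            states.append(self.u.copy())
        S, Y = np.array(states), np.asarray(Y)
        self.W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ Y).T

# Usage sketch (placeholder data): one-step-ahead prediction of normalized SNR samples.
# esn = MiniESN(n_in=1, n_res=50, n_out=1)
# esn.fit(snr[:-1].reshape(-1, 1), snr[1:].reshape(-1, 1))
```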

3.5 Matching Theory for Devices Selection

The local ESNs enable the associated devices to communicate to $A$ the channel predictions achieved on the basis of a target quality value $\mathrm{SNR}_t$. Let $E_{h,i}$ be the set of predicted channel state information coefficients considering the instant h as time horizon, i.e., $E_{h,i} = \{e_{1,i}, \ldots, e_{h,i}\}$, in which $e_{y,i}$ is a binary term related to the y-th forecast sample ahead of the end of the training set used by the ESN on device i at step y, assuming the value 1 if the foreseen SNR at the SBS site is greater than or equal to $\mathrm{SNR}_t$, and 0 otherwise. Furthermore, all devices belonging to $M$ send the $\zeta_i$ values to $A$. Once the aggregator has received all the device information, it creates a list $\Delta$ and, for each device, $A$ performs a threshold mechanism, ranking in descending order the received $E_{h,i}$ sets on the basis of the number of samples having value 1. In formal terms, for each device $i \in M$, the following set is created:
$$B_{i,h} = \{e_{y,i} \in E_{h,i} \mid e_{y,i} = 1\}. \qquad (3.22)$$

Then, all the sets $B_{i,h}$ are sorted in descending order considering their cardinality $|B_{i,h}|$, for each device $i \in M$. At this point, the one-sided matching game algorithm [60] is applied between the set of devices $M$ and the aggregator $A$, in order to select devices for the FL process in a way that is advantageous for all the players belonging to $M$ and for $A$ [44, 60]. In this chapter, the preference list of $A$ over $M$ represents the utility of the system in accepting the users for the federated training, considering also the users' revenue. Therefore, the $A$ preference list is built considering the following metric:
$$H_A(i) = \zeta_i \left(N - \mathrm{rank}(i, \Delta)\right), \qquad (3.23)$$
for each device i belonging to $M$. Consequently, the most preferred device $i^{*}$ is given by
$$i^{*} = \arg\max_{i \in M} H_A(i). \qquad (3.24)$$


Fig. 3.3 Devices selection flow

The corresponding preference list is built by sorting the devices in descending order on the basis of the $H_A(i)$ value, $\forall i \in M$. Then, scrolling the preference list top-down, $A$ accepts devices as long as $B_A$ is not exhausted. Summarizing, as also reported in Fig. 3.3, the algorithm steps are as follows (a minimal sketch of this selection loop is given after the list of D2D schemes below):

1. Each device $i \in M$ performs the prediction of the channel state information (CSI) of the direct link toward the aggregator, h steps ahead, exploiting the local ESN, and sends this information to $A$. The set $E_{h,i}$ is created.
2. For each user $i \in M$, $A$ creates the sets $B_{i,h}$ and creates the list $\Delta$ by sorting the $B_{i,h}$ sets in descending order on the basis of their cardinality.
3. $A$ builds its preference list.
4. $A$ selects the most preferred device $i^{*}$ from its preference list to take part in the FL process.
5. $A$ pays $\chi_{i^{*}}$ to device $i^{*}$.
6. The available resources of $A$ are updated: $B_A - \sum_i \chi_i b_i$.
7. Delete $i^{*}$ from the preference list.
8. Repeat steps (5)–(8) until $B_A - \sum_i \chi_i b_i \ge b_{i^{*}}$ and there exists at least one unselected device.

We refer in what follows to the following cooperative D2D selection schemes [22], under the assumption that the number of devices selected to participate in the FL process is always lower than the total number of available devices:

• Ideal D2D selection scheme: Once the device selection process has been completed, the aggregator $A$ communicates to the selected devices their participation in the FL process through a beacon signal that marks the commencement of the FL cycle. At the start of each FL round, every participating device obtains an updated forecast of the channel propagation conditions toward $A$ by utilizing its local ESN, based on the information available at the end of the current FL processing round. In cases where a participating device detects unfavorable channel conditions, it initiates a device discovery procedure. This procedure involves searching among nearby devices that are not engaged in the FL process and do not experience poor propagation conditions toward $A$. Subsequently, the interested device establishes a D2D communication link with the selected nearby device, making use of the same channel allocated for communications with $A$. The selected nearby device then forwards the received update to $A$ over the same channel. This approach eliminates the need for additional channel allocation by the Small Base Station (SBS) and helps prevent D2D interference. In scenarios where a device encounters unfavorable channel conditions and is unable to locate an appropriate nearby device for D2D communication during the discovery procedure, it chooses to skip the training step. Consequently, it refrains from transmitting its update to the aggregator $A$. This approach is designed to prevent energy wastage in futile transmission attempts. It is worth noting that while this D2D selection scheme is ideal for various reasons, we refrain from discussing the specific details here.

• SBS-assisted D2D selection scheme: In contrast to the previous ideal scheme, where D2D selection is performed by each interested device during the FL execution, this procedure adopts an "a priori" device selection approach carried out by the aggregator $A$. After the initial selection of devices participating in the FL framework, each device within $M$ sends its set $E_{h,i}$ and location information to $A$. Following the completion of the matching game to choose the devices that will partake in FL (referred to here as FL selected devices), as described in Sect. 3.5, $A$ searches for the nearest device to each FL selected device that is not engaged in the FL process. This nearby device is designated as a cooperating D2D host and is intended to be leveraged in case of anticipated poor channel conditions at the end of an FL round. In such instances, the beacon signal marking the commencement of the FL process, along with the notification regarding the involvement of the FL selected devices in the FL process, also conveys information about the potential activation of the D2D procedure for each selected device. If D2D activation is deemed necessary, the beacon signal specifies the most suitable nearby device for cooperation. Furthermore, through the beacon signal, all devices selected to support D2D communications with the FL selected devices receive instructions about their assigned tasks and the designated communication channel for both the D2D communications and the transmission of the received updated model information back to $A$. It is worth emphasizing that each device can serve as a cooperating D2D host for only one FL selected device, ensuring efficient utilization of resources.⁵ Moreover, in case no cooperating D2D host is found for a given FL selected device, the D2D procedure for that device is denied, and the training step as well as the update transmission to $A$ is skipped, in order to prevent an energy waste. As for the ideal D2D selection scheme outlined before, the D2D transmission, as well as the subsequent transmission of the updated data from the selected cooperative device to $A$, is performed on the same channel allocated to the interested FL-involved device.

⁵ In the case of a multiple selection as D2D host by more than one device involved in the FL process, the choice is random.
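A compact Python sketch of the aggregator-side selection loop (steps 1–8 above) is reported below; the data structures and the greedy budget check are illustrative assumptions consistent with the description, not the chapter's reference implementation.

```python
def aggregator_selection(devices, E, zeta, chi, budget):
    """Rank the predicted-CSI sets by their number of 'good' samples (3.22),
    score devices with H_A(i) = zeta_i (N - rank(i, Delta)) as in (3.23), and
    accept devices top-down while the budget B_A can cover their cost chi_i."""
    N = len(devices)
    delta = sorted(devices, key=lambda i: sum(E[i]), reverse=True)   # list Delta
    rank = {i: r for r, i in enumerate(delta)}
    preference = sorted(devices, key=lambda i: zeta[i] * (N - rank[i]), reverse=True)
    selected, remaining = [], budget
    for i in preference:
        if chi[i] <= remaining:          # A pays chi_i and updates its resources
            selected.append(i)
            remaining -= chi[i]
    return selected
```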

3.6 Performance Analysis

In order to present some numerical results related to the performance of the scheme discussed in this chapter, we refer to the system parameter values as in [nostro]: $P = 600$ mW, $P_D = 200$ mW, $W = 13$ GHz, $K(\phi) = 0.0016\ \mathrm{m}^{-1}$, a water vapor percentage equal to 1%, and a carrier frequency equal to 1 THz, as in [1]. Moreover, the range for discovering nearby devices is set to 3.86 m to allow D2D communications at the same rate as the direct link, $\epsilon_i = 0.1$, $v_i = 5$ MB and equal for all the devices belonging to $M$, and, finally, the amount of data for each device has been set to $D_i = 1000$ MB, as in [55]. Similarly, in reference to [55], the CPU frequency has been randomly selected within the interval [10, 20] MHz. The accuracy of the ESN considered in this chapter to foresee the channel conditions is provided in terms of the mean squared error (MSE), defined as
$$\mathrm{MSE} = \frac{1}{|E_{i,h}|} \sum_{\eta=1}^{|E_{i,h}|} \left(\hat{\iota}_{E_{i,h}} - \iota_{E_{i,h}}\right)^2, \qquad (3.25)$$

in which $\hat{\iota}_{E_{i,h}}$ and $\iota_{E_{i,h}}$ are the predicted and actual values, respectively. In addition, in order to test the ESN accuracy, we have resorted to actual 6G channel data measurements released by [61].


Fig. 3.4 MSE as a function of the time horizon h (CSI MSE vs. prediction horizon h)

Fig. 3.5 Global FL delay (ms) as a function of the number of iterations (curves: Ideal D2D, Realistic D2D, Vanilla FL)


Fig. 3.6 Accuracy as a function of the number of iterations (FL model accuracy; curves: Proposed Framework, Anticipated D2D)

Furthermore, Fig. 3.6 displays that the accuracy reaches higher levels when adopting the proposed framework compared to the vanilla alternative. It is evident that the accuracy increases as the number of iterations grows for both schemes, considering the same communication opportunities. This improvement is a result of the ongoing learning process. Lastly, Fig. 3.7 showcases the system's energy consumption as a function of the power level used for transmission by the FL selected devices when communicating with the aggregator. As shown, increasing the transmission power results in higher energy consumption. Nevertheless, Fig. 3.7 highlights again the advantages of the proposed scheme in comparison to the vanilla alternative. Finally, Fig. 3.8 shows the revenue trend as a function of the number of devices. Again, the proposed framework clearly outperforms the vanilla scheme. In fact, the behavior of the curves highlights the effectiveness of both the incentive and the selection strategy in improving the federated training behavior.


Fig. 3.7 System energy consumption (J) as a function of the transmission power (W) (curves: Proposed Framework, Vanilla Framework)

Fig. 3.8 Revenue as a function of the number of devices (curves: Proposed Framework, Vanilla FL)


3.7 Summary

This chapter has addressed the challenge of device selection with incentives in a D2D-assisted FL scheme designed for applications in 6G-based networks. Within this context, the chapter has showcased the benefits of combining Machine Learning (ML) and Matching Theory (MT) approaches within a well-designed FL framework. In this framework, users have the capability to forecast the channel communication conditions toward the aggregator using a local ESN. Based on these channel state predictions, users can take action when poor propagation conditions are anticipated. This involves enlisting the cooperation of a nearby device, if available, and activating D2D communication to transmit their local updates to the aggregator. Simulation results have been presented to underscore the advantages of the proposed D2D-aided FL scheme with an incentive mechanism within a 6G environment. These advantages are showcased in comparison to a vanilla alternative, particularly in terms of achieving lower global FL delay and reduced energy consumption.

Chapter 4

Edge Intelligent Computing in Aqua Environments

Nowadays, an ambitious target of the next-generation networks is to develop an edge intelligence ecosystem able to efficiently operate in heterogeneous domains. Toward this end, the underwater environment requires special attention, since it is recognized as the most challenging domain. Within this context, this chapter aims at illustrating a self-intelligent ground–aqua integrated system where the emerging semantic communication paradigm is envisaged to counteract the hostile behavior of underwater channels. In particular, we discuss in this chapter the use of a deep convolutional neural network-based encoder–decoder architecture for this purpose. Numerical results are included here to show the better behavior of the proposed system in comparison with the conventional alternative that does not provide the use of the semantic communications approach. Finally, a specific performance evaluation analysis is devoted to the convergence behavior of the proposed Federated Learning (FL) procedure in reference to the considered cross ground–aqua system, to highlight its advantages with respect to a classical implementation.

4.1 Introduction

The emergence of the next-generation networks is giving rise to a novel and wide class of applications, requiring us to create an intelligent environment able to properly support disruptive new-generation applications. Nowadays, the ever-increasing proliferation of mobile computing [9] has led to the need of developing a seamless integration of systems operating in different domains, e.g., ground, air, space, and even underwater [9]. This has generated an intense research effort to push the artificial intelligence (AI) frontiers to the network edge, regardless of the domain in which the systems operate [14]. This trend is in accordance with the emerging edge intelligence (EI) paradigm, which involves the use of edge computing


nodes (ENs), arranged in proximity of the data sources, to support data gathering and computation, with the aim of hosting dedicated machine learning procedures to properly interpret and manipulate the data stemming from the network devices [28]. Nevertheless, EI is still in its infancy, and the effective exploitation of AI techniques at network edges still represents a crucial problem. Recently, the advent of 6G technology, with the ambitious applications it brings with it, has generated the imperative need to extend the unified AI-based network paradigm to space–air–ground–aqua integrated domains [9]. Such an innovative vision finds its roots in the fact that about 71% of the Earth's surface is occupied by oceans, and a large portion of these is still unexplored. The challenging objective here is to include AI capabilities at the edge of a new generation integrated network (NGIN), to give rise to an EI-NGIN capable of information acquisition, processing, interpretation, and transmission in different domains and in relation to the application purposes [9]. In this picture, this chapter considers the case of a proper ground–aqua EI-NGIN able to support the FL paradigm [50, 51, 54] to effectively train models on data (i.e., images) collected by underwater devices to perform infrastructure (e.g., oil platforms) or environment (e.g., marine life) monitoring. In this context, the optimization of underwater data communications becomes crucial. Therefore, an AI-empowered semantic communications framework has been envisaged to extract and send only the bits related to the semantic information of each collected image, instead of bits related to the statistical knowledge of the source symbols.

4.2 Intelligent Data Communication Framework

Fig. 4.1 FL over a ground–aqua network architecture

In reference to Fig. 4.1, a two-layer EI network has been considered. Whereas the first layer represents the aqua network domain, the second layer consists of the ground network level, in which a small ground base station (SBS) $S$ coincides with an edge network node (ENN), for which the terms SBS and ENN will be used interchangeably. The ENN $S$ has been assumed to be equipped with processing and storage capabilities, in order to perform data fusion, interpretation, and manipulation of the images collected by the underwater devices. Since underwater acoustic wireless networks use sound waves as the medium, we have considered the presence of a set of shore-SBSs (SSBSs) $B = \{1, \ldots, B\}$, also in this case represented by ENNs capable of computation, arranged to receive and forward the data deriving from the aqua layer. Therefore, the presence of a set of maritime devices $M = \{1, \ldots, M\}$ performing data gathering, i.e., in our case images from the deep sea, has been assumed. In this reference, we have considered a set of images $I = \{s^m\}_{m=1}^{M}$, and each maritime device has a given picture belonging to $I$ that needs to be sent to the associated SSBS. By exploiting image processing techniques, the corresponding semantic information $\iota^u$ can be properly extracted from each picture $s^u$. Note that the semantic communications scheme proposed in Sect. 4.4 is assumed to be embedded on board each maritime device belonging to $M$, since it is instrumental to effectively transmit the images gathered from the devices in $M$ to the SSBSs. Then, on the basis of the images collected by the SSBSs and stemming from the maritime devices, the FL is performed involving the SSBSs and the ENN.

4.3 Federated Learning for Ground–Aqua Environments

Federated Learning, as described in references [53] and [47], is a learning paradigm involving two types of agents: low-level devices, often end-devices, and a more organized and powerful node, such as a central unit or an aggregator. In FL, end-devices perform local training, and then there is mutual interaction between these devices and the aggregator to improve a shared machine learning model. FL operates iteratively and consists of global epochs, each of which is further divided into three phases:

• Local Computation: In this phase, end-devices perform local model training using their own data.
• Model Exchange: Following local computation, end-devices send their locally trained models (or model updates) to the central aggregator.
• Central Computation: The central aggregator receives the model updates from the end-devices and aggregates them to generate an updated global model. This global model is then sent back to the end-devices for the next round of local computation.


In our specific context, FL is executed between the shore SBSs (SSBSs) and the Edge Node (EN). The SSBSs, which typically correspond to the end-device level in this framework, gather data from the maritime devices belonging to $M$ for training their models. The central aggregator, represented by the ground Small Base Station $S$, is connected to the SSBSs through radio frequency channels. It plays the role of aggregating and coordinating the training process, ensuring that a global model is continuously improved based on the contributions of all participating SSBSs. This approach is particularly relevant in challenging environments such as underwater scenarios, where the communication conditions are affected by factors such as propagation and absorption. In step (1), the SSBSs perform local data training considering the shared model previously downloaded from $S$ and the dataset created by collecting images from the devices belonging to the aqua layer, following the approach presented in Sect. 4.4. Therefore, we denote with $D_b$ the dataset of the b-th SSBS, which is composed of the images received from the sea devices that lie in the coverage area of SSBS b.¹ Consequently, for each sample j in $D_b$, the main goal is the identification of a model parameter w that minimizes the loss function $L_j(w)$. Therefore, each SSBS b solves the minimization problem [54]
$$\min_{w} L_b(w) = \frac{1}{|D_b|} \sum_{j \in D_b} L_j(w). \qquad (4.1)$$

The corresponding learning model is represented by the minimization of the global loss function given by
$$\min_{w \in \mathbb{R}^e} L(w) = \sum_{b=1}^{B} \frac{|D_b|}{\sum_{b=1}^{B} |D_b|} L_b(w), \qquad (4.2)$$

in which e is the input size. Furthermore, during each local computation round t of the FL framework, the SSBS b solves the local problem [54]
$$w_b^{(t)} = \arg\min_{w_b \in \mathbb{R}^e} F_b(w_b \mid w^{(t-1)}, \nabla L^{(t-1)}), \qquad (4.3)$$

in which $F_b$ represents the objective function of SSBS b, $w^{(t-1)}$ is the global parameter produced during the previous iteration, and $L^{(t-1)}$ is the global loss function at time $(t-1)$. Once each SSBS b has completed the local model training, it uploads $w_b^{(t)}$ to $S$ during the second step, in which $S$ collects the weights received from the SSBSs belonging to $B$. In its turn, in step (3), $S$ improves the global model by performing the weighted average of the local updates $w_b^{(t)}$ previously uploaded by the SSBSs.

¹ We have assumed that each underwater device sends the gathered images to the nearest SSBS.


As a consequence, $S$ aggregates the received information by performing the following computations:
$$w^{(t+1)} = \frac{1}{B} \sum_{b=1}^{B} w_b^{(t)}, \qquad (4.4)$$
and
$$\nabla L^{(t+1)} = \frac{1}{B} \sum_{b=1}^{B} \nabla L_b^{(t)}. \qquad (4.5)$$

The procedure just presented is repeated iteratively, until a desired accuracy or a termination criterion such as the maximum number of iterations is satisfied.

4.3.1 Channels and Computation Modeling

In reference to Fig. 4.1, due to the heterogeneous nature of the channels involved, i.e., the coexistence of both RF and acoustic communications, the link capacity characterization changes on the basis of the type of channel considered. Note that, similarly to [56], this chapter exclusively considers the uplink channel. In addition, in this chapter, interference issues are not taken into account. The RF channels, i.e., the channels exploited for the terrestrial communications between the SSBSs and $S$, are 6G links. Therefore, the link between the SSBS b and $S$ is characterized by a data rate $R_{b,S}$ given by
$$R_{b,S} = W \log_2\left(1 + \mathrm{SNR}_t\right), \qquad (4.6)$$
$$\mathrm{SNR}_t = \frac{P A_0 d_0^{-2} e^{-K(f)d_0}}{N_0}, \qquad (4.7)$$

in which P is the transmission power, assumed equal for both the SSBSs and $S$, whereas W represents the bandwidth of the communication link, and $N_0$, considering both the molecular absorption noise and the Johnson–Nyquist noise at the receiving site, results in
$$N_0 = \frac{W \zeta g_B T_0}{4\pi} + P A_0 d_0^{-2}\left(1 - e^{-K(f)d_0}\right), \qquad (4.8)$$

where $g_B$ is the Boltzmann constant, $T_0$ is the temperature in Kelvin, $\zeta$ is the wavelength, $K(f)$ is the global absorption coefficient of the physical medium, and $A_0 = \frac{c^2}{16\pi^2 f^2}$ [1]. Differently, the link connecting the generic underwater device m in $M$ and the nearest SSBS b is characterized by the following $\mathrm{SNR}_u$:

$$\mathrm{SNR}_u = \frac{S_l(f)}{A(l, f)\, N(f)}, \qquad (4.9)$$

where $S_l(f)$ expresses the power spectral density of the transmitted signal, and $A(l, f)$ is the path loss given by
$$A(l, f) = \left(\frac{l}{l_k}\right)^{k} a(f)^{\,l - l_r}, \qquad (4.10)$$

in which f is the signal frequency and l is the transmission distance, measured with reference to the position of the receiver $l_r$. Then, k is the path loss exponent and models the spreading loss, whereas $N(f)$ represents the ambient noise given as a function of f. Therefore, the underwater channel capacity is
$$R_{m,b} = W \log_2\left(1 + \mathrm{SNR}_u\right). \qquad (4.11)$$

In reference to the computation model, each SSBS b has an on-board CPU with working frequency f_b, expressed in CPU cycles per unit time. Therefore, the time needed by the SSBS b to perform the local model computation is

$$t_b = \log\!\left(\frac{1}{\epsilon_b}\right) \frac{D_b}{f_b}, \qquad (4.12)$$

where log(1/ε_b) is the number of local iterations needed to achieve the local accuracy ε_b [52, 55]. Let v_b be the local parameter size, expressed in bits, associated with the SSBS b; then, each communication round exhibits a cost in time defined as

$$\tau_b = \frac{v_b}{R_{b,S}}. \qquad (4.13)$$

Therefore, considering the b-th SSBS and denoting with N the number of communication rounds, the total amount of time spent is

$$T_b = N\left(t_b + \tau_b\right) + \max_{m \in \mathcal{M}} \frac{u_m}{R_{m,b}}, \qquad (4.14)$$

where u_m is the size in bits of the data sent by underwater device m toward the associated SSBS b.
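As a rough illustration of how (4.6)–(4.14) combine, the following sketch evaluates the per-round delay of one SSBS. All numeric values are placeholders chosen for the example (only the 229 bps rate, taken from the MC-BFSK system of [62] and used again in Sect. 4.5, and the absorption coefficient reused from the simulation settings of Chap. 5 come from the book); for a single underwater device the max in (4.14) reduces to a single term.

```python
import numpy as np

# RF link between SSBS b and S (Eqs. (4.6)-(4.8)); values are illustrative
W, P, A0, d0, K_f, N0 = 1e9, 1.0, 1e-4, 50.0, 0.0016, 1e-10
snr_t = P * A0 * d0**-2 * np.exp(-K_f * d0) / N0          # Eq. (4.7)
R_bS = W * np.log2(1.0 + snr_t)                           # Eq. (4.6)

# local computation and upload cost (Eqs. (4.12)-(4.13)); assumed figures
eps_b, D_b, f_b, v_b = 0.1, 5e8, 2.4e9, 1e6
t_b = np.log(1.0 / eps_b) * D_b / f_b                     # Eq. (4.12)
tau_b = v_b / R_bS                                        # Eq. (4.13)

# total FL time for N rounds with one underwater device (Eq. (4.14))
N_rounds = 100
u_m = 28 * 28 * 8                                         # one raw image in bits
R_mb = 229.0                                              # acoustic rate from [62]
T_b = N_rounds * (t_b + tau_b) + u_m / R_mb
print(f"T_b = {T_b:.1f} s")
```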


4.3.2 Problem Formulation

With the aim of optimizing the FL framework for the hybrid aqua–ground domain, the minimization of the mean overall time, i.e., the time needed to receive data from the underwater layer plus the time spent actually training the model, is crucial. Therefore, the main objective of this chapter is to design an overarching framework able to optimize the following problem:

$$\min \frac{1}{B}\sum_{b=1}^{B} T_b. \qquad (4.15)$$

The optimization of problem (4.15) is very challenging, since the involvement of underwater communications in the term T_b dramatically limits the transmission speed, resulting in massive delays and a long FL convergence time. Against this background, the next section proposes a machine-learning-based semantic communication framework that limits the transmission size of the images produced by maritime devices by sending exclusively the semantic meaning previously extracted.

4.4 Underwater Semantic Communications

In semantic communications, the primary objective is to transmit the semantic meaning of the source data rather than the entire dataset [25–29]. The fundamental distinction from classical communication systems lies in the employment of a semantic encoder at the transmitter's end, which is capable of extracting the semantic features from the data. Consequently, only these semantic features are transmitted, and upon reception at the destination, they are processed solely based on their semantic content by the receiver, as opposed to traditional systems where processing occurs at the bit level. As depicted in Fig. 4.1, the semantic communications framework consists of two individual cascaded sub-modules: the semantic and the transmission part. In the context of semantic communications, there are two distinct levels of operations:

• Semantic Level: This level is responsible for semantic encoding and decoding. It involves information processing and the extraction of semantic content from the data. Here, the focus is on understanding and representing the meaning of the information.
• Transmission Level: At this level, the primary concern is ensuring the correct transmission of the semantic information over the physical communication channel. This involves addressing the challenges posed by the underwater channel, which is known for its various impairments and poor propagation conditions.


Given the specific difficulties associated with underwater communication, extensive efforts have been made over the years to mitigate these challenges. One approach involves the adoption of novel modulation techniques designed to alleviate the low data rates often encountered in underwater communication scenarios. These techniques aim to optimize the transmission of semantic information, taking into account the unique characteristics of the underwater channel. For example, we may assume here the application of the multicarrier binary frequency shift keying (MC-BFSK) scheme recently proposed in [62]. According to [62], M/2 parallel and independent subcarriers from a single transducer have been considered, resulting in a composite signal where each signaling instance carries M/2 channel-coded bits [62]. From the results reported in [62], it emerges that all bit errors occur for estimated SNR < 1 dB and that error-free communication is achieved in several cases down to −2.5 dB [62]. Consequently, the errors due to channel impairments are assumed negligible in this chapter. Before transmission, the picture s is mapped into symbols x_m in order to be sent over the physical channel, experiencing transmission impairments due to the sea environment, which implies that the symbol received at the receiver, y_{m,b}, is subject to wireless channel impairments [63, 64]. The framework of interest here focuses on image transmission and consists of an encoder–decoder architecture based on deep CNNs [26]. More in depth, the semantic encoder is realized through a stack of Conv2D and max-pooling layers, whereas the decoder consists of a stack of Conv2D and upsampling layers. According to the literature on this subject [63–65], the encoder part is formed by semantic and physical coding modules. Let α₁ and α₂ be the neural network parameter sets for the channel and the semantic encoder modules, respectively. Therefore, the encoded symbol x_m can be expressed as

$$x_m = C_{\alpha_1}\!\left(S_{\alpha_2}(s)\right), \qquad (4.16)$$

where C_{α₁}(·) and S_{α₂}(·) denote the channel and the semantic encoder functions, respectively. Then, y_{m,b} is decoded by the receiver with the aim of retrieving the original s. As a consequence, the retrieved copy of s, denoted ŝ, results in

$$\hat{s} = S_{\beta_1}^{-1}\!\left(C_{\beta_2}^{-1}\!\left(y_{m,b}\right)\right), \qquad (4.17)$$

in which S_{β₁}^{-1}(·) is the semantic decoder with parameter β₁, whereas C_{β₂}^{-1}(·) is the channel decoder having parameter β₂. Since the sea environment usually exhibits deep channel fluctuations and adverse propagation conditions, the primary objective here is to design an integrated channel–semantic coding scheme able to maximize the similarity between ŝ and s. Referring to [27, 28], a commonly adopted loss function is the binary cross-entropy metric that, together with the Adam optimizer, gives insight into the similarity between ŝ and s. In order to measure the goodness of the encoding–decoding procedure, for a maritime device m, with m ∈ M, transmitting toward the SSBS b, with b ∈ B,


the cosine image similarity metric [65] has been exploited. Hence, the similarity between the image sent by maritime device m toward the SSBS b results in

$$\zeta_{m,b} = \frac{s \cdot \hat{s}}{\|s\|\,\|\hat{s}\|}, \qquad (4.18)$$

that is, the ratio between the dot product of the images, expressed as vectors, and the product of the L2-norms of both vectors. The parameter ζ_{m,b} acts as a feedback metric that captures the level of validity and accuracy provided by the transmission system. In fact, a decrease in ζ_{m,b} due to harsh propagation conditions has a negative impact on the semantic communication quality.
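A minimal Keras-style sketch of the kind of convolutional encoder–decoder described above, together with the similarity metric of (4.18), is reported below under stated assumptions: the input is a 28 × 28 grayscale image, the layer counts and filter sizes are illustrative (the chapter does not specify them), and the physical channel coding modules C_{α₁} and C_{β₂} are omitted for brevity.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# semantic encoder S_alpha2: stacked Conv2D + max-pooling (illustrative sizes)
encoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(8, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                       # 7x7x8 latent "semantic" features
])
# semantic decoder S_beta1^{-1}: stacked Conv2D + up-sampling
decoder = models.Sequential([
    layers.Input(shape=(7, 7, 8)),
    layers.Conv2D(8, 3, activation="relu", padding="same"),
    layers.UpSampling2D(2),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.UpSampling2D(2),
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

def cosine_similarity(s, s_hat):
    """Image similarity zeta_{m,b} of Eq. (4.18)."""
    s, s_hat = np.ravel(s), np.ravel(s_hat)
    return float(np.dot(s, s_hat) / (np.linalg.norm(s) * np.linalg.norm(s_hat)))
```

In practice, the autoencoder would be fitted on normalized images (e.g., MNIST, as used in Sect. 4.5), and cosine_similarity(s, autoencoder.predict(s[None])[0]) would provide the feedback value ζ_{m,b}.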

4.5 Performance Analysis

This section provides a performance analysis of the proposed system, whose focus is the functional integration of an FL process with an underwater semantic communications system. Since the investigation of the underwater data transmission system is out of the scope of this chapter, without loss of generality we have referred here, as an example, to the MC-BFSK underwater data communication system proposed in [62] and, as a consequence, assumed a data transmission rate of 229 bps. Furthermore, in order to properly test the performance of the proposed framework, we have resorted to the MNIST handwritten digits dataset [66], considering an image resolution of 28 × 28 pixels, where each pixel has been assumed to consist of 8 bits. Aiming at validating the proposed SC scheme, Fig. 4.2 shows the loss function value as the number of learning epochs grows. As the figure shows, the loss function decreases, i.e., the performance of the learning model improves, as the number of epochs increases. The behavior depicted in Fig. 4.2 is also confirmed by Fig. 4.3, which relates the encoder dimension to the achieved loss function performance. Also in this case, the loss improves as the number of neural units increases. This is clearly due to the fact that, as the encoder dimension grows, the modeling capability of the considered deep-CNN network increases. Then, Fig. 4.4 depicts the mean total amount of time spent to perform the model training, with and without the involvement of the semantic communications framework. More in depth, when semantic communications are not applied (WSC curve), the image is sent in the conventional way, without extracting and sending exclusively the semantic information. As is evident from Fig. 4.4, the time required to send data when the semantic communications (SC curve) approach is used is significantly lower than the time spent with the conventional method, implying a significant impact on the overall FL completion time and, hence, an improvement in the practical applicability of the training framework. The behavior observed in Fig. 4.4 becomes even more evident when we analyze it in conjunction with the results presented in Fig. 4.5, which focuses on the accuracy gap compared to the


Fig. 4.2 Loss function as a function of the number of epochs

Fig. 4.3 Loss function as a function of the encoder dimension


Fig. 4.4 Total amount of time needed to complete the FL as a function of the number of FL rounds

ideal case (WSC curve), i.e., the scheme that employs the original images without semantic extraction and reconstruction. From Fig. 4.5, it is evident that the accuracy of the FL process is not significantly affected by the semantic extraction–reconstruction process. In fact, the FL process achieves nearly the same accuracy values as the scheme that utilizes original images. When we combine the findings from Figs. 4.4 and 4.5, it becomes apparent that the improvements in convergence time due to the semantic extraction–reconstruction process come without a substantial impact on the accuracy of the overall framework. We can therefore say from this analysis that the semantic extraction–reconstruction process introduces significant benefits by reducing the amount of data to be transmitted, which in turn accelerates the FL convergence process. This outcome is particularly important in practical applications, as it highlights the feasibility of implementing an Edge Intelligence (EI) ground–aqua network. This network leverages semantic communications to mitigate the challenges posed by the hostile underwater communication channels, all while maintaining high data accuracy and achieving a significant reduction in FL convergence time. This has considerable implications for various application scenarios where timely and accurate data processing is crucial.


Fig. 4.5 Accuracy reached by the FL model as a function of the number of iterations

4.6 Summary

This chapter has addressed the challenge of implementing the FL paradigm in an integrated ground–aqua environment, with a focus on utilizing data collected from underwater devices. Specifically, a ground–aqua network, primarily designed for monitoring activities based on image transmissions, has been explored to facilitate the execution of the FL framework. To cope with the challenging underwater communication environment, the semantic communication approach was introduced. This approach involves the extraction and transmission of semantic information, which can help overcome issues associated with hostile sea propagation conditions and long data transmission delays. A deep convolutional neural network (CNN) architecture was designed for this purpose, and its performance was validated using an appropriate underwater channel model. Furthermore, the chapter presented a performance analysis to demonstrate the effectiveness of the proposed framework. Key aspects evaluated include accuracy and the overall time required for the FL process. The results underscore the viability of the comprehensive framework, particularly in terms of maintaining data accuracy and reducing the overall time needed for the FL process. This framework has significant implications for various applications that require monitoring and data processing in challenging environments, such as underwater settings.

Chapter 5

Application of the Digital Twin Technology in Novel Edge Intelligent Computing Systems

New generation networks are expected to represent concrete support for novel edge intelligence computing environments, enabling a plethora of novel, disruptive delay-sensitive applications. In such a context, the advent of the digital twin (DT) paradigm has given new opportunities to Edge Intelligent systems and enabled new applications. This chapter presents an AI-empowered DT framework for a novel Edge Intelligent ecosystem according to the emerging paradigm of an integrated ground–air multiaccess edge computing infrastructure. In particular, we discuss the case of an unmanned aerial vehicle-aided multiaccess edge computing (UAV-MEC) system, where a set of DTs provides decisions to lower the edge computation congestion at the ground edge nodes on the basis of AI-based congestion predictions. A three-dimensional matching game is presented to perform the planning of the task offloading, the channel allocation, and the assignment of the UAV support to the congested ground edge nodes. Then, an architectural overview of the DT network is provided, considering a DT as a composition of elementary cyber entities. Finally, extensive experimental simulations are provided to highlight the advantages of the proposed framework in terms of task completion delay and system congestion.

5.1 Introduction

The emergence of sixth-generation (6G) networks has led to a growing demand for innovative applications. Integrated ground–air networks (IGANs) have surfaced as a promising networking solution to meet diverse service requirements, offering coverage over wide areas with guaranteed service availability, continuity, and scalability. IGANs typically involve cooperative schemes between UAVs and various ground-based mobile devices or network infrastructure nodes, including Edge Computing nodes (ECs) or cloud resources. This collaboration enhances network performance,


resulting in improvements in network behavior and the quality of experience for users. These enhancements address common challenges faced by existing wireless networks. The integration of UAVs equipped with computing capabilities into a MEC network offers several important advantages. It provides high deployment flexibility across various scenarios, supporting real-time demands efficiently. Additionally, it offers effective computation offloading solutions, particularly in areas where the network infrastructure is overloaded or experiencing failures. This integration of aerial and ground-based resources within IGANs represents a significant advancement in the field of networking, with the potential to revolutionize how we meet the evolving communication needs of modern applications and services.

Typical UAV-MEC environments consist of a set of EC nodes equipped with computational and storage capability, arranged to host intelligent computations. In accordance with the MEC paradigm, computation capabilities are moved to the edges of the network, e.g., at the wireless network SBSs, lowering the delay and network congestion typical of previous cloud-based solutions. On the basis of the quality of service (QoS) constraints imposed by applications, tasks originated by EDs have to be offloaded onto the UAV-MEC system following a certain rule. In this respect, proper radio resource exploitation has to be pursued to guarantee efficient and effective service provisioning, limit channel interference drawbacks, and provide advantageous cooperation between the UAV and the ground MEC infrastructure. Furthermore, the functional integration of the DT technology with artificial intelligence (AI) capabilities makes the concept of an intelligent UAV-MEC network concrete. In particular, the DT represents the next wave in simulation, integrating cyber and physical realities to provide virtual models for physical objects in the digital domain and to simulate their behaviors [67]. This plays a fundamental role in increasing the data-driven optimization of system and service planning, achieving unprecedented levels of performance and reliability through real-time monitoring, prediction, estimation, analysis of dynamic changes, and actuation between the physical objects and their digital copies. AI is recognized as a powerful tool for making data-driven decisions, given its ability to self-learn and become increasingly intelligent through data analytics. It plays a crucial role in analyzing real system data, uncovering valuable insights, and obtaining knowledge from the information collected.

Despite the growing emphasis on the importance of DTs in various domains, including manufacturing applications, their integration into UAV-MEC environments remains largely unexplored, representing an open research challenge. This chapter aims to address this gap by outlining the functional integration of DTs empowered by AI within a UAV-MEC network. The goal is to optimize task offloading, minimize the worst task completion time, and reduce network congestion. In this network scenario, we consider disjoint areas, each containing an SBS paired with an EC node, with the terms SBS and EC used interchangeably. To alleviate occasional computational congestion, each SBS can be assisted by a UAV equipped with computational capabilities, especially during temporary traffic


overload conditions. The integration of DTs enables data-driven decision-making processes, including managing channel interference and identifying congested ECs, by collecting data from the physical plane. Within this context, a solution based on Matching Theory (MT) is proposed. This solution aims to allocate tasks to computation sites, whether UAVs or ECs, and to make informed decisions regarding channel assignments and which service areas require UAV support. The integration of AI-driven DTs and MT offers a promising approach to enhance the efficiency and performance of the UAV-MEC network.

5.2 Problem Outline

5.2.1 Physical System Model

We deal here with an integrated DT UAV-MEC system consisting of two layers: the physical layer and the digital layer, the latter being divided, in turn, into a local DT sublayer and a system control sublayer, as illustrated in Fig. 5.1. In this regard, our goal is to illustrate a possible cooperation mode between the physical reality and a hierarchical architecture, where local DTs interact with the control aggregator (CA), in order to decompose the proposed problem into several subproblems, lowering the original problem complexity. The physical layer is represented by the network

Fig. 5.1 Digital twin edge network architecture


infrastructure, having connectivity compliant with the 6G technology operating over THz channels, with S potentially interfering SBSs, denoted as S, each related to a specific service area where a set of channels is available to provide EDs with access to the local EN for task offloading. As a case study, we assume SBSs in a given area deployed according to an isotropic homogeneous Matern Hard Core Point Process (MHCPP) with intensity ι and with the constraint of a minimum distance ρ, since the distance between SBSs cannot be arbitrarily small. Moreover, in the system architecture depicted in Fig. 5.1, we also have a central aggregator (CA) devoted to collecting data on the congestion level of each service area belonging to the same network area by interacting with the associated DT to acquire the congestion predictions provided by the local ESN, as described in Sect. 5.3.1. Furthermore, we have assumed that each SBS hosts an EC, equipped with a central processing unit (CPU) with a specific speed, denoted as f_j for a given EC j, with j ∈ S. In addition, we have considered in each service area the presence of a set U of EDs (i.e., users) needing task computation. Furthermore, we assumed that each ED accesses the linked EC by means of individual uplink channels provided by the associated SBS. Likewise, the EC task processing scheduling policy is First-In-First-Out (FIFO), and the associated SBS broadcasts to EDs the task processing outcomes by means of a single (downlink) channel. All the uplink and downlink channels in the network are assumed to have the same bandwidth W. In accordance with [1, 57, 58], we assumed that the instantaneous available rate R_u for the communication channels connecting users to the appropriate SBS (uplinks) is given by

$$R_u = W \log_2\!\left(1 + \frac{P A_0\, d_0^{-2}\, e^{-K(\theta) d_0}}{N_0 + I_h}\right), \qquad (5.1)$$

where d_0 is the distance between the sender and the receiver, whereas N_0, due to both the molecular absorption noise and the Johnson–Nyquist noise at the receiver site, is defined as [1, 57, 58]

$$N_0 = \frac{W \zeta g_B T_0}{4\pi} + P A_0\, d_i^{-2}\left(1 - e^{-K(\theta) d_i}\right). \qquad (5.2)$$

In (5.1) and (5.2), W, as stated before, is the bandwidth of the communication channel,¹ here considered as channel h, g_B is the Boltzmann constant, T_0 the temperature in Kelvin, ζ the wavelength, K(θ) the global absorption coefficient of the medium, and A_0 = c²/(16π²θ²) [1], where θ is the carrier frequency, d_i is the distance between the interfering source i and the receiver, and c is the speed of light. Furthermore, P represents the transmission power adopted by all elements in U to perform uplink/downlink communications. In addition, the term $I_h = \sum_{\gamma=1,\,\gamma\neq i}^{S} P A_0\, d_{\gamma}^{-2}\, u_{\gamma,h}$ represents the aggregate power of the interfering signals

¹ All the possible communication channels for any SBS have the same bandwidth W.


at the linked SBS, due to the use of the same channel h by EDs linked to the other S − 1 cells close enough to lie within a non-negligible interfering distance. Finally, the term u_{γ,h} has value 1 if the channel h is allocated to the interfering ED γ, and 0 otherwise. Note that the values of the terms u_{γ,h} result from a certain rule, i.e., the channel allocation strategy proposed in Sect. 5.3. Likewise, the instantaneous available rate R_d for the downlink channel used by the SBS to dispatch the task computation outcomes to all the interested users within its service area results in

$$R_d = W \log_2\!\left(1 + \frac{P A_0\, d_0^{-2}\, e^{-K(\theta) d_0}}{N_0}\right). \qquad (5.3)$$

Note that the task computation outcomes are assumed to be sent out in broadcast mode. As a consequence, only the ED that recognizes its own ID in the header of the received data message is interested in it, while the others ignore the packet. From the above, it follows that, for a generic task i, the computation time on the (ground) EC j results in

$$t_{i,j} = \frac{s_i}{f_j} + \omega_j, \qquad (5.4)$$

where s_i is the number of CPU cycles required by task i to be computed, f_j is expressed as the number of CPU cycles per unit of time, and ω_j represents the queuing time on EC j, i.e., the time spent by task i waiting to receive computation. Such a time is due to the presence of tasks previously offloaded on the same computation site by other users. Alternatively, in the case of congestion of the ground EC, the task i can be computed on board the UAV, assumed to be equipped with an embedded CPU with computational capacity φ_u. Without loss of generality, we assumed the presence of only one UAV for the whole network, which is temporarily assigned to a given SBS service area to lower the local computation congestion. Due to the weight and severe battery constraints affecting UAVs, a common assumption is that CPUs on UAVs are less powerful than those residing on ground ECs [68]. In particular, in the integrated UAV-MEC system under consideration, an ED i needing task processing sends its task to the linked SBS. Hence, if the associated EC is not congested, the task receives computation on board the EC. Otherwise, if the EC node is congested and the UAV is assigned by the CA procedure, detailed later, to that area, the computation of task i (see Sect. 5.3) can be offloaded to the UAV. After processing completion, for each offloaded task, the UAV broadcasts the processing outcomes to the ground EDs. The corresponding computation time of i when offloaded on the UAV u, i.e., t_{i,u}, is given by

$$t_{i,u} = \frac{s_i}{\phi_u} + \omega_u, \qquad (5.5)$$


where ω_u represents the queuing time that task i has to wait on the UAV before computation. In order to define the completion time, i.e., the time spent by the user between submitting a computation request and receiving back the corresponding outcome, the request/outcome transmission time has to be taken into account, giving rise to the completion time expressed by (5.6), where β_{i,j} = 1 when task i is offloaded on EC j, and β_{i,j} = 0 when the computation is performed on board the UAV. In this case, R_{j,u} in (5.6) represents the rate of the channel connecting the UAV to the ground EDs within the service area of the SBS j, in broadcast mode and without mutual interference, resulting in

$$T_i = \left(\frac{s_i}{R_u} + t_{i,j} + \frac{s_i}{R_d}\right)\beta_{i,j} + \left(\frac{s_i}{R_u} + t_{i,u} + 2\,\frac{s_i}{R_{j,u}}\right)\left(1 - \beta_{i,j}\right), \qquad (5.6)$$

$$R_{j,u} = W_u \log_2\!\left(1 + \frac{P_u A_0\, d_{ju}^{-2}\, e^{-K(\theta) d_{ju}}}{N_0}\right), \qquad (5.7)$$

where P_u is the transmission power of the UAV and d_{ju} is the distance between the SBS j and the UAV. As for the ground EC case, the computation outcomes of tasks offloaded to the UAV are sent to the interested user by a downlink channel for which the available rate R_{du} is

$$R_{du} = W_u \log_2\!\left(1 + \frac{P_u A_0\, d_{ui}^{-2}\, e^{-K(\theta) d_{ui}}}{N_0}\right), \qquad (5.8)$$

where d_{ui} is the distance between the UAV and the ground user i.
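The following Python sketch evaluates the completion time of (5.4)–(5.6) for the two offloading branches. It is a toy illustration, not the chapter's simulator: the rate helper follows the THz form of (5.1), (5.3), (5.7)–(5.8), all numeric values are invented placeholders, and s_i is used for both the task size and its computational load, as the formulas do.

```python
import numpy as np

def thz_rate(W, P, A0, d, K_theta, N0, I_h=0.0):
    """Link rate of the form used in Eqs. (5.1), (5.3), (5.7) and (5.8)."""
    snr = P * A0 * d**-2 * np.exp(-K_theta * d) / (N0 + I_h)
    return W * np.log2(1.0 + snr)

def completion_time(s_i, beta_ij, R_u, R_d, R_ju, f_j, w_j, phi_u, w_u):
    """Task completion time T_i of Eq. (5.6)."""
    t_ij = s_i / f_j + w_j                            # Eq. (5.4), ground EC j
    t_iu = s_i / phi_u + w_u                          # Eq. (5.5), on-board UAV
    on_edge = s_i / R_u + t_ij + s_i / R_d            # beta_{i,j} = 1
    on_uav  = s_i / R_u + t_iu + 2.0 * s_i / R_ju     # beta_{i,j} = 0
    return beta_ij * on_edge + (1 - beta_ij) * on_uav

# toy comparison: congested EC (long queue) versus an idle but slower UAV CPU
R = thz_rate(W=13e9, P=1.0, A0=1e-3, d=20.0, K_theta=0.0016, N0=1e-9)
print(completion_time(1e8, 1, R, R, R, f_j=2.4e9, w_j=0.5, phi_u=1.2e9, w_u=0.0))
print(completion_time(1e8, 0, R, R, R, f_j=2.4e9, w_j=0.5, phi_u=1.2e9, w_u=0.0))
```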

5.2.2 Digital Layer Integration

In the local DT sublayer (Fig. 5.2), each SBS is associated with an individual DT whose execution is borne by the computation facilities of the corresponding EC, as in [16], with the aim of keeping its state consistent with the actual conditions of the service area of interest in terms of the number of EDs connected and related computation requests, level of interference over all the SBS channels, EC workload, and so on.² The DT, replicating and simulating the surrounding service area on the basis of data periodically gathered from the physical layer, has been assumed to have an architecture compliant with the emerging ISO 23247 standard, whose main entities with related sub-entities and functionalities are the following [69]:

² In compliance with the literature, data are assumed to be collected in real time and automatically from the physical layer.


Fig. 5.2 Functional view of the DT entities

• User entity (UE). It is responsible for mutually interfacing the user entity with the DT entity (DTE).
• Digital twin entity (DTE). The DTE contains three sub-entity blocks. The first is the Operation and Management sub-entity, responsible for keeping the DT operational and consistent with the actual state of the physical layer. Additionally, it formats information as required by DTs. The Application and Service sub-entity refers to the functional entities performing prediction, simulation, and analysis of the service area behavior. Resource Access and Interchange enables the integration and the dynamic connection between DTs and the physical counterpart.
• Device communication entity (DCE). This entity is composed of two sub-entities: the data collection sub-entity and the device control sub-entity. The first collects, identifies, and pre-processes data from the physical twin, whereas the second unambiguously identifies, sends commands to, and actuates the EC in response to a request from other entities.

Each DT is assumed to be empowered by a local ESN to monitor the congestion level of the physical EC j and of its service area counterpart, in terms of the predicted number of computation requests Ω_j arriving at the EC j. Moreover, each DT is also in charge of gathering the data used to train the ESN of the local area it belongs to, having the possibility to take advantage of the large amount of stored data concerning the historical congestion levels in the service area of interest. More specifically, each DT has its own time-series dataset D_j in the DCE, representing historical data about


Fig. 5.3 Class diagram

the individual EC congestion level Ω_j, which is transferred to the associated DTE through the data collection sub-entity. From a software perspective, the DT can be designed as a composition of cyber-entities, each of which represents the digital model of a physical counterpart, as illustrated in Fig. 5.3. In fact, physical components such as EDs, channels, processors, and so on compose the whole set of aspects characterizing the service area associated with each EC j. In doing so, the DT of the generic EC j can obtain proper information about service area j to build the corresponding approximated model as follows:

$$\Psi_j^t\left(f_j, \Omega_j, M_j, \chi_S, \tau_{w,j}, \omega_j\right), \qquad (5.9)$$

where f_j represents the computational capability of the EC j, expressed in terms of CPU frequency, and M_j is a vector, with a number of elements equal to the number of available channels, that provides a virtual model mirroring the channel interference level in the SBS service area j of interest, in order to trigger an efficient channel allocation procedure as detailed in Sect. 5.3. The χ_S term refers to a channel map that, for each SBS in S, keeps updated the list of available channels in the other cells with reference to their interference level. Note that, to lower both the energy consumption at the ED level and the communication link congestion in each service area, we have assumed that DTs monitor the interference level of the available channels by gathering periodic measurements performed by the related SBSs. The DT also mirrors the worst task completion time τ_{w,j}, i.e., the greatest time elapsed from the arrival instant of a task computation request at the EC j to when the task computation outcome is received back at the interested ED. In formal terms, we have

$$\tau_{w,j} = \max_{i \in U} T_i. \qquad (5.10)$$
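A possible, purely illustrative software view of the per-area DT state of (5.9) and of the mirrored metric (5.10) is sketched below; the class and field names are hypothetical and only mirror the tuple elements named in the text.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ServiceAreaTwin:
    """Minimal mirror of the approximated model Psi_j^t of Eq. (5.9)."""
    f_j: float                                        # EC CPU frequency [cycles/s]
    omega_big_j: int = 0                              # predicted number of requests Omega_j
    M_j: List[float] = field(default_factory=list)    # per-channel interference levels
    chi_S: Dict[int, List[int]] = field(default_factory=dict)  # channel map of neighbours
    queue_time: float = 0.0                           # current queuing time omega_j
    completion_times: List[float] = field(default_factory=list)

    @property
    def tau_w(self) -> float:
        """Worst task completion time tau_{w,j} of Eq. (5.10)."""
        return max(self.completion_times, default=0.0)
```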


5.2.3 Problem Formulation

Considering a time interval Δt, we have to provide an optimized planning over Δt through a proper task offloading strategy (selecting the computation site between the ground ECs and the UAV) and the channel allocation policy to reach the SBS where task computation is performed or where task forwarding toward the UAV is triggered. Consequently, our main objective has to be the minimization of both the worst task completion time and the number of ED requests in outage in a given service area, defined as

$$\mathcal{G}^{\Delta t} = \left\{\, i \in U \mid T_i \geq \delta_i \,\right\}, \qquad (5.11)$$

where δ_i is the deadline associated with the generic ED i, expressing the maximum acceptable time to complete the task computation. Similarly, denoting with T_i^{Δt} the worst completion time registered in Δt in a given service area, we have³

$$\min_{B,\,X,\,H,\,r}\; \max_{i}\, T_i^{\Delta t} \quad \text{AND} \quad \min_{B,\,X,\,H,\,r}\; \left|\mathcal{G}^{\Delta t}\right|, \qquad (5.12)$$

$$\text{s.t.} \quad \sum_{j=1}^{|S|} \left(\beta_{i,j} + x_{i,j}\right) = 1, \;\; \forall i \in U, \qquad (5.13)$$

$$\sum_{h=1}^{|C|} u_{i,h} = 1, \;\; \forall i \in U, \qquad (5.14)$$

$$\sum_{j=1}^{|S|} r_j \leq 1. \qquad (5.15)$$

Denote with B ∈ {0, 1}^{|U|×|S|} the edge offloading matrix, whose generic element is 1 if user i offloads its task on EC j, and zero otherwise. Then, the matrix X ∈ {0, 1}^{|U|×|S|} represents the UAV selection matrix, where the element x_{i,j} = 1 if task i is offloaded on the UAV through EC j, and zero otherwise. Let |C| be the number of available channels. The matrix H ∈ {0, 1}^{|U|×|C|} is the channel allocation matrix and, as before, u_{i,h} = 1 if user i is transmitting over channel h, and zero otherwise. The vector of the UAV position is given by r ∈ {0, 1}^{|S|}, and its generic element r_j is 1 if the UAV is assigned to the j-th service area.⁴ Constraint (5.13) expresses that each task can be offloaded only on the tagged EC or the UAV. Similarly, constraint (5.14)

the ease of illustration, in what follows, we drop the superscript .Δt . UAV flight paths to switch to from a given service area to a new one are assumed to be preloaded in order to lower the energy consumption (e.g., according to the procedure outlined in [6]). 4 Optimal

64

5 Application of the Digital Twin Technology in Novel Edge Intelligent. . .

points out that each user can be allocated on only one channel. Constraint (5.15) claims that the UAV can serve, during each .Δt time period, at most one single service area of the network, i.e., it can support in computation only one EC j .

5.3 DT UAV-MEC Offloading Framework In order to identify an effective decision-making framework to address the problem formulated in (5.12)–(5.15), we design a procedure to solve the following problems: • Problem 1 (P1): UAV allocation problem, i.e., identification of the SBS where the UAV has to fly in order to lower congestion • Problem 2 (P2): Computation offloading problem, in a given congested service area, to properly select between the ground EC or the temporary allocated UAV • Problem 3 (P3): Channel selection problem within a service area in order to minimize interference, i.e., lowering the transmission time As depicted in Fig. 5.4, with the aim at providing an effective strategy to solve P1P3, as detailed in Sect. 5.3.1, the traffic congestion in each service area is monitored by the CA on the basis of interactions with the AI-empowered EC nodes in the network area, based on the prediction performed by ESNs. Through the use of the local ESNs, it is possible to provide to the CA reliable predictions about congestion level of each EC service area over time. As a consequence, the CA can take the most proper decisions about the service area where the UAV has to fly to reduce the EC congestion.

Fig. 5.4 Digital twin edge network architecture

5.3 DT UAV-MEC Offloading Framework

65

Finally, on the basis of the UAV intervention determined by the CA, a threedimensional matching game is performed, as described in Sect. 5.3.2.

5.3.1 AI-Empowered DTs Congestion Monitoring In the considered UAV-MEC network, each SBS is AI-empowered by a local ESN. As stated before, we have assumed that the training phase of ESNs is performed on the basis of available data gathered by monitoring the congestion behavior over a suitable interval of observation where the UAV support is not provided. More specifically, the performance of a given service area j is assumed to be evaluated in terms of the number of tasks originated in the area, i.e., .Ωj , at the instant considered within .Δt . The ESN represents an instance of the more general Reservoir Computing concept, inheriting benefits from recurrent neural networks (RNNs), such as the ability in processing inputs exhibiting time dependencies [59], without incurring in the typical problems affecting the training of RNNs, as the vanishing gradient issue, for example. The salient components of the ESN are the following [59]: • • • •

Neurons randomly connected Sparse connection links A large number of neurons Low in energy and time demand

In accordance with [59], the ESN consists of three components: the input weight matrix I , the reservoir weight matrix R, and the output weight matrix W . We denote with .xq×1 the input vector, assuming as reservoir weight matrix updating rule the following equation [59]:   s×q us×1 (q) = tanh Win xq×1 (q) + Ws×1 r (q − 1) ,

.

(5.16) s×q

where .us×1 represents a vector of internal units in the reservoir part, and .Win is the weights matrix associated to the connections existing between the input layer and the reservoir level. Then, the .Ws×1 u (q − 1) is the recurrent weights matrix. q×s Denoting with .v(q) the output vector and .Wout the weight matrix associated to the connection between the reservoir and the output layer, the relationship between the reservoir and the output level can be described as q×s

v(q) = Wout us×1 (q).

.

(5.17)

Due to the ability of the ESN in catching temporal relationship among consecutive samples, the ESN exploits historical samples to forecast the upcoming traffic conditions for each service area, expressed as the number of requests originated on the considered service area. Let .Cn be the critical threshold under which the area

66

5 Application of the Digital Twin Technology in Novel Edge Intelligent. . .

is not considered congested, and the CA selects the zone exhibiting the highest level of congestion greater than .Cn , among the S service areas composing the network.

5.3.2

Three-Dimensional Matching Game for Tasks Offloading

Matching theory (MT) represents a mathematical technique capable to match together elements belonging to two opposite sets, basing the assignment process on the preferences lists built by players, expressing the level of satisfaction of each participant in being matched to each element of the opposite set and vice versa. Consequently, the matching procedure gives rise to an effective trade-off between the preferences exhibited by players. Among all possible alternatives, we discuss here the case of a 3D matching game, due to the three-dimensional allocation decision that has to be provided, i.e., access channel assignment, computation site (EC or UAV, when available) selection. In particular, in order to properly provide a co-channel spatial deployment, channels selection has to be performed in a suitable manner. Furthermore, when the UAV is assigned to a given service area, we have to take decision about the most convenient task offloading alternative between the local EC or the allocated UAV. More in depth, a matching game is developed for each decision to be taken. Note that the assignment is provided for the time interval .Δt , and it is enabled by the interaction with the DT that provides. As for the assignment of EDs to access channels, the following preference lists are built: • Users preferences over channels: For each user i, in a giving cell j , the channel is selected taking information from .Mj and .χS by selecting the best channel h, i.e., with the lowest interference level in .Mj , having the highest interference level in the neighboring cells interfering with the cell j . • Channel preferences over users: For each channel h, the preferences over users are established sorting users in ascending order, considering that the most preferred user .i * is given by i * = arg min

.

i

   si hi,h . Rh h

(5.18)

i

The channel allocation algorithm is given by: 1. Both users and channels build their own preferences list on the opposite set. 2. Each user proposes allocation to the most preferred channel .h* . 3. Among the received users proposals, .h* accepts the most favorite .i * .

5.3 DT UAV-MEC Offloading Framework

67

4. Each channel h receiving at least one proposal accepts its most favorite user .i * for allocation. 5. Repeat steps (1)–(5) until all users are allocated. Once the UAV arrives into a service area j , a new matching game starts to take decision about the splitting requests between the local EC and the UAV. Therefore, the matching game is performed between the computational sites, i.e., the congested EC j and the UAV, and the EDs in .U act as follows: • Users preferences over computation sites: For each user .i ∈ U , the most preferred computation site is .j * , such that  j * = arg min Δj , Δu ,

.

(5.19)

j,u

where si si + ti,j + , R↑ R↓

(5.20)

si si si + ti,u + +2∗ . R↑ R↓ Rj,u

(5.21)

Δj =

.

and Δu =

.

• Computation sites preferences over users: Each computation site sorts the user requests in ascending order on the basis of the associated deadline .δi , i.e., the most favorite user .i * results to be given by i * = arg min δi .

.

(5.22)

i

The users computation site association algorithm consists of the following steps: 1. Both users in .U and computation sites build their own preferences list on the opposite set. 2. Each user proposes allocation to the most preferred computation site .j * . 3. Among the received users proposals, each computation site accepts the most favorite .i * . 4. Each computation site receiving at least one proposal accepts its most favorite user .i * for allocation. 5. Repeat steps (1)–(4) until all the users are allocated.

68

5 Application of the Digital Twin Technology in Novel Edge Intelligent. . .

5.4 Performance Analysis In this paragraph, an in-depth performance analysis is provided, in order to accurately test the behavior of the proposed framework. All results reported in what follows have been obtained by averaging .103 independent simulation runs, where system parameters have been changed as detailed later. Channel simulation parameters have been set as in [1, 6]. Specifically, we have assumed f uniformly distributed within .[0.2 THz, 1 THz], .K(f ) = 0.0016 m−1 , transmission power equal to 1 W, .Wh = 13 GHz, and water vapor percentage equal to .1%. Then, we have considered .|S| = 5. ENs’ CPU configurations have been selected with an equal probability among five possible Intel processor cores alternatives: the Core i7, Core i5, Core i3, Pentium, and Celeron, with CPU clock rates of 3.6, 2.7, 2.4, 1.9, and 2.8 GHz. Furthermore, the Kolkata Paise Restaurant Game algorithm [70] is considered here in order to validate the good behavior of the discussed 3D matching approach, as comparison method. First of all, the accuracy of the ESN is reported in Fig. 5.5, considering different prediction horizons. Figure 5.5 expresses the mean squared error (MSE), defined as

3

x10–4

2.5

MSE

2

1.5

1

0.5 ESN

0 10

20

30

40

Time Horizon Fig. 5.5 ESN forecasting error

50

60

5.4 Performance Analysis

69

240

Worst completion time (ms)

220 200 180 160 140 120 100 80 60 40 20

Proposed Framework Kolkata Algorithm

25

30

35

40

45

50

Number of users Fig. 5.6 Worst completion time as a function of the number of users

|E |

MSE =

.

1  (ˆι − ι)2 , |E|

(5.23)

η=1

and .ιˆ and .ιEi,h are the predicted and actual values, respectively, and .E is the set of samples. As it is straightforward to understand, the greater the prediction horizon, the greater the error due to the difficulty in forecasting long-term behavior. Furthermore, Fig. 5.6 shows the worst completion time as a function of the EDs number. The curves reported in the figure highlight the remarkable performance of the proposed framework in comparison to the worst completion time values reached by applying the Kolkata algorithm. Similarly to Fig. 5.6, also Fig. 5.7 depicts the worst completion time as a function of the number of users. Differently from the previous figure, Fig. 5.7 shows performance comparison between the proposed framework and a scheme acting as the proposed framework where the network area served by the UAV is randomly selected. As it is evident to note, the optimization about network area selection introduced by the proposed framework remarkably increases the system performance, lowering the worst completion time. In the same way, Fig. 5.8 highlights the advantages deriving from the application of the optimized network areas selection strategy proposed, expressed in terms of outage probability as the number of users increases, in comparison to the scheme in which

220

Worst completion time (ms)

200 180 160 140 120 100 80 60 40 20

Proposed Framework Random Zones Selection

25

30

35

40

45

50

Number of users Fig. 5.7 Worst completion time as a function of the number of users

45 40

Outage probability

35 30 25 20 15 10 20

Proposed Framework Random Zones Selection

25

30

35

40

Number of users Fig. 5.8 Outage probability as a function of the number of users

45

50

5.4 Performance Analysis

71

50 45

Outage probability

40 35 30 25 20 15 10 20

Proposed Framework Kolkata Algorithm

25

30

35

40

45

50

Number of users Fig. 5.9 Outage probability as a function of the number of users

network areas selection has been performed on a random basis. As it is evident from figure, also in this case, a proper network areas selection allows to reach performance improvements in comparison to the alternative here considered. The superiority of the framework outlined in this chapter is also confirmed by Fig. 5.9, where the percentage of task belonging to the set .G is illustrated as a function of the users. As it is reasonable to understand, the percentage of requests experimenting outage grows as the number of users increases. This is due to the fact that the capability of the infrastructure remains the same, whereas the traffic of the network, i.e., the number of tasks needing computation, increases. Then Fig. 5.10 shows the trend of the percentage of requests in outage achieved by applying the two schemes, as the number of ECs in the network increases. Differently from Fig. 5.9, in this case, the outage percentage decreases as the number of ECs grows. This behavior is due to the fact that, keeping the number of requests equal to 50, by increasing the number of ECs in the network, the computational resources increase, offering more computational support to tasks that, in their turn, experience less outage probability. Once again, the valuable behavior of the framework discussed in this chapter is confirmed compared to the alternative Kolkata algorithm.

72

5 Application of the Digital Twin Technology in Novel Edge Intelligent. . .

50 45

Outage probability

40 35 30 25 20 15 Proposed Framework Kolkata Algorithm

10 5

4

4.5

5

5.5

6

Number of ECs Fig. 5.10 Outage probability as a function of the number of ECs

5.5 Democratized Digital Twin Technology for Industrial Edge Intelligent Computing Systems 5.5.1 Motivation In the transition toward a smarter industrial landscape, the DT concept plays a pivotal role by enhancing context awareness within the new generation of Industrial Edge Intelligent Computing systems. This is achieved through the utilization of sensors and high-speed wireless communication to collect extensive data throughout a product’s lifecycle [71]. Smart manufacturing, underpinned by DTs, hinges on the integration of the physical and cyber realms [71–73]. The goal is to enhance the performance of Observable Manufacturing Elements (OMEs), including equipment, network nodes, materials, manufacturing processes, facilities, and products, among others, that constitute the physical environment [72, 74]. The DT vision envisions a seamless connection between the physical and cyber spaces, allowing for continuous monitoring, data collection, and processing. This, in turn, facilitates and supports the development of optimized engineering decisionmaking strategies. To achieve this, DTs replicate physical OMEs in dedicated servers, creating virtual multidimensional models that empower various applications

5.5 Democratized Digital Twin Technology for Industrial Edge Intelligent. . .

73

to simulate their behavior. This simulation capability enables the creation of superior products more efficiently and quickly [69]. Advanced data analytics is employed to discover or predict failures, streamline supply chains, optimize product performance, and enhance production efficiency [71, 72]. The implementation of DTs introduces intelligence and flexibility to both physical and computationally constrained OMEs, with additional support from emerging Edge Intelligent Computing infrastructures [75]. In alignment with this paradigm, computation is shifted closer to the network edge, near the data source. As it is well known, this approach reduces network congestion and latency compared to previous network architectures, such as cloud computing. It fosters real-time data collection and interactions between DTs and OMEs, enabling more agile and responsive operations. All of these developments pave the way for associating varying levels of intelligence with different configurations within the system. Furthermore, this enables the application of Machine Learning (ML) techniques across diverse industrial contexts and applications, particularly in the domains of just-in-time maintenance and online failure prediction. ML is increasingly being harnessed to support Edge Intelligent Computing for predicting failures [76–78]. As discussed in previous chapters, a recent proposal to enhance the efficiency of an Edge Intelligent Computing system is the Federated Learning paradigm (FL). In FL, as outlined in Chaps. 3 and 4, individual devices locally train their models using their own datasets. Only the model parameters are transmitted to a central aggregator, avoiding the transfer of sensitive raw data. However, FL faces challenges in properly tailoring local models to specific needs. Its primary objective is to construct a global model based on local models, without accommodating variations in agents’ learning models or datasets. Consequently, FL struggles to adapt to the diversity of components within an industrial system, such as product lines or industrial states. To address these limitations, the novel Democratized Learning (Dem-AI) paradigm was introduced in a seminal paper [79]. Dem-AI seeks to overcome the drawbacks of FL by formalizing both specialized and generalized processes within a self-organizing hierarchical structure of large-scale distributed learning systems [79], while preventing data leakage. Dem-AI mandates that specialized and generalized processes work together to achieve a common learning goal, leveraging collective learning from biased learning agents, each with their local data and limited learning capabilities. Consequently, a Dem-AI system aims to establish a framework for collectively solving complex learning tasks with the participation of a large number of learning agents [79]. DemAI employs a hierarchical organization that facilitates the creation and maintenance of multiple levels of specialized groups [79]. As a result, Dem-AI has emerged as one of the most promising paradigms for enabling Edge Intelligent Computing systems based on the use of the DT technology. It is well-acknowledged in the literature [80] that DT technology inherently introduces deviations between the DT and true values. This mismatch tends to escalate when data fed to the DT originate from typical trust-weighted

74

5 Application of the Digital Twin Technology in Novel Edge Intelligent. . .

aggregation strategies, such as FL. In this context, Dem-AI plays a crucial role in balancing the influence of specialized groups on the global model, thereby enhancing the reliability and accuracy of learned models [79, 80].

5.5.2 The Dem-AI Framework In this paragraph, we provide a brief discussion of the basic DEM-AI principle and features. Toward this end, we consider a set .E = {1, . . . , E} of ENs generating a substantial volume of diverse data. These data hold valuable potential for addressing a common learning challenge using individual datasets De. Given that participants in the learning process often come from various industrial domains and manage distinct industrial clusters Ie, it is essential to have a model that can adapt to the different varieties and fluctuations seen in production lines over time. To achieve this adaptability, we illustrate here the agglomerative clustering approach [79] as a suitable methodology. This approach allows us to construct a learning process that can preserve both generalized and specialized knowledge, establishing a hierarchy among the learning agents. Within this hierarchy, generalized knowledge is developed through a bottom-up generalized process that begins with the specialized knowledge at each subordinate hierarchical level. On the other hand, specialized processes enable group members to diverge from the shared generalized knowledge, as they incorporate higher-level generalized knowledge into lower-level groups in a top-down manner [79]. This approach differs from more traditional federated analytics methods, which prioritize the performance of a global model. Instead, this approach enhances the learning capabilities of individual ENs. To mitigate the significant accuracy errors that can occur when training a single model, such as in FL, the Dem-AI approach improves knowledge transfer by aggregating statistical models from agents with similar learning characteristics, even if they operate on different scales. Let e be a learning agent, having local model parameters given by .w = (w1 , . . . , w M ), in which M is the total number of learning parameters. A possible metric applied to measure the distance between two agents e and f can be expressed as .φ e,f = ||we − w f ||, as exploited later. Through the application of the agglomerative hierarchical clustering, based on the similarity or dissimilarity of all learning agents, the system constructs K levels of the generalization to build these generalized models for specialized groups in a recursive form, starting from the bottom [79]. Each learning agent e has a local learning to optimize, i.e., |D |

Je (w|De ) =

.

e 1  li (wi ) + ηR(wi ), |De |

(5.24)

i=1

where .η is the regularization coefficient and .R(wi ) is the regularization term. The personalized learning objective of each agent can be built comprising: (i) a

5.5 Democratized Digital Twin Technology for Industrial Edge Intelligent. . .

75

personalized learning goal .Je (w|De ); and (ii) a regularizer .φ e,f instrumental to catch the difference between the new model parameters and the model parameters of the higher-level specialized groups, acting as generalized knowledge contribution. Consequently, the personalized learning model, for each generic agglomerative level k of the learning hierarchy established, is

.

min αJe (w|De ) + β w

K 

1 (h) 2 g ||w − w || , N h=k+1 h

(5.25)

where .α and .β are control parameters ruling the importance of the specialized model in comparison to the generalized contribution. The updating learning rule for the generalized model .w(h) is w

.

(h)

=

g  Nh−1 g∈SG

g Nh

g

w h−1 .

(5.26)

For each learning group .G, the corresponding group learning problem becomes .

min JG (w) = w



Je (w|De ),

(5.27)

e∈SG g

where .SG is the set of subgroups in the level k, and .Nh is the number of elements belonging to the learning group considered. Note that the Dem-AI learning scheme consists of several iterations that terminate when a predetermined accuracy level over the learning model is reached (Figs. 5.11, 5.12, and 5.13).

5.5.3 DT Architecture and Functional View From a practical perspective, to cast the behavior of the Dem-AI framework, we considered a hierarchical four layers infrastructure, illustrated in Fig. 5.14, comprising of the set .E, a set of aggregators, whose generic element is denoted with .A, and a regional server .S. In the proposed architecture, ENs represent OMEs that are clustered under a local aggregator .A. In their turn, aggregators are grouped under a central regional server .S. Each aggregator .A plays the role of assisting synchronous communications within the network and carries out the first aggregation of the Dem-AI scheme. Then, aggregated learning models are forwarded to .S that acts as regional aggregator. .S builds an aggregated regional learning model (hierarchical aggregation), which then broadcasts back to OMEs. The process terminates when the desired global accuracy is satisfied. In order to enable Dem-AI on the infrastructure, we have to consider associated to each OME a DT replicating, mapping, and simulating the surrounding environment, on the basis

76

5 Application of the Digital Twin Technology in Novel Edge Intelligent. . .

Fig. 5.11 Dem-AI sequence diagram

Fig. 5.12 Proposed approach

of data periodically gathered from the layer of industrial devices. As a matter of fact, the digital copy of the real system provided by the DT is affected by some bias due to approximations or empirical measurements. Consequently, each OME e contains on-board a DT producing a cyber model of the subnetwork controlled by the OME e, denoted with .Tˆe , containing the virtual replica of .Ie and the nearest EN .e∗ of e, i.e., ∗ .Tˆe = {Ie , e }. Note that the regional server .S, .A, and OME e implement a common interface as they are able to send data and to update their learning parameters. In what follows, e denotes both the OME and the associated DT, interchangeably. We assumed a data ingestion architecture to keep consistency between the DT and


Fig. 5.13 Dem-AI UML

Fig. 5.14 Proposed DTEN-Dem-AI framework

Moreover, the main DT entities, with the corresponding sub-entities and functionalities, are the following:

• User entity (UE). It is responsible for interfacing the user with the digital twin entity.
• Digital twin entity (DTE). It comprises three sub-entity blocks. The first, named Operation and Management sub-entity, is responsible for keeping the DT operational and consistent with the actual physical state of the system it models; its role also consists in transforming information into the format required by the DTs. The Application and Service sub-entity refers to the functional entities responsible for predicting, simulating, and analyzing the OMEs' behavior. The Resource Access and Interchange sub-entity is concerned with enabling the integration and the dynamic connection between DTs and OMEs.
• Device communication entity (DCE). This entity is composed of two sub-entities: the data collection sub-entity and the device control sub-entity. The former is responsible for collecting, identifying, and pre-processing data from the OMEs, whereas the latter unambiguously identifies, sends commands to, and actuates the OMEs in response to requests from other entities.


Each OME e has its own time-series dataset D_e in the DCE, representing historical data about the individual OME failures, which is transferred to the associated DTE through the data collection sub-entity. Let N be the overall number of training samples available at the DTEs of the OMEs residing in the considered network, i.e., $N = \sum_{e=1}^{E} |D_e|$. Once the Dem-AI training is completed, the DT of each OME starts the monitoring phase of the associated OME, which consists in predicting the OME failure on the basis of the learning model acquired. When the DT forecasts a failure on its corresponding EN, a fault management procedure is triggered by the DT associated with the EN e on which the failure is imminently predicted; it consists in migrating the devices belonging to I_e toward the nearest OME where no failure is expected. The activation of the fault management procedure is enabled by the DT of e, which has knowledge about the surrounding environment thanks to its digital representation T̂_e. Consequently, the destination node e* is selected, and the tasks stemming from the industrial devices belonging to I_e are migrated to e*.
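The fault management logic described above can be summarized by the following hedged Python sketch; the failure-probability predictor, the decision threshold, and the topology representation are illustrative assumptions rather than the exact procedure adopted in this chapter.

```python
from typing import Callable, Dict, List

def fault_management_step(ome_id: int,
                          cluster: List[int],                 # devices in I_e
                          neighbors_by_distance: List[int],   # candidate OMEs, nearest first
                          predict_failure: Callable[[int], float],
                          threshold: float = 0.5) -> Dict:
    """If the DT predicts an imminent failure of OME e, migrate the tasks of the
    devices in I_e to the nearest OME for which no failure is expected."""
    if predict_failure(ome_id) < threshold:
        return {"migrated": False, "destination": ome_id}
    for candidate in neighbors_by_distance:
        if predict_failure(candidate) < threshold:             # selection of e*
            return {"migrated": True, "destination": candidate,
                    "tasks_from": cluster}
    return {"migrated": False, "destination": ome_id}          # no safe OME found

# Toy usage with a hypothetical predictor (e.g., the trained Dem-AI model).
risk = {3: 0.9, 7: 0.2, 9: 0.6}
print(fault_management_step(3, cluster=[11, 12],
                            neighbors_by_distance=[9, 7],
                            predict_failure=lambda e: risk[e]))
```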

5.5.4 Experimental Results

We tested the applicability and the behavior of the proposed framework with the aim of answering the following questions:

• Q1. How does the proposed framework perform in comparison with the alternative FL scheme?
• Q2. How does the forecasting accuracy vary when the moving average method is applied?
• Q3. How does the cardinality of O change with and without the application of the proposed framework?

The performance analysis relies on extensive numerical simulations aimed at testing the validity and the effectiveness of the framework outlined in this chapter. In particular, we have adopted the time-series dataset available at [61]. The Dem-AI approach is compared with the FedAvg FL algorithm [50]. Moreover,


the adoption of the alternative moving average (MA) forecasting algorithm is evaluated as a comparison method. The performance analysis of the trained model has focused on the evaluation of: (i) the regional accuracy (RA), (ii) the generalized accuracy (GA), and (iii) the specialization accuracy (SA), measured on three layers of the network (k = 2, k = 3, k = 4). Furthermore, the number of industrial devices experiencing a service break due to the occurrence of OME failures is considered. As the simulation environment we have used TensorFlow 1.13.1, with the following hardware specifications: CPU Intel Core i5-1035G7 x4 and 8.00 GB of RAM. Figures 5.15 and 5.16 depict the test accuracy reached by applying the Dem-AI learning paradigm in comparison to the well-known FedAvg alternative, as the number of global rounds increases, in terms of the RA, GA, and SA trends. Note that the number of global rounds represents the number of learning iterations required to reach a desired level of accuracy. From Figs. 5.15 and 5.16, it is evident that the application of the Dem-AI paradigm significantly improves both the RA and the GA in comparison with the vanilla FedAvg scheme. Similarly, Dem-AI shows a competitive trend also in achieving the SA, compared with FedAvg. It is important to highlight that the number of global rounds expresses a measure of

Fig. 5.15 Test accuracy as a function of the number of global rounds applying the Dem-AI scheme


Fig. 5.16 Test accuracy as a function of the number of global rounds applying the FedAvg scheme

the training time needed to reach the desired level of accuracy, following the principle that $\log(1/\epsilon_b)$ expresses the number of local iterations needed to achieve the local accuracy $\epsilon_b$ [52, 55]. Therefore, the ability of Dem-AI to reach higher accuracy values than FedAvg implies that a lower number of global rounds is needed, resulting in faster knowledge acquisition and greater efficiency in terms of the communication rounds required to converge. Figure 5.17 shows the ability of Dem-AI to perform time-series prediction, in comparison with the MA strategy. The accuracy is measured in terms of the mean squared error (MSE), defined as

$$
\mathrm{MSE} = \frac{1}{X} \sum_{t=1}^{X} \left( \hat{x}_{t+\Delta t} - x_{t+\Delta t} \right)^2, \qquad (5.28)
$$

where X represents the number of samples in the test dataset, whereas $\hat{x}_{t+\Delta t}$ and $x_{t+\Delta t}$ are the predicted and actual values at time $t+\Delta t$, respectively. As is clear from Fig. 5.17, the MSE values reached by Dem-AI are lower than those produced by resorting to the MA approach. The MSE is represented as a function of the time horizon over which the failure prediction is performed, i.e., the lead time $\Delta t$.
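For reference, a small Python sketch of the MSE in (5.28) and of a simple moving-average baseline of the kind used here for comparison is given below; the window length and the synthetic test series are illustrative assumptions, not the settings used in the reported experiments.

```python
import numpy as np

def mse(predicted, actual):
    """Mean squared error of Eq. (5.28) over X test samples."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return np.mean((predicted - actual) ** 2)

def moving_average_forecast(series, lead_time, window=5):
    """Naive MA baseline: predict x_{t+lead_time} as the mean of the last
    `window` observations available at time t."""
    series = np.asarray(series, dtype=float)
    preds, targets = [], []
    for t in range(window, len(series) - lead_time):
        preds.append(series[t - window:t].mean())
        targets.append(series[t + lead_time])
    return np.array(preds), np.array(targets)

# Toy usage: evaluate the MA baseline on a synthetic time series.
rng = np.random.default_rng(3)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.normal(size=200)
preds, targets = moving_average_forecast(series, lead_time=3)
print("MA baseline MSE:", mse(preds, targets))
```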


Fig. 5.17 MSE as a function of the lead time Δt

Differently, Fig. 5.18 represents the trend of the number of elements in O, i.e., the number of industrial devices experiencing a service break due to the occurrence of EN failures, as a function of the number of industrial devices in the network. With the aim of corroborating the importance of integrating failure forecasting in the proposed system, we have compared the proposed Dem-AI framework with a vanilla framework (VF) in which failure forecasting is not included. As is evident from Fig. 5.18, introducing the failure forecasting procedure has a remarkable impact on the system compared with the more traditional approach, i.e., the VF. Similarly, Fig. 5.19 shows the improvements gained by applying Dem-AI to perform prediction, in comparison to the strategy in which the MA is adopted to predict future failures. As is evident, the Dem-AI approach outperforms the MA, highlighting the validity of the proposed framework.

5.6 Summary

This chapter has investigated the development of a 6G DT edge network environment to effectively manage task offloading, considering both communication and computation resources. An AI-empowered DT has been integrated into the network to monitor and predict network congestion by applying an ESN, in order to optimize the UAV allocation to the congested network areas.


Fig. 5.18 |O| as a function of the number of industrial devices

A three-dimensional matching game has been formulated to properly select the channel allocation, the computation site, and the network zone, in order to jointly minimize the worst completion time and the number of requests experiencing outage. A deep integration and cooperation between the physical and the digital network layers has been designed and discussed, sketching the DT architecture and the responsibilities of the entities involved in the digitalization process, in compliance with the emerging ISO 23247 standard. Finally, the provided performance analysis has clearly confirmed the effectiveness of the functional integration of the DT technology in the considered UAV-MEC system. Hence, we can conclude by stating that, even if the DT technology is in its infancy and further technological advances are needed to make it a reality, it already appears to be an essential methodology for providing seamless and instantaneous connectivity between the physical process and the digital platform, able to deliver significant performance enhancements for next-generation systems and applications. Furthermore, the emerging Dem-DT paradigm has been discussed, and its specific features for application in Industrial Edge Intelligent Computing systems have been highlighted. In particular, we have discussed the problem of predicting failures of OMEs within an industrial context. To achieve this, we have applied the innovative Dem-AI paradigm, a distributed hierarchical framework capable of capturing both the specialization and the generalization aspects of the adopted learning model.


Fig. 5.19 |O| as a function of the number of industrial devices

Additionally, leveraging the insights gained from the failure predictions facilitated by the DT technology, we have implemented a fault management procedure. This procedure is triggered when a failure is anticipated in an OME. It promotes the migration of the tasks originating from the production line monitored by the OME expected to fail toward a more reliable OME, thereby preventing potential disruptions. Our experimental results demonstrate the effectiveness and validity of the proposed framework. We have compared its performance to that of the state-of-the-art federated learning scheme, showcasing the advantages of adopting the Dem-AI paradigm in this industrial context. Alternative prediction schemes have then been considered in order to corroborate the effectiveness of the designed approach. Dem-AI turns out to be particularly suitable for industrial landscapes, since smart Industry 4.0 contexts and beyond typically originate a huge amount of sensitive data that need to be kept locally on devices and that can be properly exploited by data-driven approaches. In addition, Dem-AI permits specializing different learning models for different industrial activities, representing a valuable tool for performing configuration management for production lines requiring a different variety and variability of product types over time.

Bibliography

1. C. Chaccour, M. Soorki, W. Saad, M. Bennis, P. Popovski, Can terahertz provide high-rate reliable low latency communications for wireless VR? (2020) 2. R. Fantacci, B. Picano, End-to-end delay bound for wireless uVR services over 6G terahertz communications. IEEE Internet Things J. 8(23), 17090–17099 (2021) 3. R. Fantacci, B. Picano, When network slicing meets prospect theory: a service provider revenue maximization framework. IEEE Trans. Veh. Technol. 69(3), 3179–3189 (2020) 4. B. Picano, Multi-sensorial human perceptual experience model identifier for haptics virtual reality services in tactful networking. IEEE Access 9, 147549–147558 (2021) 5. R. Fantacci, B. Picano, Edge-based virtual reality over 6G terahertz channels. IEEE Netw. 35(5), 28–33 (2021) 6. M. Mozaffari, W. Saad, M. Bennis, Y.-H. Nam, M. Debbah, A tutorial on UAVS for wireless networks: applications, challenges, and open problems. IEEE Commun. Surv. Tutorials 21(3), 2334–2360 (2019) 7. M. Jahanbakht, W. Xiang, L. Hanzo, M. Rahimi Azghadi, Internet of underwater things and big marine data analytics—a comprehensive survey. IEEE Commun. Surv. Tutorials 23(2), 904–956 (2021) 8. Y. Yang, R. Elsinghorst, J.J. Martinez, H. Hou, J. Lu, Z.D. Deng, A real-time underwater acoustic telemetry receiver with edge computing for studying fish behavior and environmental sensing. IEEE Internet Things J. 9(18), 17821–17831 (2022) 9. J. Liu, X. Du, J. Cui, M. Pan, D. Wei, Task-oriented intelligent networking architecture for the space–air–ground–aqua integrated network. IEEE Internet Things J. 7(6), 5345–5358 (2020) 10. X. Wang, Y. Han, V.C.M. Leung, D. Niyato, X. Yan, X. Chen, Convergence of edge computing and deep learning: a comprehensive survey. IEEE Commun. Surv. Tutorials 22(2), 869–904 (2020) 11. R. Fantacci, B. Picano, Performance analysis of a delay constrained data offloading scheme in an integrated cloud-fog-edge computing system. IEEE Trans. Veh. Technol. 69(10), 12004– 12014 (2020) 12. R. Fantacci, T. Pecorella, B. Picano, L. Pierucci, Martingale theory application to the delay analysis of a multi-hop aloha NOMA scheme in edge computing systems. IEEE/ACM Trans. Networking 29(6), 2834–2842 (2021) 13. B. Picano, R. Fantacci, Z. Han, Aging and delay analysis based on Lyapunov optimization and martingale theory. IEEE Trans. Veh. Technol. 70(8), 8216–8226 (2021) 14. Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, J. Zhang, Edge intelligence: paving the last mile of artificial intelligence with edge computing. Proc. IEEE 107(8), 1738–1762 (2019)


15. B. Picano, R. Fantacci, Human-in-the-loop virtual reality offloading scheme in wireless 6G terahertz networks. Comput. Netw. 214, 109152 (2022). https://www.sciencedirect.com/ science/article/pii/S138912862200264X 16. W. Sun, H. Zhang, R. Wang, Y. Zhang, Reducing offloading latency for digital twin edge networks in 6G. IEEE Trans. Veh. Technol. 69(10), 12240–12251 (2020) 17. B. Picano, R. Fantacci, Z. Han, Price control for computational offloading services with chaotic data, in 2020 International Conference on Computing, Networking and Communications (ICNC) (2020), pp. 785–790 18. H.B. McMahan, E. Moore, D. Ramage, S. Hampson, B.A. y Arcas, Communication-efficient learning of deep networks from decentralized data (2023) 19. C.S. Hong, L.U. Khan, M. Chen, D. Chen, W. Saad, Z. Han, Federated Learning for Wireless Networks (Springer, Singapore, 2021) 20. M. Catelani, L. Ciani, R. Fantacci, G. Patrizi, B. Picano, Remaining useful life estimation for prognostics of lithium-ion batteries based on recurrent neural network. IEEE Trans. Instrum. Meas. 70, 1–11 (2021) 21. G. Patrizi, B. Picano, M. Catelani, R. Fantacci, L. Ciani, Validation of RUL estimation method for battery prognostic under different fast-charging conditions, in 2022 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) (2022), pp. 1–6 22. R. Fantacci, B. Picano, A d2d-aided federated learning scheme with incentive mechanism in 6G networks. IEEE Access 11, 107–117 (2023) 23. Y. Liu, X. Yuan, Z. Xiong, J. Kang, X. Wang, D. Niyato, Federated learning for 6G communications: challenges, methods, and future directions (2020) 24. F. Sun, Z. Zhang, S. Zeadally, G. Han, S. Tong, Edge computing-enabled internet of vehicles: towards federated learning empowered scheduling. IEEE Trans. Veh. Technol. 71(9), 10088– 10103 (2022) 25. Q. Lan, D. Wen, Z. Zhang, Q. Zeng, X. Chen, P. Popovski, K. Huang, What is semantic communication? A view on conveying meaning in the era of machine intelligence. J. Commun. Inf. Networks 6(4), 336–371 (2021) 26. R. Fantacci, B. Picano, Multi-user semantic communications system with spectrum scarcity. J. Commun. Inf. Networks 7(4), 375–382 (2022) 27. H. Xie, Z. Qin, G. Y. Li, B.-H. Juang, Deep learning enabled semantic communication systems. IEEE Trans. Signal Process. 69, 2663–2675 (2021) 28. W. Yang, Z.Q. Liew, W.Y.B. Lim, Z. Xiong, D. Niyato, X. Chi, X. Cao, K.B. Letaief, Semantic communication meets edge intelligence (2022) 29. L. Yan, Z. Qin, R. Zhang, Y. Li, G.Y. Li, Resource allocation for text semantic communications. IEEE Wireless Commun. Lett. 11(7), 1394–1398 (2022) 30. F. Chiti, R. Fantacci, B. Picano, A matching game for tasks offloading in integrated edge-fog computing systems. Trans. Emerg. Telecommun. Technol. 31(2), e3718 (2020). https://doi.org/ 10.1002/ett.3718 31. B. Picano, E. Vocario, R. Fantacci, An efficient flows dispatching scheme for tardiness minimization of data-intensive applications in heterogeneous systems. IEEE Trans. Network Sci. Eng., 3232–3241 (2023) 32. Y.F. Al-Eryani, E. Hossain, Delta-OMA (D-OMA): a new method for massive multiple access in 6G (2019). ArXiv abs/1901.07100 33. P.P. Ray, A review on 6G for space-air-ground integrated network: key enablers, open challenges, and future direction. J. King Saud Univ. Comput. Inf. Sci., 6949–6976 (2021) 34. Y. Shi, Y. Zhu, Research on aided reading system of digital library based on text image features and edge computing. IEEE Access 8, 205980–205988 (2020) 35. T. Liu, J. Li, F. Shut, Z. 
Han, Quality-of-service driven resource allocation based on martingale theory, in 2018 IEEE Global Communications Conference (GLOBECOM) (2018), pp. 1–6 36. M. Fidler, Survey of deterministic and stochastic service curve models in the network calculus. IEEE Commun. Surv. Tutorials 12(1), 59–86 (2010) 37. F. Poloczek, F. Ciucu, Scheduling analysis with martingales. Perform. Eval. 79, 56–72 (2014), special Issue: Performance 2014. https://www.sciencedirect.com/science/article/abs/ pii/S0166531614000674


38. M. Fidler, A. Rizk, A guide to the stochastic network calculus. IEEE Commun. Surv. Tutorials 17(1), 92–105 (2015) 39. Y. Jiang, A basic stochastic network calculus. SIGCOMM Comput. Commun. Rev. 36(4), 123– 134 (2006). https://doi.acm.org/10.1145/1151659.1159929 40. C. Li, A. Burchard, J. Liebeherr, A network calculus with effective bandwidth. IEEE/ACM Trans. Networking 15(6), 1442–1453 (2007). 41. C. Chaccour, M.N. Soorki, W. Saad, M. Bennis, P. Popovski, Risk-based optimization of virtual reality over terahertz reconfigurable intelligent surfaces (2020) 42. R. Zhang, K. Yang, Q.H. Abbasi, K.A. Qaraqe, A. Alomainy, Analytical modelling of the effect of noise on the terahertz in-vivo communication channel for body-centric nano-networks. Nano Commun. Networks 15, 59–68 (2018). https://www.sciencedirect.com/science/article/ pii/S1878778917300297 43. K. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications (Wiley, Chichester, 2001) 44. S. Bayat, Y. Li, L. Song, Z. Han, Matching theory: applications in wireless communications. IEEE Signal Process. Mag. 33(6), 103–122 (2016) 45. B. Picano, End-to-end delay bound for VR services in 6G terahertz networks with heterogeneous traffic and different scheduling policies. Mathematics 9(14), 1638 (2021). http://dx.doi. org/10.3390/math9141638 46. F. Ciucu, J. Schmitt, Perspectives on network calculus: no free lunch, but still good value. SIGCOMM Comput. Commun. Rev. 42(4), 311–322 (2012). http://doi.acm.org/10.1145/ 2377677.2377747 47. Q. Yang, Y. Liu, T. Chen, Y. Tong, Federated machine learning: concept and applications, vol. abs/1902.04885 (2019). https://arxiv.org/abs/1902.04885 48. K.B. Letaief, W. Chen, Y. Shi, J. Zhang, Y.-J.A. Zhang, The roadmap to 6G: Ai empowered wireless networks. IEEE Commun. Mag. 57(8), 84–90 (2019) 49. L. Li, Y. Fan, M. Tse, K.-Y. Lin, A review of applications in federated learning. Comput. Ind. Eng. 149, 106854 (2020) 50. Y. Xiao, G. Shi, M. Krunz, Towards ubiquitous ai in 6G with federated learning (2020) 51. J. Konecny, H.B. McMahan, F.X. Yu, P. Richtarik, A.T. Suresh, D. Bacon, Federated learning: strategies for improving communication efficiency (2017) 52. L.U. Khan, M. Alsenwi, Z. Han, C.S. Hong, Self organizing federated learning over wireless networks: a socially aware clustering approach, in 2020 International Conference on Information Networking (ICOIN) (2020), pp. 453–458 53. V. Smith, C. Chiang, M. Sanjabi, A. Talwalkar, Federated multi-task learning, vol. abs/1705.10467 (2017). http://arxiv.org/abs/1705.10467 54. N.H. Tran, W. Bao, A. Zomaya, M.N.H. Nguyen, C.S. Hong, Federated learning over wireless networks: optimization model design and analysis, in IEEE INFOCOM 2019—IEEE Conference on Computer Communications (2019), pp. 1387–1395 55. D. Chen, C.S. Hong, L. Wang, Y. Zha, Y. Zhang, X. Liu, Z. Han, Matching-theory-based lowlatency scheme for multitask federated learning in MEC networks. IEEE Internet Things J. 8(14), 11415–11426 (2021) 56. T.H. Thi Le, N.H. Tran, Y.K. Tun, M.N.H. Nguyen, S.R. Pandey, Z. Han, C.S. Hong, An incentive mechanism for federated learning in wireless cellular networks: an auction approach. IEEE Trans. Wirel. Commun. 20(8), 4874–4887 (2021) 57. J. Kokkoniemi, J. Lehtomäki, K. Umebayashi, M. Juntti, Frequency and time domain channel models for nanonetworks in terahertz band. IEEE Trans. Antennas Propag. 63(2), 678–691 (2015) 58. C. Han, W. Tong, X.-W. 
Yao, MA-ADM: a memory-assisted angular-division-multiplexing MAC protocol in terahertz communication networks. Nano Commun. Networks 13, 51–59 (2017). https://www.sciencedirect.com/science/article/pii/S1878778917300960 59. H. Peng, C. Chen, C.-C. Lai, L.-C. Wang, Z. Han, A predictive on-demand placement of UAV base stations using echo state network, in 2019 IEEE/CIC International Conference on Communications in China (ICCC) (2019). http://dx.doi.org/10.1109/ICCChina.2019.8855868


60. D. Manlove, Algorithmics of Matching Under Preferences, vol. 2 (World Scientific, Singapore, 2013) 61. K. Tekbıyık, A.R. Ekti, G. Karabulut Kurt, A. Görçin, THz wireless channel measurements in between 240 GHz and 300 GHz (2019). https://dx.doi.org/10.21227/2jhd-wp15 62. P. Zetterberg, F. Lindqvist, B. Nilsson, Underwater acoustic communication with multicarrier binary frequency shift keying. IEEE J. Ocean. Eng. 47(1), 255–267 (2022) 63. Z. Weng, Z. Qin, Semantic communication systems for speech transmission (2021) 64. H. Xie, Z. Qin, A lite distributed semantic communication system for internet of things. IEEE J. Sel. Areas Commun. 39(1), 142–153 (2021) 65. A.R. Lahitani, A.E. Permanasari, N.A. Setiawan, Cosine similarity to determine similarity measure: study case in online essay assessment, in 2016 4th International Conference on Cyber and IT Service Management (2016), pp. 1–6 66. L. Deng, The mnist database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 29(6), 141–142 (2012) 67. K. Zhang, J. Cao, Y. Zhang, Adaptive digital twin and multiagent deep reinforcement learning for vehicular edge computing and networks. IEEE Trans. Industr. Inform. 18(2), 1405–1413 (2022) 68. J. Huang, S. Xu, J. Zhang, Y. Wu, Resource allocation and 3d deployment of UAVs-assisted MEC network with air-ground cooperation. Sensors 22(7), 2590 (2022). https://www.mdpi. com/1424-8220/22/7/2590 69. G. Shao, Use case scenarios for digital twin implementation based on iso 23247, 2021-05-04 04:05:00 (2021). https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=932269 70. The Kolkata Paise Restaurant problem and resource utilization. Physica A: Stat. Mech. Appl. 388(12), 2420–2426 (2009) 71. Smart manufacturing must embrace big data. Nature 544, 23–25 (2017) 72. F. Tao, H. Zhang, A. Liu, A.Y.C. Nee, Digital twin in industry: state-of-the-art. IEEE Trans. Industr. Inform. 15(4), 2405–2415 (2019) 73. E. Ferko, A. Bucaioni, M. Behnam, Architecting digital twins. IEEE Access 10, 50335–50350 (2022) 74. Q. Qi, F. Tao, Digital twin and big data towards smart manufacturing and industry 4.0: 360 degree comparison. IEEE Access 6, 3585–3593 (2018) 75. F. Tang, X. Chen, T.K. Rodrigues, M. Zhao, N. Kato, Survey on digital twin edge networks (DITEN) toward 6G. IEEE Open J. Commun. Soc. 3, 1360–1381 (2022) 76. J. Leukel, J. Gonzalez, M. Riekert, Adoption of machine learning technology for failure prediction in industrial maintenance: a systematic review. J. Manuf. Syst. 61, 87–96 (2021). https://www.sciencedirect.com/science/article/pii/S0278612521001849 77. M. Züfle, F. Erhard, S. Kounev, Machine learning model update strategies for hard disk drive failure prediction, in 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) (2021), pp. 1379–1386 78. Q. Hai, S. Zhang, C. Liu, G. Han, Hard disk drive failure prediction based on GRU neural network, in 2022 IEEE/CIC International Conference on Communications in China (ICCC) (2022), pp. 696–701 79. M.N.H. Nguyen, S.R. Pandey, K. Thar, N.H. Tran, M. Chen, W. Saad, C.S. Hong, Distributed and democratized learning: philosophy and research challenges (2020) 80. Q. Song, S. Lei, W. Sun, Y. Zhang, Adaptive federated learning for digital twin driven industrial internet of things, in 2021 IEEE Wireless Communications and Networking Conference (WCNC) (2021), pp. 1–6