Kai Liu · Penglin Dai · Victor C. S. Lee · Joseph Kee-Yin Ng · Sang Hyuk Son
Toward Connected, Cooperative and Intelligent IoV: Frontier Technologies and Applications
Kai Liu, College of Computer Science and National Elite Institute of Engineering, Chongqing University, Chongqing, China
Penglin Dai, School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China
Victor C. S. Lee, Department of Electrical and Electronic Engineering, University of Hong Kong, Lung Fu Shan, Hong Kong
Joseph Kee-Yin Ng, Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong
Sang Hyuk Son, Electrical Engineering and Computer Science, Daegu Gyeongbuk Institute of Science and Technology, Daegu, Korea (Republic of)
ISBN 978-981-99-9646-9    ISBN 978-981-99-9647-6 (eBook)
https://doi.org/10.1007/978-981-99-9647-6
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Paper in this product is recyclable.
Preface
Driven by cutting-edge technologies in wireless communication, mobile computing, and artificial intelligence, the Internet of Vehicles (IoV) is thriving in an era of unprecedented development and innovation. Meanwhile, the global automobile industry is accelerating its transformation toward electrification, networking, intelligence, and sharing, and IoV is emerging as the fundamental platform for realizing future Intelligent Transportation Systems (ITSs) and smart cities. Looking back over the past decade, mobile phones have evolved from conventional calling and messaging tools into smart devices with impressive functions. It is foreseeable that, in the near future, a similar trend will unfold for vehicles: they will no longer be mere transportation tools but will act as cooperative intelligent agents immersed in rich and exciting smart applications. Currently, intelligent assisted-driving technologies are maturing and being commercialized rapidly. From a technical point of view, the development trend is shifting from single-vehicle intelligence to vehicle-road-cloud collaborative intelligence. IoV, which intrinsically incorporates the characteristics of Vehicle-to-Everything (V2X) communications, end-edge-cloud cooperation, and intelligent computation, will be a fundamental platform supporting the development of future Intelligent and Connected Vehicles (ICVs). Evidently, it is of great significance to explore both the key scientific problems and the practical engineering demands of IoV.

Although IoV has received tremendous attention in both academia and industry, we still lack a holistic view of IoV with respect to its latest technological advances in theories, methodologies, and systems. In view of this, this monograph is dedicated to presenting our representative achievements and forming a systematic introduction to connected, cooperative, and intelligent IoV. As shown in Fig. 1, this monograph consists of six parts, whose primary contents and relationships are summarized below.

Part I Introduction  This part includes two chapters. Chapter 1 introduces the background of IoV, including the concept, the architecture, and the challenges. Chapter 2 reviews the state of the art with respect to communication, cooperation, and computation technologies of IoV.
Fig. 1 Overview of this monograph
Part II Connected IoV: Vehicular Communication and Data Dissemination  This part includes four chapters, focusing on system architectures and scheduling algorithms for data dissemination via Infrastructure-to-Vehicle (I2V)/Vehicle-to-Vehicle (V2V) communications. In particular, Chap. 3 presents a Software-Defined Vehicular Network (SDVN) architecture that enables efficient data dissemination by exploiting the synergy between V2V and I2V communications. Chapter 4 further explores network coding technology for enhancing the bandwidth efficiency of data broadcasting in SDVN. Chapter 5 presents a fog computing empowered architecture together with a tailored algorithm for data dissemination in heterogeneous vehicular networks. Finally, Chap. 6 investigates temporal information services in vehicular networks, aiming to strike a balance between enhancing service quality and improving service ratio, two operations that compete for the limited wireless communication bandwidth.

Part III Cooperative IoV: End-Edge-Cloud Cooperative Scheduling and Optimization  This part includes three chapters, focusing on cooperative task offloading and resource allocation via end-edge-cloud cooperation in vehicular networks. In particular, Chap. 7 investigates the end-edge-cloud cooperated task offloading problem in IoV by modeling the procedures of task uploading, migration, and computation based on queuing theory, and formulates it as a convex optimization problem. Chapter 8 further considers the input data of tasks and jointly schedules data uploading and task offloading via end-edge-cloud cooperation in IoV, for which an approximation algorithm is proposed. Finally, Chap. 9 investigates task offloading and workload balancing in Vehicular Edge Computing (VEC) environments, in which the computation resources of edge servers and the cloud are scheduled in a distributed way based on a multi-armed bandit learning method.

Part IV Intelligent IoV: Key Enabling Technologies in Vehicular Edge Intelligence  This part includes three chapters, focusing on theories, models, and algorithms for enabling cooperative intelligence in IoV. In particular, Chap. 10 investigates Deep Neural Network (DNN) inference acceleration with a reliability guarantee in VEC. Chapter 11 presents a deep-Q-network-based solution for adaptive multimedia streaming in vehicular edge intelligence, targeting both smooth playback and high-quality service by jointly considering cache replacement and bandwidth allocation strategies. Chapter 12 proposes a multi-agent multi-objective deep reinforcement learning solution for cooperative sensing and heterogeneous information fusion in IoV, which aims at constructing the vehicular digital twin at the edge with high quality and low cost.
Part V Case Studies  This part includes five chapters, illustrating system innovations and typical applications in IoV and presenting system prototypes for proof of concept. In particular, Chap. 13 presents a "See Through" system based on V2V and I2V communications. Chapter 14 presents a "Non-Line-of-Sight Collision Warning" system based on I2V communication and trajectory prediction. Chapter 15 presents a "Proactive Traffic Abnormity Warning" system based on cooperative sensing, communication, and computation in IoV. Chapter 16 presents a "UAV-Assisted Blind Area Pedestrian Detection" system, which exercises vehicular edge intelligence via task offloading. Finally, Chap. 17 presents a "Vehicular Indoor Localization and Tracking" system based on Wi-Fi fingerprint-based localization and inertial sensor-based dead reckoning.

Part VI Conclusion and Future Directions  This part includes two chapters: Chap. 18 summarizes the main content of this monograph, and Chap. 19 discusses future directions.

To sum up, this monograph first gives an overview of IoV with respect to basic concepts, key characteristics, and enabling technologies. It then presents the latest architectures, theories, models, and algorithms in IoV from three dimensions, namely networking, cooperation, and intelligence, followed by typical applications in IoV with system prototype implementation details. Finally, it gives a summary and discusses future research directions. We believe that this monograph has both theoretical significance and practical value, making it suitable both as a technical reference for professionals and as a textbook for postgraduate students.

Chongqing, China
Chengdu, China
Hong Kong, China
Hong Kong, China
Daegu, South Korea
Kai Liu Penglin Dai Victor C. S. Lee Joseph Kee-Yin Ng Sang Hyuk Son
Acknowledgments
First and foremost, I would like to take this opportunity to acknowledge the contributions of the co-authors, research collaborators, and experts in the field, who have generously shared their knowledge, ideas, experiences, and insights. Their professional contributions have greatly enhanced the overall value of this monograph.

Furthermore, I am grateful to my colleagues and friends from the College of Computer Science, Chongqing University, and the National Elite Institute of Engineering, Chongqing University. Their continuous encouragement, thoughtful discussions, and valuable suggestions have been instrumental in shaping this monograph.

I would also like to express my sincere gratitude and appreciation to the editors and the dedicated team at Springer for their valuable feedback and suggestions, as well as their guidance, support, and expertise throughout the entire publishing process, which made this monograph a reality.

Special thanks go to the fellows at the Internet of Vehicles Lab of Chongqing University, including Chunhui Liu, Guozhi Yan, Hualing Ren, Feiyu Jin, Xincao Xu, and Tongtong Cheng, who have dedicated substantial effort to ensuring the successful completion of this monograph.

Last but not least, I would like to extend my heartfelt appreciation to my family for their support and understanding throughout this journey; their constant motivation has been invaluable in completing this endeavor.

Kai Liu
Contents
Part I Introduction

1 Background of IoV ............ 3
   1.1 What Is IoV ............ 3
   1.2 A Hierarchical Architecture of IoV ............ 6
   1.3 Challenges ............ 9
   References ............ 10

2 State of the Art ............ 13
   2.1 Data Dissemination in IoV ............ 13
       2.1.1 Vehicular Communication Protocols ............ 13
       2.1.2 Data Dissemination in SDVN ............ 14
       2.1.3 Advanced Scheduling for Vehicular Data Dissemination ............ 15
   2.2 Vehicular End–Edge–Cloud Cooperation ............ 16
       2.2.1 Cloud-Based Services in IoV ............ 16
       2.2.2 MEC-Based Services in IoV ............ 16
       2.2.3 End–Edge–Cloud Cooperated Services in IoV ............ 17
   2.3 Vehicular Edge Intelligence ............ 18
       2.3.1 Key Technologies ............ 18
       2.3.2 Cooperative Sensing and Heterogeneous Information Fusion in Vehicular Edge Intelligence ............ 19
       2.3.3 Vehicular Edge Intelligence Empowered ITSs ............ 20
   References ............ 21
Part II Connected IoV: Vehicular Communications and Data Dissemination

3 Data Dissemination via I2V/V2V Communications in Software Defined Vehicular Networks ............ 27
   3.1 Introduction ............ 27
   3.2 System Architecture ............ 29
   3.3 Cooperative Data Scheduling (CDS) Problem ............ 31
       3.3.1 Problem Formulation ............ 31
       3.3.2 NP-Hardness ............ 36
   3.4 Proposed Algorithm ............ 38
       3.4.1 Identify Conflicting TSs and Compute the Weight ............ 40
       3.4.2 Construct G and Select TSs ............ 40
       3.4.3 Generate Outputs and Update Service Queue ............ 41
   3.5 Performance Evaluation ............ 42
       3.5.1 Setup ............ 42
       3.5.2 Metrics ............ 43
       3.5.3 Simulation Results ............ 44
   3.6 Conclusion ............ 47
   References ............ 47

4 Network Coding-Assisted Data Broadcast in Large-Scale Vehicular Networks ............ 49
   4.1 Introduction ............ 49
   4.2 System Architecture ............ 51
   4.3 Coding-Assisted Broadcast Scheduling (CBS) Problem ............ 53
       4.3.1 Problem Analysis ............ 53
       4.3.2 NP-Hardness ............ 53
   4.4 Proposed Algorithm ............ 56
       4.4.1 Initialization ............ 58
       4.4.2 Fitness Function ............ 58
       4.4.3 Offspring Generation ............ 59
       4.4.4 Local Search ............ 59
       4.4.5 Repair Operator ............ 59
       4.4.6 Population Replacement ............ 60
       4.4.7 Termination ............ 60
   4.5 Performance Evaluation ............ 61
       4.5.1 Setup ............ 61
       4.5.2 Simulation Results ............ 63
   4.6 Conclusion ............ 68
   References ............ 68

5 Fog Computing Empowered Data Dissemination in Heterogeneous Vehicular Networks ............ 71
   5.1 Introduction ............ 71
   5.2 System Architecture ............ 73
       5.2.1 Overview ............ 73
       5.2.2 Operational Flow ............ 74
       5.2.3 An Example ............ 75
       5.2.4 Assumptions ............ 76
   5.3 Fog Assisted Cooperative Service (FACS) Problem ............ 76
       5.3.1 Problem Analysis ............ 76
       5.3.2 Problem Transformation ............ 78
       5.3.3 NP-Hardness ............ 80
   5.4 Proposed Algorithm ............ 81
       5.4.1 Graph Transformation Scheme ............ 81
       5.4.2 CSS Algorithm ............ 84
       5.4.3 Complexity Analysis ............ 85
   5.5 Performance Evaluation ............ 86
       5.5.1 Simulation Setup ............ 86
       5.5.2 Simulation Result ............ 88
   5.6 Conclusion ............ 92
   References ............ 92

6 Temporal Data Uploading and Dissemination in Real-Time Vehicular Networks ............ 95
   6.1 Introduction ............ 95
   6.2 System Architecture ............ 97
   6.3 Temporal Data Uploading and Dissemination (TDUD) Problem ............ 100
       6.3.1 Preliminary ............ 100
       6.3.2 Problem Formulation ............ 101
   6.4 Proposed Algorithm ............ 104
       6.4.1 Multi-Objective Decomposition ............ 104
       6.4.2 Chromosome Encoding ............ 104
       6.4.3 MO-TDUD Procedure ............ 105
   6.5 Performance Evaluation ............ 108
       6.5.1 Setup ............ 108
       6.5.2 Simulation Results and Analysis ............ 110
   6.6 Conclusion ............ 114
   References ............ 115
Part III Cooperative IoV: End-Edge-Cloud Cooperative Scheduling and Optimization

7 Convex Optimization on Vehicular End–Edge–Cloud Cooperative Task Offloading ............ 119
   7.1 Introduction ............ 119
   7.2 System Architecture ............ 121
   7.3 Cooperative Task Offloading (CTO) Problem ............ 123
       7.3.1 Preliminary ............ 123
       7.3.2 Problem Formulation ............ 123
   7.4 Proposed Algorithm ............ 128
       7.4.1 The Convexity of Objective Function ............ 128
       7.4.2 Probabilistic Task Offloading Algorithm ............ 129
   7.5 Performance Evaluation ............ 133
       7.5.1 Default Setting ............ 133
       7.5.2 Simulation Results and Analysis ............ 136
   7.6 Conclusion ............ 141
   References ............ 142

8 An Approximation Algorithm for Joint Data Uploading and Task Offloading in IoV ............ 145
   8.1 Introduction ............ 145
   8.2 System Architecture ............ 147
   8.3 Joint Data Uploading and Task Offloading (JDUTO) Problem ............ 149
       8.3.1 Preliminary ............ 149
       8.3.2 Vehicle Mobility Model ............ 150
       8.3.3 Delay Model ............ 152
       8.3.4 Problem Formulation ............ 154
   8.4 Proposed Algorithm ............ 157
       8.4.1 The Decision of Cooperative Data Uploading ............ 157
       8.4.2 Allocation of Transmission and Computation Resources ............ 157
       8.4.3 Filter Mechanism-Based Markov-Approximation Algorithm ............ 160
       8.4.4 Theoretical Analysis ............ 162
   8.5 Performance Evaluation ............ 163
       8.5.1 Simulation Setting ............ 163
       8.5.2 Experimental Results ............ 165
   8.6 Conclusion ............ 169
   References ............ 169

9 Distributed Task Offloading and Workload Balancing in IoV ............ 173
   9.1 Introduction ............ 173
   9.2 System Architecture ............ 175
   9.3 Distributed Task Offloading (DTO) Problem ............ 178
       9.3.1 Problem Definition ............ 178
       9.3.2 NP-Hardness ............ 181
   9.4 Proposed Algorithm ............ 182
       9.4.1 MAB-Based Task Offloading Mechanism ............ 183
       9.4.2 Probabilistic Task Offloading ............ 184
       9.4.3 The Procedure of UL ............ 186
   9.5 Performance Evaluation ............ 188
       9.5.1 Setup ............ 188
       9.5.2 Simulation Results and Analysis ............ 190
   9.6 Conclusion ............ 193
   References ............ 194
Part IV Intelligent IoV: Key Enabling Technologies in Vehicular Edge Intelligence

10 Toward Timely and Reliable DNN Inference in Vehicular Edge Intelligence ............ 199
   10.1 Introduction ............ 199
   10.2 System Architecture ............ 201
       10.2.1 System Overview ............ 201
       10.2.2 Motivation ............ 203
   10.3 Cooperative Partitioning and Offloading (CPO) Problem ............ 205
       10.3.1 Offloading Reliability Model ............ 205
       10.3.2 DNN Inference Delay Model ............ 208
       10.3.3 Problem Formulation ............ 212
       10.3.4 NP-Hardness ............ 213
   10.4 Proposed Solution ............ 214
       10.4.1 Submodular Approximation Allocation Algorithm ............ 214
       10.4.2 Feed Me the Rest Algorithm ............ 217
   10.5 Performance Evaluation ............ 219
       10.5.1 Setup ............ 219
       10.5.2 Simulation Results and Analysis ............ 221
   10.6 Conclusion ............ 225
   References ............ 225

11 Deep Q-Learning-Based Adaptive Multimedia Streaming in Vehicular Edge Intelligence ............ 227
   11.1 Introduction ............ 227
   11.2 System Architecture ............ 229
   11.3 Joint Resource Optimization (JRO) Problem ............ 231
       11.3.1 Analytic Model ............ 231
       11.3.2 Problem Formulation ............ 235
   11.4 Proposed Algorithm ............ 236
       11.4.1 Reinforcement Learning-Based Chunk Placement ............ 237
       11.4.2 Adaptive-Quality-Based Chunk Selection ............ 240
   11.5 Performance Evaluation ............ 244
       11.5.1 Default Setting ............ 244
       11.5.2 Simulation Results and Analysis ............ 245
   11.6 Conclusion ............ 250
   References ............ 252

12 A Multi-agent Multi-objective Deep Reinforcement Learning Solution for Digital Twin in Vehicular Edge Intelligence ............ 255
   12.1 Introduction ............ 255
   12.2 System Architecture ............ 257
   12.3 Quality–Cost Tradeoff Problem ............ 259
       12.3.1 Notations ............ 259
       12.3.2 Cooperative Sensing Model ............ 261
       12.3.3 V2I Uploading Model ............ 262
       12.3.4 Quality of Digital Twin ............ 263
       12.3.5 Cost of Digital Twin ............ 264
       12.3.6 Problem Formulation ............ 265
   12.4 Proposed Algorithm ............ 267
       12.4.1 Multi-agent Distributed Policy Execution ............ 268
       12.4.2 Multi-objective Policy Evaluation ............ 270
       12.4.3 Network Learning and Updating ............ 271
   12.5 Performance Evaluation ............ 273
       12.5.1 Settings ............ 273
       12.5.2 Result Analysis ............ 275
   12.6 Conclusion ............ 280
   References ............ 280
Part V Case Studies

13 See Through System
   13.1 Background
   13.2 System Prototype
   13.3 Performance Evaluation
   References
14 Non-line-of-sight Collision Warning System
   14.1 Background
   14.2 System Prototype
   14.3 Performance Evaluation
   References
15 Proactive Traffic Abnormity Warning System
   15.1 Background
   15.2 System Prototype
   15.3 Performance Evaluation
   References
16 UAV-Assisted Pedestrian Detection System
   16.1 Background
   16.2 System Prototype
   16.3 Performance Evaluation
   References
17 Vehicular Indoor Localization and Tracking System
   17.1 Background
   17.2 System Prototype
   17.3 Performance Evaluation
   References
Part VI Conclusion and Future Directions

18 Conclusion
19 Future Directions
   19.1 Vehicle–Road–Cloud Integration
   19.2 Vehicular Cyber-Physical Fusion
   19.3 Generative AI Empowered IoV
Acronyms
ADMM     Alternating Direction Method of Multipliers
AI       Artificial Intelligence
AP       Access Point
API      Application Programming Interface
BS       Base Station
C-V2X    Cellular Vehicle-to-Everything
CATT     China Academy of Telecommunications Technology
CDF      Cumulative Density Function
CNN      Convolutional Neural Network
CPU      Central Processing Unit
CSMA     Carrier Sense Multiple Access
CSMA/CA  Carrier Sense Multiple Access with Collision Avoidance
CV       Computer Vision
CVRP     Capacitated Vehicle Routing Problem
D2D      Device-to-Device
DAG      Directed Acyclic Graph
DDPG     Deep Deterministic Policy Gradient
DNN      Deep Neural Network
DQN      Deep Q-Networks
DRL      Deep Reinforcement Learning
DSRC     Dedicated Short-Range Communications
DT       Digital Twin
FCC      Federal Communications Commission
FL       Federated Learning
GNSS     Global Navigation Satellite System
GPRS     General Packet Radio Service
HTTP     Hypertext Transfer Protocol
IBN      Intent-Based Networking
ICV      Intelligent and Connected Vehicles
IEEE     Institute of Electrical and Electronics Engineers
IoT      Internet of Things
IoV      Internet of Vehicles
IP       Internet Protocol
ITS      Intelligent Transportation System
LFU      Least Frequently Used
LSTM     Long Short-Term Memory
LTE-V    Long Term Evolution-Vehicle
MAB      Multi-Armed Bandit
MDP      Markov Decision Process
MEC      Mobile Edge Computing
NEV      New Energy Vehicle
NFV      Network Function Virtualization
NLOS     Non-Line-of-Sight
NDN      Named Data Networking
NS       Network Slicing
OBU      On-Board Unit
PPO      Proximal Policy Optimization
QoS      Quality of Service
RP       Reference Point
RSU      Roadside Unit
SDVN     Software Defined Vehicular Network
SMTP     Simple Mail Transfer Protocol
SNR      Signal-Noise Ratio
TCP      Transmission Control Protocol
TDMA     Time Division Multiple Access
UAV      Unmanned Aerial Vehicle
V2C      Vehicle-to-Cloud
V2F      Vehicle-to-Fog
V2I      Vehicle-to-Infrastructure
V2V      Vehicle-to-Vehicle
V2X      Vehicle-to-Everything
V-NDN    Vehicular NDN
VCPS     Vehicular Cyber-Physical System
VEC      Vehicular Edge Computing
Part I
Introduction
Chapter 1
Background of IoV
Abstract Driven by recent advances in Vehicle-to-Everything (V2X) communication, artificial intelligence, and autonomous driving technologies, a new round of global technological and industrial revolution in the automobile industry is booming. The Internet of Vehicles (IoV) is a typical interdisciplinary field that involves computer science, communication engineering, automation control, transportation engineering, vehicle engineering, etc. It is also a combination of multiple cutting-edge technologies, including the mobile Internet, information technology, big data analytics, cloud computing, mobile edge computing, computational intelligence, etc. IoV is expected to promote innovation and breakthroughs in these new technologies and to drive the transformation and upgrade of the automobile industry. In this chapter, we first introduce the concept of IoV from three different perspectives. Then, we present a hierarchical architecture for IoV by analyzing its unique characteristics. Finally, we discuss newly arising challenges as well as opportunities in future IoV.

Keywords IoV concept · Hierarchical architecture · Challenges
1.1 What Is IoV

In this section, we discuss the concept of IoV from three different perspectives, namely, network evolution, technology development, and the industry trend.

From the viewpoint of network evolution, looking back at the history of computer networks, computers have been connected via local area networks since the late 1950s [1], which marks the beginning of the first generation of the network. Representative applications in this first generation include the World Wide Web and email, supported by key technologies including the Transmission Control Protocol (TCP), Internet Protocol (IP), Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), etc. Clearly, the primary function of the network in this generation is to connect hosts and servers and to enable fetching and reading information.

Then, around 2001, mobile devices with General Packet Radio Service (GPRS) started to roll out [2], which can be considered the beginning
of the second generation of the network. Since then, Time Division Multiple Access (TDMA) has been developed for radio access, and both circuit switching and packet switching can be supported by the core network, meaning that the public telephone network and the Internet are integrated for mobile device connection. Currently, with the rapid development of mobile communication technologies, representative applications of the second-generation network, such as WeChat, TikTok, and Facebook, are emerging, enabling social media, online streaming, and real-time sharing for mobile users. As we may observe, this generation of the network literally connects people together.

Now, driven by the convergence of multiple technologies, including 5G cellular networks, artificial intelligence, and big data technologies, together with the ever-increasing capacities of embedded systems, computing chips, etc., we are stepping into the third generation of the network, which collectively enables the connection of everything, namely, the Internet of Things (IoT) [3]. In light of this, IoV emerges as a concrete form of this network evolution, and it is a typical application scenario of IoT.

From the viewpoint of technology development, first and foremost, the standardization of vehicular communication protocols plays an important role. In October 1999, the United States Federal Communications Commission (FCC) allocated 75 MHz of spectrum in the 5.9 GHz band for Dedicated Short-Range Communications (DSRC) [4], which was standardized as Institute of Electrical and Electronics Engineers (IEEE) 802.11p in 2010 to support direct I2V and V2V communications. The overall DSRC communication stack is standardized by the IEEE 1609 working group. On the other hand, along with the development of cellular networks, a novel technical path for Vehicle-to-Everything (V2X) was conceived, the so-called Cellular Vehicle-to-Everything (C-V2X). In May 2013, Prof.
Shanzhi Chen from the China Academy of Telecommunications Technology (CATT) proposed Long-Term Evolution Vehicle (LTE-V), the first dedicated V2X wireless communication technology integrating a cellular communication mode and a short-range direct communication mode. As the original version, LTE-V2X established the system architecture and key technical principles of C-V2X. Along with the evolution of cellular systems from 4G to 5G, C-V2X has developed through two technical stages: LTE-V2X and NR-V2X [5].

On the basis of wireless communications, another critical driving technology is artificial intelligence (AI), which enables vehicles to perceive and comprehend their environments and make decisions accordingly. First, with the increasing computation capacity of onboard chips and the rapid development of deep learning technologies, computer vision (CV) has become one of the most prevalent solutions for autonomous driving, utilizing in-car and roadside cameras for real-time detection of traffic environments such as lanes, road signs, and obstacles. To enhance system robustness and scalability in extreme weather or dark environments, other sensors such as millimeter-wave radar and LIDAR can also be incorporated for object detection. For instance, LIDAR employs laser beams to scan the surrounding environment and generates highly accurate point cloud data, which can be utilized for distance measurement, high-precision mapping, and 3D modeling. Furthermore,
multi-modal fusion technology can be adopted to enhance perception accuracy and robustness by fusing data from different types of sensors.

On top of communication and intelligence, to truly realize comprehensive environment perception, heterogeneous resource allocation, adaptive service scheduling, and cooperative driving, it is imperative to develop new system paradigms for vehicular networks. Cloud computing has been demonstrated to be one of the most successful computation paradigms for supporting applications with great storage and computation demands. Nevertheless, relying solely on cloud computing in vehicular networks may result in excessive service delay due to the limited bandwidth and the massive amount of data. In this regard, the edge/fog computing paradigm has been extensively studied, which is expected to support high-density device connection, low-delay data transmission, and real-time task computation by offloading computing, networking, storage, communication, and data resources closer to end users. Meanwhile, note that end users such as vehicles also have certain computation and storage capacities, typically lower than those of the edge and cloud nodes. Therefore, the computation paradigm of end–edge–cloud cooperation emerges, in which the heterogeneous resources for networking, computation, communication, and storage are orchestrated across the system, aiming at maximizing resource utilization in dynamic environments. In light of this, IoV can be regarded as a technical carrier that integrates wireless communication, intelligent perception, and end–edge–cloud cooperation.

From the viewpoint of the industry trend, the New Energy Vehicle (NEV) market has soared over the past few years, and a new round of industrial chain adjustments and upgrades is in progress.
One of the most noteworthy development trends of NEVs is the significant improvement of the smart cockpit: the cockpit is no longer a combination of a steering wheel, an instrument panel, and a screen, but a comfortable, multifunctional, and smart mobile space for both the driver and passengers. Technical breakthroughs in human–machine interaction, vehicle operating systems, and driver assistance systems allow the vehicle to better understand the status and demands of both the driver and the passengers. In the near future, we may envision that the cockpit will become a truly intelligent space that integrates the physical and virtual worlds for safe driving, immersive entertainment, efficient working, and comfortable travelling.

Closely related to the smart cockpit is intelligent assisted and autonomous driving, which leads the technical path of single-vehicle intelligence. Cutting-edge technologies including environmental perception, behavior decision, path planning, and motion control have become strong driving forces for autonomous vehicles. Various forms of autonomous vehicles have emerged in specific scenarios, such as airports, park areas, mine areas, closed or semi-closed low-speed driving scenarios, and certain demonstration areas on public roads. Although the technologies for single-vehicle autonomous driving are maturing, they may suffer from the long-tail effect, which hinders the practical use and commercial deployment of real autonomous driving.

To address the aforementioned challenges, the industry is moving toward the development of Intelligent and Connected Vehicles (ICVs), where the intelligentization and the networking of vehicles evolve in parallel and complement each other to form a holistic autonomous driving system. For instance, when a vehicle can communicate with its surrounding objects, such as neighboring vehicles, road infrastructures, and pedestrians, it may require less single-vehicle intelligence to realize the same driving automation level (e.g., L3) [6]. In contrast, a higher capability of single-vehicle intelligent driving can reduce the dependency on communications and connections. Different parties, including governments, industries, and academia, have reached a consensus on developing ICVs. Taking China as an example, ICVs have been incorporated into China's national development strategy, and a series of policies, specifications, and technologies with respect to ICVs' functional safety, cyber security, data security, and road testing have been released. Prof. Keqiang Li from Tsinghua University interpreted the roadmap of ICVs in China from the aspects of strategic significance and technical content and proposed a Chinese Solution, which promises breakthroughs in the autonomous driving industry via the integration of vehicles, roads, and clouds [7]. Looking ahead, ICVs will embrace both autonomous driving and vehicular communication technologies, which are tightly coupled and synergistically enhanced. In light of this, IoV serves as a fundamental platform that facilitates the transformation of the intelligent automotive industry toward ICVs.
1.2 A Hierarchical Architecture of IoV

It is of great significance to explore new paradigms of IoV to support large-scale, real-time, and reliable applications in future ICVs and ITSs, which are evolving toward networking, cooperation, and intelligence. In this section, we present a hierarchical architecture of IoV, which aims at synthesizing the paradigms of Software Defined Networking (SDN) and fog computing and exploiting their synergistic effects on empowering emerging applications. Specifically, a four-layer architecture is designed, including the application layer, the control layer, the virtualization layer, and the data layer, with the objectives of (a) enabling logically centralized control via the separation of the control plane and the data plane, (b) facilitating adaptive resource allocation and Quality of Service (QoS)-oriented services based on Network Function Virtualization (NFV) and Network Slicing (NS), and (c) enhancing system scalability, responsiveness, and reliability by exploiting the networking, computation, communication, and storage capacities of fog-based services.

SDN was originally proposed for renovating conventional network architectures and enabling rapid network innovation, and it has shown great advantages in the control and management of cloud systems [8]. The core idea of SDN is to simplify network management and expedite system evolution by decoupling the control plane and the data plane, where the network intelligence is logically centralized in the software-based controller, and network nodes such as switches forward data packets based on the decisions made by the controller. Apparently, considering
the features of IoV, such as dynamic network topologies, high mobility of vehicles, and heterogeneous communication interfaces, an SDN-based framework is desirable for abstracting resources and implementing optimal service scheduling in such a system.

Fog computing is an emergent networking paradigm for enabling low-latency and high-reliability information services for billions of connected devices in the era of IoT by offloading computing, networking, storage, communication, and data resources closer to end users [9]. It is designed to complement conventional cloud-based services in supporting high-density device connection, massive data transmission, and intensive computation at the network edge. Moreover, IoV not only represents the connection among vehicles but also implies the collaboration among pedestrians, roads, infrastructures, etc. Undoubtedly, as one of the most representative application scenarios of IoT, IoV is expected to benefit tremendously from the development of fog-based services. In addition, with the maturity of ultra-reliable and low-latency 5G technology, as well as the fast development of modern vehicles with respect to their computation, storage, and communication capabilities, it is natural and promising to bring fog computing into IoV.

With the above motivation, we design a novel hierarchical architecture for IoV, which aims at enhancing system scalability and reliability, improving application agility and flexibility, and laying a solid foundation for enabling future ICVs and ITSs. As shown in Fig. 1.1, the architecture integrates the paradigms of SDN and fog computing with four layers, namely, the application layer, the control layer, the virtualization layer, and the data layer. Detailed architectural characteristics are presented as follows.
SDN-Based Framework in IoV First, as shown in the control layer of the hierarchical architecture, the SDN controller resides in the backbone network, which connects to cloud data centers and the Internet via the core network. Second, similar to traditional SDN components, the controller communicates with upper-layer applications, such as environment sensing, road safety management, and ITS management, via the northbound interface. Dedicated Application Programming Interfaces (APIs) can be designed based on particular application requirements, including functions for computation, communication, and storage resource allocation, security requirements, access control, etc. Third, the SDN controller communicates with the underlying resources via the southbound interface. Note that instead of managing heterogeneous physical resources directly, the SDN controller can obtain a uniform view of virtual resources based on the resource abstraction at the virtualization layer, which facilitates service scheduling at the controller.

NFV and NS in IoV Although the technologies of NFV and NS have been widely studied in 5G networks, it is non-trivial to migrate them into IoV, especially considering characteristics such as highly heterogeneous and distributed underlying resources, as well as highly dynamic and varied service requirements imposed by upper-layer applications. Accordingly, we present a tailored virtualization layer, which is responsible for the abstraction of networking, computation, communication, and storage resources in IoV. Nevertheless, due to
Fig. 1.1 Hierarchical architecture for IoV
the fast-changing network topology, the various radio coverages of different wireless communication interfaces, and the vast amount of information that is continually generated, sensed, and shared among the nodes in the data layer, it is challenging to maintain an accurate logical view of the underlying resources. In this regard, we introduce another virtual layer by abstracting part of the data nodes as fog nodes, in which certain intelligence is implemented, including the provision of services based on local computation, communication, and data resources, and the abstraction and management of available local resources. In this manner, it not only reduces the dynamics of the underlying resources but also alleviates the workload of resource virtualization for the upper layer. Moreover, such a hierarchical architecture facilitates the vertical implementation of NFV and NS. For example, given a set of applications with their respective QoS requirements, the virtual resources can be orchestrated in different ways based on either distributed scheduling at the fog layer or centralized scheduling at the SDN controller. In either way, the services are transparent and self-contained for individual upper-layer applications.

Fog-Based Services in IoV The data layer consists of nodes with heterogeneous wireless communication interfaces, such as cellular stations, Roadside Units (RSUs), Wi-Fi Access Points (APs), 5G small cells, and vehicles. In addition to wireless radio access, these nodes are associated with computation units, and they can be
abstracted as fog nodes for distributed service provision. Unlike previous standalone fog-based services adopted in vehicular networks, the designed virtualization layer smoothly bridges the gap between the logically centralized control at the SDN controller and the distributed services at the fog layer. Specifically, in this architecture, both mobile and static data nodes can be dynamically assigned as fog nodes based on the scheduling of different services. When acting as fog nodes, they not only follow the rules deployed by the SDN controller but also implement certain intelligence for local services. In addition, the fog nodes perform aggregation and abstraction of the underlying resources and update their real-time status to the virtualization layer, which in turn helps the management of virtual resources and facilitates the scheduling of service offloading and load balancing at the SDN controller.
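To make the idea of end–edge–cloud scheduling concrete, the following sketch shows one way a controller could assign a task to the tier (vehicle, fog node, or cloud) with the lowest estimated completion time, modeled simply as transmission time plus computation time. This is an illustrative toy model, not part of the architecture specification; all tier names, bandwidth figures, and compute capacities are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    uplink_mbps: float  # assumed per-vehicle transmission rate to this tier
    gcps: float         # assumed compute capacity in giga-cycles per second

def completion_time(task_mbit: float, task_gcycles: float, tier: Tier) -> float:
    """Estimated delay = transmission time + computation time."""
    return task_mbit / tier.uplink_mbps + task_gcycles / tier.gcps

def schedule(task_mbit: float, task_gcycles: float, tiers) -> Tier:
    """Pick the tier with the lowest estimated completion time."""
    return min(tiers, key=lambda t: completion_time(task_mbit, task_gcycles, t))

# Hypothetical capacities: local execution needs no transmission; the fog
# node is close (fast link, moderate compute); the cloud is far (slow shared
# link, abundant compute).
tiers = [
    Tier("vehicle", uplink_mbps=float("inf"), gcps=2.0),
    Tier("fog",     uplink_mbps=10.0,         gcps=10.0),
    Tier("cloud",   uplink_mbps=2.0,          gcps=100.0),
]

print(schedule(2.0, 0.2,  tiers).name)  # light task: runs locally -> vehicle
print(schedule(2.0, 5.0,  tiers).name)  # medium task: offloaded -> fog
print(schedule(2.0, 40.0, tiers).name)  # heavy task: offloaded -> cloud
```

The point of the sketch is the tradeoff it exposes: as a task's compute demand grows relative to its input size, the optimal execution point migrates outward from the vehicle, through the fog node, to the cloud.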
1.3 Challenges

The newly arising challenges of IoV under the above architecture are discussed as follows.

Global System Knowledge Acquisition at the Control Layer To enable logically centralized control, the SDN controller is expected to acquire global knowledge of the system accurately and in a timely manner, including service status, resource status, vehicle status, etc. Nevertheless, in such an intermittently connected and highly dynamic wireless environment, vital problems such as transmission delay, packet loss, and bandwidth competition are inevitable, which may severely hinder the performance of system knowledge acquisition and monitoring at the control layer. Furthermore, in practice, SDN controllers are actually deployed in a distributed manner across a large-scale service area. Therefore, how to efficiently synthesize information from multiple controllers and form a global view of the system is critical to realizing logically centralized control. In this regard, new modules are expected to be incorporated at the control layer, aiming at bridging the gap between the biased view at the SDN controller and the authentic system status, and new technologies are needed for efficiently synthesizing local and distributed system status and constructing a logical view of global system knowledge.

Heterogeneous Resource Management at the Virtualization Layer First, due to the highly diverse and dynamic features of resources in IoV, it is non-trivial to characterize them in a uniform and coherent way. For example, different wireless communication interfaces have different radio coverages, transmission rates, and access capacities. Also, the link availability and connection capacities of different communication resources may change dynamically with the varying network topologies. Moreover, other resources such as computation and storage may keep changing due to the inherent dynamics and complexity of the system status.
For instance, data nodes such as vehicles or RSUs may constantly sense and generate new data for computing, transmitting, and storing. Therefore, how to construct a
uniform and coherent view of heterogeneous resources and virtualize them as sliced entities is another critical issue deserving future efforts. Second, considering the dynamic and varied QoS requirements imposed by upper-layer applications, NFV and NS are expected to enable service orchestration with virtualized entities. Nevertheless, it is difficult to realize transparent and adaptive services that guarantee QoS requirements under such circumstances. For example, when virtualizing the routing function, although the data plane and the control plane can be decoupled, the allocation of continuous resources for vehicles still cannot be guaranteed in fast-changing network environments. Therefore, it is imperative to further investigate the coordination between fog- and cloud-based services in IoV for better resource management and allocation in both centralized and distributed ways. Third, although the proposed fog-based service at the data layer helps with resource abstraction, due to the large-scale distribution and high heterogeneity of the underlying resources, it is still challenging to enable seamless and flexible management of virtual resources in IoV.

Large-Scale Data Transmission at the Data Layer With the rapid growth of service scales and the blooming of data-driven applications in modern IoV, off-the-shelf communication protocols cannot efficiently support parallel I2V/V2V communications, especially in high-density and high-mobility traffic environments. For instance, IEEE 802.11p adopts Carrier Sense Multiple Access (CSMA) at the MAC layer, which has been demonstrated to be ineffective in bandwidth utilization when a large number of I2V/V2V data transmissions take place concurrently within a certain area. With the proposed architecture, a cross-layer design of communication protocols is desired to enhance resource utilization for concurrent data transmissions.
For instance, the co-design of power control in the physical layer and the multiple access protocol in the MAC layer is expected to improve data throughput by striking a balance among transmission range, data rate, and interference. On the other hand, it is critical to better coordinate heterogeneous resources at the data layer to enable large-scale data services. Nevertheless, considering the heterogeneity of data nodes with respect to storage, computation, and communication capacities, as well as mobility, it is challenging to optimize resource utilization at the data layer.
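The degradation of contention-based access under high density can be illustrated with a standard slotted, ALOHA-style abstraction: a slot carries useful data only when exactly one of the n contending nodes transmits. This toy model is not the IEEE 802.11p CSMA mechanism itself, but it captures the same qualitative behavior.

```python
def slot_success_prob(n: int, p: float) -> float:
    """Probability that exactly one of n contending nodes transmits in a
    slot (so the slot carries useful data): n * p * (1 - p)^(n - 1)."""
    return n * p * (1 - p) ** (n - 1)

# With an access probability tuned for a sparse network (p = 0.1), channel
# utilization collapses as vehicle density grows.
for n in (10, 50, 200):
    print(n, round(slot_success_prob(n, 0.1), 4))

# Even if p is retuned to its optimum 1/n at each density, the achievable
# utilization stays capped near 1/e (about 0.37).
print(round(slot_success_prob(200, 1 / 200), 4))
```

With p = 0.1 the per-slot success probability falls from roughly 0.39 at 10 nodes to essentially zero at 200 nodes, and even perfect per-density tuning cannot push it above about 37%, which is why scheduled (e.g., TDMA-based) or cross-layer designs become attractive at high density.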
References

1. Wikipedia, Computer network. https://en.wikipedia.org/wiki/Computer_network
2. Wikipedia, General packet radio service. https://en.wikipedia.org/wiki/General_Packet_Radio_Service
3. Wikipedia, Internet of things. https://en.wikipedia.org/wiki/Internet_of_things
4. Wikipedia, Dedicated short-range communications. https://en.wikipedia.org/wiki/Dedicated_short-range_communications
5. S. Chen, J. Hu, L. Zhao, R. Zhao, J. Fang, Y. Shi, H. Xu, Cellular Vehicle-to-Everything (C-V2X). Wireless Networks (Springer Nature, Singapore, 2023). https://doi.org/10.1007/978-981-19-5130-5_1
6. SAE, Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles J3016_202104. https://www.sae.org/standards/content/j3016_202104/
7. Q. Xu, K. Li, J. Wang, Q. Yuan, Y. Yang, W. Chu, The status, challenges, and trends: an interpretation of technology roadmap of intelligent and connected vehicles in China (2020). J. Intell. Connected Veh. 5(3), 1–7 (2022)
8. R. Jain, S. Paul, Network virtualization and software defined networking for cloud computing: a survey. IEEE Commun. Mag. 51(11), 24–31 (2013)
9. M. Chiang, T. Zhang, Fog and IoT: an overview of research opportunities. IEEE Internet Things J. 3(6), 854–864 (2016)
Chapter 2
State of the Art
Abstract This chapter gives a comprehensive review of the state of the art in IoV with respect to connection, cooperation, and intelligence. Specifically, it first reviews representative studies on data dissemination in IoV, including vehicular communication protocols, emerging service architectures, and scheduling algorithms. Then, it reviews the literature on cloud-based, edge-based, and end–edge–cloud cooperative services in IoV. Finally, it reviews related work on vehicular edge intelligence (VEI), including key enabling technologies, cooperative sensing and information fusion technologies, and VEI-empowered intelligent transportation systems.

Keywords Vehicular data dissemination · Vehicular end-edge-cloud cooperation · Vehicular edge intelligence
2.1 Data Dissemination in IoV

2.1.1 Vehicular Communication Protocols

Data dissemination problems have been extensively studied in IoV. Jeong et al. [1] proposed a Spatio-Temporal coordination-based Media Access Control (STMAC) protocol for efficiently sharing driving safety information in urban vehicular networks, where a contention-free channel access scheme was designed to exchange safety messages simultaneously. Chen et al. [2] proposed an Adaptive Vehicular MAC protocol (A-VeMAC) based on Time Division Multiple Access (TDMA), which adaptively partitions the frame size to better support unbalanced traffic conditions and achieve higher channel utilization. Zhang et al. [3] proposed a Vehicular Cooperative Media Access Control (VC-MAC) protocol, which exploits the cooperation of I2V and V2V communications to enhance spatial reusability. It considered the scenario where vehicles request files of common interest when passing through the RSU. VC-MAC enables vehicles to cooperatively share their cached information via V2V communication when they are out of the RSU's coverage, so as to improve the system throughput. Bi et al. [4] proposed a Multi-Channel Token Ring MAC Protocol (MCTRP) for V2V communication, in which the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism was applied for delivering emergency messages. In addition, a token-based data exchange protocol was designed to improve bandwidth efficiency for non-safety applications. Zeng et al. [5] incorporated channel prediction into a centralized cooperative data dissemination scheduling strategy for vehicular communications. Specifically, a recursive least squares algorithm was proposed to realize large-scale channel prediction with low computational complexity. Then, a scheduling strategy was proposed for cooperative data dissemination. Cheng et al. [6] investigated the feasibility of V2V and V2I integration by operating V2V links as underlay Device-to-Device (D2D) links. A suite of methods was proposed for low-complexity yet high-performance D2D operations, including an interference coordination approach, a resource allocation scheme, and a transmission scheduling framework. Lyu et al. [7] proposed a novel distributed graph-based approach to jointly optimize multicast link establishment with data dissemination and processing decisions by exchanging only partial information among neighboring vehicles. A three-layer graph model was proposed to maximize system energy efficiency. Hadded et al. [8] gave an overview of TDMA-based MAC protocols in vehicular networks and emphasized the importance of designing collision-free MAC mechanisms.
2.1.2 Data Dissemination in SDVN

Many studies have investigated data dissemination in Software-Defined Vehicular Networks (SDVNs). Liu et al. [9] presented an SDN architecture for GeoBroadcast in vehicular networks, in which the SDN controller helps source vehicles find paths toward destination vehicles based on topological and geographical information. He et al. [10] proposed an SDN-based architecture to enable rapid network innovation for vehicular communications, in which vehicles and RSUs were abstracted as SDN switches. In addition, network resources such as wireless bandwidth can be allocated and assigned by the logically centralized control plane, which provides more agile configuration capability. Singh et al. [11] designed an SDN-based intelligent data dissemination approach in vehicular edge computing, combining Intent-Based Networking (IBN) and SDN technologies to minimize energy consumption and reduce latency. Cheng et al. [12] proposed the 5GenCIV framework, which integrates 5G and intelligent vehicles to enable affordable and reliable autonomous driving. Specifically, the SDN feature of the 5G network was exploited to reduce network latency for specific self-driving operations via the centralized control plane. Zhang et al. [13] proposed a novel Unmanned Aerial Vehicle (UAV)-enabled scheduling protocol consisting of a proactive caching policy and a file sharing strategy in V2X networks, where a Dynamic Trajectory Scheduling (DTS) algorithm was designed to optimize the caching duration. Bhatia et al. [14] presented an adaptive broadcast interval for
data dissemination in V2I environments, where the SDN controller was adopted to determine the broadcast time interval among vehicles.
2.1.3 Advanced Scheduling for Vehicular Data Dissemination

A number of studies have incorporated network coding into data dissemination in vehicular networks. Bhatia et al. [15] presented a data dissemination framework that incorporates network coding with Multi-Generation-Mixing (MGM) functionalities to increase both the reliability and the security of data transmission in vehicular networks. Liu et al. [16] considered both communication constraints and application requirements in vehicular networks; a network coding-assisted scheduling algorithm was proposed to best exploit the joint effect of V2V and I2V communications. The work in [17] considered real-time data dissemination in vehicular networks, where a memetic algorithm incorporating vehicular caching and network coding was designed to enhance the bandwidth efficiency of the RSU. Ali et al. [18] considered the effect of heterogeneous data sizes in network coding and proposed a dynamic threshold-based coding-assisted approach for real-time data broadcast. Named Data Networking (NDN), in which node communication is based on named, data-centric operations decoupled from the data provider's address/location, is considered a promising way to improve content delivery efficiency in vehicular networks. Wang et al. [19] proposed a novel Predictive Forwarding Strategy (PRFS) for Vehicular NDN (V-NDN), which establishes an accurate neighbor table based on locations predicted by a Long Short-Term Memory (LSTM) network and then selects the next hop among the neighbors based on link reliability and distance. Bian et al. [20] proposed a geo-based NDN forwarding strategy to achieve efficient and reliable packet delivery in urban VANET scenarios. Furthermore, they investigated the caching redundancy problem of the default full-path caching scheme and presented heuristic strategies to reduce unnecessary cached copies. Content caching is also widely adopted to enhance data dissemination in vehicular networks.
Kumar et al. [21] proposed a peer-to-peer cooperative caching scheme, where traffic information is shared among vehicles using a Markov chain model to minimize the load on the infrastructure. A probabilistic content replacement policy was designed to accommodate newly arrived data based on waiting time and access frequency. Yu et al. [22] proposed a mobility-aware proactive edge caching scheme based on federated learning, which enables collaborative learning of a global model to predict content popularity with private training data distributed across different vehicles.
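To make the coding-assisted broadcast idea behind [15–18] concrete, the canonical two-vehicle case can be sketched as follows (item names and sizes are hypothetical): if one vehicle caches item x but still needs item y, and another caches y but needs x, a single XOR-coded broadcast from the RSU serves both requests at once.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length payloads byte by byte."""
    return bytes(p ^ q for p, q in zip(a, b))

# Hypothetical data items, padded to a common length for XOR coding.
item_x = b"map-tile-0042"
item_y = b"traffic-feed-7"
n = max(len(item_x), len(item_y))
item_x, item_y = item_x.ljust(n), item_y.ljust(n)

coded = xor_bytes(item_x, item_y)  # the RSU broadcasts this single packet

# Vehicle A (caches x, wants y) and vehicle B (caches y, wants x)
# each recover the missing item by XOR-ing with their cached copy.
decoded_y = xor_bytes(coded, item_x)
decoded_x = xor_bytes(coded, item_y)
assert decoded_y == item_y and decoded_x == item_x
```

One coded packet thus replaces two uncoded transmissions, which is the bandwidth gain these scheduling algorithms try to maximize over many vehicles and items.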
2.2 Vehicular End–Edge–Cloud Cooperation

2.2.1 Cloud-Based Services in IoV

Extensive research has been conducted on cloud-based services in IoV. Shahzad et al. [23] put forward a dynamic programming algorithm with randomization to maximize the number of tasks that could be offloaded to the cloud server. Ashok et al. [24] proposed a dynamic method for offloading specific components of vehicle applications to the cloud. The method is based on dynamic decision-making techniques that can respond to changes in network conditions and reduce response time. Sun et al. [25] introduced a joint on-board task offloading and job scheduling method to find the best offloading position and improve the QoS. It utilizes ant colony optimization based on the obtained task transmission route information to achieve multi-objective optimization. Bitam et al. [26] proposed a new cloud computing model called ITS-Cloud, which consists of two sub-models: the static sub-model inherits the benefits of conventional cloud computing, while the dynamic sub-model represents a temporary cloud formed by the vehicles themselves, acting as cloud data centers. Chen et al. [27] addressed the offloading problem with the objective of maximizing the Quality of Experience (QoE) for users under resource constraints. Zhang et al. [28] introduced a fiber–wireless technology that integrates centralized network management with multiple communication techniques, in which a load-balancing task offloading scheme was used to minimize the processing delay.
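At their core, many of the offloading decisions above reduce to a latency comparison. The sketch below (all parameters illustrative, not taken from any cited work) offloads a task only when uplink transmission plus remote execution beats local execution.

```python
def offload_beneficial(cycles, data_bits, f_local_hz, f_cloud_hz, uplink_bps):
    """Return True if offloading reduces completion time.

    Simplified model: ignores queuing delay, result downlink, and energy.
    """
    t_local = cycles / f_local_hz                      # compute on the vehicle
    t_offload = data_bits / uplink_bps + cycles / f_cloud_hz  # ship, then compute
    return t_offload < t_local

# Illustrative numbers: a 2-Gcycle task with 1 MB input, a 1 GHz on-board
# CPU, a 10 GHz cloud CPU share, and a 20 Mbit/s V2I uplink.
print(offload_beneficial(2e9, 8e6, 1e9, 10e9, 20e6))  # → True (0.6 s vs 2.0 s)
```

The cited works extend exactly this trade-off with queuing, multi-hop routes, energy, and payment costs, which is why metaheuristics such as ant colony optimization become necessary.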
2.2.2 MEC-Based Services in IoV

Various studies have explored Mobile Edge Computing (MEC)-based services in IoV. Some studies focused on leveraging edge caching to enhance service performance. Zhao et al. [29] addressed the joint optimization of service caching, request scheduling, and resource allocation in edge caching and computation management. Lyapunov optimization, matching theory, and the consensus alternating direction method of multipliers were applied to achieve near-optimal delay performance in an online and distributed manner. Yin et al. [30] designed a content-centric data distribution scheme based on edge caching for distributing large files and selecting cache nodes. Notably, the scheme incorporated 5G/6G communication to improve the stability of I2V-based data broadcast. Some studies focused on exploiting the computation capacities of MEC in IoV. Li et al. [31] introduced an on-demand edge computing framework that utilizes collaboration between devices and edge nodes for inference tasks. This framework dynamically partitions computation tasks between edge devices and the cloud, enabling real-time inference and reducing computing latency through early-exit inference. Ye et al. [32] proposed a backscatter-assisted wireless-powered MEC network that takes into account the limited computation capacity of the MEC server,
as well as QoS and energy-causality constraints. Two resource allocation schemes were proposed to maximize the total computation bits and to improve the system's computation energy efficiency. Ke et al. [33] designed an adaptive computation offloading model for heterogeneous IoV, considering multiple stochastic tasks, time-varying wireless channels, and dynamic bandwidth. The goal is to strike a balance between energy consumption and data transmission delay. A number of studies have considered distributed task offloading methods in IoV. Bute et al. [34] proposed an efficient distributed task offloading scheme that selects nearby vehicles with available computing resources to perform tasks in parallel. This scheme considered factors such as link reliability, distance, available computing resources, and relative velocity. Sun et al. [35] developed a learning-based task offloading framework based on the concept of the multi-armed bandit, which enables vehicles to learn from neighboring vehicles to minimize the average offloading delay and improve performance. Mei et al. [36] designed a maximum task offloading algorithm based on Lyapunov optimization theory to make real-time decisions and maximize the throughput of systems equipped with low-performance CPUs and limited energy capacity. Chen et al. [37] proposed a distributed multi-hop task offloading decision model aimed at improving task execution efficiency. This method selects neighboring vehicles within wireless communication range as candidate work nodes and utilizes a greedy, discrete bat algorithm to solve the offloading problem.
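As a rough illustration of the multi-armed-bandit view taken by Sun et al. [35] (the code below is a generic UCB1 learner, not their exact algorithm), each neighboring vehicle can be treated as an arm whose reward is the negative observed offloading delay:

```python
import math
import random

class UCB1:
    """Generic UCB1: pick the arm maximizing mean reward + exploration bonus."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for arm, c in enumerate(self.counts):  # play every arm once first
            if c == 0:
                return arm
        return max(range(len(self.counts)),
                   key=lambda a: self.means[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

# Toy simulation: three candidate vehicles with different mean delays (seconds).
random.seed(0)
mean_delay = [0.8, 0.3, 0.5]
learner = UCB1(3)
for _ in range(2000):
    arm = learner.select()
    delay = random.gauss(mean_delay[arm], 0.05)
    learner.update(arm, -delay)  # reward = negative delay

print(learner.counts.index(max(learner.counts)))  # → 1 (lowest-delay vehicle)
```

Over time the learner concentrates its offloading on the lowest-delay neighbor while still occasionally probing the others, which is the exploration/exploitation trade-off such frameworks exploit.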
2.2.3 End–Edge–Cloud Cooperated Services in IoV

End–edge–cloud cooperated services have been extensively studied in IoV. Liu et al. [38] investigated task offloading in a three-layer service architecture, wherein vehicles, fog servers, and central cloud resources were scheduled cooperatively. This architecture combined the alternating direction method of multipliers and particle swarm optimization to determine the optimal offloading decision based on the weighted sum of execution delay, energy consumption, and payment cost. Liu et al. [39] presented a vehicular fog computing architecture that leverages the synergistic effect of the cloud, static fog, and mobile fog in processing time-critical tasks in IoV. An adaptive task offloading algorithm was designed by considering dynamic requirements and resource constraints, which makes offloading decisions to maximize the completion ratio of time-critical tasks. Xu et al. [40] proposed a fuzzy task offloading and resource allocation scheme, which utilizes a fuzzy neural network (T-S FNN) and game theory to minimize task processing latency for users under limited edge server resources. The cloud server predicted future traffic flow with the T-S FNN, and the RSUs adjusted and balanced the current load based on the prediction and game theory. Ni et al. [41] considered an end–edge–cloud collaborative vehicular network where a virtual resource pool was constructed by integrating heterogeneous resources. They proposed a multi-scenario offloading schedule for data processing and analysis and
optimized the parameters of the collaborative network to maximize system utility. Busacca et al. [42] considered a multi-layer job-offloading scenario in IoV spanning vehicular, MEC, and backhaul domains. The framework modeled the interactions among stakeholders in the three domains as a Markov decision process, which makes optimal decisions on the number of jobs to offload between MEC servers and on the computing resources of each job. Kazmi et al. [43] introduced an incentive mechanism based on contract theory to maximize the social welfare of vehicular networks by motivating neighboring vehicles to share their resources. This approach utilized RSUs to provide appropriate rewards to resource-sharing vehicles through tailored contracts based on their contribution. Dai et al. [44] formulated a data-driven task offloading problem by jointly optimizing bandwidth/computation resource allocation among vehicles, MEC servers, and the cloud. An asynchronous approach was designed to determine offloading decisions, which achieved fast convergence by training the local model at each agent in parallel and uploading parameters for global model updates asynchronously. Specifically, they decomposed the resource allocation problem into several independent subproblems, which were solved by convex optimization theory.
2.3 Vehicular Edge Intelligence

2.3.1 Key Technologies

Edge intelligence is expected to provide MEC together with AI to vehicles in close proximity [45]. Great efforts have been made to enable the fundamental functions of edge intelligence. To accelerate DNN inference on resource-constrained edge devices, Zeng et al. [46] proposed CoEdge, which orchestrates cooperative DNN inference over heterogeneous edge devices by dynamically partitioning workloads based on devices' computing capabilities and network conditions. Zhao et al. [47] proposed DeepThings to adaptively distribute DNN inference on tightly resource-constrained IoT edge clusters, where a Fused Tile Partitioning (FTP) method and a workload stealing approach were proposed to reduce resource consumption and accelerate inference. Zhang et al. [48] proposed DeepSlicing to support customized, flexible, fine-grained partitioning and enable collaborative and adaptive DNN inference, where both DNN models and the associated data are partitioned to achieve a trade-off between computation and synchronization. Ku et al. [49] jointly optimized the delay and accuracy of vehicular object detection applications, where a dynamic programming-based heuristic algorithm was proposed to partition and offload the DNN inference. Wu et al. [50] proposed a Deep Reinforcement Learning (DRL)-based two-level scheduling algorithm for DNN inference in vehicular networks to minimize both response time and energy consumption.
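The model-partitioning idea shared by these systems can be reduced to a split-point search (the per-layer latencies and activation sizes below are invented for illustration): run the first k layers on the vehicle, transmit layer k's activation over the uplink, and run the remaining layers on the edge server.

```python
def best_split(device_ms, edge_ms, act_bytes, uplink_bytes_per_ms):
    """Choose the split index k (layers < k on the device, the rest on the
    edge) that minimizes end-to-end latency. k=0 is full offload, k=n is
    fully local. act_bytes[k] is the uplink payload when splitting after k
    device layers (k=0 sends the raw input; k=n sends nothing)."""
    n = len(device_ms)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        tx = 0.0 if k == n else act_bytes[k] / uplink_bytes_per_ms
        t = sum(device_ms[:k]) + tx + sum(edge_ms[k:])
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t

# Toy 4-layer model: the device is ~8x slower, and early layers emit
# large activations, so neither extreme is optimal.
device_ms = [40, 30, 25, 20]
edge_ms = [5, 4, 3, 2]
act_bytes = [6e6, 2e6, 5e5, 1e5]
uplink = 1e4  # bytes per ms, roughly 80 Mbit/s

print(best_split(device_ms, edge_ms, act_bytes, uplink))  # → (3, 107.0)
```

Real systems such as those in [46–49] solve richer versions of this search (multiple devices, tiled partitions, accuracy constraints), but the latency-versus-transmission trade-off is the same.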
To enable efficient Federated Learning (FL) in IoV, Zhou et al. [51] proposed a two-layer FL model to achieve more efficient and more accurate learning while ensuring data privacy protection and reducing communication overheads. A multi-layer heterogeneous model selection and aggregation scheme was designed to better utilize the local contexts of individual vehicles and the global contexts of RSUs, respectively. Hammoud et al. [52] proposed a horizontal FL architecture in IoV, where a hedonic game-theoretical model was used to construct stable fog federations; service providers belonging to the same federation can then migrate services on demand to adaptively meet FL requirements. To adapt to the distinct capabilities and data qualities of vehicles, Xiao et al. [53] studied the participant selection and resource allocation problem under learning time and energy consumption constraints. They used the Lagrangian dual function and the sub-gradient projection method to iteratively approximate the optimal resource allocation. Then, an adaptive harmony algorithm was developed to improve local model accuracy.
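The aggregation step underlying such FL schemes is typically a sample-weighted average of local models (FedAvg). A minimal sketch, with made-up vehicle sample counts and model weights represented as plain lists:

```python
def fed_avg(local_weights, sample_counts):
    """Weighted average of per-vehicle model weights: each vehicle's update
    is weighted by its number of local training samples."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three vehicles train locally and report 2-parameter models; the raw
# driving data never leaves the vehicles, only these weights do.
updates = [[0.10, 1.0], [0.30, 0.0], [0.20, 0.5]]
samples = [100, 300, 100]
print(fed_avg(updates, samples))  # ≈ [0.24, 0.3]
```

The schemes cited above refine this step, e.g. by aggregating hierarchically at RSUs before the cloud [51] or by weighting heterogeneous models beyond simple sample counts.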
2.3.2 Cooperative Sensing and Heterogeneous Information Fusion in Vehicular Edge Intelligence

Cooperative sensing and heterogeneous information fusion are critical issues in vehicular edge intelligence. Zhao et al. [54] designed a social-aware incentive mechanism based on Proximal Policy Optimization (PPO) to derive the optimal long-term sensing strategy. Dong et al. [55] presented a Deep Q-Network (DQN)-based approach to fuse information obtained from the local downstream environment for reliable lane-change decisions. Mlika et al. [56] proposed a Deep Deterministic Policy Gradient (DDPG)-based solution to minimize the age of information by scheduling resource blocks and broadcast coverage. Xu et al. [57] presented a Multi-Agent Distributed Distributional Deep Deterministic Policy Gradient (MAD4PG) approach to maximize the service ratio by jointly scheduling task offloading and computing resource allocation in vehicular edge computing. He et al. [58] proposed a Multi-Agent Actor-Critic (MAC) algorithm to allocate resources for vehicles with strict delay requirements and minimum bandwidth consumption. Digital Twin (DT) technologies have also been investigated within the vehicular edge intelligence framework [59]. Zhang et al. [60] proposed a coordination graph-driven vehicular task offloading scheme to minimize offloading costs by efficiently integrating service matching exploitation and intelligent offloading scheduling in a vehicular digital twin system. Xu et al. [61] proposed a service offloading method for DT-empowered vehicular edge computing that employs a DQN within deep reinforcement learning to optimize offloading decisions. Zhang et al. [62] presented a social-aware vehicular edge caching mechanism, which dynamically manages the cache capability of RSUs and vehicles based on user preference similarity and service availability. These solutions provide support for
the implementation of digital twin vehicular networks through task offloading and cache management methods. Some researchers have considered information quality, including timeliness and accuracy, in DT-empowered vehicular networks. Liu et al. [63] proposed a scheduling algorithm for temporal data dissemination in Vehicular Cyber-Physical Systems (VCPSs), which strikes a balance between real-time data dissemination and timely information sensing. Dai et al. [64] proposed an evolutionary multi-objective algorithm to enhance information quality and improve the data delivery ratio. Rager et al. [65] developed a framework to enhance information quality by modeling random data loads to capture the stochastic nature of real networks. Yoon et al. [66] presented a unified cooperative perception framework to obtain the accurate motion states of vehicles, considering communication loss in vehicular networks and random vehicle motions. These solutions primarily focus on assessing the quality of heterogeneous information fusion, such as timeliness and accuracy, to further support vehicular edge intelligence.
2.3.3 Vehicular Edge Intelligence Empowered ITSs

Vehicular edge intelligence has been widely studied to empower various ITS applications [67]. Zhao et al. [68] proposed a DRL-based algorithm with a mobility detection strategy and a task priority determination scheme to overcome the influence of vehicle mobility and further improve the performance of offloading dependent tasks. By jointly considering multiple stochastic tasks, time-varying wireless channels, and dynamic bandwidth, Ke et al. [69] proposed a DRL-based adaptive computation offloading method to trade off the cost of energy consumption against the cost of data transmission. Li et al. [70] proposed an algorithm based on multi-agent DQN to solve the SDN controller placement problem in IoV, which jointly optimizes processing delay, load balancing, and path reliability. Zhang et al. [71] proposed an edge caching approach in IoV based on multi-agent DRL, where vehicles adaptively make decisions on content caching and access to minimize the delay of content distribution in dynamic environments. Similarly, He et al. [72] proposed a spatial-temporal correlation approach to predict content popularity by analyzing the historical requests of RSUs; they presented a multi-agent DRL-based caching strategy, where each RSU individually chooses caching content to maximize the cumulative reward. Fu et al. [73] proposed a soft actor-critic DRL-based algorithm with maximum entropy theory for providing efficient video streaming services in IoV; vehicle scheduling, bitrate selection, and resource allocation were jointly optimized to maximize the video bitrate while reducing time delay and bitrate variation. Yun et al. [74] proposed a DRL-based scheme to assist Base Stations (BSs) in delivering the desired videos to users in IoV, providing high quality while limiting packet drops, avoiding playback stalls, reducing quality fluctuations, and saving backhaul usage.
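As a toy version of the popularity-driven edge caching surveyed above (e.g. the historical-request prediction in [72], here reduced to an exponential moving average rather than multi-agent DRL; content names and request counts are invented):

```python
from collections import defaultdict

class PopularityCache:
    """Cache the k contents with the highest EWMA request rate — a
    simplified stand-in for learned content-popularity prediction."""
    def __init__(self, capacity, alpha=0.2):
        self.capacity = capacity
        self.alpha = alpha                 # weight of the newest epoch
        self.score = defaultdict(float)    # EWMA request rate per content

    def record_epoch(self, request_counts):
        # Decay every known score, then blend in this epoch's requests.
        for c in set(self.score) | set(request_counts):
            self.score[c] = ((1 - self.alpha) * self.score[c]
                             + self.alpha * request_counts.get(c, 0))

    def contents_to_cache(self):
        ranked = sorted(self.score, key=self.score.get, reverse=True)
        return set(ranked[:self.capacity])

cache = PopularityCache(capacity=2)
cache.record_epoch({"hd-map": 50, "video-a": 30, "weather": 5})
cache.record_epoch({"hd-map": 40, "video-b": 60, "weather": 4})
print(cache.contents_to_cache())  # the two most popular contents
```

The DRL-based schemes in [71, 72] replace this fixed averaging rule with a learned policy that also accounts for vehicle mobility, neighbor caches, and delivery delay.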
References

1. J. Jeong, Y. Shen, S. Jeong, S. Lee, H. Jeong, T. Oh, T. Park, M.U. Ilyas, S.H. Son, D.H. Du, STMAC: Spatio-temporal coordination-based MAC protocol for driving safety in urban vehicular networks. IEEE Trans. Intell. Transp. Syst. 19(5), 1520–1536 (2017)
2. P. Chen, J. Zheng, Y. Wu, A-VeMAC: an adaptive vehicular MAC protocol for vehicular ad hoc networks, in Proceedings of the IEEE International Conference on Communications (ICC'17) (IEEE, 2017), pp. 1–6
3. J. Zhang, Q. Zhang, W. Jia, VC-MAC: a cooperative MAC protocol in vehicular networks. IEEE Trans. Veh. Technol. 58(3), 1561–1571 (2009)
4. Y. Bi, K.-H. Liu, L.X. Cai, X. Shen, H. Zhao, A multi-channel token ring protocol for QoS provisioning in inter-vehicle communications. IEEE Trans. Wirel. Commun. 8(11), 5621–5631 (2009)
5. F. Zeng, R. Zhang, X. Cheng, L. Yang, Channel prediction based scheduling for data dissemination in VANETs. IEEE Commun. Lett. 21(6), 1409–1412 (2017)
6. X. Cheng, L. Yang, X. Shen, D2D for intelligent transportation systems: a feasibility study. IEEE Trans. Intell. Transp. Syst. 16(4), 1784–1793 (2015)
7. X. Lyu, C. Zhang, C. Ren, Y. Hou, Distributed graph-based optimization of multicast data dissemination for internet of vehicles. IEEE Trans. Intell. Transp. Syst. 24(3), 3117–3128 (2022)
8. M. Hadded, P. Muhlethaler, A. Laouiti, R. Zagrouba, L.A. Saidane, TDMA-based MAC protocols for vehicular ad hoc networks: a survey, qualitative analysis, and open research issues. IEEE Commun. Surv. Tutorials 17(4), 2461–2492 (2015)
9. Y.-C. Liu, C. Chen, S. Chakraborty, A software defined network architecture for geobroadcast in VANETs, in Proceedings of the IEEE International Conference on Communications (ICC'15) (2015), pp. 6559–6564
10. Z. He, J. Cao, X. Liu, SDVN: enabling rapid network innovation for heterogeneous vehicular communication. IEEE Netw. 30(4), 10–15 (2016)
11. A. Singh, G.S. Aujla, R.S.
Bali, Intent-based network for data dissemination in software-defined vehicular edge computing. IEEE Trans. Intell. Transp. Syst. 22(8), 5310–5318 (2020)
12. X. Cheng, C. Chen, W. Zhang, Y. Yang, 5G-enabled cooperative intelligent vehicular (5GenCIV) framework: when Benz meets Marconi. IEEE Intell. Syst. 32(3), 53–59 (2017)
13. R. Zhang, R. Lu, X. Cheng, N. Wang, L. Yang, A UAV-enabled data dissemination protocol with proactive caching and file sharing in V2X networks. IEEE Trans. Commun. 69(6), 3930–3942 (2021)
14. J. Bhatia, J. Dave, M. Bhavsar, S. Tanwar, N. Kumar, SDN-enabled adaptive broadcast timer for data dissemination in vehicular ad hoc networks. IEEE Trans. Veh. Technol. 70(8), 8134–8147 (2021)
15. J. Bhatia, P. Kakadia, M. Bhavsar, S. Tanwar, SDN-enabled network coding-based secure data dissemination in VANET environment. IEEE Internet Things J. 7(7), 6078–6087 (2019)
16. K. Liu, J.K.-Y. Ng, J. Wang, V.C. Lee, W. Wu, S.H. Son, Network-coding-assisted data dissemination via cooperative vehicle-to-vehicle/-infrastructure communications. IEEE Trans. Intell. Transp. Syst. 17(6), 1509–1520 (2016)
17. K. Liu, L. Feng, P. Dai, V.C. Lee, S.H. Son, J. Cao, Coding-assisted broadcast scheduling via memetic computing in SDN-based vehicular networks. IEEE Trans. Intell. Transp. Syst. 19(8), 2420–2431 (2017)
18. G.M.N. Ali, M. Noor-A-Rahim, M.A. Rahman, S.K. Samantha, P.H.J. Chong, Y.L. Guan, Efficient real-time coding-assisted heterogeneous data access in vehicular networks. IEEE Internet Things J. 5(5), 3499–3512 (2018)
19. J. Wang, J. Luo, Y. Ran, J. Yang, K. Liu, S. Guo, Towards predictive forwarding strategy in vehicular named data networking. IEEE Trans. Veh. Technol. 72(3), 3751–3763 (2022)
20. C. Bian, T. Zhao, X. Li, W. Yan, Boosting named data networking for data dissemination in urban VANET scenarios. Veh. Commun. 2(4), 195–207 (2015)
21. N. Kumar, J.-H. Lee, Peer-to-peer cooperative caching for data dissemination in urban vehicular communications. IEEE Syst. J. 8(4), 1136–1144 (2013)
22. Z. Yu, J. Hu, G. Min, Z. Zhao, W. Miao, M.S. Hossain, Mobility-aware proactive edge caching for connected vehicles using federated learning. IEEE Trans. Intell. Transp. Syst. 22(8), 5341–5351 (2020)
23. H. Shahzad, T.H. Szymanski, A dynamic programming offloading algorithm using biased randomization, in Proceedings of the IEEE 9th International Conference on Cloud Computing (CLOUD'16) (IEEE, 2016), pp. 960–965
24. A. Ashok, P. Steenkiste, F. Bai, Vehicular cloud computing through dynamic computation offloading. Comput. Commun. 120, 125–137 (2018)
25. Y. Sun, Z. Wu, K. Meng, Y. Zheng, Vehicular task offloading and job scheduling method based on cloud-edge computing. IEEE Trans. Intell. Transp. Syst. 1–12 (2023). https://doi.org/10.1109/TITS.2023.3300437
26. S. Bitam, A. Mellouk, ITS-Cloud: cloud computing for intelligent transportation system, in Proceedings of the 12th IEEE Global Communications Conference (GLOBECOM'12) (IEEE, 2012), pp. 2054–2059
27. Y. Chen, J. Zhao, Y. Wu, J. Huang, X.S. Shen, QoE-aware decentralized task offloading and resource allocation for end-edge-cloud systems: a game-theoretical approach. IEEE Trans. Mob. Comput. 1–17 (2022). https://doi.org/10.1109/TMC.2022.3223119
28. J. Zhang, H. Guo, J. Liu, Y. Zhang, Task offloading in vehicular edge computing networks: a load-balancing solution. IEEE Trans. Veh. Technol. 69(2), 2092–2104 (2019)
29. J. Zhao, X. Sun, Q. Li, X. Ma, Edge caching and computation management for real-time internet of vehicles: an online and distributed approach. IEEE Trans. Intell. Transp. Syst. 22(4), 2183–2197 (2020)
30. X. Yin, J. Liu, X. Cheng, X. Xiong, Large-size data distribution in IoV based on 5G/6G compatible heterogeneous network. IEEE Trans. Intell. Transp. Syst. 23(7), 9840–9852 (2021)
31. E. Li, L. Zeng, Z. Zhou, X.
Chen, Edge AI: on-demand accelerating deep neural network inference via edge computing. IEEE Trans. Wirel. Commun. 19(1), 447–457 (2019)
32. Y. Ye, L. Shi, X. Chu, R.Q. Hu, G. Lu, Resource allocation in backscatter-assisted wireless powered MEC networks with limited MEC computation capacity. IEEE Trans. Wirel. Commun. 21(12), 10678–10694 (2022)
33. H. Ke, J. Wang, L. Deng, Y. Ge, H. Wang, Deep reinforcement learning-based adaptive computation offloading for MEC in heterogeneous vehicular networks. IEEE Trans. Veh. Technol. 69(7), 7916–7929 (2020)
34. M.S. Bute, P. Fan, L. Zhang, F. Abbas, An efficient distributed task offloading scheme for vehicular edge computing networks. IEEE Trans. Veh. Technol. 70(12), 13149–13161 (2021)
35. Y. Sun, X. Guo, S. Zhou, Z. Jiang, X. Liu, Z. Niu, Learning-based task offloading for vehicular cloud computing systems, in Proceedings of the IEEE International Conference on Communications (ICC'18) (IEEE, 2018), pp. 1–7
36. J. Mei, L. Dai, Z. Tong, X. Deng, K. Li, Throughput-aware dynamic task offloading under resource constant for MEC with energy harvesting devices. IEEE Trans. Netw. Serv. Manag. 20(3), 3460–3473 (2023)
37. C. Chen, Y. Zeng, H. Li, Y. Liu, S. Wan, A multihop task offloading decision model in MEC-enabled internet of vehicles. IEEE Internet Things J. 10(4), 3215–3230 (2022)
38. Z. Liu, P. Dai, H. Xing, Z. Yu, W. Zhang, A distributed algorithm for task offloading in vehicular networks with hybrid fog/cloud computing. IEEE Trans. Syst. Man Cybernet.: Syst. 52(7), 4388–4401 (2021)
39. C. Liu, K. Liu, S. Guo, R. Xie, V.C. Lee, S.H. Son, Adaptive offloading for time-critical tasks in heterogeneous internet of vehicles. IEEE Internet Things J. 7(9), 7999–8011 (2020)
40. X. Xu, Q. Jiang, P. Zhang, X. Cao, M.R. Khosravi, L.T. Alex, L. Qi, W. Dou, Game theory for distributed IoV task offloading with fuzzy neural network in edge computing. IEEE Trans. Fuzzy Syst. 30(11), 4593–4604 (2022)
41. Z. Ni, H. Chen, Z. Li, X. Wang, N. Yan, W. Liu, F. Xia, MSCET: a multi-scenario offloading schedule for biomedical data processing and analysis in cloud-edge-terminal collaborative vehicular networks. IEEE/ACM Trans. Comput. Biol. Bioinf. 20(4), 2376–2386 (2021)
42. F. Busacca, G. Faraci, C. Grasso, S. Palazzo, G. Schembra, Designing a multi-layer edge-computing platform for energy-efficient and delay-aware offloading in vehicular networks. Comput. Netw. 198, 108330 (2021)
43. S.A. Kazmi, T.N. Dang, I. Yaqoob, A. Manzoor, R. Hussain, A. Khan, C.S. Hong, K. Salah, A novel contract theory-based incentive mechanism for cooperative task-offloading in electrical vehicular networks. IEEE Trans. Intell. Transp. Syst. 23(7), 8380–8395 (2021)
44. P. Dai, K. Hu, X. Wu, H. Xing, Z. Yu, Asynchronous deep reinforcement learning for data-driven task offloading in MEC-empowered vehicular networks, in Proceedings of the IEEE Conference on Computer Communications (INFOCOM'21) (IEEE, 2021), pp. 1–10
45. B. Yang, X. Cao, K. Xiong, C. Yuen, Y.L. Guan, S. Leng, L. Qian, Z. Han, Edge intelligence for autonomous driving in 6G wireless system: design challenges and solutions. IEEE Wirel. Commun. 28(2), 40–47 (2021)
46. L. Zeng, X. Chen, Z. Zhou, L. Yang, J. Zhang, CoEdge: cooperative DNN inference with adaptive workload partitioning over heterogeneous edge devices. IEEE/ACM Trans. Netw. 29(2), 595–608 (2021)
47. Z. Zhao, K.M. Barijough, A. Gerstlauer, DeepThings: distributed adaptive deep learning inference on resource-constrained IoT edge clusters. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 37(11), 2348–2359 (2018)
48. S. Zhang, S. Zhang, Z. Qian, J. Wu, Y. Jin, S. Lu, DeepSlicing: collaborative and adaptive CNN inference with low latency. IEEE Trans. Parallel Distrib. Syst. 32(9), 2175–2187 (2021)
49. Y.-J. Ku, S. Baidya, S. Dey, Adaptive computation partitioning and offloading in real-time sustainable vehicular edge computing. IEEE Trans. Veh. Technol.
70(12), 13221–13237 (2021) 50. Y. Wu, J. Wu, M. Yao, B. Liu, L. Chen, S.K. Lam, Two-level scheduling algorithms for deep neural network inference in vehicular networks. IEEE Trans. Intell. Transp. Syst. 24(9), 9324– 9343 (2023) 51. X. Zhou, W. Liang, J. She, Z. Yan, K.I.-K. Wang, Two-layer federated learning with heterogeneous model aggregation for 6G supported internet of vehicles. IEEE Trans. Veh. Technol. 70(6), 5308–5317 (2021) 52. A. Hammoud, H. Otrok, A. Mourad, Z. Dziong, On demand fog federations for horizontal federated learning in IoV. IEEE Trans. Netw. Serv. Manag. 19(3), 3062–3075 (2022) 53. H. Xiao, J. Zhao, Q. Pei, J. Feng, L. Liu, W. Shi, Vehicle selection and resource optimization for federated learning in vehicular edge computing. IEEE Trans. Intell. Transp. Syst. 23(8), 11073–11087 (2022) 54. Y. Zhao, C.H. Liu, Social-aware incentive mechanism for vehicular crowdsensing by deep reinforcement learning. IEEE Trans. Intell. Transp. Syst. 22(4), 2314–2325 (2020) 55. J. Dong, S. Chen, Y. Li, P.Y.J. Ha, R. Du, A. Steinfeld, S. Labi, Spatio-weighted information fusion and DRL-based control for connected autonomous vehicles, in Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC’20) (2020), pp. 1–6 56. Z. Mlika, S. Cherkaoui, Deep deterministic policy gradient to minimize the age of information in cellular V2X communications. IEEE Trans. Intell. Transp. Syst. 23(12), 23597–23612 (2022) 57. X. Xu, K. Liu, P. Dai, F. Jin, H. Ren, C. Zhan, S. Guo, Joint task offloading and resource optimization in NOMA-based vehicular edge computing: a game-theoretic DRL approach. J. Syst. Arch. 134, 102780 (2023) 58. Y. He, Y. Wang, F.R. Yu, Q. Lin, J. Li, V.C. Leung, Efficient resource allocation for multibeam satellite-terrestrial vehicular networks: a multi-agent actor-critic method with attention mechanism. IEEE Trans. Intell. Transp. Syst. 23(3), 2727–2738 (2021) 59. B. Fan, Z. Su, Y. Chen, Y. Wu, C. Xu, T.Q.S. 
Quek, Ubiquitous control over heterogeneous vehicles: a digital twin empowered edge AI approach. IEEE Wirel. Commun. 30(1), 166–173 (2023)
24
2 State of the Art
60. K. Zhang, J. Cao, Y. Zhang, Adaptive digital twin and multiagent deep reinforcement learning for vehicular edge computing and networks. IEEE Trans. Ind. Inf. 18(2), 1405–1413 (2022) 61. X. Xu, B. Shen, S. Ding, G. Srivastava, M. Bilal, M.R. Khosravi, V.G. Menon, M.A. Jan, M. Wang, Service offloading with deep Q-network for digital twinning-empowered internet of vehicles in edge computing. IEEE Trans. Ind. Inf. 18(2), 1414–1423 (2022) 62. K. Zhang, J. Cao, S. Maharjan, Y. Zhang, Digital twin empowered content caching in socialaware vehicular edge networks. IEEE Trans. Comput. Soc. Syst. 9(1), 239–251 (2022) 63. K. Liu, V.C.S. Lee, J.K.-Y. Ng, J. Chen, S.H. Son, Temporal data dissemination in vehicular cyber–physical systems. IEEE Trans. Intell. Transp. Syst. 15(6), 2419–2431 (2014) 64. P. Dai, K. Liu, L. Feng, H. Zhang, V.C.S. Lee, S.H. Son, X. Wu, Temporal information services in large-scale vehicular networks through evolutionary multi-objective optimization. IEEE Trans. Intell. Transp. Syst. 20(1), 218–231 (2019) 65. S.T. Rager, E.N. Ciftcioglu, R. Ramanathan, T.F. La Porta, R. Govindan, Scalability and satisfiability of quality-of-information in wireless networks. IEEE-ACM Trans. Netw. 26(1), 398–411 (2017) 66. D.D. Yoon, B. Ayalew, G.G.M. Nawaz Ali, Performance of decentralized cooperative perception in V2V connected traffic. IEEE Trans. Intell. Transp. Syst. 23(7), 6850–6863 (2022) 67. J. Zhang, K.B. Letaief, Mobile edge intelligence and computing for the internet of vehicles. Proc. IEEE 108(2), 246–261 (2020) 68. L. Zhao, E. Zhang, S. Wan, A. Hawbani, A.Y. Al-Dubai, G. Min, A.Y. Zomaya, MESON: a mobility-aware dependent task offloading scheme for urban vehicular edge computing. IEEE Trans. Mob. Comput. 1–15 (2023). https://doi.org/10.1109/TMC.2023.3289611 69. H. Ke, J. Wang, L. Deng, Y. Ge, H. Wang, Deep reinforcement learning-based adaptive computation offloading for MEC in heterogeneous vehicular networks. IEEE Trans. Veh. Technol. 
69(7), 7916–7929 (2020) 70. B. Li, X. Deng, X. Chen, Y. Deng, J. Yin, MEC-based dynamic controller placement in SDIoV: a deep reinforcement learning approach. IEEE Trans. Veh. Technol. 71(9), 10044–10058 (2022) 71. D. Zhang, W. Wang, J. Zhang, T. Zhang, J. Du, C. Yang, Novel edge caching approach based on multi-agent deep reinforcement learning for internet of vehicles. IEEE Trans. Intell. Transp. Syst. 24(8), 8324–8338 (2023) 72. P. He, L. Cao, Y. Cui, R. Wang, D. Wu, Multi-agent caching strategy for spatial-temporal popularity in IoV. IEEE Trans. Veh. Technol. 1–12 (2023). https://doi.org/10.1109/TVT.2023. 3277191 73. F. Fu, Y. Kang, Z. Zhang, F.R. Yu, T. Wu, Soft actor–critic DRL for live transcoding and streaming in vehicular fog-computing-enabled IoV. IEEE Internet Things J. 8(3), 1308–1321 (2021) 74. W.J. Yun, D. Kwon, M. Choi, J. Kim, G. Caire, A.F. Molisch, Quality-aware deep reinforcement learning for streaming in infrastructure-assisted connected vehicles. IEEE Trans. Veh. Technol. 71(2), 2002–2017 (2022)
Part II
Connected IoV: Vehicular Communications and Data Dissemination
Chapter 3
Data Dissemination via I2V/V2V Communications in Software Defined Vehicular Networks
Abstract This chapter presents a study on scheduling for cooperative data dissemination in a hybrid I2V and V2V communication environment. We formulate the novel problem of Cooperative Data Scheduling (CDS). Each vehicle informs the RSU of the list of its current neighboring vehicles and the identifiers of the retrieved and newly requested data items. The RSU then selects sender and receiver vehicles and the corresponding data items for V2V communication, while it simultaneously broadcasts a data item to the vehicles that are instructed to tune in to the I2V channel. The goal is to maximize the number of vehicles that retrieve their requested data. We prove that CDS is NP-hard by constructing a polynomial-time reduction from the Maximum Weighted Independent Set (MWIS) problem. Scheduling decisions are made by transforming CDS to MWIS and using a greedy method to approximately solve MWIS. We build a simulation model based on realistic traffic and communication characteristics and demonstrate the superiority and scalability of the proposed solution.

Keywords Cooperative data dissemination · I2V/V2V communication · Software defined vehicular network
3.1 Introduction

Recent advances in vehicular communications have motivated increasing research interest in emerging ITSs, such as collision avoidance [1], roadway reservation [2], and autonomous intersection management [3], to name but a few. The implementation of these systems imposes strict requirements on efficient data services in IoV [4]. DSRC [5] is a wireless technology intended to support both I2V and V2V communications. In general, DSRC refers to a suite of standards including IEEE 802.11p, the IEEE 1609.1/.2/.3/.4 protocol family, and the SAE J2735 message set dictionary [6]. In DSRC, the RSU is a fixed infrastructure installed along the road to provide data services, while the On-Board Unit (OBU) is mounted on vehicles and enables them to communicate with the RSU and with neighboring vehicles.
Great endeavors have been made by automotive manufacturers, governments, and research universities toward efficient vehicular communications, which significantly boosts the development of innovative ITSs. In industry, the current generation of vehicles is already equipped with devices with certain computation and communication capacities, such as MyFord Touch from the Ford Motor Company, Entune from the Toyota Motor Company, and Mbrace2 from the Mercedes-Benz Company. Meanwhile, the US Department of Transportation (USDOT) is actively collaborating with automotive manufacturers and universities on a variety of advanced ITS projects, including the Connected Vehicle program, the Vehicle Infrastructure Integration (VII) project, and the Berkeley PATH project.

This chapter is dedicated to the scheduling of data dissemination in a hybrid I2V and V2V communication environment. We consider delay-sensitive applications, in which the service has to be completed before the vehicle leaves the RSU's coverage [7]. Application scenarios include intersection control systems [3], speed advisory systems [8], traffic management systems [9], etc. For example, in a traffic management system for emergency situations, an emergency message should be transmitted to the vehicles around the RSU as a file of common interest. The quality of such services highly depends on the efficiency of communications between the RSU and vehicles. In addition, it is desirable to enhance data dissemination performance by further exploiting the capacity of V2V communication. Sharing data items among vehicles has the potential to improve the bandwidth efficiency of the RSU [10], as it may reduce the redundancy of rebroadcasting the same data item via I2V communication. Moreover, by appropriately exploiting spatial reusability, multiple data items can be disseminated via V2V communication simultaneously without interference.
In particular, the main contributions of this chapter are outlined as follows:

• We investigate the potential benefit of exploiting V2V communication to assist RSU-based data services and discuss the challenges of providing efficient data services in such a hybrid I2V/V2V communication environment. To the best of our knowledge, this is the first study on scheduling for cooperative data dissemination in IoV that considers both communication constraints and application requirements.

• We formulate the novel problem of Cooperative Data Scheduling (CDS). Each vehicle can tune in to either the I2V or the V2V channel at a time. In a scheduling period, vehicles on the I2V channel can receive a data item from the RSU, while other vehicles can simultaneously transmit or receive a data item over the V2V channel. Each vehicle maintains a set of requested data items. Only one data item can be transmitted or received by each vehicle in one scheduling period. Furthermore, a vehicle cannot transmit and receive a data item at the same time. Each vehicle has a weight for being served. In our particular implementation, the weight is inversely proportional to the estimated remaining dwell time of the receiver vehicle, which captures the urgency of services. The objective of scheduling is to maximize the weighted gain, which is the summation of the weights of the vehicles served in a scheduling period. We prove that CDS is NP-hard by constructing a polynomial-time reduction from the Maximum Weighted Independent Set (MWIS) problem [11].

• We solve the CDS problem and present its detailed implementation via hybrid I2V and V2V communications. In the proposed online scheduling algorithm, scheduling decisions are computed by the RSU based on a greedy method, which approximately solves the MWIS problem.

• We describe the first application of the SDN concept [12] in IoV. SDN is an emerging paradigm in computing and networking, which separates the control and data communication layers to simplify network management and expedite system evolution. In SDN, the network intelligence is logically centralized in software-based controllers (i.e., the control plane), and the nodes in the network forward data packets based on the decisions made by the controllers. In our centralized implementation, vehicles do not need to maintain any control information; logically centralized control is fully exercised by the RSU.
3.2 System Architecture

Figure 3.1 shows the data dissemination system in the hybrid vehicular communication environment. In accordance with IEEE 1609.4, we consider one control channel and two service channels in the system. Specifically, the control channel is used for disseminating management information, service advertisements, and control messages. One of the service channels is used for I2V data dissemination, while the other is used for V2V data dissemination. We consider single-radio OBUs, as they are commonly adopted in IoV due to both deployment and economic concerns. Therefore, vehicles can tune in to only one of the channels at a time [13].

Fig. 3.1 Data dissemination via I2V/V2V communications in software defined vehicular networks

The time unit adopted in this chapter refers to a scheduling period, which consists of three phases as introduced below. In the first phase, all the vehicles are set to the V2V mode and broadcast their heartbeat messages (i.e., the Basic Safety Message as defined in SAE J2735), so that each vehicle is able to identify the list of its neighboring vehicles. For instance, by measuring the signal-to-noise ratio of the heartbeat messages received from other vehicles, a vehicle can recognize the set of vehicles to and from which it can transmit and receive data items. In the second phase, all the vehicles switch to the I2V mode and communicate with the RSU. Specifically, each vehicle informs the RSU of its updated information, including the list of its current neighbors and the identifiers of the retrieved and newly requested data items. This information is piggybacked onto the Probe Vehicle Message as defined in SAE J2735. Each request is made for only one data item, and the request is satisfied as long as the corresponding data item is retrieved via either I2V or V2V communication. Outstanding requests are kept pending in the service queue. According to a certain algorithm, the scheduling decisions are announced via the control channel (i.e., piggybacked onto the WAVE service advertisements [14]). In the third phase, each vehicle participates in either I2V or V2V communication based on the scheduling decisions. Multiple instances of data dissemination may take place simultaneously in this phase. Specifically, some vehicles are instructed to tune in to the I2V mode and retrieve the data item transmitted from the RSU, while others are instructed to tune in to the V2V mode for data transmission or reception. Note that this chapter considers only one-hop V2V data dissemination.
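The three phases of a scheduling period can be summarized as a control loop at the RSU. The following is a minimal, purely illustrative Python sketch; the message format and all function names are our own assumptions (the real system piggybacks this information onto SAE J2735 messages):

```python
from collections import Counter

def scheduling_period(rsu_queue, reports, compute_schedule):
    """One scheduling period at the RSU.
    Phase 1 happens among vehicles (neighbor discovery via heartbeats);
    the RSU only sees its outcome in the per-vehicle reports of phase 2."""
    # Phase 2: refresh the service queue <Vi, QVi(t), NVi(t)> from vehicle reports.
    for vid, (requests, retrieved, neighbors) in reports.items():
        rsu_queue[vid] = {
            "pending": set(requests) - set(retrieved),
            "cached": set(retrieved),
            "neighbors": set(neighbors),
        }
    # Phase 3: announce the scheduling decision on the control channel.
    return compute_schedule(rsu_queue)

# A trivial placeholder scheduler: broadcast the most-demanded pending item via I2V.
def most_demanded(queue):
    demand = Counter(d for entry in queue.values() for d in entry["pending"])
    return demand.most_common(1)[0][0] if demand else None

queue = {}
reports = {"V3": ({"c"}, set(), {"V1"}), "V5": ({"c"}, set(), set())}
print(scheduling_period(queue, reports, most_demanded))  # c
```

The placeholder scheduler stands in for the actual CDS algorithm developed in the rest of the chapter; only the phase structure is taken from the text.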
To enable collaborative I2V and V2V data dissemination, the algorithm is expected to make the following scheduling decisions. First, the algorithm divides the vehicles into two groups: one group for I2V communication and the other for V2V communication. Second, the algorithm selects one data item to be transmitted from the RSU, so that vehicles in the I2V group can retrieve this data item via the I2V service channel. Third, for vehicles in the V2V group, the algorithm determines a set of sender vehicles and the corresponding data item disseminated by each sender vehicle, so that the neighbors of each sender vehicle may have the chance to retrieve their requested data items via the V2V service channel. The vehicles are assumed to stay in the same neighborhood for a short period of time (i.e., during a scheduling period) [15].

Figure 3.1 illustrates a toy example. Vehicles in the I2V group can retrieve the data item only when they are in the RSU's coverage, which is represented by the dotted ellipse. In this example, three vehicles are designated to the I2V group (i.e., V3, V5, and V7). A data block without shadow represents that the corresponding data item has been requested by the vehicle but has not yet been retrieved. In contrast, a shadowed data block means that the corresponding data item has been retrieved and cached. Accordingly, when a is broadcast from the RSU, V3, V5, and V7 can retrieve it via the I2V service channel. In V2V communication, a double-edged arrow represents that the two vehicles are neighbors. The data item to be disseminated by a sender vehicle has to be cached in advance. For instance, V4 has cached e and d, and it could be selected as a sender for disseminating d with the target receiver V6. Similarly, V2 has cached b, and it could be selected as another sender for disseminating b to serve V1. However, due to the broadcast nature of wireless communications, simultaneously disseminating data items from vehicles in the same or adjacent neighborhoods will lead to data collision [15]. In this example, V1 is in the neighborhood of both V2 and V4. Therefore, data collision happens at V1 when V2 and V4 disseminate data items at the same time. Thus, V1 cannot receive b from V2 because of the interference caused by V4. Note that V3, V5, and V7, being tuned in to the I2V service channel, are not influenced by V2V communication.
3.3 Cooperative Data Scheduling (CDS) Problem

3.3.1 Problem Formulation

3.3.1.1 Notations

For clear exposition, the primary notations used throughout the problem description are summarized in Table 3.1. The database D = {d1, d2, ..., d|D|} consists of |D| data items. The set of vehicles is denoted by V(t) = {V1, V2, ..., V|V(t)|}, where |V(t)| is the total number of vehicles at time t. Depending on the communication mode of vehicles, V(t) is divided into two sets, VI(t) and VV(t), where VI(t) represents the set of vehicles in the I2V mode and VV(t) represents the set of vehicles in the V2V mode. Each vehicle stays in either the I2V or the V2V mode at a time, namely, VI(t) ∩ VV(t) = ∅ and VI(t) ∪ VV(t) = V(t). Each Vi (1 ≤ i ≤ |V(t)|) has a set of requests, denoted by QVi(t) = {qVi^1, qVi^2, ..., qVi^|QVi(t)|}, where |QVi(t)| is the total number of requests submitted by Vi at time t. Each qVi^j (1 ≤ j ≤ |QVi(t)|) corresponds to a data item in the database, and it is satisfied once this data item is retrieved by Vi. According to the service status of requests (i.e., satisfied or not), QVi(t) is divided into two sets, SQVi(t) and PQVi(t), where SQVi(t) represents the set of satisfied requests and PQVi(t) represents the set of pending requests. Then, we have SQVi(t) ∩ PQVi(t) = ∅ and SQVi(t) ∪ PQVi(t) = QVi(t). Since each request qVi^j corresponds to a data item dk, without causing ambiguity, the expression dk ∈ SQVi(t) is adopted to represent that dk is requested by Vi and has been retrieved (i.e., qVi^j has been satisfied). For each Vi in the V2V mode, the set of its neighboring vehicles (i.e., the set of vehicles in the V2V mode and within the V2V communication range of Vi) is denoted by NVi(t), where NVi(t) ⊂ VV(t). The RSU maintains an entry in the service queue for each Vi, which is characterized by a 3-tuple <Vi, QVi(t), NVi(t)>. The values of QVi(t) and NVi(t) are updated in every scheduling period. To facilitate the formulation of CDS, relevant concepts are defined as follows.
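As a concrete illustration, the per-vehicle entry <Vi, QVi(t), NVi(t)> kept in the RSU's service queue can be sketched as a small data structure. This is a minimal Python sketch; the class and field names are our own, not the book's:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleEntry:
    """Service-queue entry <Vi, QVi(t), NVi(t)> maintained by the RSU."""
    vid: str                                     # vehicle identifier Vi
    requests: set = field(default_factory=set)   # QVi(t): all requested data items
    satisfied: set = field(default_factory=set)  # SQVi(t): already retrieved items
    neighbors: set = field(default_factory=set)  # NVi(t): current V2V neighbors

    @property
    def pending(self):
        """PQVi(t) = QVi(t) \\ SQVi(t): requests not yet served."""
        return self.requests - self.satisfied

v1 = VehicleEntry("V1", requests={"a", "c"}, satisfied={"c"}, neighbors={"V2", "V3"})
print(v1.pending)  # {'a'}
```

Deriving PQVi(t) from the other two sets guarantees by construction that SQVi(t) and PQVi(t) are disjoint and partition QVi(t), as required above.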
Table 3.1 Summary of notations

Notation | Description | Note
D | Set of data items in the database | D = {d1, d2, ..., d|D|}
V(t) | Set of vehicles at time t | V(t) = {V1, V2, ..., V|V(t)|}
VI(t) | Set of vehicles in the I2V mode | VI(t) ⊆ V(t)
VV(t) | Set of vehicles in the V2V mode | VI(t) ∩ VV(t) = ∅ and VI(t) ∪ VV(t) = V(t)
QVi(t) | Set of requests submitted by Vi | QVi(t) = {qVi^1, qVi^2, ..., qVi^|QVi(t)|}
qVi^j | jth request of Vi | 1 ≤ j ≤ |QVi(t)| and qVi^j ∈ D
SQVi(t) | Set of satisfied requests of Vi | SQVi(t) ⊆ QVi(t)
PQVi(t) | Set of pending requests of Vi | SQVi(t) ∩ PQVi(t) = ∅ and SQVi(t) ∪ PQVi(t) = QVi(t)
NVi(t) | Set of neighboring vehicles of Vi | NVi(t) ⊂ VV(t)
dI(t) | Data item transmitted from the RSU | dI(t) ∈ D
VRSU(t) | Set of vehicles within the RSU's coverage | VRSU(t) ⊆ V(t)
SV(t) | Set of sender vehicles | SV(t) = {SV1, SV2, ..., SV|SV(t)|} and SV(t) ⊆ VV(t)
d(SVi) | Data item disseminated by SVi | SVi ∈ SV(t)
D(SV(t)) | Set of data items disseminated by sender vehicles | D(SV(t)) = {d(SV1), d(SV2), ..., d(SV|SV(t)|)}
RV(dI(t)) | Set of receiver vehicles for dI(t) | RV(dI(t)) ⊆ VI(t)
RV(d(SVi)) | Set of receiver vehicles for d(SVi) | RV(d(SVi)) ⊆ NSVi(t)
RV(D(SV(t))) | Set of receiver vehicles in V2V communication | RV(D(SV(t))) = ∪_{SVi ∈ SV(t)} RV(d(SVi))
TSI2V(t) | Set of I2V tentative schedules | d̂ ∈ PQVr(t), ∀ Rd̂Vr ∈ TSI2V(t)
TSV2V(t) | Set of V2V tentative schedules | d̂ ∈ SQVs(t) ∧ d̂ ∈ PQVr(t), ∀ Vsd̂Vr ∈ TSV2V(t)
G(t) | Gain of scalability |
WVi(t) | Weight of Vi |
Gw(t) | Weighted gain |
Definitions

In I2V communication, the RSU broadcasts one data item in each scheduling period, which is denoted by dI(t), where dI(t) ∈ D. Denote VRSU(t) as the set of vehicles within the RSU's coverage. Only when Vi ∈ VRSU(t) can it retrieve dI(t) via the I2V service channel. Specifically, the receiver vehicle set in I2V communication is defined as follows.

Definition 3.1 Receiver vehicle set in I2V communication. Given the data item dI(t) transmitted from the RSU, the set of receiver vehicles for dI(t), denoted by RV(dI(t)), consists of any vehicle Vi which satisfies the following conditions: (1) Vi is in the RSU's coverage, (2) Vi is in the I2V mode, and (3) dI(t) is requested by Vi and has not yet been retrieved; that is,

RV(dI(t)) = {Vi | Vi ∈ VRSU(t) ∧ Vi ∈ VI(t) ∧ dI(t) ∈ PQVi(t)}    (3.1)
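Definition 3.1 translates directly into a set comprehension. The following is a minimal Python sketch; the function and variable names are our own illustrations of the formal notation:

```python
def rv_i2v(d_i, rsu_coverage, i2v_mode, pending):
    """Receiver vehicle set RV(dI(t)) per Eq. (3.1): vehicles that are
    (1) within the RSU's coverage, (2) in the I2V mode, and
    (3) still have dI(t) among their pending requests."""
    return {v for v in i2v_mode if v in rsu_coverage and d_i in pending[v]}

# Toy instance in the spirit of Fig. 3.2: the RSU broadcasts c.
pending = {"V1": {"a"}, "V3": {"c"}, "V5": {"c"}}
print(sorted(rv_i2v("c", {"V1", "V3", "V5"}, {"V3", "V5"}, pending)))  # ['V3', 'V5']
```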
In V2V communication, a set of sender vehicles is designated to disseminate data items, which is denoted by SV(t) = {SV1, SV2, ..., SV|SV(t)|}, where |SV(t)| is the number of designated sender vehicles. All sender vehicles are in the V2V mode, that is, SV(t) ⊆ VV(t). The set of data items to be disseminated is denoted by D(SV(t)) = {d(SV1), d(SV2), ..., d(SV|SV(t)|)}, where d(SVi) (1 ≤ i ≤ |SV(t)|) is the data item disseminated by SVi. Note that d(SVi) has to be retrieved by SVi in advance, namely, d(SVi) ∈ SQSVi(t). Due to the broadcast effect, simultaneous data dissemination by multiple sender vehicles may cause data collision at receivers. Specifically, the set of receiver vehicles suffering from data collision is defined as follows.

Definition 3.2 Receiver vehicle set suffering from data collision. Given the set of sender vehicles SV(t), for any Vk in the V2V mode, if Vk is in the neighborhood of both SVi and SVj (SVi, SVj ∈ SV(t)), then data collision happens at Vk. Accordingly, the receiver vehicle set suffering from data collision is represented by {Vk | Vk ∈ VV(t) ∧ Vk ∈ NSVi(t) ∧ Vk ∈ NSVj(t)} (∀ SVi, SVj ∈ SV(t)).

Considering data collision, given a sender vehicle SVi with the transmitted data item d(SVi), the set of receiver vehicles for d(SVi) is defined as follows.

Definition 3.3 Receiver vehicle set for d(SVi). The receiver vehicle set for d(SVi), denoted by RV(d(SVi)), consists of any vehicle Vj which satisfies the following four conditions: (a) Vj is in the neighborhood of SVi, (b) d(SVi) is requested by Vj but has not yet been retrieved, (c) Vj is not in the sender vehicle set, and (d) Vj is not in the neighborhood of any sender vehicle other than SVi; that is,

RV(d(SVi)) = {Vj | Vj ∈ NSVi(t) ∧ d(SVi) ∈ PQVj(t) ∧ Vj ∉ SV(t) ∧ Vj ∉ NSVk(t), ∀ SVk ∈ {SV(t) − SVi}}    (3.2)

The first two conditions are straightforward. The third condition means that a vehicle cannot be the sender and the receiver at the same time. The fourth condition guarantees that no data collision happens at the receiver. On this basis, given the set of sender vehicles SV(t) with the corresponding data items D(SV(t)), the receiver vehicle set in V2V communication is defined as follows.
34
3 Data Dissemination in Software Defined Vehicular Networks
Definition 3.4 Receiver vehicle set in V2V communication. Given .D(SV (t)), the receiver vehicle set in V2V communication, denoted by .RV (D(SV (t))), is the union of receiver vehicle sets for each .d(SVi ) ∈ D(SV (t)); that is, RV (D(SV (t))) =
.
∪
d(SVi )∈D(SV (t))
RV (d(SVi ))
(3.3)
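Definitions 3.2 to 3.4 can be captured in a few lines: a vehicle receives d(SVi) only if it wants the item, is not itself a sender, and hears exactly one sender. A hedged Python sketch (identifiers are our own, not the book's):

```python
def rv_v2v(senders, neighbors, pending):
    """Union receiver set RV(D(SV(t))) per Definitions 3.2-3.4.
    senders:   dict sender vehicle -> the cached data item it disseminates, d(SVi)
    neighbors: dict vehicle -> set of its V2V neighbors, NVi(t)
    pending:   dict vehicle -> set of still-pending data items, PQVi(t)"""
    receivers = set()
    for s, item in senders.items():
        for v in neighbors[s]:
            if v in senders:                       # condition (c): not also a sender
                continue
            if item not in pending.get(v, set()):  # conditions (a)+(b): wants the item
                continue
            # condition (d): no other sender in v's neighborhood (collision-free)
            if any(v in neighbors[o] for o in senders if o != s):
                continue
            receivers.add(v)
    return receivers

# The collision scenario of Sect. 3.2: V1 neighbors both senders V2 and V4,
# so V1 cannot receive b, while V6 still receives d from V4.
senders   = {"V2": "b", "V4": "d"}
neighbors = {"V2": {"V1"}, "V4": {"V1", "V6"}}
pending   = {"V1": {"b"}, "V6": {"d"}}
print(sorted(rv_v2v(senders, neighbors, pending)))  # ['V6']
```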
In view of the dynamic traffic workload and the heavy data service demand, it is imperative to enhance the system scalability via cooperative data dissemination. Therefore, one of the primary objectives is to maximize the total number of vehicles that can be served via either I2V or V2V communication in each scheduling period. To this end, the gain of scalability is defined as follows.

Definition 3.5 Gain of scalability. Given the data item dI(t) transmitted from the RSU, the set of sender vehicles SV(t), and the corresponding set of data items D(SV(t)) in V2V communication, the gain of scalability, denoted by G(t), is the total number of vehicles that can be served via either I2V or V2V communication in a scheduling period, which is computed by

G(t) = |RV(dI(t))| + |RV(D(SV(t)))|    (3.4)

where |RV(dI(t))| is the number of receiver vehicles in I2V communication and |RV(D(SV(t)))| is the number of receiver vehicles in V2V communication.

In practice, based on a specific scheduling objective (to be elaborated in Sect. 3.4), serving different vehicles may have different impacts on the overall system performance. For general purposes, we define the weighted gain as follows.

Definition 3.6 Weighted gain. Denote WVi(t) as the weight of serving Vi at time t. The weighted gain, denoted by Gw(t), is the summation of the weights of the vehicles served in a scheduling period, which is computed by

Gw(t) = Σ_{Vi ∈ RV(dI(t)) ∪ RV(D(SV(t)))} WVi(t)    (3.5)
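Equations (3.4) and (3.5) differ only in whether each served vehicle counts as 1 or as its weight WVi(t). A minimal sketch (names are illustrative):

```python
def weighted_gain(i2v_receivers, v2v_receivers, weight):
    """Weighted gain Gw(t) of Eq. (3.5): the sum of weights over all vehicles
    served via either I2V or V2V communication in one scheduling period."""
    return sum(weight[v] for v in i2v_receivers | v2v_receivers)

# With unit weights, Gw(t) reduces to the gain of scalability G(t) of Eq. (3.4).
weight = {v: 1 for v in ["V2", "V3", "V5", "V6"]}
print(weighted_gain({"V3", "V5"}, {"V2", "V6"}, weight))  # 4
```

The unit-weight value 4 matches the optimal schedule of the example in Sect. 3.3.1.3.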
3.3.1.2 CDS

With the above knowledge, CDS is formulated as follows. Given the database D = {d1, d2, ..., d|D|}, the set of vehicles V(t) = {V1, V2, ..., V|V(t)|}, the set of requests Q(t) = {QV1, QV2, ..., QV|V(t)|}, and the set of weights for serving each vehicle W(t) = {WV1, WV2, ..., WV|V(t)|}, the algorithm makes the following scheduling decisions. First, it divides the vehicles into the I2V and V2V sets, namely, VI(t) and VV(t). Then, a data item dI(t) is selected to broadcast via I2V communication. Meanwhile, a set of sender vehicles SV(t) together with the corresponding set of data items D(SV(t)) is selected for V2V communication. Given D, V, Q, and W, let Λ(D, V, Q, W) be the set of scheduling decisions for VI, VV, dI, SV, and D(SV). CDS is to find an optimal scheduling decision, denoted by (VI, VV, dI, SV, D(SV))*, such that the weighted gain Gw(t) is maximized; that is,

(VI, VV, dI, SV, D(SV))* = arg max_{(VI, VV, dI, SV, D(SV)) ∈ Λ(D, V, Q, W)} Gw(t)    (3.6)

3.3.1.3 Example

Figure 3.2 shows an example of CDS. The vehicle set is V = {V1, V2, ..., V6}, and the requests of each vehicle are represented by data blocks. Specifically, a shadowed data block represents that the corresponding data item has been retrieved. In contrast, a data block without shadow represents an outstanding request. An edge represents the neighborhood relationship between vehicles. Assuming all the vehicles are within the RSU's coverage (i.e., V = VRSU) and the gain of serving each vehicle is 1 (i.e., WVi = 1, i = 1, 2, ..., 6), it is not difficult to observe the following optimal solution: (a) c is scheduled to be transmitted from the RSU (i.e., dI = c); (b) V3 and V5 are set to the I2V mode (i.e., VI = {V3, V5}), while the other four vehicles are set to the V2V mode (i.e., VV = {V1, V2, V4, V6}); and (c) V1 and V4 are designated as sender vehicles (i.e., SV = {V1, V4}) for disseminating a and b, respectively. Accordingly, the set of data items in V2V communication is D(SV) = {a, b}. Given such a schedule, the receiver vehicle sets in I2V and V2V communications are RV(dI) = {V3, V5} and RV(D(SV)) = {V2, V6}, respectively. As shown, all the outstanding requests can be served in this scheduling period, and Gw = |RV(dI)| + |RV(D(SV))| = 4.
Fig. 3.2 An example of the CDS problem
3.3.2 NP-Hardness

We prove that CDS is NP-hard by constructing a polynomial-time reduction from the well-known NP-hard problem MWIS [11]. Before presenting the formal proof, a sketch of the idea is outlined as follows. First, we introduce a set of operations (to be formally defined as "tentative schedules"), which forms the basis of finding an optimal solution of CDS. Second, based on certain constraints on cooperative data dissemination in IoV, we establish a set of rules to identify conflicting operations, such that any pair of conflicting operations cannot coexist in an optimal solution of CDS. Third, we construct an undirected graph G by creating a vertex for each operation and adding an edge between any two conflicting operations. The weight of each vertex is set as the weight of the corresponding operation. With the above mapping, we demonstrate that the optimal schedule of CDS is derived if and only if the MWIS of G is computed. Therefore, CDS is NP-hard.

For clearer exposition, we further illustrate the idea with an example. As shown in Fig. 3.3, an undirected graph G is constructed based on the example shown in Fig. 3.2. The identifier of each vertex represents a viable operation. For instance, the vertex V1cV3 represents that V1 transmits c to V3. Referring to Fig. 3.2, this operation is viable because V1 has cached c, while its neighbor V3 is requesting c; therefore, it has the potential to serve V3. Thus, there is a one-to-one mapping between operations and vertices. An edge between two vertices represents that the two corresponding operations are in conflict with each other. For instance, the edge between V1cV3 and V3aV2 means that the two operations (i.e., V1 transmits c to V3, and V3 transmits a to V2) cannot be scheduled at the same time. This is due to the constraint that V3 cannot be the sender and the receiver simultaneously.

We will define a set of rules to capture all the constraints on cooperative data dissemination, so that every pair of conflicting operations can be identified. In accordance with the assumption in Fig. 3.2 (i.e., WVi = 1), the weight of each vertex is set to 1. Given the constructed G, we can check that the four shadowed vertices shown in Fig. 3.3, namely RcV3, RcV5, V1aV2, and V4bV6, form the MWIS of G. Accordingly, the total weight is 4, which is consistent with the maximum weighted gain derived from Fig. 3.2. The formal description of the above idea is presented as follows:

Fig. 3.3 An example of reduction from MWIS to CDS
1. Tentative schedules (TSs). A TS refers to an operation which has the potential to serve one pending request via either I2V or V2V communication. In this regard, we classify the TSs into two sets, TSI2V(t) and TSV2V(t), where TSI2V(t) is the set of TSs serving requests via I2V communication and TSV2V(t) is the set of TSs serving requests via V2V communication. To facilitate the analysis, a TS in TSI2V(t) is parsed as Rd̂Vr, where R represents the RSU, d̂ represents the data item transmitted from the RSU, and Vr represents the receiver vehicle for d̂. Note that d̂ has to be in the pending request set of Vr (i.e., d̂ ∈ PQVr(t)). Similarly, a TS in TSV2V(t) is parsed as Vsd̂Vr, where Vs represents the sender vehicle, d̂ represents the data item to be disseminated by Vs, and Vr represents the receiver vehicle for d̂. Note that d̂ has to be in the satisfied request set of Vs and, in the meantime, in the pending request set of Vr (i.e., d̂ ∈ SQVs(t) ∧ d̂ ∈ PQVr(t)). As specified, each TS has the potential to serve an outstanding request via either I2V or V2V communication. For instance, as shown in Fig. 3.2, V1aV2 is a TS, which can be interpreted as the potential service of assigning V1 as the sender and V2 as the receiver with respect to the data item a. In contrast, V1aV3 is not a TS, because V3 has already received a, and this schedule cannot serve any outstanding request.

2. Conflicting TSs. Different TSs may be in conflict with each other due to the following constraints on cooperative data dissemination: (a) the RSU can only broadcast one data item at a time; (b) each sender vehicle can only disseminate one data item at a time; (c) a vehicle cannot be both the sender and the receiver at the same time; (d) data collision may happen at receivers; and (e) each vehicle can be in only one of the modes (i.e., I2V or V2V) at a time.
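The TS enumeration in item 1 amounts to scanning each vehicle's cached and pending items. A sketch under the chapter's assumptions, with TSs written as (sender, item, receiver) triples and the RSU as the special sender "R" (all identifiers are our own):

```python
def tentative_schedules(rsu_items, rsu_coverage, neighbors, cached, pending):
    """Enumerate tentative schedules as (sender, item, receiver) triples.
    I2V TS ("R", d, Vr): Vr is in the RSU's coverage and d is pending at Vr.
    V2V TS (Vs, d, Vr):  Vs has cached d and its neighbor Vr still wants d."""
    ts = [("R", d, v)
          for v in rsu_coverage
          for d in pending.get(v, set()) if d in rsu_items]
    for s, items in cached.items():
        for d in items:
            for r in neighbors.get(s, set()):
                if d in pending.get(r, set()):
                    ts.append((s, d, r))
    return ts

# Fragment of the Fig. 3.2 state: V1 has cached a and c; V2 pends a; V3 pends c.
cached    = {"V1": {"a", "c"}}
pending   = {"V2": {"a"}, "V3": {"c"}}
neighbors = {"V1": {"V2", "V3"}}
ts = tentative_schedules({"a", "b", "c"}, {"V2", "V3"}, neighbors, cached, pending)
print(sorted(ts))
# [('R', 'a', 'V2'), ('R', 'c', 'V3'), ('V1', 'a', 'V2'), ('V1', 'c', 'V3')]
```

Note that V1aV3 is correctly absent: a is not pending at V3, so the triple cannot serve an outstanding request, exactly as argued in item 1.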
The corresponding five rules for identifying conflicting TSs are introduced as follows:
(a) If two TSs are both for I2V communication (i.e., R d̂ V_r ∈ TS_{I2V}(t) and R d̂′ V′_r ∈ TS_{I2V}(t)), but they specify different data items to broadcast (i.e., d̂ ≠ d̂′), then R d̂ V_r is in conflict with R d̂′ V′_r, because the RSU can only broadcast one data item at a time. For example, RcV3 and RaV2 are in conflict with each other.
(b) If two TSs are both for V2V communication (i.e., V_s d̂ V_r ∈ TS_{V2V}(t) and V′_s d̂′ V′_r ∈ TS_{V2V}(t)), but they designate the same sender vehicle to disseminate different data items (i.e., V_s = V′_s and d̂ ≠ d̂′), then V_s d̂ V_r is in conflict with V′_s d̂′ V′_r, because each sender vehicle can only disseminate one data item at a time. For example, V1cV3 and V1aV2 are in conflict with each other.
(c) If two TSs are both for V2V communication (i.e., V_s d̂ V_r ∈ TS_{V2V}(t) and V′_s d̂′ V′_r ∈ TS_{V2V}(t)), where one TS designates a vehicle as the sender while the other TS designates the same vehicle as the receiver (i.e., V_s = V′_r or V_r = V′_s), then V_s d̂ V_r is in conflict with V′_s d̂′ V′_r, because a vehicle cannot be both the sender and the receiver at the same time. For example, V1cV3 and V3aV2 are in conflict with each other.
(d) If two TSs are both for V2V communication (i.e., V_s d̂ V_r ∈ TS_{V2V}(t) and V′_s d̂′ V′_r ∈ TS_{V2V}(t)), but a receiver is the neighbor of both senders
(i.e., V_r ∈ N_{V′_s}(t) or V′_r ∈ N_{V_s}(t)), then V_s d̂ V_r is in conflict with V′_s d̂′ V′_r, because a data collision happens at one of the receivers. For example, V1cV3 and V4bV6 are in conflict with each other.
(e) If one TS is for I2V communication (i.e., R d̂ V_r ∈ TS_{I2V}(t)) and the other TS is for V2V communication (i.e., V′_s d̂′ V′_r ∈ TS_{V2V}(t)), but the receiver in I2V communication is the same as either the sender or the receiver in V2V communication (i.e., V_r = V′_s or V_r = V′_r), then R d̂ V_r is in conflict with V′_s d̂′ V′_r, because a vehicle can only be in one of the modes (i.e., I2V or V2V) at a time. For example, RcV3 and V1cV3 are in conflict with each other.
3. Constructing the graph. First, we find the set of TSs by traversing both the retrieved and outstanding data items of each vehicle. Then, the undirected graph G can be constructed by the following procedure.
   (a) Create a vertex for each TS.
   (b) Set the weight of each vertex to the weight of the receiver vehicle in the corresponding TS.
   (c) For any two conflicting TSs, add an edge between the two corresponding vertices.
Apparently, G can be constructed in polynomial time. There is a one-to-one mapping between vertices and TSs. Any two non-adjacent vertices in G represent two TSs that are not in conflict with each other, and hence their weighted gains can be accumulated, which is equivalent to the summation of the weights of the two corresponding vertices. Overall, the maximum weighted gain of CDS is achieved if and only if the MWIS of G is computed. The above proves that CDS is NP-hard.
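To make the five conflict rules and the graph construction concrete, the following Python sketch encodes a TS as a tuple and builds the conflict graph. The tuple layout, function names, and the `neighbors` dictionary are our own illustrative choices, not from the chapter; note that rule (d) is applied only across distinct senders here, so one sender broadcasting the same item to several neighbors is not treated as a conflict (consistent with the examples in the text).

```python
from itertools import combinations

def in_conflict(a, b, neighbors):
    """Check whether two tentative schedules conflict (rules (a)-(e)).

    A TS is ("I2V", "R", item, receiver) or ("V2V", sender, item, receiver).
    neighbors maps a vehicle id to the set of its current neighbors.
    """
    (ka, sa, da, ra), (kb, sb, db, rb) = a, b
    if ka == "I2V" and kb == "I2V":
        return da != db                          # (a) RSU broadcasts one item at a time
    if ka == "V2V" and kb == "V2V":
        if sa == sb:
            return da != db                      # (b) one item per sender
        if sa == rb or ra == sb:                 # (c) sender cannot also be a receiver
            return True
        # (d) a receiver hears both senders -> data collision
        return ra in neighbors.get(sb, set()) or rb in neighbors.get(sa, set())
    # (e) mixed I2V/V2V: the I2V receiver appears in the V2V TS
    i2v, v2v = (a, b) if ka == "I2V" else (b, a)
    return i2v[3] in (v2v[1], v2v[3])

def build_conflict_graph(tss, vehicle_weight, neighbors):
    """Vertices are TSs; vertex weight is the receiver's weight; edges join conflicts."""
    w = {ts: vehicle_weight[ts[3]] for ts in tss}
    adj = {ts: set() for ts in tss}
    for x, y in combinations(tss, 2):
        if in_conflict(x, y, neighbors):
            adj[x].add(y)
            adj[y].add(x)
    return w, adj
```

With the example of Fig. 3.2, `in_conflict(("I2V","R","c","V3"), ("V2V","V1","c","V3"), ...)` is true by rule (e), while two I2V TSs for the same item never conflict.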
3.4 Proposed Algorithm

We propose an online scheduling algorithm, called CDD, for cooperative data dissemination. To enhance the overall system performance of data services, simply maximizing the gain of scalability as defined in Sect. 3.3 in each scheduling period cannot guarantee globally optimal scheduling performance, as it cannot distinguish the urgency of different served vehicles in a particular scheduling period. In view of this, to capture service urgency and improve the overall scheduling performance, it is expected to give a higher priority to vehicles that have a shorter remaining dwell time in the service region. Figure 3.4 illustrates an example to give an insight into the algorithm design. Figure 3.4a shows the scheduling scenario at t1, in which V1, V2, V3, and V4 ask for b, a, a, and b, respectively. Besides, V1 has cached a. The service region is represented by the dotted box. As shown in Fig. 3.4b, V5 and V6 drive into the service region at t2, and they ask for b and a, respectively. Meanwhile, V1 has left. When aiming to maximize the number of served vehicles at t1, the optimal schedule is to set V_I(t1) = {V4}, V_V(t1) = {V1, V2, V3}, d_I(t1) = b, S_V(t1) = {V1}, and D(S_V(t1)) = {a}. With such a schedule, V2 and V3 retrieve a from V1, and V4 retrieves b from the RSU. Nevertheless, V1 cannot retrieve b
Fig. 3.4 Scheduling scenarios. (a) At .t1 , .V1 , .V2 , .V3 , and .V4 are in the service region. (b) At .t2 , .V1 has left the service region, while .V5 and .V6 have arrived
at t1. When V5 and V6 arrive at t2, the optimal schedule in this round is to let V4 disseminate b to V5 via V2V communication, and let the RSU disseminate a to V6 via I2V communication. In this case, V1 cannot be served, since it has left the service region at t2. Although the above schedule maximizes the number of served vehicles in each scheduling period, it does not distinguish the service urgency of different vehicles, which results in the failure to serve V1. In contrast, we consider the following schedule at t1: V_I(t1) = {V1, V4}, d_I(t1) = {b}, and no vehicles participate in V2V communication. Although only two vehicles (i.e., V1 and V4) are served at t1, all the remaining vehicles can be served with the following schedule at t2: V_I(t2) = {V2, V3, V6}, V_V(t2) = {V4, V5}, d_I(t2) = a, S_V(t2) = {V4}, and D(S_V(t2)) = {b}. With such a schedule, V2, V3, and V6 will retrieve a from the RSU, and V5 will retrieve b from V4 (note that V4 has cached b at t1). With the above observations, it is expected to estimate the remaining time slots of a vehicle V_i in the service region, which is computed by Slack_{V_i}(t) = Dis_{V_i}(t) / Vel_{V_i}(t),
where Dis_{V_i}(t) is the current distance to the exit of the service region and Vel_{V_i}(t) is the current velocity of V_i. Note that there could be a feasible schedule only when Slack_{V_i}(t) ≥ 1. To give higher priority to serving more urgent vehicles, the weight of V_i is inversely proportional to its slack time, which is defined as W_{V_i}(t) = 1 / (Slack_{V_i}(t))^α, where α (α > 0) is the tuning parameter to weight the urgency factor. To give an overview, CDD schedules with the following three steps:
• First, CDD examines all the TSs in both TS_{I2V}(t) and TS_{V2V}(t) to find every pair of conflicting TSs and compute the weight of each TS.
• Second, CDD constructs the graph G and transforms CDS to MWIS. Then, it selects a subset of TSs based on a greedy method.
• Third, CDD generates the following outputs: (a) the data item d_I(t) to be transmitted from the RSU, (b) the set of receiver vehicles R_V(d_I(t)) for d_I(t), (c) the set of sender vehicles S_V(t) in V2V communication, (d) the set of data
items D(S_V(t)) for each sender vehicle, and (e) the set of receiver vehicles R_V(D(S_V(t))) in V2V communication.
The details of each step are presented as follows.
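The slack and urgency-weight computations above can be sketched in a few lines of Python. The function names are illustrative (not the chapter's own notation), and the default `alpha=0.04` follows the 4% tuning value reported in Sect. 3.5:

```python
def slack(dis, vel):
    """Remaining dwell time (in scheduling periods) before leaving the region:
    Slack_Vi(t) = Dis_Vi(t) / Vel_Vi(t)."""
    return dis / vel

def urgency_weight(slack_t, alpha=0.04):
    """Weight inversely proportional to slack time: W_Vi(t) = 1 / Slack_Vi(t)^alpha.
    alpha (> 0) tunes how strongly urgency is emphasized."""
    return 1.0 / slack_t ** alpha
```

A vehicle 300 m from the exit moving at 30 m/s has a slack of 10 periods; a vehicle closer to leaving receives a strictly larger weight, so the MWIS step prefers serving it first.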
3.4.1 Identify Conflicting TSs and Compute the Weight

This step consists of four operations. First, CDD determines the set of vehicles V_{RSU}(t) which are in the RSU's coverage. Second, it finds all the TSs in both I2V and V2V communications and constructs TS_{I2V}(t) and TS_{V2V}(t). Third, it identifies every pair of conflicting TSs. Finally, it computes the weight of each TS. The implementation is elaborated as follows. In order to determine V_{RSU}(t), CDD checks each entry <V_i, Q_{V_i}(t), N_{V_i}(t)> maintained in the service queue. Since every vehicle within the RSU's coverage shall update its information in each scheduling period, in case no update is received for an entry <V_i, Q_{V_i}(t), N_{V_i}(t)>, it implies that V_i has left the RSU's coverage. In order to construct TS_{I2V}(t) and TS_{V2V}(t), CDD examines the entry <V_j, Q_{V_j}(t), N_{V_j}(t)> for each V_j ∈ V_{RSU}(t). Specifically, for each pending request of V_j (i.e., ∀q^m_{V_j} ∈ PQ_{V_j}(t)), there is a TS {R q^m_{V_j} V_j} ∈ TS_{I2V}(t), which represents that the RSU disseminates q^m_{V_j} to V_j via I2V communication. On the other hand, for each satisfied request of V_j (i.e., ∀q^m_{V_j} ∈ SQ_{V_j}(t)), if q^m_{V_j} is a pending request of V_k (i.e., q^m_{V_j} ∈ PQ_{V_k}(t)) and V_k is a neighbor of V_j (i.e., V_k ∈ N_{V_j}(t)), then there is a TS {V_j q^m_{V_j} V_k} ∈ TS_{V2V}(t), which represents that V_j disseminates q^m_{V_j} to V_k via V2V communication. Note that V_k may not necessarily be in the RSU's coverage. In order to identify each pair of conflicting TSs, CDD follows the five rules specified in Sect. 3.3.2. Finally, for each V_i ∈ V_{RSU}(t), CDD updates its current velocity Vel_{V_i}(t) and its current distance to the exit Dis_{V_i}(t), so that the remaining dwell time is estimated by Slack_{V_i}(t) = Dis_{V_i}(t) / Vel_{V_i}(t). Given any TS with a receiver vehicle V_r, its weight is computed by W_{V_r}(t) = 1 / (Slack_{V_r}(t))^α.
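The TS enumeration of this step can be sketched as follows, assuming a simplified service-queue entry per vehicle (a pending set, a satisfied set, and a neighbor set; these structures and names are our own illustration of the entries described above):

```python
def find_tentative_schedules(queue, v_rsu):
    """Enumerate tentative schedules from the service queue.

    queue: dict vehicle -> (pending_items, satisfied_items, neighbor_set)
    v_rsu: iterable of vehicles currently inside the RSU's coverage
    """
    ts_i2v, ts_v2v = [], []
    for vj in v_rsu:
        pending, satisfied, nbrs = queue[vj]
        for d in pending:                     # one I2V TS per pending request
            ts_i2v.append(("I2V", "R", d, vj))
        for d in satisfied:                   # vj may relay each cached item
            for vk in nbrs:
                if vk in queue and d in queue[vk][0]:
                    ts_v2v.append(("V2V", vj, d, vk))
    return ts_i2v, ts_v2v
```

Note that a V2V receiver only needs an entry in the queue, not membership in `v_rsu`, matching the remark that V_k may be outside the RSU's coverage.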
3.4.2 Construct G and Select TSs

This step consists of three operations. First, CDD constructs the graph G based on the mapping rules. Second, it approximately solves the maximum weighted independent set problem using a greedy method. Third, it constructs the set of selected TSs. The implementation is elaborated as follows. In order to construct G, CDD creates a vertex v for each TS derived from Step 1, and it sets w(v) to the weight of the corresponding TS. Each pair of identified conflicting TSs is mapped to the corresponding vertices (e.g., v_i and v_j), and
then an edge e_{ij} is added between v_i and v_j. In order to select independent vertices, CDD adopts the greedy method of [16].
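A minimal sketch of such a greedy MWIS heuristic, in the spirit of the degree-and-weight rule of Sakai et al. [16] (this is a generic illustration, not necessarily the exact variant used by CDD):

```python
def greedy_mwis(weights, adj):
    """Greedy maximum weighted independent set heuristic: repeatedly pick the
    vertex maximizing w(v) / (deg(v) + 1) over the remaining graph, then
    delete it and all of its neighbors."""
    remaining = set(weights)
    chosen = []
    while remaining:
        v = max(remaining,
                key=lambda u: weights[u] / (len(adj[u] & remaining) + 1))
        chosen.append(v)
        remaining -= adj[v] | {v}
    return chosen
```

On a graph with vertices a (weight 3), b (weight 1), c (weight 2) and a single edge a-b, the heuristic selects c first (highest ratio) and then a, yielding the optimal independent set {a, c}.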
3.4.3 Generate Outputs and Update Service Queue

This step consists of two operations. First, CDD parses each selected TS to make the scheduling decisions, including the determination of d_I(t), R_V(d_I(t)), S_V(t), D(S_V(t)), and R_V(D(S_V(t))). Second, it updates the service queue by adding entries for newly arrived vehicles and removing entries for departed vehicles. The implementation is elaborated as follows. In order to generate the outputs, CDD parses each TS in TS_{selected}(t). Specifically, for any selected TS belonging to I2V communication (i.e., ∀R d̂ V_r ∈ TS_{selected}(t)), the data item d̂ will be the same, because all the selected R d̂ V_r are not in conflict with one another. Accordingly, d̂ is scheduled to be transmitted from the RSU, which determines d_I(t). Then, the union of V_r over each R d̂ V_r is selected as the set of receiver vehicles in I2V communication, which determines R_V(d_I(t)). On the other hand, for any selected TS belonging to V2V communication (i.e., ∀V_s d̂ V_r ∈ TS_{selected}(t)), the union of V_s over each V_s d̂ V_r forms the set of sender vehicles, so that S_V(t) is determined. Meanwhile, the union of d̂ over each V_s d̂ V_r forms the set of data items to be disseminated via V2V communication, and thus D(S_V(t)) is determined. Last, the union of V_r over each V_s d̂ V_r forms the set of receiver vehicles, so that R_V(D(S_V(t))) is determined. In order to maintain the service queue, the system needs to add an entry for each newly arrived vehicle and remove the entry for each departed vehicle. Note that for a departed vehicle V_i, its entry is removed only when V_i is out of the RSU's coverage (i.e., V_i ∉ V_{RSU}(t)) and V_i is not in the neighborhood of any vehicle within the RSU's coverage (i.e., V_i ∉ N_{V_k}(t), ∀V_k ∈ V_{RSU}(t)). This is because if V_k ∈ V_{RSU}(t) and V_i ∈ N_{V_k}(t), then V_i may still have a chance to retrieve data items from V_k via V2V communication, even though V_i ∉ V_{RSU}(t).
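The output-parsing operation above can be sketched directly from the TS tuples (illustrative names; the non-conflict guarantees of Step 2 justify the single I2V item and the one-item-per-sender mapping):

```python
def generate_outputs(selected):
    """Parse the selected, mutually non-conflicting TSs into scheduling outputs:
    (d_I, R_V(d_I), S_V, D(S_V), R_V(D(S_V)))."""
    d_i, rv_i2v = None, set()
    sv, d_sv, rv_v2v = set(), {}, set()
    for kind, sender, item, receiver in selected:
        if kind == "I2V":
            d_i = item             # identical across all selected I2V TSs (rule (a))
            rv_i2v.add(receiver)
        else:
            sv.add(sender)
            d_sv[sender] = item    # one item per sender (rule (b))
            rv_v2v.add(receiver)
    return d_i, rv_i2v, sv, d_sv, rv_v2v
```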
To demonstrate the scalability of scheduling, we analyze the algorithm complexity. Suppose there are |V| vehicles and the maximum number of requests submitted by a vehicle is a constant |Q|. In Step 1, the upper bound for finding TS_{I2V} is |V| · |PQ|, where |PQ| represents the number of pending requests of a vehicle and |PQ| ≤ |Q|. Meanwhile, the upper bound for finding TS_{V2V} is |V| · |SQ| · (|V| − 1) · |PQ|, where |SQ| represents the number of satisfied requests of a vehicle and |SQ| ≤ |Q|. Note that (|V| − 1) · |PQ| represents the worst case, in which a vehicle is in the neighborhood of all other vehicles and all the neighbors' pending requests have to be checked to find valid TSs. Therefore, the complexity of Step 1 is O(|V|^2). In Step 2, suppose there are n vertices in the constructed graph. According to the greedy method, finding the independent set in the worst case requires checking all the vertices, when all of them are independent of one another, giving a complexity of O(n). On the other hand, according to the mapping principle, we have n = |TS_{I2V}| + |TS_{V2V}|. In addition, since |TS_{I2V}| ≤ |V| · |Q|
and |TS_{V2V}| ≤ |V| · |Q| · (|V| − 1), the complexity of Step 2 is O(|V|^2). In Step 3, |TS_{selected}| vertices have to be checked to generate the outputs. Since |TS_{selected}| ≤ |TS_{I2V}| + |TS_{V2V}|, the complexity is O(|V|^2). To sum up, the complexity of CDD is O(|V|^2), which is reasonable and will not become a hurdle to scheduling scalability.
3.5 Performance Evaluation

3.5.1 Setup

The simulation model is built based on the system architecture described in Sect. 3.2, and it is implemented with CSIM19 [17]. The traffic characteristics are simulated according to Greenshield's model [18], which is widely adopted in simulating macroscopic traffic scenarios [19]. Specifically, the relationship between the vehicle velocity (v) and the traffic density (k) is represented by v = V^f − (V^f / K^j) · k, where V^f is the free flow speed (i.e., the maximum speed limit) and K^j is the jam density (i.e., the density that causes a traffic jam). Three lanes are simulated, and the free flow speeds of the three lanes are set to V^f_1 = 120 km/h, V^f_2 = 100 km/h, and V^f_3 = 80 km/h, respectively. The same jam density K^j is set for each lane, which is 100 vehicles/km. All the vehicles drive in the same direction, and the arrival of vehicles in each lane follows a Poisson process. In order to evaluate the system performance under different traffic workloads, a wide range of vehicle arrival rates are simulated. Given a specific vehicle arrival rate on each lane, the corresponding vehicle velocities and vehicle densities are also collected. Detailed traffic statistics are summarized in Table 3.2. The communication characteristics are simulated based on DSRC. In particular, the radius of the RSU's coverage is set to 300 m, and the V2V communication range is set to 150 m. We do not specify absolute values of the data size and the wireless bandwidth but set the scheduling period to 1 s. The database size is set to 100. Each vehicle may submit a request at a random time when passing through the RSU.
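The linear speed-density relationship of Greenshield's model can be written as a one-line helper (an illustrative sketch; the clipping at zero is our addition for densities at or above the jam density):

```python
def greenshield_velocity(k, v_free, k_jam):
    """Greenshield's model: v = V^f - (V^f / K^j) * k, clipped at zero."""
    return max(0.0, v_free - (v_free / k_jam) * k)
```

For lane 1 (V^f = 120 km/h, K^j = 100 vehicles/km), an empty road gives 120 km/h, half the jam density gives 60 km/h, and the jam density gives 0.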
Table 3.2 Simulation statistics under different traffic scenarios

Traffic   Mean arrival rate (vehicles/h)   Mean velocity (km/h)        Mean density (vehicles/km)
scenario  Lane 1   Lane 2   Lane 3        Lane 1   Lane 2   Lane 3    Lane 1   Lane 2   Lane 3
1         1200     1000     800           104.32   86.83    70.59     13.06    13.17    11.79
2         1600     1400     1200          98.75    81.17    65.03     17.69    18.82    18.71
3         2000     1800     1600          91.31    74.64    56.03     23.89    25.34    29.95
4         2400     2200     2000          83.43    60.99    40.37     30.44    38.96    49.50
5         2800     2600     2400          64.77    39.30    28.14     45.96    60.66    64.80
The total number of submitted requests is uniformly distributed from 1 to 7. The data access pattern follows the Zipf distribution [20] with the parameter θ = 0.7. Specifically, the access probability of a data item d_i is computed by (1/i)^θ / Σ_{j=1}^{|D|} (1/j)^θ, where
|D| is the size of the database. We implement two well-known algorithms for performance comparison. One is FCFS (First Come First Served) [21], which broadcasts data items according to the arrival order of requests. The other is MRF (Most Requested First) [22], which broadcasts the data item with the maximum number of pending requests. Although FCFS and MRF can only be applied to I2V communication, they are the closest solutions for comparison. In addition, as elaborated in the performance analysis, they are also competitive solutions to the stated problem. Note that all the algorithms are compared on the same timeline (i.e., the same time slot of each scheduling period) regardless of whether they can support the hybrid of I2V and V2V communications. The tuning parameter α of CDD is set to 4%, which gives the best performance in the default setting.
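The Zipf access probabilities can be generated as follows (an illustrative sketch; the function name is ours):

```python
def zipf_probabilities(n, theta=0.7):
    """Access probability of the i-th most popular of n items (1-indexed):
    p_i = (1/i)^theta / sum_{j=1..n} (1/j)^theta."""
    raw = [(1.0 / i) ** theta for i in range(1, n + 1)]
    total = sum(raw)
    return [r / total for r in raw]
```

With θ = 0 the distribution degenerates to uniform; larger θ skews more of the probability mass toward the hottest items.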
3.5.2 Metrics

We design the following metrics to quantitatively analyze the algorithm performance:
• Gain of scalability: The total number of vehicles that are served via either I2V or V2V communication in a scheduling period.
• I2V broadcast productivity: Given the number of data items broadcast from the RSU (n_r) and the total number of requests served via I2V communication (n_rs), the I2V broadcast productivity is computed by n_rs / n_r. A high I2V broadcast productivity implies that the algorithm is good at exploiting the broadcast effect and is able to utilize the RSU's bandwidth more efficiently.
• Distribution of gains: This metric partitions the served requests into two sets. One set contains the requests served by the RSU, and the other set contains the requests served by neighboring vehicles. The proportion of each set reflects the contribution of I2V and V2V communications to the overall performance.
• Service ratio: Given the total number of served requests (n_s) and the total number of submitted requests (n) by all vehicles, the service ratio is computed by n_s / n.
• Service delay: The waiting time of a served request, i.e., the duration from the instant when the request is submitted to the time when the corresponding data item is retrieved.
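The ratio-based metrics above reduce to simple computations, sketched here for clarity (function names are ours):

```python
def i2v_broadcast_productivity(n_served_i2v, n_broadcast):
    """Requests served via I2V per data item broadcast by the RSU: n_rs / n_r."""
    return n_served_i2v / n_broadcast

def service_ratio(n_served, n_submitted):
    """Fraction of submitted requests eventually served: n_s / n."""
    return n_served / n_submitted

def mean_service_delay(submit_times, serve_times):
    """Average waiting time over served requests (paired per request)."""
    waits = [done - sub for sub, done in zip(submit_times, serve_times)]
    return sum(waits) / len(waits)
```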
Fig. 3.5 Gain of scalability under different traffic scenarios
3.5.3 Simulation Results

Figure 3.5 shows the gain of scalability of the algorithms under different traffic scenarios. The ID of each traffic scenario (x-axis) corresponds to the index number in Table 3.2. As shown, a larger traffic scenario ID corresponds to a higher vehicle arrival rate on each lane. In other words, the traffic workload is the lightest in scenario 1 and gets heavier in the subsequent scenarios. For FCFS and MRF, which schedule via pure I2V communication, the gain of scalability is the average number of vehicles served via I2V communication in each scheduling period. In contrast, CDD is dedicated to striking a balance between I2V and V2V communications for data services, so that the total number of served vehicles can be maximized. As shown in Fig. 3.5, CDD outperforms the other algorithms significantly, especially in heavier traffic workload scenarios. This result demonstrates the superiority of CDD in enhancing system scalability. Figure 3.6 shows the I2V broadcast productivity of the algorithms under different traffic scenarios. As demonstrated in previous work [23], the effectiveness of the broadcast effect increases by giving preference to scheduling hot data items. MRF always schedules the data item with the most pending requests. Accordingly, it achieves the highest I2V broadcast productivity. CDD also considers the popularity of data items. However, it even ranks behind FCFS, which does not consider data

Fig. 3.6 I2V broadcast productivity under different traffic scenarios
Fig. 3.7 Distribution of gains under different traffic scenarios
popularity in scheduling. This is because CDD considers not only the number of vehicles which can be served via I2V communication but also the cooperation of vehicles in data sharing via V2V communication. Vehicles in the V2V mode cannot retrieve any data item from the RSU at the same time, which disperses the I2V broadcast productivity. The result demonstrates that pure I2V communication algorithms can better exploit the RSU's broadcast effect, and CDD would not be able to add significant benefits to the performance without proper coordination between I2V and V2V data dissemination. Figure 3.7 examines the distribution of gains contributed by I2V and V2V communications for CDD. As observed, when the traffic workload is light, most vehicles are served via I2V data dissemination. This is because the vehicle density on each lane is low (as shown in Table 3.2, only around 13 vehicles/km on each lane in scenario 1). Therefore, with few neighbors for each vehicle, the chance to retrieve an interesting data item via V2V communication is slim. With an increase of the traffic workload, the contribution of V2V data dissemination increases notably. This makes sense due to the following reasons. First, the vehicle density gets higher in a heavier traffic workload environment, resulting in more neighbors for each vehicle. Accordingly, the chance of having common requests among neighboring vehicles is higher. Note that these common requests of different vehicles may be submitted at different times. Therefore, it is not likely to serve all of them via a single I2V broadcast, leaving the remaining requests with a higher possibility of being served via V2V communication. Second, a higher traffic workload also causes a longer dwell time of vehicles in the service region. This gives each vehicle a higher chance to retrieve more requested data items, which in turn creates more opportunities for V2V data sharing.
Last, more requests asking for different data items are submitted when the traffic workload gets heavier, but only one data item can be broadcast from the RSU in each scheduling period. This limits the portion of the contribution via I2V communication. In contrast, by appropriately exploiting spatial reusability, multiple data items can be disseminated via V2V communication simultaneously without conflict, which enhances the portion of the V2V contribution in a heavy traffic scenario. To sum up, CDD is able to strike a balance between I2V and V2V communications for data services.
Fig. 3.8 Service ratio under different traffic scenarios
Figure 3.8 shows the service ratio of the algorithms under different traffic scenarios. As observed, the service ratio of all the algorithms declines to a certain extent when the traffic workload starts to get heavier. When the traffic workload keeps increasing, the service ratio of all the algorithms gets higher again. The reasons are explained as follows. At the beginning (i.e., in scenario 1), although vehicles pass through the service region at fairly high velocities due to the low density, the system can still achieve reasonably good performance due to the small number of total submitted requests. When the vehicle arrival rate starts to increase (i.e., in scenario 2), although the velocity drops accordingly, the increased number of requests dominates the performance, which results in the decline of the service ratio. When the vehicle velocity keeps dropping in a heavier traffic workload environment, the long dwell time of vehicles gradually dominates the performance. Accordingly, the service ratio gets higher. As shown, CDD outperforms the other algorithms significantly in all scenarios. Note that although this work only focuses on data dissemination from a single RSU, it is straightforward to extend the solution and further enhance the system performance when multiple RSUs can cooperate to provide data services. Figure 3.9 shows the service delay of the algorithms under different traffic scenarios. Although CDD performs closely to MRF in light traffic scenarios, it gradually achieves shorter service delays as the traffic workload gets heavier. This is because, as analyzed in Fig. 3.7, the benefit of V2V communication achieved

Fig. 3.9 Service delay under different traffic scenarios
by CDD is more significant in a heavy traffic environment. Furthermore, note that the mean service delay is derived from all the served requests. As demonstrated in Fig. 3.8, CDD serves more requests than both MRF and FCFS in all scenarios, which further demonstrates the superiority of CDD, as it is not trivial to achieve both a shorter service delay and a higher service ratio.
3.6 Conclusion

In this chapter, we presented a data dissemination system based on the hybrid of I2V and V2V communications and discussed the scheduling challenges arising in such an environment. We gave an intensive analysis of both the requirements and the constraints on data dissemination in hybrid vehicular communication environments. On this basis, we gave a formal description of the cooperative data scheduling (CDS) problem. We proved that CDS is NP-hard via a reduction from MWIS. An online scheduling algorithm, CDD, was proposed to enhance the data dissemination performance by best exploiting the synergy between I2V and V2V communications. In particular, CDD makes scheduling decisions by transforming CDS to MWIS and approximately solving MWIS using a greedy method. It enables centralized scheduling at the RSU, which resembles the concept of software defined networking in IoV. We built a simulation model based on realistic traffic and communication characteristics. The simulation results under a wide range of traffic workloads demonstrated the superiority of CDD.
References

1. S.K. Gehrig, F.J. Stein, Collision avoidance for vehicle-following systems. IEEE Trans. Intell. Transp. Syst. 8(2), 233–244 (2007)
2. K. Liu, S.H. Son, V.C. Lee, K. Kapitanova, A token-based admission control and request scheduling in lane reservation systems, in Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (ITSC'11) (IEEE, 2011), pp. 1489–1494
3. J. Lee, B. Park, Development and evaluation of a cooperative vehicle intersection control algorithm under the connected vehicles environment. IEEE Trans. Intell. Transp. Syst. 13(1), 81–90 (2012)
4. W. Kang, K. Kapitanova, S.H. Son, RDDS: a real-time data distribution service for cyber-physical systems. IEEE Trans. Ind. Inform. 8(2), 393–405 (2012)
5. FCC, FCC report and order 06-110. Amendment of the commission's rules regarding dedicated short-range communication services in the 5.850–5.925 GHz band, 20-07-2006
6. Y.L. Morgan, Notes on DSRC & WAVE standards suite: its architecture, design, and characteristics. IEEE Commun. Surv. Tutorials 12(4), 504–518 (2010)
7. K. Liu, V.C. Lee, RSU-based real-time data access in dynamic vehicular networks, in Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems (ITSC'10) (IEEE, 2010), pp. 1051–1056
8. E. Adell, A. Várhelyi, M.d. Fontana, The effects of a driver assistance system for safe speed and safe distance—a real-life field study. Transp. Res. C: Emerg. Technol. 19(1), 145–155 (2011)
9. A. Bermejo, J. Villadangos, J.J. Astrain, A. Cordoba, L. Azpilicueta, U. Garate, F. Falcone, Ontology based road traffic management in emergency situations. Ad Hoc Sensor Wirel. Netw. 20(1–2), 47–69 (2014)
10. K. Liu, J.K.Y. Ng, V.C.S. Lee, S.H. Son, I. Stojmenovic, Cooperative data scheduling in hybrid vehicular ad hoc networks: VANET as a software defined network. IEEE/ACM Trans. Netw. 24, 1759–1773 (2016)
11. D.S. Hochba, Approximation algorithms for NP-hard problems. ACM SIGACT News 28(2), 40–52 (1997)
12. ONF, Software-defined networking: the new norm for networks, in Open Networking Foundation White Paper (2012)
13. T.K. Mak, K.P. Laberteaux, R. Sengupta, A multi-channel VANET providing concurrent safety and commercial services, in Proceedings of the 2nd ACM International Workshop on Vehicular Ad Hoc Networks (VANET '05) (ACM, 2005), pp. 1–9
14. C. Campolo, A. Vinel, A. Molinaro, Y. Koucheryavy, Modeling broadcasting in IEEE 802.11p/WAVE vehicular networks. IEEE Commun. Lett. 15(2), 199–201 (2011)
15. J. Zhang, Q. Zhang, W. Jia, VC-MAC: a cooperative MAC protocol in vehicular networks. IEEE Trans. Veh. Technol. 58(3), 1561–1571 (2009)
16. S. Sakai, M. Togasaki, K. Yamazaki, A note on greedy algorithms for the maximum weighted independent set problem. Discrete Appl. Math. 126(2), 313–322 (2003)
17. H. Schwetman, CSIM19: a powerful tool for building system models, in Proceedings of the 33rd Conference on Winter Simulation (WSC'01) (IEEE, 2001), pp. 250–255
18. C.F. Daganzo, Fundamentals of Transportation and Traffic Operations (1997)
19. P. Edara, D. Teodorović, Model of an advance-booking system for highway trips. Transp. Res. C: Emerg. Technol. 16(1), 36–53 (2008)
20. G. Zipf, Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology (Addison-Wesley Press, Cambridge, 1949)
21. J. Wong, M.H. Ammar, Analysis of broadcast delivery in a videotex system. IEEE Trans. Comput. 100(9), 863–866 (1985)
22. J.W. Wong, Broadcast delivery. Proc. IEEE 76(12), 1566–1577 (1988)
23. J. Chen, V. Lee, K. Liu, G.M.N. Ali, E. Chan, Efficient processing of requests with network coding in on-demand data broadcast environments. Inform. Sci. 232, 27–43 (2013)
Chapter 4
Network Coding-Assisted Data Broadcast in Large-Scale Vehicular Networks
Abstract This chapter studies how to exploit the synergy between vehicular caching and network coding to enhance the bandwidth efficiency of data broadcasting in large-scale vehicular networks. In particular, we consider the scenario where vehicles request a set of information items and can be served via heterogeneous wireless interfaces. We formulate a novel problem of coding-assisted broadcast scheduling (CBS), aiming at maximizing the broadcast efficiency of the limited BS bandwidth by exploring the synergistic effect between vehicular caching and network coding. We prove the NP-hardness of the CBS problem by constructing a polynomial-time reduction from the simultaneous matrix completion problem. To efficiently solve the CBS problem, we employ memetic computing, a nature-inspired computational paradigm for tackling complex problems. Specifically, we propose a memetic algorithm (MA), which consists of a binary vector representation for encoding solutions, a fitness function for solution evaluation, a set of operators for offspring generation, a local search method for solution enhancement, and a repair operator for fixing infeasible solutions. Finally, we build the simulation model and give a comprehensive performance evaluation to demonstrate the superiority of the proposed solution.

Keywords Data broadcast · Network coding · Vehicular caching · Memetic algorithm
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. K. Liu et al., Toward Connected, Cooperative and Intelligent IoV, https://doi.org/10.1007/978-981-99-9647-6_4

4.1 Introduction

Vehicular networks have been envisioned as a promising paradigm for achieving breakthroughs in future intelligent transportation systems by improving road safety, enhancing driving experiences, and providing various value-added services [1]. DSRC is being standardized as a de facto protocol in vehicular networks to support both I2V and V2V communications [2]. Meanwhile, alternative wireless interfaces such as cellular networks, Wi-Fi, and Bluetooth coexist to form a heterogeneous vehicular network. Despite the advances in current wireless communication technologies, it is still challenging to provide efficient data services and optimize the
bandwidth efficiency in vehicular networks, due to their intrinsic characteristics such as the high mobility of vehicles, the sparse distribution of RSUs, diverse vehicle densities, dynamic traffic conditions, application requirements, etc. A great number of studies have investigated RSU-based data services in vehicular networks [3–5]. However, with the ever-increasing data demands of various applications, simply relying on I2V communications for data dissemination within RSUs' coverage cannot support future large-scale systems. In addition, data transmission with such an architecture may suffer from intermittent connections and unpredictable delays due to the short I2V communication range and the sparse deployment of RSUs. Many studies have incorporated V2V communications to assist I2V-based data dissemination and enhance system scalability [6–8]. Nevertheless, these studies cannot fully exploit the heterogeneous wireless communication resources for data services. This chapter considers a heterogeneous vehicular network, where vehicles can retrieve part of their requested data items from RSUs via DSRC when they are passing by; when they are out of the RSUs' coverage, the rest of the services will be completed by the BSs of the cellular network. Accordingly, vehicles may be jointly served via different interfaces. This chapter focuses on the data scheduling at the BSs, with the aims of maximizing bandwidth efficiency and enhancing the overall system performance. Network coding has aroused significant interest in the wireless communication community, and it has shown great potential for improving bandwidth efficiency in vehicular networks [9, 10]. In this chapter, the bitwise exclusive-or (⊕) coding operation is considered in data scheduling at the BSs, based on the knowledge of vehicular cache information.
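The payoff of XOR coding is that one coded broadcast can serve several vehicles at once, provided each receiver has cached all but one of the combined items. A minimal sketch (illustrative function names, not the chapter's notation):

```python
def xor_encode(packets):
    """Combine equal-length packets into one coded packet via bitwise XOR."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

def xor_decode(coded, cached):
    """Recover the single missing packet: XOR the coded packet with every
    cached packet that was combined into it (x ^ x = 0 cancels known items)."""
    return xor_encode([coded] + list(cached))
```

If the BS broadcasts a ⊕ b ⊕ c, a vehicle caching {a, b} decodes c while a vehicle caching {b, c} decodes a from the very same transmission, which is where the bandwidth saving comes from.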
On the other hand, the concept of SDN has shown great potential for facilitating data scheduling, improving resource utilization, and enhancing service management in vehicular networks [11–14]. The core idea of SDN is the separation of the control plane and the data plane. The SDN controller, which has global network knowledge, formulates data broadcasting and forwarding rules for network nodes such as RSUs, BSs, and vehicles via the control plane. As a result, the SDN controller can define the behavior of individual vehicles, RSUs, and BSs by making scheduling decisions based on this global view. In this chapter, an SDN-based service architecture is presented to enable "logically centralized" control in a heterogeneous vehicular network, which forms the basis for incorporating network coding and vehicular caching into scheduling. Inspired by the superiority of memetic computing in solving complex problems [15], this chapter makes a first effort to propose a memetic algorithm for efficient data scheduling at the SDN controller, with the objectives of maximizing bandwidth efficiency and enhancing system scalability. The main contributions of this chapter are outlined as follows:

• We present an SDN-based service architecture for heterogeneous vehicular communication environments, which enables centralized scheduling at the controller with a global view of vehicles, RSUs, and BSs. The implementation, benefits, and challenges of data services in such an architecture are discussed.
• We formulate a novel problem of coding-assisted broadcast scheduling (CBS), which incorporates vehicular caching and network coding into the scheduler at the control plane and aims to maximize the bandwidth efficiency of the BS for data services.

• This is the first known study to propose a dedicated memetic algorithm (MA) for solving the data dissemination problem in vehicular networks. Specifically, based on the characteristics of the formulated CBS problem, the proposed MA consists of a binary vector representation for encoding solutions, a fitness function for solution evaluation, a set of reproduction operators (i.e., parent selection, crossover, and mutation) for solution evolution, as well as a repair operator for fixing infeasible solutions.
4.2 System Architecture

Figure 4.1 shows a typical system architecture of network coding-assisted data broadcasting in SDN-based heterogeneous vehicular networks [11], where RSUs, BSs, vehicles, and other wireless devices are abstracted as switches in conventional SDN, representing the data plane. On the other hand, given specific services, the scheduling decisions, including bandwidth allocation, routing protocols, data scheduling, etc., are exercised by the control plane. Based on the received control messages, switches such as RSUs and BSs operate accordingly. In the concerned service scenario, vehicles request services of common interest, such as parking slots, road conditions, and gas stations, which are jointly provided by heterogeneous wireless communication interfaces such as RSUs and BSs. RSUs are installed sparsely along the roads. Vehicles in an RSU's coverage (denoted by the dotted circle in Fig. 4.1) can retrieve information via DSRC. Nevertheless, due to the limited communication range and intermittent connections
Fig. 4.1 Network coding-assisted data broadcast in large-scale vehicular networks
of RSUs, services may not be completed within an RSU's coverage. Vehicles partially served by RSUs can continue retrieving data items via other communication interfaces with larger coverage, such as cellular networks. Although the capacity of modern cellular networks (e.g., 4G and 5G) has improved significantly, due to the ever-increasing data service demand of various mobile applications, it is still critical to make the best use of the scarce bandwidth of BSs allocated to vehicular information services. Therefore, it is desirable to maximize the bandwidth efficiency of BSs for data services in heterogeneous vehicular communication environments. To this end, we present an SDN-based service architecture to enable "logically centralized" control and facilitate data scheduling in such a system. Detailed service procedures are presented as follows. Vehicles periodically update their status to the controller via the cellular network interface, including positions, cached contents, kinematic information, etc. Vehicles in an RSU's coverage are instructed to switch to the DSRC interface and communicate with the RSU, so that the RSU is aware of the set of vehicles in its coverage as well as their service status. RSUs can either follow the control instructions from the control plane or, alternatively, exercise certain intelligence and make part of the scheduling decisions individually. At this early stage of exploring the data dissemination problem in SDN-based vehicular networks, this part does not consider tight cooperation between RSUs and BSs on information services. In other words, any existing scheduling algorithm proposed for RSUs, such as [4, 16], can be incorporated into the presented system architecture. In any case, vehicles will be able to cache part of their requested data items via the RSUs' services.
Accordingly, when vehicles are not in the coverage of RSUs, they are instructed by the controller to switch to the cellular network interface to complete the service. Finally, with the global information on vehicle service status, the control plane makes scheduling decisions and informs the BS via control messages, so that the BS provides data services accordingly via the cellular network interface. With the above SDN-based service architecture, the system aims at maximizing the overall system performance by implementing an efficient scheduling policy at the control plane. In particular, the bitwise exclusive-or (⊕) coding operation is adopted at the BS for data broadcast due to its trivial implementation overhead. For example, given an encoded packet p = d1 ⊕ d2, decoding a data item (say d1) from p requires the remaining data items in p (say d2), by computing d1 = p ⊕ d2. Meanwhile, different from traditional wireless sensor nodes with very limited cache size, vehicles can support large storage, so the cache size is not a hurdle to system performance. In view of this, we consider that vehicles are allowed to cache encoded packets even if these packets cannot be decoded immediately. Figure 4.1 shows an example that illustrates the benefit of caching encoded packets and, meanwhile, reveals the challenges in designing an efficient scheduling policy. Consider four vehicles V1, V2, V3, and V4, which are out of the RSUs' coverage. The set of data items is {d1, d2, d3, d4}. The currently cached packets are d4 and d1 ⊕ d3 for V1; d2 and d1 ⊕ d3 for V2; d4 and d2 ⊕ d3 ⊕ d4 for V3; and d1 and d2 for V4. To maximize the bandwidth efficiency of the BS, it is expected to complete
the service to all the vehicles with the minimum number of broadcast transmissions. Consider the duration for broadcasting a data item (or an encoded packet) as one time unit. In this example, at least two time units are required to complete the service, by broadcasting d3 and d1 ⊕ d2 ⊕ d4, respectively. Taking V1 as an example, when d3 is broadcast, it can decode d1 from d1 ⊕ d3. Then, with d1 and d4, it can further decode d2 from d1 ⊕ d2 ⊕ d4, and hence the service is completed for V1. Similar decoding procedures apply to the other vehicles. In contrast, if vehicles are not allowed to cache encoded packets, namely, V1, V2, and V3 only cache d4, d2, and d4, respectively, and V4 caches d1 and d2, then one of the best solutions is to broadcast d2 ⊕ d4, d3, and d1 in sequence to serve all the vehicles, which requires three time units.
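The two-transmission example above can be checked with a few lines of Python. The sketch below models data items as equal-length byte strings and ⊕ as a bytewise XOR; all names are ours and chosen for illustration only:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise exclusive-or of two equal-length payloads."""
    return bytes(x ^ y for x, y in zip(a, b))

# Four equal-size data items (random payloads, for illustration only).
d1, d2, d3, d4 = (os.urandom(8) for _ in range(4))

# V1's cache: d4 and the encoded packet d1 ⊕ d3.
cached_d4, cached_d1_d3 = d4, xor(d1, d3)

# The BS broadcasts d3 and then d1 ⊕ d2 ⊕ d4 (two time units in total).
p1, p2 = d3, xor(xor(d1, d2), d4)

# Step 1: with p1 = d3, V1 decodes d1 from the cached d1 ⊕ d3.
dec_d1 = xor(cached_d1_d3, p1)
assert dec_d1 == d1

# Step 2: with d1 and d4, V1 decodes d2 from d1 ⊕ d2 ⊕ d4.
dec_d2 = xor(xor(p2, dec_d1), cached_d4)
assert dec_d2 == d2      # V1 now holds all four data items
```

The same two broadcasts serve V2, V3, and V4 analogously, each vehicle XOR-ing the received packets against its own cache.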
4.3 Coding-Assisted Broadcast Scheduling (CBS) Problem

4.3.1 Problem Analysis

Denote the set of vehicles as V = {V_1, V_2, ..., V_{|V|}}, where |V| is the total number of vehicles. The requested data set is denoted by D = {d_1, d_2, ..., d_{|D|}}, where |D| is the total number of data items. An encoded packet is denoted by p, which corresponds to a |D|-dimension coefficient vector a(p) = [a(p)_1 a(p)_2 ... a(p)_{|D|}], where a(p)_i ∈ {0, 1} (1 ≤ i ≤ |D|). Specifically, if d_i is encoded in p, then a(p)_i is set to 1; otherwise, a(p)_i is set to 0. Note that a non-encoded data item (e.g., d_j) can be represented as a special case of an encoded packet, where a(p)_j = 1 and a(p)_k = 0 for k = 1, 2, ..., |D|, k ≠ j. In the rest of this chapter, the term "packet" refers to either encoded or non-encoded data items without causing ambiguity. Given a vehicle V_m (1 ≤ m ≤ |V|), the set of its cached packets is denoted by C(V_m) = {p_1^m, p_2^m, ..., p_{|C(V_m)|}^m}, where |C(V_m)| is the cardinality of the set C(V_m). Accordingly, the corresponding coefficient vectors for V_m are a(p_1^m), a(p_2^m), ..., a(p_{|C(V_m)|}^m). The primary notations are summarized in Table 4.1.

With the above knowledge, the coding-assisted broadcast scheduling (CBS) problem is described as follows. Given the set of vehicles V = {V_1, V_2, ..., V_{|V|}}, the set of data items D = {d_1, d_2, ..., d_{|D|}}, and the set of cached packets C(V_m) of each vehicle (V_m ∈ V), the control plane targets maximizing the bandwidth efficiency of the BS by determining the set of encoded packets P that completes the service to all the vehicles within the minimum number of time slots.
4.3.2 NP-Hardness

We prove the NP-hardness of CBS by a polynomial-time reduction from the simultaneous matrix completion problem, a well-known NP-hard
Table 4.1 Summary of Notations

  V              The set of vehicles, V = {V_1, V_2, ..., V_{|V|}}
  D              The set of requested data items, D = {d_1, d_2, ..., d_{|D|}}
  p_k^m          The kth cached packet of vehicle V_m, where V_m ∈ V
  a(p_k^m)       Coefficient vector of p_k^m, a(p_k^m) = [a(p_k^m)_1 a(p_k^m)_2 ... a(p_k^m)_{|D|}]
  C(V_m)         The set of packets cached by vehicle V_m, C(V_m) = {p_1^m, p_2^m, ..., p_{|C(V_m)|}^m}
  A(V_i)         A |C'(V_i)| × |D| coefficient matrix constructed for V_i, represented by
                 [a'(p_1^i) a'(p_2^i) ... a'(p_{|C'(V_i)|}^i)]^T
  T              The maximum scheduling period
  N              The total number of |D|-dimension coefficient vectors initialized for the solution search
  χ_m            A feasible solution, an N-dimension binary vector satisfying 0 < Σ_{i=1}^{N} χ_m[i] ≤ T
  M              The number of feasible solutions in the population
  P_0            The initialized population, P_0 = {χ_1, χ_2, ..., χ_M}
  r_j (r'_j)     The rank of the matrix A(V_j) (A'(V_j)) constructed for vehicle V_j
  f              The fitness function, f = (Σ_{j=1}^{|V|} (r'_j − r_j)/|V|)/k_m, where k_m is the
                 number of scheduled packets in solution χ_m
problem [17]. The general concept of the simultaneous matrix completion problem is as follows. Given a set of matrices, each matrix contains a mixture of numbers and variables. Each particular variable can appear only once per matrix but may appear in several matrices. The objective is to find values for these variables such that all resulting matrices simultaneously have full rank. It has been shown in [17] that the simultaneous matrix completion problem over GF(2) is NP-complete. For a clear exposition, before giving the formal proof, we illustrate a sketch of the idea with an example. Suppose D = {d1, d2, d3, d4}, and there are two vehicles V1 and V2 with cache sets C(V1) = {d4, d1 ⊕ d3, d1 ⊕ d3 ⊕ d4} and C(V2) = {d2, d1 ⊕ d3}. For V_i, the number of cached packets is denoted by |C(V_i)|, and each cached packet is represented by a |D|-dimension coefficient vector. Then, we can construct a |C(V_i)| × |D| matrix for V_i, representing its cached contents. For example, the constructed matrix for V1 is

  [0 0 0 1]
  [1 0 1 0]
  [1 0 1 1]

In this example, we observe that the packet p_3^1 (i.e., d1 ⊕ d3 ⊕ d4) in C(V1) can be derived from the other two packets p_1^1 and p_2^1 in C(V1) by computing p_1^1 ⊕ p_2^1. It indicates that, given p_1^1 and p_2^1, p_3^1 makes no further contribution to decoding any new data item for V1. Therefore, it is expected to find the set of cached packets that are independent of each other. To this end, we further transform the matrix to its reduced row echelon form by Gaussian elimination. In
this example, the reduced row echelon form of the constructed matrix is

  [1 0 1 0]
  [0 0 0 1]
The transformed matrix for V_i is denoted by A(V_i), which is a |C'(V_i)| × |D| matrix represented by [a'(p_1^i) a'(p_2^i) ... a'(p_{|C'(V_i)|}^i)]^T, where |C'(V_i)| is the number of independent vectors and |C'(V_i)| ≤ |C(V_i)|. In addition, since the rows of A(V_i) are independent, we have |C'(V_i)| ≤ |D|. In particular, if |C'(V_i)| = |D|, then A(V_i) is a full-rank matrix (it is an identity matrix in the reduced row echelon form), and all the data items can be decoded from C(V_i). Otherwise (i.e., |C'(V_i)| < |D|), at least |D| − |C'(V_i)| packets are required to serve V_i completely.

By obtaining

  A(V1) = [1 0 1 0]        A(V2) = [1 0 1 0]
          [0 0 0 1]                [0 1 0 0]

we may observe that at least two packets (say p_1* and p_2*) are required to serve both V1 and V2. Suppose we could find two such packets. By appending the two corresponding coefficient vectors a(p_1*) and a(p_2*) to both A(V1) and A(V2), the two matrices [a'(p_1^1) a'(p_2^1) a(p_1*) a(p_2*)]^T and [a'(p_1^2) a'(p_2^2) a(p_1*) a(p_2*)]^T should be full rank. In this example, we can check that by setting a(p_1*) = [0 0 1 0] and a(p_2*) = [0 1 0 1], the two matrices become full rank. Therefore, both V1 and V2 will be completely served.

Theorem 4.1 The coding-assisted broadcast scheduling problem is NP-hard.

Proof Given V, D, and C(V_i) (V_i ∈ V), we construct a set of coefficient matrices A = {A(V_1), A(V_2), ..., A(V_{|V|})}, where A(V_i) is represented by [a'(p_1^i) ... a'(p_{|C'(V_i)|}^i)]^T, the reduced row echelon form of the matrix [a(p_1^i) a(p_2^i) ... a(p_{|C(V_i)|}^i)]^T. Accordingly, there are |C'(V_i)| independent packets cached by V_i. On the other hand, as a vehicle can decode at most one data item by receiving a packet, at least another |D| − |C'(V_i)| packets are required to serve V_i (i.e., to decode |D| data items). Further, considering the service to all the vehicles, denote L = min(|C'(V_i)|) (∀V_i ∈ V); then at least |D| − L packets are required to complete the service. Therefore, the CBS problem is to schedule the |D| − L packets to serve all the vehicles and maximize the bandwidth efficiency. Denote the set of scheduled packets as P* = {p_1*, p_2*, ..., p_{|D|−L}*}, where the coefficient vector of p_j* is denoted by a(p_j*) (p_j* ∈ P*). Then, by appending the set of coefficient vectors a(p_j*) (∀p_j* ∈ P*) to each A(V_i) ∈ A, we obtain a new set of matrices A* = {A*(V_1), A*(V_2), ..., A*(V_{|V|})}, where A*(V_i) = [a'(p_1^i) ... a'(p_{|C'(V_i)|}^i) a(p_1*) ... a(p_{|D|−L}*)]^T. Since V_i can be served if and only if A*(V_i) can be transformed to an identity matrix by Gaussian elimination (i.e., A*(V_i) is full rank), the CBS problem is equivalent to the determination of the set of vectors a(p_1*), a(p_2*), ..., a(p_{|D|−L}*) over GF(2) such that all the matrices in A* are simultaneously full rank, which is exactly the simultaneous matrix completion problem over GF(2).
The above proves that the CBS problem is NP-hard.
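The rank arguments used in the example and the proof can be reproduced with a small Gaussian-elimination routine over GF(2). This is a sketch for illustration; `gf2_rank` and the variable names are ours:

```python
def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2), via row reduction to echelon form."""
    m = [row[:] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(rank, n_rows) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # XOR the pivot row into every other row carrying a 1 in this column.
        for r in range(n_rows):
            if r != rank and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# C(V1) = {d4, d1⊕d3, d1⊕d3⊕d4} and C(V2) = {d2, d1⊕d3} as coefficient rows.
A1 = [[0, 0, 0, 1], [1, 0, 1, 0], [1, 0, 1, 1]]
A2 = [[0, 1, 0, 0], [1, 0, 1, 0]]
assert gf2_rank(A1) == 2 and gf2_rank(A2) == 2

# Appending a(p1*) = [0 0 1 0] and a(p2*) = [0 1 0 1] completes both matrices.
extra = [[0, 0, 1, 0], [0, 1, 0, 1]]
assert gf2_rank(A1 + extra) == 4 and gf2_rank(A2 + extra) == 4
```

The two assertions mirror the example: each vehicle starts with rank 2, and the two chosen packets bring both matrices to full rank 4 simultaneously.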
The above analysis shows that finding the optimal solution P* = {p_1*, p_2*, ..., p_{|D|−L}*} is equivalent to determining the set of coefficient vectors a(p_1*), a(p_2*), ..., a(p_{|D|−L}*) such that the matrices A*(V_i) = [a'(p_1^i) ... a(p_{|D|−L}*)]^T (∀V_i ∈ V) are simultaneously full rank. As a(p) is a |D|-dimension vector over GF(2) (i.e., a(p) = [a(p)_1 a(p)_2 ... a(p)_{|D|}], and a(p)_i ∈ {0, 1}), there are in total 2^{|D|} candidate coefficient vectors. Then, there are C_{2^{|D|}}^{|D|−L} combinations for selecting |D| − L coefficient vectors out of the whole set. For the ith combination (1 ≤ i ≤ C_{2^{|D|}}^{|D|−L}), denote the corresponding set of packets as P^i = {p_1^i, p_2^i, ..., p_{|D|−L}^i}, and denote P = {P^1, P^2, ..., P^{C_{2^{|D|}}^{|D|−L}}}. Then, the optimal solution P* is represented by

  P* = {P^k | P^k ∈ P and C_fullrank}        (4.1)

where C_fullrank stands for the condition that the corresponding set of coefficient vectors of P^k (i.e., a(p_1^k), a(p_2^k), ..., a(p_{|D|−L}^k)) results in the simultaneous matrix completion for the set of matrices A^k(V_i) = [a'(p_1^i) ... a'(p_{|C'(V_i)|}^i) a(p_1^k) ... a(p_{|D|−L}^k)]^T, ∀V_i ∈ V.
4.4 Proposed Algorithm

With the fast growth of learning and optimization techniques [18], it is promising to develop dedicated intelligent algorithms for solving complex data dissemination problems in vehicular networks. Memetic computing is a recently proposed paradigm that has been successfully applied to many real-world problems, such as permutation flow shop scheduling, the quadratic assignment problem, and feature selection [15]. In this section, we propose a memetic algorithm (MA) to solve the CBS problem. The general idea of the proposed MA is outlined as follows. First, MA generates a population of solutions, each representing a set of selected packets in a scheduling period. Subsequently, based on the designed fitness function, parent selection is performed to identify the solutions that undergo crossover and mutation for offspring generation. Further, a local search process kicks in to refine the solutions. A repair operator is designed to fix infeasible solutions in due course. Afterward, the fitter solutions among both the parent solutions and the generated offspring survive to the next generation through the population replacement process. The above procedures repeat until the predefined stopping criterion is satisfied. Detailed steps of MA are presented as follows, and the pseudocode is shown in Algorithm 4.1.
Algorithm 4.1: Memetic Algorithm (MA)

Input: The set of cached packets C(V_m), m = 1, 2, ..., |V|; the population size M; the maximum scheduling period T
Output: The solution with the highest fitness value

    // Initialization: initialize the population P_0 = {χ_1, χ_2, ..., χ_M}
 1: for m = 1 to M do
 2:   for i = 1 to N do
 3:     Randomly determine χ_m[i] as 0 or 1
 4:     if Σ_{k=1}^{i} χ_m[k] ≥ T then
 5:       break
 6:     end if
 7:   end for
 8: end for
 9: Compute the fitness value f_{χ_m} of each χ_m ∈ P_0 according to Eq. (4.2)
10: while the stopping criterion is not satisfied do
11:   for m = 1 to M/2 do
        // Offspring generation: generate the new population
12:     Randomly select k individuals from the population
13:     Sort the k individuals in descending order of their fitness values f
14:     Select the top-2 individuals χ_l and χ_n as parent solutions
15:     for i = 1 to N do
16:       Randomly assign χ_m[i] ← χ_l[i] or χ_m[i] ← χ_n[i]
17:       χ_{m+1}[i] ← χ_l[i] + χ_n[i] − χ_m[i]
18:     end for
19:     for each child χ_j ∈ {χ_m, χ_{m+1}} do
20:       for i = 1 to N do
21:         Randomly generate rand in [0, 1]
22:         if rand < ρ then
23:           χ_j[i] ← 1 − χ_j[i]
24:         end if
25:       end for
26:       if Σ_{r=1}^{N} χ_j[r] > T then
27:         Apply the repair operator to χ_j
28:       end if
29:     end for
        // Local search:
30:     for each child χ_j ∈ {χ_m, χ_{m+1}} do
31:       for i = 1 to N do
32:         tmp ← χ_j
33:         tmp[i] ← 1 − χ_j[i]
34:         if Σ_{r=1}^{N} tmp[r] > T then
35:           Apply the repair operator to tmp
36:         end if
37:         Compute the fitness value f_tmp of tmp
38:         if f_tmp > f_{χ_j} then
39:           χ_j ← tmp
40:         end if
41:       end for
42:     end for
43:   end for
      // Population replacement:
44:   Sort the individuals in the union of the parent and offspring populations according to f
45:   Choose the top-M individuals as the new population
46: end while
47: Output the best individual in the population
Fig. 4.2 Representation of a feasible solution
4.4.1 Initialization

Denote a solution by χ, represented by an N-dimension binary vector, where N is the total number of randomly generated coefficient vectors of dimension |D|. Specifically, we set χ[i] = 1 if the corresponding coefficient vector a_i (1 ≤ i ≤ N) is selected; otherwise, χ[i] = 0. Given the maximum scheduling period T, at most T packets can be scheduled at a time. Accordingly, a feasible solution should satisfy 0 < Σ_{i=1}^{N} χ[i] ≤ T. Figure 4.2 shows the representation of a feasible solution. With this binary representation, the population initialization is completed by randomly generating M feasible solutions, denoted by P_0 = {χ_1, χ_2, ..., χ_M}. The procedures of population initialization are shown in lines 1∼9 of Algorithm 4.1.
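The initialization loop can be sketched as follows. `random_feasible_solution` is our name, and the final guard ensuring at least one selected packet is an assumption we add to enforce the feasibility condition 0 < Σχ[i] ≤ T:

```python
import random

def random_feasible_solution(n, t):
    """Generate a binary N-vector with 0 < sum(chi) <= T.

    Mirrors the initialization loop of Algorithm 4.1; the nonzero guard
    at the end is our addition to guarantee feasibility."""
    chi = [0] * n
    ones = 0
    for i in range(n):
        chi[i] = random.randint(0, 1)
        ones += chi[i]
        if ones >= t:                # stop once T packets are selected
            break
    if ones == 0:                    # enforce at least one selected packet
        chi[random.randrange(n)] = 1
    return chi

# Default parameters from the evaluation: N = 30, T = 5, M = 100.
population = [random_feasible_solution(30, 5) for _ in range(100)]
assert all(0 < sum(chi) <= 5 for chi in population)
```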
4.4.2 Fitness Function

Recall that the objective of the CBS problem is to maximize the bandwidth efficiency by finding the minimum set of packets that completes the service. With this in mind, we design the following function to evaluate the fitness of a solution. Given a solution χ_m (1 ≤ m ≤ M), if χ_m[i] = 1 (1 ≤ i ≤ N), then we attach the vector a_i to the coefficient matrix of each vehicle. The total number of attached vectors is denoted by k_m, where k_m = Σ_{i=1}^{N} χ_m[i]. For vehicle V_j, denote its original and newly obtained matrices as A(V_j) and A'(V_j), respectively, and their corresponding ranks by r_j and r'_j. Then, the fitness function f of a solution is defined as the average matrix rank improvement contributed by each packet in the solution, which is computed by

  f = (Σ_{j=1}^{|V|} (r'_j − r_j) / |V|) / k_m        (4.2)
As defined, if two solutions contain the same number of coefficient vectors (i.e., the same number of packets is scheduled), then the one improving the average matrix rank more is preferred, because more data items can be retrieved by vehicles with the same bandwidth consumption. On the other hand, if two
solutions improve the average matrix rank by the same amount (i.e., the same number of data items can be retrieved by vehicles), then the one containing fewer coefficient vectors is preferred, because it achieves the same portion of the service with less bandwidth consumption.
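Equation (4.2) can be sketched as follows, with coefficient vectors packed into integer bitmasks so that the GF(2) rank is computed cheaply. The function names and data layout are ours; the example reuses the cache sets of V1 and V2 from Sect. 4.3:

```python
def rank_gf2(vectors):
    """Rank over GF(2); each vector is an int bitmask over the |D| items."""
    pivots = {}                       # leading-bit position -> basis vector
    for v in vectors:
        while v:
            top = v.bit_length() - 1
            if top not in pivots:
                pivots[top] = v
                break
            v ^= pivots[top]          # eliminate the leading bit
    return len(pivots)

def fitness(chi, coeff_pool, caches):
    """Eq. (4.2): average matrix-rank improvement per scheduled packet."""
    scheduled = [coeff_pool[i] for i, bit in enumerate(chi) if bit]
    k_m = len(scheduled)
    if k_m == 0:
        return 0.0                    # infeasible: nothing scheduled
    gain = sum(rank_gf2(list(c) + scheduled) - rank_gf2(c) for c in caches)
    return (gain / len(caches)) / k_m

# Running example, with bit 3 = d1, bit 2 = d2, bit 1 = d3, bit 0 = d4.
caches = [[0b0001, 0b1010],           # V1: d4, d1⊕d3
          [0b0100, 0b1010]]           # V2: d2, d1⊕d3
pool = [0b0010, 0b1101]               # candidate packets: d3, d1⊕d2⊕d4
assert abs(fitness([1, 1], pool, caches) - 1.0) < 1e-9
```

Scheduling both candidates raises each vehicle's rank from 2 to 4, so the average rank gain is 2 and the per-packet fitness is 1.0, the best possible value.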
4.4.3 Offspring Generation

Parent selection, crossover, and mutation are the reproduction operators conducted for generating offspring solutions. In particular, tournament selection [19] is employed to identify the two parents in the population for reproduction. The basic idea of tournament selection is to select the best two solutions χ_l and χ_n, in terms of the fitness value f, from k individuals in the population. The method is shown in lines 12∼14 of Algorithm 4.1. In addition, uniform crossover and bit mutation [20] are adopted. Specifically, each bit of an offspring is randomly selected from either χ_l or χ_n, and hence each offspring has approximately half of its genes from each parent. Further, to increase the diversity of the search process, the bit mutation operator is applied to all dimensions of each offspring solution with a predefined mutation probability ρ. The procedures of the crossover and mutation operators are shown in lines 15∼29 of Algorithm 4.1.
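The three reproduction operators can be sketched as follows. This is an illustrative implementation with our own function names; k and rho correspond to the tournament size and the mutation probability ρ:

```python
import random

def tournament_parents(population, fitnesses, k):
    """Pick k random individuals and return the two fittest (lines 12-14)."""
    idx = random.sample(range(len(population)), k)
    idx.sort(key=lambda i: fitnesses[i], reverse=True)
    return population[idx[0]], population[idx[1]]

def uniform_crossover(chi_l, chi_n):
    """Each bit of the first child comes from either parent; the second
    child takes the complementary choice (lines 15-18)."""
    c1, c2 = [], []
    for bl, bn in zip(chi_l, chi_n):
        if random.random() < 0.5:
            c1.append(bl); c2.append(bn)
        else:
            c1.append(bn); c2.append(bl)
    return c1, c2

def mutate(chi, rho):
    """Flip every bit independently with probability rho (lines 20-25)."""
    return [1 - b if random.random() < rho else b for b in chi]
```

Note that the second child is the bitwise complement of the first child's parental choices, so together the two children preserve every gene of both parents, as line 17 of Algorithm 4.1 prescribes.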
4.4.4 Local Search

The local search is implemented by sequential flipping operations conducted on each solution. In particular, given a solution χ_m, the local search is executed from the first to the last dimension with value flipping, i.e., 0 to 1 or 1 to 0. Given the fitness function f, only a flip that improves the fitness is accepted and stored in the solution. For example, suppose χ'_m is the solution newly obtained by flipping the ith bit of χ_m. Then, we have χ'_m[i] ← 1 − χ_m[i] and χ'_m[j] ← χ_m[j], ∀j ≠ i. Afterward, we compute and compare the fitness values of χ_m and χ'_m, denoted by f_{χ_m} and f_{χ'_m}, respectively. If f_{χ'_m} > f_{χ_m}, then χ_m is replaced by χ'_m in this round of local search. The procedures are shown in lines 30∼43 of Algorithm 4.1.
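The flip-based local search can be sketched as a simple hill climber. Here `fitness_fn` and `repair_fn` are assumed callables standing in for Eq. (4.2) and the repair operator of Sect. 4.4.5; the function name is ours:

```python
def local_search(chi, fitness_fn, repair_fn, t):
    """Bit-flip hill climbing over one solution (lines 30-41 of Alg. 4.1)."""
    best = list(chi)
    best_f = fitness_fn(best)
    for i in range(len(best)):
        tmp = list(best)
        tmp[i] = 1 - tmp[i]           # flip the ith bit
        if sum(tmp) > t:              # repair if T is exceeded
            tmp = repair_fn(tmp, t)
        f_tmp = fitness_fn(tmp)
        if f_tmp > best_f:            # accept only improving flips
            best, best_f = tmp, f_tmp
    return best
```

As a toy check, using the number of ones as a stand-in fitness and an identity repair, the climber flips every zero bit to one.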
4.4.5 Repair Operator Note that the operations of crossover, mutation, and local search may induce N infeasible solutions where . χm [i] > T . To address this issue, we design the i=1
following repair operator. First, it finds the set of "1" bits in the solution χ_m, i.e., S = {i | χ_m[i] = 1}. Second, it randomly selects |S| − T elements from S and sets them to 0. Clearly, Σ_{i=1}^{N} χ_m[i] = T after applying this repair operator.
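This operator translates directly into code (a minimal sketch; the function name is ours, and the no-op branch for already-feasible solutions is our addition):

```python
import random

def repair(chi, t):
    """Randomly reset |S| - T of the '1' bits so that sum(chi) equals T."""
    ones = [i for i, bit in enumerate(chi) if bit == 1]
    if len(ones) <= t:
        return chi                    # already feasible, nothing to repair
    for i in random.sample(ones, len(ones) - t):
        chi[i] = 0
    return chi
```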
4.4.6 Population Replacement

To determine the solutions that survive to the next generation and keep the population size unchanged, as shown in lines 44∼45 of Algorithm 4.1, MA compares the fitness values of the solutions in the union of the parent and offspring populations. Then, the top-M solutions are kept in the population for further evolution.
4.4.7 Termination

The stopping criterion in this chapter is defined as reaching either the predefined maximum number of generations (G_max) or the global optimum, which is commonly used for memetic algorithms in the literature [21, 22]. Note that it is also straightforward to adopt other criteria for the proposed MA, such as a maximum number of fitness evaluations.

Based on the above-described procedures, the computational complexity of MA is analyzed as follows. In the initialization phase, generating a feasible solution χ traverses at most N bits, so, given the population size M, the overhead of initialization is O(M · N). In the offspring generation phase, parent selection computes the fitness values of k random individuals. Based on the definition in Eq. (4.2), computing f requires checking the coefficient matrix of |V| vehicles, and the matrix size is bounded by |D| · |D|, where |D| is the size of the database. The crossover operation traverses the two children solutions once with an overhead of 2N. The mutation operation traverses the two N-dimension children solutions, and for each traversal the repair operator is applied at most N times; as illustrated, the overhead of applying the repair operator is N. Accordingly, the overhead of offspring generation is O(k · |V| · |D|^2 + 2N + 2N^2), which is O(|V| · |D|^2 + N^2). In the local search phase, the two N-dimension children solutions are traversed. When examining each element, the overhead of applying the repair operator is N, and the overhead of computing the fitness value is |V| · |D|^2. Accordingly, the overhead of local search is O(2N · (N + |V| · |D|^2)), which is O(N · |V| · |D|^2 + N^2). According to the stopping criterion, the number of iterations is bounded by G_max.
Overall, the computational complexity of the proposed MA is O(M · N) + O(G_max · (|V| · |D|^2 + N^2)) + O(G_max · (N · |V| · |D|^2 + N^2)), which is O(M · N + G_max · (N · |V| · |D|^2 + N^2)). In practice, M, N, and G_max are constant parameters defined by the algorithm, and their values do not depend on the system
scale. For example, M, N, and G_max are set to 100, 30, and 100, respectively, in the simulation, which gives the algorithm near-optimal performance. Therefore, the computational complexity reduces to O(|V| · |D|^2). Furthermore, since D is the set of data items requested by all the vehicles, it is reasonable to assume a constant and small |D| for particular applications. With the above analysis, we may conclude that the complexity of MA can be approximated by a linear function of the number of vehicles in the service region. Therefore, the computational overhead of MA will not be a hurdle to system scalability.
4.5 Performance Evaluation

4.5.1 Setup

The simulation model is built based on the system architecture presented in Sect. 4.2. Specifically, SUMO [23] is adopted to simulate vehicle mobility and generate vehicle traces. The map is extracted from a 2.2 × 2.2 km area of the University Town in Chongqing City, in which 4 RSUs and 1 BS are simulated. Further, a control module implemented in C enables the logically centralized control, including trace analysis, information collection, service coordination and management, and the scheduling and service interfaces for RSUs and BSs. The MA is implemented in MATLAB; it runs based on the parameters from the control module and then outputs scheduling decisions to the controller. To give a clear view of the developed simulation model, Fig. 4.3 illustrates the key functions and relationships of the three modules. Due to the dedicated service architecture presented in Sect. 4.2, previous data scheduling algorithms designed for vehicular networks cannot be directly adopted in the concerned system environment. Therefore, for comparison purposes, we implement two classic data broadcast algorithms from mobile computing systems. One is Most Requested First (MRF) [24], which broadcasts the data item with the maximum number of pending requests. The other is Round-Robin [25], which broadcasts all the data items periodically. For MA, by default, the population size M is set to 100, the dimension N of the solution vector to 30, the maximum number of generations G_max to 100, the mutation probability ρ to 0.01, and the maximum scheduling period to 5 time units. For the system, a random linear coding strategy is implemented in each RSU, and the default transmission rate is 0.4 packets per time unit. The number of requested data items is set to 50, and 120 vehicles are simulated.
Unless stated otherwise, all the experiments are conducted with the default setting.
Fig. 4.3 Simulation modules
In order to quantitatively evaluate the performance of the algorithms, we design and analyze the following metrics:

• Number of Broadcast Packets (NBP): It is defined as the number of broadcast packets required to serve all the requests. According to the analysis in Sect. 4.3.2, the lower bound of NBP can be computed by |D| − L, where |D| is the number of requested data items and L = min_{∀V_i∈V}(|C'(V_i)|).

• Average Service Delay (ASD): It is designed for evaluating the algorithm performance on satisfying delay-tolerant services. Specifically, denote the service delay of V_j as l_j; that is, it takes l_j time slots to achieve a full rank of the coefficient matrix of V_j. Then, ASD is defined as the summation of each vehicle's service delay over the number of vehicles, which is computed by

  ASD = (Σ_{j=1}^{|V|} l_j) / |V|        (4.3)
A lower value of ASD indicates a better performance of the algorithm on satisfying delay-tolerant services.

• Broadcast Productivity (BP): It is designed for evaluating the algorithm performance on enhancing the bandwidth efficiency. Specifically, denote the ranks of the original (at time t_j^start) and newly obtained (at time t_j^end) matrices of V_j as r_j and r'_j, respectively, and denote the service duration (i.e., t_j^end − t_j^start) by t_j. Then, BP is defined as the summation of each vehicle's improved rank per
time unit over the number of vehicles, which is computed by

  BP = (Σ_{j=1}^{|V|} (r'_j − r_j) / t_j) / |V|        (4.4)
A higher value of BP indicates a better performance of the algorithm on enhancing the efficiency of the broadcast bandwidth. The upper bound of BP is 1, because the average rank improves by at most one per time unit, which happens when all the vehicles receive an outstanding data item.

• Average Service Ratio (ASR): It is designed for evaluating the algorithm performance on scheduling real-time services. Specifically, given a slack time τ, denote the rank of the coefficient matrix of V_j after τ as r''_j. Then, the service ratio for V_j is computed by r''_j / |D|. Accordingly, the ASR of the system is defined as the summation of each vehicle's service ratio over the number of vehicles, which is computed by

  ASR = (Σ_{j=1}^{|V|} r''_j / |D|) / |V|        (4.5)
A higher value of ASR indicates a better performance of the algorithm on scheduling real-time data services.
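The three averaged metrics above can be computed directly from per-vehicle service records. The following is our own sketch, not the book's simulator; all field and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class VehicleRecord:
    # Illustrative per-vehicle record (field names are ours, not the book's)
    service_delay: int   # l_j: slots until the coefficient matrix of V_j reaches full rank
    rank_start: int      # r_j: rank of the originally obtained matrix (at t_j^start)
    rank_end: int        # r_j': rank of the newly obtained matrix (at t_j^end)
    duration: int        # t_j = t_j^end - t_j^start
    rank_at_slack: int   # rank achieved within the slack time tau

def asd(records):
    """Average Service Delay, Eq. (4.3): mean of l_j over all vehicles."""
    return sum(r.service_delay for r in records) / len(records)

def bp(records):
    """Broadcast Productivity, Eq. (4.4): mean improved rank per time unit."""
    return sum((r.rank_end - r.rank_start) / r.duration for r in records) / len(records)

def asr(records, num_items):
    """Average Service Ratio, Eq. (4.5): mean of r_j / |D| over all vehicles."""
    return sum(r.rank_at_slack / num_items for r in records) / len(records)

recs = [VehicleRecord(4, 2, 6, 4, 5), VehicleRecord(6, 0, 6, 6, 4)]
print(asd(recs))      # (4 + 6) / 2 = 5.0
print(bp(recs))       # ((6-2)/4 + (6-0)/6) / 2 = 1.0, i.e., the upper bound
print(asr(recs, 6))   # (5/6 + 4/6) / 2 = 0.75
```

Note that the second record in the toy data gains exactly one rank per time slot, so BP attains its upper bound of 1, as discussed above.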
4.5.2 Simulation Results

Effect of System Workload on Delay-Tolerant Services
The first set of experiments evaluates the algorithm performance on scheduling delay-tolerant services under different system workloads (Figs. 4.4, 4.5, and 4.6); the more data items requested by vehicles, the higher the system workload.

Fig. 4.4 NBP under different system workloads
Fig. 4.5 ASD under different system workloads
Fig. 4.6 BP under different system workloads

Specifically, Fig. 4.4 compares the NBP of the algorithms. As analyzed, the lower bound of the NBP is computed by |D| − min_{∀Vi∈V} |C(Vi)|, where |C(Vi)| is obtained from the cached packets of Vi in the simulation. We observe from Fig. 4.4 that MA achieves the optimal performance in terms of minimizing NBP. In contrast, both MRF and Round-Robin almost reach the upper bound of the NBP in all cases, which is the number of requested data items. This is because neither MRF nor Round-Robin adopts network coding that considers the cached packets of vehicles, so a data item has to be broadcast even if only one vehicle is waiting for it. Due to the diversity of outstanding requests by different vehicles, the algorithm most likely has to schedule all the data items for broadcasting to complete the service. Figure 4.5 compares the ASD of the algorithms under different system workloads. As shown, the ASD of all the algorithms increases with the system workload. Nevertheless, MA always achieves the lowest ASD in all cases. To better comprehend and verify this superiority of MA, we further examine the result shown in Fig. 4.6, which compares the BP of the algorithms under different system workloads. As analyzed, a higher value of BP demonstrates a better capability of the algorithm on improving the bandwidth efficiency, and the upper bound is 1.
Fig. 4.7 NBP under different data transmission rates of RSUs
Fig. 4.8 ASD under different data transmission rates of RSUs
Evidently, MA achieves near-optimal BP in all scenarios, which is much higher than that of the other algorithms. Also, note that MRF performs better than Round-Robin because MRF considers data productivity in scheduling by broadcasting the data item with the most pending requests. This observation is consistent with findings in previous work [9].

Effect of Data Transmission Rates of RSUs on Delay-Tolerant Services
This set of experiments evaluates the algorithm performance on scheduling delay-tolerant services under different data transmission rates of RSUs (Figs. 4.7, 4.8, and 4.9). As stated above, for a fair comparison, the same random linear coding strategy is adopted at the RSUs for all the algorithms. Nevertheless, different data transmission rates of RSUs result in different numbers of cached packets at vehicles, which may influence the algorithm performance at BSs.

Fig. 4.9 BP under different data transmission rates of RSUs

Figure 4.7 compares the NBP of the algorithms under different data transmission rates of RSUs. Clearly, when the data transmission rate of RSUs is higher, vehicles are able to cache more packets when passing by. According to the collected statistics, the average number of cached packets per vehicle increases from 9 to 25 as the data transmission rate increases. Intuitively, the BS should be able to broadcast fewer packets to complete the service when more packets are cached by vehicles. Nevertheless, we note that the NBP of MRF and Round-Robin remains almost unchanged as the data rate of RSUs gets higher. This is because MRF and Round-Robin can barely take advantage of the cached packets, as most of them are in encoded form. In contrast, MA always achieves the optimal result by fully exploiting the benefits of vehicular caching and network coding. Figure 4.8 compares the ASD of the algorithms under different data transmission rates of RSUs. First, the performance of all the algorithms improves as the data transmission rate of RSUs gets higher, which makes sense because the overall system workload is relieved. Second, we note that MA consistently outperforms the other algorithms in all scenarios, especially when the service rate of RSUs is higher. This further demonstrates the superiority of MA in exploiting the vehicular cache content when making coding decisions. Further, we analyze the BP of the algorithms under different data transmission rates of RSUs in Fig. 4.9. As observed, MA always manages to achieve the upper bound of BP in different scenarios, while the performance of MRF and Round-Robin decreases significantly. This is because when more packets are received from RSUs, it is more likely that a data item broadcast by MRF or Round-Robin can serve only a few vehicles, which results in the decrease of BP.

Effect of Slack Time on Real-Time Services
The following experiments evaluate the algorithm performance on scheduling real-time services (Figs. 4.10, 4.11, and 4.12). Specifically, Fig. 4.10 evaluates the algorithm performance on scheduling real-time services with different slack times. A shorter slack time imposes a more stringent time constraint on data services, beyond which the service fails. As expected, all the algorithms achieve a higher ASR when the slack time gets longer. Also, MA consistently outperforms the other algorithms over the whole range.
Effect of System Workload on Real-Time Services
Figure 4.11 evaluates the algorithm performance on scheduling real-time services under different system workloads. As shown, the ASR of all the algorithms decreases with an increasing number of requested data items, but MA always achieves the highest ASR in all the scenarios, which demonstrates the scalability of the solution.
Fig. 4.10 ASR under different service time constraints
Fig. 4.11 ASR under different system workloads
Fig. 4.12 ASR under different data transmission rates of RSUs
Effect of Data Transmission Rate of RSUs on Real-Time Services
Figure 4.12 evaluates the algorithm performance on scheduling real-time services under different data transmission rates of RSUs. As shown, with higher data transmission rates of RSUs, vehicles are able to cache more packets when they are passing by. Accordingly, it is expected that all the algorithms will achieve a higher ASR. Also, we note that MA achieves the best performance across the whole range.
4.6 Conclusion

In this chapter, we presented an SDN-based data service architecture in heterogeneous vehicular communication environments, where RSUs, BSs, and vehicles are abstracted as the data plane, while the scheduling decisions are exercised by the logically centralized control plane. On this basis, we gave a formal description of the coding-assisted broadcast scheduling problem, CBS, which aims at maximizing the bandwidth efficiency by incorporating vehicular caching and network coding into scheduling decisions. We proved that CBS is NP-hard by constructing a polynomial-time reduction from the simultaneous matrix completion problem. We proposed a memetic algorithm (MA) to solve the CBS problem, which consists of a binary vector representation for encoding solutions, a fitness function for solution evaluation, a set of reproduction operators (i.e., parent selection, crossover, and mutation) for offspring generation, a local search method for solution enhancement, and a repair operator for fixing infeasible solutions. Finally, we built the simulation model and gave a comprehensive performance evaluation. The simulation results conclusively demonstrated the effectiveness of the proposed solution under a variety of circumstances.
References

1. K. Liu, L. Feng, P. Dai, V.C. Lee, S.H. Son, J. Cao, Coding-assisted broadcast scheduling via memetic computing in SDN-based vehicular networks. IEEE Trans. Intell. Transp. Syst. 19(8), 2420–2431 (2017)
2. Y.L. Morgan, Notes on DSRC & WAVE standards suite: its architecture, design, and characteristics. IEEE Commun. Surv. Tutor. 12(4), 504–518 (2010)
3. C.-J. Chang, R.-G. Cheng, H.-T. Shih, Y.-S. Chen, Maximum freedom last scheduling algorithm for downlinks of DSRC networks. IEEE Trans. Intell. Transp. Syst. 8(2), 223–232 (2007)
4. K. Liu, V.C.S. Lee, J.K.-Y. Ng, J. Chen, S.H. Son, Temporal data dissemination in vehicular cyber–physical systems. IEEE Trans. Intell. Transp. Syst. 15(6), 2419–2431 (2014)
5. X. Shen, X. Cheng, L. Yang, R. Zhang, B. Jiao, Data dissemination in VANETs: a scheduling approach. IEEE Trans. Intell. Transp. Syst. 15(5), 2213–2223 (2014)
6. Q. Xiang, X. Chen, L. Kong, L. Rao, X. Liu, Data preference matters: a new perspective of safety data dissemination in vehicular ad hoc networks, in Proceedings of the IEEE Conference on Computer Communications (INFOCOM'15) (2015), pp. 1149–1157
7. H.A. Omar, W. Zhuang, L. Li, On multihop communications for in-vehicle Internet access based on a TDMA MAC protocol, in Proceedings of the IEEE Conference on Computer Communications (INFOCOM'14) (2014), pp. 1770–1778
8. Z. He, J. Cao, X. Liu, To carry or to forward? A traffic-aware data collection protocol in VANETs, in Proceedings of the 11th International Conference on Mobile Ad Hoc and Sensor Systems (MASS'14) (2014), pp. 272–276
9. K. Liu, J.K.-Y. Ng, J. Wang, V. Lee, W. Wu, S.H. Son, Network-coding-assisted data dissemination via cooperative vehicle-to-vehicle/-infrastructure communications. IEEE Trans. Intell. Transp. Syst. 17(6), 1509–1520 (2016)
10. M. Li, Z. Yang, W. Lou, CodeOn: cooperative popular content distribution for vehicular networks using symbol level network coding. IEEE J. Sel. Areas Commun. 29(1), 223–235 (2011)
11. Z. He, J. Cao, X. Liu, SDVN: enabling rapid network innovation for heterogeneous vehicular communication. IEEE Netw. 30(4), 10–15 (2016)
12. Q. Zheng, K. Zheng, H. Zhang, V.C. Leung, Delay-optimal virtualized radio resource scheduling in software-defined vehicular networks via stochastic learning. IEEE Trans. Veh. Technol. 65(10), 7857–7867 (2016)
13. K. Zheng, L. Hou, H. Meng, Q. Zheng, N. Lu, L. Lei, Soft-defined heterogeneous vehicular network: architecture and challenges. IEEE Netw. 30(4), 72–80 (2016)
14. Y.-C. Liu, C. Chen, S. Chakraborty, A software defined network architecture for GeoBroadcast in VANETs, in Proceedings of the IEEE International Conference on Communications (ICC'15) (2015), pp. 6559–6564
15. X. Chen, Y.-S. Ong, M.-H. Lim, K.C. Tan, A multi-facet survey on memetic computation. IEEE Trans. Evol. Comput. 15(5), 591–607 (2011)
16. R. Kim, H. Lim, B. Krishnamachari, Prefetching-based data dissemination in vehicular cloud systems. IEEE Trans. Veh. Technol. 65(1), 292–306 (2016)
17. N.J. Harvey, D.R. Karger, S. Yekhanin, The complexity of matrix completion, in Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'06) (2006), pp. 1103–1111
18. H. Zhang, X. Cao, J.K. Ho, T.W. Chow, Object-level video advertising: an optimization framework. IEEE Trans. Ind. Inform. 13(2), 520–531 (2017)
19. B.L. Miller, D.E. Goldberg, Genetic algorithms, tournament selection, and the effects of noise. Complex Syst. 9(3), 193–212 (1995)
20. H. Mühlenbein, M. Gorges-Schleuter, O. Krämer, Evolution algorithms in combinatorial optimization. Parallel Comput. 7(1), 65–85 (1988)
21. Y.S. Ong, A.J. Keane, Meta-Lamarckian learning in memetic algorithms. IEEE Trans. Evol. Comput. 8(2), 99–110 (2004)
22. F. Neri, C. Cotta, P. Moscato, Handbook of Memetic Algorithms. Studies in Computational Intelligence, vol. 379 (2011)
23. M. Behrisch, L. Bieker, J. Erdmann, D. Krajzewicz, SUMO–simulation of urban mobility: an overview, in Proceedings of the 3rd International Conference on Advances in System Simulation (SIMUL'11), ThinkMind (2011)
24. J.W. Wong, Broadcast delivery. Proc. IEEE 76(12), 1566–1577 (1988)
25. S. Khanna, S. Zhou, On indexed data broadcast, in Proceedings of the Annual ACM Symposium on Theory of Computing (STOC'98), ACM (1998), pp. 463–472
Chapter 5
Fog Computing Empowered Data Dissemination in Heterogeneous Vehicular Networks
Abstract This chapter presents a fog computing empowered architecture together with a dedicated scheduling algorithm for data dissemination in software defined heterogeneous vehicular networks. Specifically, the architecture supports both the logically centralized control via the cloud node in the core network and the distributed data dissemination via the fog nodes at the network edge. A problem called Fog Assisted Cooperative Service (FACS) is formulated, which takes network coding and vehicular caching into consideration and aims at minimizing the overall service delay via the cooperation of V2C, Vehicle-to-Fog (V2F), and V2V communications. Furthermore, we derive an equivalent problem of FACS and prove that FACS is NP-hard. On this basis, we propose a Clique Searching based Scheduling (CSS) algorithm at the SDN controller, which considers the heterogeneous communication interfaces and vehicle mobility in scheduling and enables collaborative data encoding and transmission among the cloud nodes, fog nodes, and vehicles. The complexity analysis demonstrates the feasibility of the proposed algorithm. Finally, we build the simulation model and give a comprehensive performance evaluation based on real vehicular trajectories extracted at different times and locations. The simulation results conclusively demonstrate the superiority of the proposed solution.

Keywords Fog computing · Software defined vehicular network · Network coding · Data dissemination
5.1 Introduction

The IoV is stepping into a new era with the fast evolution of wireless communication technologies such as DSRC and C-V2X [1], as well as the emergence of cutting-edge networking paradigms such as SDN and fog computing [2]. A variety of emerging ITSs such as active safety, auto-driving, connected cars, and pedestrian-vehicle-road collaboration are around the corner. Nevertheless, due to the heterogeneity of network nodes in IoV in terms of communication, computation, and storage capacities [3] and the massive data demands of future ITS applications
where over gigabytes of data could be sensed and generated by each vehicle every second [4], it is of great significance to design novel service architectures and corresponding scheduling algorithms that enable efficient data dissemination in vehicular networks. Among automotive manufacturers, the mainstream technologies for enabling modern cars with communication capabilities are based on off-the-shelf cellular networks and Wi-Fi access points [5], which are not scalable to applications requiring massive data transmission and low service latency. In academia, there have been extensive studies on developing service architectures [2, 6, 7], communication protocols [8–10], and scheduling algorithms [11–13] in IoV. In spite of the above efforts, due to the intrinsic characteristics of vehicular networks, including the high mobility of vehicles, intermittent wireless connections, and heterogeneous networking resources, it is still challenging yet imperative to solve the following critical issues on data dissemination in IoV. First, it is desirable to improve system responsiveness and adaptability in dealing with dynamically changing application requirements on data services. Second, it is crucial to best coordinate the heterogeneous wireless communication resources by improving the bandwidth efficiency in both temporal and spatial domains, so as to better support massive data demands in large-scale vehicular networks. Last but not least, it is important to exploit the communication and caching capabilities of the nodes at the edge of the network to further reduce service latency and enhance system scalability. To address the aforementioned issues, this chapter aims at designing a novel service architecture by synthesizing the paradigms of SDN and fog computing in IoV and proposing a dedicated data scheduling algorithm based on network coding. The basic rationales are summarized as follows.
First, it has been demonstrated that integrating SDN into IoV is promising for facilitating data scheduling and improving system flexibility in highly dynamic vehicular environments [6]. Therefore, with the SDN-based architecture, the network intelligence (i.e., the scheduler) is implemented in a centralized software-based controller, and network nodes such as vehicles and RSUs transmit data packets based on the scheduling decisions made by the controller. Such an architecture forms the basis for exploiting heterogeneous communication resources in IoV for efficient data dissemination. Second, to best coordinate the nodes at the edge of vehicular networks, the fog computing paradigm [7] is co-designed into the architecture, aiming at exploiting the data dissemination capabilities of distributed nodes in IoV to further reduce service latency and enhance system responsiveness. Finally, in view of the fact that the network coding technique has shown great potential for improving bandwidth efficiency in vehicular networks [14, 15], we propose a data scheduling algorithm based on the bitwise XOR (⊕) coding operation, by which one broadcast packet may serve multiple requests for different data items. As elaborated later, since the network nodes in IoV normally have much higher caching capacities than those in conventional wireless sensor networks, the benefit in terms of improving bandwidth efficiency would be more significant in IoV, provided that the synergistic effect of network coding and vehicular caching is exploited. The main contributions of this chapter are outlined as follows:
• We present a fog computing empowered data dissemination architecture in software defined heterogeneous vehicular networks, where infrastructures that are physically closer to the vehicles, such as RSUs and 5G small/micro stations, are considered fog nodes, while infrastructures that are farther from the vehicles, such as conventional cellular BSs, are considered cloud nodes. Cloud nodes, fog nodes, and vehicles have heterogeneous data transmission rates and radio coverage. The SDN controller, which makes scheduling decisions based on global knowledge of the system, realizes cooperative services among cloud nodes, fog nodes, and vehicles via V2C, V2F, and V2V communications.
• We formulate the FACS problem, in which the SDN controller makes data scheduling decisions based on both cyber (e.g., cached and requested data items of vehicles) and physical (e.g., heterogeneous communication resources and vehicle status) information of the system. The objective of FACS is to best exploit the synergistic effect of network coding and vehicular caching and to minimize the service delay via the cooperation among cloud nodes, fog nodes, and vehicles. Furthermore, we derive an equivalent problem of FACS to give insight into the problem analysis. Finally, we prove the NP-hardness of FACS by a reduction from a well-known NP-hard problem called Minimum Clique Cover (MCC) [16].
• We propose the CSS algorithm at the SDN controller for making decisions on cooperative data dissemination via V2C, V2F, and V2V communications. Specifically, a graph transformation scheme is proposed that considers the heterogeneous data rates and radio coverage of the wireless communication interfaces as well as vehicle mobility, which forms the basis for deriving valid solutions.
Then, the detailed procedures of CSS are presented; the algorithm makes scheduling decisions on packet encoding and on the corresponding operations of the cloud nodes, fog nodes, and vehicles, so that they can collaboratively transmit data items via V2C, V2F, and V2V communications. Finally, we analyze the complexity of CSS to show the feasibility of the algorithm.
5.2 System Architecture

5.2.1 Overview

In this section, we present a fog computing empowered data dissemination architecture in software defined heterogeneous vehicular networks. As shown in Fig. 5.1, the system consists of three layers. In the top layer, the cloud nodes, such as cellular BSs, are connected to the SDN controller via the core network. Cloud nodes can communicate with vehicles via cellular interfaces such as LTE-V and C-V2X; for simplicity, we call this V2C communication. In the middle layer, the fog nodes, such as RSUs and 5G small/micro stations, are connected with each other via wired connections. Each fog node can communicate with vehicles in its radio coverage,
Fig. 5.1 Fog computing empowered data dissemination architecture in IoV
and we call this V2F communication. The end users (i.e., vehicles) form the bottom layer. Vehicles can communicate with their neighbors via V2V communication, and they can also communicate with the cloud and fog nodes via V2C and V2F communications, respectively.
5.2.2 Operational Flow

The operational flow of the system consists of five steps, as shown in Fig. 5.1.
Step 1: Vehicles encapsulate their physical (e.g., location, velocity, and direction) and cyber (e.g., requested data and cached data) information into control messages and submit them to the cloud node via V2C communication.
Step 2: Each cloud node forwards its local information to the control plane via the core network, so that the SDN controller is able to obtain global knowledge of the system.
Step 3: The SDN controller makes scheduling decisions and forwards them to the cloud nodes.
Step 4: The scheduling decisions are piggybacked onto control messages and broadcast to fog nodes and vehicles.
Step 5: The cloud nodes, fog nodes, and vehicles collaboratively perform data dissemination via V2C, V2F, and V2V communications, respectively, by acting on their received instructions.
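The five steps can be mocked up in a few lines. This is our own minimal sketch: the message fields and the placeholder broadcast-everything policy in Step 3 are illustrative, not the book's scheduler.

```python
def step1_vehicle_report(vehicle):
    # Step 1: a vehicle encapsulates physical and cyber state into a control message
    return {"id": vehicle["id"], "location": vehicle["location"],
            "velocity": vehicle["velocity"], "requested": vehicle["requested"],
            "cached": vehicle["cached"]}

def step2_cloud_aggregate(reports):
    # Step 2: the cloud node forwards its local view to the SDN controller,
    # which thereby obtains global knowledge of the system
    return {r["id"]: r for r in reports}

def step3_controller_schedule(global_view):
    # Step 3: the controller derives scheduling decisions from the global view.
    # Placeholder policy (ours): broadcast every distinct outstanding data item.
    outstanding = set()
    for r in global_view.values():
        outstanding |= set(r["requested"]) - set(r["cached"])
    return sorted(outstanding)

# Steps 4-5: decisions are piggybacked to fog nodes and vehicles, which then
# transmit via V2C/V2F/V2V accordingly (omitted in this sketch).
vehicles = [{"id": "k1", "location": (0, 0), "velocity": 12.0,
             "requested": ["d1"], "cached": ["d2", "d3"]},
            {"id": "k2", "location": (5, 0), "velocity": 10.0,
             "requested": ["d1", "d2", "d3"], "cached": []}]
view = step2_cloud_aggregate(step1_vehicle_report(v) for v in vehicles)
print(step3_controller_schedule(view))  # ['d1', 'd2', 'd3']
```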
5.2.3 An Example

Given the above system architecture, the remainder of this chapter focuses on data scheduling at the SDN controller, which aims at exploiting the synergistic effect of network coding and vehicular caching via the cooperation among the cloud nodes, fog nodes, and vehicles, so that the overall service delay is minimized. We give an example to illustrate the concerned problem as well as to reveal the scheduling challenges. As labeled in Fig. 5.1, suppose there are three vehicles k1, k2, and k3 in the service range of a cloud node. Meanwhile, suppose k1 and k2 are neighboring vehicles, which can share data items via V2V communication, whereas k3 and k1 are in the coverage of different fog nodes. The data items in the blank box are requested by the vehicle (e.g., k2 requests d1, d2, and d3), while the data items in the shadowed box are cached by the vehicle (e.g., k1 has cached d2 and d3). Suppose the bitwise XOR (⊕) operation is used for encoding data items of uniform size, so an encoded packet has the same size as a data item. With the above setting, we compare the following three strategies.
• Without network coding: Based on conventional broadcast scheduling policies, since the three vehicles have different outstanding data items (e.g., d1, d3, and d2 are requested by k1, k2, and k3, respectively), it requires at least three broadcast ticks to complete the service by broadcasting the three data items one by one.
• Using network coding without fog/vehicle cooperation: This approach exploits vehicular caching and network coding in scheduling. Specifically, in view of the data items cached at k1 and k3, the packet p = d1 ⊕ d2 can be encoded and broadcast to serve both k1 and k3 simultaneously, since they can compute p ⊕ d2 and p ⊕ d1, respectively.
Clearly, such a solution brings the advantage that one broadcast packet can serve more than one request for different data items, which improves the bandwidth efficiency over the conventional broadcast environment. Nevertheless, note that k2 still cannot be served in one broadcast tick solely via V2C communication.
• Using network coding with fog/vehicle cooperation: This approach further exploits the heterogeneous communication interfaces for both temporal and spatial reuse of the bandwidth. Specifically, the cloud can broadcast d1 ⊕ d2 ⊕ d3 via V2C communication. In the meantime, k1 can share its cached d2 with k2 via V2V communication, and k3 can share its cached d1 with k2 via fog cooperation (i.e., k3 uploads d1 via V2F communication, and the fog node that received d1 further transmits it to the peer fog node in whose coverage k2 is dwelling, so that k2 can retrieve d1 via V2F communication). Accordingly, k2 can retrieve d1 ⊕ d2 ⊕ d3, d1, and d2 in parallel via V2C, V2F, and V2V communications, respectively. Suppose the delay of fog/vehicle cooperation is no longer than that of the cloud; then all the vehicles can decode their requested data items in one broadcast, which minimizes the service time.
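The third strategy can be checked mechanically. The following is our own sketch of the example above; the byte values are illustrative, and we additionally assume k3 has cached d3 (implied by the claim that every vehicle decodes in one broadcast).

```python
def xor(*items: bytes) -> bytes:
    """Bitwise XOR of equal-size byte strings (the encoding operation above)."""
    out = bytes(len(items[0]))
    for it in items:
        out = bytes(a ^ b for a, b in zip(out, it))
    return out

# Three data items of uniform size (values are made up for illustration):
d1, d2, d3 = b"\x11\x11", b"\x22\x22", b"\x33\x33"

# The cloud broadcasts a single encoded packet via V2C:
p = xor(d1, d2, d3)

# k1 (cached d2, d3) decodes its outstanding item d1:
assert xor(p, d2, d3) == d1
# k3 (cached d1, and here assumed also d3) decodes d2:
assert xor(p, d1, d3) == d2
# k2 receives d2 from k1 (V2V) and d1 from k3 (fog relay), then decodes d3:
assert xor(p, d1, d2) == d3
print("all three vehicles served in one broadcast tick")
```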
5.2.4 Assumptions

To focus on the scheduling problem illustrated in the above example, we make the following assumptions:
• We ignore the overhead of transmitting control messages, including the time taken for submitting requests and notifying scheduling decisions, since such signaling messages are much smaller than data items.
• We ignore the overhead of data transmission via wired connections, including the communication between the cloud nodes and the controller in the core network and the communication among fog nodes connected via the wired network.
• We assume that the cloud nodes fully cover the service area; that is, every vehicle is able to communicate with at least one cloud node at any time. Note that the fog nodes, which are likely to be deployed sparsely, are not required to form seamless coverage.
5.3 Fog Assisted Cooperative Service (FACS) Problem

5.3.1 Problem Analysis

First, we formulate the FACS problem. The primary notations are summarized in Table 5.1. Without loss of generality, we consider one cloud node connected to the core network, which can retrieve data items from the database D = {d1, d2, …, d_|D|}; the formulation can be straightforwardly extended to the scenario where multiple cloud nodes are connected to the SDN controller. The set of fog nodes is denoted by N = {n1, n2, …, n_|N|}. At time t, the set of vehicles in the system is denoted by K(t) = {k1, k2, …, k_|K(t)|}. For vehicle km (1 ≤ m ≤ |K(t)|), the set of its requested data items is denoted by Q_km = {d1^km, d2^km, …, d_|Q_km|^km} (Q_km ⊆ D). Given the radio coverage of a fog node ni (1 ≤ i ≤ |N|), the set of vehicles in the coverage of ni is denoted by K_ni(t) = {k1^ni, k2^ni, …, k_|K_ni(t)|^ni} (K_ni(t) ⊆ K(t)). Using the bitwise XOR (⊕) operation, an encoded packet p can be represented by p = d1^p ⊕ d2^p ⊕ … ⊕ dn^p, where di^p ∈ D (1 ≤ i ≤ n) and n is the number of data items encoded in p. We consider data items of uniform size, denoted by s, and thus the size of an encoded packet p is also s. For vehicle km, the set of its cached packets at time t is denoted by C_km(t) = {p1^km, p2^km, …, p_|C_km(t)|^km}. For simplicity, we consider the data transmission rates of the cloud and fog nodes to be rc and rf, respectively; note that, in practice, both cloud and fog nodes may have heterogeneous communication interfaces with different data rates. The data transmission rate of V2V communication is denoted by rv, and the V2V communication range is denoted by l_v2v. Accordingly, if the distance between two
Table 5.1 Summary of notations

Notation | Description | Notes
D | Set of data items in the database | D = {d1, d2, …, d_|D|}
N | Set of fog nodes | N = {n1, n2, …, n_|N|}
K(t) | Set of vehicles at time t | K(t) = {k1, k2, …, k_|K(t)|}
Q_km | Set of data items requested by km | Q_km = {d1^km, …, d_|Q_km|^km}, Q_km ⊆ D
K_ni(t) | Set of vehicles covered by fog node ni | K_ni(t) = {k1^ni, …, k_|K_ni(t)|^ni}, K_ni(t) ⊆ K(t)
p | Encoded packet | p = d1^p ⊕ d2^p ⊕ … ⊕ dn^p, di^p ∈ D (1 ≤ i ≤ n)
C_km(t) | Set of cached data items/packets in vehicle km | C_km(t) = {p1^km, …, p_|C_km(t)|^km}
s | Size of data items/packets |
rf | Data transmission rate of V2F communication |
rc | Data transmission rate of V2C communication |
rv | Data transmission rate of V2V communication |
l_v2v | V2V communication range |
l_kmkn(t) | Distance between km and kn at time t |
δc | Data transmission delay of V2C communication | δc = s/rc
δv | Data transmission delay of V2V communication | δv = s/rv
δf | Data transmission delay of V2F communication | δf = s/rf
vehicles km and kn (denoted by l_kmkn(t)) is shorter than l_v2v (i.e., l_kmkn(t) ≤ l_v2v), then km and kn are able to share data items via V2V communication at time t. With the above description, the FACS problem is formulated as follows. Given the database D, the set of fog nodes N with given radio coverage, the vehicle set K with V2V communication range l_v2v, the request set Q, the cache set C, and the data transmission rates rv, rf, and rc for V2V, V2F, and V2C communications, respectively, FACS is to determine a set of encoded packets P at the cloud node for broadcasting, as well as the corresponding data dissemination operations via V2V and V2F communications, so that the total service time for Q is minimized.
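For concreteness, a FACS instance can be held in a small container mirroring the notation of Table 5.1. This is a hypothetical sketch; all field names are ours.

```python
from dataclasses import dataclass

@dataclass
class FACSInstance:
    # Hypothetical container mirroring the notation above (names are ours)
    D: set          # database of data items
    N: set          # fog nodes
    Q: dict         # requests: vehicle -> subset of D
    C: dict         # caches:   vehicle -> cached items/packets
    pos: dict       # vehicle -> (x, y) position, used for l_kmkn(t)
    r_v: float      # V2V data transmission rate
    r_f: float      # V2F data transmission rate
    r_c: float      # V2C data transmission rate
    l_v2v: float    # V2V communication range

    def can_v2v(self, km, kn):
        """km and kn can share data via V2V iff l_kmkn(t) <= l_v2v."""
        (x1, y1), (x2, y2) = self.pos[km], self.pos[kn]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= self.l_v2v

inst = FACSInstance(D={"d1", "d2", "d3"}, N={"n1", "n2"},
                    Q={"k1": {"d1"}, "k2": {"d1", "d2", "d3"}},
                    C={"k1": {"d2", "d3"}, "k2": set()},
                    pos={"k1": (0.0, 0.0), "k2": (30.0, 40.0)},
                    r_v=6.0, r_f=10.0, r_c=2.0, l_v2v=100.0)
print(inst.can_v2v("k1", "k2"))  # True: distance 50 <= range 100
```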
5.3.2 Problem Transformation

In this part, to facilitate the complexity analysis of FACS, vehicle mobility is omitted; it will be taken into consideration in the algorithm design. The general idea of the problem transformation is as follows. First, we define a set of rules for creating vertices and edges, so that a graph G can be constructed from the specification of FACS. Second, we define the corresponding data dissemination operations for FACS based on the constructed graph G and analyze how they correspond to the designed rules. Third, we map the service delay of FACS to the weights of edges and cliques in G. Finally, we show that finding an optimal solution to FACS is equivalent to finding a set of cliques that cover G such that the sum of their weights is minimized. Details are elaborated below.

Given the database D, the set of vehicles K(t), and the set of data items Q_km requested by each vehicle, the rule for creating the set of vertices V in G is defined as follows:

Definition 5.1 V is initialized as an empty set (i.e., V ← ∅). Then, ∀km ∈ K(t) and ∀di ∈ D, if di is requested by km, namely di ∈ Q_km, a vertex vmi is created (i.e., V ← V + {vmi}).

Given the set of fog nodes N, the set of vehicles in their coverage at time t can be represented by K_N(t) = ∪_{∀ni∈N} K_ni(t). Given the set of packets C_km(t) cached in each vehicle, the rule for creating the set of edges E in G is defined as follows.

Definition 5.2 For any two vertices vmi ∈ V and vnj ∈ V, an edge is created between them if any of the following conditions is satisfied:
• Condition 1: i = j.
• Condition 2: i ≠ j, di ∈ C_kn(t) ∧ dj ∈ C_km(t).
• Condition 3: i ≠ j, l_kmkn(t) ≤ l_v2v, (di ∈ C_kn(t) ∧ dj ∈ Q_km) ∨ (di ∈ Q_kn ∧ dj ∈ C_km(t)).
• Condition 4: i ≠ j, l_kmkn(t) > l_v2v, km, kn ∈ K_N(t), (di ∈ C_kn(t) ∧ dj ∈ Q_km) ∨ (di ∈ Q_kn ∧ dj ∈ C_km(t)).
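Under these rules, the graph G can be constructed mechanically. The following is our own sketch of Definitions 5.1 and 5.2, with mobility omitted as in the text; representing vertices as (vehicle, item) pairs and neighbor pairs as frozensets is our choice, not the book's.

```python
from itertools import combinations

def build_graph(requests, caches, neighbors, fog_covered):
    """Sketch of Definitions 5.1 and 5.2.

    requests/caches: vehicle -> set of data items; neighbors: set of
    frozensets of vehicle pairs within V2V range; fog_covered: K_N(t).
    """
    # Definition 5.1: one vertex (m, i) per requested item d_i of vehicle k_m
    V = [(m, i) for m in sorted(requests) for i in sorted(requests[m])]
    E = set()
    # Definition 5.2: connect two vertices if any of Conditions 1-4 holds
    for (m, i), (n, j) in combinations(V, 2):
        if m == n:
            # requested items are assumed not already cached, so pairs of
            # vertices of the same vehicle can never satisfy Conditions 1-4
            continue
        mixed = (i in caches[n] and j in requests[m]) or \
                (i in requests[n] and j in caches[m])
        cond1 = i == j
        cond2 = i != j and i in caches[n] and j in caches[m]
        cond3 = i != j and frozenset((m, n)) in neighbors and mixed
        cond4 = (i != j and frozenset((m, n)) not in neighbors
                 and m in fog_covered and n in fog_covered and mixed)
        if cond1 or cond2 or cond3 or cond4:
            E.add(frozenset(((m, i), (n, j))))
    return V, E

# Toy case: k1 requests d1 and caches d2; k3 requests d2 and caches d1,
# so Condition 2 joins their two vertices (one coded broadcast serves both).
V, E = build_graph(requests={"k1": {"d1"}, "k3": {"d2"}},
                   caches={"k1": {"d2"}, "k3": {"d1"}},
                   neighbors=set(), fog_covered=set())
print(len(V), len(E))  # 2 1
```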
Based on the above rules, an edge between any two vertices in G (e.g., vmi ∈ V and vnj ∈ V) indicates that the corresponding requests (i.e., di requested by km and dj requested by kn) can be served by broadcasting di ⊕ dj. In the following, we analyze how each condition corresponds to the data dissemination operations of FACS and show the service delay of the different operations.
• In Condition 1, the edge between the two vertices (i.e., vmi and vnj) means that km and kn request the same data item di. Therefore, broadcasting di from the cloud node via V2C communication serves both vehicles simultaneously. The service delay for such an operation is δc = s/rc.
• In Condition 2, the edge between the two vertices (i.e., vmi and vnj) means that km and kn can be served simultaneously by broadcasting di ⊕ dj via V2C
communication, because km has cached dj and kn has cached di. Since the size of an encoded packet is the same as that of a data item, the service delay for such an operation is also δc.
• In Condition 3, the edge between the two vertices (i.e., vmi and vnj) means that one of the vehicles (say km) requests both di and dj, and the other one (say kn) requests dj and has cached di. In addition, km and kn are neighboring vehicles. In this case, in addition to broadcasting di ⊕ dj via V2C communication, serving both requests also requires the cooperation of V2V communication (say kn transmits di to km). Denote δv as the delay of V2V communication, which is δv = s/rv. Then, the delay for such an operation (denoted as δcv) is computed by δcv = max(δc, δv).
• In Condition 4, the request and cache status of the two vehicles are the same as in Condition 3. The difference lies in that the two vehicles are out of V2V communication range, but both of them are in the coverage of certain fog nodes. Therefore, when di ⊕ dj is broadcast via V2C communication, to complete the service, one of the vehicles needs to upload its cached data via V2F communication, and the other vehicle downloads the data via V2F communication. The delay for V2F communication is denoted by δf, which is s/rf. Then, the service delay for such an operation (denoted as δcf) is computed by δcf = max(δc, 2·δf).
Note that given service operations according to Conditions 3 and 4, in addition to serving the two connected vertices, there is an implicit service to another vertex. Take the case of di ∈ C_kn(t) ∧ dj ∈ Q_km as an example (which appears in both Conditions 3 and 4): since km requests both di and dj, both of them can be retrieved via the corresponding service operations. Therefore, when an edge is created between vmi and vnj, there is an implicit service to another vertex vmj based on the corresponding service operations.
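The delays attached to the four conditions can be summarized in a small helper. This is a sketch that follows the formulas δc = s/rc, δcv = max(δc, δv), and δcf = max(δc, 2·δf); the function and parameter names are illustrative.

```python
def edge_weight(cond, s, r_v, r_f, r_c):
    """Service delay of the operation attached to an edge of condition
    `cond` (1-4), for item size s and rates r_v, r_f, r_c."""
    delta_c = s / r_c                       # V2C broadcast delay
    if cond in (1, 2):
        return delta_c                      # pure V2C broadcast
    if cond == 3:
        return max(delta_c, s / r_v)        # delta_cv: broadcast + V2V relay
    if cond == 4:
        return max(delta_c, 2 * s / r_f)    # delta_cf: broadcast + fog upload/download
    raise ValueError("unknown condition")
```

With the example numbers used later (δc = 1, δf = 0.6, δv = 0.5), this yields δcv = 1 and δcf = 1.2, matching the worked example in Sect. 5.4.1.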
In the following, we map the service delay to the weight of an edge and a clique in G. Since edges are constructed based on different conditions, which correspond to different operations and service delays, we define the weight of an edge as follows.
Definition 5.3 The weight of an edge is set according to the following policies:
• If it is constructed based on Condition 1 or Condition 2, the weight is set to δc.
• If it is constructed based on Condition 3, the weight is set to δcv.
• If it is constructed based on Condition 4, the weight is set to δcf.
With the above analysis, given a clique in G, denoted by ωG = {v_{p1 q1}, v_{p2 q2}, ..., v_{p|ωG| q|ωG|}}, an encoded packet p = d_{q1} ⊕ d_{q2} ⊕ ... ⊕ d_{q|ωG|} can be constructed, via which vehicles k_{p1}, k_{p2}, ..., k_{p|ωG|} can be served. If |ωG| > 1, namely, the clique contains more than one vertex, then the service latency of the corresponding packet is determined by the edge with the highest weight. Otherwise (|ωG| = 1), the clique ωG is an isolated vertex, and the service latency equals the delay of V2C communication (i.e., δc). Accordingly, we define the weight of a clique as follows.
Definition 5.4 Given a clique ωG in G, denote its weight as W(ωG). If |ωG| > 1, then W(ωG) is defined as the highest weight of the edges contained in ωG. Otherwise (|ωG| = 1), W(ωG) is defined as δc.
With the above definitions, we have the following key observations:
• Each clique in G corresponds to an encoded packet for FACS, which can serve the requests corresponding to the vertices of the clique.
• Each clique defines a set of V2C-/V2F-/V2V-based operations for data dissemination, and its service latency equals the weight of the clique.
• Serving all the requests is equivalent to completing the operations defined by a set of cliques which covers all the vertices of G, and the total service time is the sum of the weights of those cliques.
Recall that the objective of FACS is to determine a set of encoded packets, as well as the corresponding V2F- and V2V-based operations for data dissemination, so that the overall service latency is minimized. We have the following theorem.
Theorem 5.1 Finding an optimal solution to FACS is equivalent to finding a set of cliques which covers G and whose total weight is minimized.
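Definition 5.4 and the objective in Theorem 5.1 can be expressed compactly. In this sketch, edge weights are stored in a map keyed by frozensets of vertex pairs; that representation, like the function names, is an assumption for illustration.

```python
from itertools import combinations

def clique_weight(clique, edge_w, delta_c):
    """Definition 5.4: the largest edge weight inside the clique,
    or delta_c for an isolated vertex."""
    if len(clique) == 1:
        return delta_c
    return max(edge_w[frozenset(pair)] for pair in combinations(clique, 2))

def total_service_time(cliques, edge_w, delta_c):
    """Objective after Theorem 5.1: sum of clique weights over a cover of G."""
    return sum(clique_weight(c, edge_w, delta_c) for c in cliques)
```

For the clique cover derived later in Fig. 5.3c ({v11, v22} and {v13, v23, v33}, all edge weights 1), this evaluates to the total service time of 2 time slots reported in the example.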
5.3.3 NP-Hardness

In this part, we prove that FACS is NP-hard by reducing a well-known NP-hard problem, the minimum clique cover (MCC) problem [16], to a special case of FACS (denoted by FACS*) in polynomial time. Recall that given an undirected graph G, MCC is to find the minimum set of cliques such that each vertex in G is in at least one of the cliques.
Theorem 5.2 The FACS problem is NP-hard.
Proof First, we consider a special case of FACS, denoted by FACS*. Suppose the data transmission rates of V2C, V2V, and V2F communications have the following relationship: rc = rv and rf = 2rc; then we have δc = δv = 2·δf. Accordingly, for any condition, the assigned weight of edges (i.e., δc, δcv, δcf) will be the same. Furthermore, suppose all the vehicles stay in the coverage of the same fog/cloud nodes during the service, and suppose that there are no implicit services. In the following, we prove that the optimal solution to FACS* is derived if and only if the minimum clique cover of G is found.
Given the graph G derived from FACS*, suppose G′ is a minimum set of cliques which covers G; then |G′| encoded packets can be constructed. Furthermore, based on Definition 5.1, all the requests can be served by these encoded packets. Meanwhile, note that the cost for V2C-, V2V-, and V2F-based cooperation is the same (i.e., δc = δv = 2·δf), namely, all the edges in G have the same weight. Therefore, minimizing the weighted sum of the cliques which cover G is equivalent to minimizing the number of cliques. Since G′ is a minimum
clique cover of G, it gives the optimal solution to FACS*, and the overall service latency is |G′|·δc.
Conversely, suppose we have the optimal solution to FACS*, containing a set of encoded packets P* = {p*_1, p*_2, ..., p*_{|P*|}}, as well as the corresponding V2V and V2F operations for each packet p*_i (1 ≤ i ≤ |P*|), so that it can serve a data item dj ∈ D requested by a vehicle km ∈ K. Then, based on Definitions 5.1 and 5.2, |P*| cliques can be created for G. Suppose there exists a vertex vxy in G which is not covered by the |P*| cliques; this implies that a data item dy requested by vehicle kx is not served, which contradicts the assumption that P* is a solution to FACS* serving all the requests. Therefore, every vertex in G is in at least one of the |P*| cliques. Furthermore, since P* gives the minimum number of packets to complete the service for FACS*, the |P*| cliques form a minimum clique cover of G. Therefore, the optimal solution to FACS* can be obtained if and only if the minimum set of cliques which covers G is found. The above proves the NP-hardness of FACS*. Since FACS* is a special case of FACS, Theorem 5.2 is proved.
5.4 Proposed Algorithm

In this section, we propose the CSS algorithm for making coding decisions at the SDN controller and determining the corresponding data dissemination operations for the cloud nodes, fog nodes, and vehicles via V2C, V2F, and V2V communications, respectively. Before presenting the detailed procedures of CSS, we design a graph transformation scheme to facilitate the search for valid solutions.
5.4.1 Graph Transformation Scheme

Given the inputs of the FACS problem, a graph G can be constructed based on the rules defined in Sect. 5.3.2. However, such a graph may contain invalid solutions to FACS when system characteristics are considered, including the heterogeneous data transmission rates and radio coverage of the wireless communication interfaces, and the mobility of vehicles. In view of this, we design two policies for graph transformation, namely, the vertex merging policy and the edge validation policy, which form the basis of searching for valid solutions in CSS. To better reflect the design rationale of the two policies, they are illustrated along with an example.
As shown in Fig. 5.2, there are two fog nodes (i.e., n1 and n2) and three vehicles (i.e., k1, k2, and k3) in the cloud's coverage. At time t, k1 is in the coverage of n1 (i.e., k1 ∈ K_n1(t)), while k2 and k3 are in the coverage of n2 (i.e., {k2, k3} ⊆ K_n2(t)). Meanwhile, k2 and k3 are within the V2V communication range at time t (i.e., l_{k2,k3}(t) ≤ l_v2v). Suppose a shadowed block represents a cached data item, while a blank block represents a requested data item. For instance, k1 has
Fig. 5.2 An example of fog assisted cooperative service
cached d2, and it requests d1 and d3. For the sake of general applicability of the analysis, in this example, we do not assign absolute values to the size of data items and the transmission rates of V2C, V2F, and V2V communications. Instead, we assume that the delays of V2C, V2F, and V2V communications are 1, 0.6, and 0.5 time slots, respectively (i.e., δc = 1, δf = 0.6, and δv = 0.5). Then, based on the four conditions defined in Sect. 5.3.2, for the cloud-based services (i.e., Conditions 1 and 2), the delay is δc = 1. For the V2V cooperative service (i.e., Condition 3), the delay is δcv = max(1, 0.5) = 1. For the V2F cooperative service (i.e., Condition 4), the delay is δcf = max(1, 2 × 0.6) = 1.2. Then, according to Definitions 5.1, 5.2, and 5.3, a graph G can be constructed as shown in Fig. 5.3a.
Recall that by the definition of edge construction, when two vertices (say vmi and vnj) are connected, it implies that by broadcasting di ⊕ dj, km and kn can retrieve di and dj, respectively. That is, both requests corresponding to the two vertices can be served. However, note that as analyzed in Sect. 5.3.2, for the edges created based on Conditions 3 and 4, there might be implicit services to other requests due to the corresponding V2V and V2F service operations. Therefore, when such an edge is selected during clique construction, in addition to the two vertices connected by this edge, the other vertices corresponding to the implicitly served requests should also be included, which helps speed up covering the vertices of G. To address this implicit service issue when selecting edges for clique construction, we propose a vertex merging policy as follows. For an edge created based on Condition 3 or Condition 4, if it is selected during clique construction, there are two cases for executing the vertex merging:
• If di ∈ C_kn(t) ∧ dj ∈ Q_km, then the vertex vmj is merged with the vertex vmi.
• If di ∈ Q_kn ∧ dj ∈ C_km(t), then the vertex vni is merged with the vertex vnj.
We give an example to show how this policy works. Suppose the edge (v11, v33) is selected. Since this edge is created based on Condition 4, it corresponds to the
Fig. 5.3 An example of vertex merging and edge validation
operations of broadcasting p = d1 ⊕ d3 by the cloud and transmitting d1 from k3 to k1 via fog cooperation. Meanwhile, both k1 and k3 are able to decode d3 by computing p ⊕ d1. Apparently, three requests can be served, which correspond to the three vertices v11, v33, and v13, respectively. Therefore, as shown in Fig. 5.3b, when the edge (v11, v33) is selected, v13 will be merged with v11, and the new vertex is denoted by v′11. Similarly, when the edge (v22, v33) is selected, the vertices v22 and v23 will be merged into v′22. With such a transformation, Fig. 5.3b forms a clique which covers the original graph G. It corresponds to the following operations: broadcasting d1 ⊕ d2 ⊕ d3 from the cloud, transmitting d1 from k3 to k1 via V2F communication, and transmitting d2 from k3 to k2 via V2V communication. Based on Definition 5.4, the weight of this clique is 1.2, and hence it is expected to serve all the requests in 1.2 time slots.
However, in practice, the above solution may not be feasible when considering vehicle mobility, which imposes time constraints on data dissemination. The details are elaborated below. Suppose k1 is driving toward the west and is about to leave the radio coverage of n1 in 1 time slot. Then, the fog cooperative service cannot be completed, as it requires 1.2 time slots. Accordingly, the edge (v11, v33) cannot be selected in this case. To consider vehicle mobility and service time constraints in scheduling, an edge validation policy is designed based on the following two rules:
• For an edge (vmi, vnj) created based on Condition 3, given the delay of V2V communication (i.e., δv), if it is estimated that the two vehicles km and kn would be out of V2V communication range within δv, then this edge is invalid for deriving the clique.
• For an edge (vmi, vnj) created based on Condition 4, given the delay of V2F communication (i.e., δf), if it is estimated that the vehicle uploading data (say km uploads di) would leave the coverage of its fog node within δf, or that the vehicle downloading data (say kn downloads di) would leave the coverage of its fog node within 2·δf, then this edge is invalid for deriving the clique.
Figure 5.3c shows the graph after removing invalid edges, and we can check that the optimal solution contains two cliques covering G, namely {v11, v22} and {v13, v23, v33}. Since each clique has a weight of 1, the sum of weights is 2. The corresponding operations are to broadcast d1 ⊕ d2 and d3 via V2C communication, and all the requests will be served in 2 time slots.
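The decoding step in the example relies only on the self-inverse property of XOR (a ⊕ b ⊕ b = a). A minimal byte-level demonstration, with made-up payloads standing in for d1, d2, and d3, looks as follows:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-size data items."""
    return bytes(x ^ y for x, y in zip(a, b))

# Toy payloads standing in for d1, d2, d3 (values are made up).
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"

# The cloud broadcasts the encoded packet p = d1 XOR d2 XOR d3.
p = xor_bytes(xor_bytes(d1, d2), d3)

# k1 cached d2 and obtained d1 from k3 via V2F, so it decodes d3:
assert xor_bytes(xor_bytes(p, d1), d2) == d3
# k2 cached d1 and obtained d2 from k3 via V2V, so it decodes d3 the same way:
assert xor_bytes(xor_bytes(p, d2), d1) == d3
```

This is why a single broadcast of the encoded packet, plus the V2F and V2V transfers, suffices to serve all five requests in the example.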
5.4.2 CSS Algorithm

With the above graph transformation scheme, the CSS algorithm is designed as follows. First, given the weight of edges as defined in Definition 5.3, we define the weight of a vertex as follows.
Definition 5.5 Given a vertex v, denote its weight as w(v) and its degree as d(v). If d(v) > 0, then w(v) is defined as the highest weight of its connecting edges. Otherwise (d(v) = 0), w(v) is defined as δc.
Based on the analysis for Definition 5.4, to reduce the service latency of a packet, it is preferable to select a vertex with lower weight, so as to decrease the weight of the constructed clique. Moreover, recall that the objective is to reduce the total service time, which corresponds to the sum of the weights of the cliques which cover G. Therefore, it is also desirable to cover the vertices of G with a smaller number of cliques. Intuitively, it is preferable to select a vertex with higher degree, which gives a better chance of forming a clique with larger size, and hence the number of required cliques can be reduced. With the above analysis, we define the priority of a vertex as follows.
Definition 5.6 Given the degree d(v) and the weight w(v) of a vertex v, the priority of v is defined as d(v)/w(v).
Accordingly, at the beginning of clique construction, it is preferable to select a vertex with the highest priority. Then, to facilitate the selection of other vertices to complete the clique construction, we define the priority of a packet as follows.
Definition 5.7 Given a clique ωG, denote N(ωG) as the number of vertices in ωG. Then, the priority of the corresponding packet derived from ωG is computed by N(ωG)/W(ωG), where W(ωG) is the weight of the clique ωG.
As noted, a higher packet priority means that the corresponding clique has larger size and lower weight, which implies that such a packet can serve more requests with shorter delay.
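Definitions 5.5–5.7 translate into a few one-liners. In this sketch, edge weights are kept in a map keyed by frozensets of vertex pairs; that representation and the function names are illustrative, not the chapter's data structures.

```python
def vertex_weight(v, edge_w, delta_c):
    """Definition 5.5: highest weight among v's incident edges,
    or delta_c for an isolated vertex."""
    incident = [w for pair, w in edge_w.items() if v in pair]
    return max(incident) if incident else delta_c

def vertex_priority(v, edge_w, delta_c):
    """Definition 5.6: d(v) / w(v)."""
    degree = sum(1 for pair in edge_w if v in pair)
    return degree / vertex_weight(v, edge_w, delta_c)

def packet_priority(clique_size, clique_w):
    """Definition 5.7: N(omega_G) / W(omega_G)."""
    return clique_size / clique_w
```

Note that an isolated vertex gets priority 0, so it is only picked once no connected vertices remain, which matches the intuition of preferring large, low-weight cliques.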
Therefore, when adding a vertex to form a new clique, it is always preferable to maximize the priority of the corresponding packet. With the above definitions and analysis, the detailed procedures of the greedy algorithm are described below.
• Step 1: Initialize the graph G. It constructs the graph G = (V, E) according to the rules defined in Sect. 5.3.2. Specifically, first, the vertex set V is created based on the database D, the set of vehicles K(t), and the set of requested data items Q_km for each km ∈ K(t). Then, the edge set E is created based on the set of fog nodes N, the set of vehicles in the coverage of fog nodes K_N(t), and the set of cached packets C_km(t) for each km ∈ K(t). Note that while creating the edges, the edge validation policy defined in Sect. 5.4.1 is applied to check and remove invalid edges. Finally, the weight of each edge is determined by the packet size (i.e., s) and the data transmission rates of V2C, V2V, and V2F communications (i.e., rc, rv, and rf).
• Step 2: Initialize the clique ω*G. First, a clique ω*G is created as an empty set (i.e., ω*G ← ∅). Second, it traverses each vertex v in G and computes the priority of v based on Definition 5.6. Finally, it selects the vertex v* with the highest priority and initializes the clique as ω*G ← {v*}.
• Step 3: Update the clique ω*G. It traverses the remaining vertices in G (i.e., V(G) − V(ω*G)). First, if there is a vertex v′ satisfying the following two conditions, then the clique is updated by adding v′, namely, ω*G ← ω*G + {v′}. Condition 1: v′ connects to all the vertices in ω*G; namely, adding v′ into ω*G forms a new clique. Condition 2: For any other vertex v (i.e., ∀v ∈ V(G) − V(ω*G) − v′) which can also form a new clique by being added into ω*G, its corresponding packet priority (by Definition 5.7) is no greater than the packet priority obtained by adding v′ into ω*G. Next, if there is another vertex v″ satisfying the vertex merging policy, then v″ is merged with v′ into ω*G.
• Step 4: Return the clique ω*G. Repeat Step 3 until no vertex v′ can be found to form a larger clique. Then, ω*G is returned.
After obtaining the clique ω*G, the corresponding packet is determined. Meanwhile, based on the operations defined for creating each category of edges, the instructions for the corresponding V2C-, V2F-, and V2V-based services can be piggybacked onto the control message to enable cooperation among the cloud nodes, fog nodes, and vehicles.
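Steps 2–4 can be sketched as a greedy loop. This is a simplified illustration that omits the vertex merging of Step 3 and assumes edge weights keyed by frozensets of vertex pairs (an illustrative choice); it is not the full CSS implementation.

```python
from itertools import combinations

def grow_clique(vertices, edge_w, delta_c):
    """Sketch of CSS Steps 2-4: seed with the highest-priority vertex,
    then greedily add the vertex that maximizes packet priority N/W."""
    def w(clique):                       # Definition 5.4
        if len(clique) == 1:
            return delta_c
        return max(edge_w[frozenset(p)] for p in combinations(clique, 2))

    def v_weight(v):                     # Definition 5.5
        inc = [wt for pair, wt in edge_w.items() if v in pair]
        return max(inc) if inc else delta_c

    def degree(v):
        return sum(1 for pair in edge_w if v in pair)

    # Step 2: seed with the vertex of highest priority d(v)/w(v)
    seed = max(vertices, key=lambda v: degree(v) / v_weight(v))
    clique = {seed}
    remaining = set(vertices) - clique
    # Step 3: repeatedly add the best vertex that keeps the set a clique
    while True:
        candidates = [v for v in remaining
                      if all(frozenset((v, u)) in edge_w for u in clique)]
        if not candidates:
            return clique                # Step 4: clique cannot grow further
        best = max(candidates,
                   key=lambda v: (len(clique) + 1) / w(clique | {v}))
        clique.add(best)
        remaining.remove(best)
```

On the Fig. 5.3c graph, the seed is one of the degree-2 vertices and the loop grows the clique {v13, v23, v33}, which is one of the two cliques in the optimal cover of that example.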
5.4.3 Complexity Analysis

The complexity of CSS is analyzed based on the four steps presented above.
In Step 1, first, it requires traversing each vehicle and its requested data set to create vertices. Since there are |K(t)| vehicles and each vehicle requests at most |D| data items, the overhead is bounded by |K(t)|·|D|. Second, to create the edges, it needs to check at most |V|·(|V|−1)/2 pairs of vertices. For each pair of vertices, it requires traversing both the cache and request sets of the two corresponding vehicles, and the sum of the request and cache sizes is bounded by |D| for each vehicle. So, the overhead for creating edges is (|V|·(|V|−1)/2)·2|D|. Third, edge validation and weight assignment can be completed along with the creation of edges without extra overhead. Therefore, the overhead for initializing the graph G is |K(t)|·|D| + (|V|·(|V|−1)/2)·2|D|.
In Step 2, first, it requires computing the priority of each vertex in G. Since both the weight and the degree of each vertex can be traced and recorded during the construction of G, it takes constant time for priority computation. Then, it requires finding the vertex v* with the highest priority, which requires traversing all the |V| vertices in G. Therefore, the overhead for initializing the clique ω*G is |V|.
In Step 3, first, it requires constant time to compute the priority of the corresponding packet given a clique ω*G, because both the number of vertices in ω*G and the weight of ω*G can be traced and updated whenever a new vertex and an
edge are included into ω*G. Second, to find the vertex v′ which can form the packet with the highest priority, it requires checking all the remaining vertices in G, which is bounded by |V|. So, the overhead for finding v′ is |V|. Then, to check whether there is another vertex v″ which should be merged with v′, it requires traversing the request sets of the two vehicles corresponding to the newly added edge, and the overhead is bounded by 2|D|. Therefore, the overhead for updating the clique is |V| + 2|D|.
In Step 4, since at most |V| vertices could be added into ω*G, Step 3 will be repeated at most |V| times.
To sum up, the overall overhead is represented by |K(t)|·|D| + (|V|·(|V|−1)/2)·2|D| + |V| + |V|·(|V| + 2|D|). Since the number of vertices |V| is bounded by |K(t)|·|D|, the complexity of CSS is O(|K(t)|²·|D|³).
5.5 Performance Evaluation

5.5.1 Simulation Setup

The simulation model is built based on the system architecture described in Sect. 5.2. Real-world maps and taxi trajectories are extracted as traffic inputs. In particular, we have examined three traffic scenarios: (1) a 3 × 3 km area of Haidian District, Beijing, China, from 8:30 am to 8:35 am on November 13, 2015; (2) the same area from 23:50 to 23:55 on November 13, 2015; and (3) a 3 × 3 km area of Qingyang District, Chengdu, China, from 9:00 am to 9:05 am on August 20, 2014. Note that the data sets in Scenarios 1 and 2 are not public, but that in Scenario 3 is available at [17]. Detailed statistics, including the total number of observed vehicle traces, the average dwell time (ADT) of vehicles, the variance of dwell time (VDT), the average number of vehicles (ANV) in each second, and the variance of the number of vehicles (VNV) in each second, are summarized in Table 5.2. To better exhibit the system workload under different scenarios, Fig. 5.4 presents the number of vehicles in each scenario over time. As noted, the selected testing scenarios not only include both high (i.e., Scenarios 1 and 3) and low (i.e., Scenario 2) traffic workloads but also reflect dynamic (i.e., Scenarios 1 and 2) and steady (i.e., Scenario 3) traffic features. In addition, Fig. 5.5 shows the heat maps of vehicle distribution at the initial time in each scenario. Comparing Fig. 5.5a, b, it is clear that the vehicle density in the rush hour (i.e., around 8:30 am) is much higher than that during the night (i.e., around
Table 5.2 Traffic characteristics of each scenario

Traffic scenario | Map     | Number of traces | Time        | Date          | ADT      | VDT     | ANV   | VNV
Scenario 1       | Beijing | 403              | 08:30–08:35 | Nov. 13, 2015 | 214.9(s) | 17.6(s) | 290   | 34.7
Scenario 2       | Beijing | 277              | 23:50–23:55 | Nov. 13, 2015 | 213.6(s) | 10(s)   | 198.2 | 13.3
Scenario 3       | Chengdu | 393              | 09:00–09:05 | Aug. 20, 2014 | 217(s)   | 2.9(s)  | 285.6 | 3.1
Fig. 5.4 Number of vehicles in each time slot under different scenarios
Fig. 5.5 Heat map of initial distribution of vehicles under different scenarios. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3
23:50) in the same area. Also, we may observe that the vehicle distribution in Fig. 5.5c, which is extracted from another city, is totally different. There are one cloud node and nine fog nodes simulated in the system, where the cloud node has full coverage of the service area and the fog nodes are randomly deployed in each scenario. In accordance with the features of typical vehicular communication standards such as DSRC and V2X [1], by default, the transmission rates of V2C, V2F, and V2V communications are set to 3 Mb/s, 6 Mb/s, and 6 Mb/s, respectively. The communication ranges of V2F and V2V are set to 450 m and 250 m, respectively. The default size of the database is set to 80, and we consider data items of a uniform size of 18 Mb. When vehicles enter the service area, they may have cached part of the data items. The default cache ratio, defined as the percentage of cached data items over the size of the database, is set to 0.3. To evaluate the system performance in a heavy-workload environment, we consider that each vehicle will request all the non-cached data items from the database upon entering the service area. Unless stated otherwise, the simulation is conducted under the default setting. For performance comparison, we have implemented two competitive algorithms, ISXD [18] and NCB [19]. Furthermore, we have designed two metrics to quantitatively evaluate algorithm performance.
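Under these default settings, the per-operation delays from Sect. 5.3.2 can be checked by hand. The following sketch assumes the stated 18 Mb item size and 3/6/6 Mb/s rates for V2C/V2F/V2V; variable names are illustrative.

```python
# Default parameters: 18 Mb items, V2C/V2F/V2V rates of 3/6/6 Mb/s.
s, r_c, r_f, r_v = 18.0, 3.0, 6.0, 6.0

delta_c = s / r_c                      # 6.0 s per V2C broadcast
delta_f = s / r_f                      # 3.0 s per V2F hop
delta_v = s / r_v                      # 3.0 s per V2V hop

delta_cv = max(delta_c, delta_v)       # Condition 3 operation
delta_cf = max(delta_c, 2 * delta_f)   # Condition 4 operation
```

With these defaults, δcv = δcf = δc = 6 s, i.e., a cooperative V2V or V2F operation costs no more than a plain V2C broadcast while serving extra requests.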
• Average service delay (ASD): Given m served requests when vehicles pass through the service area, with the service time of each request denoted by ti (1 ≤ i ≤ m), the ASD is computed by ASD = (Σ_{i=1}^{m} ti)/m.
• Service ratio (SR): Suppose a total of M requests are submitted by vehicles and m requests are satisfied before the vehicles leave the service area; then the service ratio is computed by SR = m/M.
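The two metrics are straightforward to compute; the function names below are illustrative.

```python
def average_service_delay(service_times):
    """ASD: mean service time over the m served requests."""
    return sum(service_times) / len(service_times)

def service_ratio(num_served, num_submitted):
    """SR: fraction m/M of submitted requests satisfied before
    their vehicles leave the service area."""
    return num_served / num_submitted
```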
5.5.2 Simulation Results

Effect of Traffic Scenarios Figures 5.6, 5.7, and 5.8 evaluate the algorithm performance under different traffic scenarios. Specifically, Fig. 5.6 shows the average service delay of the different algorithms. As demonstrated, CSS achieves the lowest ASD under all the scenarios. Figure 5.7 shows the service ratio of the different algorithms. As noted, CSS achieves the highest service ratio, which implies that CSS not only achieves shorter service delay but also serves more requests than the other algorithms. Furthermore, Fig. 5.8 breaks down the service of CSS into three parts, which demonstrates the effectiveness of CSS in enabling cooperative services via V2C, V2F, and V2V communications.
Fig. 5.6 ASD under different traffic scenarios
Fig. 5.7 SR under different traffic scenarios
Fig. 5.8 Service proportion of V2C/V2V/V2F under different traffic scenarios
Fig. 5.9 SR of different algorithms in each time slot under different scenarios. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3
Figure 5.9 shows the cumulative service ratio of the different algorithms at the end of each second under different traffic scenarios. Note that the denominator is the sum of the served requests, the missed requests (i.e., those requests that could not be served before the vehicles that submitted them left the area), and the pending requests at time t. First, it can be observed that the service ratio of all algorithms is small initially. This is mainly because, as noted from Fig. 5.4, a large portion of vehicles have entered the service area and submitted requests in the initial period, which results in a large value of the denominator. Furthermore, we note that the service ratio of CSS increases much faster than that of the other algorithms and becomes stable around 150 s. This is mainly because most requests have been submitted before 150 s and there are few newly arriving vehicles later. Therefore, it demonstrates that CSS achieves the fastest responsiveness in serving dynamically arriving requests, which in turn reduces the ASD of the system.
Effect of Cache Ratio Figures 5.10, 5.11, and 5.12 evaluate the algorithm performance under different cache ratios. As shown in Fig. 5.10, with an increasing cache ratio, the ASD of all the algorithms decreases. In particular, CSS outperforms the other algorithms significantly with an increasing number of cached data items. This demonstrates the effectiveness of CSS in synergizing network coding with vehicular caching. Figure 5.11 shows the SR of the different algorithms under different cache ratios, in which the SR of all the algorithms improves with an increasing cache ratio, while CSS achieves the highest
Fig. 5.10 ASD under different cache ratios
Fig. 5.11 SR under different cache ratios
Fig. 5.12 Service proportion of V2C/V2V/V2F under different cache ratios
SR across the whole range. Figure 5.12 shows the proportion of requests served via V2C, V2F, and V2V communications by CSS under different cache ratios. We may observe that the service proportion via V2F and V2V communications increases notably when the cache ratio reaches 0.2, mainly because more cached data items give higher cooperative service opportunities via V2F and V2V communications.
Effect of Data Sizes Figures 5.13, 5.14, and 5.15 evaluate the algorithm performance under different sizes of data items. Since a larger data size imposes a higher system workload, as shown in Figs. 5.13 and 5.14, it is expected that the ASD of all the algorithms increases, while the SR of all the algorithms decreases. Meanwhile, CSS always outperforms the other algorithms. In particular, with an increasing size of
Fig. 5.13 ASD under different data sizes
Fig. 5.14 SR under different data sizes
Fig. 5.15 Service proportion of V2C/V2V/V2F under different data sizes
data, CSS maintains a decent SR, which is much higher than that of the other algorithms. Also, it is worth noting that even though CSS serves many more requests than the other algorithms, it still manages to achieve the lowest ASD. Finally, Fig. 5.15 shows the proportion of requests served via V2C, V2F, and V2V communications. As demonstrated, CSS can always exploit fog-based communication resources effectively in serving requests, resulting in its overall best performance.
5.6 Conclusion

This chapter presented a fog computing empowered data dissemination architecture for IoV, in which heterogeneous cloud nodes, fog nodes, and vehicles cooperate based on logically centralized scheduling at the SDN controller. The FACS problem was formulated, which aims to exploit the synergistic effect of network coding and vehicular caching and to enhance the bandwidth efficiency of heterogeneous wireless communication interfaces via cooperative V2C, V2F, and V2V communications. Furthermore, we derived a graph-based problem which is equivalent to FACS. On this basis, we proved the NP-hardness of FACS by constructing a polynomial-time reduction from the MCC problem to a special case of FACS. Furthermore, we proposed an efficient online scheduling algorithm, CSS, at the SDN controller. Specifically, a graph transformation scheme was designed to facilitate valid solution searching by considering the heterogeneous data transmission rates and radio coverage of different wireless communication interfaces and the mobility of vehicles. Then, the detailed procedures of CSS were presented, which makes coding decisions and issues instructions for cooperative data dissemination via V2C, V2F, and V2V communications. The complexity analysis showed the feasibility of CSS. Finally, we built the simulation model and conducted a comprehensive performance evaluation based on real vehicle trajectories extracted from different cities at different times. The simulation results demonstrated the superiority of the proposed solution.
References 1. S. Chen, J. Hu, Y. Shi, Y. Peng, J. Fang, R. Zhao, L. Zhao, Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G. IEEE Commun. Stand. Mag. 1(2), 70–76 (2017) 2. K. Liu, X. Xu, M. Chen, B. Liu, L. Wu, V.C. Lee, A hierarchical architecture for the future internet of vehicles. IEEE Commun. Mag. 57(7), 41–47 (2019) 3. K. Zheng, Q. Zheng, P. Chatzimisios, W. Xiang, Y. Zhou, Heterogeneous vehicular networking: a survey on architecture, challenges, and solutions. IEEE Commun. Surv. Tutorials 17(4), 2377–2396 (2015) 4. N. Cheng, F. Lyu, J. Chen, W. Xu, H. Zhou, S. Zhang, X.S. Shen, Big data driven vehicular networks. IEEE Netw. 32(6), 160–167 (2018) 5. K. Abboud, H.A. Omar, W. Zhuang, Interworking of DSRC and cellular network technologies for V2X communications: a survey. IEEE Trans. Veh. Technol. 65(12), 9457–9470 (2016) 6. Z. He, J. Cao, X. Liu, SDVN: enabling rapid network innovation for heterogeneous vehicular communication. IEEE Netw. 30(4), 10–15 (2016) 7. C. Huang, R. Lu, K.-K.R. Choo, Vehicular fog computing: architecture, use case, and security and forensic challenges. IEEE Commun. Mag. 55(11), 105–111 (2017) 8. D. Jiang, V. Taliwal, A. Meier, W. Holfelder, R. Herrtwich, Design of 5.9 GHz DSRC-based vehicular safety communication. IEEE Wirel. Commun. 13(5), 36–43 (2006) 9. J. Choi, V. Va, N. Gonzalez-Prelcic, R. Daniels, C.R. Bhat, R.W. Heath, Millimeter-wave vehicular communication to support massive automotive sensing. IEEE Commun. Mag. 54(12), 160–167 (2016)
10. S.A.A. Shah, E. Ahmed, M. Imran, S. Zeadally, 5G for vehicular communications. IEEE Commun. Mag. 56(1), 111–117 (2018)
11. J. He, L. Cai, P. Cheng, J. Pan, Delay minimization for data dissemination in large-scale VANETs with buses and taxis. IEEE Trans. Mob. Comput. 15(8), 1939–1950 (2015)
12. K. Liu, J.K.-Y. Ng, J. Wang, V.C. Lee, W. Wu, S.H. Son, Network-coding-assisted data dissemination via cooperative vehicle-to-vehicle/-infrastructure communications. IEEE Trans. Intell. Transp. Syst. 17(6), 1509–1520 (2016)
13. P. Dai, K. Liu, X. Wu, Z. Yu, H. Xing, V.C.S. Lee, Cooperative temporal data dissemination in SDN-based heterogeneous vehicular networks. IEEE Internet Things J. 6(1), 72–83 (2018)
14. M. Li, Z. Yang, W. Lou, CodeOn: cooperative popular content distribution for vehicular networks using symbol level network coding. IEEE J. Sel. Areas Commun. 29(1), 223–235 (2011)
15. K. Liu, K. Xiao, P. Dai, V.C. Lee, S. Guo, J. Cao, Fog computing empowered data dissemination in software defined heterogeneous VANETs. IEEE Trans. Mob. Comput. 20(11), 3181–3193 (2021)
16. R.M. Karp, Reducibility among combinatorial problems, in Complexity of Computer Computations, ed. by R.E. Miller, J.W. Thatcher, J.D. Bohlinger. The IBM Research Symposia Series (Springer, Boston, 1972). https://doi.org/10.1007/978-1-4684-2001-2_9
17. DiDi, GAIA Open Data Set. https://outreach.didichuxing.com/research/opendata/en/. Online; Accessed 14 March 2020
18. G.M.N. Ali, M. Noor-A-Rahim, M.A. Rahman, S.K. Samantha, P.H.J. Chong, Y.L. Guan, Efficient real-time coding-assisted heterogeneous data access in vehicular networks. IEEE Internet of Things J. 5(5), 3499–3512 (2018)
19. H. Ji, V.C. Lee, C.-Y. Chow, K. Liu, G. Wu, Coding-based cooperative caching in on-demand data broadcast environments. Inf. Sci. 385, 138–156 (2017)
Chapter 6
Temporal Data Uploading and Dissemination in Real-Time Vehicular Networks
Abstract Temporal information services are critical for implementing emerging ITSs. Nevertheless, it is challenging to realize timely temporal data updating and dissemination due to intermittent wireless connections and limited communication bandwidth in dynamic vehicular networks. To enhance system scalability, it is imperative to exploit the synergistic effect of I2V and V2V communications for providing efficient temporal data dissemination in such an environment. With the above motivations, we propose a novel system architecture that enables efficient data scheduling in hybrid I2V/V2V communications by maintaining global knowledge of the system's network resources. On this basis, we formulate a Temporal Data Upload and Dissemination (TDUD) problem, aiming at simultaneously optimizing two conflicting objectives, namely, data quality and delivery ratio. Furthermore, we propose an evolutionary multi-objective algorithm called MO-TDUD, which consists of a decomposition scheme for handling multiple objectives, a scalable chromosome representation for TDUD solution encoding, and an evolutionary operator designed for TDUD solution reproduction. The proposed MO-TDUD can adapt to different requirements on data quality and delivery ratio by selecting the best solution from the derived Pareto solutions. Last but not least, we build the simulation model and implement MO-TDUD for performance evaluation. The comprehensive simulation results demonstrate the superiority of the proposed solution.

Keywords Temporal data dissemination · Evolutionary multi-objective optimization · I2V/V2V communication · Resource allocation
6.1 Introduction

Vehicular networks offer promising prospects for advancing ITSs in terms of safety, efficiency, and sustainability through V2V and I2V communications [1]. These communication modes are facilitated by temporal data services, which serve as the underlying technology for many emerging ITS applications such as road reservation [2], route planning [3], and infotainment services [4]. The efficacy of these data
services is particularly important as they must be completed within a specified time window to meet user needs. However, owing to the highly dynamic nature of traffic systems, many ITS applications require access to temporal information whose quality may deteriorate over time, as in the cases of road traffic information and location-based service information. To ensure service quality, the timely update of temporal information is critical. On the other hand, due to the limited I2V bandwidth and coverage, as well as the high mobility of vehicles, data dissemination often suffers from unpredictable delay in hybrid I2V/V2V communications. As a result, it is imperative to investigate an efficient data service mechanism for temporal data dissemination in real-time vehicular networks.

In previous research, great efforts have been devoted to network communication and data dissemination issues in vehicular networks. For example, some studies [5] focused on designing interference-free approaches for reducing packet collisions in I2V and V2V communications. Other studies [6] mainly focused on designing routing strategies to reduce the delivery delay and improve transmission reliability. A few studies [7, 8] considered temporal data dissemination, where the quality of temporal data degrades over time. They analyzed data quality in terms of temporal features and proposed efficient scheduling methods for temporal data services with real-time requirements. However, these studies only considered services within the RSUs' coverage. In addition, existing solutions cannot adapt to the dynamic requirements of vehicular application scenarios by exploiting both I2V and V2V communication resources. Distinguished from previous works, this chapter focuses on investigating temporal data dissemination in large-scale and real-time vehicular networks [9]. Specifically, the concept of large scale refers to both the physical service range and the logical service type.

First, in terms of the service range, this chapter considers the scenario where multiple RSUs distributed over a large geographic area cooperate to provide information services. Second, with respect to the service types, we consider hybrid I2V and V2V communications for providing data services cooperatively. Meanwhile, real time indicates that the data dissemination has to be completed within a strict deadline. In addition, we consider the temporality of data items, as well as the heavy service workload on both data uploading and downloading. In particular, passing vehicles can sense temporal information along their driving trajectories and upload up-to-date data items to RSUs via I2V communication. Furthermore, multiple RSUs are connected via a backbone network, and any data update in the local database of one RSU will be synchronized in the other peers. With the above description, there are two main problems to be investigated. First, due to the temporality of information, data items maintained in RSUs have to be kept up to date by timely uploading from passing vehicles. Second, a data dissemination strategy has to be designed to enhance the data delivery ratio. To sum up, due to the high mobility of vehicles and the limited wireless bandwidth, it is non-trivial to solve the above two problems simultaneously and efficiently. Keeping the above in mind, this chapter makes the first attempt to support efficient data services in large-scale vehicular systems by considering both the quality
of temporal information and the fulfillment of service requests via hybrid I2V/V2V communications.

• We consider a novel temporal data dissemination scenario in real-time vehicular networks, where vehicles with up-to-date information are scheduled to upload data items to the RSUs when they are passing by. Meanwhile, vehicles may also submit service requests to RSUs for particular applications and, based on a certain scheduling policy, these vehicles can be served via hybrid I2V and V2V communications.
• We present a system architecture for supporting the services described above. In this architecture, real-time vehicle statuses, including GPS positions, velocities, driving directions, cache/request information, etc., are collected by RSUs via the beacon messages of vehicles. Then, each RSU reports its local information to the central scheduler via the high-speed backbone network. Based on the collected information, the central scheduler is capable of estimating network connectivity and instructing the behaviors of static RSUs as well as mobile vehicles so as to best exploit the I2V and V2V communication resources cooperatively.
• We quantitatively define the service quality of the requests and the delivery ratio of the system, respectively. Then, we show that the objectives of maximizing the service quality and the delivery ratio conflict with each other. On this basis, we formulate the problem called Temporal Data Uploading and Dissemination (TDUD), which aims to optimize the two objectives simultaneously.
• We propose a problem-specific multi-objective evolutionary algorithm called MO-TDUD, which includes chromosome encoding, population initialization, evolutionary operators, and local refinement subroutines. The proposed MO-TDUD can generate a set of Pareto solutions without extra overhead, which makes it scalable and adaptable to different application requirements.
6.2 System Architecture

In this section, we present a novel architecture to efficiently support temporal data dissemination in large-scale vehicular networks. As shown in Fig. 6.1, the RSUs are connected via the high-speed backbone network. Accordingly, we assume that any data update in the local database of one RSU will be synchronized in the other peers. The central scheduler in the backbone network is responsible for making scheduling decisions, including the data broadcast strategies of all the RSUs and vehicles.

Fig. 6.1 A service architecture for temporal data dissemination in large-scale vehicular networks

The procedure of scheduling policy optimization is described as follows. First, the beacon messages periodically broadcast by vehicles are overheard by RSUs and synchronized to the central scheduler via the backbone network. A beacon message includes the GPS position, velocity, and cache/request information of a vehicle in the system. Second, based on the mobility information of vehicles, the central scheduler estimates the network connectivity of vehicle-to-RSU and inter-vehicle communication in the near future. Furthermore, based on the collected cache and request information of vehicles, the scheduler makes a sequence of data broadcast decisions via both I2V and V2V communications. Third, the message containing the scheduling policy is delivered to RSUs and vehicles via wired connections and I2V communication, respectively. Once receiving the scheduling decision, the vehicles and RSUs broadcast the data items accordingly via either I2V or V2V communication.

Temporal data dissemination is one of the critical applications in vehicular networks. In the concerned scenario, vehicles may request different types of temporal information, such as road traffic information, shopping mall promotions, location-based information, etc. Meanwhile, the requested data items have to be delivered within a certain tolerated delay. For example, vehicles driving through a certain urban area may be interested in local information, such as available parking lots or current promotions in nearby shopping malls. In order to complete such a service, a maximum tolerated delay needs to be associated with it based on factors such as driver expectations or driving speed. On the other hand, since the quality of temporal information degrades over time, timely data updates are required to keep the information useful. A time stamp is recorded to reflect the freshness of temporal data. Vehicles are capable of sensing and caching up-to-date information along their trajectories, and they send beacon messages (including cache and request information) periodically to the central scheduler via the backbone. Based on the global knowledge, the scheduler instructs the RSUs to assign upload tasks to the vehicles in their respective coverage, and then the vehicles perform data updates based on the scheduling decisions. Besides, the scheduler will periodically determine the
Fig. 6.2 An example
broadcast strategy at RSUs and vehicles, so that the service can adapt to dynamic traffic conditions.

In the following, we give an example to better illustrate the presented system architecture and to reveal the challenges of designing an efficient scheduling approach. First, we introduce the service scenario. As shown in Fig. 6.2, the set of RSUs and the set of vehicles are denoted by R = {r1, r2, r3} and V = {v1, v2, v3, v4}, respectively. Furthermore, the set of data items in the database is denoted by D = {d1, d2, ..., d5}. For simplicity, the data quality is characterized into two levels, high (denoted by H) and low (denoted by L). Assume that both the uploading and the downloading of one data item take one time slot. The global knowledge of the central scheduler includes the following information: (a) the local database information of each RSU rj; for example, the local information of RSU r1 is denoted by {d1:L, d2:H, d3:H, d4:H, d5:L}, which indicates that d1 and d5 have low quality while d2, d3, and d4 have high quality; and (b) the cached/requested information and enter/leave time of each vehicle vk; for example, the Request column and the Cache column of v3 are d4 and d1:H, d2:H, respectively, which indicates that v3 requests d4 and caches d1 and d2. Furthermore, the Enter Time column and the Leave Time column of v3 are t0 and t3, which indicates that v3 enters r2 at t0 and leaves r2 at t3. The Deadline column of v3 is t3, which represents that v3 has to retrieve all requested data before t3. Furthermore, assume that v3 is predicted to meet v4 at a certain time based on the knowledge of vehicle driving directions, velocities, trajectories, etc. The scheduler is supposed to make the following scheduling decisions, as shown in Fig. 6.2. Solution 1 is an optimal solution. At time t0, r1 broadcasts d2 to v1 and r2
assigns an update task to v3 to upload d1 to enhance the quality of d1. Simultaneously, d1 with high quality is synchronized to the other local databases via the backbone network. Then, v1 in the coverage of r1 receives d1 with high quality at t2. It is noted that v3 receives d3 from r2 at t2 for the purpose of data forwarding. v4 finally receives d3 with high quality at time t3, before the deadline t4. The column Result 1 shows the performance of Solution 1. As observed, all vehicles retrieve the requested data items with high quality. On the contrary, Solution 2 is a broadcast-first strategy. Each RSU gives higher priority to service requests than to update requests. As shown in the column Result 2, v1 receives d1 with low data quality and v2 misses d1, which leads to poor service performance. We can observe from the above example that it is not trivial to design an optimal scheduling policy that achieves a high delivery ratio while maintaining satisfactory service quality, especially when the application requirements change dynamically.
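The three-step workflow of the architecture (beacon collection, global-knowledge aggregation, instruction dissemination) can be sketched as a simple loop. The `Beacon`/`CentralScheduler` classes, their fields, and the naive "broadcast anything another vehicle requests" rule are illustrative assumptions for exposition, not the scheduling policy developed in this chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Beacon:
    """Periodic vehicle status overheard by an RSU (illustrative fields)."""
    vehicle_id: str
    position: tuple    # GPS position
    velocity: float
    cached: set        # data items currently cached by the vehicle
    requested: set     # outstanding service requests

@dataclass
class CentralScheduler:
    """Logically centralized scheduler reached via the backbone network."""
    beacons: dict = field(default_factory=dict)  # vehicle_id -> latest Beacon

    def collect(self, beacon: Beacon) -> None:
        # Step 1: RSUs overhear beacons and synchronize them to the scheduler.
        self.beacons[beacon.vehicle_id] = beacon

    def decide(self) -> list:
        # Steps 2-3: from global cache/request knowledge, emit broadcast
        # instructions. Here, any cached item that some vehicle requests is
        # scheduled for broadcast (a toy stand-in for the real policy).
        wanted = set().union(*(b.requested for b in self.beacons.values()))
        decisions = []
        for b in self.beacons.values():
            for item in sorted(b.cached & wanted):
                decisions.append((b.vehicle_id, "broadcast", item))
        return decisions

scheduler = CentralScheduler()
scheduler.collect(Beacon("v1", (0, 0), 10.0, cached={"d1"}, requested={"d2"}))
scheduler.collect(Beacon("v2", (5, 0), 12.0, cached={"d2"}, requested={"d1"}))
decisions = scheduler.decide()
```

A real scheduler would additionally use the positions and velocities to predict connectivity windows before issuing instructions, as discussed above.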
6.3 Temporal Data Uploading and Dissemination (TDUD) Problem

6.3.1 Preliminary

The set of data items is denoted by D and the set of vehicles is denoted by V. For a vehicle $v_k \in V$, the set of cached data items at time t is denoted by $C_k(t)$. The set of data items requested by vehicle $v_k$ is denoted by $A_k$ and the request submission time is denoted by $st_k$. At time t, the set of outstanding requests of vehicle $v_k$ is denoted by $A_k(t)$. Furthermore, the maximum tolerated delay for serving vehicle $v_k$ is denoted by $sd_k$. Accordingly, we have $A_k(t) \cap C_k(t) = \emptyset$ and $A_k(t), C_k(t) \subseteq D$. The set of RSUs is denoted by R. The set of vehicles in the coverage of RSU $r_i$ at time t is denoted by $V_i(t)$. The dwelling time of a vehicle $v_k \in V_i(t)$ in the coverage of RSU $r_i$ is denoted by $I_{ik}^{r}$, and the time interval during which $v_k$ and $v_l$ are within their V2V communication range is denoted by $I_{kl}^{v}$. Note that the values of $I_{ik}^{r}$ and $I_{kl}^{v}$ can be estimated based on available trajectory prediction techniques [10, 11]. The primary notations in this chapter are summarized in Table 6.1.

Table 6.1 Summary of notations

Notation | Description
$D$ | The set of data items, $D = \{d_1, d_2, \ldots, d_{|D|}\}$
$V$ | The set of vehicles, $V = \{v_1, v_2, \ldots, v_{|V|}\}$
$V_i(t)$ | The set of vehicles in the service region of RSU $r_i$ at time t
$A_k(t)$ | The set of outstanding requests of $v_k$ at time t
$C_k(t)$ | The set of data items cached by $v_k$ at time t
$I_{ik}^{r}$ | The dwelling interval of $v_k$ in the coverage of RSU $r_i$
$I_{kl}^{v}$ | The time interval during which $v_k$ and $v_l$ are in V2V communication range
$Q_{d_j}^{r_i}(t)$ ($Q_{d_j}^{v_k}(t)$) | The data quality of $d_j$ maintained by RSU $r_i$ (vehicle $v_k$) at time t
$ts_j$ | The time stamp when $d_j$ is generated
$ld_j$ | The data valid period of $d_j$
$O_{r_i}$ ($O_{v_k}$) | The operation solution of RSU $r_i$ (vehicle $v_k$) during the service interval
$\tau_1$ ($\tau_2$) | The time taken for broadcasting one data item via V2V (I2V) communication

On this basis, we establish the data quality model to quantitatively evaluate the freshness of temporal information. Specifically, for data item $d_j$, we define the data quality function as $Q_{d_j}(t) = f(ts_j, ld_j, t)$, where $ts_j$ is the time stamp when $d_j$ is generated, $ld_j$ is the data valid period, and t is the current time, which indicates that the value of $d_j$ will be outdated at $ts_j + ld_j$. During $[ts_j, ts_j + ld_j]$, the quality of $d_j$ degrades over time if it is not updated in due course. Clearly, the degradation of data quality depends on both $ts_j$ and $ld_j$. If the value of $ts_j$ is closer to the current time t, then $d_j$ was generated more recently. On the other hand, a larger value of $ld_j$ means that the valid period of $d_j$ is longer, so $Q_{d_j}(t)$ degrades more slowly. For the sake of better exhibition, $Q_{d_j}(t)$ is normalized into the range [0, 1], where 1 means the data has no loss of quality (when it is generated) and 0 means the data value is invalid. In addition, $Q_{d_j}^{r_i}(t)$ and $Q_{d_j}^{v_k}(t)$ represent the data quality of $d_j$ maintained in the database of RSU $r_i$ and cached by vehicle $v_k$, respectively. If $v_k$ uploads its cached $d_j$ to RSU $r_i$, the value of $Q_{d_j}^{r_i}(t)$ is updated to $Q_{d_j}^{v_k}(t)$.
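The chapter fixes only the endpoints of the quality function (quality 1 at generation time $ts_j$, 0 at $ts_j + ld_j$, decreasing in between) and leaves the concrete form of f open; the sketch below assumes a simple linear decay purely for illustration.

```python
def data_quality(ts: float, ld: float, t: float) -> float:
    """Normalized quality Q_dj(t) = f(ts_j, ld_j, t): equals 1 when the item
    is generated at time ts and reaches 0 once the valid period ld expires.
    The linear shape is an assumption; the text only fixes the endpoints."""
    if t < ts:
        raise ValueError("query time precedes generation time")
    remaining = 1.0 - (t - ts) / ld
    return max(0.0, remaining)
```

A longer valid period `ld` flattens the slope, matching the observation that such items degrade more slowly.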
6.3.2 Problem Formulation

Given the service interval $[0, T]$, let $x = \{O_{r_1}, \ldots, O_{r_i}, \ldots, O_{v_1}, \ldots, O_{v_k}, \ldots\}$ be the set of operations, where $O_{r_i}$ represents the operation sequence of RSU $r_i$, $i = 1, 2, \ldots, |R|$, and $O_{v_k}$ represents the operation sequence of vehicle $v_k$, $k = 1, 2, \ldots, |V|$. The two-tuple $(o_l, s_l) \in O_{r_i}$ represents the lth operation of $r_i$, where $o_l$ denotes the operand and $s_l$ denotes the operation instruction. Specifically, $s_l = 0$ indicates data upload and $s_l = 1$ indicates data broadcast. Accordingly, the start time $t_l$ of the lth operation $(o_l, s_l)$ is computed by

$t_l = \sum_{k=1}^{l-1} \left( \tau_1 + s_k (\tau_2 - \tau_1) \right)$

where $\tau_1$ and $\tau_2$ represent the time taken for broadcasting one data item by a vehicle and an RSU, respectively. Similarly, the two-tuple $(o_m, t_m) \in O_{v_k}$ represents the mth operation of $v_k$ and $t_m$ represents the start time of the mth operation. Due to the limited bandwidth and vehicular mobility, several constraints are imposed on data operations, which are described as follows. When RSU $r_i$ broadcasts $o_l$ at time $t_l$, vehicle $v_k$ acquires $o_l$ if it satisfies the following conditions: (1) $v_k$ requests the data, which has not yet been received, i.e., $o_l \in A_k(t_l)$; (2) the delivery delay does not exceed the maximum tolerated delay of $v_k$, i.e., $t_l + \tau_2 - st_k \le sd_k$; and (3) $v_k$ can complete the retrieval of $o_l$ before it leaves the coverage of RSU $r_i$, i.e., $[t_l, t_l + \tau_2] \subseteq I_{ik}^{r}$. In order to evaluate the benefit of broadcasting $o_l$, we define the Beneficial Vehicle Set as follows:
Definition 6.1 Beneficial Vehicle Set: When $r_i$ broadcasts data $d_j$ at time t, the beneficial vehicle set is defined as the set of vehicles in the coverage of $r_i$ that can acquire $d_j$, expressed as follows:

$B_{d_j}^{r_i}(t) = \left\{ v_k \mid [t, t + \tau_2] \subseteq I_{ik}^{r} \wedge d_j \in A_k(t) \wedge t + \tau_2 - st_k \le sd_k, \ \forall v_k \in V_i(t) \right\}$  (6.1)
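Definition 6.1 is a mechanical membership test over the three conditions. A minimal sketch, assuming vehicles are described by illustrative dictionaries (dwell interval, outstanding requests, submission time, tolerated delay):

```python
def beneficial_vehicle_set(dj, t, tau2, vehicles):
    """Eq. (6.1): the set of vehicles in the RSU's coverage that acquire d_j
    when it is broadcast at time t. Vehicle fields are illustrative:
    'dwell' is the interval I_ik^r, 'requests' is A_k(t), 'st' the request
    submission time, and 'sd' the maximum tolerated delay."""
    result = set()
    for vid, v in vehicles.items():
        covered = v["dwell"][0] <= t and t + tau2 <= v["dwell"][1]  # [t, t+tau2] in I_ik^r
        requested = dj in v["requests"]                             # d_j in A_k(t)
        timely = t + tau2 - v["st"] <= v["sd"]                      # delay bound
        if covered and requested and timely:
            result.add(vid)
    return result

vehicles = {
    "v1": {"dwell": (0, 10), "requests": {"d1"}, "st": 0, "sd": 5},
    "v2": {"dwell": (0, 2), "requests": {"d1"}, "st": 0, "sd": 5},  # leaves too early
}
```

With these inputs, `v2` fails condition (3) because the broadcast would not finish before it leaves the coverage.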
As the data quality degrades over time, RSUs perform data update operations to keep the data fresh. When RSU $r_i$ performs a data update operation $(o_l, 0)$ at time $t_l$, it assigns the data upload task to a vehicle to enhance $Q_{o_l}^{r_i}(t)$. A vehicle $v_k$ can be selected as a candidate if it satisfies the following conditions: (1) $o_l$ belongs to its set of cached data, i.e., $o_l \in C_k(t_l)$; and (2) $v_k$ can complete the data upload of $o_l$ before it leaves the coverage of $r_i$, i.e., $[t_l, t_l + \tau_1] \subseteq I_{ik}^{r}$. To achieve the best performance, an RSU always chooses the candidate with the best data quality. Therefore, in order to evaluate the benefit of a data update, we define the Best Upload Quality as follows:

Definition 6.2 Best Upload Quality: When RSU $r_i$ updates the data quality of $d_j$ at time t, the best upload quality of $d_j$ at RSU $r_i$ at time t is defined as the highest data quality cached by the candidate vehicles, expressed as follows:

$Q_{d_j}^{u_i}(t) = \max_{\forall v_k \in V_i(t)} \left\{ Q_{d_j}^{v_k}(t) \mid [t, t + \tau_1] \subseteq I_{ik}^{r} \wedge d_j \in C_k(t) \right\}$  (6.2)

Then, the vehicle assigned to upload data $d_j$ at RSU $r_i$ is determined as follows:

$v_j^{*} = \arg\max_{\forall v_k \in V_i(t)} \left\{ Q_{d_j}^{v_k}(t) \mid [t, t + \tau_1] \subseteq I_{ik}^{r} \wedge d_j \in C_k(t) \right\}$  (6.3)
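Eqs. (6.2) and (6.3) amount to a constrained arg-max over candidate uploaders. A sketch under assumed data layouts (the `cache`/`dwell` fields and the separate `quality` map are illustrative):

```python
def best_uploader(dj, t, tau1, vehicles, quality):
    """Eqs. (6.2)-(6.3): among vehicles that cache d_j and can finish the
    upload before leaving coverage ([t, t+tau1] within I_ik^r), pick the one
    with the highest cached quality. Returns (vehicle_id, quality), or
    (None, 0.0) if no candidate exists -- matching the convention that
    Q^{u_i}_{d_j}(t) = 0 when d_j is cached by no vehicle in coverage."""
    best_vid, best_q = None, 0.0
    for vid, v in vehicles.items():
        caches = dj in v["cache"]
        can_finish = v["dwell"][0] <= t and t + tau1 <= v["dwell"][1]
        if caches and can_finish and quality[vid] > best_q:
            best_vid, best_q = vid, quality[vid]
    return best_vid, best_q

vehicles = {
    "v1": {"cache": {"d1"}, "dwell": (0, 10)},
    "v2": {"cache": {"d1"}, "dwell": (0, 10)},
    "v3": {"cache": set(), "dwell": (0, 10)},
}
quality = {"v1": 0.4, "v2": 0.9, "v3": 1.0}
```

Here `v3` has the freshest copy overall but does not cache `d1`, so `v2` is selected.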
If $d_j$ is not cached by any vehicle in the coverage of $r_i$, then $Q_{d_j}^{u_i}(t)$ is set to 0. Therefore, after a data update operation $(o_l, 0)$, the data quality of $o_l$ in the database, $Q_{o_l}^{r_i}(t_l + \tau_1)$, is updated to $Q_{o_l}^{u_i}(t_l + \tau_1)$. Simultaneously, the data quality of $o_l$ in the other local databases is synchronized via the high-speed backbone network accordingly. When two vehicles $v_k$ and $v_l$ are in V2V communication range and outside the coverage of RSUs, they have the chance to exchange cached data items with each other via V2V communication. When $v_k$ broadcasts $o_m$ at time $t_m$, $v_l$ acquires $o_m$ if it satisfies the following conditions: (1) $o_m$ belongs to the set of outstanding requests of $v_l$ at time $t_m$, i.e., $o_m \in A_l(t_m)$; (2) the delivery delay does not exceed the maximum tolerated delay of $v_l$, i.e., $t_m + \tau_1 - st_l \le sd_l$; and (3) the data transmission of $o_m$ is completed during $I_{kl}^{v}$, i.e., $[t_m, t_m + \tau_1] \subseteq I_{kl}^{v}$. In addition, to avoid transmission collisions, $v_k$ and $v_l$ should not broadcast simultaneously during $I_{kl}^{v}$. Therefore, for any two operations $(o_m, t_m) \in O_{v_k}$, $(o_n, t_n) \in O_{v_l}$ during $I_{kl}^{v}$, the collision avoidance constraint is expressed as $[t_m, t_m + \tau_1] \cap [t_n, t_n + \tau_1] = \emptyset$. Similarly, to evaluate the broadcast benefit, we compute the beneficial vehicle set $B_{o_m}^{v_k}(t)$ of data $o_m$ broadcast by vehicle $v_k$ at time t. In order to quantitatively measure the system performance, we define two metrics, namely, delivery ratio and average service quality.
Definition 6.3 Delivery Ratio (DR): Given an operation solution x, DR is defined as the ratio of the number of satisfied service requests to the total number of submitted service requests during the service interval $[0, T]$, which is computed by

$f_1(x) = \dfrac{\sum_{i=1}^{|R|} \sum_{l=1}^{|O_{r_i}|} s_l \cdot |B_{o_l}^{r_i}(t_l)| + \sum_{k=1}^{|V|} \sum_{m=1}^{|O_{v_k}|} |B_{o_m}^{v_k}(t_m)|}{\sum_{\forall v_k \in V} |A_k(0)|}$  (6.4)
Definition 6.4 Average Service Quality (ASQ): Given an operation solution x, ASQ is defined as the mean value of the service qualities of all satisfied service requests during the service interval $[0, T]$, which is computed by

$f_2(x) = \dfrac{\sum_{i=1}^{|R|} \sum_{l=1}^{|O_{r_i}|} Q_{o_l}^{r_i}(t_l) \cdot s_l \cdot |B_{o_l}^{r_i}(t_l)| + \sum_{k=1}^{|V|} \sum_{m=1}^{|O_{v_k}|} Q_{o_m}^{v_k}(t_m) \cdot |B_{o_m}^{v_k}(t_m)|}{\sum_{i=1}^{|R|} \sum_{l=1}^{|O_{r_i}|} s_l \cdot |B_{o_l}^{r_i}(t_l)| + \sum_{k=1}^{|V|} \sum_{m=1}^{|O_{v_k}|} |B_{o_m}^{v_k}(t_m)|}$  (6.5)
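Given per-operation evaluations (the broadcast flag $s_l$, the beneficial-set size, and the served quality), Eqs. (6.4) and (6.5) reduce to ratio computations. The flattened input format below is an illustrative assumption:

```python
def delivery_ratio_and_asq(rsu_ops, veh_ops, total_requests):
    """Eqs. (6.4)-(6.5) over already-evaluated operation logs. Each RSU entry
    is (s_l, |B|, Q) with s_l = 1 for broadcast and 0 for update; each
    vehicle entry is (|B|, Q). The shapes are an illustrative flattening of
    the double sums in the chapter."""
    served = sum(s * b for s, b, _ in rsu_ops) + sum(b for b, _ in veh_ops)
    quality = sum(q * s * b for s, b, q in rsu_ops) + sum(q * b for b, q in veh_ops)
    dr = served / total_requests
    asq = quality / served if served else 0.0
    return dr, asq

rsu_ops = [(1, 2, 0.8), (0, 3, 0.5)]  # one broadcast serving 2 vehicles, one update
veh_ops = [(1, 0.6)]                  # one V2V broadcast serving 1 vehicle
dr, asq = delivery_ratio_and_asq(rsu_ops, veh_ops, total_requests=6)
```

Note that the update operation contributes nothing to either metric ($s_l = 0$), which is exactly why the two objectives conflict: time spent updating raises future quality but serves no pending request.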
From the system point of view, both a high delivery ratio and a high service quality are desired. With the above definitions, we formulate TDUD as a two-objective optimization problem as follows:

$\max F(x) = (f_1(x), f_2(x))$

$\text{s.t.} \quad C_1: \sum_{l=1}^{|O_{r_j}|} \left( \tau_1 + s_l (\tau_2 - \tau_1) \right) \le T, \quad j = 1, 2, \ldots, |R|$

$\phantom{\text{s.t.} \quad} C_2: \tau_1 \cdot |O_{v_k}| \le T, \quad k = 1, 2, \ldots, |V|$

$\phantom{\text{s.t.} \quad} C_3: \bigcap_{i=m,n} [t_i, t_i + \tau_1] = \emptyset, \ \text{if} \ \bigcup_{i=m,n} [t_i, t_i + \tau_1] \subseteq I_{kl}^{v}, \quad \forall (o_m, t_m) \in O_{v_k}, (o_n, t_n) \in O_{v_l}$  (6.6)
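Constraints C1-C3 can be checked mechanically for a candidate solution. The sketch below assumes illustrative representations (per-RSU flag lists, per-vehicle timed operations, a map of V2V contact intervals) and tests pairwise transmission overlap within a shared contact:

```python
def feasible(rsu_ops, veh_ops, v2v_intervals, tau1, tau2, T):
    """Checks constraints C1-C3 of formulation (6.6) for a candidate
    solution. rsu_ops[r] is a list of s_l flags (1 broadcast, 0 upload);
    veh_ops[v] is a list of (item, start_time); v2v_intervals maps a vehicle
    pair to its contact interval I_kl^v. Representations are illustrative."""
    # C1: each RSU's operation sequence fits in the service interval [0, T].
    for ops in rsu_ops.values():
        if sum(tau1 + s * (tau2 - tau1) for s in ops) > T:
            return False
    # C2: each vehicle's broadcasts fit in [0, T].
    for ops in veh_ops.values():
        if tau1 * len(ops) > T:
            return False
    # C3: no two vehicles transmit simultaneously during a shared contact.
    for (vk, vl), (lo, hi) in v2v_intervals.items():
        for _, tm in veh_ops.get(vk, []):
            for _, tn in veh_ops.get(vl, []):
                both_inside = lo <= min(tm, tn) and max(tm, tn) + tau1 <= hi
                overlap = max(tm, tn) < min(tm, tn) + tau1
                if both_inside and overlap:
                    return False
    return True

rsu_ops = {"r1": [1, 0]}                              # broadcast then update
v2v = {("v1", "v2"): (0.0, 5.0)}
clash = {"v1": [("d1", 0.0)], "v2": [("d2", 0.5)]}    # transmissions overlap
clean = {"v1": [("d1", 0.0)], "v2": [("d2", 1.0)]}    # back-to-back, no overlap
```

With `tau1 = 1`, the `clash` schedule violates C3 because the two V2V transmissions overlap inside the shared contact window, while `clean` is feasible.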
6.4 Proposed Algorithm

In this section, we propose an evolutionary algorithm, namely, Multi-Objective Temporal Data Uploading and Dissemination (MO-TDUD), aiming at optimizing both the DR and the ASQ simultaneously. First, we present the basic idea of MO-TDUD. Then, we explain the mechanism of each MO-TDUD component in detail.
6.4.1 Multi-Objective Decomposition

First, we use the Weighted Sum approach [12] to decompose the multi-objective optimization problem in Eq. (6.6) into N scalar optimization subproblems. Let $\{w^1, \ldots, w^N\}$ be a set of evenly spread weight vectors, where $w^k = (w_1^k, w_2^k)$ and $w_1^k + w_2^k = 1$. Each weight vector $w^k$ corresponds to a scalar optimization problem. Therefore, for each weight vector $w^k$, the objective of the kth corresponding subproblem is formulated as follows:

$g(x \mid w^k) = \sum_{i=1}^{2} w_i^k f_i(x), \quad \text{subject to } x \in \Omega$  (6.7)

where $\Omega$ is the decision space. For each weight vector $w^k$, the best solution found for the kth subproblem during the search process is maintained for population evolution in the next generation.
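A minimal sketch of the weighted-sum decomposition with evenly spread weights, together with the Euclidean-distance neighborhood construction used later in Sect. 6.4.3; all function names are illustrative:

```python
def spread_weights(N):
    """N evenly spread 2-D weight vectors w^k = (w1, w2) with w1 + w2 = 1."""
    return [(k / (N - 1), 1 - k / (N - 1)) for k in range(N)]

def scalarize(fx, w):
    """Weighted-sum objective g(x | w^k) = sum_i w_i^k * f_i(x), Eq. (6.7);
    fx = (f1(x), f2(x))."""
    return w[0] * fx[0] + w[1] * fx[1]

def neighborhood(weights, k, M):
    """NB(w^k): indices of the M weight vectors closest to w^k in
    Euclidean distance."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return sorted(range(len(weights)), key=lambda j: dist(weights[j], weights[k]))[:M]

weights = spread_weights(5)  # (0,1), (0.25,0.75), (0.5,0.5), (0.75,0.25), (1,0)
```

Each weight vector trades DR against ASQ differently; maintaining one best solution per subproblem yields the Pareto front approximation mentioned above.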
6.4.2 Chromosome Encoding

A problem-specific chromosome encoding is designed to represent a solution of the TDUD problem, which includes the operations of both RSUs and vehicles during the service interval $[0, T]$. Figure 6.3 shows an example of the chromosome encoding. The rectangles of $O_{r_i}$ and $O_{v_k}$ represent the operation sequences of RSU $r_i$ and vehicle $v_k$, respectively, and the arrow indicates the order of the operation sequences. For example, $O_{r_2}$ represents the scheduling for RSU $r_2$. Specifically, according to the ordered operations of $r_2$, it updates $d_6$ and $d_3$ in sequence, then broadcasts $d_3$, and finally updates $d_7$. The total time consumed by the operations of one RSU should not exceed T. Similarly, the operations of $v_1$ are to broadcast $d_6$ at time $t_1$ and then broadcast $d_5$ at $t_3$. The operations of $v_2$ are to broadcast $d_1$ at $t_2$ and then broadcast $d_7$ at time $t_4$. When $v_1$ and $v_2$ are in V2V communication during $[0, T]$, the time intervals for broadcasting these data items must not overlap, i.e., $\bigcap_{i=1}^{4} [t_i, t_i + \tau_1] = \emptyset$.
Fig. 6.3 Encoding form of the chromosome
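The encoding of Fig. 6.3 can be represented as ordered per-RSU and per-vehicle operation lists. The `Chromosome` class below is an illustrative sketch, not the book's data structure; the `r2` sequence reproduces the example in the text (update $d_6$, update $d_3$, broadcast $d_3$, update $d_7$):

```python
from dataclasses import dataclass, field

@dataclass
class Chromosome:
    """One TDUD solution: an ordered operation sequence per RSU and per
    vehicle, mirroring Fig. 6.3. RSU operations are (item, flag) pairs with
    flag 0 = update (upload) and flag 1 = broadcast; vehicle operations are
    (item, start_time) pairs. Names are illustrative."""
    rsu_ops: dict = field(default_factory=dict)  # rsu_id -> [(item, flag), ...]
    veh_ops: dict = field(default_factory=dict)  # veh_id -> [(item, start), ...]

    def rsu_time(self, rsu_id, tau1, tau2):
        # Total time consumed by an RSU's sequence; must not exceed T.
        # Updates take tau1, broadcasts take tau2 (cf. constraint C1).
        return sum(tau1 + flag * (tau2 - tau1) for _, flag in self.rsu_ops[rsu_id])

c = Chromosome(
    rsu_ops={"r2": [("d6", 0), ("d3", 0), ("d3", 1), ("d7", 0)]},
    veh_ops={"v1": [("d6", 1.0), ("d5", 3.0)]},
)
```

With `tau1 = 1` and `tau2 = 2`, the `r2` sequence consumes 5 time units, which the encoding must keep below the bound T.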
6.4.3 MO-TDUD Procedure

First, the procedure of MO-TDUD is outlined as follows:

Step 1 Initialization: Set up the initial parameters and generate the initial population.
Step 2 Update: Update the population based on particularly designed mechanisms, including selection and mutation operators and a local refinement method.
Step 3 Output: Check the stopping criterion. If the stopping criterion is not satisfied, go back to Step 2. Otherwise, output the set of non-dominated solutions and select the best solution in this set according to the given weight vector.

In the following, we give the detailed steps of MO-TDUD, where the key problem-specific operators are elaborated.

• Initialization: The initialization consists of three parts. First, we initialize the non-dominated set EP as an empty set, which is used for archiving the non-dominated solutions in due course. Let $x^n$ and $x^m$ denote two solutions; since our formulated problem is a maximization problem, $F(x^n)$ dominates $F(x^m)$ if and only if $f_i(x^n) \ge f_i(x^m)$, $i = 1, 2$, and $f_i(x^n) > f_i(x^m)$ for at least one index $i \in \{1, 2\}$. We call a solution x a non-dominated solution if there is no solution $x' \in \Omega$ whose $F(x')$ dominates $F(x)$. The non-dominated set EP consists of the solutions that cannot be dominated by any other solution in the population. Second, for each weight vector $w^k$, we generate the neighborhood set, which contains M neighborhood vectors. For any two weight vectors $w^n, w^m$, $\forall n, m \le N$, we compute the Euclidean distance between $w^n$ and $w^m$, i.e., $\|w^m - w^n\| = \sqrt{\sum_{i=1}^{2} (w_i^m - w_i^n)^2}$. Then, for each weight vector $w^k$, we choose the M closest weight vectors to $w^k$ as its neighborhood set $NB(w^k) = \{w^{k_1}, w^{k_2}, \ldots, w^{k_M}\}$. The neighborhood set of the kth subproblem consists of all the subproblems associated with the weight vectors from $NB(w^k)$. For the kth subproblem,
only the current solutions to its neighborhood subproblems are exploited for optimizing the subproblem. Third, we design a particular method to initialize the population. A solution x consists of the operation sequence $O_{r_i}$ for each RSU $r_i$ and the data operations $O_{v_k}$ for each vehicle $v_k$. We first generate the initial data operations $O_{r_i}$ for each RSU $r_i$ under a given weight vector $w^k$. For each candidate operation $(d_j, s_j) \in O_{r_i}$, we evaluate the operation benefit in two aspects: broadcast benefit and update benefit. For a broadcast operation $(d_j, s_j)$, the broadcast benefit is denoted by $\Delta_1(d_j, s_j, t)$, which is equal to the product of $Q_{d_j}^{r_i}(t)$ and the number of vehicles in $B_{d_j}^{r_i}(t)$. For a data update operation, it is equal to zero since a data update operation brings no benefit in serving requests. To sum up, the broadcast benefit of an operation $(d_j, s_j)$ at time t is computed as follows:

$\Delta_1(d_j, s_j, t) = Q_{d_j}^{r_i}(t) \cdot |B_{d_j}^{r_i}(t)| \cdot s_j$  (6.8)
Similarly, the update benefit of an operation $(d_j, s_j)$ is computed as follows:

$\Delta_2(d_j, s_j, t) = \left( \sum_{i=1}^{|R|} \beta_{d_j} \cdot \left( Q_{d_j}^{u_i}(t) - Q_{d_j}^{r_i}(t) \right) \cdot |B_{d_j}^{r_i}(t)| + \varepsilon \right) \cdot (1 - s_j)$  (6.9)
where $\beta_{d_j}$ is the popularity and $\varepsilon$ is a constant ($0 < \varepsilon < 1$). $\beta_{d_j}$ is defined as the ratio of the number of service requests asking for $d_j$ to the total number of service requests, which gives higher priority to more popular data items. For an update operation $(d_j, s_j)$, the update benefit synthesizes the popularity, the augmented data quality, and the total request number in the system. On the other hand, for a broadcast operation, the update benefit is equal to zero. Combining Eqs. (6.8) and (6.9), the benefit of each candidate data operation under a given $w^k$ is formulated as follows:

$\Delta(d_j, s_j, t \mid w^k) = \max_{1 \le i \le 2} w_i^k \cdot \Delta_i(d_j, s_j, t)$  (6.10)
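The benefit computations of Eqs. (6.8)-(6.10) can be sketched as follows. The grouping in the update benefit (quality gain summed over RSUs, plus the constant $\varepsilon$, zeroed for broadcast operations) follows the reconstruction of Eq. (6.9) above; signatures and argument layouts are illustrative:

```python
def broadcast_benefit(q_ri, n_beneficial, s):
    """Eq. (6.8): Delta_1 = Q^{r_i}_{d_j}(t) * |B^{r_i}_{d_j}(t)| * s_j;
    zero for update operations (s_j = 0)."""
    return q_ri * n_beneficial * s

def update_benefit(beta, gains, eps, s):
    """Eq. (6.9) as reconstructed: popularity-weighted quality gain summed
    over RSUs, plus a small constant eps, zeroed for broadcasts (s_j = 1).
    gains = [(Q^{u_i} - Q^{r_i}, |B^{r_i}_{d_j}(t)|) per RSU]."""
    return (sum(beta * dq * b for dq, b in gains) + eps) * (1 - s)

def operation_benefit(d1, d2, w):
    """Eq. (6.10): Delta(d_j, s_j, t | w^k) = max_i w_i^k * Delta_i."""
    return max(w[0] * d1, w[1] * d2)
```

Sorting candidate operations by `operation_benefit` under a given weight vector yields the descending order used in the initialization step below.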
For each $w^k$, we initialize the data operation set $O_{r_i}$ as follows: (a) sort the candidate operations of each RSU $r_i$ in descending order of benefit; (b) for each RSU $r_i$, select the operations iteratively in the sorted order until the total cumulative time reaches the bound T. Second, we initialize the data operations $O_{v_k}$ during V2V communication. For each time interval $I_{kl}^{v} = [et_{kl}, lt_{kl}]$, the data to be broadcast during $I_{kl}^{v}$ are determined as follows: (a) the candidate data set (CDS) of each $v_k$ is determined by $C_k(et_{kl}) \cap A_l(et_{kl})$; (b) the vehicle $v_k$ with the longer remaining time $st_k + sd_k - t$ broadcasts the data in its CDS first, since the other party is more urgent to receive the data; (c) the $d_j$ with the highest data quality in the CDS is selected iteratively until the total cumulative time reaches the bound $lt_{kl} - et_{kl}$. If the cumulative time has not yet reached the bound, the other vehicle
6.4 Proposed Algorithm
continues to select the data in its CDS by repeating steps (a) to (c). After step (c), the broadcast data sets $O_{v_k \to v_l}$ of $v_k$ and $O_{v_l \to v_k}$ of $v_l$ during $I_{kl}^v$ are determined. Accordingly, during the service interval $[0, T]$, the broadcast data set of $v_k$ is set as $O_{v_k} = \bigcup_{l=1}^{V} O_{v_k \to v_l}$. The initial data operation set $O_{v_k}$ remains the same for
each weight vector $w^k$.
• Update: For each index $k$, $k = 1, 2, \ldots, N$, the update includes the following procedures: 1. Reproduction: two weight vectors $w^n$ and $w^m$ are randomly selected from $NB(w^k)$, and the two corresponding solutions $x^n$ and $x^m$ are chosen as parent solutions. Then, a new solution $y$ is generated from $x^n$ and $x^m$ by the designed genetic operators. First, we apply a uniform crossover operator on $x^n$ and $x^m$ to generate one child solution $y$. By using a uniform crossover operator, each $O_{r_i}$ and $O_{v_i}$ of solution $y$ is randomly selected from $x^n$ and $x^m$ iteratively. Then, the mutation operator is applied to $y$. It mutates each $O_{r_i}$ and $O_{v_i}$ of solution $y$ independently with mutation probability $\rho$. If one $O_{r_i}$ ($O_{v_i}$) is selected to be mutated, then one operation of $O_{r_i}$ ($O_{v_i}$) is randomly selected to be mutated. The selected operation is replaced by a data operation randomly selected from the candidate data operation set of $r_i$ ($v_i$). 2. Local refinement: a local refinement method is proposed to generate a new solution $y'$ based on solution $y$. The motivation is to reduce unnecessary data upload operations and further enhance bandwidth utilization. Rule 1: if multiple data update operations $\langle d_j, 0 \rangle$ of one data item $d_j$ appear in the solution $y$, only the data update operation with the maximum $Q_{d_j}^{r_i}(t)$ is retained, and the other data update operations are replaced by other unselected data operations according to the benefit order in Eq. (6.10). Rule 2: if there exists an update operation $(d_i, 0)$ in $O_{r_i}$, then $(d_i, 1)$ of any other $O_{r_j}$ is arranged after $(d_i, 0)$ of $O_{r_i}$. 3. Update neighborhood solutions: for each weight vector $w^l \in NB(k)$, we compare the objective stated in Eq. (6.7) based on solution $x^l$ and the new solution $y'$. If $g(x^l \mid w^l, z) \le g(y' \mid w^l, z)$, then the corresponding solution $x^l$ is replaced by $y'$. 4. Update $EP$: for any solution $x \in EP$, if $F(x)$ is dominated by $F(y')$, then $x$ is removed from $EP$. If there is no $x \in EP$ that satisfies $F(x)$ dominates $F(y')$, then add $y'$ to $EP$.
• Output: the iteration terminates and outputs the non-dominated set $EP$ when the iteration number reaches the maximum value, which is pre-defined based on domain knowledge. Once the non-dominated set $EP$ is obtained, the scheduler chooses a solution $x$ from $EP$. In order to adaptively fulfill the given requirement on delivery ratio and service quality, a weight vector is given to determine the final solution. In particular, given $w = (w_1, w_2)$ and
6 Temporal Data Uploading and Dissemination in Real-Time Vehicular Networks
$w_1 + w_2 = 1$, $w_i \ge 0$, $i = 1, 2$, we evaluate the priority of a solution $x \in EP$ as follows:

$$P(x) = \sum_{i=1}^{2} w_i \cdot f_i(x) \tag{6.11}$$
Then, the final solution $x^*$ is chosen with the maximum value $P(x^*) = \max_{\forall x \in EP} P(x)$. In practice, the weight vector can be evaluated by a certain evaluation method based on statistical information of particular application scenarios. In addition, in order to enhance the V2V communication chances, a vehicle in the coverage of RSUs may cache non-requested data items. When the vehicle $v_k$ receives a data item broadcast by the RSU at time $t$, it first checks whether the data item belongs to $A_k(t)$. If so, the vehicle $v_k$ accepts the data item and the service is completed. Otherwise, $v_k$ further checks whether the data item belongs to $\bigcap_{I_{kl} \neq 0} A_l(t)$. If so, the vehicle $v_k$ accepts the data item and forwards it to the destination vehicle when they are in V2V communication range. Otherwise, the vehicle $v_k$ discards it.
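The final selection step over the non-dominated set $EP$ (Eq. (6.11)) can be sketched as follows, with candidates represented directly by their objective vectors, an illustrative simplification:

```python
# Sketch of the final solution selection from the non-dominated set EP (Eq. (6.11)).
# Each candidate is represented by its objective vector (f1, f2); names are illustrative.
def priority(objectives, w):
    # Eq. (6.11): weighted sum of the objective values under weight vector w.
    return sum(wi * fi for wi, fi in zip(w, objectives))

def select_final_solution(ep, w):
    # Choose the solution x* in EP with the maximum priority P(x*).
    return max(ep, key=lambda objectives: priority(objectives, w))
```

For example, with `ep = [(0.8, 0.4), (0.6, 0.7)]` and `w = (0.3, 0.7)`, the second candidate wins because the weight vector favors the second objective.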
6.5 Performance Evaluation

6.5.1 Setup

In this section, we build the simulation model based on the system architecture described in Sect. 6.2 for performance evaluation. Specifically, as shown in Fig. 6.4, the simulation model integrates a real-world map, the traffic simulator SUMO, and the scheduling module. A real-world map, extracted from a 7 × 7 km area of the Hi-tech Zone in Chengdu, China, is downloaded from OpenStreetMap and imported to establish the traffic scenario in SUMO [13]. Then, SUMO is adopted to simulate vehicle mobility and generate vehicle traces. The scheduling module, as well as the proposed algorithm, is implemented in C. The arrival of vehicles follows a Poisson process, and the arrival rate is denoted by λ. For each vehicle, the number of cached data items is randomly generated in the range cn, and the number of requested data items is randomly generated in the range rn. The data valid period is randomly generated in the range vp. The data access pattern follows the commonly used Zipf distribution [14] with a skewness parameter θ. Furthermore, the maximum tolerated delay of each requested data item is set to TL_max. The total number of data items maintained in the database is 200. The time units taken for broadcasting one data item via I2V and V2V communication are set to 1 s and 2 s, respectively. This is reasonable because DSRC [15] supports data rates from 3 to 27 Mbps, depending on the modulation technique and vehicle speed, and hence it is sufficient to transmit a
Fig. 6.4 Simulation modules
data item of normal size (e.g., on the order of KB) within a few seconds. The default parameters and the corresponding descriptions are summarized in Table 6.2. For algorithm implementation, the population size and the maximum iteration number are set to 500 and 100, respectively. Furthermore, the mutation probability is set to 0.01, and the weight vector of MO-TDUD is set to (0.3, 0.7). Unless stated otherwise, the simulation is conducted under the default setting. To quantitatively evaluate the data quality, a commonly used linear function [16] is adopted, which is a typical setting for evaluating the quality of temporal information in vehicular networks. The formulation is expressed as follows:

$$Q_{d_j}(t) = \begin{cases} 1 - \dfrac{t - sd_j}{ld_j}, & sd_j \le t \le sd_j + ld_j \\ 0, & t > sd_j + ld_j \end{cases} \tag{6.12}$$
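A minimal sketch of this linear quality function, assuming $sd_j$ is the data generation time and $ld_j$ the valid period (argument names are illustrative):

```python
# Sketch of the linear data quality function in Eq. (6.12); sd is the data
# generation time and ld the valid period (names are illustrative).
def data_quality(t, sd, ld):
    if sd <= t <= sd + ld:
        return 1.0 - (t - sd) / ld  # decays linearly from 1 to 0 over the valid period
    return 0.0                       # outside the valid period the quality is zero
```

The quality is 1 at the generation time, 0.5 halfway through the valid period, and 0 once the data expires.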
Furthermore, for performance comparison, we have implemented two alternative solutions: (a) PBS algorithm [17], which determines the operations of downloading and uploading in each time slot based on evaluating the priority of each operation by synthesizing the number of pending requests, service deadline, and data quality.
Table 6.2 Default values

Notation   Value        Description
||D||      200          Size of database
λ          1500 veh/h   Vehicle arrival rate
L          1000 m       The communication range of an RSU
cn         [12, 15]     The range of cached data items
rn         [12, 14]     The range of requested data items
vp         [250, 300]   The range of the data valid period (s)
TL_max     150 s        The maximum tolerated delay
τ1         2 s          The time for broadcasting one data item via V2V communication
τ2         1 s          The time for broadcasting one data item via I2V communication
θ          0.6          Zipf distribution parameter
(b) Alternative-EDF (AEDF), which alternately chooses an update operation or a broadcast operation in each time slot. The order of update (or broadcast) operations is sorted by deadline, namely, earliest deadline first. We collect the following statistics from the simulation: the set of data items requested by each vehicle $A_{v_k}$, the set of data items received by each vehicle $C_{v_k}$, and the data quality $Q_{d_i}^{v_k}$ of each received data item. On this basis, we consider two objectives for evaluating system performance, namely, DR and ASQ, which are computed as follows:

$$DR = \sum_{\forall v_k} |C_{v_k}| \Big/ \sum_{\forall v_k} |A_{v_k}| \tag{6.13}$$

$$ASQ = \sum_{\forall v_k} \sum_{\forall d_i \in C_{v_k}} Q_{d_i}^{v_k} \Big/ \sum_{\forall v_k} |C_{v_k}| \tag{6.14}$$
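The two metrics can be computed from the collected per-vehicle statistics as follows (a sketch; the list-based representation is an assumption):

```python
# Sketch of the DR and ASQ statistics in Eqs. (6.13)-(6.14).
# `requested` and `received` hold one list of data-item IDs per vehicle;
# `received_qualities` holds one list of quality values per vehicle.
def delivery_ratio(requested, received):
    # Eq. (6.13): total received data items over total requested data items.
    return sum(len(c) for c in received) / sum(len(a) for a in requested)

def average_service_quality(received_qualities):
    # Eq. (6.14): mean quality over all received data items of all vehicles.
    total = sum(len(q) for q in received_qualities)
    return sum(sum(q) for q in received_qualities) / total
```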
6.5.2 Simulation Results and Analysis

Effect of Data Valid Periods First, we compare the algorithms under different data valid periods. A longer data valid period indicates slower degradation of data quality. Figure 6.5a compares the DR of the three algorithms under different data valid periods. When the data valid period increases, the DR of both MO-TDUD and PBS increases gradually, since both algorithms allocate more bandwidth to data broadcast. However, the DR of AEDF remains the same, because AEDF only considers the service deadline in scheduling while ignoring the variation of the data valid period. Due to the slower degradation of data quality, the ASQ of the three algorithms increases gradually with an increasing data valid period, as shown in Fig. 6.5b. This set of results demonstrates that MO-TDUD achieves better overall system performance under various data valid periods.
Fig. 6.5 Performance evaluation under different data valid periods. (a) Delivery ratio. (b) Average service quality
Effect of Traffic Workloads This part evaluates the performance of the algorithms under different traffic workloads. Meanwhile, to give a comprehensive performance analysis, we also evaluate the adaptiveness of MO-TDUD under different weight factors, $w^k = (w_1^k, w_2^k)$, $k = 1, 2, \ldots$. Specifically, $w_1^k$ ranges from 0.1 to 0.7 with an interval of 0.05, which indicates that the weight assigned to DR varies from low to high. Figure 6.6 compares the DR and ASQ of the three algorithms under different traffic workloads. Each rectangular dot represents the result of MO-TDUD under a given weight vector. According to Fig. 6.6, we have the following observations. First, when $w_1^k$ increases, the DR of MO-TDUD increases and the ASQ of MO-TDUD decreases. Clearly, assigning a high weight to one metric will adversely affect the performance of the other. Second, the variation of the two metrics is not obvious at the two ends of the curve, because the effect of changing the weight is initially too small to alter the scheduling preference between data broadcast and data update operations. Third, the points of both PBS and AEDF lie below the curve of MO-TDUD in all cases in Fig. 6.6, which indicates that we can always choose a point on the curve whose ASQ and DR are both higher than those of PBS and AEDF. This is because MO-TDUD is capable of approaching the Pareto frontier during the multi-objective optimization search. Fourth, when the traffic workload increases, as shown in Fig. 6.6a–d, the DR of all the algorithms decreases, because the increasing vehicle density imposes a higher service workload. The above analysis demonstrates that MO-TDUD is able to provide adaptive solutions to strike the best balance between delivery ratio and service quality depending on specific application requirements. Effect of Traffic Scenarios This part evaluates the effectiveness of MO-TDUD under different traffic scenarios. Specifically, Fig. 6.7a–c shows three different traffic scenarios, which are extracted from the core areas of the Xicheng and Dongcheng districts in Beijing, the Binjiang district in Hangzhou, and the Futian district in Shenzhen, respectively. Accordingly, the scales of the three traffic scenarios are 108 km²,
Fig. 6.6 Performance evaluation under different traffic workloads. (a) λ = 1200 veh/h. (b) λ = 1500 veh/h. (c) λ = 1800 veh/h. (d) λ = 2100 veh/h
50 km², and 30 km², respectively. Furthermore, the default settings (see Table 6.2) are adopted to simulate the vehicle mobility in all traffic scenarios. In addition, in order to give an insight into the performance of the particularly designed MO-TDUD algorithm, we implement three multi-objective evolutionary algorithms, i.e., SPEA-II [18], NSGA-II [19], and MOEA/D [12], for performance comparison. Specifically, we implement the three algorithms by adopting random population initialization and standard genetic operators, including binary tournament selection, uniform crossover, and bit mutation operators. Then, the same solution selection method used in MO-TDUD is adopted to select the best solutions from the non-dominated set. In the simulation, all the evolutionary algorithms adopt the same default parameter settings. To give a comprehensive performance analysis, we evaluate the adaptiveness of the algorithms under different weight factors, $w^k = (w_1^k, w_2^k)$, where $w_1^k$ ranges from 0.1 to 0.9 with an interval of 0.1. Figure 6.8a–c shows the performance of the four algorithms corresponding to the simulation scenarios of Fig. 6.7a–c, respectively. Particularly, in the figures, each rectangle, triangle, star, and circle represents the result of MO-TDUD, SPEA-II, NSGA-II, and MOEA/D
Fig. 6.7 Different traffic scenarios. (a) Dongcheng and Xicheng districts in Beijing. (b) Binjiang district in Hangzhou. (c) Futian district in Shenzhen
under one weight factor, respectively. According to Fig. 6.8, we have the following observations. First, the three naive solutions based on SPEA-II, NSGA-II, and MOEA/D perform competitively with each other. For example, as shown in Fig. 6.8a, compared with MOEA/D and SPEA-II, NSGA-II achieves a better DR but a lower ASQ. On the contrary, MOEA/D achieves a better ASQ but a lower DR compared with NSGA-II and SPEA-II. Second, it can be observed that the Pareto front of MO-TDUD significantly dominates the solutions of the other algorithms in all the scenarios. Third, although MO-TDUD adopts the same framework as MOEA/D, MO-TDUD achieves much better performance than MOEA/D in terms of both DR and ASQ. This confirms the importance of designing dedicated evolutionary mechanisms for the particular problem of interest. Based on the above analysis, we may safely conclude that MO-TDUD achieves the best performance under various simulation scenarios.
114
6 Temporal Data Uploading and Dissemination in Real-Time Vehicular Networks
Fig. 6.8 Performance Evaluation under different traffic scenarios. (a) Simulation results in scenario Fig. 6.7a. (b) Simulation results in scenario Fig. 6.7b. (c) Simulation results in scenario Fig. 6.7c
6.6 Conclusion This chapter presented a system architecture for temporal data services in large-scale and real-time vehicular networks via hybrid I2V/V2V communications. In such an architecture, a central scheduler makes the scheduling policy for both RSUs and vehicles based on the collected global system status. On this basis, we formulated the TDUD problem, aiming at enhancing both the service quality and the delivery ratio. Furthermore, to solve the derived problem, we proposed a multi-objective evolutionary algorithm called MO-TDUD. In particular, we designed a decomposition scheme to tackle the complexity incurred by multiple objectives, a scalable representation for solution encoding, and evolutionary operators for reproduction. By adjusting the weight vectors, MO-TDUD can adaptively select the solution from the non-dominated set to strike the best balance between ASQ and DR based on different application requirements. Lastly, we built a simulation model and provided a comprehensive performance evaluation. The simulation results demonstrated the superiority and scalability of MO-TDUD.
References

1. J. Santa, A.F. Gómez-Skarmeta, M. Sánchez-Artigas, Architecture and evaluation of a unified V2V and V2I communication system based on cellular networks. Comput. Commun. 31(12), 2850–2861 (2008)
2. P. Su, B.B. Park, Auction-based highway reservation system: an agent-based simulation study. Transp. Res. Part C: Emerg. Technol. 60, 211–226 (2015)
3. M. Wang, H. Shan, R. Lu, R. Zhang, X. Shen, F. Bai, Real-time path planning based on hybrid-VANET-enhanced transportation system. IEEE Trans. Veh. Technol. 64, 1664–1678 (2015)
4. A. Baiocchi, F. Cuomo, Infotainment services based on push-mode dissemination in an integrated VANET and 3G architecture. J. Commun. Netw. 15(2), 179–190 (2013)
5. R. Zhang, X. Cheng, Q. Yao, C.-X. Wang, Y. Yang, B. Jiao, Interference graph-based resource-sharing schemes for vehicular networks. IEEE Trans. Veh. Technol. 62(8), 4028–4039 (2013)
6. Y. Zeng, K. Xiang, D. Li, A.V. Vasilakos, Directional routing and scheduling for green vehicular delay tolerant networks. Wirel. Netw. 19(2), 161–173 (2013)
7. P. Dai, K. Liu, L. Feng, Q. Zhuge, V.C. Lee, S.H. Son, Adaptive scheduling for real-time and temporal information services in vehicular networks. Transp. Res. Part C: Emerg. Technol. 71, 313–332 (2016)
8. Y. Zhang, J. Zhao, G. Cao, Service scheduling of vehicle-roadside data access. Mob. Netw. Appl. 15(1), 83–96 (2010)
9. P. Dai, K. Liu, L. Feng, H. Zhang, V.C.S. Lee, S.H. Son, X. Wu, Temporal information services in large-scale vehicular networks through evolutionary multi-objective optimization. IEEE Trans. Intell. Transp. Syst. 20(1), 218–231 (2019)
10. P. Pathirana, A. Savkin, S. Jha, Location estimation and trajectory prediction for cellular networks with mobile base stations. IEEE Trans. Veh. Technol. 53, 1903–1913 (2004)
11. C. Barrios, Y. Motai, D. Huston, Trajectory estimations using smartphones. IEEE Trans. Ind. Electr. 62, 7901–7910 (2015)
12. Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11, 712–731 (2007)
13. M. Behrisch, L. Bieker, J. Erdmann, D. Krajzewicz, SUMO – simulation of urban mobility: an overview, in Proceedings of the 3rd International Conference on Advances in System Simulation (SIMUL'11) (ThinkMind, 2011)
14. G.K. Zipf, Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology (Ravenio Books, 2016)
15. J.B. Kenney, Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE 99(7), 1162–1182 (2011)
16. C. Anagnostopoulos, S. Hadjiefthymiades, Delay-tolerant delivery of quality information in ad hoc networks. J. Parallel Distrib. Comput. 71(7), 974–987 (2011)
17. P. Dai, K. Liu, E. Sha, Q. Zhuge, V. Lee, S.H. Son, Vehicle assisted data update for temporal information service in vehicular networks, in Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems (ITSC'15) (2015), pp. 2545–2550
18. E. Zitzler, M. Laumanns, L. Thiele, SPEA2: improving the strength Pareto evolutionary algorithm. TIK Report, vol. 103 (2001)
19. K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)
Part III
Cooperative IoV: End-Edge-Cloud Cooperative Scheduling and Optimization
Chapter 7
Convex Optimization on Vehicular End–Edge–Cloud Cooperative Task Offloading
Abstract MEC is an effective paradigm for supporting computation-intensive applications. In vehicular networks, the MEC server is deployed at the roadside for task offloading. However, due to the unique characteristics of vehicular networks, including the high mobility of vehicles, the dynamic distribution of vehicle densities, and the heterogeneous capacities of MEC servers, it is still challenging to implement an efficient task offloading mechanism in MEC-assisted vehicular networks. In this chapter, we investigate a novel scenario of task offloading in the MEC-assisted architecture, where task uploading among multiple vehicles, task migration among MEC/cloud servers, and task computation among MEC/cloud servers are comprehensively investigated. On this basis, we formulate the Cooperative Task Offloading (CTO) problem by modeling the procedures of task upload, migration, and computation based on queuing theory, which aims at minimizing the delay of task completion. To tackle the CTO problem, we propose a Probabilistic Computation Offloading (PCO) algorithm, which enables the MEC server to independently make scheduling decisions based on the derived allocation probability. Specifically, PCO transforms the objective function into an augmented Lagrangian and achieves the optimal solution in an iterative way, based on a convex optimization framework called the Alternating Direction Method of Multipliers (ADMM). Finally, we build the simulation model, and the comprehensive simulation results show the superiority of the proposed algorithm under a wide range of scenarios. Keywords Queuing theory · Convex optimization · Task offloading · Mobile edge computing
7.1 Introduction MEC is an effective paradigm for supporting computation-intensive applications with low latency requirements in vehicular networks [1]. In particular, MEC servers are deployed at fixed wireless infrastructures, such as APs, BSs, and RSUs, and computation-intensive tasks can be offloaded from mobile vehicles to MEC servers via wireless communication [2, 3], which can better exploit computation and
storage resources at the edge side and support emerging ITSs, such as large-scale traffic sensing [4], pattern recognition with image processing [5], and virtual-reality-assisted driving [6]. However, due to the unique characteristics of vehicular networks [7], such as the high mobility of vehicles, the dynamic distribution of vehicle density, and the heterogeneous computation capacities of MEC/cloud servers, task offloading still suffers from unbalanced workload distribution among MEC servers, which can seriously degrade system performance. Recently, great efforts have been devoted to task offloading in MEC-based vehicular networks. Some researchers have proposed MEC-based service architectures for task offloading. Reference [8] applied SDN to implement a programmable, flexible, and controllable network architecture, which can potentially improve the resource utilization of MEC servers and achieve sustainable network development. Furthermore, some researchers have proposed specific task offloading strategies, which enable terminal users to adaptively offload computation-intensive tasks to MEC/fog nodes and improve overall system performance, such as energy consumption [9] and task completion delay [10]. However, cooperative task offloading between MEC/cloud servers that synthesizes multiple critical factors, including the mobility features of vehicles, heterogeneous communication, and the computation capacities of MEC/cloud servers, has not been investigated. In this chapter, we comprehensively investigate the service scenario of cooperative task offloading in MEC-based vehicular networks [11], where multiple MEC servers and the remote cloud offload computation-intensive tasks in a cooperative way. Specifically, vehicles can upload tasks associated with data sizes and computation requirements to MEC servers. The heterogeneous computation capacities of MEC servers are characterized by several key factors, such as processor number and computation rate.
The cloud server is assumed to own a sufficient number of processors, and hence a task can always be assigned to an idle processor immediately; note, however, that offloading tasks to the cloud server may incur a longer transmission delay. To implement effective scheduling, the following critical issues have to be addressed. First, wireless bandwidth competition among multiple vehicles may deteriorate the performance of task uploading. Therefore, the coordination of task uploading has to be designed by considering vehicle mobility. Second, the dynamic workload distribution may leave MEC servers with weak computation capability overloaded. Accordingly, an online allocation approach is desired to dynamically allocate workload and computation capacities among MEC/cloud servers. The contributions of this chapter are outlined as follows: • We investigate the service scenario of task offloading in MEC-assisted vehicular networks, where the mobility features of vehicles, the dynamic distribution of vehicle density, and the heterogeneous communication capabilities of MEC servers are comprehensively studied. Particularly, the horizontal and vertical cooperation between MEC/cloud servers is utilized for balancing the workload distribution in a dynamic vehicular environment. • We formulate the CTO problem by theoretically modeling the procedures of task uploading, migration, and computation based on queuing theory, which aims
at minimizing the expected service delay. The task uploading and migration procedures are characterized by the M/M/1 model, where the time consumption is determined by the dwelling time of vehicles, the vehicle arrival rate, and the service rates of task uploading and migration. Particularly, the task computation procedures of MEC and cloud servers are modeled by the M/M/C and M/M/∞ queuing models, respectively, where the heterogeneous capabilities of MEC servers are characterized by different processor numbers and processing rates. • We propose the PCO algorithm, which enables the MEC server to make scheduling decisions independently based on a derived optimal allocation probability. Specifically, we first verify the convexity of the objective function by decomposing it into multiple components and then transform the objective function of the CTO problem into an augmented Lagrangian formulation. The optimal solution is then achieved iteratively based on the ADMM framework.
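As background, the ADMM framework on which PCO is built alternates minimization over primal blocks with a dual update. The following generic sketch (not the PCO algorithm itself; the problem and all names are illustrative) solves min (x − a)² + (z − b)² subject to x = z, whose optimum is x = z = (a + b)/2:

```python
# Generic ADMM sketch for min_x,z f(x) + g(z) s.t. x = z, with
# f(x) = (x - a)^2 and g(z) = (z - b)^2; illustrative of the framework PCO uses.
def admm_consensus(a, b, rho=1.0, iters=100):
    x = z = u = 0.0  # u is the scaled dual variable
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)  # x-update: argmin of f + (rho/2)(x - z + u)^2
        z = (2 * b + rho * (x + u)) / (2 + rho)  # z-update: argmin of g + (rho/2)(x - z + u)^2
        u = u + x - z                            # dual update enforcing the consensus x = z
    return x, z
```

Each sub-step here has a closed form because the terms are quadratic; in the CTO problem the sub-steps instead operate on the augmented Lagrangian of the delay objective.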
7.2 System Architecture In this section, we present a service architecture for end–edge–cloud cooperative task offloading in vehicular networks. As shown in Fig. 7.1, it includes the mobile user layer, the MEC layer, and the cloud layer. Specifically, in the mobile user layer, the road network is divided into multiple sub-areas, and each one is assumed to be covered by one type of wireless interface, such as an AP, BS, or RSU. Due to the high mobility of vehicles, the vehicle density in one sub-area may be time-varying and unevenly distributed among these sub-areas. Furthermore, the vehicles in the same sub-area may compete for the wireless bandwidth of one wireless interface for task upload. The heterogeneous communication capacities of wireless interfaces are characterized by different service rates of task upload. For simplicity, the M/M/1 model is used for modeling the task upload procedure, which indicates that
Fig. 7.1 The service architecture for end–edge–cloud cooperative task offloading
vehicles have to wait in the task upload queue until the data of previous tasks is fully uploaded. In the MEC layer, servers can communicate with each other via wired connections. The task data uploaded at a wireless interface is then transmitted to the MEC server via wired connections. Furthermore, the computation capacities of the MEC servers are characterized by different processor numbers, varying queuing capacities, and diverse service rates of task computation. Each processor computes one task at a time. When all the processors are busy, a task has to wait in the queue until at least one of the processors becomes available. The queuing capacity limits the maximum number of pending tasks in the queue at the same time. Therefore, the M/M/C queuing model is adopted for modeling the task computation procedure of each MEC server. In the cloud layer, the central cloud server is assumed to own an unlimited number of processors, which indicates that a task arriving at the cloud server can be immediately assigned to an idle processor without pending delay. The M/M/∞ queuing model is adopted to model the task computation procedure of the cloud. However, compared with the MEC layer, task computation in the cloud layer takes extra communication time for migrating tasks from the MEC layer to the cloud layer. When the vehicle density is unevenly distributed, horizontal cooperation can effectively balance the workload in the MEC layer by migrating tasks from overloaded servers to underloaded servers. On the other hand, when the MEC layer is overloaded, vertical migration can effectively avoid intolerable pending delay at the MEC server by offloading extra tasks to the cloud server. Particularly, the M/M/1 queuing model is used for modeling task migration among MEC/cloud servers. The MEC server makes scheduling decisions for each submitted task in its service range.
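The expected delays implied by these queuing models can be sketched with standard textbook formulas, the M/M/1 sojourn time and the Erlang C waiting time for a multi-server queue; this is a generic illustration rather than the chapter's derivation:

```python
import math

def mm1_sojourn(lam, mu):
    # M/M/1 expected time in system (e.g., upload or migration queue): 1 / (mu - lam).
    # Requires lam < mu for stability.
    return 1.0 / (mu - lam)

def mmc_wait(lam, mu, c):
    # M/M/c expected queueing delay via the Erlang C formula (e.g., MEC computation queue).
    a = lam / mu                      # offered load
    rho = a / c                       # server utilization, must be < 1 for stability
    p0_inv = sum(a**k / math.factorial(k) for k in range(c)) \
             + a**c / (math.factorial(c) * (1 - rho))
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) / p0_inv
    return erlang_c / (c * mu - lam)  # expected waiting time before service starts
```

With c = 1 the Erlang C result reduces to the familiar M/M/1 waiting time ρ/(μ − λ), which is a useful sanity check.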
The detailed scheduling procedure of the MEC server is described as follows. First, the beacon messages periodically broadcast by vehicles are monitored by the wireless interface; they include task submission information, such as the task ID and submission time, and mobility features, such as the velocity and driving direction. On receiving the collected information from the wired-connected wireless interface, the MEC server maintains a submission queue for storing these new tasks. Second, the MEC server makes the scheduling decision for each task in the submission queue, including the task upload server, the task migration server, and the task computation server. Third, if the task upload server is equal to the dwelling server, the task ID is pushed into the upload queue and the vehicle waits to upload the task data. Otherwise, the vehicle keeps silent until it drives into the coverage of the determined task upload server. Fourth, after the task upload, the MEC server checks whether the task computation server is equal to the task upload server. If so, the task is pushed into the computation queue of the local server. Otherwise, the task is pushed into the migration queue and migrated to the determined task computation server. Fifth, after task migration, the task waits in the computation queue until one of the processors is available. In this chapter, the cost of retrieving the computation result is not considered, since the data size of the computation result is
always much smaller than the task itself, which is commonly adopted in the related literature [12].
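The five-step decision flow above can be sketched as follows (all function and server names are hypothetical placeholders for the scheduler's state):

```python
# Sketch of the MEC server's per-task decision flow (Sect. 7.2); all names illustrative.
def dispatch(task, upload_server, migration_server, compute_server, dwelling_server):
    steps = []
    # Step 3: upload locally if the chosen upload server is where the vehicle dwells;
    # otherwise the vehicle stays silent until it reaches the chosen upload server.
    if upload_server == dwelling_server:
        steps.append(("enqueue_upload", upload_server))
    else:
        steps.append(("wait_until_coverage", upload_server))
    # Step 4: compute locally, or migrate to the chosen computation server first.
    if compute_server == upload_server:
        steps.append(("enqueue_compute", compute_server))
    else:
        steps.append(("enqueue_migration", migration_server))
        steps.append(("enqueue_compute", compute_server))
    return steps
```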
7.3 Cooperative Task Offloading (CTO) Problem 7.3.1 Preliminary In this section, we briefly introduce the notations used in this chapter. The set of MEC servers is denoted by $M$. Each $m_i \in M$ is assumed to be wired-connected to one type of wireless interface, where the service range of $m_i$ is determined by the wireless coverage. For each $m_i \in M$, the arrival pattern of vehicles follows the Poisson process with parameter $\lambda_i$. Each vehicle is assumed to submit one task, which requires a unit of communication resource (denoted by $\delta_s$) and a unit of computation resource (denoted by $\delta_c$). The dwelling time of vehicles in the service range of $m_i$ is denoted by $l_i$. Additionally, $\rho_{ij}$ denotes the turning probability of vehicles driving from the service range of $m_i$ to that of $m_j$. The communication capability of $m_i$ is determined by the connected wireless interface, which is characterized by the service rate of uploading one task, denoted by $u_i^u$. The computation capability of $m_i \in M$ is characterized by the three-tuple $(u_i^p, c_i, n_i)$, where $u_i^p$, $c_i$, and $n_i$ represent the service rate of computing a task, the number of processors owned by $m_i$, and the maximum number of tasks tolerated by $m_i$ at the same time, respectively. The service rate of migrating one task between $m_i$ and $m_j$ is denoted by $u_{ij}^m$. The cloud server is denoted by $m_0$, and the service rate of any processor owned by $m_0$ is denoted by $u_0^p$. It is noted that the service times of task upload, migration, and computation are assumed to follow exponential distributions with the corresponding service rates, based on queuing theory. Furthermore, the allocation probability $P_{ij \to k}^{(l)}$ indicates the probability that a task submitted to $m_i$ is uploaded at $m_i$ ($l = 1$) or $m_j$ ($l = 2$) and computed by $m_k$. The primary notations are listed in Table 7.1.
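To fix ideas, the per-server notation can be collected in a small record; the field names mirror Table 7.1, but the structure itself is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class MecServer:
    # Parameters of one MEC server m_i, mirroring the notation of Table 7.1.
    lam: float        # lambda_i: vehicle arrival rate in the coverage of m_i
    dwell: float      # l_i: dwelling time of vehicles in the coverage of m_i
    u_upload: float   # u_i^u: service rate of uploading one task
    u_compute: float  # u_i^p: service rate of computing a task
    c: int            # c_i: number of processors
    n: int            # n_i: queuing capacity
```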
7.3.2 Problem Formulation

We now formulate the CTO problem, which consists of the task upload model, the task migration model, and the task computation model. First, we establish the task upload model for each MEC server $m_i \in M$ based on the $M/M/1$ queuing model. For each $m_i \in M$, given the arrival rate $\lambda_i$, the expected number of newly submitted tasks in $m_i$ choosing $m_i$ for task upload is computed by $\sum_{j=1}^{M}\sum_{k=0}^{M} \lambda_i \rho_{ij} P_{ij\to k}^{(1)}$. Similarly, for $m_j \in M, j \neq i$, the expected number of newly arrived tasks that choose $m_i$ for task upload is computed by $\sum_{k=0}^{M} \lambda_j \rho_{ji} P_{ji\to k}^{(2)}$.
7 End–Edge–Cloud Cooperative Task Offloading
Table 7.1 Summary of notations

  Notation               Description
  $M$                    The set of MEC servers, where $M = \{m_i\}$
  $\delta_s$             The unit communication resource
  $\delta_c$             The unit computation resource
  $\lambda_i$            The arrival rate of vehicles in the coverage of $m_i$
  $\rho_{ij}$            The probability of vehicles driving from $m_i$ to $m_j$
  $l_i$                  The dwelling time of vehicles in the coverage of $m_i$
  $u_i^u$                The service rate of uploading one task by $m_i$
  $c_i$                  The number of processors owned by $m_i$
  $n_i$                  The queuing capacity of $m_i$
  $u_i^p$                The service rate of computing a task by $m_i$
  $u_{ij}^m$             The service rate of migrating a task from $m_i$ to $m_j$
  $P_{ij\to k}^{(l)}$    The probability that a task of $m_i$ is uploaded at $m_i$ ($l = 1$) or $m_j$ ($l = 2$) and computed by $m_k$
The expected upload workload of $m_i$ is then formulated as follows:

$$\lambda_i^u = \sum_{j=1}^{M}\sum_{k=0}^{M}\left(\lambda_i \rho_{ij} P_{ij\to k}^{(1)} + \lambda_j \rho_{ji} P_{ji\to k}^{(2)}\right)\cdot\delta_s \qquad (7.1)$$

Based on the $M/M/1$ queuing model, given the expected upload workload $\lambda_i^u$ and the service rate of task upload $u_i^u$, the expected task upload time of $m_i$ is formulated as follows:

$$T_i^u = \frac{1}{u_i^u - \lambda_i^u} \qquad (7.2)$$
As the vehicle has to complete the task upload before it leaves the service range of $m_i$, the expected upload time at each $m_i$ cannot exceed the dwelling time $l_i$, expressed as follows:

$$T_i^u \le l_i, \quad \forall m_i \in M \qquad (7.3)$$
Therefore, the expected task upload time of the system is computed as follows:

$$E(T^u) = \frac{\sum_{i=1}^{M}\lambda_i^u T_i^u + \sum_{i=1}^{M}\sum_{j=1}^{M}\sum_{k=0}^{M}\lambda_i \rho_{ij} P_{ij\to k}^{(2)} l_i}{\sum_{i=1}^{M}\lambda_i} \qquad (7.4)$$

where $\sum_{i=1}^{M}\sum_{j=1}^{M}\sum_{k=0}^{M}\lambda_i \rho_{ij} P_{ij\to k}^{(2)} l_i$ is the summation of the dwelling time tolerated by vehicles
before entering the service range of the task upload server.

Second, we model the procedure of task migration between two MEC servers based on the $M/M/1$ queuing model. Specifically, the workload of task migration from $m_i$ to $m_k$, denoted by $\lambda_{ik}^m$, is calculated as the product of the number of tasks that choose $m_i$ for task upload and $m_k$ for task computation and the unit communication resource, which is expressed as follows:

$$\lambda_{ik}^m = \sum_{j=1}^{M}\left(\lambda_i \rho_{ij} P_{ij\to k}^{(1)} + \lambda_j \rho_{ji} P_{ji\to k}^{(2)}\right)\cdot\delta_s \qquad (7.5)$$
Based on the $M/M/1$ queuing model, the expected task migration time between $m_i$ and $m_k$ is computed as follows:

$$T_{ik}^m = \frac{1}{u_{ik}^m - \lambda_{ik}^m} \qquad (7.6)$$
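The $M/M/1$ expressions in Eqs. (7.2) and (7.6) are simple enough to evaluate directly. The sketch below is illustrative code, not from the chapter, and the numeric rates are made up; it also guards against the instability case that constraints (7.3) and (7.7) rule out:

```python
def mm1_expected_time(service_rate: float, workload: float) -> float:
    """Expected M/M/1 sojourn time T = 1 / (u - lambda), as in
    Eqs. (7.2) and (7.6). Valid only while workload < service rate;
    otherwise the queue is unstable and the expected time diverges."""
    if workload >= service_rate:
        raise ValueError("unstable queue: workload must stay below the service rate")
    return 1.0 / (service_rate - workload)

# Hypothetical rates in tasks per minute
t_upload = mm1_expected_time(6.0, 4.5)    # upload at m_i: 1/(6.0 - 4.5) min
t_migrate = mm1_expected_time(10.0, 7.0)  # migration m_i -> m_k: 1/(10 - 7) min
```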
Furthermore, the expected migration workload $\lambda_{ik}^m$ cannot exceed the service rate of task migration $u_{ik}^m$; otherwise, the service time becomes infinite. Therefore, the following constraint has to be satisfied:

$$\lambda_{ik}^m \le u_{ik}^m, \quad \forall m_i, m_k \in M \qquad (7.7)$$
Particularly, the task migration workload from the MEC layer to the cloud layer is the summation of the workloads of task migration from each $m_i \in M$ to $m_0$, which is computed as follows:

$$\lambda_0^m = \sum_{i=1}^{M}\sum_{j=1}^{M}\lambda_i \rho_{ij}\left(P_{ij\to 0}^{(1)} + P_{ij\to 0}^{(2)}\right)\cdot\delta_s \qquad (7.8)$$
Then, the expected task migration time from the MEC layer to the cloud layer is computed as follows:

$$T_0^m = \frac{1}{u_0^m - \lambda_0^m} \qquad (7.9)$$
Similarly, the following constraint should be satisfied:

$$\lambda_0^m \le u_0^m \qquad (7.10)$$
Therefore, the expected task migration time of the system is computed as the summation of the task migration times between the MEC and cloud servers divided by the total expected number of newly arrived tasks in the system:

$$E(T^m) = \frac{\sum_{i=1}^{M}\sum_{k=1}^{M}\lambda_{ik}^m T_{ik}^m + \lambda_0^m T_0^m}{\sum_{i=1}^{M}\lambda_i} \qquad (7.11)$$
Third, we utilize the $M/M/C$ and $M/M/\infty$ queuing models to establish the task computation models of the MEC and cloud servers, respectively. For each $m_i \in M$, the expected computation workload assigned to $m_i$ is computed as follows:

$$\lambda_i^p = \sum_{k=1}^{M}\sum_{j=1}^{M}\lambda_k \rho_{kj}\left(P_{kj\to i}^{(1)} + P_{kj\to i}^{(2)}\right)\delta_c, \quad \forall m_i \in M \qquad (7.12)$$
Based on the $M/M/C$ queuing model, given the expected task computation workload $\lambda_i^p$, the processor number $c_i$, the service rate of task computation $u_i^p$, and the queuing capacity $n_i$, the expected task computation time of $m_i$ is formulated as follows:

$$T_i^p = \frac{\displaystyle\sum_{k=0}^{c_i} k\,\frac{(\lambda_i^p/u_i^p)^k}{k!} + \sum_{k=c_i+1}^{n_i} k\left(\frac{\lambda_i^p}{c_i u_i^p}\right)^{k}\frac{c_i^{c_i}}{c_i!}}{\displaystyle\lambda_i^p\left(\sum_{k=0}^{c_i}\frac{(\lambda_i^p/u_i^p)^k}{k!} + \sum_{k=c_i+1}^{n_i}\left(\frac{\lambda_i^p}{c_i u_i^p}\right)^{k}\frac{c_i^{c_i}}{c_i!}\right)} \qquad (7.13)$$
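Eq. (7.13) is Little's law, $T_i^p = E[N]/\lambda_i^p$, applied to the truncated multi-processor queue with capacity $n_i$. A small numeric sketch (illustrative code, not the book's simulator) makes the formula concrete:

```python
from math import factorial

def mmcn_expected_time(lam: float, u: float, c: int, n: int) -> float:
    """Expected computation time of Eq. (7.13): build the unnormalized
    stationary weights of the M/M/c queue with capacity n, then E[N]/lambda."""
    # p_k proportional to (lam/u)^k / k! for k <= c,
    # and to (lam/(c*u))^k * c^c / c! for c < k <= n
    weights = [(lam / u) ** k / factorial(k) for k in range(c + 1)]
    weights += [(lam / (c * u)) ** k * c ** c / factorial(c)
                for k in range(c + 1, n + 1)]
    expected_n = sum(k * w for k, w in enumerate(weights))
    return expected_n / (lam * sum(weights))

# With one processor and a large capacity this collapses to the M/M/1
# time 1/(u - lam): e.g. lam = 1.0, u = 2.0 gives roughly 1.0
print(mmcn_expected_time(1.0, 2.0, 1, 200))
```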
For the cloud server $m_0$, based on the $M/M/\infty$ model, the expected computation workload and computation time are formulated as follows:

$$\lambda_0^p = \sum_{k=1}^{M}\sum_{j=1}^{M}\lambda_k \rho_{kj}\left(P_{kj\to 0}^{(1)} + P_{kj\to 0}^{(2)}\right)\delta_c \qquad (7.14)$$

$$T_0^p = \frac{1}{u_0^p} \qquad (7.15)$$
Based on Eqs. (7.12)–(7.15), the expected task computation time of the system is formulated as follows:

$$E(T^p) = \frac{\lambda_0^p T_0^p + \sum_{i=1}^{M}\lambda_i^p T_i^p}{\sum_{i=1}^{M}\lambda_i} \qquad (7.16)$$
It is noted that even with the same workload, the computation times of tasks can still differ from one another due to the heterogeneous computation capabilities of the MEC/cloud servers, which indicates an unbalanced workload distribution. Based on Eqs. (7.4), (7.11), and (7.16), the expected service delay is defined as the summation of the expected task upload time, the expected task migration time, and the expected task computation time, which is formulated as follows:

$$E(T^{delay}) = E(T^u) + E(T^m) + E(T^p) \qquad (7.17)$$
An unbalanced workload distribution among MEC servers will increase the values of $E(T^u)$, $E(T^m)$, and $E(T^p)$, which leads to an overlong service delay of tasks. The formulation of the CTO problem is presented as follows. Given the arrival rate $\lambda_i$, the dwelling time $l_i$, and the service rates of task upload $u_i^u$, task migration $u_{ij}^m$, and task computation $u_i^p$, the objective of the CTO problem is to minimize the expected service delay by searching for the optimal allocation probability $P = [P_{ij\to k}^{(l)}]$, $\forall i, j \in [1, M], k \in [0, M], l \in \{1, 2\}$, which is formulated as follows:

$$\begin{aligned} \min_{\forall P_{ij\to k}^{(l)}} \; & E(T^{delay}) \\ \text{s.t.}\; C1: \; & T_i^u \le l_i, \quad \forall i \in [1, M] \\ C2: \; & \lambda_0^m \le u_0^m \\ C3: \; & \lambda_i^p \le c_i u_i^p, \quad \forall i \in [1, M] \\ C4: \; & \lambda_{ij}^m \le u_{ij}^m, \quad \forall i, j \in [1, M] \\ C5: \; & \sum_{k=0}^{M}\sum_{l=1}^{2} P_{ij\to k}^{(l)} = 1, \quad \forall i, j \in [1, M] \\ C6: \; & 0 \le P_{ij\to k}^{(l)} \le 1, \quad \forall i, j \in [1, M], k \in [0, M], l \in \{1, 2\} \end{aligned} \qquad (7.18)$$

Based on the formulated model in Eq. (7.18), we can make the following observations. First, the high mobility of vehicles is characterized by the dwelling time of vehicles in the coverage of an MEC server and the turning probability of vehicles driving from the coverage of one MEC server to another. Second, the dynamic distribution of vehicle density is characterized by different vehicle arrival rates in the coverage of MEC servers, which represents a varying workload distribution. Third, the heterogeneous computation capacities of MEC/cloud servers are characterized by different computation models, where the MEC and cloud servers are characterized by the $M/M/C$ and $M/M/\infty$ queuing models, respectively. Particularly, MEC servers have different processor numbers and varying service rates. Fourth, the above factors can jointly cause an unbalanced workload distribution among MEC servers, which results in a severe service delay. Therefore, the objective of the optimization model is to minimize the service delay by deriving the optimal offloading probability.
7.4 Proposed Algorithm

In this section, we propose the probabilistic computation offloading (PCO) algorithm, whose basic principle is described as follows. The PCO first derives the optimal allocation probability for each MEC server by minimizing the service delay while satisfying the constraints of the CTO model. Accordingly, each MEC server can independently schedule the offloading decision for each pending task in its coverage in a probabilistic way. The allocation probability is determined by the given parameters, including the mobility features of vehicles (i.e., dwelling time and turning probability), the vehicle density distribution (i.e., vehicle arrival rate), and the computation capabilities of MEC/cloud servers (i.e., the number of processors and the computation rate). Specifically, we first analyze the convexity of the objective function by decomposing it into multiple components and examining the convexity of each component. Furthermore, we achieve the optimal allocation probability in an iterative way and implement the task offloading strategy at each MEC server based on the derived allocation probability.
7.4.1 The Convexity of the Objective Function

In this section, we analyze the convexity of the objective function $E(T^{delay})$. Based on Eq. (7.17), $E(T^{delay})$ can be decomposed into three components, $E(T^u)$, $E(T^m)$, and $E(T^p)$, which are analyzed as follows.

First, based on Eq. (7.4), $E(T^u)$ is an affine mapping of $\lambda_i^u T_i^u$ and $\lambda_i \rho_{ij} P_{ij\to k}^{(2)}$. Based on convex theory [13], affine mapping preserves convexity. Therefore, we only need to prove the convexity of $\lambda_i^u T_i^u$. We derive the first and second derivatives of the general function $f(\lambda) = \frac{\lambda}{u-\lambda}$ with respect to $\lambda \in [0, u)$, as shown in Eq. (7.19):

$$f'(\lambda) = \frac{u}{(u-\lambda)^2}, \qquad f''(\lambda) = \frac{2u}{(u-\lambda)^3} \qquad (7.19)$$

As $f''(\lambda) > 0$ for $\lambda \in [0, u)$, $f(\lambda)$ is proved to be a convex function. Furthermore, $\lambda_i^u$ is an affine mapping of $P_{ij\to k}^{(l)}$, which preserves convexity. Therefore, the first component $E(T^u)$ is proved to be convex with respect to $\lambda_i^u \in [0, u_i^u), i = 1, 2, \ldots, M$.

Second, based on Eq. (7.11), $E(T^m)$ is an affine mapping of $\lambda_{ik}^m T_{ik}^m$ and $\lambda_0^m T_0^m$. Since the structure of $\lambda_{ik}^m T_{ik}^m$ is similar to that of $\lambda_i^u T_i^u$, $\lambda_{ik}^m T_{ik}^m$ is convex with respect to $\lambda_{ik}^m \in [0, u_{ik}^m)$. Therefore, the second component $E(T^m)$ is proved to be convex with respect to $\lambda_{ik}^m \in [0, u_{ik}^m), i, k = 1, 2, \ldots, M$.
Third, based on Eq. (7.16), $E(T^p)$ is an affine mapping of $\lambda_i^p T_i^p$, which indicates that we only need to prove the convexity of $\lambda_i^p T_i^p$. For generality, we analyze the general function, which is formulated as follows:

$$f(x) = \frac{\displaystyle\sum_{k=0}^{c} k\,\frac{c^k}{k!}x^k + \sum_{k=c+1}^{n} k\,\frac{c^c}{c!}x^k}{\displaystyle\sum_{k=0}^{c} \frac{c^k}{k!}x^k + \sum_{k=c+1}^{n} \frac{c^c}{c!}x^k} \qquad (7.20)$$

where $c$, $n$, and $x = \frac{\lambda}{cu}$ are the processor number, the queuing capacity, and the ratio of the task computation workload to the aggregate computation rate of the MEC server, respectively. Since the second derivative of Eq. (7.20) is too complicated to be calculated and analyzed, we plot the curves of $f(x)$ under varying processor numbers $c$ and queuing capacities $n$. In practical applications, the processor number and the queuing capacity of an MEC server are usually not large, due to the high deployment cost and the overlong delay of tasks pending in the computation queue. Therefore, we test the value of $\lambda_i^p T_i^p$ with the processor number and the queuing capacity varying within [1, 10] and [20, 100], respectively, as shown in Figs. 7.2 and 7.3. It is observed that $f(x)$, i.e., $\lambda_i^p T_i^p$, is convex with respect to $x \in [0, 1)$ under the different processor numbers and queuing capacities. Though the convexity of $f(x)$ is not proved theoretically, we make the assumption that $\lambda_i^p T_i^p$ is convex with respect to $\lambda_i^p \in [0, u_i^p c_i)$ under the constraints $c_i \in [1, 10]$ and $n_i \in [20, 100]$. This assumption is further validated by extensive simulation results showing that the solution finally converges under the proposed convex-based approach.
7.4.2 Probabilistic Task Offloading Algorithm

In this section, we introduce the PCO algorithm based on the ADMM. The ADMM [14] is a powerful optimization method that can be applied to a variety of applications,
Fig. 7.2 The curve of $\lambda_i^p T_i^p$ under different processor numbers
Fig. 7.3 The curve of $\lambda_i^p T_i^p$ under different queuing capacities
especially in wireless networks [15, 16]. Particularly, the ADMM approach is still useful for solving non-convex problems by achieving a local or global optimal solution, which makes it suitable for solving the CTO problem. We use the PCO algorithm to derive the allocation probability offline, which can be done by the powerful cloud server. Furthermore, the solution can be performed online at each MEC server independently, with negligible communication overhead of control messages between the MEC and cloud servers.

First, in order to utilize the ADMM approach, the CTO problem has to be transformed into the form of an augmented Lagrangian. We add an $N_z$-dimensional virtual variable vector $Z = \{z_i\}$, where $Z \in \mathbb{R}_+^{N_z}$ and $N_z = M^3 + 3M^2 + 2M + 1$, to transform the inequality constraints into a set of equations. Then, Eq. (7.18) is transformed as follows:

$$\begin{aligned} \min_{\forall P_{ij\to k}^{(l)}} \; & E(T^{delay}) \\ \text{s.t.}\; C7: \; & \lambda_i^u + z_i - l_i = 0, \quad \forall i \in [1, M] \\ C8: \; & \lambda_0^m + z_{1+M} - u_0^m = 0 \\ C9: \; & \lambda_i^p + z_{i+M+1} - c_i \cdot u_i^p = 0, \quad \forall i \in [1, M] \\ C10: \; & \lambda_{ij}^m + z_{(i+1)M+j+1} - u_{ij}^m = 0, \quad \forall i, j \in [1, M] \\ C11: \; & \sum_{k=0}^{M}\sum_{l=1}^{2} P_{ij\to k}^{(l)} + z_{(i+1+M)M+j+1} - 1 = 0, \quad \forall i, j \in [1, M] \\ C12: \; & P_{ij\to k}^{(l)} + z_{i(2M^2+2M)+j(2M+2)+2k+l-2M-3} - 1 = 0, \\ & \forall i, j \in [1, M], k \in [0, M], l \in \{1, 2\} \end{aligned} \qquad (7.21)$$
It is observed that the constraints $C7 \sim C12$ are linear, and thus Eq. (7.21) can be formulated as follows:

$$\begin{aligned} \min_{\forall P, Z} \; & f(P) + g(Z) \\ \text{s.t.}\; & AP + BZ = C \end{aligned} \qquad (7.22)$$

Particularly, $g(Z)$ is an indicator function, where $g(Z) = 0$ if each $z_i \in Z$ lies in $[0, 1]$, and $g(Z) = +\infty$ otherwise. $P$ is an $N_p$-dimensional vector ($N_p = 2M^2(M+1)$), where the index of the element $P_{ij\to k}^{(l)}$ is $(i-1)\cdot 2M(M+1) + (j-1)\cdot 2(M+1) + (k-1)\cdot 2 + l$. Accordingly, $A$ is an $N_z \times N_p$ matrix, where each element $a_{kl}$ is the coefficient of the $l$th element of $P$ in the $k$th constraint. Then, $B$ is an $N_z \times N_z$ diagonal matrix and $C$ is an $N_z$-dimensional vector. On this basis, we can transform Eq. (7.22) into the augmented Lagrangian, which is formulated as follows:

$$L_\rho(P, Z, U) = f(P) + g(Z) + U^\top(AP + BZ - C) + \frac{\rho}{2}\|AP + BZ - C\|_2^2 \qquad (7.23)$$
where $U$ is the dual variable vector and $\rho > 0$ is the step size for controlling the convergence speed.

$$P^{k+1} = \arg\min_{\forall P} L_\rho(P, Z^k, U^k) \Leftrightarrow \arg\min_{\forall P}\left\{f(P) + g(Z^k) + U^k(AP + BZ^k) + \frac{\rho}{2}\|AP + BZ^k - C\|_2^2\right\} \qquad (7.24)$$

$$Z^{k+1} = \arg\min_{\forall Z} L_\rho(P^{k+1}, Z, U^k) \Leftrightarrow \arg\min_{\forall Z}\left\{g(Z) + U^k BZ + \frac{\rho}{2}\|AP^{k+1} + BZ - C\|_2^2\right\} \qquad (7.25)$$

$$U^{k+1} = U^k + \rho(AP^{k+1} + BZ^{k+1} - C) \qquad (7.26)$$
Based on the ADMM approach, the solution consists of three iterative updates, which are formulated in Eqs. (7.24)–(7.26). It is noticed that Eqs. (7.24) and (7.25) are unconstrained convex problems, which can be solved by standard convex optimization tools. Furthermore, to evaluate the stopping criterion, the primal and dual residuals are formulated in Eqs. (7.27) and (7.28), respectively, which are used for deciding whether to terminate the iteration by comparison with two thresholds, $\epsilon^{pri}$ and $\epsilon^{dual}$, at each iteration:

$$r = AP^k + BZ^k - C \qquad (7.27)$$

$$s = A^\top B(Z^k - Z^{k-1}) \qquad (7.28)$$
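To make the update pattern of Eqs. (7.24)–(7.28) concrete, here is a minimal scaled-form ADMM loop for a toy instance of Eq. (7.22): minimizing $\frac{1}{2}\|x - v\|_2^2$ subject to $x = z$, with $g$ the indicator of the box $[0, 1]$ (so $A = I$, $B = -I$, $C = 0$). This is an illustrative sketch of ours, not the chapter's solver:

```python
def admm_box(v, rho=1.0, eps_pri=1e-8, eps_dual=1e-8, t_max=2000):
    """Scaled-form ADMM for min 0.5*sum((x - v)^2) + g(z), s.t. x - z = 0,
    with g the indicator of [0, 1]; the fixed point is clip(v, 0, 1)."""
    n = len(v)
    x = [0.0] * n; z = [0.0] * n; u = [0.0] * n
    for _ in range(t_max):
        # x-update (cf. Eq. (7.24)): minimize 0.5(x-v)^2 + (rho/2)(x - z + u)^2
        x = [(vi + rho * (zi - ui)) / (1.0 + rho)
             for vi, zi, ui in zip(v, z, u)]
        z_old = z
        # z-update (cf. Eq. (7.25)): projection onto the box [0, 1]
        z = [min(1.0, max(0.0, xi + ui)) for xi, ui in zip(x, u)]
        # dual update (cf. Eq. (7.26))
        u = [ui + xi - zi for ui, xi, zi in zip(u, x, z)]
        # residuals (cf. Eqs. (7.27) and (7.28)) and stopping criterion
        r = sum((xi - zi) ** 2 for xi, zi in zip(x, z)) ** 0.5
        s = rho * sum((zi - zo) ** 2 for zi, zo in zip(z, z_old)) ** 0.5
        if r <= eps_pri and s <= eps_dual:
            break
    return z

print(admm_box([-0.5, 0.3, 1.7]))  # converges toward [0.0, 0.3, 1.0]
```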
Algorithm 7.1: Probabilistic task offloading (PCO)

Step 1 (Offline): derive the optimal allocation probability
1: Initialize ε_pri, ε_dual, P(0), Z(0), and U(0)
2: while convergence_indicator == 0 do
3:   Update P(t) based on Eq. (7.24)
4:   Update Z(t) based on Eq. (7.25)
5:   Update U(t) based on Eq. (7.26)
6:   Calculate ||r(t)||₂ = ||AP(t) + BZ(t) − C||₂
7:   Calculate ||s(t)||₂ = ||AᵀB(Z(t) − Z(t − 1))||₂
8:   if (||r(t)||₂ ≤ ε_pri and ||s(t)||₂ ≤ ε_dual) or t ≥ t_max then
9:     convergence_indicator = 1
10:  end if
11:  t = t + 1
12: end while
13: P ← P(t)
14: Compute P^c based on Eq. (7.29)

Step 2 (Online): make the scheduling decision for new tasks
15: while there exists a new task submitted by vehicles do
16:   m_i ← the dwelling server, m_j ← the next dwelling server
17:   Randomly generate tmp from the interval [0, 1]
18:   for k from 0 to ||M|| do
19:     for l from 1 to 2 do
20:       if k == 1 && l == 1 && tmp < P^c_ij(k, l) then
21:         select_indicator = 1
22:       else if k == 1 && l == 2 && tmp ≥ P^c_ij(k + 1, l − 1) && tmp < P^c_ij(k, l) then
23:         select_indicator = 1
24:       else if tmp ≥ P^c_ij(k − 1, l) && tmp < P^c_ij(k, l) then
25:         select_indicator = 1
26:       end if
27:       if select_indicator == 1 then
28:         if l == 1 then
29:           m_u ← i
30:         else
31:           m_u ← j
32:         end if
33:         m_c ← k
34:         if m_u == m_c then
35:           m_m ← ∅
36:         else
37:           m_m ← k
38:         end if
39:       end if
40:     end for
41:   end for
42: end while
On this basis, we propose the PCO algorithm, whose pseudocode is shown in Algorithm 7.1; the detailed procedure is described as follows. First, we initialize the related parameters, including $\epsilon^{pri}$, $\epsilon^{dual}$, $P^0$, $Z^0$, and $U^0$. Specifically, each element $P_{ij\to k}^{(l)} \in P$ is set to $\frac{1}{2M^2(M+1)}$, and $Z$ and $U$ are set to zero vectors. To guarantee efficiency, $\epsilon^{pri}$ and $\epsilon^{dual}$ are set to 0.5 to accelerate convergence. Second, we update $P^k$, $Z^k$, and $U^k$ in sequential order based on Eqs. (7.24), (7.25), and (7.26), respectively. Third, the two residuals $r$ and $s$ are computed based on Eqs. (7.27) and (7.28). If both $\|r\|_2 \le \epsilon^{pri}$ and $\|s\|_2 \le \epsilon^{dual}$ are satisfied, the iteration terminates. Otherwise, the iteration continues until the maximum iteration number is reached. After that, the allocation probability $P$ is obtained and the cumulative probability function $P_{ij}^c(k, l)$ is computed based on Eq. (7.29):

$$P_{ij}^c(k, l) = \begin{cases} P_{ij\to k}^{(l)} & \text{if } k = 1, l = 1 \\ P_{ij}^c(M+1, l-1) + P_{ij\to k}^{(l)} & \text{if } k = 1, l = 2 \\ P_{ij}^c(k-1, l) + P_{ij\to k}^{(l)} & \text{otherwise} \end{cases} \qquad (7.29)$$
The above procedure, shown in lines $1 \sim 14$, can be implemented offline efficiently by the powerful cloud server, since the vehicle arrival rate can be predicted or estimated based on traffic flow prediction techniques [17]. Based on $P_{ij}^c(k, l)$, the MEC server can make the scheduling decision for new tasks online. For each new task, a random variable $tmp$ is generated from [0, 1]. Then, the MEC server determines the probability interval that $tmp$ belongs to in an iterative way. Once the probability interval is determined, the MEC server determines the task upload, migration, and computation servers. The detailed procedure is shown in lines $17 \sim 42$ of Algorithm 7.1. The online phase only takes $O(2M)$ time for scheduling each new task, which is linear in the number of MEC servers.
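The online phase of Algorithm 7.1 is essentially inverse-CDF sampling over the allocation probabilities. A simplified sketch follows (illustrative code: a flat scan with a running cumulative sum replaces the book's per-$(k, l)$ interval checks, and the example probabilities are made up):

```python
import random

def schedule_task(alloc_probs, rng=random):
    """Sample one decision (computation server k, upload place l) for a
    given (m_i, m_j) pair by inverse-CDF sampling, as in Step 2 of
    Algorithm 7.1. `alloc_probs` maps (k, l) -> P^(l)_{ij->k}; the
    probabilities must sum to 1."""
    tmp = rng.random()
    cumulative = 0.0                     # running P^c_ij of Eq. (7.29)
    for decision, p in sorted(alloc_probs.items()):
        cumulative += p
        if tmp < cumulative:
            return decision
    return max(alloc_probs)              # guard against round-off

# Made-up allocation probabilities for one (m_i, m_j) pair
probs = {(0, 1): 0.1, (0, 2): 0.2, (1, 1): 0.4, (1, 2): 0.3}
rng = random.Random(7)
counts = {key: 0 for key in probs}
for _ in range(10000):
    counts[schedule_task(probs, rng)] += 1
# Empirical frequencies approach the allocation probabilities
```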
7.5 Performance Evaluation

7.5.1 Default Setting

In this section, we implement the simulation model based on the service architecture presented in Sect. 7.2. The real-world map is based on the core area within the First Ring Road of Chengdu City, China. The realistic vehicular trajectories, provided by the Didi Chuxing GAIA Initiative [18], are extracted on November 7, 2018, as traffic inputs. We have examined five service scenarios with different time periods, whose traffic characteristics are shown in Table 7.2.

Table 7.2 Traffic characteristics

  Scenario   Time period    AAR (veh/min)   VAR     ADT (s)   VDT
  1          0:00–2:00      6.8             10.2    98.7      186.7
  2          22:00–24:00    12.7            42.8    109.2     212.6
  3          20:00–22:00    19.4            109.0   122.6     143.3
  4          16:00–18:00    25.1            186.5   150.7     421.0
  5          8:00–10:00     31.7            100.9   150.6     389.9

The detailed statistics include the average arrival rate (AAR) of vehicles, the variance of the arrival rate (VAR), the average
dwelling time (ADT) of vehicles at MEC servers, and the variance of the dwelling time (VDT). Specifically, the traffic workload increases from Scenario 1 to Scenario 5, as does the dwelling time. The high values of VAR and VDT indicate the dynamic mobility features of vehicles. To better exhibit the system workload under different scenarios, Fig. 7.4 shows the heat maps of the vehicle distribution under the different traffic scenarios. It is noted that the vehicle density varies greatly across scenarios and is highly unevenly distributed within each scenario, which demonstrates a severely unbalanced workload. One cloud server and nine MEC servers are simulated in the system, where the MEC servers are uniformly distributed over the road map. The probability that a vehicle generates a computation-intensive task is set to 0.2. Each task is assumed to require 50 Giga CPU cycles with 5 MB of data. For each MEC server, the service rates of task upload $u_i^u$ and task migration $u_{ij}^m$ are randomly generated from the intervals [4.8, 7.2] and [9.6, 12] tasks per minute, which correspond to transmission rates of [3.2, 4.8] and [6.4, 8] Mb/s, respectively. Furthermore, the task computation rate $u_i^p$ is randomly selected from [1.2, 2.4] tasks per minute, which corresponds to a computation rate of [1.0, 2.0] GHz. Particularly, the processor number and the queuing capacity of each MEC server are randomly generated from [1, 4] and [25, 35], respectively. For the central cloud, the number of processors is unlimited and the computation rate is set to 2.4 tasks per minute. Similar parameter settings are adopted in the existing literature [19–21]. For the algorithm implementation, $\epsilon^{pri}$ and $\epsilon^{dual}$ are set to 0.5. The step size $\rho$ starts from 0.2 and decreases by 0.01 until it reaches 0.01. Unless stated otherwise, the simulation is conducted under the default settings.
For performance comparison, since there exists no solution that can be directly applied to the CTO problem, two heuristic algorithms, i.e., Local Server Only (LSO) and Uniform Selection (US), are adopted for performance evaluation. Besides, two competitive algorithms from the literature, called game-based computation offloading (GCO) [22] and cooperative computation offloading at MEC (CCO-MEC) [23], are tailored to suit the proposed model. The details are described as follows:
• GCO is a multi-user non-cooperative task offloading game, where each vehicle adjusts the offloading probability by striking the best load balance between the MEC and cloud servers via vertical cooperation.
• CCO-MEC is a convex optimization-based algorithm, which minimizes the service delay of tasks via horizontal cooperation between MEC servers.
• LSO always prefers to upload and compute the tasks at the local server.
• US randomly chooses the task upload and computation servers among the available candidates with equal probability.
Fig. 7.4 The heat map of the distribution of vehicle densities under different traffic scenarios. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3. (d) Scenario 4. (e) Scenario 5
For performance evaluation, we collect the following statistics: the submission time and completion time of each task, denoted by $Q_i^{submission}$ and $Q_i^{finish}$; the upload time and computation time of each task, denoted by $Q_i^{upload}$ and $Q_i^{computation}$; and the total number of tasks and the number of completed tasks, denoted by $Q^{total}$ and $Q^{completed}$. On this basis, we define the following metrics.
• Average service delay (ASD): It is defined as the summation of the service delays of all completed tasks divided by the number of completed tasks, which is computed as follows:

$$ASD = \frac{\sum_{i=1}^{Q^{completed}}\left(Q_i^{finish} - Q_i^{submission}\right)}{Q^{total} - Q^{fail}} \qquad (7.30)$$

A low value of ASD indicates that the vehicle experiences less delay before receiving the computation result, which brings a better user experience.
• Average upload time (AUT): It is defined as the mean value of the task upload time over all completed tasks, which is computed as follows:

$$AUT = \frac{\sum_{i=1}^{Q^{completed}} Q_i^{upload}}{Q^{completed}} \qquad (7.31)$$

A higher value of AUT indicates that more tasks compete for the wireless bandwidth, which degrades the ASD.
• Average computation time (ACT): It is defined as the mean value of the task computation time over all completed tasks, which is computed as follows:

$$ACT = \frac{\sum_{i=1}^{Q^{completed}} Q_i^{computation}}{Q^{completed}} \qquad (7.32)$$

A higher value of ACT indicates a higher pending delay for task computation, which further degrades the ASD.
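The three metrics can be computed directly from per-task records. A minimal sketch (the tuple layout and example values are illustrative, not the simulator's actual data structures):

```python
def compute_metrics(completed_tasks, total_tasks):
    """ASD, AUT, and ACT of Eqs. (7.30)-(7.32).

    `completed_tasks` holds one (submission, finish, upload, computation)
    tuple per completed task; `total_tasks` is Q^total. The number of
    failed tasks Q^fail is total_tasks minus the completed count."""
    n_completed = len(completed_tasks)
    n_failed = total_tasks - n_completed
    asd = sum(f - s for s, f, _, _ in completed_tasks) / (total_tasks - n_failed)
    aut = sum(u for _, _, u, _ in completed_tasks) / n_completed
    act = sum(c for _, _, _, c in completed_tasks) / n_completed
    return asd, aut, act

# Two completed tasks out of three submitted (times in minutes)
tasks = [(0.0, 3.0, 0.5, 2.0), (1.0, 5.0, 1.0, 2.5)]
print(compute_metrics(tasks, 3))  # (3.5, 0.75, 2.25)
```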
7.5.2 Simulation Results and Analysis

Effect of Traffic Workload Figure 7.5 compares the ASD of the five algorithms under different traffic workloads. As noted in Table 7.2, the vehicle arrival rate increases from Scenario 1 to Scenario 5, which results in a heavier system workload. Accordingly, the ASD of all five algorithms increases. Specifically, the ASD of CCO-MEC and LSO remains at a low level under light workloads and increases dramatically when the workload becomes heavy. The reason is that neighboring MEC servers can cooperate with each other to handle the unbalanced workload; however, when the workload exceeds the computation capability of the MEC servers, the computation time
Fig. 7.5 ASD of five algorithms under different traffic scenarios
Fig. 7.6 ACT of five algorithms under different traffic scenarios
will increase explosively. This is demonstrated in Fig. 7.6, where the ACT of LSO and CCO-MEC becomes much higher than that of the other algorithms under heavy workloads. Even so, the ACT of CCO-MEC remains lower than that of LSO, since CCO-MEC uses horizontal cooperation to alleviate the workload imbalance. Conversely, the ASD of GCO is higher than that of CCO-MEC at first but then becomes lower. This is because GCO can adaptively offload tasks to the MEC or the cloud based on the real-time workload. The US always achieves the worst performance, since it uniformly assigns task uploads to the MEC servers and tasks tolerate a long delay before the vehicles drive into the assigned server. This is verified by Fig. 7.7,
Fig. 7.7 AUT of five algorithms under different traffic scenarios
which shows that the AUT of the US is much higher than that of the other algorithms. Particularly, the PCO achieves the lowest ASD across all the service scenarios. Figure 7.8 shows the primal residual $\|r\|_2$ under different traffic scenarios, which demonstrates that the PCO can always achieve convergence in an iterative way. Based on these observations, this set of simulation results shows the scalability of the PCO against varying system workloads.

Effect of Task Computation Rate Figure 7.9 shows the ASD of the five algorithms under different task computation rates. A higher task computation rate indicates that an individual task can be served with less computation time, i.e., a lighter service workload. Accordingly, the ASD of the five algorithms decreases dramatically at first with increasing task computation rates, and then the trend slows down as the task computation rate keeps increasing. This phenomenon is explained as follows. Figures 7.10 and 7.11 show the AUT and the ACT of the five algorithms under different task computation rates, respectively. The ACT and AUT of the five algorithms are 200 s and 50 s on average at [0.12, 1.2] tasks/min, respectively, which indicates that the ACT dominates the ASD at first. When the computation rate increases, the ACT of the five algorithms decreases dramatically and falls below 50 s on average after the point of [1.2, 2.4] tasks/min. Since the service rate of task upload remains the same, the AUT stays at a stable level and dominates the ASD as the task computation rate keeps increasing. Furthermore, the US achieves the worst ASD, since it consumes excessive time for uniform task uploading among different MEC servers. In addition, the CCO-MEC achieves the highest ASD at the beginning, since it only considers offloading tasks to
Fig. 7.8 The curves of the primal residual $\|r\|_2$ under different traffic scenarios. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3. (d) Scenario 4. (e) Scenario 5
the MEC servers, which results in a high ACT when the computation capability of the MEC servers is weak. Particularly, it is observed that the PCO still achieves the lowest ASD among the five algorithms across all the scenarios. This is because the PCO can balance the AUT and the ACT by optimally determining the workload distribution among the MEC/cloud servers. This set of simulation results shows the effectiveness of the PCO against varying computation capabilities.
Fig. 7.9 ASD of five algorithms under different task computation rates
Fig. 7.10 AUT of five algorithms under different task computation rates
Fig. 7.11 ACT of five algorithms under different task computation rates
7.6 Conclusion

In this chapter, we presented an MEC-assisted architecture for cooperative task offloading in vehicular networks, where MEC servers are deployed at the roadside and are responsible for making task offloading decisions from the mobile terminal users to the MEC/cloud servers. Specifically, the mobility features of vehicles were characterized by the dwelling time and the turning probability between MEC servers. The heterogeneous communication capacities of MEC servers were characterized by different service rates of task upload and migration. Furthermore, the heterogeneity of computation capacities was characterized by different processor numbers, varying service rates of task computation, and diverse queuing capacities. With such an architecture, we built the task upload and migration models based on the $M/M/1$ queuing model and the task computation models of the MEC and cloud servers based on the $M/M/C$ and $M/M/\infty$ queuing models, respectively. By synthesizing the above models, we formulated the CTO problem, which aims at minimizing the expected system service delay by searching for the optimal allocation probability. Then, we analyzed the convexity of the objective function and proposed a probabilistic approach, which consists of offline and online phases. In the offline phase, the PCO utilizes the ADMM approach to transform the objective function into an augmented Lagrangian by adding dual variables and derives the optimal solution in an iterative way through three iterative updates. In the online phase, the PCO uses a probabilistic approach to determine the scheduling decision for each new task based on the optimal allocation probability. Finally, we built the simulation model and implemented the proposed algorithm as well as four competitive algorithms.
The comprehensive simulation results validated the superiority of the PCO under a wide range of service scenarios.
References

1. J. Liu, J. Wan, B. Zeng, Q. ruo Wang, H. Song, M. Qiu, A scalable and quick-response software defined vehicular network assisted by mobile edge computing. IEEE Commun. Mag. 55(7), 94–100 (2017)
2. H. Alshaer, E. Horlait, An optimized adaptive broadcast scheme for inter-vehicle communication, in Proceedings of the IEEE 61st Vehicular Technology Conference (VTC'05), vol. 5 (2005), pp. 2840–2844
3. H. Alshaer, J.M.H. Elmirghani, Road safety based on efficient vehicular broadcast communications, in Proceedings of the IEEE Intelligent Vehicles Symposium (IV'09) (2009), pp. 1155–1160
4. N. Mitrovic, M.T. Asif, J. Dauwels, P. Jaillet, Low-dimensional models for compressed sensing and prediction of large-scale traffic data. IEEE Trans. Intell. Transp. Syst. 16(5), 2949–2954 (2015)
5. B. Kaur, J. Bhattacharya, A convolutional feature map based deep network targeted towards traffic detection and classification. Expert Syst. Appl. 124, 119–129 (2018)
6. T. Sawabe, M. Kanbara, N. Hagita, Diminished reality for acceleration – motion sickness reduction with vection for autonomous driving, in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR'16) (2016), pp. 297–299
7. P. Dai, K. Liu, X. Wu, Z. Yu, H. Xing, V.C.S. Lee, Cooperative temporal data dissemination in SDN-based heterogeneous vehicular networks. IEEE Internet Things J. 6(1), 72–83 (2019)
8. X. Huang, R. Yu, J. Kang, Y. He, Y. Zhang, Exploring mobile edge computing for 5G-enabled software defined vehicular networks. IEEE Wirel. Commun. 24(6), 55–63 (2017)
9. Z. Chang, Z. Zhou, T. Ristaniemi, Z. Niu, Energy efficient optimization for computation offloading in fog computing system, in Proceedings of the IEEE Global Communications Conference (GLOBECOM'17) (2017), pp. 1–6
10. H. Cao, J. Cai, Distributed multiuser computation offloading for cloudlet-based mobile cloud computing: a game-theoretic machine learning approach. IEEE Trans. Veh. Technol. 67(1), 752–764 (2018)
11. P. Dai, K. Hu, X. Wu, H. Xing, F. Teng, Z. Yu, A probabilistic approach for cooperative computation offloading in MEC-assisted vehicular networks. IEEE Trans. Intell. Transp. Syst. 23(2), 899–911 (2022)
12. F. Mehmeti, T. Spyropoulos, Performance analysis of mobile data offloading in heterogeneous networks. IEEE Trans. Mob. Comput. 16(2), 482–497 (2017)
13. S. Boyd, L. Vandenberghe, Convex optimization. J. Am. Stat. Assoc. 100, 1097 (2005)
14. S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
15. J.F. Mota, J.M. Xavier, P.M. Aguiar, M. Püschel, D-ADMM: a communication-efficient distributed algorithm for separable optimization. IEEE Trans. Signal Process. 61(10), 2718–2723 (2013)
16. C. Liang, F.R. Yu, H. Yao, Z. Han, Virtual resource allocation in information-centric wireless networks with virtualization. IEEE Trans. Veh. Technol. 65(12), 9902–9914 (2016)
17. Y. Lv, Y. Duan, W. Kang, Z.X. Li, F. Wang, Traffic flow prediction with big data: a deep learning approach. IEEE Trans. Intell. Transp. Syst. 16(2), 865–873 (2015)
18. DiDi, GAIA Open Data Set. https://outreach.didichuxing.com/research/opendata/en/. Online; Accessed 14 Mar 2020
References
143
19. B. Coll-Perales, J. Gozálvez, M. Gruteser, Sub-6GHz assisted MAC for millimeter wave vehicular communications. IEEE Commun. Mag. 57(3), 125–131 (2019) 20. J. Li, H. Gao, T. Lv, Y. Lu, Deep reinforcement learning based computation offloading and resource allocation for MEC, in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC’18) (2018), pp. 1–6 21. H. Chen, D. Zhao, Q. Chen, R. Chai, Joint computation offloading and radio resource allocations in wireless cellular networks, in Proceedings of the 10th International Conference on Wireless Communications and Signal Processing (WCSP’18) (2018), pp. 1–6 22. Y. Wang, P. Lang, D. Tian, J. Zhou, X. Duan, Y. Cao, D. Zhao, A game-based computation offloading method in vehicular multi-access edge computing networks. IEEE Internet Things J. 7, 4987–4996 (2020) 23. W. Fan, Y. Liu, B. Tang, F. Wu, Z. Wang, Computation offloading based on cooperations of mobile edge computing-enabled base stations. IEEE Access 6, 22622–22633 (2018)
Chapter 8
An Approximation Algorithm for Joint Data Uploading and Task Offloading in IoV
Abstract This chapter investigates cooperative data uploading and task offloading in heterogeneous IoV. First, considering that different tasks may require common input data and can be offloaded to heterogeneous nodes, we present an end–edge–cloud cooperative data uploading and task offloading architecture. Second, we formulate a Joint Data Uploading and Task Offloading (JDUTO) problem, which aims at minimizing the average service delay by jointly considering common input data, heterogeneous resources, and vehicle mobility. JDUTO is proved to be NP-hard by reducing the well-known NP-hard Capacitated Vehicle Routing Problem (CVRP) to it in polynomial time. Third, we propose an approximation algorithm. Specifically, we first design an optimal algorithm that selects a set of vehicles with common data requirements for data uploading. Then, we adopt the Lagrange multiplier method to derive the optimal resource allocation. Finally, we design a filter mechanism-based Markov approximation algorithm for task offloading. We prove that the optimality gap of the approximation algorithm is (1/β)·log|O|, where β is a positive constant and |O| is the size of the solution space. Finally, we build a simulation model based on real vehicle trajectories and give comprehensive performance evaluations, which demonstrate the superiority of the proposed solution.

Keywords Data uploading · Task offloading · Resource allocation · Approximation algorithm
8.1 Introduction

Recent advances in IoV have paved the way for the development of emerging ITSs, such as cooperative autonomous driving [1] and vehicular cyber-physical systems [2]. To enable such applications, it is of great significance to investigate the efficient processing of data-driven and computation-intensive tasks in IoV [3]. Currently, end–edge–cloud cooperative task processing has become a promising architecture in IoV, which has attracted great research attention [4–6]. However, the ever-increasing data requirements of various intelligent IoV applications pose great challenges to task processing. For instance, modern autonomous vehicles may
sense or generate over 1 GB of data per second [7]. Clearly, given the limited wireless bandwidth and the stringent timing requirements of ITS applications, it is impractical to simply transmit all data items to the edge or cloud nodes for task processing. Great efforts have been devoted to task offloading in vehicular networks. Sun et al. [8], Chen et al. [9], Liu et al. [10], and Dai et al. [11] investigated task offloading in heterogeneous IoV environments, where vehicles, edge nodes, and cloud nodes have different computing, transmission, and storage capacities. However, these studies mainly focused on enhancing the utilization of heterogeneous resources and cannot adapt resource allocation to dynamic vehicular environments. Some studies [12–15] jointly considered task offloading and resource allocation in IoV, aiming at satisfying the different resource and delay requirements of tasks. However, these studies did not consider that different offloaded tasks may require common input data. Ning et al. [16], Rahim et al. [17], Liu et al. [18], and Asheralieva and Niyato [19] took cache mechanisms or data characteristics into consideration and aimed to enhance bandwidth utilization by prefetching shared data from the cloud to the edge nodes. Nevertheless, none of these studies considered the common data demands of different tasks when making offloading decisions. In reality, common input data may be required by different tasks. For instance, the video data sensed by the on-board camera of one vehicle may be required by other vehicles to enable augmented reality applications [20]. This chapter is dedicated to addressing the following issues for joint data uploading and task offloading in heterogeneous IoV. First, the heterogeneity of both tasks and resources increases the difficulty of exploring the synergistic effects of task offloading and resource allocation.
In particular, tasks may have different data requirements, computation demands, and temporal/spatial features, while the offloading nodes may have different computation, transmission, and memory resources. Therefore, it is non-trivial to jointly schedule task offloading and resource allocation. Second, it is challenging to strike the best balance between reducing the uploading delay and the computing delay. Specifically, for tasks with common input data, the uploading delay can be reduced by uploading the commonly required data to only one of the nodes and offloading all the corresponding tasks to that node. Nonetheless, the computing delay of a task processed at that offloading node inevitably increases, since more tasks compete for its limited computation resources. Third, the dynamic characteristics of IoV further increase the scheduling difficulty. In particular, the stochastic mobility of vehicles may result in intermittent connections between vehicles and the corresponding offloading nodes. Extra data transfer overhead and task service delay may be incurred when vehicles move out of the coverage of their offloading nodes, which may seriously deteriorate the overall system performance. With the above motivation, this chapter focuses on synthesizing the characteristics of heterogeneous resources, common input data, and vehicle mobility, aiming at enabling efficient data uploading and task offloading in IoV. The main contributions of this chapter are listed as follows:
• We present an end–edge–cloud cooperative data uploading and task offloading architecture that synthesizes heterogeneous resources, common input data, and vehicle mobility. In this scenario, each task is composed of multiple independent subtasks. Different subtasks may require partially or fully common input data for processing, which needs to be uploaded by only one of the sensing nodes. Cloud nodes, edge nodes, and vehicles with heterogeneous transmission and computation resources collaborate on task processing via V2C, V2I, and V2V communications.
• We formulate a Joint Data Uploading and Task Offloading (JDUTO) problem, which aims at minimizing the average service delay. Specifically, we first model vehicle mobility based on a multi-class Markov chain to estimate the probability of successful task offloading. On this basis, the service delay (including uploading delay, computing delay, and downloading delay) is modeled by considering vehicle mobility, task and data features, and heterogeneous resources. Finally, we prove NP-hardness by reducing the Capacitated Vehicle Routing Problem (CVRP), a well-known NP-hard problem, to JDUTO in polynomial time.
• We propose an approximation solution, which consists of three stages. First, it selects the set of vehicles for data uploading by analyzing the transmission rates between vehicles and the offloading nodes. Second, it allocates transmission and computation resources via the Lagrange multiplier method, which yields the optimal allocation. Third, it determines the task offloading nodes via a filter mechanism-based Markov approximation algorithm. The algorithm first transforms the offloading problem into a Markov approximation optimization problem. Then, an Initialization Strategy is designed to maximize the data sharing degree, and a State Transition Strategy is designed to determine the rescheduled tasks. We prove that the gap between the approximation algorithm and the optimal solution is (1/β)·log|O|, where β is a positive constant and |O| is the size of the solution space, and that the algorithm converges with probability close to 1.
8.2 System Architecture

As shown in Fig. 8.1a, the cloud node resides in the backbone network and can communicate with vehicles via V2C communication. The cloud node collects global information on vehicles and edge nodes (i.e., locations and resource status) and makes scheduling decisions, including the set of task offloading vehicles, the corresponding edge nodes, the uploading data, and the allocated V2I communication bandwidth, so as to minimize the system service delay, which consists of uploading delay, computing delay, and downloading delay. Edge nodes such as RSUs and micro base stations can communicate with vehicles via V2I communication, such as DSRC and C-V2X [18]. Vehicles compete for both the transmission and computation resources of edge nodes during task offloading. Moreover, due to the mobility of
Fig. 8.1 End–edge–cloud cooperative data uploading and task offloading. (a) System scenario. (b) Scheduling example
vehicles, task offloading may fail when vehicles move out of the coverage of the scheduled edge nodes. In this case, the task will be sent to the cloud for processing, and the result is returned to the corresponding vehicle via V2C communication. Vehicles with available computation resources can serve as mobile edge nodes and assist task offloading via V2V communication. Each vehicle may generate tasks, and each task consists of multiple independent subtasks. Different tasks/subtasks may require common data for processing. In such a context, commonly requested data needs to be uploaded by only one of the vehicles, and all the corresponding tasks/subtasks can be processed by the server node. To be specific, as shown in Fig. 8.1, suppose the subtasks of v_1, v_2, and v_3 require common data d_1 and d_2. Due to coverage constraints, v_1 and v_2 can process their tasks locally or
offload them to edge node e or cloud node c, whereas v_3 can process its tasks locally or at c. Figure 8.1b shows one of the offloading strategies, where n_{1,1} and n_{2,1} are offloaded to e, and n_{2,2} and n_{3,1} are offloaded to c. In this case, common data d_1 and d_2 need to be uploaded only once to e (or c) by v_1 (or v_3). The saved transmission resource between v_2 and e (or c) can be allocated to other vehicles to reduce their uploading delay and thus decrease the average service delay of the whole system. The system procedure is as follows. First, the beacon messages of vehicles and edge nodes are sent to the cloud node, including the status of their available bandwidth, computation and memory resources, coverage, and real-time location. Meanwhile, vehicles send their task requests to the cloud node, including the input data, required computation resources, and output data sizes. Second, based on the collected information and the stochastic vehicle mobility model, the cloud node makes the offloading decisions, including the offloading nodes with allocated bandwidth and computation resources, and the set of uploading vehicles with the corresponding data items. Third, the cloud node sends the decisions to the requesting vehicles and offloading nodes. Fourth, the vehicles upload the data based on the allocated bandwidth, and each node processes its allocated subtasks with the allocated computation resources. Fifth, the node returns the result to the vehicle via V2I communication if the vehicle is still within its coverage. Otherwise, the vehicle will be scheduled to upload the task to the cloud for reprocessing.
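The benefit of uploading each commonly requested data item only once per node can be sketched in a few lines. The scenario below mirrors the Fig. 8.1b example; all function and variable names are illustrative, not taken from the chapter.

```python
from collections import defaultdict

def uploads_with_sharing(assignment, input_data, data_size):
    """Total bytes uploaded when each data item is sent at most once per node.

    assignment: subtask -> node; input_data: subtask -> set of data ids;
    data_size: data id -> bytes.
    """
    per_node = defaultdict(set)            # node -> union of data ids it needs
    for subtask, node in assignment.items():
        per_node[node] |= input_data[subtask]
    return sum(data_size[d] for need in per_node.values() for d in need)

def uploads_without_sharing(assignment, input_data, data_size):
    # Naive baseline: every subtask uploads its own copy of each input.
    return sum(data_size[d] for s in assignment for d in input_data[s])

# Fig. 8.1b-like scenario: subtasks on edge node e and cloud c share d1, d2.
assignment = {"n11": "e", "n21": "e", "n22": "c", "n31": "c"}
input_data = {"n11": {"d1", "d2"}, "n21": {"d1", "d2"},
              "n22": {"d1"}, "n31": {"d1", "d2"}}
data_size = {"d1": 10, "d2": 20}
print(uploads_with_sharing(assignment, input_data, data_size))     # 60
print(uploads_without_sharing(assignment, input_data, data_size))  # 100
```

The saved 40 units of transmission in this toy case correspond to the bandwidth freed for other vehicles in the chapter's example.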
8.3 Joint Data Uploading and Task Offloading (JDUTO) Problem

8.3.1 Preliminary

The set of vehicles is denoted by V = {v_1, ..., v_{|V|}}, and the set of edge nodes is denoted by E = {e_1, ..., e_{|E|}}. Each node b_i ∈ V ∪ E is characterized by a three-tuple <B_i, L_i, M_i>, which indicates the wireless bandwidth, computation capability, and memory of b_i, respectively. Moreover, denote the cloud node as C. Assume the transmission and computation resources of the cloud node are evenly shared among all data and tasks, respectively. Thus, the processing capability of C is characterized by a two-tuple <R_c, L_c>, where R_c and L_c represent the transmission rate and computation rate allocated to each data item and subtask, respectively. The V2I and V2V communication ranges are denoted by O_{V2I} and O_{V2V}, respectively. Then, denote the distance between vehicle v_h and node b_i at time t as dis_{v_h,b_i}(t); the set of nodes that can communicate with v_h is B_{v_h}(t) = {b_i | (b_i ∈ V ∧ dis_{v_h,b_i}(t) ≤ O_{V2V}) ∨ (b_i ∈ E ∧ dis_{v_h,b_i}(t) ≤ O_{V2I})} ∪ {C}.
Table 8.1 Primary notations

Symbol      Description
D           Set of data
b_i         The i-th node with task processing capability
B_i         Wireless bandwidth of b_i
L_i         Computation capability of b_i
M_i         Memory of b_i
v_h         The h-th vehicle
d_r         The r-th data item
m_r         Size of d_r
n_k         The k-th task
h_k         Index of the requesting vehicle of n_k
B_k         Set of subtasks of n_k
n_{k,j}     The j-th subtask of n_k
D_{k,j}     Set of input data of n_{k,j}
l_{k,j}     Required computation resources of n_{k,j}
o_{k,j}     Output data size of n_{k,j}
Further, the set of data is denoted by D = {d_1, ..., d_{|D|}}, and the size of d_r ∈ D is denoted by m_r. Denote the set of tasks as N = {n_1, ..., n_{|N|}}. Each task n_k is associated with a two-tuple <h_k, B_k>, where h_k is the index of the requesting vehicle and B_k is the set of its subtasks. In particular, each subtask n_{k,j} ∈ B_k is associated with a three-tuple <D_{k,j}, l_{k,j}, o_{k,j}>, where D_{k,j} (D_{k,j} ⊆ D) is the set of input data, l_{k,j} is the required computation resources, and o_{k,j} is the output data size. For ease of expression, we adopt y_{k,j,r} to indicate whether d_r is an input data item of n_{k,j}. In addition, we introduce a binary variable χ_{k,j,i} (∀b_i ∈ B_{v_{h_k}}(t)) to indicate whether n_{k,j} is offloaded to node b_i, and a binary variable η_{k,j,r,i} to indicate whether the requesting vehicle of n_{k,j} is selected to upload d_r to node b_i. The primary notations are summarized in Table 8.1.
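The notation above maps naturally onto lightweight containers; this is a minimal sketch, and the class and field names are assumptions for illustration, not part of the chapter's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Node:                # b_i in V ∪ E, characterized by <B_i, L_i, M_i>
    bandwidth: float       # B_i
    compute: float         # L_i
    memory: float          # M_i

@dataclass
class Subtask:             # n_{k,j} = <D_{k,j}, l_{k,j}, o_{k,j}>
    input_data: set        # D_{k,j}: ids of required data items
    cycles: float          # l_{k,j}: required computation resources
    output_size: float     # o_{k,j}: output data size

@dataclass
class Task:                # n_k = <h_k, B_k>
    vehicle: int           # h_k: index of the requesting vehicle
    subtasks: list = field(default_factory=list)   # B_k

task = Task(vehicle=1,
            subtasks=[Subtask({"d1", "d2"}, cycles=4e9, output_size=2e5)])
print(task.subtasks[0].input_data)
```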
8.3.2 Vehicle Mobility Model

In this section, we model the variation of the distance between vehicle v_h and node b_i by jointly analyzing the relative heading state and the change of relative distance. Specifically, we first derive the probability distribution of the mobility model through a multi-class Markov chain. Then, we analyze the communication probability between v_h and b_i at each time slot based on this distribution. Finally, we calculate the probability of successful data transmission between the two nodes during the task offloading procedure.
We define the relative heading state to represent the relative driving direction of a vehicle and a node. Given vehicle v_h and node b_i, if b_i is a static node (i.e., an edge node), the relative heading state of b_i is set to 0; otherwise, it is estimated from the angle between the driving direction of v_h and the vector from v_h to b_i: if the angle is in the range [0°, 90°], the relative heading state of v_h is set to 1; if the angle is in the range (90°, 180°], the state is set to −1. We formulate the relative distance based on discrete Markov chains [21]. Given Dis_max as the maximum distance between any two nodes in the system, we divide Dis_max into |P| (|P| = ⌈Dis_max/ω⌉) segments of unit length ω. Let dis_{v_h,b_i}(t) be the relative distance between v_h and b_i at time t; the relative distance state between the two nodes is computed as φ_{v_h,b_i}(t) = ⌈dis_{v_h,b_i}(t)/ω⌉. Then, we formulate the change of relative distance under different relative heading states as a multi-class Markov chain. Denote the relative heading state pairs between any two nodes as H = {(h_{v_h,b_i}(t), h_{b_i,v_h}(t)) | ∀v_h ∈ V, b_i ∈ V ∪ E}. As different heading state pairs h̄ ∈ H may share mobility patterns, we adopt Z to denote the set of latent mobility patterns. Then, given a latent mobility pattern z ∈ Z and relative distance state φ, the probability that φ changes to φ' in the next timestamp is denoted by Pr(φ' | φ, z). On this basis, utilizing historical trajectories, we first calculate the occurrence frequency of the transition from φ to φ' for each relative heading state pair. Then, the Expectation–Maximization method [22] is adopted to calculate the values of Pr(φ' | φ, z) and Pr(z | h̄). Finally, we derive the variation of the distance between vehicle v_h and node b_i based on their latest τ GPS points. Specifically, denote the set of τ relative distance transitions as G = {(φ(t), φ'(t+1)) | t ∈ {−τ, ..., −1}}; the probability that the communication pair of v_h and b_i belongs to the latent pattern z ∈ Z is calculated as

$$\Pr(z \mid G) = \frac{\sum_{(\phi(t), \phi'(t+1)) \in G} \Pr\left(\phi'(t+1) \mid \phi(t), z\right)}{\sum_{z' \in Z} \sum_{(\phi(t), \phi'(t+1)) \in G} \Pr\left(\phi'(t+1) \mid \phi(t), z'\right)} \qquad (8.1)$$
On this basis, given the driving directions and distance of two nodes v_h and b_i, we first initialize the distance state distribution of the two nodes, denoted as π_{v_h,b_i}(0). The corresponding state transition matrix is calculated as

$$Tr_{v_h,b_i} = \left[ \sum_{z \in Z} \Pr\left(\phi' \mid \phi, z\right) \Pr(z \mid G) \right]_{|P| \times |P|} \qquad (8.2)$$
where Σ_{z∈Z} Pr(φ' | φ, z) Pr(z | G) is the transition probability of the relative distance state. Let π_{v_h,b_i}(t) be the probability distribution of the relative distance state at time t; from the transition matrix, it is calculated by

$$\pi_{v_h,b_i}(t) = \pi_{v_h,b_i}(0) \times \underbrace{Tr_{v_h,b_i} \times \cdots \times Tr_{v_h,b_i}}_{t} \qquad (8.3)$$
Therefore, the communication probability between node b_i and vehicle v_h at time t is calculated as

$$\Pr_{v_h,b_i}(t) = \sum_{j \le \lceil O/\omega \rceil} \pi_{v_h,b_i}^{(j)}(t) \qquad (8.4)$$

where π^{(j)} denotes the j-th entry of the distribution and O ∈ {O_{V2V}, O_{V2I}} is the communication coverage. Since v_h needs to maintain the connection with b_i during the task processing procedure (i.e., [0, ς]), the probability of successful data transmission when v_h offloads tasks to b_i is calculated as

$$p_{v_h,b_i} = \min_{\iota \in \{0, \ldots, \varsigma\}} \Pr_{v_h,b_i}(\iota) \qquad (8.5)$$
The mobility model enters the JDUTO problem in two ways. First, the distance between uploading vehicles and offloading nodes changes constantly, causing an unstable signal-to-noise ratio (SNR). To measure the transmission rate of vehicles, the mobility model is adopted to calculate the expected distance between two nodes, as used in Eq. (8.6). Second, as described above, a task needs to be uploaded to the cloud node for reprocessing when the vehicle moves out of the coverage of its offloading node. In that case, the reprocessing delay must be added to the subtask service delay. In practice, however, it is difficult to obtain the exact future locations of vehicles due to their dynamic movement. Thus, we adopt the mobility model to estimate the expected service delay of subtasks, as defined in Eq. (8.19).
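Equations (8.2)–(8.5) amount to evolving a distance-state distribution through a row-stochastic matrix and taking the worst-case in-coverage probability over the processing horizon. A minimal sketch, assuming the distance states are ordered so that the first `coverage_states` entries lie within communication range; names are illustrative.

```python
import numpy as np

def success_probability(pi0, Tr, coverage_states, horizon):
    """p_{v_h,b_i} = min over t in [0, horizon] of the in-coverage mass.

    pi0: initial distance-state distribution; Tr: |P|x|P| row-stochastic
    transition matrix (Eq. 8.2); coverage_states corresponds to ceil(O/omega).
    """
    pi, p = np.asarray(pi0, dtype=float), 1.0
    for _ in range(horizon + 1):
        p = min(p, pi[:coverage_states].sum())   # Eq. (8.4) at this slot
        pi = pi @ Tr                             # Eq. (8.3): advance one slot
    return p

# Toy example with two distance states: 0 = in coverage, 1 = out of coverage.
Tr = np.array([[0.9, 0.1],
               [0.3, 0.7]])
p = success_probability([1.0, 0.0], Tr, coverage_states=1, horizon=2)
print(round(p, 3))
```

With these numbers the in-coverage mass shrinks each slot, so the minimum is attained at the end of the horizon.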
8.3.3 Delay Model

Uploading Delay First, since multiple subtasks can access the same edge node or vehicle simultaneously, the V2I/V2V wireless bandwidth is contended for by the input data of these subtasks. Given the offloading decisions of subtasks, the set of data required by node b_i (b_i ∈ V ∪ E) is the union of all input data allocated to it, denoted as Re_i. Then, suppose n_{k,j} is offloaded to b_i (i.e., χ_{k,j,i} = 1); each data item d_r (d_r ∈ D_{k,j}) has to compete for bandwidth with other data. Let v_{h_k} denote the uploading vehicle of d_r (i.e., η_{k,j,r,i} = 1) and α_{k,j,r,i} denote the ratio of bandwidth allocated to d_r. Given the bandwidth B_i, the transmission rate for uploading d_r from v_{h_k} to b_i is calculated as

$$R_{k,j,r,i} = \alpha_{k,j,r,i} B_i \log_2 \left( 1 + \frac{P_{v_{h_k},i} \left|\delta_{v_{h_k},i}\right|^2 \zeta \left|d_{v_{h_k},i}\right|^{-\theta}}{N_0} \right) \qquad (8.6)$$

where P_{v_{h_k},i} is the transmission power of v_{h_k}, δ_{v_{h_k},i} is the channel fading gain between v_{h_k} and b_i, ζ is the system constant, d_{v_{h_k},i} is the expectation of the distance,
which can be derived from the mobility model, θ is the path loss exponent, and N_0 is the noise power [23]. Then, given the data size m_r, the uploading delay of d_r is calculated as follows:

$$t^{trans}_{k,j,r,i} = \begin{cases} 0, & b_i = v_{h_k} \\ \dfrac{m_r}{R_{k,j,r,i}}, & \text{otherwise} \end{cases} \qquad (8.7)$$
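Equations (8.6)–(8.7) can be sketched directly. The numeric values below are illustrative only and are not taken from the chapter's experiments.

```python
import math

def upload_rate(alpha, B, P_tx, gain, zeta, dist, theta, N0):
    """Eq. (8.6): Shannon rate of uploading d_r over the allocated share."""
    snr = P_tx * abs(gain) ** 2 * zeta * dist ** (-theta) / N0
    return alpha * B * math.log2(1.0 + snr)

def upload_delay(m_r, rate, local=False):
    """Eq. (8.7): zero if the data already resides on the offloading node."""
    return 0.0 if local else m_r / rate

r = upload_rate(alpha=0.5, B=10e6, P_tx=0.1, gain=1.0,
                zeta=1.0, dist=100.0, theta=2.0, N0=1e-9)
print(upload_delay(m_r=5e6, rate=r))
```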
Note that the total allocated bandwidth cannot exceed the maximum bandwidth of b_i, which is formulated as follows:

$$\sum_{k=1}^{|N|} \sum_{j=1}^{|B_k|} \sum_{r=1}^{|D|} \eta_{k,j,r,i} \cdot \alpha_{k,j,r,i} = 1 \qquad (8.8)$$
Second, if the subtask is offloaded to the cloud node, then given the transmission rate R_c, the transmission delay of data d_r ∈ D_{k,j} is calculated as

$$t^{trans}_{k,j,r,c} = \frac{m_r}{R_c} \qquad (8.9)$$
On this basis, the uploading delay of subtask n_{k,j} is the maximum uploading delay over its input data, which is calculated as follows:

$$t^{trans}_{k,j,i} = \max_{\forall d_r \in D_{k,j}} \left\{ \sum_{u=1}^{|N|} \sum_{s=1}^{|B_u|} I\{\eta_{u,s,r,i} = 1\} \, t^{trans}_{u,s,r,i} \right\} \qquad (8.10)$$
where b_i ∈ V ∪ E ∪ {C} and I{·} is the indicator function: if η_{u,s,r,i} = 1, then I{·} = 1; otherwise, I{·} = 0.

Computing Delay First, the computation resources of edge nodes and vehicles are contended for by the multiple allocated subtasks. Let β_{k,j,i} denote the ratio of computation resources allocated to n_{k,j}; then, given the computation resource L_i of node b_i, the computing delay of n_{k,j} is calculated as

$$t^{com}_{k,j,i} = \frac{l_{k,j}}{\beta_{k,j,i} L_i} \qquad (8.11)$$
Similarly, the allocated computation resource ratios must sum to one, which is formulated as follows:

$$\sum_{k=1}^{|N|} \sum_{j=1}^{|B_k|} \chi_{k,j,i} \cdot \beta_{k,j,i} = 1 \qquad (8.12)$$
Second, for the cloud node, given its computation resource L_c, the computing delay is formulated as follows:

$$t^{com}_{k,j,c} = \frac{l_{k,j}}{L_c} \qquad (8.13)$$
Downloading Delay First, the bandwidths of edge nodes and vehicles are evenly shared among subtasks for result downloading. Given the bandwidth B_i of node b_i (b_i ∈ V ∪ E), the transmission rate for downloading the result from b_i to v_{h_k} is calculated as

$$R^{down}_{k,j,i} = \frac{B_i \log_2 \left( 1 + \frac{P_{i,v_{h_k}} \left|\delta_{i,v_{h_k}}\right|^2 \zeta \left|d_{i,v_{h_k}}\right|^{-\theta}}{N_0} \right)}{\sum_{k=1}^{|N|} \sum_{j=1}^{|B_k|} \chi_{k,j,i}} \qquad (8.14)$$
Then, the downloading delay of subtask n_{k,j} is calculated as

$$t^{down}_{k,j,i} = \frac{o_{k,j}}{R^{down}_{k,j,i}}, \quad \forall b_i \in V \cup E \qquad (8.15)$$
Second, for the cloud node, given the transmission rate R_c, the downloading delay of n_{k,j} is calculated as

$$t^{down}_{k,j,c} = \frac{o_{k,j}}{R_c} \qquad (8.16)$$
8.3.4 Problem Formulation

According to the service procedure introduced in Sect. 8.2, the service delay of each subtask n_{k,j}, denoted by t_{k,j,i}, is derived from the following two cases:

1. If the data uploading and downloading are successfully completed between the vehicle and the offloading node, the service delay of n_{k,j} is calculated as

$$t_{k,j,i} = t^{trans}_{k,j,i} + t^{com}_{k,j,i} + t^{down}_{k,j,i} \qquad (8.17)$$

2. If the communication is intermittent and the data uploading or downloading fails, the subtask has to be uploaded to the cloud node for reprocessing. In this case, the service delay of n_{k,j} is calculated as

$$t'_{k,j,i} = t_{k,j,i} + \max_{\forall d_r \in D_{k,j}} \left\{ \frac{m_r}{R_c} \right\} + \frac{l_{k,j}}{L_c} \qquad (8.18)$$
With the above analysis, the expected service delay of n_{k,j} is computed as follows:

$$\mathbb{E}\left[t_{k,j,i}\right] = p_{v_{h_k},i} \, t_{k,j,i} + \left(1 - p_{v_{h_k},i}\right) t'_{k,j,i} \qquad (8.19)$$

where p_{v_{h_k},i} is the probability of successful data transmission between vehicle v_{h_k} and node b_i. Then, the service delay of task n_k is defined as the maximum delay over its subtasks B_k, formulated as follows:

$$t_k = \max_{\forall n_{k,j} \in B_k} \left\{ \sum_{i=1}^{|V|+|E|+1} \chi_{k,j,i} \cdot \mathbb{E}\left[t_{k,j,i}\right] \right\} \qquad (8.20)$$
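Equations (8.17)–(8.20) combine into a small computation; a hedged sketch with illustrative inputs (the cloud reprocessing terms are the max_r m_r/R_c and l_{k,j}/L_c terms of Eq. (8.18)):

```python
def expected_subtask_delay(p_success, t_trans, t_com, t_down,
                           m_max_over_Rc, l_over_Lc):
    """Eqs. (8.17)-(8.19): expectation over success/failure of the offload.

    p_success is p_{v_hk,i} from the mobility model.
    """
    t = t_trans + t_com + t_down                 # Eq. (8.17): success case
    t_fail = t + m_max_over_Rc + l_over_Lc       # Eq. (8.18): cloud redo
    return p_success * t + (1.0 - p_success) * t_fail   # Eq. (8.19)

def task_delay(subtask_delays):
    """Eq. (8.20): a task finishes when its slowest subtask finishes."""
    return max(subtask_delays)

d1 = expected_subtask_delay(0.9, 0.4, 0.3, 0.1, 1.0, 0.5)
d2 = expected_subtask_delay(1.0, 0.2, 0.2, 0.1, 1.0, 0.5)
print(task_delay([d1, d2]))
```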
On this basis, given the vehicle set V and edge set E with attributes <B_i, L_i, M_i>, the cloud node C with attributes <R_c, L_c>, the tasks n_k ∈ N with attributes <h_k, B_k>, and the subtasks n_{k,j} ∈ B_k with attributes <D_{k,j}, l_{k,j}, o_{k,j}>, the JDUTO problem is represented as follows:

$$\min_{X, A, B, \eta} \; \frac{1}{|N|} \sum_{k=1}^{|N|} t_k \qquad (8.21)$$

$$\begin{aligned}
\text{s.t.} \quad
& C1: \sum_{i=1}^{|V|+|E|+1} \chi_{k,j,i} = 1 \\
& C2: \chi_{k,j,i} = 0, \ \text{if } b_i \notin B_{v_{h_k}}(t) \\
& C3: \sum_{k=1}^{|N|} \sum_{j=1}^{|B_k|} \sum_{r=1}^{|D_{k,j}|} \eta_{k,j,r,i} \, \alpha_{k,j,r,i} = 1 \\
& C4: \sum_{k=1}^{|N|} \sum_{j=1}^{|B_k|} \chi_{k,j,i} \, \beta_{k,j,i} = 1 \\
& C5: \sum_{\forall d_r \in Re_i} m_r \le M_i \\
& C6: y_{k,j,r} \, \chi_{k,j,i} - \sum_{k=1}^{|N|} \sum_{j=1}^{|B_k|} \chi_{k,j,i} \, y_{k,j,r} \, \eta_{k,j,r,i} \le 0 \\
& C7: \chi_{k,j,i}, \eta_{k,j,r,i} \in \{0, 1\}
\end{aligned}$$

Here, X = {χ_{k,j,i}} and η = {η_{k,j,r,i}} are the sets of offloading decisions and data uploading decisions, respectively, where each element is a 0-1
variable. A = {α_{k,j,r,i}} and B = {β_{k,j,i}} represent the allocation of bandwidth and computation resources, respectively. Further, C1 and C2 state that each subtask can be assigned to exactly one node among its neighbors. C3 and C4 are the bandwidth and computation constraints. C5 ensures that the allocation does not exceed a node's memory capacity. C6 requires that every input data item of the subtasks offloaded to node b_i be uploaded once.

Theorem 8.1 The JDUTO problem is NP-hard.

Proof We consider a special instance of JDUTO, denoted by JDUTO*. Specifically, assume each vehicle requests only one task with a single subtask, and the input data of all subtasks are distinct. Besides, suppose the bandwidth and computation resources of all nodes are evenly shared by tasks. In the following, we prove that JDUTO* is NP-hard by constructing a polynomial-time reduction from the Capacitated Vehicle Routing Problem (CVRP) [24], which is a well-known NP-hard problem. First, the CVRP is recapitulated as follows. Suppose there is a set of customers F = {F_1, F_2, ..., F_{|F|}}, and each customer F_k ∈ F is associated with a demand h̄_k. The distance between each pair of customers F_v, F_k is denoted as d(F_v, F_k). Further, suppose there are m vehicles K = {K_1, K_2, ..., K_m} at the depot to serve the customers, each vehicle K_e with capacity J_e and transportation cost c_e per distance unit. The question is to find the optimal routes that minimize the transportation cost, on which each customer is served once and the vehicle capacity constraints are satisfied. Then, we construct a one-to-one mapping from JDUTO* to the CVRP. Specifically, a customer F_k is equivalent to a requested task/subtask n_{k,j}, and the demand h̄_k corresponds to the requested memory resource m_{k,j}. The distance d(F_v, F_k) is equivalent to the input data, required computation resource, and output data size of n_{k,j}, represented as <D_{k,j}, l_{k,j}, o_{k,j}>. Further, the capacity J_e of K_e is equivalent to the node memory resource M_i. The transportation cost per distance unit of K_e is equivalent to the service delay caused per requested resource unit; hence the transportation cost c_e · d(F_v, F_k) in the CVRP is equivalent to the service delay of task n_{k,j} allocated to node b_i. Therefore, route planning is equivalent to the procedure of scheduling tasks to heterogeneous nodes. Under this mapping, the optimal solution to JDUTO* is found if and only if the minimum value of the CVRP is derived. Suppose Ω is the optimal solution of the CVRP, which minimizes the transportation cost by scheduling the service routes of vehicles under the capacity constraints. Then, under the mapping, it is equivalent to minimizing the service delay of all tasks under the memory constraints. Conversely, suppose we have the optimal solution to JDUTO*, which contains the offloading strategy X* = {χ_{k,j,i}} of all tasks; then, based on the mapping, the service routes of the vehicles can be constructed. Suppose this route plan were not optimal; this would imply that the service delay is not minimized, contradicting the assumption that X* is the optimal solution of JDUTO*. Therefore, X* gives the minimum transportation cost of the CVRP. The above proves the NP-hardness of JDUTO*. As JDUTO* is a special case of JDUTO, Theorem 8.1 is proved. ⊓⊔
8.4 Proposed Algorithm

To solve the JDUTO problem, we propose an approximation algorithm. The overall procedure is summarized as follows. In Step 1, it initializes the state of the Markov chain by offloading each subtask to the node with the minimum uploading delay. In Step 2, for subtasks allocated to the same node with a common data requirement, it decides the uploading vehicle by selecting the vehicle with the maximum SNR. In Step 3, it makes the transmission and computation resource allocation decisions via the Lagrange multiplier method. In Step 4, it randomly reschedules the subtask with the minimum delay within its task to generate a new state. In Step 5, the current state transitions to the new state according to a predefined transition probability rule. In Step 6, Steps 2–5 are repeated until the stopping criteria are reached.
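Steps 1–6 follow the usual shape of a Markov approximation search: propose a neighboring offloading state, then accept it with a probability that exponentially favors lower delay. The sketch below uses a generic logistic acceptance rule on a toy state space; it illustrates the search pattern only, not the chapter's exact transition probabilities or filter mechanism, and all names are illustrative.

```python
import math
import random

def markov_approximation(init_state, neighbors, delay, beta, iters, seed=0):
    """Metropolis-style search over offloading states.

    neighbors(state) yields candidate states produced by rescheduling one
    subtask (the State Transition Strategy); delay(state) stands for the
    average service delay after the uploader-selection and resource-
    allocation substeps have been applied to that state.
    """
    rng = random.Random(seed)
    state, best = init_state, init_state
    for _ in range(iters):
        cand = rng.choice(neighbors(state))
        # Lower-delay states are exponentially more likely to be adopted,
        # with beta controlling how greedy the search is.
        accept = 1.0 / (1.0 + math.exp(beta * (delay(cand) - delay(state))))
        if rng.random() < accept:
            state = cand
        if delay(state) < delay(best):
            best = state
    return best

# Toy search space: states 0..4 with delays 5,3,4,1,2; neighbors differ by 1.
delays = [5.0, 3.0, 4.0, 1.0, 2.0]
nbrs = lambda s: [x for x in (s - 1, s + 1) if 0 <= x < 5]
best = markov_approximation(0, nbrs, lambda s: delays[s], beta=2.0, iters=300)
print(best, delays[best])
```

With a moderate beta the chain still occasionally accepts worse states, which is what lets it escape local minima before converging toward low-delay configurations.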
8.4.1 The Decision of Cooperative Data Uploading

Given the offloading decision X, since a set of subtasks may be allocated to the same node and request common data, each data item needs to be uploaded only once by one of the requesting vehicles. Under this premise, we observe that for an arbitrary bandwidth, the uploading capability of a vehicle depends only on the signal-to-noise ratio (SNR) between the vehicle and the target node. Based on this rule, we first traverse all subtasks allocated to the same node (i.e., b_i) and classify the subtasks with a common data requirement (i.e., d_r) into a list (i.e., Lis_{i,r}). Then, for each subtask n_{k,j} ∈ Lis_{i,r}, we extract the requesting vehicle v_{h_k} and its maximum transmission rate

Δ_{v_k,i} = B_i · log₂( 1 + P_{v_{h_k},i} |δ_{v_{h_k},i}|² ζ |d_{v_{h_k},i}|^{−θ} / N₀ )

On this basis, we select the vehicle with the highest Δ_{v_k,i} as the uploading vehicle, which achieves the minimum uploading delay.
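The selection rule can be sketched as follows; the rate formula mirrors the expression above, while the default parameter values and the vehicle tuples are purely illustrative.

```python
import math

def transmission_rate(bandwidth, power, gain, dist, zeta=1.0, theta=3.0, noise=1e-6):
    """Shannon-type rate: B * log2(1 + P*|h|^2 * zeta * d^(-theta) / N0).
    All default parameter values here are illustrative placeholders."""
    snr = power * gain * zeta * dist ** (-theta) / noise
    return bandwidth * math.log2(1.0 + snr)

def pick_uploading_vehicle(vehicles, bandwidth):
    """vehicles: iterable of (vehicle_id, tx_power, channel_gain, distance).
    For a fixed bandwidth the ordering is decided by SNR alone, so the
    max-rate vehicle is exactly the max-SNR vehicle."""
    return max(vehicles, key=lambda v: transmission_rate(bandwidth, v[1], v[2], v[3]))[0]
```

For two otherwise identical vehicles, the one closer to the node has the higher SNR under the d^(−θ) path loss and is therefore chosen.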
8.4.2 Allocation of Transmission and Computation Resources

As the offloading decision X and the cooperative transmission decision η are given binary variables, we propose an optimal resource allocation strategy. Note that the transmission and computation resources allocated by the cloud node are given in advance. Thus, given the offloading and uploading decisions, we can calculate the service delay of subtasks allocated to the cloud node directly based on (8.9) and (8.13). Besides, we simply set the allocated bandwidth of vehicles without data uploading to 0. Then, for task n_k, denote the set of its subtasks allocated to the cloud node
8 An Approximation Algorithm for Joint Data Uploading and Task Offloading in IoV
as C_k and the maximum delay of these subtasks as T_{C_k}; then the service delay of n_k can be recalculated as

t_k = max{ T_{C_k}, max_{∀n_{k,j} ∈ B_k − C_k} Σ_{i=1}^{|V|+|E|+1} χ_{k,j,i} · t_{k,j,i} }   (8.22)
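Since exactly one χ_{k,j,i} equals 1 for each offloaded subtask, the inner sum in (8.22) collapses to the delay on the chosen node, so the recalculation reduces to a single max over the cloud-side bound and the edge-side subtask delays. A minimal sketch:

```python
def task_service_delay(t_cloud_max, edge_delays):
    """Eq. (8.22) in collapsed form.
    t_cloud_max: T_Ck, the maximum delay of subtasks offloaded to the cloud.
    edge_delays: delay of each remaining subtask on its selected node."""
    return max([t_cloud_max] + list(edge_delays))
```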
On this basis, the resource allocation problem is transformed into the following convex problem:

P1: min_{α,β} Σ_{k=1}^{|N|} t_k   (8.23)
      s.t. C3∼C4

However, the problem cannot be solved directly because max{·} is non-differentiable. In the following, we first transform it into a differentiable form. Specifically, we regard t_k as a continuous variable and convert the max operator into the continuous constraints

t_k ≥ T_{C_k}, ∀n_k ∈ N   (8.24)

t_k ≥ Σ_{i=1}^{|V|+|E|} χ_{k,j,i} ( t_{k,j,r,i}^{trans} + t_{k,j,i}^{com} + t_{k,j,i}^{down} )   (8.25)
Then, the Lagrange function of the transformed optimization problem is given in Eq. (8.26), where t = {t_k} and λ = {λ¹, λ², λ³, λ⁴} are the Lagrange multipliers:

f(t, α, β, λ) = Σ_{∀n_k∈N} t_k
  + Σ_{i=1}^{|V|+|E|} λ¹_i ( Σ_{∀n_k∈N} Σ_{∀n_{k,j}∈B_k} Σ_{∀d_r∈D_{k,j}} η_{k,j,r,i} α_{k,j,r,i} − 1 )
  + Σ_{i=1}^{|V|+|E|} λ²_i ( Σ_{∀n_k∈N} Σ_{∀n_{k,j}∈B_k} χ_{k,j,i} β_{k,j,i} − 1 )
  + Σ_{k=1}^{|N|} λ³_k ( T_{C_k} − t_k )
  + Σ_{∀n_k∈N} Σ_{∀n_{k,j}∈B_k} Σ_{∀d_r∈D_{k,j}} λ⁴_{k,j,r} ( Σ_{i=1}^{|V|+|E|+1} χ_{k,j,i} ( t_{k,j,r,i}^{trans} + t_{k,j,i}^{com} + t_{k,j,i}^{down} ) − t_k )   (8.26)

Therefore, the Lagrange dual function is given as

L̄(λ) = min_{t,α,β} f(t, α, β, λ)   (8.27)
The dual problem is written as

L̃(λ) = max_λ L̄(λ)
      s.t. λ¹, λ², λ³, λ⁴ ≥ 0   (8.28)
According to the KKT conditions, we have

λ¹_i ( Σ_{∀n_k∈N} Σ_{∀n_{k,j}∈B_k} Σ_{∀d_r∈D_{k,j}} η_{k,j,r,i} α_{k,j,r,i} − 1 ) = 0   (8.29)

λ²_i ( Σ_{∀n_k∈N} Σ_{∀n_{k,j}∈B_k} χ_{k,j,i} β_{k,j,i} − 1 ) = 0   (8.30)
Moreover, we can obtain the optimal solution of problem P1 by the Lagrange multiplier method. Specifically, by differentiating f(t, α, β, λ) with respect to t, α, β and setting the derivatives to 0, we have

∂f/∂t_k = 1 − λ³_k − Σ_{∀n_{k,j}∈B_k} Σ_{∀d_r∈D_{k,j}} λ⁴_{k,j,r} = 0   (8.31)

β_{k,j,i} = √( Σ_{∀d_r∈D_{k,j}} λ⁴_{k,j,r} l_{k,j} ) / ( Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} √( Σ_{∀d_r∈D_{k,j}} λ⁴_{k,j,r} l_{k,j} ) )   (8.32)

α_{k,j,r,i} = I{η_{k,j,r,i} = 1} · √( Σ_{∀n_u∈N} Σ_{∀n_{u,s}∈B_u} y_{u,s,r} λ⁴_{u,s,r} m_r / Δ_{v_k,i} ) / ( Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} Σ_{r=1}^{|D_{k,j}|} I{η_{k,j,r,i} = 1} √( Σ_{∀n_u∈N} Σ_{∀n_{u,s}∈B_u} y_{u,s,r} λ⁴_{u,s,r} m_r / Δ_{v_k,i} ) )   (8.33)
Therefore, given λ⁴, all remaining variables in f(t, α, β, λ) can be solved correspondingly. We can obtain the optimal solution quickly by the sub-gradient descent method, with the update

λ⁴_{k,j,r}(t+1) = [ λ⁴_{k,j,r}(t) − θ⁴_{k,j,r}(t) ( t_{k,j,r,i}^{trans} + t_{k,j,i}^{com} + t_{k,j,i}^{down} − t_{C_k} ) ]⁺   (8.34)

where θ⁴_{k,j,r}(t) is the step size at iteration t and [·]⁺ = max{·, 0}. According to [25], the algorithm converges quickly from an arbitrary initial point if Σ_{n=1}^{∞} θ⁴_{k,j,r}(n) = ∞ and lim_{t→∞} θ⁴_{k,j,r}(t) = 0.
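The same projected, diminishing-step sub-gradient scheme can be checked on a tiny stand-in dual problem, max_{λ≥0} min_x x² + λ(1 − x), whose solution is λ = 2, x = 1. This toy problem is ours for illustration, not the book's, but it uses the same [·]⁺ projection and a step size θ_t = 1/t satisfying the stated conditions.

```python
def projected_subgradient(iters=5000):
    """Dual ascent for: min x^2 s.t. x >= 1 (optimum x = 1, multiplier = 2)."""
    lam = 0.0
    for t in range(1, iters + 1):
        x = lam / 2.0                      # minimizer of the Lagrangian x^2 + lam*(1 - x)
        grad = 1.0 - x                     # sub-gradient of the dual = constraint slack
        step = 1.0 / t                     # diminishing step: sum diverges, step -> 0
        lam = max(lam + step * grad, 0.0)  # projection [.]^+ keeps the multiplier feasible
    return lam, lam / 2.0
```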
8.4.3 Filter Mechanism-Based Markov-Approximation Algorithm

Markov-Approximation-Based Algorithm Hitherto, we have obtained the optimal η, α, β by assuming X is given. In this section, we modify the Markov-approximation algorithm to solve the task offloading problem [26]. Define O = {O_k} as the overall offloading decision, O_k = {φ_{k,j}} as the offloading decision of task n_k, and φ_{k,j} ∈ {0, 1, ..., |V|+|E|+1} as the offloading policy of subtask n_{k,j}. We transfer the original offloading problem into the following equivalent problem:

P2: min_{ξ_φ} Σ_{φ∈O} ξ_φ U_φ
      s.t. C8: Σ_{φ∈O} ξ_φ = 1   (8.35)

where ξ_φ is the probability of selecting φ as the offloading decision and U_φ is the average delay under decision φ. By introducing the log–sum–exp function, the problem can be further approximately transferred to the following convex problem:

P3: min Σ_{φ∈O} ξ_φ U_φ + (1/β) Σ_{φ∈O} ξ_φ log ξ_φ
      s.t. C8   (8.36)

where β is a positive constant controlling the approximation gap. When β → ∞, the problem reduces to the original problem. Based on the KKT conditions, the optimal solution of P3 is calculated as

ξ*_φ = exp(−βU_φ) / Σ_{φ'∈O} exp(−βU_{φ'}), ∀φ ∈ O   (8.37)

We can observe that the selection probability of φ decreases exponentially with its corresponding service delay. However, it is still hard to solve P3 since the solution space is large. The key idea is to design a Time-Reversible Markov Chain (TRMC) whose states are the feasible solutions. In such a case, once the chain reaches the stationary distribution ξ*_φ, the solution with the highest probability is selected as the final offloading decision. It has been proved that there exists at least one TRMC with stationary distribution ξ*_φ if the following two conditions are satisfied [26]:

• Any two states are reachable from each other.
• The balance equation is satisfied, i.e., ξ*_φ q_{φ,φ'} = ξ*_{φ'} q_{φ',φ}, where φ, φ' are states of the Markov chain and q_{φ,φ'} denotes the transition probability from φ to φ'.
Therefore, to construct a problem-specific TRMC, we first let the offloading decision O = {O_k} be the state of the Markov chain. Then, the transition rate between two arbitrary states φ, φ' is

q_{φ,φ'} = exp(−θU_{φ'}) / max{ exp(−θU_{φ'}), exp(−θU_φ) }   (8.38)

where θ is a non-negative constant. Obviously, if U_{φ'} < U_φ, the new solution φ' performs better than φ, and the state φ transitions to φ' with probability 1; otherwise, it transitions with probability exp(−θU_{φ'}) / exp(−θU_φ). The algorithm starts with an arbitrary offloading solution and may move to another solution by picking a subtask and updating its offloading policy randomly according to the transition rate q_{φ,φ'}. It converges when the TRMC reaches its stationary distribution.
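The rate (8.38) is a Metropolis-style acceptance probability min{1, exp(−θ(U_{φ'} − U_φ))}, which satisfies the balance equation against the Gibbs distribution ξ*_φ ∝ exp(−θU_φ). A small numeric check (with arbitrary delay values):

```python
import math

def transition_rate(u_from, u_to, theta=1.0):
    """Eq. (8.38): exp(-theta*U') / max(exp(-theta*U'), exp(-theta*U))."""
    return math.exp(-theta * u_to) / max(math.exp(-theta * u_to),
                                         math.exp(-theta * u_from))

def balance_gap(u_a, u_b, theta=1.0):
    # Unnormalized stationary weights xi ∝ exp(-theta*U); the normalizer
    # cancels on both sides of xi_a*q(a->b) = xi_b*q(b->a).
    xi_a, xi_b = math.exp(-theta * u_a), math.exp(-theta * u_b)
    return xi_a * transition_rate(u_a, u_b, theta) - xi_b * transition_rate(u_b, u_a, theta)
```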
Solution Filter Mechanism Although the Markov-approximation algorithm can find an approximate solution, its convergence is very slow due to the exploration of a large solution space. Furthermore, heterogeneous nodes, the memory constraint, and vehicle mobility further deteriorate the convergence of the algorithm. To address this challenge, we propose a Solution Filter Mechanism to filter infeasible or unsuitable solutions, which consists of a solution initialization strategy and a state transition strategy.

Initialization Strategy: For an arbitrary subtask n_{k,j}, we first traverse its feasible solutions (∀b_i ∈ B_{v_{h_k}}(t)) and calculate the corresponding uploading delay: for each input data d_r, if the data has already been uploaded by a previous subtask, the uploading delay is set to the same value as that previous upload; otherwise, it is calculated as m_r / Δ_{v_k,i}. The uploading delay of n_{k,j} is then the maximum delay over its input data. On this basis, the node with the minimum uploading delay is selected as the offloading node.

State Transition Strategy: As the service delay of a task depends only on the maximum delay of its subtasks, we try to redirect the offloading node of that bottleneck subtask to reduce the service delay. Specifically, during each iteration, we randomly pick a task n_k, and its subtask n_{k,j} with the maximum delay is selected for rescheduling. Then, we traverse all feasible nodes of n_{k,j} to calculate its service delay. Given a redirected node b_i, the set of subtasks N_i already allocated to b_i, the set of required data D_{k,j}, and the set of data D_i^{uploaded} uploaded in the last iteration, the service delay is calculated as max_{∀d_r∈D_{k,j}} { m_r ( |D_i^{uploaded}| + |D_{k,j}| ) / Δ_{v_k,i} } + l_{k,j} |N_i| / L_i. On this basis, the node with the minimum delay is selected as the new node. If no node is able to reduce the service delay, a random subtask in n_k is selected for rescheduling.
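One iteration of the state transition strategy can be sketched as follows; the bookkeeping (delay lookup, feasible-node sets) is abstracted into plain dictionaries and a callback, and all structures are illustrative rather than the book's exact data model.

```python
import random

def filtered_transition(assign, feasible, delay_of, rng):
    """assign: dict subtask -> current node; feasible: dict subtask -> candidate
    nodes; delay_of(j, i): estimated service delay of subtask j on node i."""
    # Pick the bottleneck subtask (the one with the maximum current delay).
    j_max = max(assign, key=lambda j: delay_of(j, assign[j]))
    # Redirect it to the feasible node with the minimum estimated delay.
    best = min(feasible[j_max], key=lambda i: delay_of(j_max, i))
    nxt = dict(assign)
    if delay_of(j_max, best) < delay_of(j_max, assign[j_max]):
        nxt[j_max] = best
    else:
        # No node improves: reschedule a random subtask to keep exploring.
        j = rng.choice(sorted(assign))
        nxt[j] = rng.choice(feasible[j])
    return nxt
```

In the toy case below, subtask "a" is the bottleneck (delay 5 on node 0) and is redirected to node 1 (delay 2), while "b" is left in place.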
8.4.4 Theoretical Analysis

Theorem 8.2 The approximation gap between the approximate and optimal solutions is upper-bounded by (1/β) log |O| [26].

Proof Let f* denote the optimal value with decision φ_min, and let ξ*_φ denote the optimal stationary distribution of P3 with value f. Define ξ̂_φ as an indicator: if φ = φ_min, then ξ̂_φ = 1; otherwise, ξ̂_φ = 0. Since ξ* minimizes the objective of P3, we have

Σ_{φ∈O} ξ*_φ U_φ + (1/β) Σ_{φ∈O} ξ*_φ log ξ*_φ ≤ Σ_{φ∈O} ξ̂_φ U_φ + (1/β) Σ_{φ∈O} ξ̂_φ log ξ̂_φ = f*   (8.39)

Based on Jensen's inequality, we have

Σ_{φ∈O} ξ*_φ log ξ*_φ ≥ − log ( Σ_{φ∈O} ξ*_φ · (1/ξ*_φ) ) = − log |O|   (8.40)

Then, by substituting Eq. (8.40) into Eq. (8.39), we obtain

f ≤ f* + (log |O|) / β   (8.41)

Theorem 8.2 is proved. ⊓⊔

Theorem 8.3 The algorithm converges to the optimal solution with probability 1 when β → ∞ [26, 27].
Proof As the Markov chain is time-reversible, it converges to the stationary distribution. Let φ* be the optimal solution of P3. Based on Eq. (8.38), we observe that the system tends to stay in states with lower delay. Therefore, the Markov chain converges to the optimal solution φ* with stationary probability ξ*_{φ*}. Since Eq. (8.37) can be rewritten as

ξ*_{φ*} = 1 / Σ_{φ'∈O} exp( −β ( U_{φ'} − U_{φ*} ) )   (8.42)

it is obvious that ξ*_{φ*} increases with β. Thus, when β → ∞, the probability of selecting φ* satisfies ξ*_{φ*} → 1. ⊓⊔

Theorem 8.4 The complexity of the approximation algorithm is O( I · (|V|+|E|+1) · ( T + Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} |D_{k,j}| ) ).
Proof In the data uploading decision phase, the algorithm first traverses all nodes and the input data of the subtasks, with time cost O( (|V|+|E|+1) · Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} |D_{k,j}| ), where |D_{k,j}| is the number of input data of subtask n_{k,j}. Then, for each node, the time overhead of the Lagrange multiplier method is O( T · (|V|+|E|+1) ), where T is the number of iterations of the sub-gradient descent method. On this basis, as the initialization phase needs to traverse all tasks and their feasible nodes, its time overhead is O( (|V|+|E|+1) · Σ_{k=1}^{|N|} |B_k| ). Besides, during each iteration, the algorithm traverses the feasible nodes of the subtask with the maximum delay, with time cost O(|V|+|E|+1). Therefore, given the number of iterations I of the Markov-approximation algorithm, the complexity of the proposed algorithm is O( I · (|V|+|E|+1) · ( T + Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} |D_{k,j}| ) ). ⊓⊔
8.5 Performance Evaluation

8.5.1 Simulation Setting

In this section, we implement the simulation model based on the architecture described in Sect. 8.2. Specifically, we extract vehicle trajectories from a 3 × 3 km area of Chengdu, China, from 13:00 to 13:05 on 11 Nov. 2016, obtained from the Didi Chuxing GAIA Initiative. The average number of vehicles in this area per second is 678.35, with a standard deviation of 11.95. By default, there are 1 cloud node and 16 static edge nodes deployed in the simulated system, and 150 vehicles are randomly selected for simulation. Task requests arrive following an exponential distribution, and the input data of subtasks are randomly selected from the system data set. The parameter settings are summarized in Table 8.2 and referenced from [15, 26].

For performance comparison, we adopt the Task Allocation in Vehicular Fogs (TAVF) algorithm proposed in [28], tailored for task offloading. Specifically, the algorithm first sorts all subtasks based on their computation resource requirements and calculates the processing priority of tasks among available nodes, where the uploading delay and the computing delay based on shared data are considered cooperatively. Then the node with the minimum delay is selected as the offloading node. Moreover, a greedy algorithm for task scheduling is adopted for comparison, which always selects the node with the minimum uploading delay; in this case, subtasks with common input data tend to be offloaded to the same node. In addition to the average service delay (ASD) formulated in Sect. 8.3, we design another three metrics, named Sharing Ratio (SR), Average Uploading
Table 8.2 System parameters

Transmission rate of V2C: 0.3 Mbps
Computation rate of cloud node: 2 × 10^11 cycle/s
Wireless bandwidth of V2I: [15, 20] MHz
Computation rate of edge node: [1, 2] × 10^12 cycle/s
Memory capability of edge node: [500, 1500] MB
Wireless bandwidth of V2V: [10, 15] MHz
Computation rate of vehicle node: [1, 10] × 10^11 cycle/s
Memory capability of vehicle node: [300, 1000] MB
Coverage of V2V: 300 m
Coverage of V2I: 500 m
λ: 1/6
Number of subtasks per task: [4, 5]
Number of input data of subtask: [5, 15]
Computation demand of subtask: [5, 100] × 10^9 cycles
Download data size of subtask: [0.5, 3.5] Mb
Number of data: 200
Size of data: [5, 7] Mb
System constant ζ, path loss exponent θ: 1, 3
Transmission power of V2V/V2I: 0.01 W / 0.02 W
Noise power: −60 dBm
Delay (AUD), and Average Computing Delay (ACD), to evaluate the efficiency of the proposed algorithm:

• Sharing Ratio (SR): Given the number of subtasks |A_{i,r}| that are allocated to b_i and request data d_r, SR is defined as the average ratio of the number of data items obtained without consuming transmission resources to the total number of required data items, calculated as

SR = ( Σ_{i=1}^{|V|+|E|} Σ_{r=1}^{|D|} (|A_{i,r}| − 1) / |A_{i,r}| ) / ( |V| + |E| )   (8.43)

A higher SR indicates that more subtasks with common data requirements are allocated to the same node.

• Average Uploading Delay (AUD): Given the number of tasks |N|, the number of subtasks per task |B_k|, and the uploading delay t_{k,j}^{trans} of each subtask n_{k,j}, AUD is defined as the average uploading delay among subtasks, calculated as

AUD = Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} t_{k,j}^{trans} / Σ_{k=1}^{|N|} |B_k|   (8.44)
• Average Computing Delay (ACD): Given the number of tasks |N|, the number of subtasks per task |B_k|, and the computing delay t_{k,j}^{com} of each subtask n_{k,j}, ACD is defined as the average computing delay among subtasks, calculated as

ACD = Σ_{k=1}^{|N|} Σ_{j=1}^{|B_k|} t_{k,j}^{com} / Σ_{k=1}^{|N|} |B_k|   (8.45)
8.5.2 Experimental Results

Effect of the Data Uploading Decision and Resource Allocation Decision To evaluate the effectiveness of the proposed data uploading and resource allocation strategies, we design two variants of the proposed algorithm and conduct an ablation experiment. Specifically, the two variants remove the data uploading decision and the resource allocation decision from the proposed algorithm and are denoted as AAWDU and AAWRA, respectively. The performance comparison in terms of the ASD metric between the proposed algorithm and its variants is shown in Fig. 8.2. It can be observed that the proposed approximation algorithm achieves the lowest ASD across the board. Compared with the two variants, the advantage of the proposed algorithm becomes more significant as the workload increases. In particular, the proposed algorithm outperforms AAWDU by about 29% and AAWRA by about 21% when λ = 1/2. These results show that the proposed data uploading and resource allocation strategies are effective in improving scheduling performance.
Fig. 8.2 The ablation experiment of the proposed algorithm under different arrival rates
Fig. 8.3 Performance comparison under different mean arrival intervals. (a) Average service delay. (b) Sharing ratio. (c) Average uploading delay. (d) Average computing delay
Effect of Task Arrival Rates Figure 8.3 reports the impact of varying the task arrival rate on multiple performance metrics for the three algorithms. Since the task arrival rate is positively correlated with the system workload, the average service delay of all algorithms increases with the task arrival rate, as shown in Fig. 8.3a. Notably, compared with TAVF and the greedy algorithm, the proposed algorithm gains up to 28% and 20% delay reduction, respectively, which illustrates the superiority of the proposed approximation algorithm. In Fig. 8.3b, it can be seen that the sharing ratio also increases with the workload. Meanwhile, the SR of the proposed approximation algorithm remains at a lower level and is about 45% lower than that of TAVF. This is because the proposed algorithm balances data sharing and node workload to decrease the system delay. Further, the comparison of average uploading delay is given in Fig. 8.3c, where the proposed algorithm achieves the lowest AUD in most cases. Finally, in Fig. 8.3d, the average computing delay of the proposed algorithm is the lowest among the three algorithms, which indicates that the proposed algorithm can well coordinate data sharing with heterogeneous transmission and computation resources for task processing.
Effect of the Number of Data In this part, we evaluate the algorithm performance under different numbers of data items in the system. As mentioned before, the input data of subtasks are sampled randomly from a data set of fixed size, so the sharing degree decreases as the amount of data increases. As shown in Fig. 8.4a, the ASD rises gradually as the number of data items increases. Nevertheless, the proposed approximation algorithm always achieves the lowest ASD at different data scales and is around 28% better than TAVF. Figure 8.4b displays the variation of the sharing ratio of the different algorithms; the SR of the proposed approximation algorithm is significantly lower than that of TAVF. This result demonstrates the adaptability of the proposed algorithm to different degrees of data sharing. Moreover, Fig. 8.4c shows the performance comparison with respect to the AUD metric. As shown, the AUD increases with increasing data scale. This is because a lower SR may cause serious competition for transmission resources. However, the proposed algorithm keeps the lowest AUD value over most of the range. Finally, the average computing delay of the three algorithms is shown in Fig. 8.4d. It can be seen that the
Fig. 8.4 Performance comparison under different numbers of data. (a) Average service delay. (b) Sharing ratio. (c) Average uploading delay. (d) Average computing delay
Fig. 8.5 Performance comparison under different numbers of subtasks per task. (a) Average service delay. (b) Sharing ratio. (c) Average uploading delay. (d) Average computing delay
computing delay decreases as the sharing degree decreases, and the proposed algorithm maintains the lowest computing delay.

Effect of the Number of Subtasks Per Task Figure 8.5 compares the algorithm performance under different numbers of subtasks per task. Note that the delay of a task depends only on the maximum of the delays of its subtasks, so increasing the number of subtasks per task increases the complexity of collaborative task offloading. Figure 8.5a shows the average delay of the different algorithms. As shown, the average delay increases with the number of subtasks per task, because more subtasks lead to severe resource competition. The proposed approximation algorithm significantly reduces the delay, by around 20% compared with the second-best algorithm. Figure 8.5b shows the sharing ratio of the different algorithms. Due to the increased data requirement, the SR increases with the number of subtasks, and the SR of the proposed approximation algorithm is lower than that of TAVF by around 46%. Figure 8.5c shows the average uploading delay of the three algorithms. As shown, the proposed algorithm always achieves the lowest AUD, which indicates that the proposed
algorithm can fully utilize the advantages of the sharing mechanism and heterogeneous resources to improve the system performance. The comparison of average computing delay is displayed in Fig. 8.5d. It is obvious that the computing delay increases with the number of subtasks, and the proposed algorithm achieves the smallest ACD. The result suggests that the proposed algorithm achieves satisfactory performance in balancing the workload and enhancing bandwidth utilization.
8.6 Conclusion

In this chapter, we first presented an end–edge–cloud cooperative data uploading and task offloading architecture that considers heterogeneous resources, common input data, and vehicle mobility in vehicular networks. Then, we formulated the JDUTO problem, aiming at minimizing the average service delay. The mobility of vehicles, which determines the success probability of task offloading, was modeled, and the service delay was modeled based on heterogeneous processing capabilities. The problem was then proved NP-hard via a polynomial reduction from the well-known NP-hard problem CVRP. On this basis, we designed an approximation algorithm. Specifically, we first designed a greedy algorithm to select the optimal vehicles for common data uploading. Second, we adopted the Lagrange multiplier method to obtain the optimal resource allocation decision. Third, we modified the Markov-approximation algorithm and designed a specific state transition and filter mechanism to accelerate convergence. We proved that the gap between the proposed approximation algorithm and the optimal solution is (1/β) log |O|. Finally, we built the simulation model and gave a comprehensive performance evaluation. The simulation results revealed the effectiveness of the proposed algorithm.
References

1. X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, R. Yang, The ApolloScape dataset for autonomous driving, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW'18) (2018), pp. 954–960
2. X. Xu, K. Liu, Q. Zhang, H. Jiang, K. Xiao, J. Luo, Age of view: a new metric for evaluating heterogeneous information fusion in vehicular cyber-physical systems, in Proceedings of the IEEE 25th International Conference on Intelligent Transportation Systems (ITSC'22) (IEEE, 2022), pp. 3762–3767
3. H. Ren, K. Liu, C. Liu, G. Yan, Y. Li, An approximation algorithm for joint data uploading and task offloading in IoV. IEEE Trans. Consum. Electron. (2023). https://doi.org/10.1109/TCE.2023.3325319
4. F. Dai, G. Liu, Q. Mo, W. Xu, B. Huang, Task offloading for vehicular edge computing with edge-cloud cooperation. World Wide Web 25(5), 1999–2017 (2022)
5. L. Liu, C. Chen, Q. Pei, S. Maharjan, Y. Zhang, Vehicular edge computing and networking: a survey. Mobile Networks Appl. 26, 1145–1168 (2021) 6. S.-C. Lin, K.-C. Chen, A. Karimoddini, SDVEC: software-defined vehicular edge computing with ultra-low latency. IEEE Commun. Mag. 59(12), 66–72 (2021) 7. B. Havers, R. Duvignau, H. Najdataei, V. Gulisano, M. Papatriantafilou, A.C. Koppisetty, DRIVEN: a framework for efficient data retrieval and clustering in vehicular networks. Futur. Gener. Comput. Syst. 107, 1–17 (2020) 8. Y. Sun, X. Guo, J. Song, S. Zhou, Z. Jiang, X. Liu, Z. Niu, Adaptive learning-based task offloading for vehicular edge computing systems. IEEE Trans. Veh. Technol. 68(4), 3061–3074 (2019) 9. Y. Chen, F. Zhao, X. Chen, Y. Wu, Efficient multi-vehicle task offloading for mobile edge computing in 6G networks. IEEE Trans. Veh. Technol. 71(5), 4584–4595 (2021) 10. L. Liu, M. Zhao, M. Yu, M.A. Jan, D. Lan, A. Taherkordi, Mobility-aware multi-hop task offloading for autonomous driving in vehicular edge computing and networks. IEEE Trans. Intell. Transp. Syst. 24, 2169–2182 (2023) 11. P. Dai, Z. Hang, K. Liu, X. Wu, H. Xing, Z. Yu, V.C.S. Lee, Multi-armed bandit learning for computation-intensive services in MEC-empowered vehicular networks. IEEE Trans. Veh. Technol. 69(7), 7821–7834 (2020) 12. J. Du, Y. Sun, N. Zhang, Z. Xiong, A. Sun, Z. Ding, Cost-effective task offloading in NOMAenabled vehicular mobile edge computing. IEEE Syst. J. 17, 928–939 (2023) 13. X. Li, L. Lu, W. Ni, A. Jamalipour, D. Zhang, H. Du, Federated multi-agent deep reinforcement learning for resource allocation of vehicle-to-vehicle communications. IEEE Trans. Veh. Technol. 71(8), 8810–8824 (2022) 14. N. Waqar, S.A. Hassan, A. Mahmood, K. Dev, D.-T. Do, M. Gidlund, Computation offloading and resource allocation in MEC-enabled integrated aerial-terrestrial vehicular networks: a reinforcement learning approach. IEEE Trans. Intell. Transp. Syst. 23(11), 21478–21491 (2022) 15. S. 
Guo, B.-J. Hu, Q. Wen, Joint resource allocation and power control for full-duplex V2I communication in high-density vehicular network. IEEE Trans. Wirel. Commun. 21(11), 9497–9508 (2022) 16. Z. Ning, K. Zhang, X. Wang, L. Guo, X. Hu, J. Huang, B. Hu, R.Y. Kwok, Intelligent edge computing in Internet of vehicles: a joint computation offloading and caching solution. IEEE Trans. Intell. Transp. Syst. 22(4), 2212–2225 (2020) 17. M. Rahim, M.A. Javed, A.N. Alvi, M. Imran, An efficient caching policy for content retrieval in autonomous connected vehicles. Transp. Res. Part A: Policy Pract. 140, 142–152 (2020) 18. K. Liu, K. Xiao, P. Dai, V.C. Lee, S. Guo, J. Cao, Fog computing empowered data dissemination in software defined heterogeneous VANETs. IEEE Trans. Mob. Comput. 20(11), 3181–3193 (2020) 19. A. Asheralieva, D. Niyato, Combining contract theory and Lyapunov optimization for content sharing with edge caching and device-to-device communications. IEEE/ACM Trans. Netw. 28(3), 1213–1226 (2020) 20. J. Beck, R. Arvin, S. Lee, A. Khattak, S. Chakraborty, Automated vehicle data pipeline for accident reconstruction: new insights from LiDAR, camera, and radar data. Accid. Anal. Prev. 180, 106923 (2023) 21. Q. Qi, J. Wang, Z. Ma, H. Sun, Y. Cao, L. Zhang, J. Liao, Knowledge-driven service offloading decision for vehicular edge computing: a deep reinforcement learning approach. IEEE Trans. Veh. Technol. 68(5), 4192–4203 (2019) 22. T. Li, Y. Wan, M. Liu, F.L. Lewis, Estimation of random mobility models using the expectationmaximization method, in Proceedings of the IEEE 14th International Conference on Control and Automation (ICCA’18) (IEEE, 2018), pp. 641–646 23. J. Wang, K. Liu, B. Li, T. Liu, R. Li, Z. Han, Delay-sensitive multi-period computation offloading with reliability guarantees in fog networks. IEEE Trans. Mob. Comput. 19(9), 2062– 2075 (2019)
24. T. Vidal, Hybrid genetic search for the CVRP: open-source implementation and SWAP* neighborhood. Comput. Oper. Res. 140, 105643 (2022) 25. M. Chen, S. Guo, K. Liu, X. Liao, B. Xiao, Robust computation offloading and resource scheduling in cloudlet-based mobile cloud computing. IEEE Trans. Mob. Comput. 20(5), 2025–2040 (2020) 26. M. Chen, S.C. Liew, Z. Shao, C. Kai, Markov approximation for combinatorial network optimization. IEEE Trans. Inf. Theory 59(10), 6301–6327 (2013) 27. H. Chen, S. Deng, H. Zhu, H. Zhao, R. Jiang, S. Dustdar, A.Y. Zomaya, Mobility-aware offloading and resource allocation for distributed services collaboration. IEEE Trans. Parallel Distrib. Syst. 33(10), 2428–2443 (2022) 28. C. Tang, X. Wei, C. Zhu, Y. Wang, W. Jia, Mobile vehicles as fog nodes for latency optimization in smart cities. IEEE Trans. Veh. Technol. 69(9), 9364–9375 (2020)
Chapter 9
Distributed Task Offloading and Workload Balancing in IoV
Abstract MEC is an emerging paradigm to offload computation from the cloud in vehicular networks, aiming at better supporting computation-intensive services with low-latency and real-time requirements. In this chapter, we investigate a new service scenario of task offloading and workload balancing in MEC-empowered vehicular networks, where the computational resources of MEC/cloud servers are cooperatively utilized. Then, we formulate a Distributed Task Offloading (DTO) problem by considering heterogeneous computation resources, high mobility of vehicles, and uneven distribution of workloads, targeting at optimizing task offloading among MEC/cloud servers and minimizing task completion time. We prove that the DTO is NP-hard. Further, we propose a multi-armed bandit learning algorithm called utility-table-based learning. For workload balancing among MEC servers, a utility table is established to determine the optimal solution, which is updated based on the feedback signal from the offloaded server. For optimal task offloading, a theoretical bound is derived to determine the ratio of workload assigned to the cloud. Lastly, we build the simulation model and conduct an extensive experiment, which demonstrates the superiority of the proposed algorithm. Keywords Distributed task offloading · Multi-armed bandit learning · Workload balancing
9.1 Introduction

MEC is an emerging paradigm for enabling ever-increasing computation demands in vehicular networks, in which data processing tasks are offloaded to the edge of the network. The MEC servers are distributed along the road network to support services with requirements of low latency, real-time processing, and location awareness. There are various computation-intensive ITSs, such as aided driving with image processing [1], collision warning with video analysis [2], and infotainment services with natural language processing [3], which can hardly be supported by individual vehicles. Meanwhile, in the conventional cloud-based architecture, tasks are offloaded to the cloud servers via cellular communication, which may result
in excessively long transmission latency. Despite the potential benefits of MEC-empowered services, there are newly arising challenges for task offloading in vehicular networks due to the high mobility of vehicles, heterogeneous capabilities of MEC servers, and uneven workload distribution, which may seriously degrade the system performance. Recently, a great number of studies have focused on resource allocation and task offloading problems in vehicular networks [4]. Liu et al. [5], Li et al. [6], and Wang et al. [7] combined SDN with the MEC-based architecture, which enables a programmable, flexible, and controllable network architecture for resource management. Zhang et al. [8] and Wang et al. [9] investigated the cache offloading problem, which predictively downloads content from the cloud to the local database so as to save data access time. Zhang et al. [10] and Wang et al. [11] investigated the task offloading problem, which adaptively offloads computation tasks from the cloud to the MEC servers by predicting vehicular mobility. However, these studies are based on centralized scheduling with a high overhead of information acquisition, which is not practical for real-time and massive data services. Further, they only focused on the offloading problem between the cloud layer and the MEC layer without considering load balancing among MEC servers. Based on this observation, this chapter investigates a distributed task offloading problem in an MEC-empowered vehicular network, where the resources of MEC servers and the cloud are scheduled in a distributed way. The service delay of a task consists of pending, computation, and transmission time. The computation and transmission times are determined by both the heterogeneous computation capabilities of MEC/cloud servers and the mobility of vehicles. Specifically, the heterogeneous computation capacities of MEC servers are characterized by different computation rates.
Further, the mobility of vehicles affects the transmission time for result retrieval and may result in an unbalanced workload. In addition, tasks may require different amounts of computation resources stipulated by their specific applications. Accordingly, an individual MEC server may suffer from an overwhelming service workload. In such a case, peer MEC servers or the cloud server can be scheduled to complete the tasks cooperatively. However, it is impractical to trace the real-time workloads of all the MEC servers, and hence each MEC server has to make scheduling decisions based on its local observation. The scheduling decisions of the MEC servers jointly affect the system performance, and it is non-trivial to design an efficient mechanism for offloading and balancing computation-intensive tasks among MEC/cloud servers in a distributed way. In this chapter, we propose a multi-armed bandit (MAB)-based learning algorithm [12], which enables an individual MEC server to learn the global knowledge in an iterative way. In the MAB framework, multiple players compete for multiple arm machines; each player chooses one of the arm machines and accordingly receives a reward based on a designed function. In our setting, each MEC server plays the roles of both a player and an arm machine. Each MEC server maintains a utility table to trace the real-time workload distribution based on the received rewards. Accordingly, dedicated exploitation and exploration mechanisms are designed to maximize the total reward by learning the knowledge and selecting the optimal
solution in an alternating way. The main contributions of this chapter are outlined as follows:

• We investigate the service scenario of task offloading and workload balancing in an MEC-empowered vehicular network, where the computation resources of the MEC servers and the cloud are scheduled in a distributed way. In particular, each MEC server is regarded as a local scheduler and is responsible for making the task offloading decisions in its own service range.

• We formulate a Distributed Task Offloading (DTO) problem by synthesizing multiple critical factors, including the heterogeneous computation capabilities of MEC/cloud servers, the uneven distribution of workloads, and the high mobility of vehicles, which targets minimizing the task completion time. Further, we prove the NP-hardness of the DTO problem by constructing a polynomial-time reduction from a well-known NP-hard problem, the multiprocessor scheduling problem.

• We propose a multi-armed bandit learning algorithm called utility-table-based learning (UL), in which each MEC server is regarded as a player and makes task offloading decisions by learning the global knowledge online. Specifically, a utility table is used for recording the estimated pending delay, which is periodically updated based on the feedback signals received from peer MEC servers. Based on the utility table, an exploitation method is designed to select the optimal MEC server for task offloading. Further, by analyzing the relationship among the computation requirements of tasks, the computation capacities of MEC servers, and the arrival patterns of vehicles, we derive an optimal threshold, which enables each MEC server to make task offloading decisions adaptively.
9.2 System Architecture

Figure 9.1 presents an end–edge–cloud architecture for distributed task offloading and workload balancing in dynamic vehicular networks. In the infrastructure layer, RSUs are deployed along the road network and equipped with local MEC servers. Vehicles may submit different types of tasks to the RSU, which require different amounts of computation resources. In the edge computing layer, each MEC server is responsible for resource allocation and distributed task offloading in its service range. Specifically, each MEC server plays two roles: local scheduler and processor. As a local scheduler, the MEC server can assign tasks to one of the MEC servers or the cloud. As a local processor, the MEC server can directly process the assigned tasks. Note that if the results cannot be delivered to vehicles before they leave the service range of the MEC server, the vehicles may suffer from a longer delay for result retrieval. In the cloud layer, the server is assumed to own infinite computation resources and can process tasks without pending delay [13]. However, due to competition for wireless communication bandwidth and the remote distance of cloud servers, the cloud typically suffers from a longer delay for task uploading and result notification. In order to focus on task offloading among MEC/cloud
Fig. 9.1 System architecture of distributed task offloading and workload balancing
servers, the following assumptions are made: (a) lower-layer communication issues, such as packet loss and communication interference, are not considered in vehicular communication; (b) the dwelling time of a vehicle in the coverage of an MEC server is assumed to be known in advance, as it can be evaluated by existing mobility prediction techniques [14, 15]. The overall scheduling procedure of an MEC server is as follows. First, each MEC server maintains a submission list for newly received tasks: once a vehicle is in the service range of an MEC server, it submits a task via V2I communication, and the task is inserted into the submission list. Second, the MEC server determines the assignment of each task in the submission list iteratively based on the proposed scheduling strategy. Based on the scheduling decision, the task is transferred to the corresponding MEC server or the cloud and removed from the local submission list; the assigned server inserts the transferred task into a pending list, which stores the tasks waiting to be processed. Third, the MEC server processes the pending tasks in order. Fourth, once a task is completed, it is removed from the pending list. If the task was not assigned to the local MEC server, a routing path is searched to transmit the computation result back to the original vehicle based on its real-time location. The target is to design an optimal task offloading strategy that minimizes the overall task completion time by best balancing the workload and exploiting the heterogeneous MEC computation resources.

Figure 9.2 shows an example of a typical ITS application, i.e., crowd monitoring under an MEC-based architecture [16], to better illustrate the service procedure as well as to reveal the challenges of designing an efficient scheduling policy. Specifically, mobile vehicles initially submit a video processing task to the nearby MEC server for event detection in a video clip. The computation result, such as a video frame containing the detected event, is retrieved by the vehicles. In this example, the processing rate of an MEC server is evaluated by the number of processed
Fig. 9.2 An example
video frames per second. Further, vehicles use DSRC and cellular interfaces to communicate with the MEC servers and the cloud, respectively. Based on the settings in the literature [16, 17], the detailed attributes are set as follows:

1. Attributes of MEC servers. As shown in Fig. 9.2a, there are three MEC servers m1, m2, and m3. The processing rates of m1, m2, and m3 are set to 10, 10, and 20 frames/second, respectively. Further, the current pending delays of m1, m2, and m3 are 1, 3, and 0 seconds, respectively, which represent the different pending workloads ahead to be executed. According to [17], the transmission rate of V2I is 2.4 Mb/s, so the transmission time for sending a computation result (e.g., one detected video frame of 0.3 MB) from a local MEC server to the user is set to 0.3 MB / 2.4 Mb/s = 1 second. Further, it takes 1.3 seconds to retrieve a result from a peer MEC server, which incurs additional transmission time over the wired connection at 8 Mb/s. In particular, the penalty delay of the three MEC servers is set to 4 seconds, which indicates the duration of transmitting the computation result to the end user when the task cannot be completed within the dwelling time.

2. Attributes of the cloud server. As shown in Fig. 9.2a, the processing rate of the cloud server is set to 1*, which represents that any task can be completely computed by the cloud server within a short period, i.e., one second. Further, the pending delay of the cloud server is set to 0 seconds, which represents that tasks can be started immediately. Last, the transmission rate of the cellular interface
is set to 0.6 Mb/s, so the penalty delay of returning a 0.3 MB result from the cloud to the users is set to 0.3 MB / 0.6 Mb/s = 4 seconds.

3. Attributes of tasks. As shown in Fig. 9.2b, there are three new tasks r1, r2, and r3. Each task is associated with three attributes: computation requirement, dwelling MEC server, and dwelling time. For instance, r1, submitted to m1, needs to process a video clip consisting of 40 frames, and its dwelling time in m1 is 4 seconds.

Based on the above information, we compare three solutions in Fig. 9.2c. First, Local MEC-Only prefers to assign the tasks to the local MEC server. The output of Local MEC-Only is (1, 2, 3), which represents that r1, r2, and r3 are assigned to m1, m2, and m3, respectively. For instance, r1 waits for 1 second and is processed by m1 in 4 seconds. As r1 has left m1 by then, it takes the penalty delay (i.e., 4 seconds) to transmit the result back to r1. Therefore, the service delay of r1 equals 1 + 4 + 4 = 9 seconds. Similarly, the service delays of r2 and r3 are 8 and 4.7 seconds, respectively. In total, Local MEC-Only serves all the tasks in 21.7 seconds. Second, the output of Cloud-Only is (0, 0, 0), which indicates that the scheduler always assigns the tasks to the cloud server. Each task is processed in 1 second and transmitted to the end user in 4 seconds, which consumes 5 seconds. Therefore, Cloud-Only completes all the tasks in 15 seconds. Lastly, a solution that coordinates the behavior of the MEC servers can achieve the globally optimal service delay, as follows. Specifically, for r1, it takes 40/20 = 2 seconds to process the task on m3 and 1.3 seconds to migrate the result back. Therefore, the service delay of r1 is 3.3 seconds. Similarly, for r2, it takes 1 second of pending delay, 1 second of computation time, and 1.3 seconds of transmission time, so r2 is served in 3.3 seconds.
Then, r3 is processed by the cloud, with a service delay of 5 seconds. Therefore, the optimal solution achieves 11.6 seconds by efficiently utilizing the global computation resources. However, the global knowledge of service conditions cannot be directly acquired by each individual MEC server. Therefore, it is challenging and imperative to design an intelligent assignment strategy that coordinates the behavior of individual MEC servers in a distributed way.
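The arithmetic of this example can be reproduced with a few lines of code. The following Python sketch (helper names are illustrative, not part of the chapter; delay constants are taken from the example text) evaluates the service delay of r1 under the three strategies:

```python
# Reproducing the Fig. 9.2 arithmetic for task r1 (cr = 40 frames, submitted
# to m1, dwelling time 4 s). All constants are taken from the example text.
RATE = {"m1": 10, "m2": 10, "m3": 20}   # processing rates (frames/second)
PENDING = {"m1": 1, "m2": 3, "m3": 0}   # current pending delays (seconds)
V2I_DELAY = 1.0      # 0.3 MB / 2.4 Mb/s, local result retrieval
PEER_DELAY = 1.3     # retrieval from a peer MEC server (extra wired hop)
PENALTY_MEC = 4.0    # penalty when the result misses the dwelling time
CLOUD_DELAY = 5.0    # 1 s cloud processing + 4 s cellular return

def service_delay(cr, dwell, server, local="m1"):
    """Service delay of a task under one assignment (hypothetical helper)."""
    if server == "cloud":
        return CLOUD_DELAY
    pd = PENDING[server]
    ct = cr / RATE[server]
    if pd + ct > dwell:              # vehicle already left: penalty delay
        return pd + ct + PENALTY_MEC
    return pd + ct + (V2I_DELAY if server == local else PEER_DELAY)

print(service_delay(40, 4, "m1"))      # Local MEC-Only: 9.0 s
print(service_delay(40, 4, "cloud"))   # Cloud-Only: 5.0 s
print(service_delay(40, 4, "m3"))      # coordinated offload to m3: 3.3 s
```

The three printed values match the per-task delays of r1 discussed above.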
9.3 Distributed Task Offloading (DTO) Problem

9.3.1 Problem Definition

The general idea of the DTO problem is described as follows. The DTO considers the procedure of task processing at the MEC server, which includes task submission, pending, processing, and result retrieval. Then, the DTO formulates the constraints on task completion time, where the pending delay, computation time, and transmission delay are discussed, respectively. The objective of the DTO is to minimize the task completion time of the system, which is later defined as the ASD in Eq. (9.5). Finally,
Table 9.1 Summary of notations

| Notation | Description |
| M | The set of MEC servers, where M = {m_i}_{1×|M|} |
| p_i | The processing rate of MEC server m_i |
| Q_i^submit | The list of tasks submitted to MEC server m_i |
| Q_i^pending | The list of pending tasks assigned to MEC server m_i |
| r_ik | The k-th task in Q_i^submit or Q_i^pending |
| st_ik | The submission time of task r_ik |
| l_ik | The dwelling time of task r_ik in the coverage of m_i |
| cr_ik | The computation requirement of task r_ik |
| T_1 | The penalty delay of the cloud |
| T_2 | The penalty delay of the MEC server |
| δ_ij | The transmission time of the result from the assigned MEC server m_i back to the user in m_j |
| a_ik | The assignment of task r_ik |
| pd_ik | The pending delay of r_ik |
| ct_ik | The computation time for processing r_ik |
| td_ik | The transmission delay of r_ik |
| sd_ik | The service delay of r_ik |
the problem definition is formally presented. The primary notations are summarized in Table 9.1.

Given a set of MEC servers, denoted by M, each MEC server m_i ∈ M maintains a submission list Q_i^submit, whose elements are sorted in ascending order of submission time. Each task r_ik ∈ Q_i^submit can be assigned to the local MEC server, a peer MEC server, or the cloud, which is denoted as a_ik ∈ [0, M]. In particular, a_ik = 0 represents that the task is assigned to the cloud. Once a_ik is determined, r_ik is transferred to the assigned MEC server or the cloud server. Meanwhile, each MEC server m_i maintains a pending list Q_i^pending, which is used for storing the assigned tasks waiting to be processed. In particular, the typical task processing model, i.e., the First-Come First-Served (FCFS) model, is adopted at the MEC server, which is a common setting in the literature [18, 19]. That is, the task r_ik ∈ Q_i^pending with the earlier submission time st_ik has higher priority to be executed by the MEC server. The task completion time consists of three parts: pending delay, computation time, and transmission delay, which are introduced as follows.

First, the pending delay is the time elapsed in the pending list before the task is processed, which is calculated as follows. When r_ik is assigned to m_j, i.e., a_ik ≠ 0, the pending delay pd_ik is the duration for completing all the pending tasks
with priority higher than that of r_ik. When r_ik is assigned to the cloud server, i.e., a_ik = 0, the pending delay pd_ik equals 0, indicating that the task is processed immediately. Therefore, the pending delay pd_ik is computed as follows:

$$pd_{ik} = \begin{cases} \displaystyle\sum_{\forall r_{il} \in Q_j^{pending},\ st_{il} \le st_{ik}} \frac{cr_{il}}{p_j}, & \text{if } a_{ik} = j,\ j \in [1, M] \\ 0, & \text{otherwise} \end{cases} \quad (9.1)$$
Second, the computation time is the time consumed for processing the task by the assigned server, which is calculated as follows. For r_ik assigned to MEC server m_j, i.e., a_ik = j, the computation time ct_ik equals the required computation resources divided by the processing rate of m_j, i.e., ct_ik = cr_ik / p_j. For r_ik assigned to the cloud server, i.e., a_ik = 0, it is executed within a constant time, i.e., ct_ik = c_1. Therefore, the computation time, which equals the duration of executing the task, is computed as follows:

$$ct_{ik} = \begin{cases} \dfrac{cr_{ik}}{p_j}, & \text{if } a_{ik} = j,\ j \in [1, M] \\ c_1, & \text{otherwise} \end{cases} \quad (9.2)$$
Third, the transmission delay is the time taken for returning the computation result to the user, which is computed in four cases as follows:

• For r_ik assigned to the local MEC server, i.e., a_ik = i, if r_ik can be completed within the dwelling interval, i.e., pd_ik + ct_ik ≤ l_ik, the computation result can be directly retrieved by the user without result migration. Therefore, td_ik = δ_ii, which represents the delay of wireless transmission.
• For r_ik assigned to a peer MEC server m_j (j ≠ i), if the computation result of r_ik can be retrieved by the user within the dwelling interval, i.e., pd_ik + ct_ik ≤ l_ik, it takes δ_ij time to transmit the computation result from m_j to the end user in m_i.
• For r_ik not completed within the dwelling interval, it takes the penalty delay T_2 to return the computation result.
• For r_ik uploaded to the cloud server, it takes the penalty delay T_1 to send the computation result back to the end user, i.e., td_ik = T_1.

Therefore, the transmission delay td_ik of r_ik is computed as follows:

$$td_{ik} = \begin{cases} \delta_{ii}, & \text{if } a_{ik} = i \text{ and } pd_{ik} + ct_{ik} \le l_{ik} \\ \delta_{ij}, & \text{if } a_{ik} = j\ (j \ne 0, i) \text{ and } pd_{ik} + ct_{ik} \le l_{ik} \\ T_2, & \text{if } a_{ik} \ne 0 \text{ and } pd_{ik} + ct_{ik} > l_{ik} \\ T_1, & \text{if } a_{ik} = 0 \end{cases} \quad (9.3)$$
Accordingly, the service delay of a task r_ik, denoted by sd_ik, is computed as the sum of the pending delay pd_ik, computation time ct_ik, and transmission delay td_ik, which is formulated as Eq. (9.4):

$$sd_{ik} = pd_{ik} + ct_{ik} + td_{ik} \quad (9.4)$$
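As an illustration of the delay model in Eqs. (9.1)–(9.4), the following Python sketch (function and variable names are illustrative, not part of the formulation) computes the FCFS pending delay and the resulting service delay:

```python
# Minimal sketch of Eqs. (9.1) and (9.4): under FCFS, a newly assigned task
# waits for every earlier-submitted task already pending at the target
# server; the service delay sums pending, computation, and transmission.
def pending_delay(pending_list, st_new, rate):
    """Eq. (9.1): pending_list holds (submission_time, computation_req)."""
    return sum(cr / rate for st, cr in pending_list if st <= st_new)

def service_delay(pd, ct, td):
    """Eq. (9.4): service delay = pending + computation + transmission."""
    return pd + ct + td

# Two queued tasks of 20 resource units each at a server with rate 10:
# a task submitted after them waits 2.0 + 2.0 = 4.0 time units.
pd = pending_delay([(0.0, 20.0), (1.0, 20.0)], st_new=2.0, rate=10.0)
print(pd)                                   # 4.0
print(service_delay(pd, ct=2.0, td=1.0))    # 7.0
```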
On this basis, we define the average service delay (ASD), i.e., the objective of the DTO, as follows.

Definition 9.1 (Average Service Delay (ASD)) The ASD is defined as the sum of the service delays of all tasks divided by the number of submitted tasks, which is formulated as Eq. (9.5):

$$A^* = \arg\min_{\forall a_{ik} \in [0, M]} \left\{ \frac{\sum_{i=1}^{M} \sum_{\forall r_{ik} \in Q_i^{submit}} sd_{ik}}{\sum_{i=1}^{M} \left\| Q_i^{submit} \right\|} \right\} \quad (9.5)$$

The DTO problem is formally defined as follows. Given the submission lists Q_i^submit, the set of MEC servers M = {m_i} with processing rates p_i and transmission times δ_ij, i, j = 1, 2, ..., M, and the penalty delays T_1 and T_2, the objective of the DTO is to search for the optimal task offloading A^* = (a_ik) in a distributed way, so as to minimize the ASD defined in Eq. (9.5). In particular, the high mobility of the vehicles is considered for task offloading in the DTO problem as follows. First, the dwelling time of a task in the local MEC server at task submission is determined by the mobility features of the vehicle, such as position and speed. Second, the pending workload distribution of each MEC server is affected by both the dynamic vehicle demand and the highly uneven vehicle density distribution, which is the core issue of the formulated DTO problem. Third, the transmission delay of computation result retrieval is classified into four cases based on the real-time positions of vehicles, as defined in Eq. (9.3). Fourth, the local knowledge of an MEC server is determined by the mobility features in its covered traffic area.
9.3.2 NP-Hardness

The DTO problem can be proved NP-hard by constructing a polynomial-time reduction from a well-known NP-hard problem, namely, the multiprocessor scheduling problem (MSP) [20].

Theorem 9.1 The DTO is an NP-hard problem.
Proof The general instance of the MSP is described as follows.

Instance: There exist n tasks J_1, ..., J_n, where each task J_i is associated with a length cr_i and a release date st_i, before which it cannot be processed on any machine. There also exist m machines M_1, ..., M_m, where each M_j has processing rate p_j. Denote by sa_i the starting time of task J_i on M_j; then the completion time is computed by c_i = sa_i + cr_i / p_j.

Question: Given a constant K', is there a feasible schedule such that the average completion time (1/n) Σ_{i=1}^{n} c_i ≤ K'?

On this basis, we construct a polynomial-time reduction from the general instance of the MSP to a special instance of the DTO. The mapping rule is established as follows. The machine M_j with processing rate p_j is equivalent to the MEC server m_j with computation capability p_j and unlimited dwelling time. Further, the transmission delay between each pair of MEC servers is neglected, and the cloud server is not considered in the special instance. Then, each task J_i with release date st_i is equivalent to a task r_i with computation requirement cr_i and submission time st_i. The task r_i can be submitted to any MEC server in a random way; before st_i, the task cannot be processed by any MEC server. The starting time sa_i of J_i on machine M_j is equal to the starting time sa_i of r_i on MEC server m_j. Additionally, the completion time c_i of J_i is mapped to the service delay sd_i of r_i, i.e., sd_i = sa_i − st_i + cr_i / p_j = c_i − st_i. Therefore, (1/n) Σ_{i=1}^{n} c_i ≤ K' is equivalent to (1/n) Σ_{i=1}^{n} sd_i ≤ K, where K = K' − (1/n) Σ_{i=1}^{n} st_i.

Therefore, if there exists a feasible schedule of the MSP such that (1/n) Σ_{i=1}^{n} c_i ≤ K', then, based on the mapping rule, there exists a feasible schedule of the DTO such that (1/n) Σ_{i=1}^{n} sd_i ≤ K, and vice versa. The above proves that the MSP can be reduced to a special instance of the DTO. Therefore, the DTO is an NP-hard problem. ⊓⊔
9.4 Proposed Algorithm

In this section, we propose an MAB learning algorithm, called utility-table-based learning (UL), which is a distributed algorithm implemented at each MEC server. Specifically, we design an MAB-based task offloading mechanism for workload balancing among MEC servers and a probabilistic method for optimal task offloading between the MEC layer and the cloud. Finally, the general procedure of UL and its time complexity are discussed.
9.4.1 MAB-Based Task Offloading Mechanism

In this section, we propose a task offloading mechanism among MEC servers based on the MAB framework. Before the elaboration, the neighboring set is introduced. For each m_i, the neighboring set N_i denotes the set of MEC servers whose distance to m_i is within a predefined threshold β; these servers are regarded as the candidates for task offloading:

$$N_i = \{ m_j \mid dist(m_i, m_j) \le \beta,\ \forall m_j \in M \} \quad (9.6)$$
where the threshold β can be determined by an appropriate evaluation method based on the geographic distribution of servers. The purpose of the neighboring set is described as follows. For the local scheduler, it is difficult to track the real-time service workload of every MEC server in the system, especially when the network scale is large; when the learned knowledge is inaccurate, it may even degrade the system performance by exacerbating the uneven workload distribution. Therefore, the neighboring set is used to improve learning accuracy and scheduling efficiency by reducing the dimension of the system state, which makes the UL adaptive to varying network scales.

The system state of each MEC server m_i is defined as a utility table, denoted by U^i = (u_j^i)_{1×||N_i||}. Specifically, each u_j^i ∈ U^i indicates the estimated pending delay that a task has to wait before being processed by m_j. The initial value of each u_j^i ∈ U^i is set to 0.

The action space of m_i, denoted by A_i, represents the set of available MEC servers, which are the candidates for assigning a newly submitted task r_ik of m_i. For r_ik ∈ Q_i^submit, the action space A_i is formulated as follows:
$$A_i = \left\{ m_j \,\middle|\, u_j^i + \delta_{ij} + \frac{cr_{ik}}{p_j} \le l_{ik},\ \forall m_j \in N_i \right\} \quad (9.7)$$
Equation (9.7) indicates that any MEC server in A_i is able to return the computation result of r_ik within the dwelling time l_ik. However, in practice, as the utility table cannot capture the real workload distribution exactly and the local server cannot know the scheduling decisions of the other MEC servers, the constraint in Eq. (9.7) cannot always be satisfied. In particular, when A_i is empty, which indicates that no MEC server can satisfy the constraint, the pending task is uploaded to the cloud server.

For a new task r_ik ∈ Q_i^submit, the local server m_i has two alternative operations to determine the assignment based on the utility table U^i: exploration and exploitation. For exploration, the local server m_i randomly selects a server from A_i as the assigned server m_{r_ik}. For exploitation, m_i selects the m_j ∈ A_i with the minimum estimated service delay.
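A minimal sketch of how a local scheduler might compute the neighboring set of Eq. (9.6) and the action space of Eq. (9.7); all positions, utility estimates, retrieval delays, and rates below are made-up values for illustration:

```python
# Illustrative computation of the neighboring set N_i (Eq. 9.6) and the
# action space A_i (Eq. 9.7). Inputs are hypothetical, not from the chapter.
import math

def neighbors(pos, i, beta):
    """N_i: servers within distance beta of m_i (m_i itself included)."""
    xi, yi = pos[i]
    return [j for j, (x, y) in pos.items()
            if math.hypot(x - xi, y - yi) <= beta]

def action_space(cr, dwell, cand, u, delta, rate):
    """A_i: candidates able to return the result within the dwelling time."""
    return [j for j in cand if u[j] + delta[j] + cr / rate[j] <= dwell]

pos = {1: (0, 0), 2: (3, 4), 3: (20, 0)}
Ni = neighbors(pos, i=1, beta=10)            # m3 lies beyond the threshold
u = {1: 0.0, 2: 3.0}                         # learned pending-delay estimates
delta = {1: 1.0, 2: 1.3}                     # result-retrieval times
rate = {1: 10.0, 2: 10.0}
print(Ni)                                    # [1, 2]
print(action_space(40, 6.0, Ni, u, delta, rate))   # [1]
```

Here m2 drops out of the action space because its estimated delay (3 + 1.3 + 4 = 8.3) exceeds the dwelling time of 6.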
For the task r_ik, the assigned server is determined as follows:

$$m_{r_{ik}} = \arg\min_{\forall m_j \in A_i} \left\{ u_j^i + \delta_{ij} + \frac{cr_{ik}}{p_j} \right\} \quad (9.8)$$
To balance the tradeoff between exploration and exploitation, an exploration rate γ is set to determine which operation is selected. In particular, the exploration rate γ is set to 1/√t, where t refers to the current iteration; the exploration rate keeps decaying until γ reaches 0.001. This method is commonly adopted in the relevant literature [21, 22].

To track the up-to-date state, an update mechanism is designed to refresh the utility table. First, after a task r_ik has been transferred to the assigned MEC server m_{r_ik}, that server computes the latest pending delay and embeds it in a feedback message sent back to the scheduler. Accordingly, the value of the feedback signal R(m_{r_ik}) is set to the pending delay of m_{r_ik}, which is computed based on Eq. (9.1). Second, a learning rate α is set to control the update magnitude of U^i; the value of α is set to 0.1, which is commonly adopted in the related literature [23, 24]. Third, if the task is assigned to m_j, i.e., m_{r_ik} = m_j, then based on the feedback signal R(m_{r_ik}), u_j^i ∈ U^i is updated as follows:

$$u_j^i := \begin{cases} (1 - \alpha) \cdot u_j^i + \alpha \cdot R(m_{r_{ik}}), & \text{if } i \ne j \\ R(m_{r_{ik}}), & \text{if } i = j \end{cases} \quad (9.9)$$
In particular, as the local server is able to accurately acquire its own workload, u_i^i can be known directly without the learning procedure. The above mechanism can reduce the service delay by coordinating the MEC servers. However, the computation resources at the MEC layer may not be sufficient to handle very high service workloads, which motivates the task offloading method in the next subsection.
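The update rule of Eq. (9.9) and the decaying exploration rate γ = 1/√t can be sketched as follows (helper names are illustrative; α = 0.1 as stated above):

```python
# Sketch of the utility-table update of Eq. (9.9) and the decaying
# exploration rate gamma = 1/sqrt(t), floored at 0.001 as in the text.
import math

ALPHA = 0.1  # learning rate used in the chapter

def update_utility(u, i, j, reward, alpha=ALPHA):
    """Blend the feedback signal into u[j]; the local entry is known exactly."""
    u[j] = reward if j == i else (1 - alpha) * u[j] + alpha * reward

def exploration_rate(t):
    return max(1.0 / math.sqrt(t), 0.001)

u = {1: 0.0, 2: 0.0}                       # utility table of server m1 (i = 1)
update_utility(u, i=1, j=2, reward=5.0)    # feedback from a peer: 0.5
update_utility(u, i=1, j=1, reward=2.0)    # own workload, set exactly: 2.0
print(u)                                   # {1: 2.0, 2: 0.5}
print(exploration_rate(4))                 # 0.5
```

The exponential smoothing damps noisy feedback from peers, while the local entry is overwritten directly since it is observed without error.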
9.4.2 Probabilistic Task Offloading

To balance the workload between the MEC servers and the cloud, we further develop a probabilistic method to adaptively determine whether a new task is assigned to the MEC layer or the cloud. Given the arrival rate of new tasks at MEC server m_i (denoted by λ_i) and the average computation requirement (denoted by cr), we derive the threshold ρ_i, i.e., the proportion of newly arriving tasks at m_i that is assigned to the cloud server. To simplify the derivation, we assume that the local server is responsible
for processing the tasks that are not assigned to the cloud. By the definition of ρ_i, the numbers of new tasks assigned to the local server m_i and to the cloud during a time interval [t, t + Δt] are (1 − ρ_i)λ_i Δt and ρ_i λ_i Δt, respectively. In Case (1), when (1 − ρ_i)λ_i cr Δt ≤ p_i Δt, i.e., ρ_i ≥ 1 − p_i/(λ_i cr), the processing capacity of the MEC server is greater than the amount of new workload, and hence the new tasks assigned to the local server can be processed immediately without pending delay. Then, the total service delays of the tasks processed by m_i and by the cloud are (cr/p_i) × (1 − ρ_i)λ_i Δt and ρ_i λ_i T_1 Δt, respectively. Therefore, the total service delay of the newly arriving tasks at m_i in Case (1) is computed as follows:

$$\frac{cr}{p_i} \times (1 - \rho_i)\lambda_i \Delta t + \rho_i \lambda_i T_1 \Delta t \quad (9.10)$$

In Case (2), when ρ_i < 1 − p_i/(λ_i cr), there exist pending tasks, and new tasks have to tolerate pending delays before being processed. Then, the pending delay of a task arriving at time t equals ((1 − ρ_i)λ_i cr t − p_i t)/p_i. As t is assumed to be sufficiently large, the pending delay can be greater than the dwelling time of the vehicles. Therefore, the service delay of a task processed by m_i equals the sum of the pending delay, the processing time, and the penalty delay, i.e., ((1 − ρ_i)λ_i cr t − p_i t)/p_i + cr/p_i + T_2. Accordingly, in Case (2), the total service delay of the new tasks at m_i is formulated as follows:

$$\left[ \frac{(1 - \rho_i)\lambda_i cr\, t - p_i t + cr}{p_i} + T_2 \right] \times (1 - \rho_i)\lambda_i \Delta t + \rho_i \lambda_i T_1 \Delta t \quad (9.11)$$
Therefore, the service delay of m_i during the time interval [0, L] is computed by integrating Eqs. (9.10) and (9.11) over [0, L], which is formulated as follows:

$$f_i(\rho_i) = \begin{cases} \displaystyle\int_0^L \left[ \frac{cr}{p_i} (1 - \rho_i)\lambda_i + \rho_i \lambda_i T_1 \right] dt, & \text{if } \rho_i \ge 1 - \dfrac{p_i}{cr\,\lambda_i} \\[2ex] \displaystyle\int_0^L \left\{ \frac{(1 - \rho_i)\lambda_i cr\, t - p_i t + cr}{p_i} (1 - \rho_i)\lambda_i + (1 - \rho_i)\lambda_i T_2 + \rho_i \lambda_i T_1 \right\} dt, & \text{otherwise} \end{cases} \quad (9.12)$$
Then, the service delay of the system is the sum of the service delays (Eq. (9.12)) of all MEC servers:

$$f(\rho_1, \rho_2, \ldots, \rho_M) = \sum_{i=1}^{M} f_i(\rho_i) \quad (9.13)$$
To search for the minimum of Eq. (9.13), the partial derivative with respect to ρ_i is computed as follows:

$$\frac{df}{d\rho_i} = \frac{df_i}{d\rho_i} = \begin{cases} \left( T_1 - \dfrac{cr}{p_i} \right) \lambda_i L, & \text{if } \rho_i \ge 1 - \dfrac{p_i}{cr\,\lambda_i} \\[2ex] \dfrac{\lambda_i L}{p_i} \left[ (\rho_i - 1)\lambda_i cr L + \dfrac{p_i L}{2} - cr + (T_1 - T_2) p_i \right], & \text{otherwise} \end{cases} \quad (9.14)$$

In Case (1), when ρ_i ∈ [1 − p_i/(cr λ_i), 1], since the penalty delay T_1 is always longer than the time for processing an individual task, i.e., T_1 > cr/p_i, f is strictly monotonically increasing with respect to ρ_i in [1 − p_i/(cr λ_i), 1]. On the other hand, in Case (2), df/dρ_i is obviously an increasing function with respect to ρ_i ∈ [0, 1 − p_i/(cr λ_i)]. Hence, df/dρ_i achieves its maximum at ρ_i = 1 − p_i/(cr λ_i), which is computed as follows:

$$\left. \frac{df}{d\rho_i} \right|_{\rho_i = 1 - \frac{p_i}{cr \lambda_i}} = \frac{\lambda_i L}{p_i} \left[ (\rho_i - 1)\lambda_i cr L + \frac{p_i L}{2} - cr + (T_1 - T_2) p_i \right] = \frac{\lambda_i L}{2 p_i} \left( (2T_1 - 2T_2 - L) p_i - 2 cr \right) \quad (9.15)$$

As the time length L can be sufficiently large, i.e., L >> T_1, we have df/dρ_i |_{ρ_i = 1 − p_i/(cr λ_i)} < 0, which verifies that f is strictly monotonically decreasing with respect to ρ_i in [0, 1 − p_i/(cr λ_i)]. Based on the above analysis, we conclude that f achieves its minimum over the interval [0, 1] at ρ_i = 1 − p_i/(cr λ_i) when this value is non-negative, and at ρ_i = 0 otherwise; therefore, ρ_i = max(0, 1 − p_i/(cr λ_i)). In practice, cr and λ_i can be estimated from the historical statistics of the traffic arrival pattern and service workload.
9.4.3 The Procedure of UL

In this section, we present the general procedure of UL, which consists of two parts: task offloading and utility-table update. The pseudocode is illustrated in Algorithm 9.1.

• Task offloading: Each m_i determines the assignment of a new task r_ik ∈ Q_i^submit as follows. A random variable rand1 is generated from [0, 1]. When rand1 ≤ ρ_i, m_i assigns r_ik to the cloud server. Otherwise, the action space A_i of the local MEC server m_i is computed based on Eq. (9.7). Then, m_i compares the random
Algorithm 9.1: UL
Input and initialization: u_j^i(0) = 0, ∀m_j ∈ N_i; threshold ρ_i and learning rate α.
1: Set time step t = 0
2: while t < T do
     // Step 1: task offloading
3:   while Q_i^submit ≠ ∅ do
4:     Select a task r_ik ∈ Q_i^submit
5:     Generate a random variable rand1 from [0, 1]
6:     if rand1 ≤ ρ_i then
7:       Assign r_ik to the cloud server
8:     else
9:       Compute A_i based on Eq. (9.7)
10:      Generate a random variable rand2 from [0, 1]
11:      if rand2 ≤ γ then
12:        Assign r_ik to an m_{r_ik} randomly selected from A_i
13:      else
14:        Assign r_ik to m_{r_ik} based on Eq. (9.8)
15:      end if
16:    end if
17:    Q_i^submit ← Q_i^submit \ r_ik
18:  end while
     // Step 2: utility-table update
19:  for m_j ∈ N_i do
20:    while R(r_ik) is received from m_j do
21:      if j ≠ i then
22:        u_j^i = (1 − α) · u_j^i + α · R(r_ik)
23:      else
24:        u_j^i = R(r_ik)
25:      end if
26:    end while
27:  end for
28:  t = t + 1
29: end while
variable rand2, selected from [0, 1], with the threshold γ. If rand2 ≤ γ, exploration is performed to randomly assign r_ik to one server from A_i. Otherwise, the exploitation of UL is performed to assign r_ik to the m_{r_ik} with the minimum estimated service delay, as defined in Eq. (9.8). The detailed procedure of task offloading is described in lines 3–17 of Algorithm 9.1.

• Utility-table update: The utility table U^i of m_i is updated as follows. For each new feedback signal R(m_{r_ik}), m_i updates its utility table U^i based on Eq. (9.9). The detailed procedure of utility-table update is described in lines 19–27 of Algorithm 9.1.
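The task-offloading step (lines 3–17 of Algorithm 9.1) can be condensed into the following illustrative Python sketch; all names and numeric inputs are stand-ins, not the chapter's implementation:

```python
# Condensed sketch of one task-offloading decision in Algorithm 9.1:
# cloud with probability rho, otherwise epsilon-greedy over the action
# space. Inputs are hypothetical illustrative values.
import random

def assign(cr, u, delta, rate, cand, rho, gamma, rng=random):
    if rng.random() <= rho:      # probabilistic offloading to the cloud
        return "cloud"
    if not cand:                 # empty action space: cloud fallback
        return "cloud"
    if rng.random() <= gamma:    # exploration: random candidate
        return rng.choice(cand)
    # exploitation: minimize estimated service delay (Eq. 9.8)
    return min(cand, key=lambda j: u[j] + delta[j] + cr / rate[j])

random.seed(0)
u = {1: 0.0, 2: 3.0}
delta = {1: 1.0, 2: 1.3}
rate = {1: 10.0, 2: 20.0}
print(assign(40, u, delta, rate, [1, 2], rho=0.0, gamma=0.0))  # 1
print(assign(40, u, delta, rate, [1, 2], rho=1.0, gamma=0.0))  # cloud
```

With the illustrative numbers, server 1 wins exploitation because 0 + 1 + 40/10 = 5.0 is smaller than 3 + 1.3 + 40/20 = 6.3.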
On this basis, we analyze the complexity of UL to demonstrate its scalability and efficiency. Let ||Q_i^submit|| and ||N_i|| be the numbers of new tasks and neighboring MEC servers of m_i, respectively. The task offloading part makes a scheduling decision for each task r ∈ Q_i^submit. When the task is assigned to the cloud, it takes O(1) time to make the decision; when it is assigned to an MEC server, it takes O(||N_i||) time to determine the optimal MEC server in A_i. Therefore, it takes at most O(||Q_i^submit|| · ||N_i||) time to perform task offloading. For the utility-table update, it takes at most O(||Q_i^submit|| · ||N_i||) time to perform the update operations based on the feedback signals. Based on the above analysis, the computation complexity of UL is O(||Q_i^submit|| · ||N_i||). In particular, ||N_i|| can be regarded as a constant, which is much less than M. To sum up, the computation complexity of UL is O(||Q_i^submit||), which is on the order of the number of newly submitted tasks during each scheduling period.
9.5 Performance Evaluation

9.5.1 Setup

In this section, we implement the simulation model based on the system architecture described in Sect. 9.2. The simulation model integrates a traffic simulator with a real-world map, a scheduling module, and an optimization module. Specifically, a real-world map, extracted from the core area of the High-tech Zone in Chengdu, China, is imported to establish the traffic scenario in the traffic simulator SUMO [25]. SUMO is adopted to simulate vehicle mobility and generate vehicular traces. Then, a scheduling module is implemented for managing the task submission of mobile vehicles, task offloading, and the processing of each MEC/cloud server. In particular, the scheduling decision is determined by the optimization module, which runs the algorithm based on the parameter input from the scheduling module and exports the scheduling decision. The relationships among these modules are illustrated in Fig. 9.3. In the default setting, there are 32 MEC servers, which are evenly distributed along the road. The arrival pattern of vehicles in the service range of each MEC server m_i ∈ M follows a Poisson process, and the arrival rate, denoted by λ_i, is randomly generated from the interval [1440, 2520] veh/h. To differentiate the computation capability, the processing rate p_i of each MEC server is randomly generated from the interval [32, 38] resource units per time unit. For each vehicle, the computation requirement of tasks is generated from the interval [55, 65] resource units. In particular, the penalty delays T1 and T2 are both set to 60 time units. Unless stated otherwise, the simulation is conducted under the default setting. This is the first time that a distributed learning-based task scheduling mechanism in hybrid MEC/cloud-based vehicular networks has been taken into consideration. Accordingly, there are no existing scheduling algorithms that can be directly adopted in the concerned service scenario. For performance comparison, we implement two
Fig. 9.3 Simulation modules
competitive algorithms: (1) Local First (LF), which adaptively assigns each task to the local server or the cloud based on the real-time workload of the local server. Only when the task cannot be completed by the local server within the dwelling interval will it be uploaded to the cloud server. Hence, LF is competitive in the sense of workload balancing between an individual MEC server and the cloud layer. (2) Ordinal Sharing Learning (OSL) [26], a learning-based offloading algorithm originally designed for grid computing systems. Each local scheduler learns the number of pending tasks on each MEC server by periodically sharing information with the closest local scheduler. OSL is competitive in the sense of workload balancing among MEC servers. Further, we collect the following statistics for performance analysis: the average pending delay of tasks assigned to MEC server m_j, denoted by Q_j^wait, and the total number of submitted tasks during the simulation, denoted by Q^total. Moreover, we collect the numbers of tasks that suffer from the penalty delay of the cloud and the MEC servers, denoted by Q_C^delay and Q_M^delay, respectively. Besides the ASD, which is the primary objective of DTO, we define three more metrics for further discussion, which are defined as follows:
• Average Pending Delay (APD_avg): It is defined as the mean waiting time of the tasks before being processed:

$APD_{avg} = \frac{1}{M}\sum_{j=1}^{M} Q_j^{wait}$  (9.16)

A higher value of APD_avg indicates that heavier workloads are assigned to the MEC servers, which causes longer service delays.
• Standard Deviation of the APD (APD_std): It is computed as follows:

$APD_{std} = \sqrt{\frac{1}{M}\sum_{j=1}^{M} \left(APD_j - APD_{avg}\right)^2}$  (9.17)

A higher value of APD_std indicates that resource utilization among the MEC servers is more unbalanced, which results in more unfairness among tasks.
• Ratio of Tolerating Penalty Delay (RTPD): It is defined as the number of tasks that tolerate the penalty delay to retrieve their computation results divided by the total task number:

$RTPD = \frac{Q_M^{delay} + Q_C^{delay}}{Q^{total}}$  (9.18)

A higher value of RTPD indicates that more tasks suffer from longer transmission delays when retrieving their computation results, which increases the service delay.
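The three metrics above can be computed directly from the collected per-server statistics. A minimal Python helper (function names are ours) is:

```python
import math

def apd_avg(wait_delays):
    """Average Pending Delay (Eq. (9.16)): mean per-server waiting time,
    where wait_delays[j] holds Q_j^wait for each of the M MEC servers."""
    return sum(wait_delays) / len(wait_delays)

def apd_std(wait_delays):
    """Standard deviation of the APD (Eq. (9.17), population form)."""
    mean = apd_avg(wait_delays)
    return math.sqrt(sum((d - mean) ** 2 for d in wait_delays) / len(wait_delays))

def rtpd(q_mec_delay, q_cloud_delay, q_total):
    """Ratio of Tolerating Penalty Delay (Eq. (9.18)): fraction of tasks
    that tolerate the penalty delay to retrieve their results."""
    return (q_mec_delay + q_cloud_delay) / q_total
```

For example, with per-server pending delays [10, 20, 30], APD_avg is 20, and with Q_M^delay = 30, Q_C^delay = 20, and Q^total = 1000, RTPD is 0.05.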
9.5.2 Simulation Results and Analysis

Effect of Traffic Workload: Figure 9.4 shows the ASD of the three algorithms under different vehicle arrival rates. A higher vehicle arrival rate brings more submitted tasks, which results in a heavier workload. Accordingly, as shown in Fig. 9.4, the ASD of the three algorithms increases gradually as the vehicle arrival rate increases. As noted, UL achieves a lower ASD than both OSL and LF, which can be explained by the observations from Figs. 9.5 and 9.6, where the APD and the RTPD of the three algorithms are compared under different vehicle arrival rates. As shown, UL achieves the lowest APD, and its APD_std stays at a preferably low level, which indicates that UL can effectively balance the workload via the cooperation of MEC servers. Meanwhile, the RTPD of UL also stays at a low value, which verifies that UL can adaptively achieve the optimal task offloading between the cloud layer and the MEC layer based on the derived threshold. An interesting phenomenon verifies the adaptiveness of UL: the APD of UL first increases and then decreases as the vehicle arrival rate grows. This is because UL can fully exploit the local computation resources of MEC
Fig. 9.4 ASD of three algorithms under different vehicle arrival rates
Fig. 9.5 APD of three algorithms under different vehicle arrival rates
Fig. 9.6 RTPD of three algorithms under different vehicle arrival rates
servers to handle the tasks within the MEC layer when the workload is low, and the APD increases with the vehicle arrival rate because more tasks are placed in the pending queue. However, when the workload exceeds the threshold to some extent, UL adaptively assigns a larger percentage of tasks to the cloud rather than to MEC servers, which results in a decreasing APD and then a dramatically increasing RTPD, as shown in Fig. 9.6. Further, it is observed that the ASD of OSL is lower than that of LF at the beginning, but later increases dramatically. This is because OSL assigns a higher percentage of tasks to the cloud server when the traffic workload becomes higher, which causes a much longer transmission delay, as shown in Fig. 9.6. The above simulation results show the effectiveness of UL against different system workloads.

Effect of Network Scale: Figure 9.7 shows the ASD of the three algorithms under different numbers of MEC servers. More MEC servers indicate a larger network scale, which makes the coordination of MEC servers more challenging. As noted, the performance of UL and LF stays at a preferable level under varying network scales. However, the ASD of OSL grows explosively as the network scale increases. The reason is discussed below. Figures 9.8 and 9.9 show the APD and the RTPD of the three algorithms under different numbers of MEC servers, respectively. As shown, the APD and the RTPD of LF are stable, since LF always prefers the local computation capability without considering the cooperation between MEC servers, so its performance is not affected by varying network scales under the same workload. However, LF cannot fully utilize the computation resources of MEC servers, especially when the workload is unevenly distributed. OSL always assigns new tasks to the server with the least number of pending tasks.
However, as the network scale increases, OSL cannot learn the global workload distribution in real time, which even exacerbates the unbalanced workload among MEC servers and the cloud. Thus, a higher percentage of tasks has to tolerate the penalty delay to send the computation results back, which is verified in Figs. 9.8 and 9.9. UL overcomes this issue by concentrating on cooperation within the set of
Fig. 9.7 ASD of three algorithms under different numbers of MEC servers
Fig. 9.8 APD of three algorithms under different numbers of MEC servers
Fig. 9.9 RTPD of three algorithms under different numbers of MEC servers
neighboring servers: the number of such servers is relatively small and does not change with the network scale, so the real-time workload distribution among them can be learned efficiently. Therefore, by striking a balance between LF and OSL, UL achieves the best performance among the three algorithms. The above results demonstrate the scalability of UL against varying network scales.
9.6 Conclusion

In this chapter, we presented an MEC-empowered vehicular network architecture for distributed task offloading and workload balancing, where each MEC server was regarded as a local scheduler responsible for task offloading within its service range. We comprehensively investigated the effects of multiple critical factors, including the computation resource requirements of tasks, the processing rates of MEC servers,
and the mobility of vehicles. We formulated the DTO problem to minimize the ASD by searching for the optimal task offloading solution. Further, we proved the NP-hardness of DTO by giving a polynomial-time reduction from the MSP to a special instance of DTO. On this basis, we designed an MAB-based algorithm called UL. Specifically, a utility table was maintained at each MEC server to record the learned global knowledge of the system workload. Then, exploration and exploitation mechanisms were designed to explore the service conditions of neighboring MEC servers and to make optimal scheduling decisions, respectively. An update mechanism was implemented to keep the utility table up to date based on the feedback signals received from other MEC/cloud servers. Further, a threshold-based probabilistic method was derived to optimally allocate tasks to the cloud. Finally, we built the simulation model by integrating the traffic simulator SUMO with an imported real-world map, a scheduling module, and an optimization module. Comprehensive simulation results demonstrated the effectiveness and scalability of the proposed algorithm under a wide range of scenarios.
References

1. W. Song, Y. Yang, M. Fu, Y. Li, M. Wang, Lane detection and classification for forward collision warning system based on stereo vision. IEEE Sensors J. 18(12), 5151–5163 (2018)
2. E. Dabbour, S. Easa, Proposed collision warning system for right-turning vehicles at two-way stop-controlled rural intersections. Transp. Res. Part C: Emerg. Technol. 42, 121–131 (2014)
3. F. Ali, D. Kwak, P. Khan, S.R. Islam, K.H. Kim, K. Kwak, Fuzzy ontology-based sentiment analysis of transportation and city feature reviews for safe traveling. Transp. Res. Part C: Emerg. Technol. 77, 33–48 (2017)
4. P. Dai, K. Liu, X. Wu, Z. Yu, H. Xing, V.C.S. Lee, Cooperative temporal data dissemination in SDN-based heterogeneous vehicular networks. IEEE Internet Things J. 6(1), 72–83 (2019)
5. K. Liu, J.K.-Y. Ng, V.C.S. Lee, S.H. Son, I. Stojmenovic, Cooperative data scheduling in hybrid vehicular ad hoc networks: VANET as a software defined network. IEEE/ACM Trans. Netw. 24(3), 1759–1773 (2016)
6. M. Li, P. Si, Y. Zhang, Delay-tolerant data traffic to software-defined vehicular networks with mobile edge computing in smart city. IEEE Trans. Veh. Technol. 67, 9073–9086 (2018)
7. X. Wang, X. Li, S. Pack, Z. Han, V.C.M. Leung, STCS: spatial-temporal collaborative sampling in flow-aware software defined networks. IEEE J. Sel. Areas Commun. 38, 999–1013 (2020)
8. K. Zhang, S. Leng, Y. He, S. Maharjan, Y. Zhang, Cooperative content caching in 5G networks with mobile edge computing. IEEE Wirel. Commun. 25(3), 80–87 (2018)
9. X. Wang, C. Wang, X. Li, V.C.M. Leung, T. Taleb, Federated deep reinforcement learning for Internet of things with decentralized cooperative edge caching. IEEE Internet Things J. 7, 9441–9455 (2020)
10. K. Zhang, Y. Mao, S. Leng, Y. He, Y. Zhang, Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading. IEEE Veh. Technol. Mag. 12(2), 36–44 (2017)
11. J. Wang, K. Liu, B. Li, T. Liu, R. Li, Z. Han, Delay-sensitive multi-period computation offloading with reliability guarantees in fog networks. IEEE Trans. Mobile Comput. 19, 2062–2075 (2019)
12. P. Dai, Z. Hang, K. Liu, X. Wu, H. Xing, Z. Yu, V.C.S. Lee, Multi-armed bandit learning for computation-intensive services in MEC-empowered vehicular networks. IEEE Trans. Veh. Technol. 69(7), 7821–7834 (2020)
13. B.P. Rimal, D.P. Van, M. Maier, Mobile-edge computing vs. centralized cloud computing in fiber-wireless access networks, in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS'16) (2016), pp. 991–996
14. W. Liu, Y. Shoji, Edge-assisted vehicle mobility prediction to support V2X communications. IEEE Trans. Veh. Technol. 68(10), 10227–10238 (2019)
15. Z. Zhao, L. Guardalben, M. Karimzadeh, J. Silva, T. Braun, S. Sargento, Mobility prediction-assisted over-the-top edge prefetching for hierarchical VANETs. IEEE J. Sel. Areas Commun. 36(8), 1786–1801 (2018)
16. C. Ballas, M.A. Marsden, D. Zhang, N.E. O'Connor, S. Little, Performance of video processing at the edge for crowd-monitoring applications, in Proceedings of the IEEE 4th World Forum on Internet of Things (WF-IoT'18) (2018), pp. 482–487
17. K. Liu, X. Xu, M. Chen, B. Liu, L. Wu, V.C.S. Lee, A hierarchical architecture for the future Internet of vehicles. IEEE Commun. Mag. 57(7), 41–47 (2019)
18. J.F. Shortle, J.M. Thompson, D. Gross, C.M. Harris, Fundamentals of Queueing Theory, vol. 399 (John Wiley & Sons, Hoboken, USA, 2018)
19. Z. Zhou, J. Feng, Z. Chang, X.S. Shen, Energy-efficient edge computing service provisioning for vehicular networks: a consensus ADMM approach. IEEE Trans. Veh. Technol. 68(5), 5087–5099 (2019)
20. M. Skutella, G.J. Woeginger, A PTAS for minimizing the weighted sum of job completion times on parallel machines, in Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing (STOC'99) (ACM, 1999), pp. 400–407
21. A. Kara, I. Dogan, Reinforcement learning approaches for specifying ordering policies of perishable inventory systems. Expert Syst. Appl. 91, 150–158 (2018)
22. D.L. Leottau, J.R. del Solar, R. Babuška, Decentralized reinforcement learning of robot behaviors. Artif. Intell. 256, 130–159 (2018)
23. S. Xie, R.B. Girshick, P. Dollár, Z. Tu, K. He, Aggregated residual transformations for deep neural networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'17) (2017), pp. 5987–5995
24. H. Ye, G.Y. Li, Deep reinforcement learning for resource allocation in V2V communications, in Proceedings of the IEEE International Conference on Communications (ICC'18) (2018), pp. 1–6
25. M. Behrisch, L. Bieker, J. Erdmann, D. Krajzewicz, SUMO – simulation of urban mobility: an overview, in Proceedings of the Third International Conference on Advances in System Simulation (SIMUL'11) (2011)
26. J. Wu, X. Xu, P. Zhang, C. Liu, A novel multi-agent reinforcement learning approach for job scheduling in grid computing. Futur. Gener. Comput. Syst. 27(5), 430–439 (2011)
Part IV
Intelligent IoV: Key Enabling Technologies in Vehicular Edge Intelligence
Chapter 10
Toward Timely and Reliable DNN Inference in Vehicular Edge Intelligence
Abstract This chapter explores accelerating DNN inference with a reliability guarantee in VEC by considering the synergistic impacts of vehicle mobility and V2V/V2I communications. First, we show the necessity of striking a balance between DNN inference acceleration and reliability in VEC and give insights into the design rationale by analyzing the features of overlapped DNN partitioning and mobility-aware task offloading. Second, we formulate the Cooperative Partitioning and Offloading (CPO) problem by presenting a cooperative DNN partitioning and offloading scenario, followed by deriving an offloading reliability model and a DNN inference delay model. The CPO problem is proved to be NP-hard. Third, we propose two approximation algorithms, i.e., the Submodular Approximation Allocation Algorithm (SA³) and the Feed Me the Rest algorithm (FMtR). In particular, SA³ determines the edge allocation in a centralized way, which achieves a 1/3-optimal approximation on maximizing the inference reliability. On this basis, FMtR partitions the DNN models and offloads the tasks to the allocated edge nodes in a distributed way, which achieves a 1/2-optimal approximation on maximizing the inference reliability. Finally, we build the simulation model and give a comprehensive performance evaluation, which demonstrates the superiority of the proposed solutions.

Keywords Vehicular edge computing · DNN inference acceleration · Reliability guarantee · Overlapped partitioning
10.1 Introduction

Recent advances in C-V2X [1] and DNN drive the development of VEC [2]. Meanwhile, the equipment of modern vehicles with various sensors, such as depth cameras, laser radars, and millimeter-wave radars, brings great demands on massive data transmission and computation. Clearly, to realize emerging ICVs and ITSs, it is critical to support the efficient and reliable processing of computation-intensive tasks, such as DNN inference.

Although the computation capacity of modern vehicles keeps increasing, it cannot afford the ever-increasing demands on data processing. For example, image classification with ResNET-152 [3] requires about 22.6 billion computation operations to process a 224 × 224 image, and about 33 trillion computation operations per second for processing videos at 30 fps. On typical end devices such as the Raspberry Pi 4B and Jetson Xavier NX, it takes about 135 seconds and 4.5 seconds, respectively, to process one second of video with ResNET-152. In practice, a typical modern vehicle such as the Tesla Model X is already equipped with 8 cameras, 12 ultrasonic radars, and 1 millimeter-wave radar, let alone the data coming from V2X communications, which will bring tremendous data computation demands. It is desirable to have effective task offloading policies in VEC to enable low-latency yet reliable DNN inference [4].

Although cloud computing is a promising paradigm for processing computation-intensive tasks, it may suffer unexpected communication latency and is hence not suitable for most safety-critical applications. Further, note that even for non-real-time tasks, it is still likely that simply offloading all the tasks to the cloud would seriously deteriorate the overall system performance. For example, consider a crowd-sourcing-based traffic abnormity detection system: if all the vehicles compete for the limited cloud bandwidth for information uploading, the quality of data in terms of freshness could vary greatly when fusing at the cloud, which might result in the failure of event detection. Accordingly, this chapter focuses on the investigation of cooperative task offloading in VEC.

A large body of literature has studied task offloading in vehicular networks [5, 6]. These studies mainly focused on heterogeneous resource allocation and the cooperation among vehicles, edge nodes, and cloud servers, without considering fine-grained task offloading, such as DNN model partitioning. On the other hand, many studies have focused on DNN model partitioning and offloading in edge computing environments to speed up the inference [7–9].
The basic rationale is summarized below. First, the input data of a DNN model can be partitioned into multiple portions and offloaded to different edge nodes, which typically have higher available computation capacities. Then, each edge node processes a smaller share of the data in parallel and returns its partially computed results to the end device. Finally, the end device integrates the results and completes the inference. However, the existing solutions cannot be directly applied in VEC. First, these solutions mainly focused on DNN partitioning for a single end device, without considering the cooperation of multiple vehicles during task offloading. More importantly, current solutions solely focused on accelerating DNN inference while ignoring the unique characteristics of VEC, including high vehicle mobility, dynamic network topology, and intermittent wireless connections. With the above motivations, this chapter is committed to accelerating DNN inference with a reliability guarantee in VEC. Specifically, we first motivate the necessity and potential of jointly considering overlapped partitioning and mobility-aware offloading, so as to strike a balance between accelerating DNN inference and enhancing system reliability. Then, we present the task offloading scenario, where tasks can be partitioned and offloaded to different edge nodes, and a vehicle reprocesses a task locally if its offloading fails due to mobility. In this context, the "task" refers to the "DNN inference," and the two terms
are used interchangeably in the rest of this chapter. On this basis, we formulate the Cooperative Partitioning and Offloading (CPO) problem with the objective of minimizing the average DNN inference delay, and we prove that it is NP-hard. Further, we propose two approximate solutions by decomposing CPO into two subproblems and give a comprehensive performance evaluation, which conclusively demonstrates the effectiveness of the proposed solutions. As far as we know, this is the first work that explores the synergistic effects of overlapped DNN partitioning and mobility-aware task offloading in VEC and realizes DNN inference acceleration with a reliability guarantee. The main contributions of this chapter are outlined as follows:
• We motivate the investigation of overlapped partitioning and mobility-aware offloading in VEC. Specifically, we first verify that simply pursuing inference acceleration may seriously deteriorate system reliability. Then, we examine the effectiveness of the overlapped partitioning mechanism in balancing inference acceleration and reliability. Finally, we show the essence of jointly considering overlapped partitioning and mobility-aware offloading in accelerating DNN inference while maintaining decent system reliability.
• We formulate the Cooperative Partitioning and Offloading (CPO) problem. Specifically, we first model the relative distance status and connection probability between vehicles and edge nodes. Then, we investigate the mapping property between the input and output matrices of the DNN model during partitioning. Further, we derive the DNN inference delay. Finally, we prove that CPO is NP-hard.
• We propose two approximation algorithms, i.e., the Submodular Approximation Allocation Algorithm (SA³) and the Feed Me the Rest algorithm (FMtR). Specifically, SA³ allocates the edge nodes for task offloading with a lower bound of 1/3 in terms of maximizing inference reliability.
Given the edge node allocation, FMtR further partitions the DNN models, achieving a 1/2-optimal approximation on maximizing inference reliability, and then completes the offloading by mapping the partitions to the allocated edge nodes, striking the best balance between DNN inference acceleration and system reliability.
10.2 System Architecture

10.2.1 System Overview

Figure 10.1 shows the concerned task offloading scenario in VEC. First, vehicles with task offloading demands are considered as client vehicles, denoted by V = {1, 2, · · · , |V|}. Peer vehicles and roadside infrastructures such as RSUs, which can help to process tasks, are considered as edge nodes, denoted by N = {1, 2, · · · , |N|}. The cloud server maintains the up-to-date system status based on the information uploaded from client vehicles and edge nodes, and it is responsible
Fig. 10.1 System scenario
for allocating edge nodes to client vehicles. Client vehicles can offload their tasks to edge nodes via V2V/V2I communications. Note that the offloading might fail due to vehicle mobility and limited communication ranges. We focus on the offloading of non-real-time tasks, where a task is reprocessed locally if its offloading fails, and the primary goals are to minimize the average task processing time and enhance system reliability. The system workflow is summarized as follows.
Step 1: Vehicles and roadside infrastructures upload their information (e.g., locations, directions, available communication/computation capacities, and tasks) to the cloud server and register themselves as either client vehicles or edge nodes.
Step 2: Upon receiving the uploaded information, the cloud server schedules the allocation of edge nodes for each task based on the up-to-date system status and then sends notifications to all the client vehicles and edge nodes.
Step 3: With the received instructions, client vehicles partition the input data and offload their tasks to the assigned edge nodes via V2V/V2I communications.
Step 4: Edge nodes process the offloaded tasks in a distributed manner.
Step 5: Each edge node tries to return the processed results to the corresponding client vehicle via V2V/V2I communications.
Step 6: If the client vehicle receives the required results in due course, it completes the DNN inference. Otherwise, the client vehicle has to reprocess the DNN inference locally.
In this chapter, we do not specify absolute values for the scheduling period or uploading frequency but adopt the time slot as the unit to emphasize the generality of the analysis. In practice, the period of the above system procedures can be dynamically adjusted based on particular application requirements.
10.2.2 Motivation

Overlapped Partitioning: Intuitively, overlapped partitioning is a potential approach to enhance system reliability by counteracting the effects of vehicle mobility and intermittent wireless connections; it partitions the DNN model into multiple portions with overlapped parts and offloads them to different edge nodes. In this way, even if only some of the edge nodes return their processing results, it is still possible for the end device to concatenate all the required information and complete the DNN inference. In the following, we take classical Convolutional Neural Networks (CNNs) as an example to elaborate the idea. As shown in Fig. 10.2, there are one client vehicle (e.g., v) and four edge nodes (e.g., n1, · · · , n4). For simplicity, assume that they have the same computation capacity, and the connection probability of each communication pair is 80%. Then, we compare the following three solutions. Solution I, offloading with replication: the client vehicle offloads a full copy of the DNN model (i.e., all the input data) to each edge node. Solution II, offloading with non-overlapped partitioning: it divides the input data into four non-overlapped parts and offloads them to the edge nodes. Solution III, offloading with overlapped partitioning: it divides the input data into four portions with overlapped parts and offloads them to the edge nodes. For performance evaluation, we define two metrics, namely, the acceleration ratio and the success ratio. The former is defined as the ratio of the processing time at the client vehicle over the processing time with task offloading (assuming 100% reliability of offloading). The latter is defined as the probability that the client vehicle is able to retrieve all the required data to complete the DNN inference. As summarized in the table of Fig.
10.2, Solution I achieves a success ratio of 99.84%, because the vehicle can retrieve the required data as long as one of the four edge nodes returns the result, and the probability is computed by 1 − (1 − 80%)^4. On the other hand, the acceleration ratio is 1, since the vehicle and the edge nodes have the same computation capacity. For Solution II, it achieves a success ratio of
Fig. 10.2 Motivation on the effect of different partitioning solutions
Fig. 10.3 Motivation on the effect of vehicle mobility
40.96%, because it requires the output of all four edge nodes, and the probability is computed by 80%^4. On the other hand, the highest acceleration ratio that Solution II can achieve is 4 (when the model is equally partitioned and offloaded to the four edge nodes). Finally, Solution III divides the input data into two equal parts; the first half is offloaded to n1 and n4, while the second half is offloaded to n2 and n3. It achieves a success ratio of 92.16%, since the inference can be completed as long as at least one of n1 and n4 sends back its output and at least one of n2 and n3 sends back its output; the probability is computed by 80%^4 + C(4,1) · 80%^3 · 20% + C(2,1) · C(2,1) · 80%^2 · 20%^2. Clearly, the acceleration ratio is 2 in this case. From the above analysis, we observe that the overlapped partitioning method has the potential to strike a balance between the success ratio and the acceleration ratio.

Mobility-Aware Offloading: We further investigate the effect of vehicle mobility on task offloading. As shown in Fig. 10.3, considering different relative heading directions between the vehicle v and the four edge nodes n1, n2, n3, and n4, we set the connection probabilities to 40%, 90%, 70%, and 50%, respectively. Also, assume that all the nodes have the same computation capacity. We examine a variant of Solution III (i.e., Solution III with mobility-aware offloading): first, as in Solution III, it divides the input data into two equal parts; then, distinguishing it from Solution III, it offloads the first half to n1 and n3 and the second half to n2 and n4. The results are shown in the table of Fig. 10.3. Clearly, both solutions achieve an acceleration ratio of 2 due to the same partitioning strategy. On the other hand, the success ratio of Solution III in this case decreases to 67.9% due to the lower connection probabilities between the nodes.
Nevertheless, the variant of Solution III enhances the reliability by 10%, mainly because it considers the connection probabilities between different nodes and tries to balance the chances of returning different portions of the model during offloading, so as to maximize the probability that all the required data can be received by the end device. To sum up, jointly considering overlapped partitioning and mobility-aware offloading is a promising strategy to accelerate DNN inference while maintaining decent system reliability.
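The success ratios quoted above can be reproduced by exhaustively enumerating the 2^4 connection outcomes. In the following Python sketch (the helper name is ours), each offloading plan is described as a list of node groups, where the inference succeeds if and only if at least one node in every group returns its (possibly overlapped) portion:

```python
from itertools import product

def success_ratio(groups, probs):
    """Exact success probability of an offloading plan.

    groups -- list of edge-node index sets; the inference succeeds iff at
              least one node in every group returns its portion.
    probs  -- per-node probability of successfully returning a result.
    """
    total = 0.0
    for outcome in product([0, 1], repeat=len(probs)):
        # Probability of this particular success/failure pattern.
        p = 1.0
        for i, ok in enumerate(outcome):
            p *= probs[i] if ok else 1 - probs[i]
        # Count it iff every group has at least one successful node.
        if all(any(outcome[i] for i in g) for g in groups):
            total += p
    return total

# Uniform 80% connections (the Fig. 10.2 example, nodes n1..n4 -> indices 0..3):
p_uniform = [0.8] * 4
sol1 = success_ratio([[0, 1, 2, 3]], p_uniform)        # I: replication
sol2 = success_ratio([[0], [1], [2], [3]], p_uniform)  # II: non-overlapped
sol3 = success_ratio([[0, 3], [1, 2]], p_uniform)      # III: overlapped halves

# Heterogeneous connections 40/90/70/50% (the Fig. 10.3 example):
p_mob = [0.4, 0.9, 0.7, 0.5]
plain = success_ratio([[0, 3], [1, 2]], p_mob)  # Solution III
aware = success_ratio([[0, 2], [1, 3]], p_mob)  # mobility-aware variant
```

Running this yields 99.84%, 40.96%, and 92.16% for Solutions I–III, and 67.9% versus 77.9% for Solution III and its mobility-aware variant, matching the figures discussed above.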
10.3 Cooperative Partitioning and Offloading (CPO) Problem

In this section, we derive two analytical models, namely, the offloading reliability model and the DNN inference delay model. Further, we formulate the CPO problem and prove its NP-hardness. The primary notations are summarized in Table 10.1.
10.3.1 Offloading Reliability Model

Analysis of Relative Distance Status: In this part, we focus on modeling the status of the relative distance between client vehicles and edge nodes, since it is the key factor that influences node connectivity and offloading reliability. Specifically, we first formulate the relative distance based on a discrete Markov chain and formulate the relative heading state based on the directions of the two corresponding nodes. On this basis, we formulate the status of relative distances under different relative heading states based on a multi-class Markov chain. Finally, we derive the probability distribution of future relative distances between client vehicles and edge nodes based on their historical trajectories.

First, we formulate the variation of the relative distance by a discrete Markov chain [10]. Denote the maximum distance between v and n as D_max. Then, we divide D_max into a set of discrete states, D = {1, 2, · · · , |D|}, with unit length ω, where |D| = ⌈D_max/ω⌉. Further, the relative distance between v and n at time t is denoted by d_vn(t) ∈ [0, D_max]. Accordingly, we define a mapping function, φ(d_vn(t)) = j, j ∈ D, to indicate that d_vn(t) lies in the range [ωj, ω(j + 1)], namely, in state j.

Second, we formulate the relative heading state. As shown in Fig. 10.4, we define the driving direction of v or n as its heading direction. Then, we define the vector direction from the client vehicle v to an edge node n as the reference direction, denoted by ϕ_vn. With that, the relative heading state (v to n) at time t is denoted by ϕ_{v|vn}(t). There are three cases to determine the value of ϕ_{v|vn}(t): a) it equals 1 when the angle between the heading direction of v and ϕ_vn is in [0°, 90°] (e.g., ϕ_{v|vn2} and ϕ_{n1|vn1}); b) it equals −1 when the angle is in the range [90°, 180°]
c) It equals 0 for n when it is a static edge node (e.g., (e.g., .ϕv|vn1 and .ϕn2 |− vn 2 − → .ϕn |vn ). The table in Fig. 10.4 shows the corresponding relative heading states for 4 4 all the communication pairs. Third, we formulate the status relative distances under different relative heading states by the multi-class Markov chain [11]. Specifically, assuming the relative head→ → ing state at each timestamp is independent, we denote .W = {ϕv|− vn (t), ϕn|− vn (t)|∀v ∈ V, ∀n ∈ N, ∀t ∈ T } as the set of relative heading state pairs. Further, since relative heading state pairs may have similar mobility patterns, we denote .Z = {1, 2, · · · , |Z|} as the set of latent mobility patterns. Without loss of generality, we have .|W| ≥ |Z|. Then, given a relative heading state pair .w ∈ W, the probability
10 Toward Timely and Reliable DNN Inference
Table 10.1 Primary notations

$\mathcal{V}, \mathcal{N}, v, n$ — A set of client vehicles, a set of edge nodes, $v \in \mathcal{V}$ and $n \in \mathcal{N}$
$d_{vn}(t)$ — Relative distance between $v$ and $n$ at time $t$
$\mathcal{D}, \omega$ — A set of relative distance states, the unit length of each relative distance state
$\varphi(d_{vn}(t)), \varphi_t$ — The mapping function from the relative distance to the state of $d_{vn}(t)$, i.e., $\varphi_t \in \mathcal{D}$
$\overrightarrow{\phi}_{vn}, \phi_{v|\overrightarrow{vn}}(t)$ — Reference direction between $v$ and $n$, relative heading state ($v$ to $n$) at time $t$
$\mathcal{W}, w$ — Set of relative heading state pairs, $w = \{\phi_{v|\overrightarrow{vn}}(t), \phi_{n|\overrightarrow{vn}}(t)\} \in \mathcal{W}$
$\mathcal{Z}, z$ — Set of latent mobility patterns, $z \in \mathcal{Z}$
$\Pr^{con}_{vn}, RE_{vn}$ — Connecting probability and offloading reliability between $v$ and $n$
$k, K$ — Type ID of DNN model, $1 \le k \le K$
$C_k, F_k$ — Total numbers of layers in the feature extraction stage and the classification/detection stage of the type $k$ model
$H^{in}_i, W^{in}_i, c^{in}_i$; $H^{out}_i, W^{out}_i, c^{out}_i$ — Height, width, and channel values of the input/output matrices of convolutional layer $i$ $(1 \le i \le C_k)$
$I_j, O_j$ — Length values of the input and output data of fully connected layer $j$ $(1 \le j \le F_k)$
$q_i, s_i$ $(q_j, s_j)$ — Input data size and computation requirement of layer $i$ ($j$) $(1 \le i \le C_k, 1 \le j \le F_k)$
$low^i_{vn}, up^i_{vn}$ — Partitioned rows of the output matrices in layer $i$ $(0 \le i \le C_k)$, offloading from $v$ to $n$
$CO_{vn}, TO_{vn}$ — Computation and transmission overhead, offloading from $v$ to $n$
$\chi_v, \chi_n$ — Computation capacities of $v$ and $n$
$\beta_n, \gamma_{vn}$ — Communication coverage of $n$, transmission rate between $v$ and $n$
$h^0_{vn}, h^1_{vn}$ — Cooperative partitioning and offloading decisions for the pair $(v, n)$
$\mathcal{N}_v$ — Set of edge nodes allocated to $v$
$\Lambda_v$ — Set of edge nodes forming the complete results, $\Lambda_v \subseteq \mathcal{N}_v$
$LD_v, OD_{vn}$ — Local delay of $v$, offloading delay of $n$ processing the workload from $v$
$COD^{\Lambda_v}_v$ — Cooperative offloading delay of $\Lambda_v$
$ID^{\mathcal{N}_v}_v$ — Inference delay of $v$ completing the DNN inference, including the reprocessing overhead
$B$ — Constant threshold of the number of allocated edge nodes for each client vehicle
Fig. 10.4 An illustration of the relative heading state
that it belongs to the latent mobility pattern $z \in \mathcal{Z}$ is denoted by $\Pr(z|w)$, with the constraint that $\sum_{\forall z \in \mathcal{Z}} \Pr(z|w) = 1$. Moreover, given a specific latent mobility pattern $z \in \mathcal{Z}$ and the current relative distance segment $i \in \mathcal{D}$, the probability that $i$ changes to $j \in \mathcal{D}$ in the next timestamp is denoted by $\Pr(j|i,z)$, with the constraint that $\sum_{\forall j \in \mathcal{D}} \Pr(j|i,z) = 1$.

Fourth, utilizing the historical trajectories, we calculate the occurrence frequency of the state transition from $i$ to $j$ under the relative heading state pair $w$, which is denoted by $num(j|i,w), \forall i,j \in \mathcal{D}, \forall w \in \mathcal{W}$. Then, the expectation–maximization method is adopted to estimate the values of $\Pr(j|i,z)$ and $\Pr(z|w), \forall i,j \in \mathcal{D}, \forall z \in \mathcal{Z}, \forall w \in \mathcal{W}$ [12].

Finally, we derive the probability distribution of the mobility patterns between $v$ and $n$ based on their last $\kappa$ GPS points. Specifically, the $\kappa$ relative distance state transitions are represented by

$$\mathcal{H} = \big\{(\varphi_t, \varphi_{t+1}) \mid t \in \{-\kappa, \cdots, -1\}\big\} \tag{10.1}$$

where $\varphi_t$ represents $\varphi(d_{vn}(t))$. Then, we denote $i, j \in \mathcal{D}$ as the current and the next relative distance states, $\varphi_t$ and $\varphi_{t+1}$, in $\mathcal{H}$, respectively. Accordingly, the probability that the communication pair of $v$ and $n$ belongs to the latent mobility pattern $z \in \mathcal{Z}$ is computed by [11]

$$\Pr(z|\mathcal{H}) = \frac{\sum_{(i,j) \in \mathcal{H}} \Pr(j|i,z)}{\sum_{z' \in \mathcal{Z}} \sum_{(i,j) \in \mathcal{H}} \Pr(j|i,z')} \tag{10.2}$$
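To make Eq. (10.2) concrete, the following sketch computes the posterior over latent mobility patterns from an observed transition history. It is illustrative only: the table `pr_j_given_iz` stands in for the EM-estimated probabilities $\Pr(j|i,z)$ described above, and the toy dimensions ($|\mathcal{D}| = 3$, $|\mathcal{Z}| = 2$) are hypothetical.

```python
import numpy as np

def pr_z_given_h(history, pr_j_given_iz):
    """Posterior over latent mobility patterns (Eq. 10.2).

    history: list of (i, j) relative-distance state transitions observed
             over the last kappa GPS points.
    pr_j_given_iz: array of shape (|D|, |D|, |Z|) with
                   pr_j_given_iz[i, j, z] = Pr(j | i, z).
    """
    num_z = pr_j_given_iz.shape[2]
    # Transition likelihood of the observed history under each pattern z
    scores = np.array([
        sum(pr_j_given_iz[i, j, z] for (i, j) in history)
        for z in range(num_z)
    ])
    return scores / scores.sum()  # normalize over all z' in Z

# Toy setting: |D| = 3 distance states, |Z| = 2 latent patterns
rng = np.random.default_rng(0)
pr = rng.random((3, 3, 2))
pr /= pr.sum(axis=1, keepdims=True)   # each Pr(.|i,z) sums to 1 over j
H = [(0, 1), (1, 2), (2, 2)]          # kappa = 3 observed transitions
post = pr_z_given_h(H, pr)
assert abs(post.sum() - 1.0) < 1e-9
```

The normalization in the last line of the function corresponds to the denominator of Eq. (10.2), which sums the same likelihood over all candidate patterns $z'$.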
Analysis of Transmission Connectivity

First, we calculate the probability distribution of the relative distance states between $v$ and $n$ in the next $\varsigma$ timestamps. Specifically, we initialize the probability distribution at the current timestamp (i.e., $t = 0$) as

$$\Pr_{vn}(0) = [\Pr^i_{vn}(0)]_{1 \times |\mathcal{D}|}, \quad \forall i \in \mathcal{D} \tag{10.3}$$
where $\Pr^{\varphi_0}_{vn}(0) = 1$ and $\Pr^i_{vn}(0) = 0, \forall i \in \mathcal{D} \setminus \{\varphi_0\}$. Then, the corresponding state transition matrix is represented by

$$\mathbf{Tr}_{vn} = [\mathrm{Tr}^{ij}_{vn}]_{|\mathcal{D}| \times |\mathcal{D}|}, \quad \forall i, j \in \mathcal{D} \tag{10.4}$$

where $\mathrm{Tr}^{ij}_{vn} = \sum_{z \in \mathcal{Z}} \Pr(j|i,z) \Pr(z|\mathcal{H})$, which is the transition probability of the relative distance state from $i$ to $j$. Further, the probability distribution of the relative distance state at time $\varsigma$ is computed by [10]

$$\Pr_{vn}(\varsigma) = R\Big(\Pr_{vn}(0) \times \underbrace{(\mathbf{Tr}_{vn} \times \cdots \times \mathbf{Tr}_{vn})}_{\varsigma}\Big) \tag{10.5}$$
where $R(\cdot)$ is a rescaling function constraining that $\sum_{\forall j \in \mathcal{D}} \Pr^j_{vn}(\varsigma) = 1$. Then, we formulate the probability of transmission connectivity between $v$ and $n$ at time $t$, denoted as $\Pr^{con}_{vn}(t)$, which is computed by

$$\Pr^{con}_{vn}(t) = \sum_{j \le \lfloor \beta_n / \omega \rfloor} \Pr^j_{vn}(t), \quad \forall j \in \mathcal{D} \tag{10.6}$$

where $\beta_n$ is the communication coverage of $n$. Finally, the offloading reliability between $v$ and $n$ is estimated by the minimum probability of transmission connectivity during $[0, \varsigma]$, which is computed by

$$RE_{vn} = \min_{\epsilon \in \{0, \cdots, \varsigma\}} \Pr^{con}_{vn}(\epsilon) \tag{10.7}$$
10.3.2 DNN Inference Delay Model

DNN Partitioning

Since DNN models may have different structures, such as the chain structure (e.g., AlexNet [13]), the Directed Acyclic Graph (DAG) structure (e.g., GoogLeNet [14]), and the loop structure (e.g., ResNet [3]), we classify DNN models into different types. Given $K$ types of DNN models, the numbers of layers in the feature extraction stage (i.e., convolution and pooling layers) and the classification/detection stage (i.e., fully connected layers) of the type $k$ model $(1 \le k \le K)$ are represented by $C_k$ and $F_k$, respectively. Further, we denote $i$ $(1 \le i \le C_k)$ and $j$ $(1 \le j \le F_k)$ as the layer IDs in the two stages.

First, we formulate the data size and the computation requirement of the type $k$ DNN model. For convolution/pooling layer $i$ $(1 \le i \le C_k)$, its input matrices are associated with a three-tuple $\langle H^{in}_i, W^{in}_i, c^{in}_i \rangle$, which corresponds to the values of height, width, and channel, respectively. Accordingly, its output matrices are
associated with $\langle H^{out}_i, W^{out}_i, c^{out}_i \rangle$. Then, the input data size $q_i$ of layer $i$ is computed by

$$q_i = H^{in}_i \cdot W^{in}_i \cdot c^{in}_i \cdot Q \tag{10.8}$$

where $Q$ is the memory footprint of a unit datum. Moreover, the computation requirement $s_i$ of layer $i$ is modeled by the number of floating-point operations (FLOPs) [15], which is computed by

$$s_i = 2 \cdot H^{in}_i \cdot W^{in}_i \cdot (c^{in}_i \cdot ker_i^2 + 1) \cdot c^{out}_i \tag{10.9}$$

where $ker_i$ is the kernel size of layer $i$. Similarly, for fully connected layer $j$ $(1 \le j \le F_k)$, the length values of the one-dimensional input and output arrays are denoted as $I_j$ and $O_j$, respectively. Accordingly, its input data size is computed by $q_j = I_j \cdot Q$, and the computation requirement $s_j$ is computed by

$$s_j = (2 \cdot I_j - 1) \cdot O_j \tag{10.10}$$
Then, we formulate the data portions that are partitioned and offloaded from $v \in \mathcal{V}$ to $n \in \mathcal{N}$. Specifically, we define $\langle low^0_{vn}, up^0_{vn} \rangle$, where $0 \le low^0_{vn} \le up^0_{vn} \le H^{in}_1 - 1$, to represent that the input matrices from row $low^0_{vn}$ to row $up^0_{vn}$ are partitioned and offloaded from $v$ to $n$. Accordingly, the corresponding rows of the output matrices of layer $i$ $(1 \le i \le C_k)$ are denoted by $\langle low^i_{vn}, up^i_{vn} \rangle$.

Algorithm 11.3 (fragment)
11: if tmp > Ф(Ω, b) then
12:   Ф(Ω, b) = tmp
13:   z(Ω, b) = z(Ω\{v}, b) ∪ {z_{f_v}}
14: end if
15: end for
16: end for
17: end for
11 Deep Q-Learning-Based Adaptive Multimedia Streaming
during each $[t'_{kl}, t'_{k,l+1}]$ is considered to remain the same. Based on Eq. (11.4), the wireless bandwidth consumed by $z_{f_v}$, denoted by $\phi(z_{f_v})$, is computed as follows:

$$\phi(z_{f_v}) = \frac{(c_2 - c_1 + 1) \cdot \psi_{q_v}}{E_{mv}(t'_{kl}) \cdot \Delta t} \tag{11.22}$$

where $\Delta t$ is the length of the scheduling period. The scheduling decision over a vehicle set $\Omega$ is denoted by $z(\Omega, B_m) = (z_{f_v}), \forall v \in \Omega$. Given wireless bandwidth $B_m$, the solution space of $z(\Omega)$, denoted by $\mathcal{Z}(\Omega, B_m)$, is computed as follows:

$$\mathcal{Z}(\Omega, B_m) = \Big\{ z(\Omega, B_m) \,\Big|\, \sum_{\forall z_{f_v} \in z(\Omega, B_m)} \phi(z_{f_v}) \le B_m \Big\} \tag{11.23}$$
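The bandwidth accounting of Eqs. (11.22) and (11.23) can be sketched as follows. The decision tuples and the rate values are hypothetical placeholders, and $E_{mv}$ is treated here simply as the achievable transmission rate per unit bandwidth.

```python
def consumed_bandwidth(c1, c2, psi_q, e_mv, delta_t):
    """Wireless bandwidth consumed by decision z_fv (Eq. 11.22):
    chunks c1..c2 of quality-level size psi_q are transmitted at
    per-unit-bandwidth rate e_mv within a scheduling period delta_t."""
    return (c2 - c1 + 1) * psi_q / (e_mv * delta_t)

def is_feasible(decisions, bandwidth_budget):
    """Check the solution-space constraint of Eq. (11.23)."""
    return sum(consumed_bandwidth(*d) for d in decisions) <= bandwidth_budget

# Two hypothetical decisions: (c1, c2, psi_q [Mb], e_mv [Mb/s/MHz], delta_t [s])
decisions = [(0, 2, 1.0, 2.0, 1.0), (0, 1, 2.0, 4.0, 1.0)]
assert abs(consumed_bandwidth(0, 2, 1.0, 2.0, 1.0) - 1.5) < 1e-9
assert is_feasible(decisions, bandwidth_budget=2.8)
```

Note how the same chunk range costs less bandwidth for a vehicle with a better channel (larger $E_{mv}$), which is what lets the scheduler trade quality levels against the shared budget $B_m$.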
The rationale of the benefit function is explained as follows. First, for $z_{f_v}$ that schedules a higher quality level $r(q)$ of the video chunk, we grant it a higher benefit. Second, for the decision $z_{f_v}$ that serves a vehicle $v$ with a longer freezing delay $fd(t'_{k,l})$, indicating that $v$ is more urgent to be served, a higher benefit is granted. Third, for $z_{f_v}$ that brings more available playback time $\rho_v(t'_{k,l})$ to $v$ and makes the playback smoother, a higher benefit is granted. By synthesizing the factors above, the benefit function of $z_{f_v}$ is designed as follows:

$$\Phi(z_{f_v}) = r(q) \cdot \max\big(\rho_v(t'_{k,l}), \Delta t\big) \cdot \max\big(fd(t'_{k,l}), \Delta t\big) \tag{11.24}$$

Given a vehicle set $\Omega$ and wireless bandwidth $B_m$, the benefit $\Phi(\Omega, B_m)$ of a solution $z(\Omega, B_m)$ is defined as the summation of the benefits $\Phi(z_{f_v})$ of all vehicles, $\forall v \in \Omega$, which is formulated as follows:

$$\Phi(\Omega, B_m) = \sum_{\forall z_{f_v} \in z(\Omega, B_m)} \Phi(z_{f_v}) \tag{11.25}$$

To search the optimal solution efficiently, we observe the following property of the optimal benefit.

Theorem 11.1 The optimal benefit $\Phi^*(\Omega, B_m)$ over a vehicle set $\Omega$ and a bandwidth $B_m$ is determined by the maximum over the summation of the benefit $\Phi(z_{f_v})$ of any vehicle $v \in \Omega$ and the optimal benefit over the remaining vehicle set $\Omega \setminus \{v\}$ and the remaining bandwidth $B'_m = \lfloor B_m - \phi(z_{f_v}) \rfloor$, which is formulated as Eq. (11.26).

$$\Phi^*(\Omega, B_m) = \max_{\forall z_{f_v} \in \mathcal{Z}(v, B_m)} \big\{ \Phi^*(\Omega \setminus \{v\}, B'_m) + \Phi(z_{f_v}) \big\} \tag{11.26}$$
The corresponding optimal solution is determined as follows:

$$z^*_{f_v} = \arg\max_{\forall z_{f_v} \in \mathcal{Z}(v, B_m)} \big\{ \Phi(\Omega \setminus \{v\}, B'_m) + \Phi(z_{f_v}) \big\} \tag{11.27}$$

$$z^*(\Omega, B_m) = z^*(\Omega \setminus \{v\}, B'_m) \cup z^*_{f_v} \tag{11.28}$$
Proof Assume that Eq. (11.26) is false. Then there exists an optimal benefit $\Phi^*(\Omega, B_m)$ such that, for the vehicle $v \in \Omega$ with its decision $z_{f_v} \in \mathcal{Z}(v, B_m)$, the following inequality must hold:

$$\Phi^*(\Omega, B_m) > \Phi^*(\Omega \setminus \{v\}, B'_m) + \Phi(z_{f_v}) \tag{11.29}$$

For the optimal solution $z^*(\Omega, B_m)$, we have the following observations:

$$\begin{aligned}
\Phi^*(\Omega, B_m) &= \sum_{\forall z^*_{f_v} \in z^*(\Omega, B_m)} \Phi(z^*_{f_v}) \\
&= \sum_{\forall z^*_{f_{v'}} \in z^*(\Omega \setminus \{v\}, B'_m)} \Phi(z^*_{f_{v'}}) + \Phi(z^*_{f_v}) \\
&= \Phi(\Omega \setminus \{v\}, B'_m) + \Phi(z^*_{f_v}) \\
&\le \Phi^*(\Omega \setminus \{v\}, B'_m) + \Phi(z^*_{f_v})
\end{aligned} \tag{11.30}$$

which contradicts the assumption in Eq. (11.29). ⨆⨅

Based on Theorem 11.1, the optimal solution can be searched by a dynamic programming method in an iterative way; the detailed procedure is shown in Algorithm 11.3. The bandwidth $b$ and the vehicle set $\Omega$ are initialized to 0 and $\emptyset$, respectively. For $b$ from 0 to $B_m$, the optimal benefit $\Phi(\Omega, b)$ over the vehicle set $\Omega$ is computed. Specifically, for each serving vehicle $v$, the consumed bandwidth and the benefit of each scheduling decision $z_{f_v}$ are computed based on Eqs. (11.22) and (11.24), respectively. If a better $z_{f_v}$ is found, $\Phi(\Omega, b)$ and $z(\Omega, b)$ are updated. The optimal solution $z(\Omega, B_m)$ is obtained when the iteration terminates. The complexity of the proposed AQCS is $O(BV)$, where $B$ and $V$ denote the wireless bandwidth and the number of vehicles. Since $B$ can be regarded as constant for a specific edge server, the complexity of AQCS is linear in the number of vehicles.
11.5 Performance Evaluation

11.5.1 Default Setting

The simulation model integrates a traffic simulator, a scheduling module, and an optimization module. Specifically, a real map extracted from a 4 km × 4 km core area of the High-tech District in Chengdu, China, is downloaded from OpenStreetMap. SUMO is adopted to generate real-time vehicle traces and periodically export the related traffic information to the scheduling module via the TraCI interface. Further, a scheduling module is implemented to manage the procedure of multimedia streaming in IoV, including the service submission of vehicles and the file chunk placement and transmission scheduling of edge servers. All the scheduling decisions are made by the optimization module, which runs the algorithm based on the parameters input from the scheduling module. Particularly, the deep Q-learning framework is implemented in TensorFlow 2.0, and the hyperparameters are set as follows. The discount factor $\gamma$ is set to 0.9, and the sizes of the replay memory and the minibatch are set to $10^3$ and 32, respectively. The exploration rate $\epsilon$ is initially set to 0.2 and decreases by $5 \times 10^{-5}$ per iteration until it reaches 0.01. The DQN model is trained online by periodically collecting reward information in the real-time environment. The edge server retrieves this information by overhearing the beacon messages from mobile vehicles. The model is updated iteratively within each scheduling period of chunk placement.

In the default setting, the arrival pattern of vehicles in the simulation area follows a Poisson process, and the arrival rate, denoted by $\lambda$, is set to 2880 veh/h. For each edge server $m$, its wireless bandwidth $B_m$, transmit power $P_m$, and cache capacity are randomly selected from [2.7, 2.9] MHz, [2, 3] W, and [275, 350] Mb, respectively. The channel gain is set to $g_{v,m} = l_{v,m}^{-\alpha}$, where $l_{v,m}$ is the distance between the mobile user and the edge server and $\alpha = 3$ is the path loss factor, which is commonly adopted in the literature [17]. The white Gaussian noise $N_0$ is set to $10^{-9}$. The wired transmission rates between an edge server and the cloud and between edge servers are set to 1 and 10 Mbps, respectively. Each file chunk has a 2-second playback time. The bitrate of each chunk is set to five quality levels, i.e., 186, 499, 1101, 1292, and 1898 Kbps, whose average sizes are measured as 0.5, 1.0, 2.0, 2.4, and 3.5 Mb. The number of chunks requested by each vehicle is randomly selected from the interval [100, 120]. The file number is 3 and each chunk has 5 quality levels; the total number of different chunks, computed as the product of the file number, the average chunk number of a file, and the number of quality levels, is around 1650. The corresponding service qualities are set to 1, 3, 6, 10, and 15, respectively. These parameter configurations are commonly used in the literature [13, 18]. Unless stated otherwise, the simulation is conducted under the default settings.

For performance comparison, since no existing solutions can be directly applied to the JRO problem, we implement one classical cache algorithm for cache placement and two representative adaptive-streaming algorithms for chunk transmission. For chunk placement, LFU (Least Frequently Used) is adopted, which records
the request number of each chunk and replaces the least requested chunks with the newly visited chunks in every scheduling period. The two adaptive-streaming algorithms are MDP (Markov Decision Process) [19] and RA (Rate Adaptation) [20]. The MDP establishes a model-free MDP framework and uses the Q-learning mechanism to determine the optimal quality level for the next new chunk. The RA adaptively determines whether to increase or decrease the quality level of the next unretrieved chunk by comparing the waiting time of retrieving the previous chunk with two predefined thresholds. Accordingly, we implement two comparison solutions, LFU+MDP and LFU+RA, for performance evaluation. Besides the ASQ and the AFD, which are the objectives of the JRO problem, we also define a metric to evaluate the cache performance, i.e., the average cumulative reward (ACR). Given the set of accessed file chunks $\mathcal{C}(t)$ at each time slot $t$ and the cache reward $R^c_t$ of each $c \in \mathcal{C}(t)$, the ACR(T) is defined as follows:

$$ACR(T) = \frac{\sum_{t=0}^{T} \sum_{c \in \mathcal{C}(t)} R^c_t}{|\mathcal{C}(t)| \times T} \tag{11.31}$$

This metric evaluates the cache performance of an edge server by synthesizing service quality and access delay.
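Under one natural reading of Eq. (11.31) (average the cache rewards of the chunks accessed in each slot, then average over the $T$ slots), the ACR metric can be computed as below; the per-chunk reward values are hypothetical.

```python
def average_cumulative_reward(rewards_per_slot):
    """ACR(T) per Eq. (11.31): rewards_per_slot[t] is the list of cache
    rewards R_t^c of the file chunks accessed in time slot t."""
    T = len(rewards_per_slot)
    total = 0.0
    for slot in rewards_per_slot:
        if slot:  # slots with no accessed chunks contribute zero
            total += sum(slot) / len(slot)
    return total / T

# Three scheduling slots with hypothetical per-chunk cache rewards
acr = average_cumulative_reward([[1.0, 3.0], [6.0], [10.0, 15.0, 1.0]])
assert abs(acr - (2.0 + 6.0 + 26.0 / 3) / 3) < 1e-9
```

Averaging within each slot first keeps busy slots (many accessed chunks) from dominating the metric, so ACR reflects per-access cache quality rather than raw traffic volume.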
11.5.2 Simulation Results and Analysis

Effect of Traffic Workload We test the algorithm performance under different traffic workloads. Specifically, we examine five service scenarios with different vehicle arrival rates, whose detailed traffic characteristics, including the Average Vehicle Number (AVN), the Standard Deviation of Vehicle Number (SDVN), the ADT, and the Standard Deviation of Dwelling Time (SDDT), are listed in Table 11.2. As shown, the AVN and the ADT increase from Scenario 1 to Scenario 5, which indicates that both traffic workload and dwelling time increase. Further, the SDVN and the SDDT increase dramatically, which indicates that vehicles become more dynamic as the traffic workload increases. To better exhibit the system workload under different scenarios, Fig. 11.3 shows the heat maps of the vehicle distribution. It is noted that

Table 11.2 Traffic characteristics

λ (veh/h)   AVN (veh)   SDVN (veh)   ADT (s)   SDDT (s)
1920        4.8         4.0          33.6      78.1
2400        6.1         6.9          34.1      85.9
2880        7.5         11.9         35.4      107.3
3360        10.7        40.0         40.0      301.3
3840        11.9        74.2         44.3      607.9
Fig. 11.3 The heat maps under different traffic workloads. (a) λ = 1920 veh/h. (b) λ = 2400 veh/h. (c) λ = 2880 veh/h. (d) λ = 3360 veh/h. (e) λ = 3840 veh/h
Fig. 11.4 Algorithm performance under different traffic workloads. (a) Average service quality. (b) Average freezing delay
the vehicle density varies greatly across scenarios and is highly unevenly distributed within each scenario, which demonstrates the unbalanced workload.

Figure 11.4a and b shows the ASQ and the AFD of the five algorithms under different vehicle arrival rates. As shown, the ASQ of all the algorithms decreases gradually. The AFD of LFU+RA increases dramatically, while that of the other four algorithms remains at a stable level. Specifically, compared with MAB+AQCS, DQN+AQCS achieves comparable AFD but a higher ASQ. This is validated by the ACR curves in Fig. 11.5: DQN+AQCS achieves faster convergence and a higher ACR than MAB+AQCS. Specifically, DQN+AQCS converges after 180 iterations, while MAB+AQCS fails to converge even when the simulation terminates. This demonstrates the advantage of DQN over MAB. Further, compared with DQN+MDP, DQN+AQCS achieves slightly lower AFD but a much higher ASQ. It is noted that DQN+MDP has the same convergence speed but a much lower ACR than DQN+AQCS, which indicates that the transmission strategy has a great influence on the cache performance, and that AQCS can better exploit the caching resources than MDP by balancing the quality level of received chunks and the freezing delay. In addition, compared with LFU+MDP and LFU+RA, DQN+AQCS achieves better performance in both ASQ and AFD since DQN achieves much higher cache performance than LFU. This set of simulation results demonstrates the effectiveness of the algorithm against varying traffic workloads.

Effect of Service Workload We compare the performance of the five algorithms under different numbers of requested chunks. A higher number of requested chunks indicates that more wireless bandwidth is needed to serve an individual vehicle, which burdens the system workload. Figure 11.6a and b shows the ASQ and the AFD of all the algorithms under different numbers of requested chunks, respectively. As shown, the ASQ of DQN+AQCS, MAB+AQCS, and LFU+RA decreases slowly in order to slow down the increase of the AFD. In contrast, since the
Fig. 11.5 Average cumulative cache reward of four algorithms under different traffic workloads. (a) .λ = 1920 veh/h. (b) .λ = 2400 veh/h. (c) .λ = 2880 veh/h. (d) .λ = 3360 veh/h. (e) .λ = 3840 veh/h
Fig. 11.6 Algorithm performance under different service workloads. (a) Average service quality. (b) Average freezing delay
Fig. 11.7 Algorithm performance under different cache capacities. (a) Average service quality. (b) Average freezing delay
MDP only cares about the benefit of individual vehicles, DQN+MDP and LFU+MDP prefer to improve the service quality of individual vehicles by increasing the ASQ, which results in a dramatic increase of the AFD. Benefiting from the cache performance of DQN, DQN+MDP still achieves lower AFD than LFU+MDP and LFU+RA. The RA degrades the ASQ but cannot prevent the AFD from increasing. It is also observed that DQN+AQCS achieves the best ASQ and AFD in all cases. This set of simulation results demonstrates the scalability of the proposed algorithms against varying service workloads.

Effect of Cache Size We compare the algorithm performance under different cache capacities. A higher cache capacity indicates that more file chunks with higher quality levels can be pre-cached by edge servers. As shown in Fig. 11.7a, the ASQ of DQN+AQCS and MAB+AQCS increases gradually, while that of the other three algorithms remains at the same level. According to Fig. 11.7b, the
AFD of DQN+MDP decreases gradually, while that of the other four algorithms remains at a stable level. The reason is analyzed as follows. As shown in Fig. 11.8, the cache performance of both DQN and MAB increases with the cache capacity, which indicates that DQN and MAB can learn better cache strategies with larger cache capacities; this results in the increasing ASQ of the two proposed algorithms and the decreasing AFD of DQN+MDP. However, LFU is not sensitive to the cache capacity, maintains the same strategy, and achieves low cache performance in all cases, which keeps the ASQ and the AFD of LFU+MDP and LFU+RA unchanged. Particularly, DQN always achieves fast convergence across all the scenarios. Accordingly, DQN+AQCS achieves the best ASQ and AFD compared with the other algorithms. This set of simulation results demonstrates the scalability of the proposed algorithm against varying cache capacities.

Effect of Wireless Bandwidth We compare the algorithm performance under different communication capacities. A higher wireless bandwidth indicates that more file chunks can be transmitted within the same scheduling period. Figure 11.9a and b shows the ASQ and the AFD of the five algorithms under different wireless bandwidths, respectively. As shown, the ASQ of all the algorithms increases gradually. The AFD of DQN+AQCS and MAB+AQCS remains at a preferable level, while the AFD of the other three algorithms increases. This is because MDP and RA only care about the benefit of individual vehicles without considering the fairness among vehicles. When the wireless bandwidth increases, some vehicles may occupy excessive bandwidth to increase their own service quality while aggravating the freezing delay of others, which results in an increase of the AFD. Since DQN achieves better cache performance than LFU, DQN+MDP achieves lower AFD than LFU+MDP and LFU+RA.
On the other hand, AQCS can maintain the AFD while the ASQ increases by considering the fairness among different vehicles. In particular, the proposed algorithms achieve the highest ASQ and the lowest AFD in all the cases, which demonstrates their adaptiveness against varying network conditions.
11.6 Conclusion

This chapter first presented a service architecture for ABR-based MS in heterogeneous IoV. Then, the JRO problem was formulated as a multi-objective mixed-integer nonlinear programming model to maximize the ASQ and minimize the AFD simultaneously under the constraints of cache and bandwidth capacities, which is NP-hard. On this basis, for chunk placement, we first proposed a low-overhead MAB-based algorithm, which maintains and updates the average accumulative cache reward in the Q-table periodically. To further speed up the convergence and enhance system scalability, a tailored DQN-based algorithm was proposed, where the system state, action space, and reward function are designed based on the access pattern of
Fig. 11.8 Average cumulative cache reward of four algorithms under different cache capacities. (a) 125∼200 Mb. (b) 200∼275 Mb. (c) 275∼350 Mb. (d) 350∼425 Mb. (e) 425∼500 Mb
Fig. 11.9 Algorithm performance under different communication capacities. (a) Average service quality. (b) Average freezing delay
file chunks and the attributes of edge servers. In addition, delayed reward observation and asynchronous Q-update were particularly designed for the algorithm implementation. On the other hand, for chunk transmission, the AQCS was proposed to maximize a dedicatedly designed benefit function based on dynamic programming. Finally, we built the simulation model and presented a comprehensive performance evaluation, which demonstrated the superiority of the proposed solutions.
References

1. C. Guo, D. Li, G. Zhang, M. Zhai, Real-time path planning in urban area via VANET-assisted traffic information sharing. IEEE Trans. Veh. Technol. 67, 5635–5649 (2018)
2. Y. Fu, C. Li, T.H. Luan, Y. Zhang, F.R. Yu, Graded warning for rear-end collision: an artificial intelligence-aided algorithm. IEEE Trans. Intell. Transp. Syst. 21, 565–579 (2020)
3. K. Liu, X. Xu, M. Chen, B. Liu, L. Wu, V.C.S. Lee, A hierarchical architecture for the future Internet of vehicles. IEEE Commun. Mag. 57, 41–47 (2019)
4. P. Dai, Z. Hang, K. Liu, X. Wu, H. Xing, Z. Yu, V.C.S. Lee, Multi-armed bandit learning for computation-intensive services in MEC-empowered vehicular networks. IEEE Trans. Veh. Technol. 69, 7821–7834 (2020)
5. C. Xu, W. Quan, H. Zhang, L.A. Grieco, GrIMS: green information-centric multimedia streaming framework in vehicular ad hoc networks. IEEE Trans. Circuits Syst. Video Technol. 28, 483–498 (2018)
6. B. Hu, L. Fang, X. Cheng, L. Yang, Vehicle-to-vehicle distributed storage in vehicular networks, in Proceedings of the IEEE International Conference on Communications (ICC’18) (2018), pp. 1–6
7. J. Song, M. Sheng, T.Q.S. Quek, C. Xu, X. Wang, Learning-based content caching and sharing for wireless networks. IEEE Trans. Commun. 65, 4309–4324 (2017)
8. Z. Yang, M. Li, W. Lou, CodePlay: live multimedia streaming in VANETs using symbol-level network coding, in Proceedings of the 18th IEEE International Conference on Network Protocols (2010), pp. 223–232
9. K. Liu, K. Xiao, P. Dai, V. Lee, S. Guo, J. Cao, Fog computing empowered data dissemination in software defined heterogeneous VANETs. IEEE Trans. Mobile Comput. 20(11), 3181–3193 (2020)
10. R. Wang, M. Almulla, C. Rezende, A. Boukerche, Video streaming over vehicular networks by a multiple path solution with error correction, in Proceedings of the IEEE International Conference on Communications (ICC’14) (2014), pp. 580–585
11. K. Shafiee, V. Leung, Connectivity-aware minimum-delay geographic routing with vehicle tracking in VANETs. Ad Hoc Netw. 9, 1–10 (2011)
12. A.H. Zahran, J. Quinlan, D. Raca, C.J. Sreenan, E. Halepovic, R.K. Sinha, R. Jana, V. Gopalakrishnan, OSCAR: an optimized stall-cautious adaptive bitrate streaming algorithm for mobile networks, in Proceedings of the 8th International Workshop on Mobile Video (MoVid’16) (ACM, 2016), pp. 2:1–2:6
13. Y. Guo, Q. Yang, F.R. Yu, V.C.M. Leung, Cache-enabled adaptive video streaming over vehicular networks: a dynamic approach. IEEE Trans. Veh. Technol. 67, 5445–5459 (2018)
14. P. Dai, F. Song, K. Liu, Y. Dai, P. Zhou, S. Guo, Edge intelligence for adaptive multimedia streaming in heterogeneous Internet of vehicles. IEEE Trans. Mobile Comput. 22(3), 1464–1478 (2023)
15. D. Tse, P. Viswanath, Fundamentals of Wireless Communication (Cambridge University Press, Cambridge, 2005)
16. S. Burer, A.N. Letchford, Non-convex mixed-integer nonlinear programming: a survey. Surv. Oper. Res. Manag. Sci. 17, 97–106 (2012)
17. X. Chen, L. Jiao, W. Li, X. Fu, Efficient multi-user computation offloading for mobile-edge cloud computing. IEEE/ACM Trans. Netw. 24, 2795–2808 (2016)
18. M. Rugelj, U. Sedlar, M. Volk, J. Sterle, M. Hajdinjak, A. Kos, Novel cross-layer QoE-aware radio resource allocation algorithms in multiuser OFDMA systems. IEEE Trans. Commun. 62, 3196–3208 (2014)
19. A. Bokani, M. Hassan, S. Kanhere, X. Zhu, Optimizing HTTP-based adaptive streaming in vehicular environment using Markov decision process. IEEE Trans. Multimedia 17, 2297–2309 (2015)
20. C. Liu, I. Bouazizi, M. Gabbouj, Rate adaptation for adaptive HTTP streaming, in Proceedings of the Second Annual ACM Conference on Multimedia Systems (2011), pp. 169–174
Chapter 12
A Multi-agent Multi-objective Deep Reinforcement Learning Solution for Digital Twin in Vehicular Edge Intelligence
Abstract Recent advances in sensing technologies, wireless communications, and computing paradigms drive the evolution of vehicles toward ICVs. This chapter investigates VDT via cooperative sensing and heterogeneous information fusion of vehicles, aiming at achieving the quality–cost tradeoff in VDT. First, a VDT architecture is presented, and the digital twin system is modeled to reflect the real-time status of physical vehicular environments. Second, we derive the cooperative sensing model and the V2I uploading model by considering the timeliness and consistency of the sensed information as well as the redundancy, sensing cost, and transmission cost of the system. On this basis, a bi-objective problem is formulated to maximize the system quality and minimize the system cost. Third, we propose a Multi-Agent Multi-Objective (MAMO) deep reinforcement learning model, including the design of distributed actors for storing replay experiences and a learner with a dueling critic network for evaluating the agents' actions. Finally, we give a comprehensive performance evaluation, which demonstrates that MAMO outperforms existing competitive solutions by around 2~5 times in maximizing system quality, while still saving around 18%~44% of the system cost.

Keywords Digital twin · Vehicular edge computing · Deep reinforcement learning · Cooperative sensing · Information fusion
12.1 Introduction

Recent advances in sensing technologies, wireless communications, and computing paradigms drive the development of modern ICVs. Various onboard sensors are equipped in modern vehicles to enhance their environment-sensing ability [1]. In addition, the development of V2X [2] enables cooperation among vehicles, roadside infrastructure, and the cloud. Meanwhile, VEC [3] is a promising paradigm for enabling computation-intensive and latency-critical ITS [4] applications. These advances are strong driving forces for developing the Vehicular Digital Twin (VDT). Specifically, the logical mapping of the physical entities in vehicular networks, such as vehicles, pedestrians, and roadside infrastructures, can be constructed at the edge node via cooperative sensing and heterogeneous information fusion.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. K. Liu et al., Toward Connected, Cooperative and Intelligent IoV, https://doi.org/10.1007/978-981-99-9647-6_12

Great efforts have been devoted to vehicular data dissemination via V2X communications, such as the end–edge–cloud cooperative data dissemination architecture [5] and the intent-based network control framework [6]. To improve caching efficiency, some studies proposed vehicular content caching frameworks, such as blockchain-empowered content caching [7], cooperative coding and caching scheduling [8], and edge-cooperative content caching [9]. Substantial work has studied task offloading in vehicular networks, such as real-time multi-period task offloading [10] and ADMM- and PSO-combined task offloading [11]. These studies mainly focused on scheduling algorithms for data dissemination, information caching, and task offloading in vehicular networks. However, none of them considered cooperative sensing and heterogeneous information fusion in VEC.

Prediction, planning, and control technologies have also been widely studied to enable digital twins in VEC. Different methods have been proposed for predicting vehicle status, such as vehicle tracking [12] and acceleration prediction [13]. Meanwhile, different scheduling schemes have been proposed in vehicular networks, such as physical-ratio-K interference model-based broadcast scheduling [14] and established-map-model-based path planning [15]. Other studies focused on control algorithms for intelligent vehicles, such as intersection control [16] and EV charging scheduling [17]. The above studies on status detection, trajectory prediction, path scheduling, and vehicle control facilitate the implementation of various ITS applications. Nevertheless, they assume that high-quality information is readily available in VEC, without considering the sensing and uploading cost.
With the above motivation, this chapter makes the first attempt to strike the best balance between enhancing the quality of information fusion and minimizing the cost of cooperative sensing and heterogeneous information fusion in VDT. The primary challenges are discussed as follows. First, due to the highly dynamic information in vehicular networks, it is crucial to evaluate the interrelationship among sensing frequency, queuing delay, and transmission delay to enhance information freshness. Second, it is possible that redundant or inconsistent information is sensed by different vehicles across different temporal or spatial domains. Thus, vehicles with different sensing capacities are expected to cooperate in a distributed way to enhance the utilization of sensing and communication resources. Third, the physical information is heterogeneous in terms of distribution, updating frequency, and modality, which poses significant difficulties in modeling the quality of information fusion. Fourth, a higher quality of the constructed digital twins comes with a higher cost of sensing and communication resources. To sum up, realizing high-quality and low-cost VDT via cooperative sensing and heterogeneous information fusion is crucial yet challenging [18].
The primary contributions of this chapter are summarized as follows:
• We formulate a bi-objective problem for enabling VDT, where a cooperative sensing model and a V2I uploading model are derived, and novel metrics for quantitatively evaluating system quality and cost are designed.
• We propose a Multi-Agent Multi-Objective (MAMO) deep reinforcement learning model, which determines the sensing objects, sensing frequencies, uploading priorities, and transmission power of vehicles, as well as the V2I bandwidth allocation of edge nodes. The model includes distributed actors that interact with the environment and store interaction experiences, and a learner with a dueling critic network that evaluates the actions of vehicles and edge nodes.
• We give a comprehensive performance evaluation by implementing three representative algorithms, namely Random Allocation (RA), Distributed Distributional Deep Deterministic Policy Gradient (D4PG), and Multi-Agent D4PG (MAD4PG). The simulation results demonstrate that the proposed MAMO significantly outperforms existing solutions under different scenarios.
12.2 System Architecture

In this section, we present a cooperative sensing and heterogeneous information fusion architecture for VDT. As shown in Fig. 12.1, the architecture can be abstracted into two layers, i.e., the physical vehicular environment and the logical views constructed by edge nodes. In particular, edge nodes such as 5G stations and roadside units (e.g., $e_1 \sim e_5$) are installed at the roadside. Vehicles can communicate with edge nodes within their radio coverage via V2I communications. Vehicles are equipped with various onboard sensors, such as ultrasonic sensors, LiDAR, optical cameras, and mmWave radars. The heterogeneous information, including the status of vehicles, road users, parking lots, and roadside infrastructures, can be collaboratively sensed by vehicles via onboard sensors. Each vehicle may sense different information depending on its location and sensing capacity. The sensed information is queued at each vehicle for uploading to the edge node, and each vehicle determines the sensing frequencies and uploading priorities of this information. The edge node allocates V2I bandwidth to vehicles with uploading tasks. The edge node constructs the logical view by mapping the received physical information to the corresponding logical elements based on the requirements of specific ITS applications. Clearly, the physical information in vehicular networks is highly dynamic and temporally–spatially correlated. Meanwhile, the sensing vehicles have heterogeneous capacities and limited resources, and vehicular communications are intermittent and unreliable. High-quality logical views can better reflect the real-time physical vehicular environment and thus enhance ITS performance. Nevertheless, constructing high-quality logical views may come with a higher sensing frequency, a larger amount of uploaded information, and higher energy consumption. Therefore, it is critical to have a tailored metric to quantitatively
Fig. 12.1 Cooperative sensing and heterogeneous information fusion in VDT
evaluate the quality of the logical views constructed by the edge node, so as to measure the overall VDT performance effectively. The system characteristics are summarized as follows. First, the real-time status of different physical entities is sensed by vehicles and queued up for uploading. Second, the edge node allocates V2I bandwidth to vehicles, and on this basis, each vehicle determines its transmission power. Third, the digital twins of physical entities are modeled based on the fusion of the heterogeneous information received from vehicles. Note that in such a system, the heterogeneous information is sensed by vehicles with different sensing frequencies, resulting in different freshness when uploading. Although increasing the sensing frequency may improve freshness, it may also prolong the queuing latency and cost more energy. In addition, the information on a specific physical entity may be sensed by multiple vehicles, which wastes communication resources if it is uploaded by all of them. Furthermore, due to the constraints of transmission power and V2I bandwidth, it is important to allocate the communication resources efficiently and economically to improve resource utilization. As mentioned above, it is critical and challenging to quantitatively
measure the quality and cost of the digital twins constructed at edge nodes and to design an efficient and economical scheduling mechanism for cooperative sensing and heterogeneous information fusion that maximizes the system quality and minimizes the system cost in VDT. Furthermore, we give an example to better illustrate the idea. As shown in Fig. 12.1, a logical view is constructed in edge node $e_1$ at time $t$ to enable the speed advisory application at the intersection, based on the information sensed and uploaded by vehicles $s_1$, $s_2$, and $s_3$. In general, the goal of such an application is to advise an optimal speed to the vehicles approaching the intersection, so that vehicles can pass smoothly and the overall traffic efficiency is maximized. Suppose vehicles $s_2$ and $s_3$ can sense the traffic light information, but the values are not consistent at time $t$. For example, $s_2$ observes 17 s remaining of the red light, whereas $s_3$ observes 16 s, resulting in information inconsistency. On the other hand, note that the status of the same physical element (e.g., the location of pedestrian P1) might be sensed by multiple vehicles simultaneously (e.g., $s_1$ and $s_2$). In such a case, it only needs to be uploaded by one of the vehicles (e.g., vehicle $s_1$) at a certain time to save V2I bandwidth. As long as the physical elements are modeled at the edge node with the same quality level, the view can serve different applications without the information being repeatedly uploaded by different vehicles. Moreover, packet loss may cause a gap between the physical environment and the view. For example, suppose the packet for $s_2$'s location update is lost, which results in a significant inconsistency between its true location and its modeled location at time $t$.
As illustrated above, it is critical yet challenging to quantitatively measure the quality of views constructed at edge nodes and design an effective scheduling mechanism for cooperative sensing and information fusion to maximize the overall quality and minimize the cost of VDT.
12.3 Quality–Cost Tradeoff Problem

12.3.1 Notations

The set of discrete time slots is denoted by $\mathbb{T} = \{1, 2, \cdots, t, \cdots, T\}$, where $T$ is the number of time slots. Let $\mathbb{D}$ denote the set of heterogeneous information, and each information $d \in \mathbb{D}$ is characterized by a three-tuple $d = (type_d, u_d, |d|)$, where $type_d$, $u_d$, and $|d|$ are the type, updating interval, and data size, respectively. We denote by $\mathbb{S}$ the set of vehicles, and each vehicle $s \in \mathbb{S}$ is characterized by a three-tuple $s = (l_s^t, \mathbb{D}_s, \pi_s)$, where $l_s^t$, $\mathbb{D}_s$, and $\pi_s$ are the location, sensed information set, and transmission power, respectively. For each information $d \in \mathbb{D}_s$, the sensing cost (i.e., energy consumption) in vehicle $s$ is denoted by $\phi_{d,s}$. Let $\mathbb{E}$ denote the set of edge nodes. Each edge node $e \in \mathbb{E}$ is characterized by a three-tuple $e = (l_e, r_e, b_e)$, where $l_e$, $r_e$, and $b_e$ are the location, communication range, and bandwidth, respectively. The distance between vehicle $s$ and edge node $e$ is
260
12 A Multi-agent Multi-objective DRL Solution for DT in Vehicular Edge Intelligence
denoted by $dist_{s,e} \triangleq distance(l_s^t, l_e), \forall s \in \mathbb{S}, \forall e \in \mathbb{E}, \forall t \in \mathbb{T}$, where $distance(\cdot, \cdot)$ is the Euclidean distance. The set of vehicles within the radio coverage of edge node $e$ at time $t$ is denoted by $\mathbb{S}_e^t = \{s \mid dist_{s,e} \le r_e, \forall s \in \mathbb{S}\}$, $\mathbb{S}_e^t \subseteq \mathbb{S}$. The sensing decision indicator, indicating whether information $d$ is sensed by vehicle $s$ at time $t$, is denoted by

$$c_{d,s}^t \in \{0, 1\}, \quad \forall d \in \mathbb{D}_s, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \tag{12.1}$$

Then, the set of information sensed by vehicle $s$ at time $t$ is denoted by $\mathbb{D}_s^t = \{d \mid c_{d,s}^t = 1, \forall d \in \mathbb{D}_s\}$, $\mathbb{D}_s^t \subseteq \mathbb{D}_s$. The information types are distinct for any information $d \in \mathbb{D}_s^t$, i.e., $type_{d^*} \ne type_d, \forall d^* \in \mathbb{D}_s^t \setminus \{d\}, \forall d \in \mathbb{D}_s^t$. The sensing frequency of information $d$ in vehicle $s$ at time $t$ is denoted by $\lambda_{d,s}^t$, which should meet the requirement of the sensing ability of vehicle $s$:

$$\lambda_{d,s}^t \in [\lambda_{d,s}^{min}, \lambda_{d,s}^{max}], \quad \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \tag{12.2}$$

where $\lambda_{d,s}^{min}$ and $\lambda_{d,s}^{max}$ are the minimum and maximum sensing frequencies for information $d$ in vehicle $s$, respectively. The uploading priority of information $d$ in vehicle $s$ at time $t$ is denoted by $p_{d,s}^t$, and we have

$$p_{d^*,s}^t \ne p_{d,s}^t, \quad \forall d^* \in \mathbb{D}_s^t \setminus \{d\}, \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \tag{12.3}$$

where $p_{d^*,s}^t$ is the uploading priority of information $d^* \in \mathbb{D}_s^t$. The transmission power of vehicle $s$ at time $t$ is denoted by $\pi_s^t$, and it cannot exceed the power capacity of vehicle $s$:

$$\pi_s^t \in [0, \pi_s], \quad \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \tag{12.4}$$

The V2I bandwidth allocated by edge node $e$ to vehicle $s$ at time $t$ is denoted by $b_{s,e}^t$, and we have

$$b_{s,e}^t \in [0, b_e], \quad \forall s \in \mathbb{S}_e^t, \forall e \in \mathbb{E}, \forall t \in \mathbb{T} \tag{12.5}$$

The total V2I bandwidth allocated by edge node $e$ cannot exceed its capacity $b_e$, i.e., $\sum_{\forall s \in \mathbb{S}_e^t} b_{s,e}^t \le b_e, \forall t \in \mathbb{T}$.

Denote a physical entity, such as a vehicle, pedestrian, or roadside infrastructure, by $v'$, and denote the set of physical entities in VEC by $\mathbb{V}'$. $\mathbb{D}_{v'}$ is the set of information associated with entity $v'$ and can be represented by $\mathbb{D}_{v'} = \{d \mid y_{d,v'} = 1, \forall d \in \mathbb{D}\}, \forall v' \in \mathbb{V}'$, where $y_{d,v'}$ is a binary indicator of whether information $d$ is associated with entity $v'$. The size of $\mathbb{D}_{v'}$ is denoted by $|\mathbb{D}_{v'}|$. Each entity may require multiple pieces of information, i.e., $|\mathbb{D}_{v'}| = \sum_{\forall d \in \mathbb{D}} y_{d,v'} \ge 1, \forall v' \in \mathbb{V}'$. For each entity $v' \in \mathbb{V}'$, there may be a digital twin $v$ modeled in the edge node. We denote the set of digital twins by $\mathbb{V}$, and the set of digital twins modeled in edge node $e$ at time $t$ is denoted by $\mathbb{V}_e^t$. Therefore, the set
of information received by edge node $e$ and required by digital twin $v$ can be represented by $\mathbb{D}_{v,e}^t = \bigcup_{\forall s \in \mathbb{S}} \left( \mathbb{D}_{v'} \cap \mathbb{D}_{s,e}^t \right), \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E}$, and $|\mathbb{D}_{v,e}^t|$ is the number of pieces of information received by edge node $e$ and required by digital twin $v$, which is computed by $|\mathbb{D}_{v,e}^t| = \sum_{\forall s \in \mathbb{S}} \sum_{\forall d \in \mathbb{D}_s} c_{d,s}^t y_{d,v'}$.
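To make the notation concrete, the tuples above can be mirrored in code. The following Python sketch is illustrative only: the class and field names are ours, not from the chapter, and only the coverage test for $\mathbb{S}_e^t$ is implemented.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Information:
    """d = (type_d, u_d, |d|): type, updating interval, and data size."""
    info_type: str
    update_interval: float
    size: float

@dataclass
class Vehicle:
    """s = (l_s^t, D_s, pi_s): location, sensed information set, power cap."""
    location: tuple
    sensed: tuple
    power: float

@dataclass
class EdgeNode:
    """e = (l_e, r_e, b_e): location, communication range, bandwidth."""
    location: tuple
    comm_range: float
    bandwidth: float

def in_coverage(vehicle: Vehicle, edge: EdgeNode) -> bool:
    """Membership test for S_e^t: Euclidean distance dist_{s,e} <= r_e."""
    return math.dist(vehicle.location, edge.location) <= edge.comm_range
```

For instance, a vehicle at the origin lies inside the coverage of an edge node at (3, 4) with a 5-unit range, but outside one with a 4.9-unit range.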
12.3.2 Cooperative Sensing Model

The cooperative sensing is modeled based on a multi-class M/G/1 priority queue [19]. Assume the uploading time $\hat{g}_{d,s,e}^t$ of information with $type_d$ follows a class of general distributions with mean $\alpha_{d,s}^t$ and variance $\beta_{d,s}^t$. Then, the uploading workload $\rho_s^t$ in vehicle $s$ is represented by $\rho_s^t = \sum_{\forall d \in \mathbb{D}_s^t} \lambda_{d,s}^t \alpha_{d,s}^t$. According to the principle of the multi-class M/G/1 priority queue, $\rho_s^t < 1$ is required for the queue to reach its steady state. The arrival time of information $d$ before time $t$ is denoted by $a_{d,s}^t$, which is computed by

$$a_{d,s}^t = \left\lfloor t \lambda_{d,s}^t \right\rfloor / \lambda_{d,s}^t \tag{12.6}$$

The updating time of information $d$ before time $t$, denoted by $u_{d,s}^t$, is computed by

$$u_{d,s}^t = \left\lfloor a_{d,s}^t / u_d \right\rfloor u_d \tag{12.7}$$

where $u_d$ is the updating interval of information $d$.

The set of elements with higher uploading priority than $d$ in vehicle $s$ at time $t$ is denoted by $\mathbb{D}_{d,s}^t = \{d^* \mid p_{d^*,s}^t > p_{d,s}^t, \forall d^* \in \mathbb{D}_s^t\}$, where $p_{d^*,s}^t$ is the uploading priority of information $d^* \in \mathbb{D}_s^t$. Thus, the uploading workload ahead of information $d$ (i.e., the amount of elements to be uploaded before $d$ by vehicle $s$ at time $t$) is computed by

$$\rho_{d,s}^t = \sum_{\forall d^* \in \mathbb{D}_{d,s}^t} \lambda_{d^*,s}^t \alpha_{d^*,s}^t \tag{12.8}$$

where $\lambda_{d^*,s}^t$ and $\alpha_{d^*,s}^t$ are the sensing frequency and the mean transmission time of information $d^*$ in vehicle $s$ at time $t$, respectively. According to the Pollaczek–Khintchine formula [20], the queuing time of information $d$ in vehicle $s$ is calculated by

$$q_{d,s}^t = \frac{1}{1 - \rho_{d,s}^t} \left[ \alpha_{d,s}^t + \frac{\lambda_{d,s}^t \beta_{d,s}^t + \sum_{\forall d^* \in \mathbb{D}_{d,s}^t} \lambda_{d^*,s}^t \beta_{d^*,s}^t}{2 \left( 1 - \rho_{d,s}^t - \lambda_{d,s}^t \alpha_{d,s}^t \right)} - \alpha_{d,s}^t \right] \tag{12.9}$$
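Equations (12.6)–(12.9) can be traced numerically. The sketch below implements them directly; the function names and test values are illustrative, and the queuing formula is transcribed as printed in Eq. (12.9).

```python
import math

def arrival_time(t, lam):
    """a_{d,s}^t = floor(t * lambda) / lambda (Eq. 12.6): the latest sensing
    instant of information d no later than time t, at sensing frequency lambda."""
    return math.floor(t * lam) / lam

def updating_time(a, u_d):
    """u_{d,s}^t = floor(a / u_d) * u_d (Eq. 12.7): the latest source update
    no later than the sensing instant a, with updating interval u_d."""
    return math.floor(a / u_d) * u_d

def queuing_time(lam, alpha, beta, higher):
    """Queuing time of information d (Eq. 12.9, Pollaczek-Khintchine form).
    `higher` lists (lambda*, alpha*, beta*) for the higher-priority classes
    in D_{d,s}^t; alpha is the mean and beta the variance of uploading time."""
    rho_ahead = sum(l * a for l, a, _ in higher)   # Eq. 12.8
    assert rho_ahead + lam * alpha < 1.0, "steady state requires rho < 1"
    residual = lam * beta + sum(l * b for l, _, b in higher)
    inner = alpha + residual / (2.0 * (1.0 - rho_ahead - lam * alpha)) - alpha
    return inner / (1.0 - rho_ahead)
```

For example, at $t = 10.7$ with $\lambda = 2$ Hz, the latest sample was taken at $a = 10.5$; if the source updates every $u_d = 4$ s, that sample carries the update issued at time 8.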
12.3.3 V2I Uploading Model

The V2I uploading is modeled based on the channel fading distribution and an SNR threshold. The SNR of V2I communications between vehicle $s$ and edge node $e$ at time $t$ is computed by [21]

$$SNR_{s,e}^t = \frac{1}{N_0} \left| h_{s,e} \right|^2 \tau \, dist_{s,e}^{-\varphi} \pi_s^t \tag{12.10}$$

where $N_0$ is the additive white Gaussian noise, $h_{s,e}$ is the channel fading gain, $\tau$ is a constant that depends on the antenna design, and $\varphi$ is the path loss exponent. Assume that $|h_{s,e}|^2$ follows a class of distributions with mean $\mu_{s,e}$ and variance $\sigma_{s,e}$, which is represented by

$$\tilde{p} = \left\{ P : \mathbb{E}_P \left[ \left| h_{s,e} \right|^2 \right] = \mu_{s,e}, \; \mathbb{E}_P \left[ \left( \left| h_{s,e} \right|^2 - \mu_{s,e} \right)^2 \right] = \sigma_{s,e} \right\} \tag{12.11}$$

The transmission reliability is measured by the probability that a successful transmission occurs beyond a reliability threshold:

$$\inf_{P \in \tilde{p}} \Pr\nolimits_P \left( SNR_{s,e}^t \ge SNR_{s,e}^{tgt} \right) \ge \delta \tag{12.12}$$

where $SNR_{s,e}^{tgt}$ and $\delta$ are the target SNR threshold and the reliability threshold, respectively.

The set of information uploaded by vehicles and received by edge node $e$ is denoted by $\mathbb{D}_{s,e}^t = \bigcup_{\forall s \in \mathbb{S}_e^t} \mathbb{D}_s^t$. According to Shannon theory, the transmission rate of V2I communications between vehicle $s$ and edge node $e$ at time $t$, denoted by $z_{s,e}^t$, is computed by

$$z_{s,e}^t = b_{s,e}^t \log_2 \left( 1 + SNR_{s,e}^t \right) \tag{12.13}$$

Thus, the duration for transmitting information $d$ from vehicle $s$ to edge node $e$, denoted by $g_{d,s,e}^t$, is computed by

$$g_{d,s,e}^t = \inf_{j \in \mathbb{R}^+} \left\{ j : \int_{k_{d,s}^t}^{k_{d,s}^t + j} z_{s,e}^t \, dt \ge |d| \right\} \tag{12.14}$$

where $k_{d,s}^t = t + q_{d,s}^t$ is the moment when vehicle $s$ starts to transmit information $d$.
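As a sanity check on Eqs. (12.10), (12.13), and (12.14), the following sketch computes the SNR, the Shannon rate, and the resulting transmission duration, under the simplifying assumption that the rate stays constant over the transmission; all numeric parameter values are illustrative placeholders.

```python
import math

def snr(power, gain_sq, dist, n0=1e-9, tau=1.0, phi=2.0):
    """SNR = |h|^2 * tau * dist^(-phi) * power / N0 (Eq. 12.10)."""
    return gain_sq * tau * dist ** (-phi) * power / n0

def v2i_rate(bandwidth, snr_value):
    """z = b * log2(1 + SNR) (Eq. 12.13), in bits per second."""
    return bandwidth * math.log2(1.0 + snr_value)

def transmit_duration(size_bits, rate):
    """Eq. 12.14 asks for the smallest j such that the integral of z over
    [k, k+j] reaches |d|; for a constant rate this reduces to |d| / z."""
    return size_bits / rate
```

With the placeholder values, 1 μW at a distance of 10 units yields an SNR of 10, so a unit-bandwidth link carries $\log_2 11 \approx 3.46$ bits per second.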
12.3.4 Quality of Digital Twin

First, since the digital twins are modeled based on continuously uploaded and time-varying information, we define the timeliness of information $d$ as follows.

Definition 12.1 (Timeliness of information $d$) The timeliness $\theta_{d,s} \in \mathbb{Q}^+$ of information $d$ in vehicle $s$ is defined as the duration between the updating and the receiving of information $d$.

$$\theta_{d,s} = a_{d,s}^t + q_{d,s}^t + g_{d,s,e}^t - u_{d,s}^t, \quad \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S} \tag{12.15}$$

Definition 12.2 (Timeliness of digital twin $v$) The timeliness $\Theta_v \in \mathbb{Q}^+$ of digital twin $v$ is defined as the sum of the maximum timeliness of the information associated with physical entity $v'$.

$$\Theta_v = \sum_{\forall s \in \mathbb{S}_e^t} \max_{\forall d \in \mathbb{D}_{v'} \cap \mathbb{D}_s^t} \theta_{d,s}, \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.16}$$

Second, as different types of information have different sensing frequencies and uploading priorities, we define the consistency of a digital twin to measure the consistency of the information associated with the same physical entity.

Definition 12.3 (Consistency of digital twin $v$) The consistency $\Psi_v \in \mathbb{Q}^+$ of digital twin $v$ is defined as the maximum difference between information updating times.

$$\Psi_v = \max_{\forall d \in \mathbb{D}_{v,e}^t, \forall s \in \mathbb{S}_e^t} u_{d,s}^t - \min_{\forall d \in \mathbb{D}_{v,e}^t, \forall s \in \mathbb{S}_e^t} u_{d,s}^t, \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.17}$$

Finally, we give the formal definition of the quality of digital twin, synthesizing the timeliness and consistency of the digital twin.

Definition 12.4 (Quality of digital twin (QDT)) The quality of digital twin $QDT_v \in (0, 1)$ is defined as a weighted average of the normalized timeliness and normalized consistency of digital twin $v$.

$$QDT_v = w_1 (1 - \hat{\Theta}_v) + w_2 (1 - \hat{\Psi}_v), \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.18}$$

where $\hat{\Theta}_v \in (0, 1)$ and $\hat{\Psi}_v \in (0, 1)$ denote the normalized timeliness and normalized consistency, respectively, which can be obtained by rescaling the timeliness and consistency into $(0, 1)$ via min–max normalization. The weighting factors for $\hat{\Theta}_v$ and $\hat{\Psi}_v$ are denoted by $w_1$ and $w_2$, respectively, which can be tuned based on the requirements of different ITS applications, and we have $w_1 + w_2 = 1$.
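A minimal numerical reading of Definitions 12.3 and 12.4, assuming the timeliness and consistency inputs have already been min–max normalized; the function names are ours.

```python
def consistency(update_times):
    """Psi_v (Eq. 12.17): spread between the newest and the oldest
    updating times of the information fused into twin v."""
    return max(update_times) - min(update_times)

def qdt(norm_timeliness, norm_consistency, w1=0.5, w2=0.5):
    """QDT_v = w1*(1 - Theta_hat) + w2*(1 - Psi_hat) (Eq. 12.18)."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * (1.0 - norm_timeliness) + w2 * (1.0 - norm_consistency)
```

In the intersection example of Sect. 12.2, the traffic light updating times of 16 s and 17 s give a consistency spread of 1 s before normalization.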
12.3.5 Cost of Digital Twin

First, because the status of the same physical entity might be sensed by multiple vehicles simultaneously, we define the redundancy of information $d$ as follows.

Definition 12.5 (Redundancy of information $d$) The redundancy $\xi_d \in \mathbb{N}$ of information $d$ is defined as the number of additional pieces of information with $type_d$ sensed by vehicles.

$$\xi_d = \left| \mathbb{D}_{d,v,e} \right| - 1, \quad \forall d \in \mathbb{D}_{v'}, \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.19}$$

where $\mathbb{D}_{d,v,e}$ is the set of information with $type_d$ received by edge node $e$ and required by digital twin $v$, which is represented by

$$\mathbb{D}_{d,v,e} = \left\{ d^* \mid type_{d^*} = type_d, \forall d^* \in \mathbb{D}_{v,e}^t \right\} \tag{12.20}$$

Definition 12.6 (Redundancy of digital twin $v$) The redundancy $\Xi_v \in \mathbb{N}$ of digital twin $v$ is defined as the total redundancy of information in digital twin $v$.

$$\Xi_v = \sum_{\forall d \in \mathbb{D}_{v'}} \xi_d, \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.21}$$

Second, information sensing and transmission consume the energy of vehicles, and we define the sensing cost and the transmission cost of digital twin $v$ as follows.

Definition 12.7 (Sensing cost of digital twin $v$) The sensing cost $\Phi_v \in \mathbb{Q}^+$ of digital twin $v$ is defined as the total sensing cost of the information required by digital twin $v$.

$$\Phi_v = \sum_{\forall s \in \mathbb{S}_e^t} \sum_{\forall d \in \mathbb{D}_{v'} \cap \mathbb{D}_s^t} \phi_{d,s}, \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.22}$$

where $\phi_{d,s}$ is the sensing cost of information $d$ in vehicle $s$.

Definition 12.8 (Transmission cost of information $d$) The transmission cost $\omega_{d,s} \in \mathbb{Q}^+$ of information $d$ in vehicle $s$ is defined as the transmission power consumed during the information uploading.

$$\omega_{d,s} = \pi_s^t g_{d,s,e}^t, \quad \forall d \in \mathbb{D}_s^t \tag{12.23}$$

where $\pi_s^t$ and $g_{d,s,e}^t$ are the transmission power and transmission time, respectively.

Definition 12.9 (Transmission cost of digital twin $v$) The transmission cost $\Omega_v \in \mathbb{Q}^+$ of digital twin $v$ is defined as the total transmission cost of the information required by digital twin $v$.

$$\Omega_v = \sum_{\forall s \in \mathbb{S}_e^t} \sum_{\forall d \in \mathbb{D}_{v'} \cap \mathbb{D}_s^t} \omega_{d,s}, \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.24}$$

Finally, we give the formal definition of the cost of digital twin, synthesizing the redundancy, sensing cost, and transmission cost.

Definition 12.10 (Cost of digital twin (CDT)) The cost of digital twin $CDT_v \in (0, 1)$ is defined as a weighted average of the normalized redundancy, normalized sensing cost, and normalized transmission cost of digital twin $v$.

$$CDT_v = w_3 \hat{\Xi}_v + w_4 \hat{\Phi}_v + w_5 \hat{\Omega}_v, \quad \forall v \in \mathbb{V}_e^t, \forall e \in \mathbb{E} \tag{12.25}$$

where $\hat{\Xi}_v \in (0, 1)$, $\hat{\Phi}_v \in (0, 1)$, and $\hat{\Omega}_v \in (0, 1)$ denote the normalized redundancy, normalized sensing cost, and normalized transmission cost of digital twin $v$, respectively. The weighting factors for $\hat{\Xi}_v$, $\hat{\Phi}_v$, and $\hat{\Omega}_v$ are denoted by $w_3$, $w_4$, and $w_5$, respectively. Similarly, we have $w_3 + w_4 + w_5 = 1$.
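Definitions 12.5–12.10 combine analogously on the cost side. The sketch below counts per-type redundancy and forms the weighted cost; the inputs are assumed already normalized, and the helper names are ours.

```python
from collections import Counter

def redundancy(received_types):
    """Xi_v (Eqs. 12.19, 12.21): every copy of an information type beyond
    the first one received for twin v is redundant."""
    counts = Counter(received_types)
    return sum(c - 1 for c in counts.values())

def cdt(norm_redundancy, norm_sensing, norm_transmission,
        w3=1/3, w4=1/3, w5=1/3):
    """CDT_v = w3*Xi_hat + w4*Phi_hat + w5*Omega_hat (Eq. 12.25)."""
    assert abs(w3 + w4 + w5 - 1.0) < 1e-9, "weights must sum to 1"
    return w3 * norm_redundancy + w4 * norm_sensing + w5 * norm_transmission
```

For example, receiving the traffic light status twice and the pedestrian location once yields a redundancy of 1.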
12.3.6 Problem Formulation

We define the system quality and the system cost as follows.

Definition 12.11 (System quality) The system quality $Q \in (0, 1)$ is defined as the average QDT over all digital twins modeled in edge nodes during the scheduling period $\mathbb{T}$.

$$Q = \frac{\sum_{\forall t \in \mathbb{T}} \sum_{\forall e \in \mathbb{E}} \sum_{\forall v \in \mathbb{V}_e^t} QDT_v}{\sum_{\forall t \in \mathbb{T}} \sum_{\forall e \in \mathbb{E}} \left| \mathbb{V}_e^t \right|} \tag{12.26}$$

Definition 12.12 (System cost) The system cost $C \in (0, 1)$ is defined as the average CDT over all digital twins modeled in edge nodes during the scheduling period $\mathbb{T}$.

$$C = \frac{\sum_{\forall t \in \mathbb{T}} \sum_{\forall e \in \mathbb{E}} \sum_{\forall v \in \mathbb{V}_e^t} CDT_v}{\sum_{\forall t \in \mathbb{T}} \sum_{\forall e \in \mathbb{E}} \left| \mathbb{V}_e^t \right|} \tag{12.27}$$
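Both system-level metrics are plain averages over twin instances, so a single helper covers Eqs. (12.26) and (12.27); the dictionary layout below is an illustrative choice.

```python
def system_average(per_twin_values):
    """Q or C: sum of per-twin QDT (or CDT) values over all slots and edge
    nodes, divided by the total number of twin instances (Eqs. 12.26, 12.27).
    `per_twin_values` maps (t, e) -> list of per-twin values in V_e^t."""
    total = sum(sum(vals) for vals in per_twin_values.values())
    count = sum(len(vals) for vals in per_twin_values.values())
    return total / count
```

For instance, twins scoring 0.8 and 0.6 in slot 1 and 1.0 in slot 2 at the same edge node average to a system quality of 0.8.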
Given a solution $x = (\mathbb{C}, \Lambda, \mathbb{P}, \Pi, \mathbb{B})$, where $\mathbb{C}$ denotes the determined sensing information, $\Lambda$ denotes the determined sensing frequencies, $\mathbb{P}$ denotes the determined uploading priorities, $\Pi$ denotes the determined transmission power, and $\mathbb{B}$ denotes the determined V2I bandwidth allocation:

$$\begin{cases} \mathbb{C} = \{c_{d,s}^t \mid \forall d \in \mathbb{D}_s, \forall s \in \mathbb{S}, \forall t \in \mathbb{T}\} \\ \Lambda = \{\lambda_{d,s}^t \mid \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S}, \forall t \in \mathbb{T}\} \\ \mathbb{P} = \{p_{d,s}^t \mid \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S}, \forall t \in \mathbb{T}\} \\ \Pi = \{\pi_s^t \mid \forall s \in \mathbb{S}, \forall t \in \mathbb{T}\} \\ \mathbb{B} = \{b_{s,e}^t \mid \forall s \in \mathbb{S}_e^t, \forall e \in \mathbb{E}, \forall t \in \mathbb{T}\} \end{cases} \tag{12.28}$$

where $c_{d,s}^t$, $\lambda_{d,s}^t$, and $p_{d,s}^t$ are the sensing decision, sensing frequency, and uploading priority of information $d$ in vehicle $s$ at time $t$, respectively, and $\pi_s^t$ and $b_{s,e}^t$ are the transmission power and V2I bandwidth of vehicle $s$ at time $t$, respectively. We formulate the quality–cost tradeoff problem aiming to maximize the system quality and minimize the system cost simultaneously, which is expressed by

$$\begin{aligned} \mathbf{P1}: \; & \max_x Q, \; \min_x C \\ \text{s.t.} \; & C1: c_{d,s}^t \in \{0, 1\}, \forall d \in \mathbb{D}_s, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \\ & C2: \lambda_{d,s}^t \in [\lambda_{d,s}^{min}, \lambda_{d,s}^{max}], \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \\ & C3: p_{d^*,s}^t \ne p_{d,s}^t, \forall d^* \in \mathbb{D}_s^t \setminus \{d\}, \forall d \in \mathbb{D}_s^t, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \\ & C4: \pi_s^t \in [0, \pi_s], \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \\ & C5: b_{s,e}^t \in [0, b_e], \forall s \in \mathbb{S}_e^t, \forall e \in \mathbb{E}, \forall t \in \mathbb{T} \\ & C6: \textstyle\sum_{\forall d \in \mathbb{D}_s^t} \lambda_{d,s}^t \alpha_{d,s}^t < 1, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \\ & C7: \inf_{P \in \tilde{p}} \Pr\nolimits_P \left( SNR_{s,e}^t \ge SNR_{s,e}^{tgt} \right) \ge \delta, \forall s \in \mathbb{S}, \forall t \in \mathbb{T} \\ & C8: \textstyle\sum_{\forall s \in \mathbb{S}_e^t} b_{s,e}^t \le b_e, \forall e \in \mathbb{E}, \forall t \in \mathbb{T} \end{aligned} \tag{12.29}$$
where $C1 \sim C5$ indicate the ranges of the decision variables, $C6$ guarantees the steady state of the queue, $C7$ guarantees the transmission reliability, and $C8$ requires that the total V2I bandwidth allocated by edge node $e$ cannot exceed its capacity $b_e$. According to the definition of CDT, we define the profit of digital twin as follows.

Definition 12.13 (Profit of digital twin (PDT)) The profit of digital twin $PDT_v \in (0, 1)$ is defined as the complement of the CDT of digital twin $v$.

$$PDT_v = 1 - CDT_v \tag{12.30}$$

Then, we define the system profit as follows.
Definition 12.14 (System profit) The system profit $P \in (0, 1)$ is defined as the average PDT over all digital twins modeled in edge nodes during the scheduling period $\mathbb{T}$.

$$P = \frac{\sum_{\forall t \in \mathbb{T}} \sum_{\forall e \in \mathbb{E}} \sum_{\forall v \in \mathbb{V}_e^t} PDT_v}{\sum_{\forall t \in \mathbb{T}} \sum_{\forall e \in \mathbb{E}} \left| \mathbb{V}_e^t \right|} \tag{12.31}$$

Thus, problem $\mathbf{P1}$ can be rewritten as follows:

$$\mathbf{P2}: \max_x (Q, P) \quad \text{s.t.} \; C1 \sim C8 \tag{12.32}$$
12.4 Proposed Algorithm

In this section, we propose a multi-agent multi-objective deep reinforcement learning model, as shown in Fig. 12.2. Specifically, the learner consists of four neural networks, i.e., a local policy network, a local critic network, a target policy network, and a target critic network, whose parameters for vehicles are denoted by $\theta_S^\mu$, $\theta_S^Q$, $\theta_S^{\mu'}$, and $\theta_S^{Q'}$, respectively. Similarly, the parameters of the four networks for the edge node are denoted by $\theta_E^\mu$, $\theta_E^Q$, $\theta_E^{\mu'}$, and $\theta_E^{Q'}$, respectively. The parameters of the local policy and local critic networks are randomly initialized. The parameters
Fig. 12.2 The multi-agent multi-objective deep reinforcement learning model comprises K distributed actors, a learner, and a replay buffer. The distributed actors interact independently with the environment and store their interaction experiences in the replay buffer. Within the learner, the dueling critic network is designed to evaluate the agent action by integrating the state value and the action advantage compared to n random actions
of the target policy and target critic networks are initialized to those of the corresponding local networks. $K$ distributed actors are launched to interact with the environment and store replay experiences. Each actor consists of a local vehicle policy network and a local edge policy network, denoted by $\theta_{S,k}^\mu$ and $\theta_{E,k}^\mu$, respectively, which are replicated from the local policy networks of the learner. The replay buffer $\mathbb{B}$ with maximum size $|\mathbb{B}|$ is initialized to store replay experiences.
12.4.1 Multi-agent Distributed Policy Execution

In MAMO, vehicles and the edge node determine their actions via the local policy networks in a distributed way. The local observation of the system state in vehicle $s$ at time $t$ is represented by

$$o_s^t = \left\{ t, s, l_s^t, \mathbb{D}_s, \Phi_s, \mathbb{D}_e^t, \mathbb{D}_{\mathbb{V}_e^t}, w^t \right\} \tag{12.33}$$

where $t$ is the time slot index, $s$ is the vehicle index, $l_s^t$ is the location of vehicle $s$, $\mathbb{D}_s$ represents the set of information that can be sensed by vehicle $s$, $\Phi_s$ represents the sensing cost of the information in $\mathbb{D}_s$, $\mathbb{D}_e^t$ represents the set of cached information in edge node $e$ at time $t$, $\mathbb{D}_{\mathbb{V}_e^t}$ represents the set of information required by the digital twins modeled in edge node $e$ at time $t$, and $w^t$ represents the weight vector for each objective, which is randomly generated in each iteration. In particular, $w^t = \left[ w^{(1),t} \; w^{(2),t} \right]$, where $w^{(1),t} \in (0, 1)$ and $w^{(2),t} \in (0, 1)$ are the weights of the system quality and system profit, respectively, and we have $\sum_{\forall j \in \{1,2\}} w^{(j),t} = 1$. On the other hand, the local observation of the system state in edge node $e$ at time $t$ is represented by

$$o_e^t = \left\{ t, e, Dist_{S,e}, \mathbb{D}_1, \cdots, \mathbb{D}_s, \cdots, \mathbb{D}_S, \mathbb{D}_e^t, \mathbb{D}_{\mathbb{V}_e^t}, w^t \right\} \tag{12.34}$$

where $e$ is the edge node index and $Dist_{S,e}$ represents the set of distances between vehicles and edge node $e$. Therefore, the system state at time $t$ can be represented by $o^t = o_e^t \cup o_1^t \cup \ldots \cup o_s^t \cup \ldots \cup o_S^t$. The action of vehicle $s$ is represented by

$$a_s^t = \left\{ \mathbb{C}_s^t, \{\lambda_{d,s}^t, p_{d,s}^t \mid \forall d \in \mathbb{D}_s^t\}, \pi_s^t \right\} \tag{12.35}$$

where $\mathbb{C}_s^t$ is the sensing decision, $\lambda_{d,s}^t$ and $p_{d,s}^t$ are the sensing frequency and uploading priority of information $d \in \mathbb{D}_s^t$, respectively, and $\pi_s^t$ is the transmission power of vehicle $s$ at time $t$. The actions of vehicles are generated by the local vehicle policy network based on their local observations of the system state:

$$a_s^t = \mu_S \left( o_s^t \mid \theta_S^\mu \right) + \epsilon_s N_s^t \tag{12.36}$$
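Equation (12.36) adds scaled exploration noise to the deterministic policy output. A minimal sketch, where the policy is any callable returning a normalized action vector; the clipping range [-1, 1] and Gaussian noise model are our assumptions.

```python
import random

def noisy_action(policy, obs, eps, noise_std=0.1, rng=None):
    """a_s^t = mu_S(o_s^t | theta) + eps * N (Eq. 12.36): deterministic
    policy output plus Gaussian exploration noise N scaled by the
    exploration constant eps, clipped to a normalized action range."""
    rng = rng or random.Random(0)
    return [max(-1.0, min(1.0, x + eps * rng.gauss(0.0, noise_std)))
            for x in policy(obs)]
```

With eps = 0 the policy output is returned unchanged, which is useful for deterministic evaluation runs.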
where $N_s^t$ is an exploration noise to increase the diversity of vehicle actions and $\epsilon_s$ is an exploration constant for vehicle $s$. The set of vehicle actions is denoted by $a_S^t = \{a_s^t \mid \forall s \in \mathbb{S}\}$. Then, the action of edge node $e$ is represented by

$$a_e^t = \left\{ b_{s,e}^t \mid \forall s \in \mathbb{S}_e^t \right\} \tag{12.37}$$

where $b_{s,e}^t$ is the V2I bandwidth allocated by edge node $e$ to vehicle $s$ at time $t$. Similarly, the action of edge node $e$ can be obtained by the local edge policy network based on the system state as well as the actions of vehicles:

$$a_e^t = \mu_E \left( o_e^t, a_S^t \mid \theta_E^\mu \right) + \epsilon_e N_e^t \tag{12.38}$$

where $N_e^t$ and $\epsilon_e$ are the exploration noise and exploration constant for edge node $e$, respectively. Furthermore, the joint action of the vehicles and the edge node is denoted by $a^t = \{a_e^t, a_1^t, \ldots, a_s^t, \ldots, a_S^t\}$. The environment obtains the system reward vector by executing the joint action, which is represented by

$$r^t = \left[ r^{(1)} \left( a_S^t, a_e^t \mid o^t \right) \; r^{(2)} \left( a_S^t, a_e^t \mid o^t \right) \right]^T \tag{12.39}$$

where $r^{(1)}(a_S^t, a_e^t \mid o^t)$ and $r^{(2)}(a_S^t, a_e^t \mid o^t)$ are the rewards of the two objectives (i.e., the achieved system quality and system profit), respectively, which can be computed by

$$\begin{cases} r^{(1)} \left( a_S^t, a_e^t \mid o^t \right) = \frac{1}{|\mathbb{V}_e^t|} \sum_{\forall v \in \mathbb{V}_e^t} QDT_v \\ r^{(2)} \left( a_S^t, a_e^t \mid o^t \right) = \frac{1}{|\mathbb{V}_e^t|} \sum_{\forall v \in \mathbb{V}_e^t} PDT_v \end{cases} \tag{12.40}$$

Accordingly, the reward of vehicle $s$ in the $j$-th objective is obtained by difference reward (DR) [22] based reward assignment, which is the difference between the system reward and the reward achieved without its action and can be represented by

$$r_s^{(j),t} = r^{(j)} \left( a_S^t, a_e^t \mid o^t \right) - r^{(j)} \left( a_{S-s}^t, a_e^t \mid o^t \right), \quad \forall j \in \{1, 2\} \tag{12.41}$$

where $r^{(j)}(a_{S-s}^t, a_e^t \mid o^t)$ is the system reward achieved without the contribution of vehicle $s$, which can be obtained by setting a null action set for vehicle $s$. The reward vector of vehicle $s$ at time $t$ is denoted by $r_s^t$ and represented by $r_s^t = \left[ r_s^{(1),t} \; r_s^{(2),t} \right]^T$. The set of difference rewards for the vehicles is denoted by $r_S^t = \{r_s^t \mid \forall s \in \mathbb{S}\}$.
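The difference-reward assignment of Eq. (12.41) can be sketched as below; `system_reward` is any callable scoring a joint action, and representing the null action by dropping the vehicle's entry is our convention, not the chapter's.

```python
def difference_reward(system_reward, state, vehicle_actions, edge_action, s):
    """r_s^(j),t = r^(j)(a_S, a_e | o) - r^(j)(a_{S-s}, a_e | o) (Eq. 12.41):
    vehicle s's marginal contribution, obtained by re-scoring the system
    with s's action set to null (here: removed from the action dict)."""
    full = system_reward(state, vehicle_actions, edge_action)
    without_s = {k: a for k, a in vehicle_actions.items() if k != s}
    return full - system_reward(state, without_s, edge_action)
```

A vehicle whose action adds nothing to the system reward thus receives a zero difference reward, which is exactly the credit-assignment property DR is chosen for.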
On the other hand, the system reward is further transformed into a normalized reward for the edge node via min–max normalization. The reward of edge node $e$ in the $j$-th objective at time $t$ is computed by

$$r_e^{(j),t} = \frac{r^{(j)} \left( a_S^t, a_e^t \mid o^t \right) - \min_{\forall a_e^{t'}} r^{(j)} \left( a_S^t, a_e^{t'} \mid o^t \right)}{\max_{\forall a_e^{t'}} r^{(j)} \left( a_S^t, a_e^{t'} \mid o^t \right) - \min_{\forall a_e^{t'}} r^{(j)} \left( a_S^t, a_e^{t'} \mid o^t \right)} \tag{12.42}$$

where $\min_{\forall a_e^{t'}} r^{(j)}(a_S^t, a_e^{t'} \mid o^t)$ and $\max_{\forall a_e^{t'}} r^{(j)}(a_S^t, a_e^{t'} \mid o^t)$ are the minimum and maximum of the system reward achieved with the unchanged vehicle actions $a_S^t$ under the same system state $o^t$, respectively. The reward vector of edge node $e$ at time $t$ is denoted by $r_e^t$ and represented by $r_e^t = \left[ r_e^{(1),t} \; r_e^{(2),t} \right]^T$. The interaction experiences, including the system state $o^t$, vehicle actions $a_S^t$, edge action $a_e^t$, vehicle rewards $r_S^t$, edge reward $r_e^t$, weights $w^t$, and next system state $o^{t+1}$, are stored in the replay buffer $\mathbb{B}$. The interaction continues until the training process of the learner is completed.
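Equation (12.42) normalizes the edge reward against alternative edge actions with the vehicle actions held fixed. In practice, the min and max must be searched over some set of edge actions; the finite candidate list below is our simplification of that search.

```python
def normalized_edge_reward(reward_fn, state, vehicle_actions,
                           edge_action, candidates):
    """Eq. 12.42: min-max normalization of the system reward of the chosen
    edge action against the rewards of candidate edge actions a_e^t',
    with the vehicle actions a_S^t unchanged."""
    scores = [reward_fn(state, vehicle_actions, a) for a in candidates]
    lo, hi = min(scores), max(scores)
    r = reward_fn(state, vehicle_actions, edge_action)
    return (r - lo) / (hi - lo) if hi > lo else 0.0
```

The degenerate case where all candidates score equally is mapped to 0 here; the chapter does not specify this corner case, so that choice is an assumption.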
12.4.2 Multi-objective Policy Evaluation

In this section, we propose the dueling critic network (DCN) to evaluate the agent action based on the value of the state and the advantage of the action. The DCN contains two fully connected networks, namely, the action-advantage (AA) network and the state-value (SV) network. The parameters of the AA network for the vehicles and the edge node are denoted by $\theta_S^A$ and $\theta_E^A$, respectively. Similarly, the parameters of the SV network for the vehicles and the edge node are denoted by $\theta_S^V$ and $\theta_E^V$, respectively. We denote the output scalar of the AA network with the input of vehicle $s$ by $A_S(o_s^m, a_s^m, a_{S-s}^m, w^m \mid \theta_S^A)$, where $a_{S-s}^m$ denotes the actions of the other vehicles. Similarly, the output scalar of the AA network with the input of edge node $e$ is denoted by $A_E(o_e^m, a_e^m, a_S^m, w^m \mid \theta_E^A)$, where $a_S^m$ denotes the actions of all vehicles. The output scalar of the SV network of vehicle $s$ is denoted by $V_S(o_s^m, w^m \mid \theta_S^V)$. Similarly, the output scalar of the SV network of edge node $e$ is denoted by $V_E(o_e^m, w^m \mid \theta_E^V)$.

The agent action evaluation consists of three steps. First, the AA network estimates the advantage function by outputting the advantage of the agent action based on the observation, action, and weights. Second, the SV network estimates the value function by outputting the value of the state according to the observation and weights. Third, an aggregating module outputs a single value to evaluate the action based on the advantage of the action and the value of the state. Specifically, $N$ actions
are randomly generated and replaced with the agent action in the AA network to evaluate the average action value of random actions. We denote the n-th random action of vehicle s and edge node e by .asm,n and .aem,n , respectively. Therefore, the advantage ⎛ of the n-th random action ⎞ of vehicle ⎛ s and edge node e can⎞be represented A m | θ A and .A m m,n m m , w by .AS osm , asm,n , a m E oe , ae , a S , w | θE , respectively. S S−s The aggregating module of the Q function is constructed by evaluating the advantage of the agent action over the average advantage of random actions. Thus, the action values of vehicle .s ∈ S and edge node e are computed by ⎛ ⎞ Q m QS osm , asm , a m S−s , w | θS ⎛ ⎞ ⎛ ⎞ m m V m m m m A o + A o , w | θ , a , a , w | θ = V S S s s s S S S−s . ⎛ ⎞ 1 Σ m A − AS osm , asm,n , a m S−s , w | θS N
(12.43)
∀n
⎛ ⎞ Q m QE oem , aem , a m , w | θ S E ⎛ ⎞ ⎛ ⎞ m m V m m m m A o + A o , w | θ , a , a , w | θ = V E E e e e E E S . ⎛ ⎞ 1 Σ m A − AE oem , aem,n , a m , w | θ E S N
(12.44)
∀n
where $\theta_S^Q$ and $\theta_E^Q$ contain the parameters of the corresponding AA and SV networks:

$$\theta_S^Q = \left(\theta_S^A, \theta_S^V\right), \quad \theta_S^{Q'} = \left(\theta_S^{A'}, \theta_S^{V'}\right)$$
$$\theta_E^Q = \left(\theta_E^A, \theta_E^V\right), \quad \theta_E^{Q'} = \left(\theta_E^{A'}, \theta_E^{V'}\right) \tag{12.45}$$
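The aggregation in (12.43)–(12.44) can be illustrated numerically. The following is a minimal NumPy sketch, not the chapter's implementation: `state_value` and `advantage` are hypothetical stand-ins for the trained SV and AA networks, and the observation, action, and weight vectors are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def state_value(obs, w):
    """Hypothetical SV network: scalar value of the observed state."""
    return float(np.dot(obs, w))

def advantage(obs, action, w):
    """Hypothetical AA network: scalar advantage of an action."""
    return float(np.dot(obs, w) * action.sum())

def q_value(obs, action, w, n_random=8):
    """Dueling aggregation of Eq. (12.43): V plus the advantage of the
    agent action, minus the average advantage of N random actions."""
    v = state_value(obs, w)
    a = advantage(obs, action, w)
    # Substitute N randomly generated actions for the agent action.
    random_actions = rng.uniform(-1.0, 1.0, size=(n_random, action.size))
    baseline = np.mean([advantage(obs, ra, w) for ra in random_actions])
    return v + a - baseline

obs = np.array([0.5, 0.2])
w = np.array([0.7, 0.3])
act = np.array([0.1, -0.4])
print(q_value(obs, act, w))
```

With many random actions the baseline approaches the mean advantage under the sampling distribution, so the aggregated value centers the agent's advantage around zero, as in the standard dueling architecture.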
12.4.3 Network Learning and Updating

A minibatch of $M$ transitions is sampled from the replay buffer $B$ to train the policy and critic networks of the vehicles and the edge node, where each transition is denoted by $\left(o_S^m, o_e^m, w^m, a_S^m, a_e^m, r_S^m, r_e^m, o_S^{m+1}, o_e^{m+1}, w^{m+1}\right)$. The target value of vehicle $s$ is denoted by

$$y_s^m = r_s^m\left(w^m\right) + \gamma Q'_S\left(o_s^{m+1}, a_s^{m+1}, a_{S-s}^{m+1}, w^{m+1} \mid \theta_S^{Q'}\right) \tag{12.46}$$

where $Q'_S\left(o_s^{m+1}, a_s^{m+1}, a_{S-s}^{m+1}, w^{m+1} \mid \theta_S^{Q'}\right)$ is the action value generated by the target vehicle critic network, $\gamma$ is the discount factor, $a_{S-s}^{m+1}$ is the set of next actions of the vehicles other than vehicle $s$, i.e., $a_{S-s}^{m+1} = \{a_1^{m+1}, \ldots, a_{s-1}^{m+1}, a_{s+1}^{m+1}, \ldots, a_S^{m+1}\}$, and $a_s^{m+1}$ is the next action of vehicle $s$ generated by the target vehicle policy network based on the local observation of the next system state, i.e., $a_s^{m+1} = \mu'_S\left(o_s^{m+1} \mid \theta_S^{\mu'}\right)$. Similarly, the target value of edge node $e$ is denoted by

$$y_e^m = r_e^m\left(w^m\right) + \gamma Q'_E\left(o_e^{m+1}, a_e^{m+1}, a_S^{m+1}, w^{m+1} \mid \theta_E^{Q'}\right) \tag{12.47}$$
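The one-step targets in (12.46)–(12.47) bootstrap the weighted reward with the target critic's evaluation of the next state and the target policy's next action. A minimal sketch, with the target networks replaced by hypothetical plain functions (`target_critic`, `target_policy`, and the discount value are assumptions, not the chapter's trained networks):

```python
import numpy as np

GAMMA = 0.95  # discount factor gamma (assumed value)

def target_critic(next_obs, next_action, w):
    """Hypothetical stand-in for the target critic Q'(o', a', w' | theta^Q')."""
    return float(np.dot(next_obs, w) + 0.1 * next_action)

def target_policy(next_obs):
    """Hypothetical stand-in for the target policy mu'(o' | theta^mu')."""
    return float(np.tanh(next_obs.mean()))

def td_target(reward, next_obs, w):
    """Eq. (12.46): y = r(w) + gamma * Q'(o', mu'(o'), w')."""
    next_action = target_policy(next_obs)  # next action from the target policy
    return reward + GAMMA * target_critic(next_obs, next_action, w)

y = td_target(reward=1.0, next_obs=np.array([0.2, 0.4]), w=np.array([0.5, 0.5]))
print(y)
```

The key point the sketch shows is that the next action inside the target is produced by the *target* policy network, not the local one, which stabilizes the bootstrapped value.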
where $Q'_E\left(o_e^{m+1}, a_e^{m+1}, a_S^{m+1}, w^{m+1} \mid \theta_E^{Q'}\right)$ denotes the action value generated by the target edge critic network, $a_S^{m+1}$ is the set of next vehicle actions, and $a_e^{m+1}$ denotes the next edge action, which is obtained by the target edge policy network based on its local observation of the next system state, i.e., $a_e^{m+1} = \mu'_E\left(o_e^{m+1}, a_S^{m+1} \mid \theta_E^{\mu'}\right)$. The loss functions of the vehicle critic network and the edge critic network are obtained by temporal difference (TD) learning and are represented by

$$L\left(\theta_S^Q\right) = \frac{1}{M} \sum_m \frac{1}{S} \sum_s Y_s^m \tag{12.48}$$

$$L\left(\theta_E^Q\right) = \frac{1}{M} \sum_m Y_e^m \tag{12.49}$$

where $Y_s^m$ and $Y_e^m$ are the squares of the difference between the target value and the action value generated by the local critic network for vehicle $s$ and edge node $e$, respectively:

$$Y_s^m = \left(y_s^m - Q_S\left(o_s^m, a_s^m, a_{S-s}^m, w^m \mid \theta_S^Q\right)\right)^2 \tag{12.50}$$

$$Y_e^m = \left(y_e^m - Q_E\left(o_e^m, a_e^m, a_S^m, w^m \mid \theta_E^Q\right)\right)^2 \tag{12.51}$$
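Equations (12.48)–(12.51) amount to a mean-squared TD error averaged over the minibatch (and, for the vehicle critic, over the vehicles). A NumPy sketch, assuming the targets $y$ and critic outputs $Q$ have already been computed (the arrays below are toy values):

```python
import numpy as np

def critic_loss(targets, q_values):
    """Eqs. (12.48)-(12.51): average the squared TD error
    Y = (y - Q)^2 over all M sampled transitions (and all vehicles)."""
    targets = np.asarray(targets, dtype=float)
    q_values = np.asarray(q_values, dtype=float)
    return float(np.mean((targets - q_values) ** 2))

# M = 2 transitions, S = 3 vehicles: rows index transitions, columns vehicles.
y = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.5, 2.5]])
q = np.array([[0.8, 2.2, 2.9],
              [0.5, 1.0, 2.0]])
print(critic_loss(y, q))
```

For the edge critic, the same function applies to 1-D arrays of length $M$, matching (12.49).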
The vehicle and edge policy network parameters are updated via the deterministic policy gradient:

$$\nabla_{\theta_S^\mu} J\left(\theta_S^\mu\right) \approx \frac{1}{M} \sum_m \frac{1}{S} \sum_s P_s^m \tag{12.52}$$

$$\nabla_{\theta_E^\mu} J\left(\theta_E^\mu\right) \approx \frac{1}{M} \sum_m P_e^m \tag{12.53}$$

where

$$P_s^m = \nabla_{a_s^m} Q_S\left(o_s^m, a_s^m, a_{S-s}^m, w^m \mid \theta_S^Q\right) \nabla_{\theta_S^\mu} \mu_S\left(o_s^m \mid \theta_S^\mu\right) \tag{12.54}$$

$$P_e^m = \nabla_{a_e^m} Q_E\left(o_e^m, a_e^m, a_S^m, w^m \mid \theta_E^Q\right) \nabla_{\theta_E^\mu} \mu_E\left(o_e^m, a_S^m \mid \theta_E^\mu\right) \tag{12.55}$$
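The deterministic policy gradient in (12.52)–(12.55) chains the critic's gradient with respect to the action with the policy's gradient with respect to its parameters. A one-dimensional, single-agent sketch with a hypothetical linear policy $\mu(o \mid \theta) = \theta o$ and a hypothetical quadratic critic $Q(a) = -(a - 0.5)^2$ (neither is the chapter's network; they just make the chain rule and the ascent direction visible):

```python
def policy(theta, obs):
    """Hypothetical linear deterministic policy mu(o | theta) = theta * o."""
    return theta * obs

def dq_da(action):
    """Gradient of the hypothetical critic Q(a) = -(a - 0.5)^2 w.r.t. a."""
    return -2.0 * (action - 0.5)

def dmu_dtheta(obs):
    """Gradient of the policy output w.r.t. theta (here simply o)."""
    return obs

def policy_gradient(theta, observations):
    """Eqs. (12.52)/(12.54): batch average of
    grad_a Q(o, a)|_{a = mu(o)} * grad_theta mu(o | theta)."""
    grads = [dq_da(policy(theta, o)) * dmu_dtheta(o) for o in observations]
    return sum(grads) / len(grads)

# Gradient ascent on theta drives mu(o) toward the critic's optimum a = 0.5.
theta, lr = 0.0, 0.1
obs_batch = [1.0, 1.0]
for _ in range(200):
    theta += lr * policy_gradient(theta, obs_batch)
print(theta)
```

Because the policy is deterministic, no log-probability term appears; the action gradient from the critic is propagated directly into the policy parameters, exactly the structure of $P_s^m$ and $P_e^m$.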
The local policy and critic network parameters are updated with the learning rates $\alpha$ and $\beta$, respectively. In particular, the vehicles and the edge node update the parameters of the target networks periodically, i.e., when $t \bmod t_{tgt} = 0$, where $t_{tgt}$ is the parameter updating period of the target networks.

$$\theta_S^{\mu'} \leftarrow n_S \theta_S^\mu + \left(1 - n_S\right) \theta_S^{\mu'}, \quad \theta_S^{Q'} \leftarrow n_S \theta_S^Q + \left(1 - n_S\right) \theta_S^{Q'}$$
$$\theta_E^{\mu'} \leftarrow n_E \theta_E^\mu + \left(1 - n_E\right) \theta_E^{\mu'}, \quad \theta_E^{Q'} \leftarrow n_E \theta_E^Q + \left(1 - n_E\right) \theta_E^{Q'} \tag{12.56}$$

with $n_S$