Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen University of Dortmund, Germany Madhu Sudan Massachusetts Institute of Technology, MA, USA Demetri Terzopoulos New York University, NY, USA Doug Tygar University of California, Berkeley, CA, USA Moshe Y. Vardi Rice University, Houston, TX, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany
3794
Xiaohua Jia Jie Wu Yanxiang He (Eds.)
Mobile Ad-hoc and Sensor Networks First International Conference, MSN 2005 Wuhan, China, December 13-15, 2005 Proceedings
Volume Editors Xiaohua Jia City University of Hong Kong, Department of Computer Science Tat Chee Avenue, Kowloon Tong, Hong Kong SAR E-mail: [email protected] Jie Wu Florida Atlantic University Department of Computer Science and Engineering Boca Raton, FL 33431, USA E-mail: [email protected] Yanxiang He Wuhan University, Computer School Wuhan, Hubei 430072, P.R. China E-mail: [email protected]
Library of Congress Control Number: 2005937083
CR Subject Classification (1998): E.3, C.2, F.2, H.4, D.4.6, K.6.5
ISSN 0302-9743
ISBN-10 3-540-30856-3 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-30856-0 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springeronline.com © Springer-Verlag Berlin Heidelberg 2005 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 11599463 06/3142 543210
Preface
MSN 2005, the First International Conference on Mobile Ad-hoc and Sensor Networks, was held during December 13-15, 2005, in Wuhan, China. The conference provided a forum for researchers and practitioners to exchange research results and share development experiences. MSN 2005 attracted 512 submissions (including the submissions to the Modeling and Security in Next Generation Mobile Information Systems (MSNG) workshop), among which 100 papers were accepted for the conference and 12 papers were accepted for the workshop. We would like to thank the International Program Committee for their valuable time and effort in reviewing the papers. Special thanks go to the conference PC Vice-chairs, Ivan Stojmenovic, Jang-Ping Sheu and Jianzhong Li, for their help in assembling the International PC and coordinating the review process. We would also like to thank the Workshop Chair, Dongchun Lee, for organizing the workshop. We would like to express our gratitude to the invited speakers, Laxmi Bhuyan, Lionel Ni and Taieb Znati, for their insightful speeches. Finally, we would like to thank the Local Organization Chair, Chuanhe Huang, for making all the local arrangements for the conference.
December 2005
Xiaohua Jia Jie Wu Yanxiang He
Organization
Steering Co-chairs Lionel Ni, Hong Kong University of Science and Technology, HKSAR Jinnan Liu, Wuhan University, PRC
General Co-chairs Taieb Znati, University of Pittsburgh, USA Yanxiang He, Wuhan University, PRC
Program Co-chairs Jie Wu, Florida Atlantic University, USA Xiaohua Jia, City University of Hong Kong, HKSAR
Program Vice Chairs Ivan Stojmenovic, University of Ottawa, Canada Jang-Ping Sheu, National Central University, Taiwan Jianzhong Li, Harbin Institute of Technology, PRC
Publicity Co-chairs Makoto Takizawa, Tokyo Denki University, Japan Weifa Liang, The Australian National University, Australia Jiannong Cao, Hong Kong Polytechnic University, HKSAR
Publications Chair Hai Liu, City University of Hong Kong, HKSAR
Local Organization Chair Chuanhe Huang, Wuhan University, PRC
Program Committee Amiya Nayak (University of Ottawa, Canada) Cai Wentong (Nanyang Technological University, Singapore) Chih-Yung Chang (Tamkang University, Taiwan) Christophe Jelger (University of Basel, Switzerland) Cho-Li Wang (The University of Hong Kong, HKSAR) Chonggang Wang (University of Arkansas, USA) Chonggun Kim (Yeungnam University, Korea) Chun-Hung Richard Lin (National Sun Yat-Sen University, Taiwan) Chung-Ta King (National Tsing Hua University, Taiwan) David Simplot-Ryl (University of Lille, France) Duan-Shin Lee (National Tsing Hua University, Taiwan) Dong Xuan (Ohio State University, USA) Eric Fleury (INRIA, France) Fei Dai (North Dakota State University, USA) Geyong Min (University of Bradford, UK) Guangbin Fan (The University of Mississippi, USA) Guihai Chen (Nanjing University, PRC) Guojun Wang (Central South University, PRC) Han-Chieh Chao (National Dong Hwa University, Taiwan) Hong Gao (Harbin Institute of Technology, PRC) Hongyi Wu (University of Louisiana at Lafayette, USA) Hsiao-Kuang Wu (National Central University, Taiwan) Hu Xiaodong (Chinese Academy of Science, PRC) Ingrid Moerman (Ghent University, Belgium) Isabelle Guerin Lassous (INRIA, France) Isabelle Simplot-Ryl (University of Lille, France) Jaideep Srivastava (University of Minnesota, USA) Jean Carle (University of Lille, France) Jehn-Ruey Jiang (National Central University, Taiwan) Jiang (Linda) Xie (The University of North Carolina at Charlotte, USA) Jie Li (University of Tsukuba, Japan) Jiangliang Xu (Baptist University of Hong Kong, HKSAR) Jiangnong Cao (Hong Kong Polytechnic University, HKSAR) Jianping Wang (Georgia Southern University, USA) Justin Lipman (Alcatel Shanghai Bell, PRC) Jyh-Cheng Chen (National Tsing Hua University, Taiwan) Kuei-Ping Shih (Tamkang University, Taiwan) Kui Wu (University of Victoria, Canada) Lars Staalhagen (Technical University of Denmark, Denmark) Li-Der Chou (National Central University, Taiwan) Li Xiao (Michigan State University, USA) 
Liansheng Tan (Central China Normal University, PRC) Mei Yang (UNLV, USA) Min Song (Old Dominion University, USA)
Minglu Li (Shanghai Jiao Tong University) Natalija Vlajic (York University, Toronto, Canada) Ning Li (University of Illinois, USA) Pedro M. Ruiz (University of Murcia, Spain) Qingfeng Huang (Palo Alto Research Center, Palo Alto, USA) Rong-Hong Jan (National Chiao Tung University, Taiwan) Ruay-Shiung Chang (National Dong Hwa University, Taiwan) Scott Huang (City University of Hong Kong, HKSAR) Sunghyun Choi Seoul (National University, South Korea) Suprakash Datta (York University, Canada) Timothy Shih (Tamkang University, Taiwan) Tracy Camp (Colorado School of Mines, Golden, USA) Vojislav Misic (University of Manitoba, Canada) Wan Pengjun (Illinois Ins. of Technology, USA) Weifa Liang (The Australian National University, Australia) Weijia Jia (City University of Hong Kong, HKSAR) Weili Wu (University of Texas at Dallas, USA) Wenzhan Song (Washington State University, Vancouver, USA) Winston Seah (Institute for Infocomm Research, Singapore) Wu Zhang (Shanghai University, PRC) Xiao Bin (Hong Kong Polytechnic University, HKSAR) Xiao Chen (Texas State University, USA) Xiaobo Zhou (University of Colorado at Colorado Springs, USA) Xiaoyan Maggie Cheng (University of Missouri, Rolla, USA) Xiuzhen Susan Cheng (George Washington University, USA) Xuemin Shen (University of Waterloo, Canada) Xueyan Tang (Nanyang Technological University, Singapore) Yang Xiao (The University of Memphis, USA) Yao-Nan Lien (National Chengchi University, Taiwan) Yen-Wen Chen (National Central University, Taiwan) Yongbing Zhang (University of Tsukuba, Japan) Yu-Chee Tseng (National Chiao Tung University, Taiwan) Yuh-Shyan Chen (National Chung Cheng University, Taiwan) Yu Wang (University of North Carolina, Charlotte, USA) Zhen Jiang (West Chester University, USA)
Table of Contents

An Overlapping Communication Protocol Using Improved Time-Slot Leasing for Bluetooth WPANs
Yuh-Shyan Chen, Yun-Wei Lin, Chih-Yung Chang . . . . . . . . . . . . . . . . . 1

Full-Duplex Transmission on the Unidirectional Links of High-Rate Wireless PANs
Seung Hyong Rhee, Wangjong Lee, WoongChul Choi, Kwangsue Chung, Jang-Yeon Lee, Jin-Woong Cho . . . . . . . . . . . . . . . . . 11

Server Supported Routing: A Novel Architecture and Protocol to Support Inter-vehicular Communication
Ritun Patney, S.K. Baton, Nick Filer . . . . . . . . . . . . . . . . . 21

Energy-Efficient Aggregate Query Evaluation in Sensor Networks
Zhuoyuan Tu, Weifa Liang . . . . . . . . . . . . . . . . . 31

Data Sampling Control and Compression in Sensor Networks
Jinbao Li, Jianzhong Li . . . . . . . . . . . . . . . . . 42

Worst and Best Information Exposure Paths in Wireless Sensor Networks
Bang Wang, Kee Chaing Chua, Wei Wang, Vikram Srinivasan . . . . . . . . . . . . . . . . . 52

Cost Management Based Secure Framework in Mobile Ad Hoc Networks
RuiJun Yang, Qi Xia, QunHua Pan, WeiNong Wang, MingLu Li . . . . . . . . . . . . . . . . . 63

Efficient and Secure Password Authentication Schemes for Low-Power Devices
Kee-Won Kim, Jun-Cheol Jeon, Kee-Young Yoo . . . . . . . . . . . . . . . . . 73

Improving IP Address Autoconfiguration Security in MANETs Using Trust Modelling
Shenglan Hu, Chris J. Mitchell . . . . . . . . . . . . . . . . . 83

On-Demand Anycast Routing in Mobile Ad Hoc Networks
Jidong Wu . . . . . . . . . . . . . . . . . 93
MLMH: A Novel Energy Efficient Multicast Routing Algorithm for WANETs Sufen Zhao, Liansheng Tan, Jie Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
ZBMRP: A Zone Based Multicast Routing Protocol for Mobile Ad Hoc Networks Jieying Zhou, Simeng Wang, Jing Deng, Hongda Feng . . . . . . . . . . . . . . 113 A Survey of Intelligent Information Processing in Wireless Sensor Network Xiaohua Dai, Feng Xia, Zhi Wang, Youxian Sun . . . . . . . . . . . . . . . . . . 123 Minimum Data Aggregation Time Problem in Wireless Sensor Networks Xujin Chen, Xiaodong Hu, Jianming Zhu . . . . . . . . . . . . . . . . . . . . . . . . . 133 Mobility-Pattern Based Localization Update Algorithms for Mobile Wireless Sensor Networks Mohammad Y. Al-laho, Min Song, Jun Wang . . . . . . . . . . . . . . . . . . . . . 143 Accurate Time Synchronization for Wireless Sensor Networks Hongli Xu, Liusheng Huang, Yingyu Wan, Ben Xu . . . . . . . . . . . . . . . . . 153 A Service Discovery Protocol for Mobile Ad Hoc Networks Based on Service Provision Groups and Their Dynamic Reconfiguration Xuefeng Bai, Tomoyuki Ohta, Yoshiaki Kakuda, Atsushi Ito . . . . . . . . . 164 Population Estimation for Resource Inventory Applications over Sensor Networks Jiun-Long Huang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Segmented Broadcasting and Distributed Caching for Mobile Wireless Environments Anup Mayank, Chinya V. Ravishankar . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 Range Adjustment for Broadcast Protocols with a Realistic Radio Transceiver Energy Model in Short-Range Wireless Networks Jialiang Lu, Fabrice Valois, Dominique Barthel . . . . . . . . . . . . . . . . . . . . 197 Reliable Gossip-Based Broadcast Protocol in Mobile Ad Hoc Networks Guojun Wang, Dingzhu Lu, Weijia Jia, Jiannong Cao . . . . . . . . . . . . . . 207 An Energy Consumption Estimation Model for Disseminating Query in Sensor Networks Guilin Li, Jianzhong Li, Longjiang Guo . . . . . . . . . . . . . . . . . . . . . . . . . . 219 EasiSOC: Towards Cheaper and Smaller Xi Huang, Ze Zhao, Li Cui . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 229 Deployment Issues in Wireless Sensor Networks Liping Liu, Feng Xia, Zhi Wang, Jiming Chen, Youxian Sun . . . . . . . . 239
A Geographical Cellular-Like Architecture for Wireless Sensor Networks Xiao Chen, Mingwei Xu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 EAAR: An Approach to Environment Adaptive Application Reconfiguration in Sensor Network Dongmei Zhang, Huadong Ma, Liang Liu, Dan Tao . . . . . . . . . . . . . . . . 259 A Secure Routing Protocol SDSR for Mobile Ad Hoc Networks Huang Chuanhe, Li Jiangwei, Jia Xiaohua . . . . . . . . . . . . . . . . . . . . . . . . 269 Secure Localization and Location Verification in Sensor Networks Yann-Hang Lee, Vikram Phadke, Jin Wook Lee, Amit Deshmukh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 Secure AODV Routing Protocol Using One-Time Signature Shidi Xu, Yi Mu, Willy Susilo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 A Security Enhanced AODV Routing Protocol Li Zhe, Liu Jun, Lin Dan, Liu Ye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298 A Constant Time Optimal Routing Algorithm for Undirected Double-Loop Networks Bao-Xing Chen, Ji-Xiang Meng, Wen-Jun Xiao . . . . . . . . . . . . . . . . . . . 308 A Directional Antenna Based Path Optimization Scheme for Wireless Ad Hoc Networks Sung-Ho Kim, Young-Bae Ko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 Optimized Path Registration with Prefix Delegation in Nested Mobile Networks Hyemee Park, Tae-Jin Lee, Hyunseung Choo . . . . . . . . . . . . . . . . . . . . . . 327 BGP-GCR+: An IPv6-Based Routing Architecture for MANETs as Transit Networks of the Internet Quan Le Trung, Gabriele Kotsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337 A Local Repair Scheme with Adaptive Promiscuous Mode in Mobile Ad Hoc Networks Doo-Hyun Sung, Joo-Sang Youn, Ji-Hoon Lee, Chul-Hee Kang . . . . . . . 351 PSO-Based Energy Efficient Gathering in Sensor Networks Ying Liang, Haibin Yu . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 362 Efficient Data Gathering Schemes for Wireless Sensor Networks Po-Jen Chuang, Bo-Yi Li, Tun-Hao Chao . . . . . . . . . . . . . . . . . . . . . . . . 370
Delay Efficient Data Gathering in Sensor Networks Xianjin Zhu, Bin Tang, Himanshu Gupta . . . . . . . . . . . . . . . . . . . . . . . . . 380 Localized Recursive Estimation in Wireless Sensor Networks Bang Wang, Kee Chaing Chua, Vikram Srinivasan, Wei Wang . . . . . . 390 Asynchronous Power-Saving Event-Delivery Protocols in Mobile USN Young Man Kim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400 Low-Complexity Authentication Scheme Based on Cellular Automata in Wireless Network Jun-Cheol Jeon, Kee-Won Kim, Kee-Young Yoo . . . . . . . . . . . . . . . . . . . 413 SeGrid: A Secure Grid Infrastructure for Sensor Networks Fengguang An, Xiuzhen Cheng, Qing Xia, Fang Liu, Liran Ma . . . . . . 422 Handling Sensed Data in Hostile Environments Oren Ben-Zwi, Shlomit S. Pinter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433 Detecting SYN Flooding Attacks Near Innocent Side Yanxiang He, Wei Chen, Bin Xiao . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443 Network Capacity of Wireless Ad Hoc Networks with Delay Constraint Jingyong Liu, Lemin Li, Bo Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453 Load-Based Dynamic Backoff Algorithm for QoS Support in Wireless Ad Hoc Networks Chang-Keun Seo, Weidong Wang, Sang-Jo Yoo . . . . . . . . . . . . . . . . . . . . 466 Efficient Multiplexing Protocol for Low Bit Rate Multi-point Video Conferencing Haohuan Fu, Xiaowen Li, Ji Shen, Weijia Jia . . . . . . . . . . . . . . . . . . . . . 478 A New Backoff Algorithm to Improve the Performance of IEEE 802.11 DCF Li Yun, Wei-Liang Zhao, Ke-Ping Long, Qian-bin Chen . . . . . . . . . . . . 488 Enhanced Power Saving for IEEE 802.11 WLAN with Dynamic Slot Allocation Changsu Suh, Young-Bae Ko, Jai-Hoon Kim . . . . . . . . . . . . . . . . . . . . . . 
498 DIAR: A Dynamic Interference Aware Routing Protocol for IEEE 802.11-Based Mobile Ad Hoc Networks Liran Ma, Qian Zhang, Fengguang An, Xiuzhen Cheng . . . . . . . . . . . . . 508
A Low-Complexity Power Allocation Scheme for Distributed Wireless Links in Rayleigh Fading Channels with Capacity Optimization Dan Xu, Fangyu Hu, Qian Wang, Zhisheng Niu . . . . . . . . . . . . . . . . . . . 518 On Energy Efficient Wireless Data Access: Caching or Not? Mark Kai Ho Yeung, Yu-Kwong Kwok . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528 An Efficient Power Allocation Scheme for Ad Hoc Networks in Shadowing Fading Channels Dan Xu, Fangyu Hu, Zhisheng Niu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 A Soft Bandwidth Constrained QoS Routing Protocol for Ad Hoc Networks Xiongwei Ren, Hongyuan Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548 Optimal QoS Mechanism: Integrating Multipath Routing, DiffServ and Distributed Traffic Control in Mobile Ad Hoc Networks Xuefei Li, Laurie Cuthbert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 A New Backoff Algorithm to Support Service Differentiation in Ad Hoc Networks Li Yun, Ke-Ping Long, Wei-Liang Zhao, Chonggang Wang, Kazem Sohraby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570 Power Aware Multi-hop Packet Relay MAC Protocol in UWB Based WPANs Weidong Wang, Chang-Keun Seo, Sang-Jo Yoo . . . . . . . . . . . . . . . . . . . . 580 Traffic-Adaptive Energy Efficient Medium Access Control for Wireless Sensor Networks Sungrae Cho, Jin-Woong Cho, Jang-Yeon Lee, Hyun-Seok Lee, We-Duke Cho . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593 An Energy-Conserving and Collision-Free MAC Protocol Based on TDMA for Wireless Sensor Networks Biao Ren, Junfeng Xiao, Jian Ma, Shiduan Cheng . . . . . . . . . . . . . . . . . 603 Experiments Study on a Dynamic Priority Scheduling for Wireless Sensor Networks Jiming Chen, Youxian Sun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
613 A BPP-Based Scheduling Algorithm in Bluetooth Systems Junfeng Xiao, Biao Ren, Shiduan Cheng . . . . . . . . . . . . . . . . . . . . . . . . . . 623
On the Problem of Channel Assignment for Multi-NIC Multihop Wireless Networks Leiming Xu, Yong Xiang, Meilin Shi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633 Validity of Predicting Connectivity in Wireless Ad Hoc Networks Henry Larkin, Zheng da Wu, Warren Toomey . . . . . . . . . . . . . . . . . . . . . 643 A Novel Environment-Aware Mobility Model for Mobile Ad Hoc Networks Gang Lu, Demetrios Belis, Gordon Manson . . . . . . . . . . . . . . . . . . . . . . . 654 A Low Overhead Ad Hoc Routing Protocol with Route Recovery Chang Wu Yu, Tung-Kuang Wu, Rei Heng Cheng, Po Tsang Chen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666 Recovering Extra Routes with the Path from Loop Recovery Protocol Po-Wah Yau . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676 Quality of Coverage (QoC) in Integrated Heterogeneous Wireless Systems Hongyi Wu, Chunming Qiao, Swades De, Evsen Yanmaz, Ozan Tonguz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689 ACOS: A Precise Energy-Aware Coverage Control Protocol for Wireless Sensor Networks Yanli Cai, Minglu Li, Wei Shu, Min-You Wu . . . . . . . . . . . . . . . . . . . . . 701 Coverage Analysis for Wireless Sensor Networks Ming Liu, Jiannong Cao, Wei Lou, Li-jun Chen, Xie Li . . . . . . . . . . . . 711 On Coverage Problems of Directional Sensor Networks Huadong Ma, Yonghe Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721 Using MDS Codes for the Key Establishment of Wireless Sensor Networks Jing Deng, Yunghsiang S. Han . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732 A Study on Efficient Key Management in Real Time Wireless Sensor Network Sangchul Son, Miyoun Yoon, Kwangkyum Lee, Yongtae Shin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
745 Efficient Group Key Management for Dynamic Peer Networks Wei Wang, Jianfeng Ma, SangJae Moon . . . . . . . . . . . . . . . . . . . . . . . . . . 753
Improvement of the Naive Group Key Distribution Approach for Mobile Ad Hoc Networks Yujin Lim, Sanghyun Ahn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763 RAA: A Ring-Based Address Autoconfiguration Protocol in Mobile Ad Hoc Networks Yuh-Shyan Chen, Shih-Min Lin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773 Dual Binding Update with Additional Care of Address in Network Mobility KwangChul Jeong, Tae-Jin Lee, Hyunseung Choo . . . . . . . . . . . . . . . . . . 783 Optimistic Dynamic Address Allocation for Large Scale MANETs Longjiang Li, Xiaoming Xu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794 Boundary-Based Time Partitioning with Flattened R-Tree for Indexing Ubiquitous Objects Youn Chul Jung, Hee Yong Youn, Eun Seok Lee . . . . . . . . . . . . . . . . . . . 804 Authentication in Fast Handover of Mobile IPv6 Applying AAA by Using Hash Value Hyungmo Kang, Youngsong Mun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815 The Tentative and Early Binding Update for Mobile IPv6 Fast Handover Seonggeun Ryu, Youngsong Mun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825 A Simulation Study to Investigate the Impact of Mobility on Stability of IP Multicast Tree Wu Qian, Jian-ping Wu, Ming-wei Xu, Deng Hui . . . . . . . . . . . . . . . . . . 836 Fast Handover Method for mSCTP Using FMIPv6 Kwang-Ryoul Kim, Sung-Gi Min, Youn-Hee Han . . . . . . . . . . . . . . . . . . 846 An Analytical Comparison of Factors Affecting the Performance of Ad Hoc Network Xin Li, Nuan Wen, Bo Hu, Yuehui Jin, Shanzhi Chen . . . . . . . . . . . . . . 856 Maximum Throughput and Minimum Delay in IEEE 802.15.4 Benoˆıt Latr´e, Pieter De Mil, Ingrid Moerman, Niek Van Dierdonck, Bart Dhoedt, Piet Demeester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
866 On the Capacity of Hybrid Wireless Networks in Code Division Multiple Access Scheme Qin-yun Dai, Xiu-lin Hu, Zhao Jun, Yun-yu Zhang . . . . . . . . . . . . . . . . 877
Performance Evaluation of Existing Approaches for Hybrid Ad Hoc Networks Across Mobility Models Francisco J. Ros, Pedro M. Ruiz, Antonio Gomez-Skarmeta . . . . . . . . . 886 UDC: A Self-adaptive Uneven Clustering Protocol for Dynamic Sensor Networks Guang Jin, Silvia Nittel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897 A Backoff-Based Energy Efficient Clustering Algorithm for Wireless Sensor Networks Yongtao Cao, Chen He, Jun Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907 Energy-Saving Cluster Formation Algorithm in Wireless Sensor Networks Hyang-tack Lee, Dae-hong Son, Byeong-hee Roh, S.W. Yoo, Y.C. Oh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917 RECA: A Ring-Structured Energy-Efficient Cluster Architecture for Wireless Sensor Networks Guanfeng Li, Taieb Znati . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927 A Distributed Efficient Clustering Approach for Ad Hoc and Sensor Networks Jason H. Li, Miao Yu, Renato Levy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937 A Novel MAC Protocol for Improving Throughput and Fairness in WLANs Xuejun Tian, Xiang Chen, Yuguang Fang . . . . . . . . . . . . . . . . . . . . . . . . . 950 Optimal Control of Packet Service Access State for Cdma2000-1x Systems Cai-xia Liu, Yu-bo Tan, Dong-nian Cheng . . . . . . . . . . . . . . . . . . . . . . . . 966 A Cross-Layer Optimization for Ad Hoc Networks Yuan Zhang, Wenwu Wu, Xinghai Yang . . . . . . . . . . . . . . . . . . . . . . . . . . 976 A Novel Media Access Control Algorithm Within Single Cluster in Hierarchical Ad Hoc Networks Dongni Li, Yasha Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986 IEE-MAC: An Improved Energy Efficient MAC Protocol for IEEE 802.11-Based Wireless Ad Hoc Networks Bo Gao, Yuhang Yang, Huiye Ma . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . 996
POST: A Peer-to-Peer Overlay Structure for Service and Application Deployment in MANETs Anandha Gopalan, Taieb Znati . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006 An Efficient and Practical Greedy Algorithm for Server-Peer Selection in Wireless Peer-to-PeerFile Sharing Networks Andrew Ka Ho Leung, Yu-Kwong Kwok . . . . . . . . . . . . . . . . . . . . . . . . . . 1016 Can P2P Benefit from MANET? Performance Evaluation from Users’ Perspective Lu Yan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026 Research on Dynamic Modeling and Grid-Based Virtual Reality Luliang Tang, Qingquan Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036 Design of Wireless Sensors for Automobiles Olga L. Diaz–Gutierrez, Richard Hall . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043 Mobile Tracking Using Fuzzy Multi-criteria Decision Making Soo Chang Kim, Jong Chan Lee, Yeon Seung Shin, Kyoung-Rok Cho . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1051 Pitfall in Using Average Travel Speed in Traffic Signalized Intersection Networks Bongsoo Son, Jae Hwan Maeng, Young Jun Han, Bong Gyou Lee . . . . 1059 Static Registration Grouping Scheme to Reduce HLR Traffic Cost in Mobile Networks Dong Chun Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065 Towards Security Analysis to Binding Update Protocol in Mobile IPv6 with Formal Method Jian-xin Li, Jin-peng Huai, Qin Li, Xian-xian Li . . . . . . . . . . . . . . . . . . 1073 Enhancing of the Prefetching Prediction for Context-Aware Mobile Information Services In Seon Choi, Hang Gon Lee, Gi Hwan Cho . . . . . . . . . . . . . . . . . . . . . . . 1081 A Mobile Multimedia Database System for Infants Education Environment Keun Wang Lee, Hyeon Seob Cho, Jong Hee Lee, Wha Yeon Cho . . . . 
1088 Towards a Symbolic Bisimulation for the Spi Calculus Yinhua L¨ u, Xiaorong Chen, Luming Fang, Hangjun Wang . . . . . . . . . . 1095
Mobile Agent-Based Framework for Healthcare Knowledge Management System Sang-Young Lee, Yun-Hyeon Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103 Assurance Method of High Availability in Information Security Infrastructure System SiChoon Noh, JeomGu Kim, Dong Chun Lee . . . . . . . . . . . . . . . . . . . . . . 1110 Fuzzy-Based Prefetching Scheme for Effective Information Support in Mobile Networks Jin Ah Yoo, Dong Chun Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1117 Duplex Method for Mobile Communication Systems Gi Sung Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125 Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1133
An Overlapping Communication Protocol Using Improved Time-Slot Leasing for Bluetooth WPANs

Yuh-Shyan Chen¹, Yun-Wei Lin¹, and Chih-Yung Chang²

¹ Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan
² Department of Computer Science and Information Engineering, Tamkang University, Taiwan
Abstract. In this paper, we propose an overlapping communication protocol using improved time-slot leasing in Bluetooth WPANs. One or many slave-master-slave communications usually exist in a piconet of a Bluetooth network. A fatal communication bottleneck occurs at the master node if many slave-master-slave communications are required at the same time. To alleviate this problem, an overlapping communication scheme is presented that allows a slave node to communicate directly and simultaneously with another slave node, replacing the original slave-master-slave communication in a piconet. This overlapping communication scheme is based on an improved time-slot leasing scheme. The key contribution of our improved time-slot leasing scheme is that it additionally offers overlapping communication capability, upon which we develop an overlapping communication protocol for Bluetooth WPANs. Finally, simulation results demonstrate that our communication protocol achieves performance improvements in bandwidth utilization, transmission delay time, network congestion, and energy consumption. Keywords: Bluetooth, time-slot leasing, WPAN, wireless communication.
1 Introduction
The advances of computer technology and the popularity of wireless equipment have improved the quality of our daily life. The trend of recent communication technology is to make good use of wireless equipment to construct a ubiquitous communication environment. Bluetooth [2] is a low-cost, low-power, and short-range communication technology that operates in the 2.4 GHz ISM band. Within a piconet, a master polls slaves by sending polling packets using a round-robin (RR) scheme. While the master communicates with one slave, all other slaves must hold and wait for their polling packets, so the transmission of the other slaves is stalled. This condition is called the "transmission holding problem." To reduce the "transmission holding problem", one interesting issue is how to develop a novel scheme that can effectively solve the "transmission holding" problem under the
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1-10, 2005. © Springer-Verlag Berlin Heidelberg 2005
fixed-topology situation. To satisfy that purpose, Zhang et al. developed a time-slot leasing (TSL) scheme [5]. Zhang et al.'s time-slot leasing scheme provides a general mechanism to support direct slave-slave communication, but the master node still uses the round-robin (RR) mechanism to check which slave nodes intend to send or receive data; the other slave nodes must wait for their polling turns and thus incur transmission holding time. More recently, Cordeiro et al. proposed a QoS-driven dynamic slot assignment (DSA) scheduling scheme [1] to utilize time-slot leasing more efficiently. With the TSL and DSA schemes, the transmission holding problem still exists. Effort will be made to effectively reduce the transmission holding problem under the fixed topology structure. In this paper, we propose an overlapping communication protocol using improved time-slot leasing in Bluetooth WPANs. The overlapping communication scheme is based on an improved time-slot leasing scheme modified from the original time-slot leasing scheme, which only provides slave-to-slave communication capability. The key contribution of our improved time-slot leasing scheme is that it additionally offers overlapping communication capability, upon which we develop an overlapping communication protocol for Bluetooth WPANs. Finally, simulation results demonstrate that our communication protocol achieves performance improvements in bandwidth utilization, transmission delay time, network congestion, and energy consumption. This paper is organized as follows. Section 2 describes the basic idea of our new scheme. The new communication protocol is presented in Section 3. The performance analysis is discussed in Section 4. Section 5 concludes this work.
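The round-robin polling delay described above can be sketched with a toy model. The function name, per-poll slot cost, and piconet size below are illustrative assumptions for exposition only, not values taken from the Bluetooth specification or from the paper.

```python
# Toy model of the "transmission holding problem" under round-robin polling.
# Assumption: the master spends a fixed number of slots per poll/response pair
# and visits slaves in a fixed order; real Bluetooth scheduling is richer.

def rr_holding_slots(num_slaves: int, poll_cost_slots: int = 2) -> list[int]:
    """Slots each slave holds its traffic before its first poll when the
    master cycles through the slaves in fixed round-robin order."""
    return [i * poll_cost_slots for i in range(num_slaves)]

waits = rr_holding_slots(7)   # a full piconet: 7 active slaves
print(waits)                  # [0, 2, 4, 6, 8, 10, 12]
print(max(waits))             # worst-case holding time grows with piconet size
```

The point of the sketch is only that the worst-case holding time grows linearly with the number of active slaves, which is the bottleneck TSL, DSA, and the proposed scheme all target.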
2 Basic Idea
The transmission holding problem originates from a drawback of the master/slave model. In a piconet, a slave may transmit packets only after it receives a polling packet from the master. As a result, when many slaves have data to transmit, each slave must hold its transmission until receiving the polling packet, as shown in Fig. 1(a). To alleviate the transmission holding problem, a time-slot leasing (TSL) approach [5] has been proposed, in which slaves can transmit packets directly to each other without the master relaying. Using the TSL approach, the waiting time of the other holding slaves is reduced; unfortunately, the effect of the transmission holding problem is only reduced, not eliminated. To solve the transmission holding problem completely and overcome the drawback of the RR scheme, an overlapping communication scheme is investigated in this work to offer overlapping communication capability for multiple pairs of devices within a piconet. With the overlapping communication scheme, Bluetooth devices can communicate with each other simultaneously and directly, improving communication performance. In the following, we describe the main contribution of our scheme, compared to the time-slot leasing (TSL) scheme [5] and the QoS-driven dynamic slot assignment (DSA) scheme [1]. Frequency-hopping spread spectrum is used in the Bluetooth network. Let C(x) denote the channel used in time slot x, and let α→β denote that slave node α sends data to slave node β. An example is shown in Fig. 1(b): S1→S2,
An Overlapping Communication Protocol
Fig. 1. (a) Transmission holding problem (b) Three communication requests (c) TSL (d) DSA (e) Overlapping communication protocol
S3→S4, and S5→S6 occur simultaneously in a piconet, and the transmission holding problem occurs heavily at master node M. Fig. 1(c) shows that the time cost is more than t2 − t0 when the TSL scheme serves S1→S2, S3→S4, and S5→S6. For the same transmissions, the time cost is reduced to t2 − t0 by the DSA scheme, as illustrated in Fig. 1(d). The time cost is improved further by our overlapping communication scheme: Fig. 1(e) illustrates that S1→S2, S3→S4, and S5→S6 can be accomplished in time t1 − t0. To explain the frequency-hopping technology, in which every time slot of a transmission normally adopts a different channel, let FH(x) denote the frequency used at time slot x. According to the Specification of the Bluetooth System 1.2 [2], five consecutive time slots keep the same channel when a DH5 packet is used; the rule is the same for DH3, DM3, and DM5 packets. Let C(x) denote the channel a Bluetooth device uses to send a packet at time slot x. If a device sends DH1 packets starting at time slot x, then C(x) = FH(x), C(x+1) = FH(x+1), C(x+2) = FH(x+2), C(x+3) = FH(x+3), and C(x+4) = FH(x+4), where C(x) ≠ C(x+1) ≠ C(x+2) ≠ C(x+3) ≠ C(x+4). But if a device sends a DH5 packet at time slot x, then five consecutive time slots use the same channel: C(x) = C(x+1) = C(x+2) = C(x+3) = C(x+4) = FH(x). Observe that channels FH(x+1), FH(x+2), FH(x+3), and FH(x+4) in the original frequency-hopping sequence are then free. Effort is made to significantly improve the throughput in a scatternet by using our new overlapping communication scheme. This is achieved by developing intra-piconet and inter-piconet overlapping protocols, which are presented in the following sections. The intra-piconet overlapping communication protocol is shown in Figs. 2(a)(b). The data transmission of M1↔S3
Fig. 2. The concept of overlapping communication scheme
and S1↔S2 can be overlapped. In addition, Figs. 2(c)(d) illustrate the overlapping condition for performing the intra-piconet overlapping communication protocol.
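The channel-reuse observation in this section — a DH5 packet pins five consecutive slots to FH(x), leaving FH(x+1)..FH(x+4) free for overlapping transmissions — can be sketched as follows. This is a simplified model: `fh` is a precomputed hop sequence supplied by the caller, since the actual Bluetooth hop-selection kernel is out of scope here.

```python
def used_channels(fh, start_slot, packet_type):
    """Return {slot: channel} occupied when a packet is sent at start_slot.

    fh: hop sequence, fh[x] is the nominal channel of time slot x.
    A DH1 packet occupies 1 slot; DH3/DM3 occupy 3 slots and DH5/DM5
    occupy 5 slots, all on the channel of the first slot.
    """
    slots = {"DH1": 1, "DH3": 3, "DM3": 3, "DH5": 5, "DM5": 5}[packet_type]
    return {start_slot + k: fh[start_slot] for k in range(slots)}

def freed_channels(fh, start_slot, packet_type):
    """Nominal hop channels left unused while a multi-slot packet is on the air."""
    occupied = used_channels(fh, start_slot, packet_type)
    return {x: fh[x] for x in occupied if fh[x] != fh[start_slot]}

fh = [10, 20, 30, 40, 50, 60]                 # toy hop sequence
print(used_channels(fh, 0, "DH5"))            # slots 0..4 all on channel 10
print(freed_channels(fh, 0, "DH5"))           # channels 20, 30, 40, 50 are free
```

The freed channels are exactly the ones the overlapping scheme can lease to other device pairs during the same five slots.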
3 Overlapping Communication Protocol
To significantly mitigate the "transmission holding problem", an overlapping communication protocol is presented. The overlapping communication protocol is divided into two phases: (1) queuing scheduling and (2) overlapping time-slot assignment. The details are described in the following.

3.1 Queuing Scheduling Phase
The queuing scheduling phase derives the overlapping communication schedule. To enable overlapping communication, the master initially forms a data-flow matrix and a queuing table. This work can be done in the BTIM window, as shown in Fig. 3(c). During a schedule interval, each source node (Bluetooth device) in a piconet can transmit data to only one destination node. The amount of data to be transmitted between every pair of source and destination nodes is kept in the master node and stored in a data-flow matrix

DM_{m×(m+1)} = [ D_{10} ... D_{1m} ; ... D_{ij} ... ; D_{m0} ... D_{mm} ],

where D_{ij} denotes the amount of data to be transmitted from node i to node j, where
Fig. 3. (a) Master polling (b) Polling and transmission in the original piconet (c) The structure of the BTIM
Fig. 4. (a) An example of piconet communication, (b) flow matrix, and (c) n-queue
1 ≤ i ≤ m, 0 ≤ j ≤ m, and m is the number of slave nodes in a piconet. Observe that node j is the master node if j = 0. An example is shown in Fig. 4(b). The master node further calculates a queuing sequence based on the data-flow matrix DM_{m×(m+1)}. Our overlapping communication scheme tries to utilize long packets, since high utilization of long packets significantly increases the chance of overlapping communication. Before describing the overlapping time-slot assignment operation, we define the following notations. First, a link with a greater amount of data has a higher priority for data transmission. Therefore, we define the priority function pri(ij) of link ij as

pri(ij) = (D_{ij} + D_{ji}) / |D_{ij} − D_{ji}|.

A link queue Q is defined to record the source-to-destination links i_0j, ..., i_kj, i_{k+1}j, ..., i_mj in a piconet that share the same destination node j. Further, let Q = {i_0j, ..., i_kj, i_{k+1}j, ..., i_mj}, where pri(i_kj) > pri(i_{k+1}j) and 0 ≤ k ≤ m − 1; for instance, {S1S2, S3S2}.
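The priority function and per-destination link queues can be sketched as follows, with the queues themselves ordered by MAX(Qq) as described below. This is an illustrative model: D_{S1S2} = 1350 and D_{S2S1} = 900 come from the text, the other node names and byte counts are made up, and the undefined D_{ij} = D_{ji} case is mapped to infinite priority here.

```python
from collections import defaultdict

def priority(d_ij, d_ji):
    # pri(ij) = (Dij + Dji) / |Dij - Dji|; the paper leaves the
    # Dij == Dji case implicit, so we map it to infinity here.
    diff = abs(d_ij - d_ji)
    return float("inf") if diff == 0 else (d_ij + d_ji) / diff

def queuing_sequence(links):
    """links: (src, dst, d_fwd, d_back) per bidirectional link, keyed by
    its heavier direction. Returns the queues grouped by destination,
    each sorted by falling priority, with the queues themselves ordered
    by MAX(Qq), i.e. the priority of their first link."""
    queues = defaultdict(list)
    for src, dst, d_fwd, d_back in links:
        queues[dst].append((src, priority(d_fwd, d_back)))
    for q in queues.values():
        q.sort(key=lambda link: link[1], reverse=True)
    return sorted(queues.items(), key=lambda kv: kv[1][0][1], reverse=True)

seq = queuing_sequence([("S1", "S2", 1350, 900),
                        ("S3", "S2", 500, 250),
                        ("S4", "S3", 800, 0)])
print(seq)   # queue for S2 (priorities 5.0, 3.0) comes before queue for S3
```

A symmetric link (equal traffic in both directions) thus sorts first, which is consistent with giving bidirectionally busy links the earliest slots.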
Assume that there are n link queues Q_1, Q_2, ..., Q_q, ..., Q_n, where each Q_q has a different destination node, 1 ≤ q ≤ n, and n is the number of distinct destination nodes. For instance, Q_1 = {S1S2, S3S2} and Q_2 = {S4S3}. Given Q_q = {i_0j_q, ..., i_kj_q, i_{k+1}j_q, ..., i_mj_q}, we denote MAX(Q_q) = max_{k=0..m} pri(i_kj_q) = pri(i_0j_q). All of Q_1, Q_2, ..., Q_q, ..., Q_n can be combined into a queuing sequence (Q_1; ...; Q_q; Q_{q+1}; ...; Q_n), where MAX(Q_q) > MAX(Q_{q+1}) and 1 ≤ q ≤ n. An example is shown in Fig. 4(a).

3.2 Overlapping Time-Slot Assignment Phase
The queuing scheduling phase determines an appropriate transmission order, which is recorded in the queuing sequence. This queuing sequence is used in the overlapping time-slot assignment phase to assign suitable time slots to each transmission. In the overlapping time-slot assignment phase, two conditions may occur: collision-free and collision detected. We introduce them in the following.

In the collision-free condition, the overlapping time-slot assignment phase assigns suitable time slots to each transmission ij according to the order in the queuing sequence. Q_q is processed before Q_{q+1} in the queuing sequence, and each Q_q starts to be processed at time slot 2k, where q ≤ k ≤ n. For higher bandwidth utilization, the DH5 packet type is the preferred choice for data transmission: using the appropriate packet type improves bandwidth utilization and decreases energy consumption. Let n_DH5 and n_DH3 denote the numbers of DH5 and DH3 packets respectively, D_{ij} denote the amount of data to be transmitted from node i to node j, and Excess denote the excess part of the transmission. For each transmission, the master arranges appropriate packet types by the following packet-type distribution rule:

n_DH5 × 339 + n_DH3 × 183 = D_{ij} + min(Excess).

The payloads of DH5 and DH3 packets are 339 bytes and 183 bytes, respectively. A set of packet-type distributions PD_{ij} = {pd_{ij}^0, pd_{ij}^1, ..., pd_{ij}^s, ..., pd_{ij}^p} is used to record the p + 1 packet types used for transmission from node i to node j, where pd_{ij}^s denotes the s-th transmitted packet type from i to j and 0 ≤ s ≤ p; hence p + 1 = n_DH5 + n_DH3. For the example in Fig. 4(b), D_{S1S2} is 1350 bytes, composed of four DH5 packets (n_DH5 = 4), so PD_{S1S2} = {5, 5, 5, 5}; D_{S2S1} is 900 bytes, composed of three DH5 packets (n_DH5 = 3), so PD_{S2S1} = {5, 5, 5}. The master obtains a PD_{ij} for each link ij by the packet-type distribution rule.
The set PD_{ij} is used to predict the time slots occupied by link ij.
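One way to realize the packet-type distribution rule — cover D_{ij} with DH5/DH3 payloads while minimizing the excess, preferring DH5 — is sketched below. The tie-breaking policy is our reading of the min(Excess) rule, chosen so that it reproduces both PD_{S1S2} and PD_{S2S1} from the text; the paper does not spell out its solver.

```python
DH5_PAYLOAD, DH3_PAYLOAD = 339, 183   # max payload bytes per packet

def packet_distribution(d_bytes):
    """Return PD_ij as a list of packet sizes (5 = DH5, 3 = DH3).

    DH5 is the preferred type; the remainder after whole DH5 packets is
    carried either by DH3 packets or by one more DH5, whichever wastes
    fewer bytes (our reading of the paper's min(Excess) rule).
    """
    n5, rem = divmod(d_bytes, DH5_PAYLOAD)
    if rem == 0:
        return [5] * n5
    n3 = -(-rem // DH3_PAYLOAD)               # ceiling division
    if n3 * DH3_PAYLOAD - rem < DH5_PAYLOAD - rem:
        return [5] * n5 + [3] * n3            # DH3 tail wastes less
    return [5] * (n5 + 1)                     # one extra DH5 wastes less

print(packet_distribution(1350))   # four DH5 packets: [5, 5, 5, 5]
print(packet_distribution(900))    # three DH5 packets: [5, 5, 5]
```

The two printed cases match the worked example in the text (1350 bytes → four DH5, 900 bytes → three DH5).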
Fig. 5. (a) The block slot calculation (b) collision occurrence (c) packet type changing
According to the packet-type distribution, the overlapping time-slot assignment phase assigns suitable time slots for data transmission under improved time-slot leasing. A time slot WS_{ij} is defined as node i's wake-up time slot on link ij. A set of time-slot offsets SO_{ij} = {so_{ij}^0, so_{ij}^1, ..., so_{ij}^t, ..., so_{ij}^q} is defined to record the q + 1 time-slot offsets for transmissions from node i to node j, where 0 ≤ t ≤ q and so_{ij}^t denotes the t-th time-slot offset of a transmission from i to j. SO_{ij} is accumulated from PD_{ij} and PD_{ji} and is used to predict the time-slot offsets occupied by link ij. To use improved time-slot leasing, the master announces WS_{ij} and SO_{ij} to each slave. Slaves wake up at the assigned slot WS_{ij} and obey the set of time-slot offsets SO_{ij} to transmit data.

For the overlapping communication scheme, the master assigns the roles, WS_{ij}, and SO_{ij} to each transmission ij at the BTIM. We consider two roles, temp-master and slave: if D_{ij} > D_{ji}, then i is the temp-master and j is the slave. For a transmission from temp-master i to slave j, the master assigns time slots to link ij as follows:

SO_{ji}: so_{ji}^t = so_{ij}^t + pd_{ij}^s,
SO_{ij}: so_{ij}^{t+1} = so_{ji}^t + pd_{ji}^s,

where so_{ij}^0 is initially set to 0. While computing SO_{ij}, if the master runs out of one PD set, the remaining pd values are set to 1 for ACK responses. The master assigns the nearest free time slot as WS_{ij}, and WS_{ji} = WS_{ij} + pd_{ij}^0; WS_{ij} + SO_{ij} gives the real time slots. For example, for S1S2 in Fig. 4(c), D_{S1S2} is 1350 bytes and SO_{S1S2} = {0, 10, 20, 30},
D_{S2S1} is 900 bytes and SO_{S2S1} = {5, 15, 25, 35}, with WS_{S1S2} = 0 and WS_{S2S1} = 5; the whole transmission process of S1S2 is shown in Fig. 5(a).

In the collision-detection condition, some occupied time slots may be assigned again to a transmission. Before transmission, the master should detect this condition first. A set of used time slots US = {us_0, us_1, ..., us_k} is defined to record the k + 1 time slots already used in a piconet. The master adds WS_{ij} + SO_{ij} into US after assigning time slots for i. If the master assigns time slots that are already in US to slaves, we call this a time-slot collision. The master uses the following condition to check the collision status:

(WS_{ij} + SO_{ij}) ∩ US ≠ ∅.

If the condition holds, a time-slot collision is detected. For example, as shown in Fig. 5(b), the master checks (WS_{S4S3} + SO_{S4S3}) ∩ US = {20}. The result set is non-empty, so a collision is detected in the time slots assigned for S4→S3. In the following, we describe how to solve the time-slot collision problem. When a time-slot collision occurs, the master applies the following rules to avoid the collision.

S1: /* Avoidance for DH5 packets */ If the t-th colliding packet type is DH5, the master changes the packet into two DH3 packets, i.e., changes pd_{ij}^t = {5} to pd_{ij}^t = {3, 3}.
S2: /* Avoidance for DH3 packets */ If the t-th colliding packet type is DH3, the master changes the packet into one DH5 packet, i.e., changes pd_{ij}^t = {3} to pd_{ij}^t = {5}.
S3: /* Avoidance for DH1 packets */ If the t-th colliding packet type is DH1, the master changes the packet into one DH3 packet, i.e., changes pd_{ij}^t = {1} to pd_{ij}^t = {3}.

After the master applies the above rules to the collision condition of S4→S3, the transmission process is as shown in Fig. 5(c). After the schedule arrangement, the master informs all slaves with a schedule-assignment packet. A device that has nothing to do changes to sleep mode to save energy until the next BTIM.
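The offset recursion, the collision check, and the avoidance rules S1-S3 can be sketched together as follows. This is a simplified model; padding the shorter PD set with 1-slot ACK packets follows our reading of the run-out rule.

```python
def timeslot_offsets(pd_fwd, pd_back):
    """Interleave the temp-master (forward) and slave (back) packets.

    pd_fwd / pd_back are packet-size lists (5, 3 or 1); the shorter side
    is padded with 1-slot ACK packets. Returns (SO_fwd, SO_back) with
    SO_fwd[0] = 0, per the recursion in the text."""
    n = max(len(pd_fwd), len(pd_back))
    fwd = list(pd_fwd) + [1] * (n - len(pd_fwd))
    back = list(pd_back) + [1] * (n - len(pd_back))
    so_fwd, so_back = [0], []
    for t in range(n):
        so_back.append(so_fwd[t] + fwd[t])       # so_ji^t   = so_ij^t + pd_ij
        if t + 1 < n:
            so_fwd.append(so_back[t] + back[t])  # so_ij^t+1 = so_ji^t + pd_ji
    return so_fwd, so_back

def detect_collision(ws, so, used_slots):
    """(WS_ij + SO_ij) ∩ US: returns the colliding absolute slots."""
    return {ws + off for off in so} & used_slots

def avoid_collision(pd, t):
    """Rules S1-S3: a DH5 splits into two DH3s, a DH3 grows into a DH5,
    and a DH1 grows into a DH3."""
    rules = {5: [3, 3], 3: [5], 1: [3]}
    return pd[:t] + rules[pd[t]] + pd[t + 1:]

print(timeslot_offsets([5, 5, 5, 5], [5, 5, 5]))
# → ([0, 10, 20, 30], [5, 15, 25, 35]), matching SO_S1S2 and SO_S2S1 above
print(detect_collision(10, [0, 10, 20], {20, 41}))   # → {20}: collision
```

Running the first call reproduces the SO sets of the worked example, which is a useful sanity check on the recursion.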
4 Experimental Results
In our simulations, we investigate the performance of the OCP protocol against three algorithms: RR, TSL [5], and DSA [1]. We have implemented the OCP protocol and the others using the Network Simulator (ns-2) [3] and BlueHoc [4]. The performance metrics of the simulation are given below; the simulation parameters are shown in Table 1.
– Average Holding Time: the time period during which a packet is held up by other transmissions.
– Transmission Delay: the latency from a source device to a destination device.
– Throughput: the number of data bytes received by all Bluetooth devices per unit time.
Table 1. The detailed simulation parameters

  Number of devices: 2 ≤ N ≤ 70
  Network region: 100 m × 100 m
  Radio propagation range: 10 m
  Mobility: No
  Schedule interval: 64 time slots
  Packet type: DH1, DH3, or DH5
Fig. 6. Performance comparison: (a) average holding time vs. number of Bluetooth devices (b) transmission delay vs. number of piconets
4.1 Performance of Average Holding Time and Transmission Delay
In a piconet, Fig. 6(a) indicates that the holding time of TSL is approximately half that of RR. DSA uses time-slot leasing more efficiently and decreases the holding time further. OCP uses the overlapping communication scheme, so several communication pairs transmit simultaneously and the average holding time is reduced to almost two slots. In scatternet communication using RR, a relay can transmit or receive data only when polled by a master. If a master routes packets to a relay and the relay cannot forward them to another master immediately, the transmission delay grows. In OCP, a relay has a higher priority to decide its transmission and reception times. As shown in Fig. 6(b), OCP decreases the transmission delay in scatternet communication.

4.2 Performance of Throughput
In a piconet, RR requires one slave to transmit data to another slave through the master relay, whereas TSL adopts direct slave-to-slave communication without the master relay. As shown in Fig. 7(a), as time passes, the total throughput of TSL exceeds that of RR. DSA uses TSL more efficiently, so its total throughput is higher than that of TSL. OCP uses the overlapping communication scheme, in which several communication pairs transmit at the same time; hence its throughput is higher than that of DSA.
Fig. 7. Performance of throughput vs. seconds in (a) a piconet (b) a scatternet
In a scatternet, OCP decreases the holding time because a relay has a higher priority to decide its transmission time and can transmit to another relay directly through the overlapping communication scheme. As shown in Fig. 7(b), the total throughput of OCP exceeds that of RR in scatternet communication.
5 Conclusion
In this paper, we have proposed an efficient overlapping communication protocol (OCP) to address the transmission holding problem. We realize OCP by using improved time-slot leasing. With OCP, devices can communicate simultaneously under the same frequency-hopping sequence, with less packet delay and a lower probability of network congestion. In a scatternet, OCP is able to speed up data packet transmission between piconets. OCP improves both bandwidth utilization and transmission delay.
References
1. C. Cordeiro, S. Abhyankar, and D. Agrawal. "Design and Implementation of QoS-driven Dynamic Slot Assignment and Piconet Partitioning Algorithms over Bluetooth WPANs". IEEE INFOCOM 2004, pages 27–38, April 2004.
2. Bluetooth Special Interest Group. "Specification of the Bluetooth System 1.2", Volume 1: Core. http://www.bluetooth.com, March 2004.
3. VINT Project. "Network Simulator version 2 (ns-2)". Technical report, http://www.isi.edu/nsnam/ns, June 2001.
4. IBM Research. "BlueHoc, IBM Bluetooth Simulator". Technical report, http://www-124.ibm.com/developerworks/opensource/bluehoc/, February 2001.
5. W. Zhang, H. Zhu, and G. Cao. "Improving Bluetooth Network Performance Through A Time-Slot Leasing Approach". IEEE Wireless Communications and Networking Conference (WCNC'02), pages 592–596, March 2002.
Full-Duplex Transmission on the Unidirectional Links of High-Rate Wireless PANs

Seung Hyong Rhee, Wangjong Lee, WoongChul Choi, and Kwangsue Chung
Kwangwoon University, Seoul, Korea
{shrhee, kimbely96, wchoi, kchung}@kw.ac.kr

Jang-Yeon Lee and Jin-Woong Cho
Korea Electronics Technology Institute, Sungnam, Korea
{jylee136, chojw}@keti.re.kr
Abstract. The IEEE 802.15.3 WPAN (Wireless Personal Area Network) has been designed to provide a very high-speed short-range transmission capability with QoS provisions. Its unidirectional channel allocations for the guaranteed time slots, however, often result in poor throughput when a higher-layer protocol such as TCP requires a full-duplex transmission channel. In this paper we propose a mechanism, called TCP transfer mode, that provides bidirectional transmission capability between TCP sender and receiver within the channel time allocations (CTAs) of the high-rate WPAN. As our scheme requires neither additional control messages nor additional CTAs, the throughput of a TCP connection on the high-rate WPAN can be greatly improved. Our simulation results show that the proposed scheme outperforms all of the possible ways of transmitting TCP under the current standard of the WPAN.
1 Introduction
The emerging high-rate wireless personal area network (WPAN) technology, which has been standardized [1] and is being further enhanced by the 15.3 task group in the IEEE 802 committee, will provide a very high-speed short-range transmission capability with quality-of-service (QoS) provisions. Its QoS capability is provided by channel time allocations using TDMA; if a DEV (device) needs channel time on a regular basis, it makes a request for isochronous channel time. Asynchronous or non-realtime data is supposed to use the CAP (Contention Access Period), which adopts CSMA/CA for medium access. Among the high-rate applications expected to be prevalent in the near future, high-volume file transfer using TCP will also occupy a large portion of the traffic transmitted in the WPAN environment. The unidirectional channel allocations for the guaranteed time slots, however, often result in poor throughput because a TCP connection requires a full-duplex transmission channel. In order
This work has been supported in part by the Research Grant of Kwangwoon University in 2004, and in part by the Ubiquitous Autonomic Computing and Network Project, the MIC 21st Century Frontier R&D Program in Korea.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 11–20, 2005. c Springer-Verlag Berlin Heidelberg 2005
to transmit TCP traffic according to the current standard of the high-rate WPAN, one of the following three methods can be adopted. First, the traffic can be transmitted during the CAP. However, as the duration of the CAP is determined by the piconet coordinator (PNC) and communicated to the DEVs via beacons, it is very hard for the DEVs to estimate the available bandwidth for a TCP connection. Second, the TCP connection may request a guaranteed time slot and use it for the bidirectional TCP data and acknowledgment packets. Clearly this will cause frequent collisions at the MAC layer between TCP sender and receiver, and thus significantly degrade the transmission performance. Finally, the DEVs may request two CTAs, one for the TCP data and another for the TCP acknowledgments; due to the dynamic nature of TCP flow control, however, it is very hard to anticipate or dynamically adjust the sizes of the CTAs. Recently, a lot of work has been done by many researchers in the area of the high-rate WPAN. However, few attempts have been made so far at the problem of non-realtime TCP transmission on the unidirectional links. In this paper, we propose TCP transfer mode, a mechanism that provides bidirectional transmission capability between TCP sender and receiver on the guaranteed time slots of the high-rate WPAN. If a CTA is declared to be in the TCP transfer mode, the source DEV alternates between transmit mode and receive mode so that the destination DEV is able to send data (TCP ACKs) in the reverse direction. Our mechanism is transparent to higher-layer entities: the source DEV regularly makes transitions between transmit and receive mode, and the destination DEV sends data only when the CTA is in the TCP mode. In addition, as our scheme requires neither additional control messages nor additional CTAs, the throughput of a TCP connection on the high-rate WPAN can be greatly improved. The remaining part of this paper is organized as follows.
After introducing related work and the high-rate WPAN protocol in chapter 2, we describe the three methods of TCP transmission under the current standard in chapter 3. In chapter 4, we propose a new transmission mode that allows bidirectional TCP transfer on the guaranteed time slots. Simulation results are provided and discussed in chapter 5, and finally chapter 6 concludes the paper.
2 Preliminaries

2.1 IEEE 802.15.3 High-Rate WPAN
The IEEE 802.15.3 WPAN has been designed to provide a very high-speed short-range transmission capability with QoS provisions [7,8]. Besides a high data rate, the standard will provide low-power and low-cost solutions addressing the needs of portable consumer digital imaging and multimedia applications. Figure 1 shows several components of an IEEE 802.15.3 piconet. The piconet is a wireless ad hoc network that is distinguished from other types of networks by its short range and centralized operation. The WPAN is based on a centralized and connection-oriented networking topology. At initialization, one device (DEV) is required to assume the role of the coordinator or scheduler of the piconet. It is
Fig. 1. IEEE 802.15.3 piconet components
called the PNC (piconet coordinator). Its duties include allocating network resources, admission control, synchronization in the piconet, providing quality of service, and managing the power-save mode. The superframe of the piconet consists of several periods, as follows. In the first period, the PNC transmits a beacon frame that contains all the information necessary to maintain the piconet. All the DEVs in the piconet should receive the beacon and synchronize their timers with the PNC. The beacon frame is used to carry control information and channel time allocations to the entire piconet. In the second period, a CAP (Contention Access Period) can optionally be allocated for association request/response, channel time request/response, and possible exchange of asynchronous traffic using CSMA. The CTA period is the third and largest part of the superframe. This period is used for isochronous streams and asynchronous data transfer. Using a TDMA mechanism, the period allocates guaranteed time slots to each DEV. All transmission opportunities during the CTA period begin at predefined times relative to the actual beacon transmission time and extend for predefined maximum durations. This allocation information is communicated in advance from the PNC to the respective devices using the traffic mapping information element conveyed in the beacon. During its scheduled CTA, a DEV may send an arbitrary number of data frames, with the restriction that the aggregate duration of these transmissions does not exceed the scheduled duration limit.

2.2 Related Works
Recently, a lot of work has been done by many researchers in the area of the high-rate WPAN. However, few attempts have been made so far at the problem of non-realtime TCP transmission on the unidirectional links. In [2], the authors proposed a MAC protocol that enhances the TCP transmission mechanism in TDMA-based satellite networks; this is an approach to TCP throughput enhancement using a modified MAC protocol. Similarly, [3] proposes a mechanism for enhancing TCP transfer in a satellite environment. [4] proposes a MAC-layer buffering method that improves handoff performance in the Bluetooth WPAN system in order to improve TCP performance; it minimizes the negative effects of the exponential backoff algorithm and prevents duplicate packets during handoffs. In [5], although the authors are not concerned with TCP transmission, they proposed an application-aware MAC mechanism that considers the status of higher layers. To the best of our knowledge, however, there has been no research in which the MAC layer supports efficient TCP transfer in the high-rate WPAN.

Fig. 2. IEEE 802.15.3 superframe
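The superframe layout of Sect. 2.1 — beacon, optional CAP, then CTAs that start at predefined offsets from the beacon — can be sketched as follows. The first-fit `allocate` policy is illustrative only, since the standard leaves PNC scheduling implementation-defined.

```python
from dataclasses import dataclass, field

@dataclass
class CTA:
    start: int        # slot offset relative to the beacon
    duration: int     # maximum duration granted by the PNC
    src: int
    dst: int

@dataclass
class Superframe:
    beacon_len: int
    cap_len: int                      # optional contention access period
    ctas: list = field(default_factory=list)

    def allocate(self, duration, src, dst):
        """Append a CTA right after the last allocation (simplified
        first-fit; a real PNC may reorder or reject requests)."""
        start = (self.ctas[-1].start + self.ctas[-1].duration
                 if self.ctas else self.beacon_len + self.cap_len)
        cta = CTA(start, duration, src, dst)
        self.ctas.append(cta)
        return cta

sf = Superframe(beacon_len=2, cap_len=10)
print(sf.allocate(20, src=1, dst=2).start)   # 12: right after the CAP
print(sf.allocate(15, src=3, dst=4).start)   # 32: after the first CTA
```

The key property this mirrors is that every CTA start is known in advance relative to the beacon, which is what makes the guaranteed time slots deterministic.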
3 TCP Transmissions for the WPAN
In this chapter, we describe three possible methods of TCP transmission under the MAC protocol of the current WPAN standard, which makes no mention of higher-layer protocols. The performance of TCP transmission using each possible method will be discussed and compared. Besides the three methods discussed in this chapter, TCP traffic can also be transmitted during the CAP. However, as the duration of the CAP is determined by the PNC and communicated to the DEVs via the beacon, it is very hard for the devices to estimate the available bandwidth for the TCP connection; we therefore consider only the methods that use the CTAP in this paper. In order to transmit TCP traffic using the CTAP according to the current standard of the high-rate WPAN, one of the three methods in this chapter can be adopted. Figure 3 shows a TCP transmission process using the Immediate-ACK policy in the high-rate WPAN. First, a TCP data packet comes from the higher layer and is processed at the MAC layer. The sender DEV sends the MAC frame to the receiver DEV over the wireless interface. The MAC layer of the TCP receiver receives the MAC frame and sends a MAC ACK to the sender. The TCP sink that accepted the TCP data then sends a TCP ACK packet to the TCP sender, and the TCP sender that received this TCP ACK packet sends a MAC ACK frame for the TCP ACK. The TCP sender transmits the next TCP data packet to the receiver when it receives the TCP ACK. All TCP transmissions are carried out in this way in the high-rate WPAN.
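The four-step exchange in Figure 3 can be traced with a toy model (illustrative only; real MAC frame formats, SIFS timing, and TCP window growth are omitted):

```python
def immediate_ack_trace(n_data):
    """Event trace of TCP over 802.15.3 with Immediate-ACK: every MAC
    frame (TCP data or TCP ACK) is answered by a MAC-level ACK, and the
    sender releases the next TCP data only after the TCP ACK arrives."""
    events = []
    for _ in range(n_data):
        events += ["sender: MAC frame (TCP data)",
                   "receiver: MAC ACK",
                   "receiver: MAC frame (TCP ACK)",
                   "sender: MAC ACK"]
    return events

for e in immediate_ack_trace(1):
    print(e)
```

Each TCP data packet thus costs four over-the-air transmissions, which is the overhead the later single-CTA and two-CTA comparisons are measured against.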
Fig. 3. TCP transmission in IEEE 802.15.3 high-rate WPAN
3.1 TCP Transmission on a Single CTA
According to the current MAC protocol, one may use a single (unidirectional) CTA for a TCP connection. Clearly, this results in poor throughput because the TCP connection requires a full-duplex transmission channel. In fact, TCP traffic cannot be transmitted between the sender and receiver at all using this method: the CTA is defined as unidirectional, and the TCP receiver has no way of sending TCP ACKs to the sender. Thus transmission of transport-layer ACKs is impossible and the connection cannot be maintained. Figure 4 depicts the case where a single CTA is allocated to the TCP sender and the TCP receiver has no way of sending back the ACKs.
Fig. 4. Single CTA in the high-rate WPAN
3.2 Allocating Two CTAs for TCP Data/ACK
The PNC may allocate an extra CTA for the TCP receiver, so that the receiver is able to send back the necessary ACKs. In this method, the TCP sender transmits data during its own CTA, and the receiver transmits ACKs during its allocated CTA. The problem here is that, due to the dynamic nature of TCP flow control, it is very hard to anticipate or dynamically adjust the sizes of those CTAs. In addition, the TCP sender waits for an ACK packet after sending data up to the window size, and the receiver cannot send an ACK before its CTA comes. Therefore, this method may waste the two CTAs, because exact channel time allocation is almost impossible due to the dynamic behavior of the TCP connection. Figure 5 explains this method: the TCP sender is assumed to transmit data packets during CTA1, and the receiver sends ACK packets during CTA2. In this method, the throughput can vary greatly according to the ratio of the durations of the two CTAs. If the ratio of the allocated CTAs does not reflect the current status of the TCP connection, the CTAs can be seriously wasted. In short, the problem of using two separate CTAs for a TCP connection is that it is extremely hard to adjust the ratio of the two CTAs to the dynamics of the TCP connection.

Fig. 5. Two CTAs for a TCP connection

3.3 Sharing a Single CTA

A TCP connection may request a single guaranteed time slot and use it for the bidirectional TCP data and acknowledgment packets; the single CTA is shared between TCP sender and receiver. That is, the sender sends TCP data to the receiver and the receiver sends TCP ACKs to the sender during the same CTA. Clearly this will cause frequent collisions at the MAC layer between TCP sender and receiver, and significantly degrade the transmission performance.

Fig. 6. Sharing a single CTA between two devices
4 TCP Transfer Mode
In this chapter, we propose TCP transfer mode, which can maintain the throughput of a TCP connection without collisions in a single CTA shared between TCP sender and receiver. The sender device informs the PNC that it will send TCP data when it makes a channel time request. For this purpose, we have defined a TCP Enable bit using the reserved bits in the 15.3 MAC header. The PNC responds to the request and then broadcasts the beacon frame with the information on the newly allocated CTA for the TCP connection. The CTA information contains the stream index field, which tells that the CTA is allocated to a TCP connection and will be used in TCP transfer mode. The TCP stream index and CTA block are explained in Figure 7. There are three kinds of ACK policy in the IEEE 802.15.3 high-rate WPAN: Immediate-ACK, No-ACK, and Delayed-ACK; we consider only Immediate-ACK and No-ACK in this paper. We describe TCP transfer mode for the No-ACK case first. The TCP sender changes its radio interface from TX mode to RX mode immediately after it sends a TCP data frame. It then senses for a frame in the reverse direction, possibly a TCP ACK from the TCP receiver, during the SIFS (short inter-frame space). The TCP sender continues transmitting TCP data if the channel is idle. If the TCP sender detects a TCP ACK from the TCP receiver, it keeps its radio interface in RX; after receiving the whole ACK, it sends the next TCP data after waiting for a SIFS. Figure 8 illustrates this operation of the proposed TCP transfer mode, which supports transmission without collisions.
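The No-ACK behaviour just described — send a data frame, turn to RX for one SIFS, stay in RX only if a TCP ACK is arriving — can be sketched as a toy trace; `ack_ready` is a hypothetical callback standing in for the sender's carrier sensing, not part of the standard:

```python
def tcp_transfer_mode(n_packets, ack_ready):
    """Trace the sender's radio mode in a No-ACK TCP-mode CTA.

    After each TCP data frame the sender turns to RX for one SIFS; if a
    TCP ACK is arriving it stays in RX until the ACK completes,
    otherwise it returns to TX for the next data frame.
    ack_ready(i): does the receiver answer data frame i with a TCP ACK?
    """
    trace = []
    for i in range(n_packets):
        trace.append("TX:data")
        trace.append("RX:sense")              # listen during the SIFS
        if ack_ready(i):
            trace.append("RX:tcp-ack")        # stay in RX for the ACK
    return trace

# e.g. an ACK after every second data frame:
print(tcp_transfer_mode(3, lambda i: i % 2 == 1))
```

Because the sender is always in RX when the receiver transmits, the two peers never drive the shared CTA at the same time, which is the collision-free property the mechanism relies on.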
The CTA block consists of the CTA Duration, CTA Location, Stream Index, Source ID, and Destination ID fields. The standard stream index values 0x00 (asynchronous data), 0xFD (MCTA traffic), and 0xFE (unassigned stream) are extended with 0x01 for a TCP stream.
Fig. 7. Stream Index field and value in CTA block
Full-Duplex Transmission on the Unidirectional Links

Fig. 8. Difference of ACK policies: (a) No-ACK policy, (b) Immediate-ACK policy

Table 1. Simulation Environment

Attribute          Value
Bandwidth          100 Mbps
Number of flows    1, 2, 3
CAP duration       4000 us
CTAP duration      4000 us
CTA duration       3500:500, 3000:1000, 2500:1500
MAC ACK policy     Immediate-ACK policy
TCP packet size    1024, 2048, 4096 bytes
TCP window size    20
Error rate         0, 25, 50 %

5 Performance Evaluations

5.1 Simulation Environment
We have implemented our TCP transfer mode in the ns-2 network simulator with the CMU wireless extension [9,10]. Our implementation includes beacon transmission, channel time management and the ACK policies of the 15.3 WPAN. The parameters used for the simulation are summarized in Table 1. We assume that all the DEVs are stationary during the simulation, and are associated with the piconet before the simulation starts. Moreover, there are no control frames or management overhead except the beacon transmission. We have chosen a channel bit rate of 100 Mbps in order to allow the TCP window to grow sufficiently. For the purpose of comparison, we use the same amount of time for the CAP and CTA durations. Finally, we have used various numbers of flows and adopted different error rates in order to verify the performance of our mechanism under diverse environments.
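The parameter grid of Table 1 can be enumerated as follows. This is a sketch with hypothetical names (`PARAMS`, `scenarios`); the authors' actual ns-2 scripts are not given in the paper.

```python
# Enumerate every combination of the Table 1 attributes, as one would do
# when driving a batch of simulation runs.
from itertools import product

PARAMS = {
    "bandwidth_mbps": [100],
    "num_flows": [1, 2, 3],
    "cap_duration_us": [4000],
    "ctap_duration_us": [4000],
    "cta_split": ["3500:500", "3000:1000", "2500:1500"],
    "mac_ack_policy": ["Immediate-ACK"],
    "tcp_packet_bytes": [1024, 2048, 4096],
    "tcp_window": [20],
    "error_rate_pct": [0, 25, 50],
}

def scenarios(params):
    """Yield one dict per combination of the parameter values."""
    keys = list(params)
    for values in product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

runs = list(scenarios(PARAMS))
print(len(runs))  # -> 81: 3 flow counts x 3 CTA splits x 3 packet sizes x 3 error rates
```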
S.H. Rhee et al.
5.2 Simulation Results Without Channel Errors
In this section we evaluate the performance of the proposed TCP transfer mode in an error-free channel, in which no errors occur during the transmission of MAC frames. The effect of the proposed mechanism can be verified in figure 9(a). In the case of the single-CTA method, although the entire bandwidth is 100 Mbps, the aggregate throughput saturates at about 28 Mbps. This throughput degradation is due to the collisions between the TCP sender and receiver. Since the MAC layer cannot distinguish TCP data from ACK packets, it transmits MAC frames whenever its queue is backlogged and the wireless medium is idle. Thus, collisions between peer DEVs are inevitable and the throughput is degraded. By using the TCP transfer mode, we achieve a throughput of 38 Mbps in the simulation. This performance is due to the MAC entities' ability to avoid collisions by checking for frame transmissions during the inter-frame space. Thus, the peer entities use the channel bidirectionally without packet loss and retransmission, and the TCP sender transmits as many TCP data packets as its window allows at once. This effect should be distinguished from that of TCP transmission in CAP periods: the PNC cannot determine the required length of the CAP for a TCP connection, and RTS/CTS frames are required in the CAP.

In figure 9(b), we have used two flows in the piconet. As the number of flows increases, the channel time allocated to a connection decreases. Thus the throughput of each TCP connection also decreases regardless of the mechanism. The graph shows that the throughput is reduced by half compared with that of figure 9(a). In this case, however, the TCP transfer mode still outperforms the other methods.

Figure 11(a) shows the throughput of a TCP connection when the number of flows in the piconet varies. As the duration of a superframe is fixed (4000 μs) during the simulations, a large number of flows means a small CTA for each connection, and thus a low throughput.
For example, the duration of a CTA is 2000 μs for 2 flows and 1333 μs for 3 flows. Table 2 compares the TCP throughput according to the number of flows under the different mechanisms. For any number of flows, the TCP transfer mode shows a higher throughput than the other methods.

Fig. 9. TCP throughput in a 15.3 WPAN: (a) 1-flow case, (b) 2-flow case
Table 2. Comparison of throughput for different mechanisms

Flows   CAP         Two CTAs    TCP Transfer Mode
1       28 Mbps     21 Mbps     38 Mbps
2       13.5 Mbps   8.3 Mbps    18.8 Mbps
3       8.7 Mbps    -           12.7 Mbps

Fig. 10. TCP transfer mode with frame error: (a) different error rates, (b) 2-flow case
5.3 Simulation Results with Channel Errors
In this section we evaluate the performance of the proposed TCP transfer mode in a wireless channel with frame errors. We assume that frames are corrupted randomly according to a uniform distribution and that, once a frame error occurs, the TCP receiver cannot interpret the frame. Figure 11(b) shows the simulation result when a 50% frame error rate is assumed in the wireless medium. The performances of the three methods are depicted in the graph, and again, the TCP transfer mode shows the best performance. Note that the graphs show fluctuations, as the frame errors cause frame retransmissions and variation of the TCP window size. In figure 10(a), we simulate the performance of the TCP transfer mode for different rates of frame errors to examine the influence of channel errors. The throughput of a TCP connection decreases as the error rate increases. Table 3 compares the performance of the different mechanisms discussed in this paper under non-zero error rates. In all cases our TCP transfer mode shows better throughput than the other mechanisms. Finally, figure 10(b) depicts the performance for two TCP flows and a 50% frame error rate in the piconet. Again in this simulation, the TCP transfer mode outperforms the other methods.

Fig. 11. Simulation results: (a) different number of flows, (b) TCP throughput with channel errors

Table 3. Performance of the mechanisms with frame errors

Error rate   CAP       Two CTAs    TCP Transfer Mode
0%           28 Mbps   21 Mbps     38 Mbps
25%          23 Mbps   18 Mbps     31 Mbps
50%          18 Mbps   14.5 Mbps   24.5 Mbps
6 Conclusion

In this paper we have proposed an efficient TCP transmission method that provides bidirectional transmission capability between a TCP sender and receiver within the channel time allocations of the IEEE 802.15.3 high-rate WPAN. Our scheme requires neither additional control messages nor additional CTAs, and it can be implemented with a minor change to the current standard. We have described three possible methods of TCP transmission with the MAC protocol of the current WPAN standard, and compared the performance of our mechanism with those methods under various simulation environments. Our extensive simulations show that the proposed mechanism greatly improves the throughput of a TCP connection in the piconet regardless of the number of flows or the error rate.
References

1. Wireless Medium Access Control and Physical Layer Specifications for High Rate Wireless Personal Area Networks. IEEE standard, Sep. 2003
2. J. Zhu, S. Roy: Improving TCP Performance in TDMA-based Satellite Access Networks. ICC 2003, IEEE (2003)
3. J. Neale, A. Mohsen: Impact of CF-DAMA on TCP via Satellite Performance. GLOBECOM 2001, IEEE (2001)
4. S. Rhee et al.: An Application-Aware MAC Scheme for a High-Rate Wireless Personal Area Network. IEEE WCNC 2004 (2004)
5. H. Balakrishnan, S. Seshan, E. Amir, R. Katz: Improving TCP/IP Performance over Wireless Networks. ACM MOBICOM '95 (1995)
6. J. Karaoguz: High-rate Wireless Personal Area Networks. IEEE Communications Magazine (2001)
7. P. Gandolfo, J. Allen: 802.15.3 Overview/Update. The WiMedia Alliance (2002)
8. The CMU Monarch Project: Wireless and Mobile Extension to ns, Snapshot Release 1.1.1. Carnegie Mellon University (1999)
9. K. Fall: The ns Manual. UC Berkeley, LBL, USC/ISI, and Xerox PARC (2001)
Server Supported Routing: A Novel Architecture and Protocol to Support Inter-vehicular Communication

Ritun Patney, S.K. Baton, and Nick Filer

School of Computer Science, University of Manchester, Manchester, United Kingdom M13 9PL
{patneyr, S.K.Barton, nfiler}@cs.man.ac.uk
Abstract. A novel architecture and multi-hop protocol to support inter-vehicular communication is proposed. The protocol uses 'latency' as a metric to find routes. We introduce the concept of a Routing Server (RS), which tries to keep up-to-date information about the state of the network, i.e. the network topology and the latency at each node averaged over a short time. Route discovery is carried out by sending a Route Request packet directly to the RS, rather than by flooding the network. We call this model 'Server Supported Routing' (SSR). A simulation study is carried out and the performance of SSR compared with the Dynamic Source Routing (DSR) protocol. It is found that SSR performs better than DSR over longer hops, faster rates of topology change, and high offered load in the network. The study also reveals that DSR is much more sensitive to changes in the network than SSR.
1 Introduction

The demand for mobile broadband services is increasing, and soon vehicles will be equipped with broadband-aware devices. Users will expect a quality of service similar to that of wired networks. However, current 3G networks, at vehicular traffic speeds, can only provide a transmission rate of 144 Kbps [1]. Wireless Local Area Networks (WLANs) support data rates of up to 54 Mbps [2], but such high data rates are restricted by very short transmission ranges (50-80 m). One solution is to provide an Access Point (AP) every 50 m along all major road networks; however, such a solution would be too expensive to implement. A pure ad-hoc based solution tends to give very low throughput for longer paths [3]. In this work we present a new idea which, we believe, has not been proposed before in the literature. The solution incorporates a novel architecture along with changes to current multi-hop ad-hoc routing protocols. The model is motivated by the work carried out by Lowes [4], who measures the maximum achievable performance a routing protocol can reach using simulation studies. For this, his work implements a piece of software called the 'magic genie routing protocol'. This software sits above all routers in the simulation environment. All nodes, at periodic intervals of time, pass their dynamic state, e.g. the latency experienced by the packets being forwarded (averaged over a short time), current buffer size, etc., to the magic genie software. The software is also kept aware of the current network topology. The best path (dependent on the desired metric - hop count, latency, buffer size, etc.) from a source to a destination can then be found using the Dijkstra or Bellman-Ford [5] algorithm. When a node has data for a destination, it simply asks the magic genie for the best available route.

In our model, we introduce the concept of a Routing Server (RS), which is analogous to the magic genie routing server. We call our model 'Server Supported Routing' (SSR), and the network uses latency as the routing metric. We compare its performance with the Dynamic Source Routing (DSR) protocol [6]. The remainder of the paper is organized as follows. In Section 2, we describe the proposed model. Section 3 enumerates some implementation details. In Section 4, we present our results and analyze them. We finally conclude with a short discussion.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 21-30, 2005. © Springer-Verlag Berlin Heidelberg 2005
2 Approach and Proposed Model

Fig. 1 provides an insight into the proposed architecture. The Routing Server (RS) is connected with immobile nodes, which we call Routing Server Access Points (RSAPs). These are provided at regular distances (which can be much greater than the 50-80 m limit for infrastructure-based WLANs), and enable communication between the vehicles and the RS. Apart from being immobile, RSAPs have the same functionality as any other vehicle. In the following discussion, a 'node' refers to the vehicles as well as the RSAPs. We assume the physical wireless links are bidirectional. The communication between the RSAPs and the RS is assumed to be done via underground cables. Furthermore, each vehicle is considered as a single network node, and the road network has no junctions and stoppages.

Fig. 1. Architecture for SSR
Fig. 2. Impact of hop count on p.d.f.
Fig. 3. Impact of hop count on delay
Fig. 4. p.d.f. vs rate of topology change
Fig. 5. Routing load vs rate of topology change
Fig. 6. Throughput vs offered load
Fig. 7. p.d.f. vs offered load
Fig. 8. Routing load vs offered load

2.1 Overview

Every node, at all times, tries to maintain a valid route to the RS. The route is based on the least latency a packet would experience to reach the RS, and contains the list of all nodes on this route, excluding the node itself. We refer to this route as Rn, defined as follows. Let S = {n, n-1, ..., 1} be a set of ordered nodes in a path. Then Rn is the subset {n-1, ..., 1} of S, where the node numbered '1' is the RS. Furthermore, every node also maintains a list of its current neighbours, and the average latency experienced by packets forwarded by it over time t (Lt). The neighbour list plus the latency at its end is referred to as the 'Routing Information Set (RIS)'. Periodically, every node sends its RIS to the RS on Rn. In turn, the RS uses the RIS packets received from different nodes to maintain a link-state graph of the network, along with the average latency experienced by packets at each node.

2.2 Building the RIS and Maintaining Rn

The RS and every node periodically broadcast a RSBROADCAST (Routing Server Broadcast) packet on all interfaces. The packet is used by the receiving nodes to maintain an updated Rn as well as a list of their neighbours. A node broadcasts a RSBROADCAST packet only if it itself has a valid route to the RS. The originator of this packet fills in the two fields of RSBROADCAST in the following manner:

1. Route to the RS (Rn) being currently maintained by the originator - the set S, obtained by adding itself to Rn.
2. Latency - this field indicates the latency a packet forwarded on S would experience. It is the sum of the average latencies at each node in the set S. The RS fills in a value of 0 before originating such a packet. Each node adds its own latency to the latency of the path Rn before filling this field.
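The two fields can be sketched as follows. This is an illustrative sketch with hypothetical names (`make_rsbroadcast`), not the authors' Java simulator code: the route field is the originator's own id prepended to its current Rn, and the latency field is the upstream route's latency plus the originator's own Lt.

```python
# Sketch of RSBROADCAST origination: the RS starts the chain with an
# empty route and latency 0; each downstream node prepends itself and
# adds its own average latency Lt.

def make_rsbroadcast(node_id, rn, rn_latency, own_latency):
    """Build the RSBROADCAST payload originated by `node_id`.

    rn          -- current route to the RS (RS last); [] if the node is the RS
    rn_latency  -- latency advertised for rn by the upstream broadcast
    own_latency -- this node's own average latency Lt"""
    return {"route": [node_id] + list(rn),        # the set S
            "latency": rn_latency + own_latency}  # summed latency along S

# The RS (node 1) originates; node 2 adopts its route and re-broadcasts
# with its own latency Lt = 3 added.
from_rs = make_rsbroadcast(1, [], 0, 0)
from_n2 = make_rsbroadcast(2, from_rs["route"], from_rs["latency"], 3)
print(from_n2)  # -> {'route': [2, 1], 'latency': 3}
```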
On reception of a RSBROADCAST packet, a node does the following:

1) Checks the route to the RS contained in the received packet. If the node's own address already appears in the route, it silently discards the packet.
2) Else, if the current Rn passes through the originator of the RSBROADCAST, it updates Rn to use the new route.
3) Else, if the node does not have a valid or fresh enough Rn, it updates Rn with the route in the received packet.
4) Else, it compares the latency associated with its current Rn with the latency in the received packet, and replaces Rn with the new route if the latter claims a lower latency.

Apart from the above, the node always adds the originating node as its neighbour, and associates a timeout with it. This timeout indicates the time by which it should receive the next RSBROADCAST packet for the node to qualify as its neighbour. A similar timeout value is also associated with Rn, after which the route is no longer valid.

2.3 Creating a Link Cache at RS

RIS packets contain the originating node's neighbours and its Lt. This is used by the RS to update its maintained network state. The originating node of the RIS packet forms a link with each of its neighbours, and each node's latency is stored as extracted from the RIS packet. The RS also associates each node and link with a validity time. Both are removed if the time expires and no information has been received about them recently enough. In case a node is removed, all links associated with the node are also removed. While adding a link, it may be possible that the node at the other end of the link does not exist in the cache (no RIS packet received from it recently enough); the RS still adds this link to its cache since we have assumed bi-directional links.
However, since no information has been explicitly received from the node in question, a high enough latency is associated with it that routes through it will only be chosen when no other route between a source and a destination is possible without its inclusion.

2.4 Route Discovery

A node requiring a route to a destination unicasts (thus preventing the overhead associated with flooding) a Route Request (RREQ) packet to the RS; the packet is
source routed with route Rn. The model was first designed so that RREQ packets use hop-by-hop routing; however, it was observed that, due to the asynchronous change of Rn at different nodes, some RREQ packets were looping between nodes and never reached the RS. On receiving a RREQ, the RS calculates a route based on the least latency and sends a Route Reply (RREP) back to the source node on the same route on which the RREQ packet arrived. In our model, the source uses this route either for a fixed time or to forward a fixed number of packets, whichever limit is reached first. Route discovery is performed again once the limit is reached.

2.5 Route Maintenance

We have eliminated RERR packets completely, as opposed to DSR. The following discussion summarizes the situations where a RERR would have been sent by DSR, and how our model avoids them. While forwarding a packet (with a source route) to its next hop, if the link layer is not able to deliver the packet, it informs the network layer. In DSR, the network layer initiates a RERR for the originator of the failed packet (one RERR per failed packet). In our network layer, route maintenance is inherent in the design, as follows:

1) A node re-starts a route discovery when it has used a route a maximum number of times (hence new routes are fetched periodically).
2) Nodes do not transmit RIS and RSBROADCAST packets if they do not have a fresh enough route to the RS (thus avoiding route failures).
3) RREQs are generated by a node only if it has a valid route to the RS. Furthermore, the RS does not reply with a RREP if it does not find a route in its route cache between the source and the destination.
4) Links and routes time out and become invalid after a certain period of time unless new information is obtained about them within the timeout period.

All this makes it unnecessary for RERRs to be propagated. The non-generation of RERRs also saves network bandwidth.
In DSR, while a RERR is being forwarded, the failure of another link may cause more RERRs to flow. Also, if a node generating a RERR does not have a route to the source node, it starts its own route discovery for the source, causing greater routing overhead.
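The route computation at the RS (Sections 2.3-2.4) can be sketched with Dijkstra's algorithm over the link cache, taking each node's average latency Lt as the cost of traversing that node. The names (`best_route`, the default latency for unheard-from nodes) are our own illustrative choices; the paper does not give the RS's actual code.

```python
# Least-latency path selection at the RS: Dijkstra over the link-state
# graph built from RIS packets. Nodes the RS has never heard from
# directly are charged a high latency, so routes through them are chosen
# only as a last resort (Sect. 2.3).
import heapq

UNKNOWN_LATENCY = 1e6  # penalty for nodes known only via a neighbour's RIS

def best_route(links, latency, src, dst):
    """Return the path src -> dst with the least summed node latency.

    links   -- dict node -> set of neighbour nodes (bidirectional)
    latency -- dict node -> average latency Lt at that node
    Returns the node list, or None if no path exists."""
    dist = {src: latency.get(src, UNKNOWN_LATENCY)}
    prev, seen = {}, set()
    heap = [(dist[src], src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:                      # reconstruct the path backwards
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return list(reversed(path))
        for v in links.get(u, ()):
            nd = d + latency.get(v, UNKNOWN_LATENCY)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None

links = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
latency = {"A": 1, "B": 10, "C": 2, "D": 1}
print(best_route(links, latency, "A", "D"))  # -> ['A', 'C', 'D'] (avoids slow B)
```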
3 Implementation

We chose Dynamic Source Routing (DSR) as the basis for comparison because SSR was formed after studying and modifying DSR in detail; some features, like source routing of data packets and forwarding RREP packets on the same path on which the RREQ arrived, are common to both. We simulate SSR and DSR on a discrete event simulator [7]. The simulator has been developed by the Mobile Systems Architecture Research Group, School of Computer Science, University of Manchester, UK [8]. The simulations have been carried out in a Windows environment, with Java as the programming language. The machine used was Intel based with 256 MB of memory.
3.1 Modeling the Protocol Stack

The OSI reference model [9] divides the protocol stack into layers, each having a different functionality. For our simulations, we model the physical layer, the MAC layer, and the network layer. The MAC layer used for the simulations is Aloha [10]. In Aloha, each node transmits whenever it has data to transmit. In case of a collision, the frame is re-transmitted after a random amount of time. The randomness is necessary, as otherwise the frames would keep colliding over and over again.

3.1.1 Traffic Simulation

Traffic between a source and a destination is simulated by attaching a 'traffic stream' object between the pair (rather than nodes having an application layer which generates packets). The stream decides when the source generates a packet, making a function call on the node's network layer to make it generate one. The stream consists of the start time of the first packet, the rate at which packets are to be generated, and the number of packets to generate per call.

3.1.2 Network Layer Design

We implement part of the DSR protocol. The basic route discovery mechanisms (RREQ and RREP) are implemented. The RREP is sent after reversing the route on which the RREQ arrives. The route maintenance (RERR) mechanism is also implemented. However, the extensions to the DSR protocol are not implemented. These include caching overheard routing information, support for unidirectional links, expanding route discovery, packet salvaging, automatic route shortening, etc. Additional functionality required to simulate SSR (like unicasting of the RREQ, etc.) has been added to the DSR design. This allows the program to be run in both modes (DSR as well as SSR), selectable via an external parameter.

3.2 Topology Simulation

We model node mobility by creating and destroying vehicles on a random basis. The reasoning behind this is that a link is only important at the time when a packet is being forwarded on it. For all simulations, the number of vehicles at any time is kept constant, i.e. for each vehicle destroyed, another is created at a random location (by choosing a random set of co-ordinates).
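The Aloha retransmission behaviour described in Sect. 3.1 can be sketched as a toy slot model. This is not the simulator's code; `aloha`, the slot abstraction, and the backoff range are our own assumptions, with a fixed seed so the run is reproducible.

```python
# Toy pure-Aloha sketch: every node transmits as soon as it has a frame;
# frames sharing a slot collide and are each retried after a random
# backoff (the randomness keeps them from colliding forever).
import random

def aloha(first_attempts, max_backoff=8, seed=1):
    """Return the slot at which each node's frame finally gets through.

    first_attempts -- dict node -> slot of its first transmission."""
    rng = random.Random(seed)
    pending = dict(first_attempts)
    done = {}
    while pending:
        slots = {}
        for node, t in pending.items():
            slots.setdefault(t, []).append(node)
        for t, nodes in slots.items():
            if len(nodes) == 1:          # lone frame in its slot: success
                done[nodes[0]] = t
                del pending[nodes[0]]
            else:                        # collision: each retries later
                for node in nodes:
                    pending[node] = t + 1 + rng.randrange(max_backoff)
    return done

done = aloha({"n1": 0, "n2": 0, "n3": 5})
print(sorted(done))  # -> ['n1', 'n2', 'n3']: every frame eventually succeeds
```

Here n1 and n2 collide in slot 0 and are separated by their random backoffs, while n3 transmits alone in slot 5 and succeeds immediately.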
4 Simulations

The following metrics are measured and evaluated:

1) Packet Delivery Fraction (p.d.f.) - the ratio of the data packets received by the destinations to those generated by the sources.
2) Throughput - the number of packets received per second at all destinations. We plot the average throughput of a stream, in packets received per second.
3) Normalized Routing Load - the number of routing packets transmitted per application packet received. A single transmission (hop-wise) of a routing packet by each node is counted as one transmission. This is summed for the whole network and divided by the number of packets received at all destinations.
4) Average end-to-end delay - the delay experienced by a packet from creation until successful reception at the destination. It includes retransmission delays at the MAC layer and route discovery delays. It is averaged over all streams.

The metrics are plotted against the offered load per traffic stream, the number of hops, and the rate of topology change, depending on the experiment.

4.1 Configuration

For all the simulations, we use fixed packet sizes of 512 bytes. All simulations are started with 225 vehicles. The number of access points has been kept at 8 for all simulations (making the road length about 6000 m). The channel bandwidth (bit transmission rate) is configured at 54 Mbit/s. All simulations are 40 s long. Traffic streams are created between 10 and 15 s. Vehicles are created and destroyed starting from 1 s. The network is found to stabilize from 25 s, and all measurements are taken from 25 to 40 s.

4.2 Experiments

Three different sets of experiments are performed. In the first, two traffic streams are run in opposite directions between the same set of nodes. The number of hops is varied in increments of 2 and the packet generation rate is kept at 300 packets/sec. This is done on a static topology. We measure the packet delivery fraction and the end-to-end delay (fig. 2 and fig. 3). The packet delivery fraction in DSR falls rapidly with an increasing number of hops, whereas in SSR it falls slowly.
This is broadly consistent with the theory that the probability of a packet drop increases with increasing path length [11]. The end-to-end delay in DSR, for longer path lengths, is seen to be much higher than for SSR. Collision of frames leads to re-transmissions, which adds to their delay. This delay also adds to the delay of the buffered frames (which must wait until this frame finishes delivery or is dropped). Furthermore, a frame colliding more than a specific number of times is ultimately dropped by the MAC layer, leading to greater packet loss. These problems are exacerbated with an increasing number of intermediary nodes (longer routes). In SSR, because routes are acquired on a regular basis, data is sent on different (though not necessarily completely disjoint) routes. Some of these routes may also pass through the RS (a wired link not suffering from the problems of wireless links). For packets routed via the RS, these problems exist only for the hops from an AP to the destination and from the source to an AP. SSR is therefore able to give a reasonably steady performance over varying hop counts.
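The four metrics defined at the start of this section can be computed from a run's raw counters as follows; the counter names here are hypothetical, not taken from the authors' simulator.

```python
# Compute p.d.f., throughput, normalized routing load, and average
# end-to-end delay from the raw counters of one measurement interval.

def metrics(sent, received, routing_tx, delays, duration_s):
    """sent/received -- data packets generated / delivered
    routing_tx   -- per-hop routing-packet transmissions, whole network
    delays       -- end-to-end delay of each delivered packet (seconds)
    duration_s   -- length of the measurement interval (seconds)"""
    return {
        "pdf": received / sent,               # packet delivery fraction
        "throughput": received / duration_s,  # packets per second
        "nrl": routing_tx / received,         # normalized routing load
        "avg_delay": sum(delays) / len(delays),
    }

# e.g. the 15 s measurement window (25-40 s) of one run:
m = metrics(sent=4500, received=3600, routing_tx=1800,
            delays=[0.02, 0.04, 0.03], duration_s=15)
print(m["pdf"], m["throughput"], m["nrl"])  # -> 0.8 240.0 0.5
```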
In the second set of experiments, we run 3 traffic streams between arbitrarily chosen sources and destinations. The pairs are chosen such that they are at least 6 hops apart. The packet generation rate is kept constant at 300 packets per second, and the rate at which vehicles are created and destroyed is varied. We measure the packet delivery fraction and the normalized routing load (fig. 4 and fig. 5). It is found that the packet delivery fraction for DSR is very poor compared to SSR at this high rate of topology change. This can be attributed to the following:

1. In DSR, to obtain a route, RREQ and RREP packets need to traverse between the source and the destination. The RREP is returned on the route on which the RREQ arrives. Some intermediary nodes may be destroyed while the route is being discovered, breaking the individual links and thus the path on which the RREP packet was to be forwarded. This may lead to the source never receiving a RREP, making it re-start the route discovery; after a certain number of retries, all packets for this destination are dropped.
2. In DSR, data packets are also source routed. Destroying a node which falls on the route on which data packets are being forwarded results in some data packets failing to reach the destination. It also results in the generation of RERR packets, causing not only a higher routing load but also greater interference at the physical layer. Furthermore, it takes time before the source node gets a RERR packet; in the meantime, the source keeps forwarding data packets on the same failed route.
In SSR, the RREQ is not sent all the way to the destination but only to the RS through the nearest AP (which also generally means fewer hops). For data packets forwarded to their destinations through the RS, the probability of loss due to intermediate nodes dying is lower (it can happen only on the routes between the source and an AP, and between an AP and the destination). The normalized routing load in SSR is more or less the same irrespective of the topological changes. All nodes keep sending RIS and RSBROADCAST packets even if they have no data to transmit or receive. The only other routing load is due to the RREQ and RREP packets; the RREQ is not broadcast but unicast to the RS, and RERR packets are not generated in the event of a link failure. Stressing the points discussed above, the variation in the graphs for DSR is much greater than for SSR, implying that DSR is more likely to be affected by changes in the links than SSR. It is interesting to note that at slower rates of topology change, the performance of DSR and SSR is quite similar. A slower rate means the network is closer to a static network. DSR may find multiple routes to a destination for every RREQ it sends (since a RREQ is broadcast and a RREP is generated for each RREQ received). Therefore it may use a second available route on receiving a RERR, reducing the routing load and improving the packet delivery fraction.

In the last set of experiments, we run 3 traffic streams, chosen arbitrarily (with a minimum hop separation of 6), and vary the rate of packet generation of each stream. To make it fairly challenging for the protocols, we keep the rate of router creation/destruction constant at 8.7 routers per second. With an even faster rate of router creation and destruction, the throughput of DSR starts to approach zero. This
happens because no route discovery process (sending of a RREQ and receiving a RREP) is able to complete at such a fast rate of topology change, thus providing no basis for a meaningful analysis and comparison with SSR. The number of packets generated per second is varied from 50 to 400 (in increments of 25). We measure the throughput (packets/sec), the normalized routing load, and the packet delivery fraction (fig. 6, fig. 7, and fig. 8). The whole process is repeated with traffic streams run between a different set of sources and destinations, and the performance metrics are averaged over the two runs. SSR performs better than DSR in terms of packet delivery fraction and the average throughput of each stream. It is, however, interesting to note that the throughput increases with increasing data rate, whereas the percentage of packets delivered decreases. This means that although both protocols, with increasing data rates, are able to deliver more packets to the destination within the same time, they are not able to do so proportionally to the increase in data rate. This needs to be studied in more detail in future work. The normalized routing load decreases for SSR with increasing data rates. This is because, as discussed above, the routing load in the SSR architecture remains almost the same; the graph falls due to the increase in throughput (routing load per packet received falls). It generally falls for DSR (increased throughput) too, though it is higher, which again indicates the sensitivity of DSR to changes in network conditions (offered load in this case).
5 Conclusion and Future Work

It is observed that though DSR claims to use least hop count as a metric, the routes found are actually based on the round-trip times of the RREQ and RREP packets. These packets are delayed by different amounts at different nodes, due to the instantaneous state of various parameters at those nodes, e.g. transmit buffer size. The back-off algorithm of the MAC layer also adds to this. Thus, the routes which a source learns via its route discovery are not based on any formal criterion. Furthermore, the source chooses a route among these on a purely random basis, which may not be the best route from itself to the destination. The chosen route also remains fixed either for the entire communication session or until the route breaks; a node gets no indication of a better route existing during a session. In SSR, by contrast, it is the RS that provides the best possible route based on the state of the whole network. A node also refreshes this route periodically, which enables it to be aware of the best available route between itself and the destination at most times. These reasons also eliminate the need to generate RERR packets. The observation is that the SSR model performs better than DSR (in terms of higher throughput and average latency) with increasing hop count. SSR seems to be more stable under changing network parameters, including link failures and traffic conditions. The routing load for SSR does not change very much with varying network conditions (the NRL decreases as throughput increases); whereas in DSR it fluctuates, depending on the dynamic state of the network. SSR also enables remote machines to initiate communication sessions with vehicles via the RS, which is not possible with DSR.
R. Patney, S.K. Baton, and N. Filer
The model needs to be simulated with other MAC protocols such as MACA [12]. The possibility of making a fresh routing decision at each (or some) intermediate node should be explored. Furthermore, a node should maintain the latency metric for each link rather than an average latency over all neighbour links. To improve the model's efficiency, a prediction algorithm is needed that can estimate the status of a wireless link in the near future. As suggested by the simulation studies, having the RSAPs form a backbone routing infrastructure may lead to much better performance. In such a scenario, data packets traverse wireless links only between the source/destination and the nearest RSAP.
References

1. Y. Lin and I. Chlamtac. Wireless and Mobile Network Architectures. John Wiley and Sons, 2001, Chapter 21.
2. LAN MAN Standards Committee of the IEEE Computer Society. Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. IEEE Std. 802.11, IEEE, June 1999.
3. J. P. Singh, N. Bambos, B. Srinivisan, and D. Clawain. Wireless LAN Performance under Varied Stress Conditions in Vehicular Traffic Scenarios. IEEE Vehicular Technology Conference, vol. 2, pp. 743–747, Vancouver, Canada, Fall 2002.
4. James A. Lowes. Ad-hoc Routing with the MSA Simulation Engine. Research Library, Department of Computer Science, University of Manchester, UK, 2003.
5. Thomas H. Cormen. Introduction to Algorithms. MIT Press, 2001.
6. David B. Johnson, David A. Maltz, and Yih-Chun Hu. The Dynamic Source Routing Protocol for Mobile Ad-hoc Networks. IETF Internet Draft, draft-ietf-manet-dsr-10.txt, July 2004.
7. Peter Ball. Introduction to Discrete Event Simulation. DYCOMANS Workshop on "Management and Control: Tools in Action", Algarve, Portugal, May 1996.
8. Stephen Q. Ye. A New Packet Oriented Approach to Simulating Wireless Ad-hoc Network Protocols in Java. Research Library, Department of Computer Science, University of Manchester, UK, 2002.
9. Andrew S. Tanenbaum. Computer Networks. Prentice-Hall, third edition, 2001.
10. N. Abramson. Development of the ALOHANET. IEEE Transactions on Information Theory, vol. IT-31, pp. 119–123, March 1985.
11. D. Bertsekas and R. Gallager. Data Networks. Prentice-Hall, second edition, 2000.
12. Phil Karn. MACA – A New Channel Access Protocol for Packet Radio. In Proceedings of the ARRL/CRRL Amateur Radio Ninth Computer Networking Conference, pp. 134–140, 1990.
Energy-Efficient Aggregate Query Evaluation in Sensor Networks

Zhuoyuan Tu and Weifa Liang
Department of Computer Science, The Australian National University, Canberra, ACT 0200, Australia
{zytu, wliang}@cs.anu.edu.au
Abstract. Sensor networks, consisting of sensor devices equipped with energy-limited batteries, have been widely used for surveillance and environmental monitoring. Data collected by the sensor devices needs to be extracted and aggregated for a wide variety of purposes. Due to the serious energy constraint imposed on such a network, it is a great challenge to perform aggregate queries efficiently. This paper considers aggregate query evaluation in a sensor network database with the objective of prolonging the network lifetime. We first propose an algorithm built on a node capability concept that balances the residual energy and the energy consumption at each node so that the network lifetime is prolonged. We then present an improved algorithm that reduces the total network energy consumption for a query by allowing group aggregation. We finally evaluate the performance of the two proposed algorithms against existing algorithms through simulations. The experimental results show that the proposed algorithms significantly outperform the existing algorithms in terms of network lifetime.
1 Introduction
Wireless sensor networks have attracted wide attention due to their ubiquitous surveillance applications. Recent advances in microelectronic technologies empower this new class of sensor devices to monitor information in a previously unobtainable fashion. Using these sensor devices, biologists are able to obtain the ambient conditions of endangered plants and animals every few seconds, and security guards can detect subtle temperature variations in storage warehouses almost instantly. To meet various monitoring requirements, data generated by the sensors in a sensor network needs to be extracted or aggregated. A sensor network can therefore be treated as a database, where the sensed data periodically generated by each sensor node is treated as a segment of a relational table. During each time interval, a sensor node produces a single message called a tuple (a row of the table). An attribute (a column of the table) is either information about the sensor node itself (e.g., its id or location) or the data detected by the node (e.g., the temperature at a specific location). There is a special node in the network called the base station, which is usually assumed to have a constant energy supply. The base station is used to issue users' queries and collect the aggregate results for the whole network.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 31–41, 2005. © Springer-Verlag Berlin Heidelberg 2005

In such a sensor
network, users simply specify the data they are interested in through SQL-like queries of the following form, and the base station broadcasts these queries over the entire network:

select   {attributes, aggregates}
from     sensors
where    condition-of-attributes
group by {attributes}
having   condition-of-aggregates
duration time interval
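As a small sketch of how such a query can be evaluated in-network (the running example later in this section is the average temperature per building, computed from SUM and COUNT partials), each tree node can merge per-group partial aggregates before forwarding them. The helper name and data layout here are illustrative, not from the paper:

```python
# Hypothetical helper: `local` and each element of `children` map a
# group key (e.g. a building id) to a (sum, count) partial aggregate.
def merge_partials(local, children):
    """Merge this node's partials with its children's partials.

    Messages for the same group collapse into one entry, which is
    exactly what reduces the number of messages forwarded up the tree.
    """
    merged = dict(local)
    for child in children:
        for group, (s, c) in child.items():
            ms, mc = merged.get(group, (0.0, 0))
            merged[group] = (ms + s, mc + c)
    return merged

# Example: average temperature per building.
node = {"bldg_A": (23.5, 1)}
kids = [{"bldg_A": (24.1, 1)}, {"bldg_B": (19.0, 1)}]
partials = merge_partials(node, kids)
averages = {g: s / c for g, (s, c) in partials.items()}
```

Only the base station finalizes the AVG from the (sum, count) pairs; intermediate nodes never divide, so partials from different subtrees remain mergeable.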
To respond to a user aggregate query, the network can proceed in either a centralized or an in-network manner. In centralized processing, all the messages generated by the sensor nodes are transmitted to the base station directly and extracted centrally. However, this is very expensive due to the tremendous energy consumed by message transmission. By virtue of the autonomous, full-fledged computing ability of sensor nodes, the messages collected by each sensor node can also be filtered or combined locally before being transmitted to the base station, which is called in-network aggregation. In other words, a tree rooted at the base station and spanning all the sensor nodes in the network is constructed for the data aggregation. Data collected by each node is aggregated before being transmitted to its parent, and ultimately the aggregate result is relayed to the base station. To implement data aggregation, each node in the routing tree is assigned to a group according to the distinct value of the list of group-by attributes in the SQL-like query. Messages from different nodes are merged into one message at an internal node if they belong to the same group [10]. For example, for the query "the average temperature in each building", each sensor node first generates its own message and collects the messages from its descendants in the tree, and then uses the SUM and COUNT functions of SQL to compute the average temperature for each group (each building) before forwarding the result to its parent. In the end, all the messages in the same building are merged into one message, so that the transmission energy consumption is dramatically reduced; the number of messages finally received by the base station equals the number of buildings.

Related work. Network lifetime is of paramount importance in sensor networks, because one node failure in the network can paralyse the entire network.
Network lifetime of a wireless sensor network can thus be defined as the time until the first node failure in the network [1]. To improve energy efficiency and prolong the network lifetime, several protocols for various problems have been proposed in both ad hoc networks and sensor networks [1, 2, 3, 4, 5, 6, 7, 11]. For example, in ad hoc networks, Chang and Tassiulas [1, 2] realized a group of unicast requests by discouraging the participation of low-energy nodes, and Kang and Poovendran [6] provided a globally optimal solution for broadcasting through a graph-theoretic approach. In sensor networks, Heinzelman et al. [3] initiated the study of data gathering by proposing a clustering protocol, LEACH, in which nodes are grouped into a
number of clusters. Within a cluster, a node is chosen as the cluster head, which gathers and aggregates the data of the other members and forwards the aggregated result to the base station directly. Lindsey and Raghavendra [7] provided an improved protocol, PEGASIS, using a chain concept, where all the nodes in the network form a chain and one node at a time is chosen as the chain head to report the aggregated result to the base station. Tan and Körpeoğlu [11] provided a protocol, PEDAP, for the data gathering problem, which constructs a minimum spanning tree (MST) rooted at the base station to limit the total energy consumption. Kalpakis et al. [5] considered a generic data gathering problem with the objective of maximizing the network lifetime, for which they proposed an integer program solution and a heuristic solution. This paper evaluates an aggregate query in a sensor network with the objective of prolonging the network lifetime. The prevailing way to do this is to apply in-network aggregation during query evaluation, as presented in [8], where information-directed routing is proposed to minimize the transmission energy consumption while maximizing data aggregation. Yao and Gehrke [12] generated efficient query execution plans with in-network aggregation, which can significantly reduce resource requirements. In addition, query semantics have been exploited for efficient data routing in [9] to save transmission energy, using a semantic routing tree (SRT) to exclude the nodes that a query does not apply to. Furthermore, group aggregation has been incorporated into the routing algorithm GaNC in [10], where sensor nodes in the same group are clustered along the same routing path with the goal of reducing the size of the transmitted data.
However, an obvious shortcoming of some of these routing protocols, such as MST and GaNC, is that a node is added to the tree without taking its residual energy into account during the construction of the routing tree. As a result, the nodes closer to the root of the routing tree exhaust their energy rapidly, because they serve as relay nodes and forward the messages of their descendants in the tree. Thus, the network lifetime is shortened.

Our contributions. To evaluate an aggregate query in a sensor network, we first propose an algorithm that introduces the node capability concept to balance the residual energy and the energy consumption of each node in order to prolong the network lifetime. We then present an improved algorithm that allows group aggregation to reduce the total energy consumption. We finally conduct experiments by simulation; the results show that the proposed algorithms outperform the existing ones. The rest of this paper is organized as follows. Section 2 defines the problem. Section 3 introduces the node capability concept and a heuristic algorithm. Section 4 presents an improved algorithm. Section 5 presents the experiments. Section 6 concludes.
2 Preliminaries
Assume that a sensor network consists of n homogeneous energy-constrained sensor nodes and an infinite-energy-supplied base station s deployed over an
area of interest. Each sensor periodically produces sensed data as it monitors its vicinity. Communication between two sensor nodes is done either directly (if they are within the transmission range of each other) or through relay nodes. The network can be modeled as a directed graph $M = (N, A)$, where $N$ is the set of nodes with $|N| = n+1$, and there is a directed edge $\langle u, v \rangle$ in $A$ if node $v$ is within the transmission range of $u$. The energy consumption for transmitting an $m$-bit message from $u$ to $v$ is modeled as $m d_{v,u}^{\alpha}$, where $d_{v,u}$ is the distance from $u$ to $v$ and $\alpha$ is a parameter that typically takes a value between 2 and 4, depending on the characteristics of the communication medium. Given an aggregate query issued at the base station, the problem is to evaluate the query against the sensor network database by constructing a spanning tree rooted at the base station such that the network lifetime is maximized. We refer to this problem as the lifetime-maximized routing tree problem (LmRTP for short).
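The transmission energy model above can be sketched directly; function and parameter names are my own, while the formula $m d^{\alpha}$ and the range of $\alpha$ come from the paper:

```python
def tx_energy(m_bits, distance, alpha=2.0):
    """Energy to transmit an m-bit message over `distance`,
    modeled as m * d**alpha with path-loss exponent alpha in [2, 4]."""
    return m_bits * distance ** alpha

# Doubling the distance quadruples the cost at alpha = 2
# and raises it 16-fold at alpha = 4.
```

This super-linear growth in distance is why a node's choice of parent dominates its energy budget in the tree-construction algorithms that follow.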
3 Algorithm LmNC
In this section we introduce the node capability concept and propose a heuristic algorithm, called Lifetime-maximized Network Configuration (LmNC), for LmRTP based on this concept.

Capability concept. Given a node $v$, let $p(v)$ be the parent of $v$ in a routing tree. The energy consumption for transmitting an $m$-bit message from $v$ to $p(v)$ is $E_c(v, p(v)) = m d_{v,p(v)}^{\alpha}$, where $d_{v,p(v)}$ is the distance between $v$ and $p(v)$. Let $E_r(v)$ be the residual energy of $v$ before evaluating the current query. Assuming that the length of the message sensed by every node is the same ($m$ bits), the capability of node $v$ to $p(v)$ is defined as

$$C(v, p(v)) = E_r(v)/E_c(v, p(v)) - 1 = E_r(v)/(m d_{v,p(v)}^{\alpha}) - 1. \qquad (1)$$
If $v$ has $k$ descendants in the routing tree, then the energy consumption at $v$ to forward all the messages (its own generated message and the messages collected from its descendants) to its parent $p(v)$ will be $(k + 1) m d_{v,p(v)}^{\alpha}$, given that there is no data aggregation at $v$. If, after this transmission, $v$ exhausts its residual energy, then $E_r(v) = (k + 1) m d_{v,p(v)}^{\alpha}$. From Equation (1), it is easy to derive that $k = E_r(v)/(m d_{v,p(v)}^{\alpha}) - 1 = C(v, p(v))$. So, if there is no aggregation at $v$, the capability of node $v$ to $p(v)$, $C(v, p(v))$, actually indicates the maximum number of descendants that $v$ can support with its current residual energy.

Algorithm description. Since a node with larger capability can have more descendants in the routing tree (if data aggregation is not allowed), it should be placed closer to the tree root to prolong the network lifetime. Based on this idea, we propose an algorithm LmNC, in which, at each step, the node with the maximum capability is added to the current tree. Nodes are thus added one by one until all nodes are included in the tree. The motivation behind this algorithm is that adding the node with the maximum capability innately balances the node's residual energy $E_r(v)$ against the actual energy consumption for transmitting a
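Equation (1) and the descendant bound can be sketched as a one-line function; names are illustrative:

```python
def capability(residual_energy, m_bits, dist_to_parent, alpha=2.0):
    """C(v, p(v)) = Er(v) / (m * d**alpha) - 1: without aggregation,
    the maximum number of descendants v can support, since a node
    with energy for exactly k+1 transmissions supports k descendants
    (Er = (k + 1) * m * d**alpha  implies  C = k)."""
    cost_per_message = m_bits * dist_to_parent ** alpha
    return residual_energy / cost_per_message - 1
```

For instance, a node with residual energy equal to five per-message costs has capability 4: it can relay for four descendants plus its own message before dying.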
Energy-Efficient Aggregate Query Evaluation in Sensor Networks
35
message to its parent, $E_c(v, p(v))$ (as in the definition of node capability), so that the network lifetime is prolonged. Specifically, we denote by $T$ the current tree and by $V_T$ the set of nodes included in $T$ so far. Initially, $T$ includes only the base station, i.e. $V_T = \{s\}$. Algorithm LmNC repeatedly picks a node $v$ ($v \in V - V_T$) with maximum capability to some $u$ ($u \in V_T$) and adds it to $T$ with $u$ as its parent. The algorithm continues until $V - V_T = \emptyset$. The detailed algorithm is given below.

Algorithm. Lifetime-maximized Network Configuration (G, Er)
/* G is the current sensor network and Er is an array of the residual energy of the nodes */
begin
1.  VT ← {s};       /* add the base station into the tree */
2.  Q ← V − VT;     /* the set of nodes not yet in the tree */
3.  while Q ≠ ∅ do
4.      Cmax ← 0;   /* the maximum capability found in this round */
5.      for each v ∈ Q and u ∈ VT do
6.          compute C(v, u);
7.          if Cmax < C(v, u)
8.          then Cmax ← C(v, u);
9.               added_node ← v;
10.              temp_parent ← u;
11.     p(added_node) ← temp_parent;  /* set the parent for the node with maximum capability */
12.     VT ← VT ∪ {added_node};       /* add the node into the tree */
13.     Q ← Q − {added_node};
end.
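The listing above can be sketched as runnable Python under an assumed data layout (`pos` maps node ids to coordinates, `er` maps ids to residual energies, node 0 is the base station; `m` and `alpha` as in the energy model). This is a sketch, not the authors' implementation:

```python
import math

def lmnc(pos, er, m=1.0, alpha=2.0, base=0):
    """Greedy routing tree: repeatedly attach the out-of-tree node with
    maximum capability to some in-tree node; return parent links."""
    in_tree = {base}
    parent = {base: None}
    pending = set(pos) - in_tree
    while pending:
        best = None  # (capability, child, prospective parent)
        for v in pending:
            for u in in_tree:
                d = math.dist(pos[v], pos[u])
                cap = er[v] / (m * d ** alpha) - 1
                if best is None or cap > best[0]:
                    best = (cap, v, u)
        _, v, u = best
        parent[v] = u          # attach the max-capability node
        in_tree.add(v)
        pending.discard(v)
    return parent

# Three collinear nodes: node 1 (close to the base) joins first with a
# high capability, and node 2 then prefers node 1 over the distant base.
pos = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
er = {1: 10.0, 2: 10.0}
tree = lmnc(pos, er)
```

One small deviation: the sketch tracks the best candidate with `None` rather than initializing `Cmax` to 0, so it also attaches a node whose best capability happens to be negative instead of looping forever.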
Note that, although several algorithms for LmRTP consider the residual energy of nodes during the construction of the routing tree (including [1]), they fail to consider the actual transmission energy consumption from a node to its parent. This can be illustrated by the following example. Assume that there is a partially built routing tree and a number of nodes to be added to it. Node $v_i$ has the maximum residual energy among the nodes outside the tree, but the distance between $v_i$ and its prospective parent is much longer than that between another node $v_j$ and its prospective parent. If node $v_i$ is added to the tree, it may die quickly during the further tree construction because of the enormous transmission energy consumed from $v_i$ to its parent. Therefore, although $v_i$ has more residual energy than $v_j$ at the moment, the maximum number of messages that $v_i$ can transmit to its parent is smaller than that of $v_j$; the lifetime of node $v_i$ is thus shorter than that of node $v_j$.
4 Improved Algorithm LmGaNC
Although algorithm LmNC yields a significant improvement in network lifetime for LmRTP, the total energy consumption of each query is hardly considered during the construction of the routing tree: the node with maximum capability may be far away from its parent, triggering excess transmission energy consumption. In this section we present an improved
algorithm called Lifetime-maximized Group-aware Network Configuration (LmGaNC) that allows group aggregation to reduce the total energy consumption.

Algorithm description. Since group aggregation is able to combine the messages of the same group into one message, incorporating the nodes of a group into a single routing path reduces the energy consumption and prolongs the network lifetime, because the messages drawn from these nodes contain fewer groups. With this idea, Sharaf et al. provided a heuristic algorithm (in [10]) to construct an energy-efficient routing tree. Incorporating this idea into algorithm LmNC, we propose the improved algorithm LmGaNC as follows.

Algorithm. Lifetime-maximized Group-aware Network Configuration (G, Er)
/* G is the current sensor network and Er is an array of the residual energy of the nodes */
begin
1.  VT ← {s};       /* add the base station into the tree */
2.  Q ← V − VT;     /* the set of nodes not yet in the tree */
3.  while Q ≠ ∅ do
4.      Cmax ← 0;   /* the maximum capability found in this round */
5.      for each v ∈ Q and u ∈ VT do
6.          compute C(v, u);
7.          if Cmax < C(v, u)
8.          then Cmax ← C(v, u);
9.               added_node ← v;
10.              temp_parent ← u;
11.     p(added_node) ← temp_parent;  /* set the parent for the node with maximum capability */
12.     dmin ← ∞;   /* minimum distance to a better parent */
13.     for each u′ ∈ VT and u′ ≠ temp_parent do
14.         if group_id(u′) = group_id(added_node) and d(added_node, u′) < dmin
15.         and d(added_node, u′) ≤ df ∗ d(added_node, temp_parent)
16.         then dmin ← d(added_node, u′); p(added_node) ← u′;
17.     VT ← VT ∪ {added_node};       /* add the node into the tree */
18.     Q ← Q − {added_node};
end.
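The better-parent step (lines 12–16) can be sketched on its own; names, coordinates and the example node ids below are illustrative, not taken from the paper:

```python
import math

def choose_parent(node, temp_parent, in_tree, pos, group, df=1.5):
    """Return the parent for `node`: the closest in-tree node sharing
    its group, provided it lies within df times the distance to the
    capability-chosen temp_parent; otherwise temp_parent itself."""
    d_orig = math.dist(pos[node], pos[temp_parent])
    best, best_d = temp_parent, float("inf")
    for u in in_tree:
        if u == temp_parent or group.get(u) != group.get(node):
            continue
        d = math.dist(pos[node], pos[u])
        if d < best_d and d <= df * d_orig:
            best, best_d = u, d
    return best

# The new node (id 8, Group 2) would attach to node 1, but node 2 is in
# its group and no farther than df times the original distance, so it
# switches to node 2.
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 8: (1, 1)}
groups = {0: 3, 1: 1, 2: 2, 8: 2}
parent = choose_parent(8, 1, {0, 1, 2}, pos, groups)
```

With a tighter distance factor (e.g. df = 0.5 here), no better parent qualifies and the node keeps its capability-chosen parent, which is exactly the safeguard the distance factor provides.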
Algorithm LmGaNC is similar to algorithm LmNC. The difference is that, during the construction of the routing tree, the child with the maximum capability chosen as in LmNC keeps checking, under LmGaNC, whether the current tree contains a node in the same group as itself; we call such a node a better parent. If so, the child switches to this better parent, and if there is more than one better parent to choose from, the closest one is chosen. Notice that choosing a better parent far away would cause extra transmission energy consumption. So, a distance factor (df) is employed, which is the upper bound on the distance between a child and its selected parent. For example, if df = 1.5, then we only consider a parent whose distance to the child is at most $df \cdot d_{v,u} = 1.5 d_{v,u}$, where $d_{v,u}$ is the distance between $v$ and its current parent $u$.
Assume that black nodes 2 and 7 belong to Group 1, shaded node 6 belongs to Group 2, and the rest belong to Group 3. The numbers of messages are as shown in the figure (depending on the number of distinct groups in each subtree). Under algorithm LmNC, shaded node 8 (in Group 2) has maximum capability to its parent, node 7 (see Fig. 1(b)). In order to forward one message originating from node 8, all the nodes on the path from node 8 to the root, except the root (node 1), have to consume extra energy for this transmission. The improved algorithm LmGaNC instead allows node 8 to switch to a better parent (node 6) in the same group, assuming no violation of the distance factor. As a result, none of the nodes except node 8 itself needs to forward an extra message for node 8, so energy is saved. Here, after applying group aggregation,
(Figure 1 shows three panels: (a) the current routing tree; (b) adding node 8 under algorithm LmNC; (c) adding node 8 under algorithm LmGaNC. Nodes are annotated with their message counts [msg].)
Fig. 1. Benefit of algorithm LmGaNC
a node's capability indicates the number of its descendants in different groups (excluding the group of the node itself) rather than the total number of its descendants, because the messages from descendants in the same group can be merged into a single message.

Effect of conditional data aggregation. The Group-By clause divides an aggregate query's result into a set of groups, and each sensor node is assigned to one group according to its value of the Group-By attributes. In practice, however, this clause may not be enough to answer a query like "what is the average temperature in each room at level 5". To match the query condition, we employ the where clause originating from SQL to further reduce the energy consumption achieved by algorithm LmGaNC. In the above case, the condition clause "WHERE Level no.=5" is imposed on the query. Before each sensor node transmits its sensed data to its parent, it checks whether the data matches the query condition. If not, the node transmits a single bit of notification information with value 0 instead of its original data, so that its parent will not keep waiting for data from that child. This is especially relevant under the aggregation schema of Cougar [12], where each node holds a waiting list of its children and does not transmit its data to its parent until it has heard from all the nodes on the waiting list. Since the size of the
data is shrunk to only 1 bit given a mismatch, the total transmission energy consumption is further reduced. However, even if a node matches the query condition, whether its residual energy can afford the message transmission is still questionable. One possible solution is for the node to check whether it has sufficient residual energy to complete the transmission; if not, it sends a single bit of notification information with value 1 instead of its original data to its parent, to indicate the insufficiency of its residual energy.
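The per-node transmit decision described above can be sketched as follows; the function name and the 1-bit encodings are illustrative assumptions, while the three cases (mismatch, low energy, full data) come from the text:

```python
MISMATCH, LOW_ENERGY = "0", "1"   # the two 1-bit notifications

def outgoing_message(reading, matches_predicate, residual_energy, tx_cost):
    """Decide what a node sends its parent for one sampling round."""
    if not matches_predicate(reading):
        return MISMATCH       # 1 bit: data does not satisfy the query
    if residual_energy < tx_cost:
        return LOW_ENERGY     # 1 bit: cannot afford the transmission
    return reading            # full sensed data

# Query "WHERE Level no.=5": only level-5 readings travel in full.
msg = outgoing_message({"level": 5, "temp": 21.0},
                       lambda r: r["level"] == 5,
                       residual_energy=50.0, tx_cost=10.0)
```

Either notification keeps the parent's Cougar-style waiting list moving: the parent hears from every child each round, but pays full transmission cost only for matching, affordable readings.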
5 Simulation Results
This section evaluates algorithms LmNC and LmGaNC against the existing algorithms MST (minimum spanning tree), SPT (shortest path tree) and GaNC. The experimental metrics adopted are the network lifetime and the total energy consumption, measured for different numbers of groups, various distance factors, and with and without the where condition clause. Network topologies are randomly generated using the NS-2 network simulator, with the nodes distributed in a 100 × 100 m² region and each sensor initially equipped with $10^5$ μJ of energy. For each aggregate query, we assign a group id to each node randomly and take the average of the experimental results over 30 distinct network topologies for each network size.

Performance analysis of the proposed algorithms. Before we proceed, we reproduce an existing heuristic algorithm, GaNC [10], for the concerned problem, which is used as the benchmark. Algorithm GaNC, with its group aggregation concept, is derived from a simple First-Heard-From (FHF) protocol, in which each node selects as its parent the first node it hears from after the query specification is broadcast over the network. The main difference between GaNC and FHF is that a child under GaNC can change to a better parent in the same group within the given distance factor. The simulation results in Figure 2(a) show that the network lifetimes delivered by algorithms LmNC and LmGaNC significantly outperform the ones delivered by MST, SPT and
(Figure 2 shows two plots for a group number of 10, comparing MST, SPT, LmNC, GaNC and LmGaNC as the number of nodes grows from 0 to 200: (a) average network lifetime; (b) total energy consumption in μJ.)
Fig. 2. Performance comparison among various algorithms
GaNC. Figure 2(b) shows that algorithm LmGaNC gracefully balances the total energy consumption of LmNC, with only a slight shortening of the network lifetime when the number of nodes in the network is less than 80.

Sensitivity to the number of groups. In comparison to algorithm GaNC, the average network lifetime under LmGaNC is more sensitive to the number of groups. Figure 3(a) indicates that both algorithms improve their lifetimes by approximately 50% when the number of groups is decreased from 10 to 5. The reasons are as follows. On one hand, fewer groups mean that more sensor nodes are in the same group, which increases the possibility of message suppression under group aggregation. On the other hand, fewer groups give a child node more chances to switch to a better parent in the same group, so less transmission energy is consumed.
(Figure 3 shows two plots of average network lifetime versus number of nodes (0–200): (a) GaNC and LmGaNC with 10 groups versus 5 groups; (b) GaNC and LmGaNC with distance factors 3 and 1.5, for 5 groups.)
Fig. 3. Performance comparison LmGaNC vs GaNC
Sensitivity to the distance factor. As discussed earlier, the distance factor is introduced to limit the maximum acceptable distance when a child node switches to a better parent in the same group; it avoids unnecessary energy dissipation resulting from such a switch. The smaller the distance factor, the less the energy dissipation, and therefore the longer the network endures. Figure 3(b) shows that when the distance factor is decreased from 3 to 1.5, algorithm LmGaNC responds immediately and the network lifetime is significantly prolonged, while GaNC reacts much more rigidly.

Sensitivity to the where condition clause. The experiments here aim to further reduce the energy consumption of evaluating an aggregate query by allowing nodes that mismatch the query condition to send a 1-bit notification to their parents instead of the sensed data. Figure 4(a) and (b) illustrate the effects of the where clause in an aggregate query on both the network lifetime and the total energy consumption. The experimental results show that, under LmGaNC, the network lifetime increases by more than 50% while the total energy consumption goes up by only around 25%.
(Figure 4 shows two plots for LmGaNC with and without the WHERE clause, versus number of nodes (0–200): (a) average network lifetime; (b) total energy consumption in μJ.)
Fig. 4. Performance of LmGaNC with the where condition clause
6 Conclusions
This paper considered aggregate query evaluation in a sensor network database by exploring the node capability concept. Based on this concept, we first proposed a heuristic algorithm to prolong the network lifetime, then presented an improved algorithm incorporating group aggregation to reduce the total energy consumption. We finally conducted experiments to evaluate the performance of the proposed algorithms against that of the existing ones. The experimental results showed that the proposed algorithms outperform the existing algorithms.

Acknowledgment. The work by the authors was supported by a research grant from the Faculty of Engineering and Information Technology at the Australian National University.
References

1. J.-H. Chang and L. Tassiulas. Energy conserving routing in wireless ad hoc networks. Proc. INFOCOM'00, IEEE, 2000.
2. J.-H. Chang and L. Tassiulas. Fast approximate algorithms for maximum lifetime routing in wireless ad hoc networks. IFIP-TC6/European Commission Int'l Conf., Lecture Notes in Computer Science, Vol. 1815, pp. 702–713, Springer, 2000.
3. W. R. Heinzelman, A. Chandrakasan and H. Balakrishnan. Energy-efficient communication protocol for wireless microsensor networks. Proc. 33rd Hawaii International Conference on System Sciences, IEEE, 2000.
4. C. Intanagonwiwat, D. Estrin, R. Govindan, and J. Heidemann. Impact of network density on data aggregation in wireless sensor networks. Proc. 22nd International Conference on Distributed Computing Systems (ICDCS'02), IEEE, 2002.
5. K. Kalpakis, K. Dasgupta and P. Namjoshi. Efficient algorithms for maximum lifetime data gathering and aggregation in wireless sensor networks. Computer Networks, Vol. 42, pp. 697–716, 2003.
6. I. Kang and R. Poovendran. Maximizing static network lifetime of wireless broadcast ad hoc networks. Proc. ICC'03, IEEE, 2003.
7. S. Lindsey and C. S. Raghavendra. PEGASIS: Power-efficient gathering in sensor information systems. Proc. Aerospace Conference, IEEE, pp. 1125–1130, 2002.
8. J. Liu, F. Zhao, and D. Petrovic. Information-directed routing in ad hoc sensor networks. Proc. 2nd International Conference on Wireless Sensor Networks and Applications, ACM, 2003.
9. S. Madden, M. Franklin, J. Hellerstein and W. Hong. The design of an acquisitional query processor for sensor networks. Proc. SIGMOD'03, ACM, 2003.
10. M. A. Sharaf, J. Beaver, A. Labrinidis and P. K. Chrysanthis. Balancing energy efficiency and quality of aggregate data in sensor networks. The VLDB Journal, Springer, 2004.
11. H. Ö. Tan and İ. Körpeoğlu. Power efficient data gathering and aggregation in wireless sensor networks. ACM SIGMOD Record, Vol. 32, No. 4, pp. 66–71, 2003.
12. Y. Yao and J. Gehrke. Query processing in sensor networks. Proc. 1st Biennial Conference on Innovative Data Systems Research (CIDR'03), 2003.
Data Sampling Control and Compression in Sensor Networks*

Jinbao Li 1,2 and Jianzhong Li 1,2
1 Harbin Institute of Technology, 150001, Harbin, China
2 Heilongjiang University, 150080, Harbin, China
[email protected], [email protected]
Abstract. Nodes in wireless sensor networks have very limited storage capacity, computing ability and battery power. Node failures and communication link disconnections occur frequently, so the network layer provides only weak services. Sensed data is inaccurate and often contains errors. Focusing on the inaccuracy of observation data and the power limitation of sensors, this paper proposes a sampling frequency control algorithm and a data compression algorithm. Based on features of the sensed data, these two algorithms are combined: first, the sampling frequency is adjusted dynamically; when the sampling frequency cannot be controlled, the data compression algorithm is adopted to reduce the amount of transmitted data and save the sensors' energy. Experiments and analysis show that the proposed sampling control algorithm and data compression algorithm can decrease the number of samples taken, reduce the amount of transmitted data and save the energy of sensors.
1 Introduction

Recent advances in digital electronics, microprocessors and wireless technologies have enabled the creation of small, cheap sensors with processing, memory and wireless communication abilities. This has accelerated the development of large-scale sensor networks. In sensor networks, various sensors with different functions are distributed over a given area to collect, monitor and process information. Sensor networks integrate sensor, computer, distributed information processing and communication techniques. Sensors are used to collect, obtain or monitor their surroundings and to process the information to get detailed and accurate information about the area covered by the network. For example, by obtaining geographical features of the enemy's jungle in a battlefield, such as hardness and humidity, a battle plan can be made. Sensors have many attractive features: they are small, cheap, flexible and movable and have wireless communication ability, so sensor networks can be used to obtain detailed and reliable information at any time and in any location, terrain or environment. In military affairs, sensor networks can be used to
Supported by the National Natural Science Foundation of China under Grant No.60473075 and No.60273082; the Natural Science Foundation of Heilongjiang Province of China under Grant No.ZJG03-05 and No.QC04C40.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 42 – 51, 2005. © Springer-Verlag Berlin Heidelberg 2005
monitor the actions of enemies and the existence of dangerous agents such as poison gas, radiation, explosives, etc. In environment monitoring, sensors can be placed on plains, deserts, mountain regions or seas to monitor and control changes in the environment. In traffic applications, sensors can be used to monitor and control the traffic on freeways or in crowded areas of cities. Sensors can also be used for security supervision of large shopping centers, carports and other facilities, and for supervising the occupancy of parking spaces. Sensors generally have limited processing, storage and communication abilities. They are connected with each other by a wireless network. Sensors can move, which makes the topology of the network change, and they communicate with each other in an ad hoc manner. Each sensor can act as a router and has the ability to search, localize and reconnect dynamically. A sensor network is a special kind of wireless ad hoc network, with features such as frequent movement, connection and disconnection, limited power, a large distributed area, a large number of nodes, and limited per-node resources [1]. The reliability of communication is weak and the power is limited in sensor networks. Each node may fail at any moment, so the network layer can provide only weak service. Each sensor has limited storage capacity, computing ability and battery power. There are errors in the measured values of sensors, so the observation data are not accurate. This paper focuses on data management and query processing [2,6,8,9,10,17] in sensor networks. Aimed at the inaccuracy of sensing data, a sampling frequency control algorithm and a data compression algorithm are proposed, which are suitable for approximate queries in sensor networks. By controlling the sampling of nodes, the sampling frequency control algorithm can reduce the sampling frequency and decrease power consumption.
The data compression algorithm makes good use of the limited storage capacity and computing ability, compressing the sampled data with a compression algorithm that requires little computation. Through these two algorithms, this paper reduces the sampling frequency and the amount of transmitted data; power is thus saved. The paper is organized as follows. Section 1 introduces sensor networks. Section 2 reviews related work. Section 3 proposes the sampling frequency control algorithm and the data compression algorithm. Experiments and analysis are given in Section 4. The last section concludes.
2 Related Works

Energy saving is an important optimization target in sensor networks. Each sensor is powered by a battery, which cannot be replaced when it is exhausted because of the deserted or dangerous environment in which the sensor lies. So the limited power of sensors should be used efficiently to prolong the lifetime of the sensor network. S. Madden and Y. Yao proposed a clustering method to reduce communication cost, which saves a lot of energy [3,4,6,7,8,17]. This method first constructs an aggregation tree. Before a non-leaf node transmits data to its parent, it aggregates the data from its subtree and transmits the results to its parent. It not only reduces the communication cost by aggregation, but also gets more accurate query results. In TAG [3], the base station is used as the root node of the sensor network. When an aggregation operation is received, a query tree is constructed. Each node in the tree combines
data received from its subtree with its own data and transmits the result to its parent. In sensor networks, node failures occur frequently. Communication links also fail frequently because of the environment, packet collisions, low signal-to-noise ratio, etc. If a node fails and its message cannot be transmitted to its parent, the aggregate from its subtree will be lost. If the failed node is near the root node, the failure will affect the aggregation dramatically. Fault tolerance of query processing on stream data from sensors was investigated by Olston [13]. Restricted by a given accuracy, it processes continuous queries on a central data stream processor. The data stream processor sets a filter on each remote data source; thus, the fault-tolerance budget is dynamically distributed to the data sources. For each data source, data should be transmitted only when the current value (compared with the last value) is beyond the threshold of its filter. Centralized processing is adopted in this method, so it cannot be used in sensor networks directly. Mohamed A. Sharaf et al. investigate how to implement approximate query processing by setting filters on sensors while query accuracy is satisfied [11][12]. Based on Olston's work, A. Deligiannakis et al. studied the problem of distributing a given fault-tolerance threshold in sensor networks [12]. They extend the idea of using filters to reduce the transmission cost to sensor networks. By increasing the fault-tolerance threshold and using a residual scheme, energy of sensors and bandwidth of the network are saved considerably at the cost of accuracy, and the lifetime of the network is extended. R. Cheng researched probabilistic queries on inaccurate data [14]. He gave the definition, classification, query processing algorithms and query evaluation methods for inaccurate data. Lazaridis introduced a data transmission method for sensor networks which can reduce transmission cost [15].
This method first applies piecewise-constant compression to the raw data on each sensor. Only if the compressed data exceed the fault-tolerance threshold are they transmitted to the base station. Considine et al. implement approximate in-network aggregation in sensor network databases through small sketches [16]. They extend copy-sensitive sketches, which are suitable for aggregation queries, and use them to produce accurate results with lower communication cost and lower computing cost.
3 Sampling Frequency Control and Data Compression

Sending and receiving data are the most energy-consuming operations. In Berkeley's Mica motes system, the energy cost of transmitting one bit over the wireless network equals the energy cost of executing 1000 CPU instructions [5]. This observation shows that to reduce the energy cost of data transmission, we should try to avoid communication between nodes. Besides sending and receiving data, sampling is another energy-consuming operation [5]. If the sampling accuracy requirement is satisfied and we can control sampling to reduce the sampling frequency, not only is the energy used to sample data saved, but the amount of transmitted data is also reduced. Affected by node failures, wireless communication uncertainty and power restrictions, the acquisition, processing and transmission of sensing data often have errors; sensing data are uncertain to some extent [14][15]. At the same time, user queries often do not need accurate results. A query on sensing data is a kind of query on
uncertain data. Some errors are allowable in query results. For example: count, every 3 seconds, the number of rooms whose temperature is above 30°C, with the error bounded below 10%. In applications, observation data from sensors are often continuous, such as temperature and humidity. Data from one sensor often lie in a stable range during a given period. For sensors whose measured data change continuously, the measured data may fluctuate slightly in some successive periods and may fluctuate obviously but linearly in some other periods. Figure 1 gives an example.

Fig. 1. Time series of sensing data

Though data in the intervals [t1,t2], [t3,t4], [t4,t5] and [t6,t7] all differ, they fluctuate only slightly within each interval. Data in the intervals [t2,t3] and [t5,t6] fluctuate obviously, but they change linearly.

3.1 Sampling Frequency Control

For sensors whose observation data change continuously, we can use the n latest measured values to predict the time interval Δt during which the measured data stay within a given error bound. If Δt is larger than the sampling cycle, we can take the next sample at time t_{N+1} = Δt + t_N. This prediction method controls the sampling frequency and reduces the energy cost of sampling. A linear regression model is used in this paper to predict the steady time interval Δt during which the measured data stay within the given error bound. Suppose the prediction of sensing data conforms to a unitary linear regression model; the prediction function can be represented as formula (3.1):

v = a + bt    (3.1)
where v is the value of the sensing data, t is the sampling moment, and a and b are regression coefficients. Up to the current time N, the time series of sensing data is supposed to be ⟨(v_1, t_1), (v_2, t_2), …, (v_N, t_N)⟩. A steady interval Δt > 0 is predicted from the regression model; if Δt > T (the sampling period), the next sampling moment is set to t_{N+1} = t_N + Δt. If the predicted value Δt ≤ T, or the linear regression model of the first step does not exist, the general sampling frequency is adopted. Algorithm 1 controls sampling frequency and data transmission by prediction; it is executed on each sensor. Within a time interval, if the sample data are linearly correlated, the prediction and sampling-frequency-control method is used to control the sampling rate. Otherwise, the method proposed in Section 3.2 is used to compress the sample data. This compression method requires little computation. These two methods reduce the sampling frequency and the amount of transmitted data, so battery power is dramatically saved.
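A minimal sketch of the prediction step described above (illustrative only: the window-size selection, the F-test and formula (3.5) of Algorithm 1 are omitted, and `next_sample_time` is a hypothetical helper name). It fits v = a + b·t on a recent window of samples and, from the slope, estimates how long the value stays within the error bound δ:

```python
def next_sample_time(times, values, T, delta):
    """Predict t_{N+1}: fit v = a + b*t on the recent window by least squares
    and estimate how long the sensed value stays within the error bound delta."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    den = sum((t - mt) ** 2 for t in times)
    b = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / den  # slope
    if abs(b) < 1e-12:
        dt = float("inf")          # flat trend: value stays within the bound
    else:
        dt = delta / abs(b)        # time for the trend to drift by delta
    return times[-1] + max(T, dt)  # never sample more often than period T
```

With a steady slope of 1 unit per time step and δ = 5, the node can skip ahead to t_N + 5 instead of sampling at t_N + T; when the predicted interval is shorter than T, the normal sampling period is used.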
3.2 Data Compression Algorithm

Sensors produce a large amount of continuous sensing data but have very limited storage capacity. As a result, not all sensing data can be stored in sensors; the storage of a sensor can cache or store only part of the data. To make full use of the storage capacity of sensors, compression can be used on the sensing data. Thus, more data can be cached in sensor storage and used for query processing and data prediction. The observation region of the sensor network, say D, is divided into data areas based on a given error ε. That is, 2ε is the length of each data area, and D is divided into D/(2ε) areas. Suppose the given error is 5 and the region D is [20,99]; the ranges of the data areas are then [20,29], [30,39], …, [90,99]. After the data areas are determined, we assign the sample data of each sampling cycle to the corresponding data area. For example, the data from cycle 1 to cycle 8 all belong to data area [20,30], so the cycle range [1,8] is stored instead of 8 sample data. The data from cycle 9 and cycle 11 belong to data area [30,40], while cycle 10 belongs to data area [50,60]. Cycles [9,9], [10,10] and [11,11] have to be assigned to their corresponding data areas separately as
Algorithm 1. Precision-Based Sampling and Transmission Algorithm (PBSA)
Input: sampling period T, tolerance error δ, …, Wmin, MPC; time series S = ⟨…⟩ with tolerance εi
(6) n = … · Wmin
(7) Reconstruct the linear regression equations and get the values of a and b;
(8) if F = QR / (Qe/(n − 2)) > Fα(1, n − 2)
(9)   n = … · Wmin
(10) else
(11)   n = Wmin
(12) end
(13) Calculate tN+1 from formula (3.5)
(14) tN+1 = max(tN + T, tN+1)
(15) else
(16)   tN+1 = tN + T
(17) end
(18) if abs(vN+1 − vN) > δ
(19)   SendToParent(vN+1, tN+1)
(20) end

Table 1. Example of the DCA algorithm
Table 1(a)–(d): panels of data areas (e.g., [20,30], [40,50], [60,70]) paired with their circular queues of cycle ranges (e.g., [1,8], [9,9], [10,10], [11,11], [1,3], [4,4], [5,5], [6,6], [7,10], [11,12]).
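The bucketing idea behind the DCA table above can be sketched as follows (a hypothetical encoder; the circular-queue bookkeeping of the real algorithm is reduced here to run-length pairs, and the function name is illustrative):

```python
def dca_compress(samples, lo, eps):
    """Map each cycle's value to a data area of width 2*eps starting at `lo`,
    and store (area_index, (first_cycle, last_cycle)) runs instead of raw values."""
    width = 2 * eps
    runs = []
    for cycle, v in enumerate(samples, start=1):
        area = int((v - lo) // width)
        if runs and runs[-1][0] == area:
            a, (start, _) = runs[-1]
            runs[-1] = (a, (start, cycle))       # extend the current cycle range
        else:
            runs.append((area, (cycle, cycle)))  # open a new cycle range
    return runs
```

With eps = 5 and lo = 20, eight consecutive cycles falling in [20,30) collapse into the single pair (0, (1, 8)), matching the [1,8] entry of Table 1.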
shown in Table 1. Following this rule, when sending data to the sink, only the cycle ranges and their corresponding data areas are sent. Part of the raw data, instead of the whole data, is sent to the sink; data are compressed in this manner. To implement the above compression method, a buffer should be set on each node, which stores the data shown in Table 1(a). The cycles corresponding to each data area are maintained as a circular queue. The cycle of the current sample data is inserted into the queue corresponding to the data area it belongs to. If the sequential number of the data area corresponding to the current cycle is the …

Algorithm 2. Data Compression Algorithm (DCA); executed on each sensor
Input: sampling period T, tolerance error ε, sample series ⟨(v, t)⟩
Output: compressed sample series
(1) Divide the observation region D of the sensor network into D/(2ε) data areas; construct a table T to store these data areas;
(2) Obtain the current observation data (v, t);
(3) if |v − v′| …

Worst and Best Information Exposure Paths in Wireless Sensor Networks

B. Wang et al.

… where α > 0 is the decay exponent. The above sensing model has been used in [7] for determining path exposure. The measurement of the target signal amplitude, x_k, at a sensor k may also be corrupted by an additive noise, n_k. Thus,
x_k = θ/d_k^α + n_k,    k = 1, 2, …, K.    (1)
B. Wang et al.
The above measurement model has also been used in [8] to detect the presence of a target. Unlike the use of value fusion for target detection in [8], we use the estimate of the target parameter θ based on the corrupted measurements. Let θ̂ and θ̃ = θ̂ − θ denote the estimate and the estimation error, respectively. A commonly used performance criterion is to minimize the mean squared error (MSE) of an estimator, i.e., to minimize E[θ̃²]. The measurement given by (1) can be written in matrix form for K sensors as

X = Dθ + N,    (2)

where X = (x_1, x_2, …, x_K)^T, D = (d_1^{−α}, d_2^{−α}, …, d_K^{−α})^T, and N = (n_1, n_2, …, n_K)^T. The additive noises are assumed to be spatially uncorrelated white noise with zero mean and variance σ_k², but otherwise unknown. The covariance matrix of the noises {n_k : k = 1, 2, …, K} is given by

R = E[NN^T] = diag[σ_1², σ_2², …, σ_K²].    (3)
A well-known best linear unbiased estimator (BLUE) [9] can be applied to estimate θ̂_K and to achieve a minimum MSE. According to BLUE, when K measurements are available, the estimate θ̂_K of the original signal θ is given as

θ̂_K = (D^T R^{−1} D)^{−1} D^T R^{−1} X.    (4)

The MSE of BLUE is given as

E[(θ − θ̂_K)²] = (D^T R^{−1} D)^{−1},    (5)
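For the diagonal noise covariance of (3), the matrix expressions (4) and (5) reduce to scalar sums, since θ is a scalar. A small numerical sketch (the function name and test values are illustrative, not from the paper):

```python
def blue(x, d, sigma2, alpha=1.0):
    """BLUE of theta from x_k = theta/d_k^alpha + n_k with var(n_k) = sigma2[k].

    Returns (theta_hat, mse), where mse = (D^T R^{-1} D)^{-1} is scalar here."""
    Dk = [dist ** (-alpha) for dist in d]
    # (D^T R^{-1} D)^{-1}: sum of squared decayed gains weighted by 1/sigma_k^2
    mse = 1.0 / sum(Dk[k] ** 2 / sigma2[k] for k in range(len(d)))
    # (D^T R^{-1} X) weighted sum of measurements
    theta_hat = mse * sum(Dk[k] * x[k] / sigma2[k] for k in range(len(d)))
    return theta_hat, mse
```

Noiseless measurements are recovered exactly, and the MSE shrinks as sensors are added or move closer, which is what makes cooperative estimation attractive.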
and the estimation error θ̃ is given as

θ̃_K = θ̂_K − θ = (D^T R^{−1} D)^{−1} D^T R^{−1} N.    (6)

2.2 Information Exposure
In general, the estimate of a parameter at a point where a target is present differs from the estimate at the same point without the target, because the signal energy of the target is normally larger than the background noise energy. For example, the seismic vibrations caused by a tank are greater than the background noise. If the estimation error is small, not only can the target be claimed to be detected, but the target parameter can also be obtained within some confidence level. Note that the estimation error θ̃_K given by (6) is a random variable with zero mean (due to the zero-mean uncorrelated noises) and variance denoted σ̃_K². This motivates us to define the information exposure of a point as follows.

Definition 1. K sensors cooperate to estimate a parameter at a point p that has distances d_k, k = 1, 2, …, K to these sensors. The information exposure of this point is defined as the probability that the absolute value of the estimation error is less than some threshold A, i.e.,

I(p, K) = Pr[|θ̃_K| ≤ A].    (7)
Worst and Best Information Exposure Paths in Wireless Sensor Networks
Fig. 1. Illustration of path exposure. Each point is monitored by three sensors.
The information exposure of a point depends on the number of sensors doing the estimation as well as their distances to the point. According to the definition, the larger I(p, K) is, the lower the probability that the estimation error θ̃_K deviates by more than A, and hence the better point p is monitored. When the target is at location p(t) at time t, we use I(p(t), K) to denote the information exposure of this point. Suppose a target is moving in the sensor field from point p(t1) to point p(t2) along the path (or curve) P(t1, t2). We now define the information exposure of the path along which the target moves as follows.

Definition 2. Suppose that every point of a path P(t1, t2) is monitored by K sensors. The information exposure for the path P(t1, t2) along which a target moves during the time interval [t1, t2] is defined as:

Φ(P(t1, t2), K) = (1/||P(t1, t2)||) ∫_{t1}^{t2} I(p(t), K) |dP(t1, t2)/dt| dt,    (8)

where ||P(t1, t2)|| is the Euclidean length of the path (i.e., the number of points on the path), I(p(t), K) is the information exposure of point p(t), and |dP(t1, t2)/dt| is the element of curve length.

From the above definition, the path information exposure is an average over the information exposures of all path points. Hence it can be considered a measure of the average target-estimation ability of the path. Fig. 1 illustrates the path information exposure. The path consists of three points, each monitored by three sensors, and hence the information exposure for the path is (1/3) Σ_{i=1}^{3} I(p_i, 3). We compare the path information exposure to the path exposure defined in [7] and in [8]. The definition in [7] considers only simple processing of the measurements, viz., summing up the decayed parameters; it does not consider the effect of noise in the measurements. The definition in [8] applies value fusion to determine the detection probability of a point. In contrast, our definition includes advanced processing of noise-corrupted measurements, i.e., the BLUE estimator, and provides the estimation of a target parameter as well as the detectability of a target.
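A discrete version of (8) can be sketched by averaging segment-midpoint exposures weighted by segment length along a polyline (an illustrative helper; `I` is any point-exposure function, and the midpoint rule mirrors the simplification used later in the numerical examples):

```python
import math

def path_exposure(vertices, I):
    """Length-weighted average of point exposures along a polyline of 2-D
    vertices: a discrete stand-in for the integral in Eq. (8)."""
    acc, total = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)          # segment length
        mid = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)    # midpoint represents the segment
        acc += I(mid) * seg
        total += seg
    return acc / total
```

For a constant point exposure the path exposure equals that constant, as expected of an average.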
3 Best and Worst Information Exposure Paths
Assume a rectangular sensor field in which each point is monitored by exactly K sensors. Let P denote a path from a point located at the west side of the sensor field to a point located at the east side of the sensor field, and let P denote
Fig. 2. Illustration of the difference between the mean weighted path problem and the shortest path problem. The path ⟨a, c, e⟩ is the shortest path from a to e, while the minimum mean weighted path is ⟨a, b, d, f, e⟩.
the set of such paths. The best (worst) information exposure path P_B (P_W) is defined to be the path with the largest (smallest) information exposure among all paths in P traversing from west to east in the sensing field. In general, the best (worst) information exposure path can be fairly arbitrary in shape, and an optimal solution may be very hard, if not impossible, to obtain. Instead, we propose an algorithm to solve the problem approximately. The basic idea is to use a grid approach to transform the problem from a continuous domain to a discrete domain, and to restrict the search for the best (worst) information exposure path to the grid. The formation of a grid can be rather arbitrary, and higher-order grids [7] can also be used. However, regular polygons are suggested to form a regular grid to simplify the computation of the average path information exposures. In general, the smaller the side of a regular polygon, the closer the approximation to an optimal solution; however, the computation becomes more complicated and takes longer. The construction of the grid need not include the specified start and end points as vertices of polygons; if the start and end points are not on the grid, we can use their nearest polygon vertices to approximate them. Each side of a regular polygon is assigned a weight that represents the information exposure of this side. The target is restricted to move only along the sides of the polygons. Let graph G(V, E) denote the grid, where V is the set of vertices and E the set of edges, and let w(v_i, v_j) denote the weight of the edge connecting vertices v_i and v_j. The problem of finding the best (worst) information path is then converted to finding the maximum (minimum) mean weighted path connecting the start point p_s and the end point p_e in the graph G(V, E), i.e., to find a path P = ⟨p_s = v_1, v_2, …, v_j = p_e⟩ with the largest (smallest) mean weight w̄(P) = (1/j) Σ_{i=1}^{j} w(v_i, v_{i+1}).

We note that the best (worst) information exposure path may not be unique in the graph. The above maximum (minimum) mean weighted path (abbreviated as max/min-MWP hereafter) problem differs from the classical single-source shortest path problem in that the latter finds a path with the smallest total path weight Σ_{i=1}^{j} w(v_i, v_{i+1}). Fig. 2 presents an example to illustrate that the min-MWP and the shortest path may not be the same in a graph. However, we note that the classical Dijkstra algorithm (see e.g., [10], page 595) for the shortest path problem can also be applied to the mean weighted path problem with some modifications. The modifications include introducing variables l[v_i] to
Table 1. Procedure for Finding the Worst Information Exposure Path

Notations:
S  the set of vertices whose min-MWP from the source has already been determined.
Q  the remaining vertices.
d  the array of best estimates of the min-MWP to each vertex.
l  the array of the path lengths from the source to each vertex.
p  the array of predecessors for each vertex.

Find_Min_MWP
(1) generate a suitable regular grid;
(2) initiate the graph G(V, E, vs, ve);
(3) compute information exposure for all edges;
(4) initiate l[v] = 0 for all vertices;
(5) initiate d[v] ← ∞ and p[v] ← nil for all vertices;
(6) initiate d[vs] = 0 for the start vertex;
(7) initiate S ← ∅ and Q = V;
(8) while ve ∉ S
(9)   u ← ExtractMin(Q)
(10)  S = S ∪ {u}
(11)  l[u] = l[u] + 1
(12)  for each vertex v in Adjacent(u)
(13)    z = ((l[u] − 1)/l[u]) · d[u] + (1/l[u]) · w(u, v)
(14)    if d[v] > z
(15)      d[v] = z
(16)      p[v] = u
(17)      l[v] = l[u]
(18)    endif
(19)  endfor
(20) endwhile
(21) min_exposure = d[ve]
denote the path length (i.e., the number of edges) from the start vertex to the vertex vi and modifying the computation of the RELAX algorithm ([10], page 586) to set predecessor vertex weight as the information exposure from the start vertex to the vertex in consideration. That is, the modified Dijkstra algorithm uses the average edge weight instead of the total edge weight to compute the vertex weight for predecessors. The procedure for finding the worst information exposure path is given in Table 1. Lines (1) to (7) are the initialization part of the algorithm. The edge weight can be computed by integrating the point information exposure of all the points of the edge or by simply using the point information exposure of the median point of the edge. Lines (5) to (7) are the same initialization part as in Dijkstra’s algorithm. Since we only need to find a path connecting the start vertex to the end vertex, the condition for the while loop is changed accordingly. Line (11) is added to update the counter of the number of edges from the start vertex to the current vertex. Lines (12) to (19) are the modified RELAX algorithm. The RELAX algorithm ([10], page 586) computes z = d[u] + w(u, v) as the total weight from the start vertex to the vertex in consideration; while line (13) computes the mean weight. Since the grid is regular and all edges have equal lengths, l is as simple as a counter. Furthermore, line (17) is added to update the
counter for the vertex in consideration as its predecessor's counter. Finally, the vertices in the predecessor chain p[ve] and the edges connecting these vertices provide the min-MWP path. The procedure in Table 1 can also be applied to compute the max-MWP with the following modifications: line (5) is modified to initialize d[v] = −∞; line (9) is changed to 'u ← ExtractMax(Q)'; line (14) is changed to 'if d[v] < z'; and finally line (21) is changed to 'max_exposure = d[ve]'.
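A runnable sketch of the modified Dijkstra procedure of Table 1 (the adjacency-dict interface and function name are illustrative; note that mean weight lacks the optimal-substructure property of total weight, so, like the paper's procedure, this is an approximation rather than a guaranteed optimum):

```python
import heapq

def min_mwp(adj, vs, ve):
    """Min mean-weighted path on adj: {u: [(v, w), ...]} from vs to ve.
    Returns (mean_weight, predecessor_map), following Table 1."""
    d = {v: float("inf") for v in adj}   # best mean-weight estimates
    l = {v: 0 for v in adj}              # path lengths (number of edges)
    p = {v: None for v in adj}           # predecessors
    d[vs] = 0.0
    done = set()                          # the set S of finalized vertices
    pq = [(0.0, vs)]                      # heap plays the role of ExtractMin(Q)
    while ve not in done:
        du, u = heapq.heappop(pq)
        if u in done or du > d[u]:
            continue                      # skip stale queue entries
        done.add(u)
        l[u] += 1                         # line (11): edge count toward successors
        for v, w in adj[u]:
            if v in done:
                continue
            z = (l[u] - 1) / l[u] * d[u] + w / l[u]   # line (13): new mean weight
            if d[v] > z:                              # line (14): relax
                d[v], p[v], l[v] = z, u, l[u]
                heapq.heappush(pq, (z, v))
    return d[ve], p
```

On a triangle with direct edge weight 3 and a two-hop route of weights 1 and 1, the procedure prefers the two-hop route (mean 1.0), illustrating how min-MWP differs from the shortest path.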
4 Numerical Examples
This section presents some numerical examples. For simplicity, we consider a special case where all noises are Gaussian. Since all noises are Gaussian and independent, the sum of these Gaussian noises is still Gaussian with zero mean and variance σ̃_K² = Σ_{k=1}^{K} a_k² σ_k², where a_k = B_K/(d_k^α σ_k²) and B_K = (Σ_{k=1}^{K} 1/(d_k^{2α} σ_k²))^{−1}.
We further assume that all noises have the same variance, i.e., σ_k² = σ² for all k = 1, 2, …. Via some algebra, the information exposure for a point is given by

I(p, K) = ∫_{−A}^{A} (1/(√(2π) σ̃_K)) exp(−θ̃_K²/(2σ̃_K²)) dθ̃_K = 1 − 2Q(A/σ̃_K),

where Q(x) is defined as Q(x) = (1/√(2π)) ∫_x^∞ exp(−t²/2) dt, σ̃_K = σ√(C_K), and C_K = (Σ_{i=1}^{K} 1/d_i^{2α})^{−1}. For simplicity, A can be set as βσ, β > 0; in this paper, we set β = 0.5.

Consider a 10 × 10 grid sensor field with 10 sensors at locations marked by red disks, as shown in Figs. 3 and 4. The side length of a square is set as a unit. The start and end points, marked by blue circles, are located at the left and right sides of the grid, respectively. For simplicity, the information exposure of the middle point of each segment is used as the path information
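With A = βσ, the ratio A/σ̃_K = β/√(C_K) is independent of σ, and 2Q(x) = erfc(x/√2), so the point exposure used in these examples can be computed as follows (a sketch; the function name is illustrative):

```python
from math import erfc, sqrt

def point_exposure(dists, alpha=1.0, beta=0.5):
    """I(p, K) = 1 - 2Q(A / sigma_tilde) with A = beta*sigma and
    sigma_tilde = sigma*sqrt(C_K); sigma cancels out of the ratio."""
    CK = 1.0 / sum(d ** (-2 * alpha) for d in dists)   # C_K from the distances
    ratio = beta / sqrt(CK)                            # A / sigma_tilde
    return 1.0 - erfc(ratio / sqrt(2))                 # 1 - 2Q(ratio)
```

Exposure increases as more sensors monitor the point, or as sensors get closer, since either change shrinks C_K and hence the estimation error variance.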
Fig. 3. Worst and best information exposure paths when K = 1. The red line is the Min-MWP and blue dashed line is the Max-MWP.
Fig. 4. Worst and best information exposure paths when K = 10. The red line is the Min-MWP and blue dashed line is the Max-MWP.
Fig. 5. Path information exposure as a function of the number of sensors for estimation of a same point
of the whole segment. The sensors selected to monitor a point are the ones closest to the monitored point, which is the most efficient selection as shown in [11]. Furthermore, we set α = 1.0 in the computations. Figs. 3 and 4 show the computed worst and best information exposure paths when using 1 and 10 sensors to monitor each segment, respectively. The worst and best information exposures for K = 1 are 0.17 and 0.52 respectively, and for K = 10 are 0.31 and 0.58 respectively. In general, to increase the path information exposure, one can use more sensors to monitor a point; or deploy more sensors in the field; or do both. Fig. 5 plots the path information exposures for min-MWP and max-MWP when using different numbers of sensors to monitor a point. The sensor field and the start and end points are the same as in Fig. 3. As expected, the path information exposure increases when using more sensors to monitor a point. This is because the estimation accuracy of a point can be improved by using more sensors for esti-
Fig. 6. Path information exposure as a function of the number of additional sensors added. K = 1.
mation. However, we note that there might not exist a monotonically increasing relationship between the information exposure of the min-MWP and K. The information exposure of the min-MWP improved by (0.31 − 0.17)/0.17 = 82% when using K = 10 sensors compared with K = 1 sensor. Next we evaluate the impact on information exposure of randomly adding more sensors to the field. The same sensor deployment as in Fig. 3 is used, and 1 to 10 additional sensors are randomly added to the field. However, each point is monitored by only one sensor, i.e., K = 1. Fig. 6 plots the information exposure against the number of additional sensors. Note that a value of 1 on the x-axis indicates a total of 11 sensors in the field, 10 of which are deterministically distributed as in Fig. 3 and 1 randomly placed. The information exposure is averaged over 20 simulation runs. It is observed that the improvement of the information exposure of the min-MWP is smaller: it improved by (0.21 − 0.18)/0.18 = 17% when randomly adding 10 sensors compared with adding 1 sensor. This observation suggests that using more sensors to collaboratively monitor a point increases information exposure more than having only one sensor do the monitoring, despite a larger total number of sensors in the field. Motivated by this observation, we now propose a heuristic to adaptively deploy sensors so as to increase the information exposure of the min-MWP. Let add(K) denote increasing by one the number of sensors monitoring a point, and add(Nr) denote deploying a new sensor in the field. To increase information exposure, a new sensor should be deployed as close as possible to the min-MWP. In this paper, a new sensor is added to the center of a square that has at least one side on the min-MWP and whose total information exposure over its four sides is the smallest. For example, if we add a new sensor to Fig.
3, it is added to the center of the square located at the first column and the fourth row (from bottom to top). Assume that initially the field is randomly deployed
Fig. 7. Information exposure of the min-MWP using the heuristic method

Table 2. Pseudo-code for the heuristic sensor deployment
A Heuristic Deployment Method
(1) set K = 1 and Φ(P_0, 0) = 0
(2) compute Φ(P_1, 1)
(3) while Φ < Φ_target
(4)   if Φ(P_K, K) − Φ(P_{K−1}, K − 1) < ΔΦ, add(Nr)
(5)   else add(K); endif
(6) compute new Φ
(7) endwhile
with Nr sensors. The pseudo-code for the heuristic is given in Table 2, where Φ_target and ΔΦ are two predefined thresholds; ΔΦ relates to the relative cost of using add(K) versus add(Nr). We again use the sensor deployment of Fig. 3 to evaluate the proposed heuristic, and set ΔΦ = 0.03. Fig. 7 shows the information exposure of the min-MWP computed by the heuristic. It is observed that the information exposure increases faster than that shown in Fig. 5 (Fig. 6).
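The control flow of Table 2 can be sketched as a driver over three callbacks (all hypothetical names: `exposure()` recomputes Φ of the current min-MWP, `add_k()` raises K by one, and `add_nr()` drops a new sensor near the min-MWP):

```python
def heuristic_deploy(exposure, add_k, add_nr, phi_target, d_phi):
    """Table 2: keep raising K while each increment still buys at least d_phi
    of exposure; otherwise deploy an extra sensor near the min-MWP."""
    prev_phi = 0.0           # Phi(P_0, 0) = 0
    phi = exposure()         # Phi(P_1, 1), with K = 1
    while phi < phi_target:
        if phi - prev_phi < d_phi:
            add_nr()         # diminishing returns: add a sensor to the field
        else:
            add_k()          # still profitable: monitor with one more sensor
        prev_phi, phi = phi, exposure()
    return phi
```

The choice of d_phi encodes the relative cost of the two actions: a large d_phi makes the heuristic fall back to field deployment sooner.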
5 Concluding Remarks
We have proposed the concept of the best and worst information exposure paths for WSNs based on parameter estimation theory. The proposed information exposure can be used as a measure of the goodness of sensor deployment or coverage. An algorithm has been proposed to find the worst/best information exposure path and has been evaluated by simulations. Furthermore, a heuristic has been proposed for sensor deployment to increase the minimum information exposure.
References
1. Akyildiz, I. F., Su, W., Sankarasubramaniam, Y., and Cayirci, E.: Wireless Sensor Networks: A Survey. Computer Networks, Elsevier Publishers (2002) vol. 39, no. 4, 393–422
2. Cardei, M., and Wu, J.: Energy-Efficient Coverage Problems in Wireless Ad Hoc Sensor Networks. Handbook of Sensor Networks (Ilyas, M. and Mahgoub, I., Eds.), chapter 19, CRC Press (2004)
3. Huang, C.-F., and Tseng, Y.-C.: A Survey of Solutions to the Coverage Problems in Wireless Sensor Networks. Journal of Internet Technology (2005) vol. 6, no. 1
4. Huang, C.-F., and Tseng, Y.-C.: The Coverage Problem in a Wireless Sensor Network. ACM International Workshop on Wireless Sensor Networks and Applications (WSNA) (2003) 115–121
5. Wang, X., Xing, G., Zhang, Y., Lu, C., Pless, R., and Gill, C.: Integrated Coverage and Connectivity Configuration in Wireless Sensor Networks. ACM International Conference on Embedded Networked Sensor Systems (SenSys) (2003) 28–39
6. Meguerdichian, S., Koushanfar, F., Potkonjak, M., and Srivastava, M. B.: Coverage Problems in Wireless Ad-hoc Sensor Networks. IEEE Infocom (2001) vol. 3, 1380–1387
7. Meguerdichian, S., Koushanfar, F., Qu, G., and Potkonjak, M.: Exposure in Wireless Ad Hoc Sensor Networks. ACM International Conference on Mobile Computing and Networking (MobiCom) (2001) 139–150
8. Clouqueur, T., Phipatanasuphorn, V., Ramanathan, P., and Saluja, K. K.: Sensor Deployment Strategy for Target Detection. First ACM International Workshop on Wireless Sensor Networks and Applications (WSNA) (2002) 42–48
9. Mendel, J. M.: Lessons in Estimation Theory for Signal Processing, Communications and Control. Prentice Hall, Inc. (1995)
10. Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C.: Introduction to Algorithms. The MIT Press, 2nd Edition (2001)
11. Wang, B., Wang, W., Srinivasan, V., and Chua, K. C.: Information Coverage for Wireless Sensor Networks. Accepted by IEEE Communications Letters (2005)
Cost Management Based Secure Framework in Mobile Ad Hoc Networks

RuiJun Yang1,2, Qi Xia1,2, QunHua Pan1, WeiNong Wang2, and MingLu Li1

1 Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
[email protected]
2 Network Information Center, Shanghai Jiao Tong University, Shanghai, China
Abstract. Security issues are difficult to deal with in mobile ad hoc networks. The costs of individual security schemes have seldom been studied, and for security methods designed and deployed in advance, their effects are usually investigated one by one. In fact, when facing a given attack, different methods respond individually, which wastes resources and may be worthless from the viewpoint of the whole network. Using the idea of cost management, we analyze the costs of security methods in mobile ad hoc networks and introduce a secure framework. Under this framework, not only can the network's own tasks be finished in time, but the security costs of the whole network can also be decreased. We discuss the process of security cost computation at each mobile node and within groups of nodes, and then use DoS attacks as an example of cost computation among defense methods. The results show that a more secure environment can be achieved under this framework in mobile ad hoc networks.
1 Introduction
Mobile ad hoc networks are an active area of current research, and until recently the focus has been on issues such as routing [4], security [6], and data management. For network security, many kinds of security mechanisms have been proposed, such as SAR [3], SEAD [4], and so on. The resource consumption of security mechanisms is always large in mobile ad hoc networks, so the cost is very high when multiple security methods collaborate to resist attacks and system weaknesses. Sometimes resources such as bandwidth, power, and storage are wasted or applied inefficiently. From the viewpoint of the whole ad hoc network, the costs of security techniques should be taken into account in addition to finishing the network's main tasks. There are several reasons for the high costs of security mechanisms. Firstly, the characteristics of mobile ad hoc networks themselves are the main factor. Mobile ad hoc networks are resource-limited systems, and each mobile node has finite usable resources, whether power, bandwidth, processing ability, or memory, though their capacities are determined by

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 63–72, 2005.
© Springer-Verlag Berlin Heidelberg 2005
the intended applications. Nevertheless, the restricted resources at each node are a serious weakness of mobile ad hoc networks, so cost issues should be considered both in the design of the network's basic functions and in any appended mechanisms. The second reason is related to the working mechanisms of mobile ad hoc networks. Security problems were not taken into account when ad hoc network protocols were first designed; security mechanisms were developed later and attached to the older protocols as more and more security holes and network attacks appeared. Even when added into the network protocols or integrated as a security module, these mechanisms operate on their own, and the costs of multiple nodes or multiple mechanisms jointly defending against attacks are rarely considered. So the security costs related to these working mechanisms should also be noticed and given more attention. The assumptions made about radio propagation may be another reason why security costs receive little attention. Security schemes are claimed to satisfy certain security requirements, but the results are usually obtained only through simulation. Those simulations are executed under nearly ideal radio propagation assumptions, and their outcomes may be overturned, with much larger costs, when repeated in more realistic scenarios. Relying on simulation alone to validate secure protocols therefore carries a risk: [1] points out that simplistic radio models may lead to manifestly wrong results, and [2] indicates that theoretical results on the capacity of ad hoc networks are still based on simplified assumptions. The rest of this paper is organized as follows. In Section 2, we discuss existing security schemes and attacks layer by layer and present the security costs of each node and each mechanism in detail.
In Section 3, we provide a secure framework based on cost management and describe the cost computing process at each node and within groups of nodes. The analysis of DoS attacks and defenses under the proposed secure framework can be found in Section 4, where the effects of reconfiguring some security mechanisms are also presented. Finally, we conclude the paper in Section 5 with a statement of open issues and future work.
2 Analysis of Network Attacks and Security Mechanisms
The fundamental requirements of computer security, such as confidentiality, integrity, authentication, and non-repudiation, remain valid when the protection of correct network behavior is considered in mobile ad hoc networks. The characteristics of mobile ad hoc networks make them vulnerable to various forms of attack, which can be classified into different categories according to different standards. One classification is into passive and active attacks: passive attacks are those in which an attacker merely eavesdrops on the network traffic to gather information useful for future attacks, while active attacks are those in which an attacker actively participates in disrupting the normal operation of the network protocols [6].
For different layers and different types of attacks, people have made great corresponding efforts and produced many security mechanisms; in [5] an adaptive secure framework is proposed that takes the structures of security into account but says little about cooperation among multiple security methods. Attacks are usually confined to certain network applications, and when facing aggression the security mechanisms each do the best they can on their own, with little thought of cooperation among them. It is believable that attacks can happen at almost all layers, from time to time and in different styles. Facing combined attacks, people have developed combined security mechanisms to resist them and keep the system from falling, and there may be many methods at each layer. When security problems burst out at two or more layers simultaneously, a single-layer protocol cannot deal with them by itself, and cooperation among the security mechanisms of several layers is needed. But protection against one type of attack may weaken the network against a second type, and finding the right balance is extremely difficult. Another problem is that two separate protocols performing similar functions at different layers in an uncoordinated manner may lead to large overhead and make the cost high. Some security mechanisms begin to run before attacks are encountered, while others start up only while suffering aggression, so it is hard to compute the cost of security schemes in mobile ad hoc networks. The components of the whole running cost at one node are the CPU occupancy factor, power consumption, bandwidth occupancy factor, and memory utilization, all of which are expenditures of network running and are necessary for finishing the network's tasks.
It is necessary for mobile nodes to quantify the running cost. For each node i, the running cost Ci is the weighted sum of the four corresponding factor costs:

Ci = αCi−CPU + βCi−Memory + δCi−Battery + γCi−Band   (1)

Here Ci−CPU, Ci−Memory, Ci−Battery, and Ci−Band are the corresponding CPU occupancy, memory utilization, power consumption, and bandwidth occupancy costs, and α, β, δ, γ are weighting coefficients representing their relative importance. Every security mechanism has its own security cost Csecurity. Before the security mechanism starts working, the running cost of the normal network protocol is Cpre−secure−mechanism; after the security mechanism is adopted, the running cost is Cpost−secure−mechanism, so:

Csecurity = Cpost−secure−mechanism − Cpre−secure−mechanism   (2)
Once cross-layer security mechanisms can cooperate to resist attacks, their combined cost should be smaller than the sum of the costs of every individual mechanism. There are many algorithms for combined security mechanisms, and their costs are not the simple sum of the nodes' Csecurity; an adaptive secure framework is needed to deal with this complex cost computation.
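The cost model of Eqs. (1)-(2) can be sketched in a few lines. All numeric values below (the equal weights and the pre/post resource figures) are hypothetical illustration values, not measurements from the paper:

```python
# Sketch of the per-node cost model of Eqs. (1)-(2); the weights and
# resource figures are hypothetical, chosen only for illustration.

def running_cost(cpu, memory, battery, band,
                 alpha=0.25, beta=0.25, delta=0.25, gamma=0.25):
    """Eq. (1): weighted sum of the four resource-cost factors."""
    return alpha * cpu + beta * memory + delta * battery + gamma * band

# Running cost of the normal protocol before any security mechanism ...
pre = running_cost(cpu=0.20, memory=0.10, battery=0.15, band=0.30)
# ... and after a security mechanism (e.g. secure routing) is adopted.
post = running_cost(cpu=0.45, memory=0.20, battery=0.25, band=0.40)

# Eq. (2): the security cost is the difference in running cost.
c_security = post - pre
```

With these illustrative numbers the security cost comes out as the weighted increase in resource usage attributable to the mechanism alone.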
3 Secure Framework Based on Cost Management
There are some cross-layer secure frameworks aimed at dealing with complex weaknesses and attacks in mobile ad hoc networks, but such security mechanisms usually do not take the cost of security into account. Based on cost management, we follow several design principles: 1) cost management should be executed from the viewpoint of the whole network; 2) the security scheme cost of every layer should be calculated; 3) the validity of cost management should be increased, so as to obtain the greatest security benefit at the smallest security cost. Security costs should be forecast, analyzed, calculated, and harmonized among nodes effectively. When facing threats, the system can configure its limited resources according to the results fed back from the different stages of cost management. Each node first calculates its own cost and judges whether it needs support from other nodes. If it does, it collects the involved nodes' costs according to the cooperation within the group of nodes resisting the attack. After this calculation, the system decides how to reconfigure the security mechanisms in a distributed style, and then distributes the safeguard tasks among the nodes according to predefined principles so that the nodes' resources are utilized effectively. As shown in Fig. 1, we propose a secure framework based on cost management. The framework has four parts: besides the module running the normal ad hoc network protocols, the security mechanism configuration module and reconfiguration module are also important for the security costs of the whole network, and both are centered on the cost management module. Firstly, security mechanisms such as secure routing, authorization, and authentication should be set up on the network nodes to prevent known attacks. While the network is running, attacks may be detected and related security schemes adopted to resist them. Before they start working, we should compute the cost budgets of security, and compute the costs of
Fig. 1. Cost Management Based Secure Framework
them again after some time or when the security schemes stop. These processes are executed repeatedly when facing large numbers of attacks. The differences between the budgets and the real costs are fed back and used to analyze security mechanism reconfigurations. At the same time, the resisting nodes or node groups can use the feedback information to evaluate the overall cost effectiveness and reconfigure the security mechanisms accordingly. The flow of cost computation under the secure framework is as follows:

3.1 Each Mobile Node Solely Computes Its Own Security Costs
For each node in the network, there are corresponding network operating costs at the application, transport, network, MAC, and physical layers, as shown in Table 1. As mentioned in the previous section, the normal running cost of each mobile node is given by formula (1); the per-layer running costs CiA, CiT, CiN, CiM, CiP can each be computed as

Ciψ = αψCiψ−CPU + βψCiψ−Memory + δψCiψ−Battery + γψCiψ−Band   (3)

where ψ denotes the layer. After CiA, CiT, CiN, CiM, CiP are computed, the results are stored in a vector called the cost vector:

Ĉi = (CiA, CiT, CiN, CiM, CiP)   (4)

As mentioned above, every security mechanism's cost is given by formula (2). For the security mechanism of a given layer, the costs CiA, CiT, CiN, CiM, CiP change before and after the mechanism is adopted, and the change can be computed as

Ciψ−security = Ciψ−post−secure−mechanism − Ciψ−pre−secure−mechanism   (5)

where ψ in Ciψ−security indicates the layer of the network protocol. Security weaknesses occur unpredictably at some layers, so when computing security mechanism costs we use a vector to distinguish the security-related cost changes from the changes due to normal system running:

Ôi = (oiA, oiT, oiN, oiM, oiP), oik, k ∈ {A, T, N, M, P}   (6)

Table 1. Normal Running Cost of Each Layer Protocol

Node | A (application) | T (transport) | N (network) | M (MAC) | P (physical)
  1  |      C1A       |      C1T      |     C1N     |   C1M   |     C1P
  2  |      C2A       |      C2T      |     C2N     |   C2M   |     C2P
  3  |      C3A       |      C3T      |     C3N     |   C3M   |     C3P
 ... |      ...       |      ...      |     ...     |   ...   |     ...
  i  |      CiA       |      CiT      |     CiN     |   CiM   |     CiP
 ... |      ...       |      ...      |     ...     |   ...   |     ...
  n  |      CnA       |      CnT      |     CnN     |   CnM   |     CnP
The values of oiA, oiT, oiN, oiM, oiP can only be 0 or 1, indicating whether a security scheme is adopted at the corresponding layer; if not, that layer's result is not included in Ciψ−security. That is to say, for node i, the security cost can be computed from the non-zero entries of the vector CÔi:

CÔi = Ĉi × Ôi = (CiA, CiT, CiN, CiM, CiP) × (oiA, oiT, oiN, oiM, oiP)   (7)

The values of oiA, oiT, oiN, oiM, oiP can be rewritten by the node where they reside. The whole security cost of one node is

Ci−security = Σφ Ciφ oiφ   (8)

where φ ranges over the layers whose entry in CÔi is non-zero. When a node computes its own costs, it can also calculate the cost ratio, which indicates the proportion of security mechanism costs to the whole running cost:

ηi = Ci−security / Ci = Σφ Ciφ oiφ / (CiA + CiT + CiN + CiM + CiP)   (9)

3.2 Local Group Costs Computation
Every small period, some nodes in a small area can make up a group, and the principal node, the one with the largest quantity of usable resources, is in charge of the local group cost computation.

Selection of the Principal Node: When one node has the largest usable resources compared with its neighboring nodes, it can be used to compute the group cost around it. In order to make the correct choice of principal node, the neighboring nodes have to exchange information about their usable resources. This exchange itself incurs some cost, so the size of a local group should not be too large.

Setup of Group Size and Computing Period: The local group cost computation should be executed periodically. Once a node finishes its own first security cost computation, it can send an indication of its intention to call a local group cost computation. When more than half of several adjacent nodes sense these attempts from each other, they select the principal node and begin the first group cost computation. The principal node includes in its current group only those nodes within one hop. After the group cost is calculated, the principal node feeds some cost information back to the nodes of its group. A node cannot take part in another group's cost computation while it is already participating in one.

Local Group Cost Computation: The principal node requires the nodes in its group to send their own security costs and computes the group cost:

Cgroup = Σ(i=1..m) Ci−security   (10)
At the same time, it can also calculate the group security cost ratio, used in future security analysis. Here m is the number of nodes in the group:

ηgroup = Σ(i=1..m) Ci−security / Σ(i=1..m) Ci = Σ(i=1..m) Σφ Ciφ oiφ / Σ(i=1..m) (CiA + CiT + CiN + CiM + CiP)   (11)
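The per-node bookkeeping of Eqs. (7)-(9) and the group aggregation of Eqs. (10)-(11) can be sketched as follows. The layer costs and the 0/1 adoption flags for the three example nodes are hypothetical values, not data from the paper:

```python
# Sketch of Eqs. (7)-(11): per-node security cost via the 0/1 adoption
# vector, then group cost and group cost ratio. Example data is made up.

def node_security_cost(layer_costs, adopted):
    """Eqs. (7)-(8): sum layer costs where the adoption flag o is 1."""
    return sum(c * o for c, o in zip(layer_costs, adopted))

def node_cost_ratio(layer_costs, adopted):
    """Eq. (9): security cost as a fraction of the whole running cost."""
    return node_security_cost(layer_costs, adopted) / sum(layer_costs)

# Three nodes in one local group; each row is
# ((C_iA, C_iT, C_iN, C_iM, C_iP), (o_iA, o_iT, o_iN, o_iM, o_iP)).
group = [
    ((1.0, 2.0, 3.0, 1.0, 1.0), (0, 0, 1, 1, 0)),
    ((2.0, 1.0, 2.0, 2.0, 1.0), (1, 0, 0, 0, 0)),
    ((1.0, 1.0, 1.0, 1.0, 1.0), (0, 1, 0, 0, 1)),
]

# Eq. (10): the principal node sums the members' security costs ...
c_group = sum(node_security_cost(c, o) for c, o in group)
# ... and Eq. (11): the group ratio over the total running cost.
eta_group = c_group / sum(sum(c) for c, o in group)
```

Each member reports only its own Ci−security and Ci, so the principal node needs no per-layer detail from its neighbors.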
For the whole mobile ad hoc network it is very hard to share security information, so truly global security cost computation and cooperation among nodes are difficult to implement. Models should be proposed to predict the execution time of multiple jobs on each mobile node under varying security mechanism costs.

3.3 Global Security Costs Computation
Every larger period, the former principal nodes can set up a large-scale security cost computation over a larger area. Because of node mobility it is too hard to compute truly global security costs, so we calculate security costs over a larger area than in Section 3.2. The tasks are still handled by the principal nodes, which need to store some local security costs and cost ratios and exchange them among themselves. These principal nodes send indications to principal nodes beyond two hops about their intention to calculate security costs in the larger area. These indications can be accepted and processed immediately, contributing the involved groups' security association to the system's global cost analysis. The computing method is similar to that of Section 3.2, but only two types of parameters are used. For example, when node B receives the two local values ηGroup−A and CGroup−A = Σ(i∈Group−A) Ci−security from node A, it can combine them with its own parameters ηGroup−B and CGroup−B and calculate the security association ΦGroup−A,Group−B of the larger area around link A-B:

ΦGroup−A,Group−B = (CGroup−A + CGroup−B) / [(CGroup−A / ηGroup−A) + (CGroup−B / ηGroup−B)]   (12)
Here we perform only simple cost management: through several larger-area security cost computations, the approximate global security costs can be obtained. More complex global cost management algorithms require further study in the future. In mobile ad hoc networks there are problems in sharing security costs and in cross-layer cooperation of security mechanisms among nodes, because the mobile nodes keep moving and the node groups keep changing. So we can only remind the system of its security conditions by checking the security cost ratios and security costs over short periods.
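The security association of Eq. (12) is a small computation once the group values are exchanged; since C/η is each group's total running cost, Φ is the fraction of the two groups' joint running cost spent on security. The group costs and ratios below are hypothetical:

```python
# Sketch of the security association of Eq. (12) between two adjacent
# groups; the example group costs and ratios are made-up values.

def security_association(c_a, eta_a, c_b, eta_b):
    """Eq. (12): combined security cost over combined running cost.

    C/eta recovers each group's total running cost, so the result is
    the security-cost fraction of the two groups taken together.
    """
    return (c_a + c_b) / ((c_a / eta_a) + (c_b / eta_b))

# Group A: security cost 8 out of running cost 21 (eta = 8/21);
# Group B: security cost 5 out of running cost 25 (eta = 5/25).
phi = security_association(8.0, 8.0 / 21.0, 5.0, 5.0 / 25.0)
```

With these numbers Φ = 13/46, i.e. the two groups jointly spend roughly 28% of their running cost on security.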
4 Example of Security Configuration Under Secure Framework
In the implementation of the proposed secure framework, each node has to be in charge of its own tasks when facing attacks, in order to protect the system using
Fig. 2. Scenario of Example of DoS Attacks and Defenses
rational schemes. Here we use DoS attacks and their defenses to illustrate our cost management based secure framework. DoS attacks aim to prevent access to network resources and are difficult to protect against. They can target different layers, and there is much difference between the various types of DoS attacks: the traffic patterns generated by an attacking node, its location in the network, the availability of other compromised nodes, and the availability of routing information are key factors in determining the efficacy of a DoS attack [7]. In fact, the inherent computational costs of some protections enable further DoS attacks at other layers on the server performing them, so an appropriate balance of security mechanism costs should be introduced for the defense against DoS attacks. As shown in Fig. 2, suppose an ad hoc network contains twenty nodes numbered from 1 to 20 and three attack nodes named A1, A2, and A3. Nodes around the attack nodes randomly organize into three node groups, and two large local security conjunctions are set up, between nodes 3 and 11 and between nodes 11 and 17. First, we calculate the security cost of each of the twenty nodes, marked C1, C2, ..., C20. Then we compute the group costs of Group1, Group2, and Group3 and the proportions of security cost to whole running cost, ηGroup−1, ηGroup−2, ηGroup−3, centered on nodes 2, 11, and 17, which are responsible for the computing tasks. At last, the security association ΦGroup1,Group3 between node 1 and node 11, as well as ΦGroup2,Group3 between nodes 11 and 17, is figured out. If the system has detected attacks and defense actions are deployed, the costs of security will become larger. At this time, the system should estimate the invalid costs. For the residual resources there are three kinds of scenarios: satisfying the requirements of system running, completing parts of the mission ineffectively, and losing basic functions. For each instance we should adjust the costs in time.
These data can be used to configure the security schemes of the nodes in the group. For example, at some moment in Group1, the attacker A1 launches an exhaustion attack against node 1, and node 1 applies its prefabricated small-frames security scheme, while nodes 2 and 4 start up the client-puzzles security scheme to protect against the flooding attack from A1. A1 also attempts misdirection against nodes 7 and 8, which we assume have prefabricated authorization. Because node 3 has not been attacked by A1, it has plenty of resources available to start up the security cost
Fig. 3. CPU and Band Utilization (CPU and bandwidth variation curves of Node 3, and of Nodes 1 and 4 after the two configurations)
calculation of Group1. By calculating every node's security cost and ratio, the group can be harmonized and each node's security mechanisms relocated. By simulating the reconfiguration of the security mechanisms of the nodes in Group1, we obtained the CPU and bandwidth utilization curves shown in Fig. 3. Node 3's CPU occupancy is small at the beginning and increases after it starts the cost calculation. At first its band transfers data normally; when the other nodes are attacked, it has less data to exchange with them, so its bandwidth utilization decreases. Because nodes 7 and 8 have prefabricated security schemes, their bandwidth and CPU usage change when they are attacked, and since they can resist the attacks with their own schemes the change is smooth. The bandwidth of the attacked node 1, however, stays flat at a low utilization ratio and cannot be adjusted; because node 1 uses small frames, its CPU occupancy is high, and we can modify its security scheme to decrease the CPU occupancy. Nodes 2 and 4 face the same flooding-attack conditions; because they adopt the client-puzzle method, their CPUs keep working at a high occupancy and their bandwidth utilization is also too large. At this point the security costs become abnormally high and system running is badly affected. Some attacks should simply be ignored, mitigating the CPU overload at the sacrifice of bandwidth, which can still satisfy the basic data-transfer requirements though it remains high. These figures tell us that local group security costs can be used to enhance the effect of security spending in mobile ad hoc networks and to improve the availability of nodes over larger areas. For the more complex applications and implementations of the secure framework, such as the cost computation of whole
network nodes, we will keep studying, and in the future we may introduce QoS evaluation methods to detect the effectiveness of security mechanism reconfiguration schemes under this secure framework.
5 Conclusion and Future Works
In this paper we analyzed radio propagation assumptions from the viewpoint of security costs and then proposed a secure framework based on cost management. After presenting the implementation of cost computation, we provided an example of DoS attacks and defenses. The results show that the shortcoming of limited node resources in mobile ad hoc networks can be overcome to some extent, and that the effect of security spending can be improved, increasing the availability of ad hoc networks in more realistic scenarios. Some problems remain for future work. The weighting coefficients in the weighted sums of the per-layer security mechanism costs are hard to choose suitably; they should match the security requirements of the particular application, and for now we set them by experience, while in the future they should be determined self-adaptively. It is also difficult to calculate the costs of cross-layer security mechanisms, because their cooperation involves some counteraction. The modular computation of each security mechanism's cost should be considered in future work under our proposed secure framework.
References
1. Newport, C.: Simulating Mobile Ad Hoc Networks: A Quantitative Evaluation of Common MANET Simulation Models. Dartmouth College Computer Science Technical Report TR2004-504, June 16, 2004
2. Akyildiz, I.F., Wang, X., Wang, W.: Wireless Mesh Networks: A Survey. Computer Networks (2005), from www.elsevier.com/locate/comnet
3. Yi, S., Naldurg, P., Kravets, R.: A Security-Aware Ad Hoc Routing Protocol for Wireless Networks. In: The 6th World Multi-Conference on Systemics, Cybernetics and Informatics (SCI 2002), 2002
4. Hu, Y.-C., Johnson, D.B., Perrig, A.: SEAD: Secure Efficient Distance Vector Routing for Mobile Wireless Ad Hoc Networks. In: Proceedings of the 4th IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2002), IEEE, Calicoon, NY, June 2002, pp. 3-13
5. Yu, S., Zhang, Y., Song, C., Chen, K.: A Security Architecture for Mobile Ad Hoc Networks. In: Proc. of APAN 2004
6. Venkatraman, L., Agrawal, D.P.: Strategies for Enhancing Routing Security in Protocols for Mobile Ad Hoc Networks. J. Parallel and Distributed Computing 63 (2003) 214-227
7. Aad, I., Hubaux, J.-P., Knightly, E.W.: Denial of Service Resilience in Ad Hoc Networks. In: MobiCom '04, Sept. 26-Oct. 1, 2004, Philadelphia, Pennsylvania, USA
Efficient and Secure Password Authentication Schemes for Low-Power Devices

Kee-Won Kim, Jun-Cheol Jeon, and Kee-Young Yoo

Department of Computer Engineering, Kyungpook National University, Daegu, Korea, 702-701
{nirvana, jcjeon33}@infosec.knu.ac.kr, [email protected]
Abstract. In 2003, Lin et al. proposed an improvement on the OSPA (optimal strong-password authentication) scheme to make it withstand the stolen-verifier attack using a smart card. However, Ku et al. showed that Lin et al.'s scheme is vulnerable to replay and denial-of-service attacks. In 2004, Chen et al. proposed secure SAS-like password authentication schemes that can protect a system against replay and denial-of-service attacks. In this paper, we propose two efficient and secure password authentication schemes that are able to withstand replay and denial-of-service attacks. The proposed schemes are more efficient than Chen et al.'s schemes in computation cost. Moreover, the proposed schemes can be implemented on most target low-power devices, such as smart cards and low-power Personal Digital Assistants, in wireless networks. Keywords: password authentication, low-power device, mutual authentication, wireless network.
1 Introduction
The password authentication scheme is a method to authenticate remote users over an insecure channel, and a variety of password authentication schemes have been proposed [1, 2, 3, 4, 5, 7, 9]. Lamport [1] proposed a one-time password authentication scheme using a one-way function, but this scheme has two practical difficulties: high hash overhead and the requirement of resetting the verifier. Thereafter, many strong-password authentication schemes were proposed, e.g., CINON [2] and PERM [3]. Unfortunately, none of these earlier schemes is both secure and practical. In 2000, Sandirigama et al. [4] proposed a simple and secure password authentication scheme, called SAS. However, Lin et al. [5] showed that SAS suffers from vulnerability to both replay and denial-of-service attacks and proposed an
This work was supported by the Brain Korea 21 Project in 2005. Corresponding author.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 73–82, 2005.
© Springer-Verlag Berlin Heidelberg 2005
optimal strong-password authentication scheme, called OSPA, to enhance the security of SAS. Chen and Ku [6] pointed out that SAS and OSPA are vulnerable to stolen-verifier attacks. In 2003, Lin et al. [7] proposed an improved scheme to enhance the security of OSPA, but Ku et al. [8] showed that this scheme is vulnerable to replay and denial-of-service attacks. In 2004, Chen et al. [9] proposed secure SAS-like password authentication schemes, providing not only unilateral but also mutual authentication; their schemes can protect a system against replay and denial-of-service attacks. In this paper, we propose two efficient and secure password authentication schemes that withstand replay and denial-of-service attacks. Moreover, the proposed schemes are more efficient than Chen et al.'s schemes, and they can be implemented on most target low-power devices, such as smart cards and low-power Personal Digital Assistants, in wireless networks. The remainder of this paper is organized as follows. The proposed schemes are presented in Section 2. In Section 3, we discuss the security and computation costs of the proposed schemes. Finally, we state the conclusions of this paper in Section 4.
2 The Proposed Schemes
In this section, we propose two schemes that withstand denial-of-service and replay attacks: a unilateral authentication scheme (Method 1) and a mutual authentication scheme (Method 2).

2.1 Notations
The following notations are used throughout this paper.

• U denotes the low-power client.
• S denotes the server.
• E denotes the adversary.
• ID denotes the identity of U.
• P denotes the password of U.
• N, N′, r and r′ denote random nonces.
• h denotes a one-way hash function; h(x) means x is hashed once, and h²(x) means x is hashed twice.
• ⊕ denotes a bitwise XOR operation.
• || denotes string concatenation.
• x denotes the secret key of S.
2.2 The Proposed Scheme with Unilateral Authentication (Method 1)
Registration Phase
Suppose a new user U wants to register with a server S for accessing services. The registration phase is shown in Fig. 1. The details are presented as follows:
Step (R1) U sends his identity ID and password P to S through a secure channel.
Step (R2) S selects a random nonce N and computes X = h(x||ID), vpw-src = h(P ⊕ N) and vpw = h²(P ⊕ N), where x is the secret key of the server.
Step (R3) S stores vpw into the database and issues a low-power device storing {X, vpw-src, N, h(·)} to U through a secure channel.

Authentication Phase
The authentication phase is shown in Fig. 2. The details are presented as follows. If the user U wants to log in, U enters the identity ID and password P into his low-power device, and the low-power device performs the following operations.

Step (A 1) Verify h(P ⊕ N) against vpw-src. If they are not equal, the low-power device terminates this session.
Step (A 2) Select a new random nonce N′ and compute

new-vpw-src = h(P ⊕ N′),   (1)
C1 = X ⊕ new-vpw-src,   (2)
C2 = vpw-src ⊕ h(new-vpw-src).   (3)

Step (A 3) Send {ID, C1, C2} to the server as a login request.

Upon receiving the login request {ID, C1, C2} from U, the server performs the following operations:

Step (A 4) Check the format of ID.
Step (A 5) Compute

X′ = h(x||ID),   (4)
new-vpw-src′ = C1 ⊕ X′ = h(P ⊕ N′),   (5)
vpw′ = h(C2 ⊕ h(new-vpw-src′)),   (6)
Fig. 1. The registration phase of Method 1 and Method 2 of the proposed schemes
Fig. 2. The authentication phase of Method 1
and check whether vpw′ is equal to the stored verifier vpw = h²(P ⊕ N). If it holds, then the server accepts the login request.
Step (A6) Compute new_vpw = h(new_vpw_src′) and update vpw = h²(P ⊕ N) with new_vpw = h²(P ⊕ N′) for the next authentication session.
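For concreteness, the registration and authentication rounds of Method 1 can be sketched as follows. This is an illustrative sketch, not the authors' implementation: SHA-256 stands in for h(·), passwords are padded to the 32-byte nonce length so that P ⊕ N is well defined, and all helper names are ours.

```python
import hashlib
import os

NLEN = 32  # nonce length in bytes (an assumption of this sketch)

def h(data: bytes) -> bytes:
    """The one-way hash function h(.); SHA-256 is a stand-in choice."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pad(p: bytes) -> bytes:
    """Pad/truncate the password so P xor N is well defined."""
    return p.ljust(NLEN, b'\x00')[:NLEN]

def register(x: bytes, user_id: bytes, password: bytes):
    """Steps (R1)-(R3), run by the server S."""
    n = os.urandom(NLEN)                            # random nonce N
    device = {'X': h(x + user_id),                  # X = h(x || ID)
              'vpw_src': h(xor(pad(password), n)),  # h(P xor N)
              'N': n}
    vpw = h(device['vpw_src'])                      # verifier h^2(P xor N)
    return device, vpw

def login_request(device, password: bytes):
    """Steps (A1)-(A3), run on the low-power device."""
    assert h(xor(pad(password), device['N'])) == device['vpw_src']  # (A1)
    n_new = os.urandom(NLEN)                                        # nonce N'
    new_vpw_src = h(xor(pad(password), n_new))                      # eq. (1)
    c1 = xor(device['X'], new_vpw_src)                              # eq. (2)
    c2 = xor(device['vpw_src'], h(new_vpw_src))                     # eq. (3)
    return c1, c2, n_new

def verify_login(x: bytes, user_id: bytes, stored_vpw: bytes,
                 c1: bytes, c2: bytes):
    """Steps (A4)-(A6), run by the server. Returns the updated verifier
    h^2(P xor N') on success, or None on rejection."""
    x_prime = h(x + user_id)                        # eq. (4)
    new_vpw_src = xor(c1, x_prime)                  # eq. (5)
    if h(xor(c2, h(new_vpw_src))) != stored_vpw:    # eq. (6) check
        return None
    return h(new_vpw_src)                           # new verifier
```

On success the device likewise replaces vpw_src and N with new_vpw_src and N′; because the verifier changes every session, a replayed {C1, C2} fails against the updated verifier.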
2.3 The Proposed Scheme with Mutual Authentication (Method 2)
To provide mutual authentication, we propose Method 2, which is based on Method 1. The registration phase is the same as that of Method 1 and is omitted here.

Authentication Phase
The authentication phase is shown in Fig. 3. The details are presented as follows. If the user U wants to login, U enters the identity ID and password P in his low-power device; the low-power device then performs the following operations.
Step (A1) Verify h(P ⊕ N) against vpw_src. If they are not equal, the low-power device terminates this session.
Step (A2) Generate a random nonce r, where r is used to identify this transaction uniquely.
Fig. 3. The authentication phase of Method 2
Step (A3) Send {ID, r} to S.
Upon receiving the message {ID, r} from U, the server will perform the following operations:
Step (A4) Check the format of ID.
Step (A5) Generate a new random nonce r′ and compute X′ = h(x||ID), r′ ⊕ X′ and h(r||r′).
Step (A6) Send {r′ ⊕ X′, h(r||r′)} to U.
Upon receiving the message {r′ ⊕ X′, h(r||r′)} from S, the low-power device will perform the following operations:
Step (A7) Extract r′ from r′ ⊕ X′ ⊕ X and verify h(r||r′) using r and r′ to authenticate the remote server.
Step (A8) Select a new random nonce N′ and compute
    new_vpw_src = h(P ⊕ N′),    (7)
    C1 = X ⊕ new_vpw_src ⊕ r′,    (8)
    C2 = vpw_src ⊕ h(new_vpw_src ⊕ r′).    (9)
Step (A9) Send {C1, C2} to the server as a login request.
Upon receiving the login request {C1, C2} from U, the server will perform the following operations:
Step (A10) Compute
    new_vpw_src′ = C1 ⊕ X′ ⊕ r′ = h(P ⊕ N′),    (10)
    vpw′ = h(C2 ⊕ h(new_vpw_src′ ⊕ r′)).    (11)

Fig. 4. The password change phase of the proposed scheme
and check whether vpw′ is equal to the stored verifier vpw = h²(P ⊕ N). If it holds, then the server accepts the login request.
Step (A11) Compute new_vpw = h(new_vpw_src′) and update vpw = h²(P ⊕ N) with new_vpw = h²(P ⊕ N′) for the next authentication session.

Password Change Phase
The password change phase is shown in Fig. 4. The details are presented as follows. If the user U wants to change his old password P to a new password P′, he only needs to perform the procedures below. The password change phase is much the same as the authentication phase, except for Steps (A8), (A10) and (A11). After executing Step (A1) to Step (A7), U's low-power device executes Step (A′8), as below.
Step (A′8) U enters a new password P′. The low-power device selects a new random nonce N′ and computes
    new_vpw_src = h(P′ ⊕ N′),    (12)
    C1 = X ⊕ new_vpw_src ⊕ r′,    (13)
    C2 = vpw_src ⊕ h(new_vpw_src ⊕ r′).    (14)
After executing Step (A′8), U's low-power device executes Step (A9).
Upon receiving the login request {C1, C2} from U, the server will perform the following operations:
Step (A′10) Compute
    new_vpw_src′ = C1 ⊕ X′ ⊕ r′ = h(P′ ⊕ N′),    (15)
    vpw′ = h(C2 ⊕ h(new_vpw_src′ ⊕ r′)).    (16)
and check whether vpw′ is equal to the stored verifier vpw = h²(P ⊕ N). If it holds, then the server accepts the login request.
Step (A′11) Compute new_vpw′ = h(new_vpw_src′) and update vpw = h²(P ⊕ N) with new_vpw′ = h²(P′ ⊕ N′) for the next authentication session.
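The server-authentication handshake of Method 2, steps (A4) to (A7), can be sketched as follows. This is illustrative only: SHA-256 stands in for h(·) and the function names are ours, not the paper's.

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    """The one-way hash function h(.); SHA-256 is a stand-in choice."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer_challenge(x: bytes, user_id: bytes, r: bytes):
    """Steps (A4)-(A6): S answers the user's nonce r with
    {r' xor X', h(r || r')}."""
    r_prime = os.urandom(32)          # new nonce r'
    x_prime = h(x + user_id)          # X' = h(x || ID)
    return xor(r_prime, x_prime), h(r + r_prime)

def device_check_server(x_stored: bytes, r: bytes,
                        masked: bytes, proof: bytes):
    """Step (A7): extract r' from r' xor X' xor X and verify h(r || r').
    Returns r' if the server is authentic, else None."""
    r_prime = xor(masked, x_stored)   # X' equals X when the server knows x
    return r_prime if h(r + r_prime) == proof else None
```

Only a party knowing the server secret x can produce a proof that verifies; the recovered r′ is then folded into C1 and C2 in steps (A8) to (A10).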
3 Security and Efficiency of the Proposed Schemes

In this section, we examine the security and the efficiency of the proposed schemes.

3.1 Security

In the following, we analyze the security of the proposed schemes.
Password Guessing Attack
There are only two instances involving the password P: the login message {C1, C2} and the verifier h²(P ⊕ N) stored by the server. If an adversary E intercepts C1 = h(x||ID) ⊕ h(P ⊕ N′) (or C1 = h(x||ID) ⊕ h(P ⊕ N′) ⊕ r′ in Method 2) and C2 = h(P ⊕ N) ⊕ h(h(P ⊕ N′)) (or C2 = h(P ⊕ N) ⊕ h(h(P ⊕ N′) ⊕ r′) in Method 2), it is infeasible to guess the user's password without knowing x, N, N′, and r′. Suppose adversary E has stolen the verifier h²(P ⊕ N); E still cannot guess the password without knowing N.
Replay Attack
In the proposed schemes, the server extracts vpw′ = h²(P ⊕ N) from C2 using new_vpw_src′ = h(P ⊕ N′) obtained from C1. Then the server checks whether vpw′ is equal to the stored verifier h²(P ⊕ N). If C1 or C2 is replaced with a value from any previous session, the verification equation vpw′ = h²(P ⊕ N) does not hold, and the server rejects the login request. Therefore, adversary E cannot login to the remote server by replaying a previous login request.
Impersonation Attack
In Method 1 and Method 2, if adversary E wants to impersonate the user, E must compute a valid {C1, C2}. Because E has no idea about P, N′, r′, and the server's secret key x, E cannot forge a valid {C1, C2}. Therefore, E has no chance to login by launching an impersonation attack. In Method 2, if adversary E wants to impersonate the server, he must send a valid {r′ ⊕ h(x||ID), h(r||r′)} to the user. Because E has no idea about the server's secret key x, he cannot compute h(x||ID) to forge a valid {r′ ⊕ h(x||ID), h(r||r′)}. Therefore, the proposed schemes can resist impersonation attacks.
Stolen-Verifier Attack
Assume that adversary E has stolen the verifier h²(P ⊕ N). To pass the authentication in the proposed schemes, E must have h(x||ID) and h(P ⊕ N) to compute {C1, C2}. The adversary cannot compute h(x||ID) because he does not know the system secret key x. E cannot derive h(P ⊕ N) from h²(P ⊕ N) because h(·) is a one-way hash function. Therefore, the proposed schemes can resist stolen-verifier attacks.
Denial-of-Service Attack
In the proposed schemes, the server extracts vpw′ = h²(P ⊕ N) from C2 using new_vpw_src′ = h(P ⊕ N′) obtained from C1. Then, the server checks whether vpw′ is equal to the stored verifier h²(P ⊕ N). If C1 or C2 is replaced with another value, the verification equation vpw′ = h²(P ⊕ N) does not hold. The server then rejects the login request and does not update the verifier. Therefore, the proposed schemes can resist denial-of-service attacks.
Table 1. Efficiency comparison between Chen et al.'s schemes and the proposed schemes

                                      Chen et al.'s schemes [9]      The proposed schemes
                                      Method 1       Method 2        Method 1       Method 2
  Computation cost of registration    2THu + 1THs    2THu + 1THs     3THs           3THs
  Computation cost of authentication  7THu + 5THs    7THu + 5THs     3THu + 3THs    4THu + 5THs

Method 1: the unilateral authentication
Method 2: the mutual authentication
THu: the time for performing a one-way hash function by the user
THs: the time for performing a one-way hash function by the server
3.2 Efficiency

We compare the proposed and related schemes in terms of computation costs. Table 1 shows the efficiency comparison of the proposed and related schemes in the registration and authentication phases. As shown in Table 1, the proposed schemes are more efficient than Chen et al.'s schemes in computation costs. Therefore, the proposed schemes are efficient enough to be implemented on most target low-power devices in wireless networks.
4 Conclusions

In this paper, we have proposed an improvement on Lin et al.'s scheme that can withstand replay attacks and denial-of-service attacks. Moreover, we have proposed mutual authentication based on the unilateral authentication of the proposed scheme. The proposed scheme is more efficient than Chen et al.'s schemes in computation costs. Therefore, the proposed schemes are efficient enough to be implemented on most target low-power devices, such as smart cards and low-power Personal Digital Assistants, in wireless networks.
References
1. L. Lamport, Password Authentication with Insecure Communication, Communications of the ACM, Vol.24, No.11, pp.770–772, 1981.
2. A. Shimizu, A Dynamic Password Authentication Method by One-way Function, IEICE Transactions, Vol.J73-D-I, No.7, pp.630–636, 1990.
3. A. Shimizu, T. Horioka, H. Inagaki, A Password Authentication Method for Contents Communication on the Internet, IEICE Transactions on Communications, Vol.E81-B, No.8, pp.1666–1673, 1998.
4. M. Sandirigama, A. Shimizu, M.T. Noda, Simple and Secure Password Authentication Protocol (SAS), IEICE Transactions on Communications, Vol.E83-B, No.6, pp.1363–1365, 2000.
5. C.L. Lin, H.M. Sun, T. Hwang, Attacks and Solutions on Strong-password Authentication, IEICE Transactions on Communications, Vol.E84-B, No.9, pp.2622–2627, 2001.
6. C.M. Chen, W.C. Ku, Stolen-verifier Attack on Two New Strong-password Authentication Protocols, IEICE Transactions on Communications, Vol.E85-B, No.11, pp.2519–2521, 2002.
7. C.W. Lin, J.J. Shen, M.S. Hwang, Security Enhancement for Optimal Strong-password Authentication Protocol, ACM Operating Systems Review, Vol.37, No.2, pp.7–12, 2003.
8. W.C. Ku, H.C. Tsai, S.M. Chen, Two Simple Attacks on Lin-Shen-Hwang's Strong-password Authentication Protocol, ACM Operating Systems Review, Vol.37, No.4, pp.26–31, 2003.
9. T.H. Chen, W.B. Lee, G. Horng, Secure SAS-like Password Authentication Schemes, Computer Standards and Interfaces, Vol.27, No.1, pp.25–31, 2004.
Improving IP Address Autoconfiguration Security in MANETs Using Trust Modelling Shenglan Hu and Chris J. Mitchell Information Security Group, Royal Holloway, University of London {s.hu, c.mitchell}@rhul.ac.uk
Abstract. Existing techniques for IP address autoconfiguration in mobile ad hoc networks (MANETs) do not address security issues. In this paper, we first describe some of the existing IP address autoconfiguration schemes, and discuss their security shortcomings. We then provide solutions to these security issues based on the use of trust models. A specific trust model is also proposed for use in improving the security of existing IP address autoconfiguration schemes.
1 Introduction
IP address autoconfiguration is an important task for zero configuration in ad hoc networks, and many schemes have been proposed [5, 6, 11]. However, performing IP address autoconfiguration securely in ad hoc networks remains a problem. Most existing schemes are based on the assumption that the ad hoc network nodes will not behave maliciously. However, this is not always a realistic assumption, since not only may some malicious nodes be present, but ad hoc network nodes are often easily compromised. Of the existing IP address autoconfiguration schemes, we focus here on requester-initiator schemes. In such a scheme, a node entering the network (the requester) will not obtain an IP address solely by itself. Instead, it chooses an existing network node as the initiator, which performs address allocation for it. The rest of this paper is organised as follows. In Section 2, we review existing requester-initiator schemes, and analyse their security properties. In Section 3, we describe how to improve the security of these schemes using trust models. In Section 4, a new trust model is proposed which can be used to secure the operation of requester-initiator schemes, and an analysis of this trust model is given. Finally, a brief conclusion is provided in Section 5.
2 Requester-Initiator Address Allocation Schemes
We first briefly review two existing requester-initiator schemes. We then use them to illustrate a variety of security issues that can arise in such schemes.
The work of this author was sponsored by Vodafone.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 83–92, 2005. © Springer-Verlag Berlin Heidelberg 2005
S. Hu and C.J. Mitchell
Nesargi and Prakash proposed the MANETconf scheme [5], a distributed dynamic host configuration protocol for MANETs. In this scheme, when a new node (the requester) joins an ad hoc network, it broadcasts a request message to its neighbour nodes. If the requester is the only node in the network, then it becomes an initiator; otherwise, it will receive a reply message from one or more of its neighbour nodes. It then selects one of its reachable neighbour nodes as an initiator. Each node in the network stores the set of addresses currently being used, as well as those that can be assigned to a new node. The initiator selects an IP address from the available addresses and checks the uniqueness of the address by broadcasting a message to all network nodes. If the IP address is already being used, then the initiator selects another IP address and repeats the process until it finds a unique IP address, which it allocates to the requester.
Fazio, Villari and Puliafito [2] proposed another requester-initiator scheme for IP address autoconfiguration based on MANETconf [5] and the Perkins-Royer-Das [6] scheme. In this protocol, a NetID is associated with each ad hoc network, and each node is therefore identified by a (NetID, HostID) pair. When a new node (a requester) joins an ad hoc network, it randomly selects a 4-byte HostID and requests the initiator to allocate it a unique address. The initiator randomly chooses a candidate IP address and broadcasts it to all other nodes to check for uniqueness. When the initiator finds a unique address, it sends the requester this address and the NetID of the network. The NetID is used to detect merging of networks. When partitioning happens, the NetID of each part will be changed.
In all requester-initiator schemes, including those briefly described above, IP address allocation for new nodes depends on the correct behaviour of the initiator and other existing nodes.
However, in reality, malicious nodes may be present in an ad hoc network, potentially causing a variety of possible problems. We now note a number of potential security problems. Firstly, if a malicious node acts as an initiator, it can deliberately assign a duplicate address to a requester, causing IP address collisions. In schemes where a node stores the set of addresses currently being used, it can also trigger IP address allocations for nodes that do not exist, thereby making IP addresses unavailable for other nodes that may wish to join the MANET. This gives rise to a serious denial-of-service attack. Secondly, a malicious node can act as a requester and send address request messages to many initiators simultaneously, who will communicate with all other nodes in order to find a unique address. This will potentially use a lot of the available bandwidth, again causing a denial-of-service attack. Thirdly, a malicious node in the network could claim that the candidate IP address is already in use whenever it receives a message from an initiator to check for duplication. As a result no new nodes will be able to get an IP address and join the network. It can also change its IP address to deliberately cause an IP address collision, forcing another node to choose a new IP address. This could lead to the interruption of that node’s TCP sessions. The main purpose of this paper is to consider means to address these threats. We start by showing how a trust model satisfying certain simple properties can
Improving IP Address Autoconfiguration Security
be used to reduce these threats. Note that all three threats identified above result in denial-of-service attacks — this provides us with an implicit definition of what we mean by a malicious node, i.e. a node seeking to deny service to other network nodes; the nature of a malicious node is discussed further below.
3 Solutions Based on Trust Models
According to clause 3.3.54 of ITU-T X.509 [9], trust is defined as follows: "Generally an entity can be said to 'trust' a second entity when the first entity makes the assumption that the second entity will behave exactly as the first entity expects". In this paper, trust (in the form of a trust value) reflects the degree of belief that one entity has in the correctness of the behaviour of another entity. It is dynamic: it decreases if an entity misbehaves, and vice versa. For the moment we do not assume any particular method of computing trust values; we simply suppose that such a method has been selected. We use the following terminology. For any nodes A and B in an ad hoc network, the trust value held by A for B, i.e. the level of trust A has in B, is a rational (floating point) value denoted by T_A(B). Every node A has a threshold trust value, denoted by T_A^*. That is, A deems B trustable if and only if the trust value that A currently assigns to B is at least its threshold trust value, i.e. T_A(B) ≥ T_A^*; otherwise node A will regard node B as a potentially malicious node. Each node maintains its own threshold value T_A^*, i.e. different nodes may choose different trust thresholds. Hence the definition of malicious node may vary from node to node, depending on local policy. Every node also keeps a blacklist. Whenever it finds another node for which its trust value is lower than its threshold trust value, it deems this node a malicious node and adds it to its blacklist. It will regularly recalculate its trust values for the nodes in its blacklist and update the blacklist based on these new trust values. Except for the messages used for calculating trust values, it will ignore all other messages from nodes in its blacklist and will not route any other messages to these nodes. One underlying assumption in this paper is that the number of malicious nodes in an ad hoc network is small.
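A minimal sketch of the per-node bookkeeping just described (trust table, threshold T_A^*, and blacklist); the class and method names are ours, not the paper's:

```python
class TrustNode:
    """Per-node trust state: T_A(B) values, the local threshold T_A^*,
    and the blacklist of suspected malicious nodes."""

    def __init__(self, threshold: float):
        self.threshold = threshold   # T_A^*: a local policy choice
        self.trust = {}              # node id -> T_A(B) in [0, 1]
        self.blacklist = set()

    def is_trustable(self, b) -> bool:
        # B is trustable iff T_A(B) >= T_A^*
        return self.trust.get(b, 0.0) >= self.threshold

    def update_trust(self, b, value: float) -> None:
        """Record a recalculated trust value and revise the blacklist,
        mirroring the regular re-evaluation described above."""
        self.trust[b] = value
        if value < self.threshold:
            self.blacklist.add(b)      # deem B malicious
        else:
            self.blacklist.discard(b)  # re-admitted after re-evaluation
```

Because the threshold is per node, two nodes holding the same trust value for B may still disagree on whether B is malicious.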
We also assume there is a trust model available with the following two properties, where the neighbour nodes of a node are those nodes that are within direct transmission range. (1) Any node can make a direct trust judgement on its neighbour nodes based on the information it gathers in a passive mode. If one of its neighbour nodes is malicious, it can detect the misbehaviour of this malicious node. It maintains trust values for all its neighbour nodes and regularly updates them. (2) Any node is able to calculate the trust values of non-neighbour nodes based on the trust values kept by itself and/or other nodes. We now show how such a trust model can be used to improve the security of any requester-initiator scheme.

3.1 Choosing a Trustable Node as the Initiator
When a node N joins a network, it broadcasts a request message Neighbour_Query containing T_N^* to its neighbour nodes. If the requester is the only node in the network, then it becomes an initiator; otherwise, each of the other nodes receiving a Neighbour_Query message will check the trust values it holds for its neighbour nodes, and send N a reply message InitREP containing identifiers of the nodes for which the trust values it holds are greater than or equal to T_N^*. Once N has received InitREP messages from its neighbour nodes, it combines these messages and chooses as its initiator the responding neighbour node which appears in the most received InitREP messages. A malicious node is unlikely to appear in InitREP messages generated by honest neighbour nodes. Hence, given our assumption that a majority of the nodes in the network are honest nodes, the probability that a malicious node will be chosen as an initiator is low.
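The requester's combination rule can be sketched as follows (a sketch under our own naming; each InitREP is modelled simply as a list of node identifiers):

```python
from collections import Counter

def choose_initiator(init_reps):
    """Pick the neighbour named in the most InitREP messages.
    init_reps: one list per replying neighbour, containing the ids of
    nodes whose trust values meet the requester's threshold T_N^*."""
    votes = Counter(node for rep in init_reps for node in rep)
    if not votes:
        return None        # no trustable neighbour was reported
    # the node appearing in the most InitREP messages wins
    return votes.most_common(1)[0][0]
```

A single malicious neighbour can nominate itself in its own reply, but cannot outvote the nominations made by a majority of honest neighbours.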
3.2 Checking for Duplication of the Candidate Address
When an initiator A chooses a candidate IP address for a new node, it broadcasts an Initiator_Request message to check for duplication. In order to protect a requester-initiator scheme against a DoS attack caused by malicious nodes claiming possession of arbitrary candidate IP addresses chosen by initiators, the trust model can be used to discover possible malicious nodes. If initiator A receives a reply message Add_Collision from an existing node (node B, say) indicating that B is already using the candidate IP address, and B is not in A's blacklist, then A will react to this reply as follows. A either maintains a trust value for B or can calculate one (if B is not a neighbour node). If A's trust value for B is greater than or equal to T_A^*, then A believes that the candidate IP address is already being used. Node A will then choose another candidate IP address and repeat the procedure to check for duplication. Otherwise, A deems B a malicious node. In this case, A adds B to its blacklist and ignores this Add_Collision message. A also broadcasts a Malicious_Suspect message about B to all other nodes. When it receives A's message, each node uses its trust value for node A (if necessary calculating it) to decide if it should ignore A's message. If A is deemed trustworthy, then it will calculate its trust value for B. If its newly calculated trust value for B is lower than its threshold acceptable trust value, then it adds B to its blacklist. As a result, misbehaving nodes will be permanently excluded from the network.
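The initiator's decision logic for an Add_Collision reply can be sketched as follows (illustrative only; `trust_of` and `broadcast` are stand-ins for the scheme's trust calculation and network broadcast, and the return labels are ours):

```python
def handle_add_collision(trust_of, threshold, blacklist, claimant, broadcast):
    """Initiator A's reaction to an Add_Collision message from node B
    (`claimant`). Returns what A does with the candidate address."""
    if claimant in blacklist:
        return 'ignore'                   # blacklisted senders are dropped
    if trust_of(claimant) >= threshold:   # T_A(B) >= T_A^*
        return 'choose_new_candidate'     # believe B: the address is in use
    blacklist.add(claimant)               # deem B malicious
    broadcast('Malicious_Suspect', claimant)
    return 'ignore'
```

Note the asymmetry: a trusted claim costs A another duplication round, while an untrusted claim costs the claimant its reputation network-wide.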
3.3 Dealing with Possible Address Collisions
Suppose node E detects a collision of IP addresses between two existing nodes. It will then send a message about the collision to both of them. Any node (node F, say) receiving such a message considers its trust value for node E (calculated if necessary). If T_F(E) ≥ T_F^*, then F will assign itself a new IP address. Otherwise, F will keep its current IP address and add E to its blacklist.

3.4 Brief Analysis
The above protocol enhancements significantly improve the security of requester-initiator schemes. Only trusted nodes will be chosen as initiators for address
allocation. Malicious nodes will be detected and isolated with the cooperation of other network nodes. The DoS attack caused by a malicious node claiming the possession of candidate IP addresses is prevented. Only minor computational costs will be incurred if the number of malicious nodes is small. However, when a malicious node acts as a requester and simultaneously asks many initiators for IP address allocation, each initiator will treat this malicious node as a new node and will not be able to calculate a trust value for it (since there will be no history of past behaviour on which to base the calculations). This kind of attack cannot be prevented by using the trust model approach described in this paper. Some other method outside the scope of trust modelling is required.
4 A Novel Trust Model
In the solutions described in Section 3, we assumed that there is a trust model by which the trust values between any two nodes can be calculated. Many methods have been proposed for trust modelling and management; see, for example, [3, 10]. Unfortunately, none of them has all the properties discussed in Section 3. Thus, they cannot be straightforwardly adopted for use in our scheme. In this section we propose a trust model specifically designed to be used in this environment. In our trust model, each trust value is in the range 0 to +1, signifying a continuous range from complete distrust to complete trust, i.e. T_A(B) ∈ [0, +1]. Each node maintains a trust table in which it stores the current trust value of all its neighbour nodes. When a new node joins a network, it sets the trust values for its neighbour nodes in its trust table to an initial value T_init, and dynamically updates these values using information gathered. A node computes its trust value for another node using one of the following two methods, depending on whether or not the other node is a neighbour node.

4.1 Calculating Trust Values Between Neighbour Nodes
If B is a neighbour node of A, then A calculates T_A(B) based on the information A has gathered about B in previous transactions, using so-called passive mode, i.e. without requiring any special interrogation packets. Here we adopt the approach of Pirzada and McDonald [10] to gather information about neighbour nodes and to quantify trust. Potential problems could arise when using passive observation within a wireless ad hoc environment [8, 12]. The severity of these problems depends on the density of the network and the type of Medium Access Control protocol being used. This is an open issue which needs further research. In this paper, we assume that these problems will not occur. Information about the behaviour of other nodes can be gathered by analysing received, forwarded and overheard packets monitored at the various protocol layers. Possible events that can be recorded in passive mode are the number and accuracy of: 1) Frames received, 2) Streams established, 3) Control packets forwarded, 4) Control packets received, 5) Routing packets received, 6) Routing packets forwarded, 7) Data forwarded, 8) Data received. We also use the following
events: 9) the length of time that B has used its current IP address in the network compared with the total length of time that B has been part of the network, and 10) how often a collision of B's IP address has been detected. The information obtained by monitoring all these types of event is classified into n (n ≥ 1) trust categories. Trust categories signify the specific aspect of trust that is relevant to a particular relationship, and are used to compute trust for other nodes in specific situations. A uses the following equation, as proposed in [10], to calculate its trust for B:

    T_A(B) = Σ_{i=1}^{n} W_A(i) · T_{A,i}(B)

where W_A(i) is the weight of the ith trust category to A and Σ_{i=1}^{n} W_A(i) = 1; T_{A,i}(B) is the situational trust of A for B in the ith trust category and is in the range [0, +1] for every trust category. More details can be found in [10]. Each node maintains trust values for its neighbour nodes in a trust value table, and regularly updates the table using information gathered. If a neighbour node moves out of radio range, the node entry in the trust table is kept for a certain period of time, since MANETs are highly dynamic and the neighbour node may soon be back in range. However, if the neighbour node remains unreachable, then the entry is deleted from the trust table.
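The weighted sum over trust categories is straightforward to compute; a sketch, with our own function name:

```python
def neighbour_trust(weights, situational):
    """T_A(B) = sum_i W_A(i) * T_{A,i}(B), where the category weights
    W_A(i) sum to 1 and each situational trust value lies in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    assert all(0.0 <= t <= 1.0 for t in situational)
    return sum(w * t for w, t in zip(weights, situational))
```

Because the weights form a convex combination, the result stays in [0, 1] whenever the situational values do.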
4.2 Calculating Trust Values for Other Nodes
If B is not A's neighbour node (as may be the case in multi-hop wireless ad hoc networks), A needs to send a "Trust Calculation Request" to B, and B returns to A a "Trust Calculation Reply", which contains trust values for all nodes in the route along which it is sent back to A, as follows. We adopt the Route Discovery method proposed in the Dynamic Source Routing Protocol (DSR) [7] to send A's Trust Calculation Request (TCReq) message to B. Node A transmits a TCReq as a single local broadcast packet, received by all nodes currently within wireless transmission range of A. The TCReq identifies the initiator (A) and target (B), and also contains a unique request identifier, determined by A. Each TCReq also contains a route record of the address of each intermediate node through which this particular copy of the TCReq has been forwarded. When a node receives a TCReq, if it is not the target and has recently seen another TCReq from A bearing the same request identifier and target address, or if its own address is already listed in the route record, the TCReq is discarded. Otherwise, the node appends its own address to the route record in the TCReq and propagates it by transmitting it as a local broadcast packet. The process continues until the TCReq reaches B. When B receives the TCReq message, a route from A to B has been found. Node B now returns a Trust Calculation Reply (TCReply) to node A along the reverse sequence of nodes listed in the route record, together with a copy of the accumulated route record from the TCReq. In our trust model, the TCReply also contains a trust value list of the trust value for each node in the route record,
Fig. 1. A route from A to B
as calculated by its predecessor in the route. That is, whenever a node in this route receives a TCReply, it forwards the message to the preceding node on the route and appends to the trust value list its trust value for the succeeding node on the route, i.e. the node which forwarded the TCReply to it. For example, as shown in Figure 1, when a TCReq is sent along the route A → C → D → E → B and arrives at B, a route is found, and the route record is ACDEB. B will send a TCReply back to A along the route B → E → D → C → A. The intermediate nodes E, D, and C append T_E(B), T_D(E), and T_C(D) to the trust value list respectively when they send back the TCReply. Therefore, when node A receives the TCReply, it will obtain a trust value list (T_E(B), T_D(E), T_C(D)); it will also have T_A(C) from its own trust value table. Thus, A finds a route to B and also trust values for all the nodes in this route. Following the above procedure, A may find one or more routes to B. Since there might be malicious nodes present, A will check whether each route is a valid route, i.e., whether all the trust values in the TCReply message are above A's threshold acceptable trust value T_A^*. Observe that when a malicious node receives a TCReply, it can change any of the trust values in the trust value list it receives, which only contains trust values for succeeding nodes on the route. However, it cannot change the trust value of any other node for itself. Suppose the route received by A consists of nodes A = N_0, N_1, . . . , N_i = B. If a certain trust value T_{N_m}(N_{m+1}) on this route is below T_A^*, then either node N_{m+1} is a malicious node or another node N_k (k ≤ m) is a malicious node and has intentionally changed the trust value list when it received the TCReply. Therefore, when A obtains the trust list, it will learn that there is at least one malicious node in the route, and this route is therefore regarded as an invalid route.
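The hop-by-hop accumulation of the trust value list can be simulated as follows (a sketch with our own naming; `trust_tables` maps each node id to its stored trust values for its neighbours):

```python
def tcreply_trust_list(route, trust_tables):
    """Walk the TCReply back along `route` (A, ..., B): each intermediate
    node appends its trust value for its successor on the route, so for
    the Fig. 1 example A receives the list (T_E(B), T_D(E), T_C(D))."""
    trust_list = []
    # reply travels B -> ... -> A; B's predecessor appends first
    for k in range(len(route) - 2, 0, -1):
        trust_list.append(trust_tables[route[k]][route[k + 1]])
    return trust_list
```

A then prepends its own stored value for the first hop (T_A(C) in the example) before judging the route's validity.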
If there is no valid route from A to B, A cannot calculate its trust value for B. In order to prevent certain attacks (see below), A is obliged to regard B as a potentially malicious node. Given that malicious nodes are rare, this situation is unlikely to occur frequently. If there does exist at least one valid route from A to B, then we can calculate the trust value of node A for node B based on the weighted average of the trust values of the nodes preceding node B on all valid
routes for node B, and the weight of each route is based on the trust rating of all the intermediate nodes on each valid route. Suppose the route R is a valid route from A to B, denoted by: N0,R → N1,R → N2,R → ... → Ni−2,R → Ni−1,R → Ni,R where N0,R = A and Ni,R = B. A can then calculate the trust weight for route R by computing the geometric average of the trust values listed in the TCReply in route R (all these trust values are above TA∗ given that R is a valid route):

W_{A,B}(R) = \left( \prod_{j=0}^{i-2} T_{N_{j,R}}(N_{j+1,R}) \right)^{1/(i-1)}
A’s trust value for node B is computed as the weighted arithmetic average of the trust values of A for B on all valid routes, R1, R2, . . . , Rg, say:

T_A(B) = \frac{\sum_{h=1}^{g} W_{A,B}(R_h)\, T_{N_{i-1,R_h}}(B)}{\sum_{h=1}^{g} W_{A,B}(R_h)}
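As an illustrative sketch (the function names and the route representation are ours, not the paper's), the weight and trust value computations above can be written as:

```python
from math import prod  # Python 3.8+

def route_weight(route):
    """W_{A,B}(R): geometric average of the trust values for the intermediate
    nodes on a valid route. `route` is the list of per-hop trust values
    (T_{N_0}(N_1), ..., T_{N_{i-1}}(B)); the last entry is the trust of B's
    predecessor in B and is excluded from the weight. Assumes at least one
    intermediate node (neighbours are rated directly from A's own table)."""
    intermediate = route[:-1]
    return prod(intermediate) ** (1.0 / len(intermediate))

def trust_value(routes, threshold):
    """T_A(B): weighted arithmetic average of the last-hop trust in B over
    all valid routes. Returns None when no valid route exists, in which case
    B must be treated as potentially malicious."""
    valid = [r for r in routes if all(t > threshold for t in r)]
    if not valid:
        return None
    weights = [route_weight(r) for r in valid]
    return sum(w * r[-1] for w, r in zip(weights, valid)) / sum(weights)
```

For the route ACDEB of Figure 1, `route` would be `[T_A(C), T_C(D), T_D(E), T_E(B)]`.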
Node A only calculates its trust value for B when needed, and A will not store this trust value in its trust table. This is particularly appropriate for highly dynamic ad hoc networks.

4.3 Analysis
An underlying assumption for the scheme described above is that a node is considered malicious if it does not adhere to the protocols used in the network. Under this definition, two types of malicious node can be identified. Firstly, a node may be malicious all the time, i.e. it will behave maliciously when interacting with other nodes for all types of network traffic; it may also be malicious in its behaviour with respect to the trust model by sending incorrect trust values to requesting nodes. This type of malicious node behaviour can be detected by its neighbours, and thus we can expect that an honest neighbour will maintain a low trust value for such a node. Alternatively, a node can behave honestly for all network interactions, and only behave maliciously with respect to trust model functionality. In our scheme, malicious nodes of this type will not be detected by other nodes, and the calculation of trust values will potentially be affected by these nodes. This is therefore an example of a vulnerability in our trust model approach. Hence, in the analysis below, we assume that all malicious nodes are of the first type, and we can assume that a misbehaving node will be detected by its neighbour nodes. The trust value of any node A for any other node B in the network can be calculated. We claim that the existence of a small number of malicious nodes will not affect the calculation of the trust values. First note that, if B is A’s neighbour node, A calculates its trust value based on information it gathers itself, which will not be affected by the existence of malicious nodes. Otherwise, A calculates
Improving IP Address Autoconfiguration Security
91
its trust value for B based on the trust values listed in the TCReply in one or more routes from A to B. If there are malicious nodes in a route R from A to B, then there will be a unique malicious node Nk,R ‘closest’ to A on this route; all nodes on this route that are closer to A can therefore be trusted not to modify the trust value list. Consider the route R: A = N0,R → · · · → Nk−1,R → Nk,R → Nk+1,R → · · · → Ni,R = B where Nk,R is malicious and Nk−1,R is honest. When the TCReply message from B to A is forwarded to node Nk,R by node Nk+1,R, node Nk,R appends TNk,R (Nk+1,R) to the trust value list and sends this TCReply message to node Nk−1,R. Since node Nk,R is a malicious node, it may deliberately modify (lower or raise) any of the trust values in the trust value list, i.e. any of (TNk,R (Nk+1,R), TNk+1,R (Nk+2,R), · · ·, TNi−1,R (Ni,R)). Moreover, other malicious nodes on this route may collude and raise each other’s trust values. However, TNk−1,R (Nk,R) should be very low, since the maliciousness of Nk,R will be detected by its honest neighbour node Nk−1,R. When A receives the TCReply message, A will regard this route as invalid, and will not use this route to calculate its trust value for B. Hence all intermediate nodes in a valid route must be honest. The number of malicious nodes that our trust model can tolerate varies depending on the network topology. Our scheme requires at least one valid route from A to B in order to calculate the trust value of A for B. If, at any time, no valid route from A to B is found, A cannot calculate a trust value for B. In this case, if A were to regard B as an honest node, a malicious node B could attack our trust model by modifying the route record when B receives a request message, or by simply ignoring the request message, to prevent A from finding a valid route to B. Thus, as mentioned in Section 4.2, if A cannot calculate a trust value for B, then A must treat B as a potentially malicious node.
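The argument that a tamperer cannot conceal itself can be walked through with a toy example (the trust values, node names, and list-building function are hypothetical, for illustration only): a malicious D may rewrite every value already in the list, including its own entry, but T_C(D) is appended by C only after D has forwarded the reply, so D never controls it:

```python
THRESHOLD = 0.5

def forward_tcreply(route_back, trust, tamper=None):
    """Build the trust value list as the TCReply travels from B towards A.
    route_back: forwarding order, e.g. ['E', 'D', 'C'] for B -> E -> D -> C -> A.
    trust[x][y]: node x's trust value for node y.
    tamper: a malicious node that rewrites every value it sees to 1.0."""
    values = []
    prev = 'B'                                 # node the reply was received from
    for node in route_back:
        if node == tamper:
            values = [1.0] * len(values)       # forge all downstream values
            values.append(1.0)                 # and lie in its own entry
        else:
            values.append(trust[node][prev])   # honest nodes append truthfully
        prev = node
    return values

# C has observed D misbehaving, so T_C(D) is low.
trust = {'E': {'B': 0.9}, 'D': {'E': 0.8}, 'C': {'D': 0.2}}
values = forward_tcreply(['E', 'D', 'C'], trust, tamper='D')
# The forged entries look perfect, but T_C(D) = 0.2 is below the threshold,
# so A rejects the whole route as invalid.
route_valid = all(v > THRESHOLD for v in values)
```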
If this trust model is used in the scheme described in Section 3, A adds B to its blacklist. However, a MANET is highly dynamic; A can recalculate its trust value for B and update its blacklist whenever a new valid route from A to B is found as the topology of the network changes. All honest nodes will adhere to our trust model and make sure that the TCReply messages are sent back correctly. The model is vulnerable to Sybil attacks, where a node fraudulently uses multiple identities. Other methods can be used to prevent Sybil attacks; for example, it may be possible to use trusted functionality in a node to provide a unique node identity. A detailed discussion of such techniques is outside the scope of this paper. We have also ignored the problem posed by malicious nodes impersonating honest nodes. This problem cannot be completely overcome without using origin authentication mechanisms. We assume that, in environments where impersonation is likely to be a problem, an authentication mechanism is in place, i.e. fake TCReply messages sent by a malicious node will be detected. Many protocols and mechanisms for authentication and key management are available [1]. However, providing the necessary key management to support a secure mechanism of
this type is likely to be difficult in an ad hoc environment. As mentioned above, trusted functionality, if present in a device, may help with this problem. Possible solutions to these issues will be considered in future work.
5 Conclusion
This paper focuses on IP address autoconfiguration in adversarial circumstances. The main contribution of this paper is the use of a trust model to provide a number of enhancements that improve the security of requester-initiator schemes for IP address autoconfiguration in a MANET. It also gives a new trust model which can be used in these enhancements. Nevertheless, other trust models with different trust quantification methods can also be applied in our solutions, as long as they satisfy the properties described in Section 3.
References

1. Boyd, C., Mathuria, A.: Protocols for Authentication and Key Establishment. Springer-Verlag (2003)
2. Fazio, M., Villari, M., Puliafito, A.: Autoconfiguration and maintenance of the IP address in ad-hoc mobile networks. In: Proc. of the Australian Telecommunications, Networks and Applications Conference (2003)
3. Huang, C., Hu, H.P., Wang, Z.: Modeling Time-Related Trust. In: Jin, H., Pan, Y., Xiao, N., Sun, J. (eds.): Proceedings of the GCC 2004 International Workshops. Volume 3252 of Lecture Notes in Computer Science, Springer-Verlag (2004) 382–389
4. Mezzetti, N.: A Socially Inspired Reputation Model. In: Katsikas, S.K., et al. (eds.): Proceedings of the 1st European PKI Workshop. Volume 3093 of Lecture Notes in Computer Science, Springer-Verlag (2004) 191–204
5. Nesargi, S., Prakash, R.: MANETconf: Configuration of Hosts in a Mobile Ad Hoc Network. In: Proceedings of INFOCOM 2002, Volume 2, IEEE (2002) 1059–1068
6. Perkins, C.E., Royer, E.M., Das, S.R.: IP Address Autoconfiguration in Ad Hoc Networks. Internet Draft: draft-ietf-manet-autoconf-00.txt (2002)
7. Johnson, D.B., Maltz, D.A., Hu, Y.C.: The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR). IETF Draft: draft-ietf-manet-dsr-10.txt (2004)
8. Marti, S., Giuli, T.J., Lai, K., Baker, M.: Mitigating routing misbehavior in mobile ad hoc networks. In: Pickholtz, R., Das, S., Caceres, R., Garcia-Luna-Aceves, J.J. (eds.): Proceedings of the Sixth Annual International Conference on Mobile Computing and Networking, ACM Press (2000) 255–265
9. International Telecommunication Union: ITU-T Recommendation X.509 (03/2000): The Directory – Public-key and attribute certificate frameworks (2000)
10. Pirzada, A.A., McDonald, C.: Establishing Trust in Pure Ad hoc Networks. In: Proceedings of the 27th Australasian Computer Science Conference, Volume 26, Australian Computer Society, Inc. (2004) 47–54
11. Thomson, S., Narten, T.: IPv6 Stateless Address Autoconfiguration. IETF RFC 2462 (December 1998)
12. Yau, P., Mitchell, C.J.: Reputation methods for routing security in mobile ad hoc networks. In: Proceedings of SympoTIC ’03, IEEE Press (2003) 130–137
On-Demand Anycast Routing in Mobile Ad Hoc Networks

Jidong Wu

Zhejiang University, Hangzhou, Zhejiang, China 310027
Abstract. Anycast allows a group of nodes to be identified by an anycast address so that data packets destined for that anycast address can be delivered to one member of the group. An approach to anycast routing in mobile ad hoc networks is presented in this paper. The approach is based on the Ad Hoc On-demand Distance Vector Routing Protocol (AODV) and is named the AODV Anycast Routing Protocol (AODVA). AODVA extends AODV’s basic routing mechanisms, such as on-demand route discovery and destination sequence numbers, to anycast routing. Additional mechanisms are introduced to maintain routes for anycast addresses so that no routing loops will occur. Simulations show that AODVA achieves high packet delivery ratios and low delivery delay for data packets destined for anycast addresses. Keywords: Routing protocols, anycast, ad hoc networks, mobile networking.
1 Introduction
Anycast has been developed in the context of IPv6 [1]. With anycast, a group of nodes providing the same service in the network is identified via a so-called anycast address. Data packets destined for that anycast address are then delivered to any one node of this group. Many network applications could benefit from the use of anycast. For example, by using anycast, a node is able to communicate with one of several rendezvous points so that it can integrate itself into a multicast tree used to support multicast routing [2]. Anycast may have many potential applications in mobile ad hoc networks, which are infrastructure-less and self-organized. For example, anycast could be used to improve service resiliency or to discover available services dynamically. Mobile ad hoc networks are able to operate completely autonomously, without support from a fixed infrastructure. However, their topology and structure can change frequently. Consequently, services in mobile ad hoc networks should be constructed in a distributed fashion to avoid a single point of failure. Furthermore, replicated servers may also be of interest. For instance, some critical databases might be replicated in the network so that a high level of service availability can be provided in spite of intermittent connectivity and node failures in ad hoc networks. Anycast provides an elegant solution to the deployment of such distributed services: the nodes providing the same service can be treated simply as a group of nodes, and that group can be identified with an anycast address.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 93–102, 2005. © Springer-Verlag Berlin Heidelberg 2005
94
J. Wu
This paper presents an on-demand anycast routing protocol, which is based on the Ad hoc On-demand Distance Vector Routing Protocol (AODV) [3, 4] and is named the AODV Anycast Routing Protocol (AODVA). Additional mechanisms are introduced to deal with anycast routing, while the basic mechanisms of AODV still apply to AODVA as well. The remainder of the paper is organized as follows. AODVA is described in detail in Section 2. Simulation results are presented in Section 3. Section 4 discusses related work, and Section 5 concludes the paper.
2 AODV Anycast Routing Protocol (AODVA)
Similarly to AODV, AODVA assumes that communication among nodes in the network is bi-directional and need not be reliable. Each node has a unique address. Additionally, it is assumed that anycast addresses have no specific prefix, since anycast addresses in IPv6 are not distinguished from unicast addresses in the address format, as opposed to multicast addresses [1]. Nodes possessing the same anycast address, called anycast destinations for that anycast address, form a so-called anycast destination group of that address. A data packet destined for an anycast address, called an anycast data packet, can be delivered to any member of the anycast destination group identified by that anycast address. However, it is preferable that it is delivered to the nearest member according to the distance metric used by the routing protocol. The route along which an anycast data packet is forwarded is called an anycast route. An anycast destination used by the routing protocol for calculating the anycast route at a node is called an anycast peer of that node, and route calculation takes place independently at each node. Each node maintains an anycast routing table, which keeps the routing information for anycast addresses of interest. An entry in the anycast routing table contains an anycast address, the successor node of the anycast route for that address, the unicast address of the anycast peer, the anycast peer’s destination sequence number (DSN), the distance to the anycast peer (in hops), and the expiration time of the entry. The anycast address and successor fields are looked up when forwarding anycast data packets. AODVA uses a pair consisting of the unicast address of an anycast peer and this anycast peer’s destination sequence number to identify the freshness of the associated anycast routing information. Fig. 1 illustrates an example. Nodes A and B represent two anycast destinations with anycast address X.
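The table entry just described can be sketched as follows (the class and field names are illustrative, not taken from an AODVA specification):

```python
from dataclasses import dataclass
import time

@dataclass
class AnycastRouteEntry:
    anycast_addr: str   # the anycast address this entry is for
    successor: str      # next hop for packets destined for anycast_addr
    peer_addr: str      # unicast address of the anycast peer
    peer_dsn: int       # anycast peer's destination sequence number (DSN)
    distance: int       # distance to the anycast peer, in hops
    expires_at: float   # expiration time of the entry

class AnycastRoutingTable:
    def __init__(self):
        self._entries = {}                      # anycast_addr -> entry

    def next_hop(self, anycast_addr):
        """Forwarding looks up only the anycast address and successor fields."""
        e = self._entries.get(anycast_addr)
        if e is None or e.expires_at < time.time():
            return None                         # no valid route: triggers discovery
        return e.successor

    def install(self, entry):
        self._entries[entry.anycast_addr] = entry
```

The (peer_addr, peer_dsn) pair is what AODVA uses to judge the freshness of the routing information.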
Nodes J, K and L select node A as their anycast peer, while M and N select B as their anycast peer. Furthermore, in Fig. 1 each node is associated with a list of labels [w, x, y, z]. The first label w represents the anycast address in question; x is the anycast peer of the node; the anycast peer has a DSN equal to y and is located z hops away from the node. For example, node K has node A as its anycast peer for the anycast address X. Node K also knows at this moment that the anycast peer A has a DSN equal to 1 and is located 6 hops away from node K. Furthermore, upstream nodes and downstream nodes on an anycast route can be distinguished. For example, nodes K and L are located on the same anycast route, with node L located closer to the anycast destination A. Hence, node L is located downstream from node K, and node K is located upstream from node L.

Fig. 1. Anycast routes and anycast peers

2.1 Anycast Route Discovery and Maintenance
When a node has a data packet destined for an anycast address, and there is no known route or the route is broken, the node initiates a route discovery. If the node recognizes the address as an anycast address, it broadcasts an anycast route request (AREQ) message to all its neighbors. An anycast route request includes, besides the anycast address in question, the unicast address and the DSN of the known anycast destination. Otherwise, it broadcasts a route request (RREQ) message as if the anycast address were a unicast address. When an intermediate node receives a RREQ or AREQ for the first time and it has a valid route to the anycast address in question, it sends an anycast Route Reply (AREP) message back to the RREQ or AREQ originator in the following cases:
– if the DSN it stored is not smaller than the one contained in the AREQ, and both DSNs are associated with the same anycast destination;
– if it has a path to another anycast destination.
If neither of the conditions above is satisfied, the intermediate node records the node from which it received the RREQ or AREQ in order to set up a reverse route so that it can forward AREPs later. Furthermore, it stores the necessary information from the received RREQ or AREQ in order to detect duplicated RREQs or AREQs later. Then, the RREQ or AREQ is re-broadcast to its neighbors. If an anycast destination with the anycast address in question receives the RREQ or AREQ, it always replies with an AREP, which includes its unicast address, DSN, and the expiration time associated with this entry in the routing table. When a node receives an AREP, it updates its routing table as described below:
– The AREP reports an anycast route to the same anycast peer, but with a greater DSN, or with an equal DSN but a lower distance. In this case, as in
unicast routing in AODV, the node updates the routing information such as the DSN and the distance. The node transmitting the AREP is selected as the new successor, but the anycast peer is not changed. We call this case a peer refresh.
– The AREP has a DSN associated with another anycast peer, but the AREP was received from its current successor. In this case, the node updates only the information about the anycast peer, and does not change its successor node. We call such a change a peer revision.
– The AREP reports an anycast route with a shorter distance, and comes from a node other than its current successor. This route, however, leads to an anycast destination other than its current anycast peer. To determine the freshness of the reported routing information, the node initiates an anycast echo procedure. In the case of a positive result, a so-called peer switch is performed. As a result, the node has a new successor node as well as information about a new anycast peer.
If the node is an intermediate node, it modifies the AREP or creates a new AREP and sends the AREP to the next node along the reverse route. In the case of peer refreshes or peer revisions, it increases the distance contained in the received AREP. In the case of peer switches, it creates a new AREP using the new destination’s information. Finally, the AREP arrives at the RREQ or AREQ originator, and the route to the anycast destination is then established.

2.2 Anycast Echo Procedure
In the case of a peer switch, a node should ensure that no routing loop is introduced. A routing loop will occur if a node uses out-of-date routing information and selects an upstream node as its new successor on an anycast route. Therefore, an anycast echo procedure is used so that nodes on an anycast route can update the corresponding routing information. Four routing messages are defined in AODVA for the anycast echo procedure: Update Request (UREQ), Update Reply (UREP), Update Clear (UCLR), and Update Release (UREL). The node that initiates an anycast echo procedure sends a UREQ to the node that has reported a shorter route leading to another anycast destination. For convenience, we call the two nodes the echo-issuer and the echo-relay, respectively. The UREQ is destined for the anycast address in question. Two tables are used for anycast echo procedures: a pending reply table and an echo request table. The pending reply table is only used by the echo-issuer. An entry in the pending reply table contains the anycast address for which the anycast echo procedure is in progress, the address of the corresponding echo-relay, and an expiration timer for receiving a UREP. An echo request table is used by all nodes which receive and propagate UREQs. An entry contains the anycast address for which an anycast echo procedure is in progress, a list of backward nodes from which the node receives UREQs, and an expiration timer for receiving a UREP. The backward nodes determine the reverse route to the echo-issuer. This route is used for forwarding
UREPs. A node first checks its pending reply table and its echo request table before initiating an anycast echo procedure, so that it does not initiate multiple anycast echo procedures for the same anycast address.
Interaction of UREQs and UREPs. An echo-issuer sends a UREQ to an echo-relay at the beginning of the anycast echo procedure. It inserts an entry in its pending reply table. If an anycast address is listed in the pending reply table, the node does not change the routing entry for that address until it receives the corresponding UREP. When an intermediate node receives a UREQ, it adds a new entry to its echo request table if no anycast echo procedure for this anycast address is in progress. If a procedure is already in progress, it only needs to add the new node to the backward node list in the existing entry. Then the UREQ is forwarded. Likewise, the node does not change the routing entry for an anycast address listed in its echo request table until it receives the corresponding UREP. The UREQ finally arrives at an anycast destination. The destination increases its DSN, and returns a UREP along the reverse route. The UREP includes the anycast address in question, the unicast address of the anycast destination, its DSN, and the hop count to the destination. An intermediate node receiving the UREP updates its own routing table, increases the hop count contained in the UREP, forwards it to the next node in the backward node list, and deletes the corresponding entry in the echo request table. After the UREP arrives at the echo-issuer, the echo-issuer selects the echo-relay as its new successor for the anycast address in question, updates its routing table accordingly, and deletes the corresponding entry in the pending reply table. Furthermore, if the corresponding timer in the pending reply table expires, the anycast echo procedure is ended and the corresponding table entry is deleted.
Dealing with Stale Routing Information.
When a UREQ arrives at an echo-issuer, it determines whether it is the originator of this UREQ. If this is the case, it terminates the anycast echo procedure. Furthermore, it returns a UCLR to the node from which it received the UREQ. The UCLR contains similar information to a UREP. Nodes that receive a UCLR update their routing table and delete the corresponding entries in the echo request table. Fig. 2 shows an example of

Fig. 2. Detecting a path between the echo-issuer and echo-relay
such a case. Node T is the echo-issuer. Since T is located on the anycast route, it receives its own UREQ. T thus returns a UCLR to terminate the anycast echo procedure.
Dealing with Simultaneous Anycast Echo Procedures. More than one anycast echo procedure may be active for the same anycast address. If a node other than an echo-issuer receives another UREQ from a different echo-issuer, it can add the node from which it received the UREQ to the backward node list in its echo request table. The UREQ is not forwarded. The UREP received later must be forwarded to all nodes in the backward node list. However, if an echo-issuer receives a UREQ for the same anycast address from another echo-issuer, it returns a UREL to that echo-issuer, which, as a consequence, terminates its anycast echo procedure. A UREL contains the anycast address in question, the unicast address of the anycast peer, its DSN, and the distance to the anycast peer. The UREL travels along the reverse route over which the UREQ was forwarded, updating the routing tables and the echo request tables along the way.
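The UREQ handling rules of the anycast echo procedure can be condensed into a small sketch (the message tuples and class are invented for illustration, and the sketch covers only UREQ reception, not UREP/UCLR/UREL processing):

```python
class EchoNode:
    """Simplified per-node state for the anycast echo procedure."""

    def __init__(self, name):
        self.name = name
        self.pending_reply = set()   # addresses with an echo in progress (issuer side)
        self.echo_requests = {}      # anycast address -> backward node list

    def start_echo(self, anycast_addr):
        """Begin an echo procedure; the UREQ carries the issuer's identity."""
        self.pending_reply.add(anycast_addr)
        return ('UREQ', anycast_addr, self.name)

    def on_ureq(self, msg, from_node):
        _, addr, issuer = msg
        if issuer == self.name:
            # Our own UREQ came back: we sit on the anycast route ourselves,
            # so adopting the reported route would create a loop. Terminate
            # the procedure and clear state upstream with a UCLR.
            self.pending_reply.discard(addr)
            return ('UCLR', addr)
        if addr in self.pending_reply:
            # Another issuer's UREQ for an address we are probing: release
            # that issuer so its procedure terminates.
            return ('UREL', addr, issuer)
        if addr in self.echo_requests:
            # A procedure is already in progress: only record the backward
            # node; the UREQ is not forwarded again.
            self.echo_requests[addr].append(from_node)
            return None
        self.echo_requests[addr] = [from_node]
        return ('FORWARD', msg)
```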
3 Simulative Analysis of On-Demand Anycast Routing
In this section, simulation results for AODVA are presented and discussed. AODVA is implemented in the network simulator ns-2 as an extension of the AODV agent. The goal of the simulations is to evaluate the performance of AODVA in supporting anycast routing, and the following metrics are considered [5, 6]: packet delivery ratio, end-to-end delay of packets, and control overhead. The packet delivery ratio is the quotient of the number of packets received by the anycast destinations and the number of packets sent by the sources. This ratio reflects the effectiveness of the algorithm. The end-to-end delay of packets is the average time which packets take to reach anycast destination nodes; the delay of a packet is calculated as the time interval between the instant at which the packet is sent by a data source and the instant at which it is received by a data sink. The control overhead is the total number of routing packets sent in the simulation. Packets that are forwarded across multiple hops are counted on a per-hop basis and not on a per-path basis. We also consider the normalized control overhead, which is the average number of routing messages sent per data packet.

3.1 Simulation Configurations
The setup used in the simulations is comparable to the one used for performance evaluation of unicast routing protocols [5]. In each simulation experiment, 50 wireless nodes move over a 1500m x 300m space for 900 seconds of simulated time. Nodes move according to the model “random waypoint”[5], in which the characteristics of node movement are defined by two parameters: speed and pause time. In the simulations, the speed of node movement is randomly selected and is between 0.1 and 10m/second. Seven different pause times are selected: 0, 30, 60, 120, 300, 600, and 900 seconds. For each value of pause time, ten runs of simulations have been conducted. Constant bit rate (CBR) traffic with 20 sources
is used in the simulation, since CBR traffic is commonly used for evaluating routing protocols in the literature [5, 6]. Without loss of generality, only one anycast address is used in the simulation. Anycast destination groups with 3, 5, and 7 destinations (sinks) have been used in the simulations. For the purpose of comparison, an “ideal” algorithm, called “omniscient anycast routing” (OMNIACAST) here, has been implemented, and the corresponding simulations have been performed. In the OMNIACAST scheme, the node’s routing agent is able to access the global data structures used by the simulator. From these global data structures, the routing agent is able to read the information about anycast group membership and the network topology, and to calculate paths to anycast destinations directly. No routing messages need to be exchanged.

3.2 Discussion of Results
The results of both AODVA and OMNIACAST are plotted in Fig. 3. A number enclosed in parentheses indicates the number of anycast sinks. As seen from Fig. 3.a, AODVA achieves high packet delivery ratios. The ratios are above 95.5% in the simulations. The delivery ratio increases as the pause time increases, that is, the delivery ratio increases as the degree of mobility in the network decreases. The differences in delivery ratio between AODVA and OMNIACAST become smaller as the pause time increases. It can be observed from the figure that, even in the case of OMNIACAST, the delivery ratios are only between 99.43% and 99.99%, since mobility and wireless transmission collisions prevent a 100% delivery ratio. As seen from Fig. 3.b, as mobility in the network decreases, the average end-to-end delay decreases for AODVA. On the other hand, it is relatively stable for OMNIACAST. Similarly to the behavior of the delivery ratios, the differences in end-to-end delay between AODVA and OMNIACAST become smaller as the pause time increases. In the following, the impact of anycast group size is discussed. Fig. 4 shows the performance metrics as a function of both mobility and the size of the anycast group. As seen from Fig. 4.a, the packet delivery ratios of AODVA
Fig. 3. Simulation results with 20 flows, 5 anycast sinks: (a) packet delivery ratio; (b) packet delivery delay

Fig. 4. Impact of anycast group size: (a) packet delivery ratio; (b) packet delivery delay; (c) control overhead; (d) normalized control overhead
are high, namely, they are all above 92.1%. The packet delivery ratio increases as the size of the anycast group increases. The more anycast destinations there are, the closer the delivery ratio of AODVA is to that of OMNIACAST. As seen from Fig. 4.b, the average end-to-end delay decreases quickly as the size of the anycast group increases. For example, the delay in the case of 5 anycast destinations is dramatically reduced compared with that in the case of 3 anycast destinations. This is caused by the reduction in network load. First, as shown in Fig. 4.c, the amount of routing messages decreases as the number of anycast sinks increases. Second, it was observed in the simulations that more data packets reach the anycast destinations through short paths as the size of the anycast group increases. Fig. 4.c and Fig. 4.d show that an increase in the size of the anycast group reduces the control overhead. The main reason is that fewer route discoveries are issued when the size of the anycast group increases. It was observed in the simulations that route breaks occurred more frequently in the cases of a smaller anycast group; the number of route requests sent in the case of 3 sinks is about 150% of that in the case of 5, which is in turn nearly 150% of that in the case of 7 anycast sinks. The two figures also show that the control overhead tends to decrease as the mobility of nodes decreases, just as expected. It is interesting to see that in the case of stationary scenarios with 5 or 7 anycast sinks, the normalized control overheads are near or even under 1. The corresponding lines in the two figures have similar shapes. This is not a coincidence. In the calculation of the normalized
control overhead, the numbers of received data packets are nearly equal, as in all three cases there are 20 CBR sources. As shown in Fig. 4, the simulation results with AODVA and three anycast sinks exhibit some irregularities at the point of pause time 60 seconds. This is due to the movement scenarios used in the simulation. It was observed that there are fewer temporarily unreachable nodes in the movement scenarios with 60 seconds of pause time than in those with 120 seconds, although the former exhibit more link breaks than the latter. Consequently, in the case of three anycast sinks, the AODVA routing performance at the point of pause time 60 seconds is a bit better than that at the point of pause time 120 seconds. But with an increase in the size of the anycast group (e.g., with 5 or 7 anycast sinks), the simulation results are less influenced by the irregularity contained in the movement scenarios.
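The metrics defined at the start of Section 3 can be sketched as a post-processing step over a packet trace (the trace record format here is invented for illustration; it is not ns-2's trace format):

```python
def metrics(trace):
    """Compute (delivery ratio, average delay, normalized control overhead)
    from a list of records like
    {'event': 'send'|'recv', 'kind': 'data'|'routing', 'time': float, 'id': int}.
    Routing messages are counted per hop: each hop's transmission is one
    'send' record, matching the per-hop accounting described above."""
    sent = {r['id']: r['time'] for r in trace
            if r['event'] == 'send' and r['kind'] == 'data'}
    recvd = {r['id']: r['time'] for r in trace
             if r['event'] == 'recv' and r['kind'] == 'data'}
    routing_msgs = sum(1 for r in trace
                       if r['event'] == 'send' and r['kind'] == 'routing')
    delivery_ratio = len(recvd) / len(sent)
    # Average over delivered packets of (receive time - send time).
    delay = sum(recvd[i] - sent[i] for i in recvd) / len(recvd)
    normalized_overhead = routing_msgs / len(recvd)
    return delivery_ratio, delay, normalized_overhead
```

A normalized overhead under 1, as observed for the stationary scenarios, simply means fewer routing messages than delivered data packets.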
4 Related Work
In [7], the members of an anycast group are treated as one “virtual node”, so that anycast routing can be integrated and unified into unicast routing mechanisms. With such an approach, in the case of AODV, a “virtual destination sequence number” for a virtual node has to be agreed on or coordinated by all anycast group members. One way to do this was proposed in [8]. Each anycast destination node maintains, in addition to the destination sequence number, an anycast sequence number for each anycast address it possesses. When answering route requests, it updates its local anycast sequence number so that the anycast sequence number is greater by one than the maximum of its local anycast sequence number and the one contained in route requests. The approach proposed in this paper, in contrast, does not need a common destination sequence number for an anycast address. An extension of AODV for anycast was proposed in [9]. In that extension, route request messages are extended to include anycast group IDs, and route discovery for anycast groups is conducted similarly to that for unicast addresses. Compared with that extension, the approach developed in this paper provides special procedures to ensure the loop freedom of anycast routes.
5
Conclusion
Anycast has many potential applications in mobile ad hoc networks. In this paper an ad hoc on-demand anycast routing protocol called AODVA is proposed. AODVA extends the well-established ad hoc routing protocol AODV for supporting anycast routing. The simulations have shown that AODVA achieves good performance.
Acknowledgments. The main work of this paper was done at the Institute of Telematics, University of Karlsruhe, Germany. The work was supported by the IPonAir project funded by the German Federal Ministry of Education and Research.
102
J. Wu
References

1. Hinden, R., Deering, S.: IP version 6 addressing architecture. RFC 2373 (1998)
2. Kim, D., Meyer, D., et al.: Anycast rendezvous point (RP) mechanism using protocol independent multicast (PIM) and multicast source discovery protocol (MSDP). RFC 3446 (2003)
3. Perkins, C.E., Royer, E.M.: Ad hoc on-demand distance vector routing. In: Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, LA (1999) 90–100
4. Perkins, C., Belding-Royer, E., Das, S.: Ad hoc on demand distance vector (AODV) routing. RFC 3561 (2003)
5. Broch, J., Maltz, D.A., Johnson, D.B., Hu, Y.C., Jetcheva, J.: A performance comparison of multi-hop wireless ad hoc network routing protocols. In: Proceedings of ACM/IEEE MOBICOM (1998) 85–97
6. Das, S.R., Perkins, C.E., Royer, E.M.: Performance comparison of two on-demand routing protocols for ad hoc networks. In: Proceedings of the IEEE Conference on Computer Communications (INFOCOM), Tel Aviv, Israel (2000) 3–12
7. Park, V., Macker, J.: Anycast routing for mobile services. In: Proceedings of the Conference on Information Sciences and Systems (CISS ’99) (1999)
8. Gulati, V., Garg, A., Vaidya, N.: Anycast in mobile ad-hoc networks. Course project, http://ee.tamu.edu/~vivekgu/courses/cs689mobile_report.ps (2001)
9. Wang, J., Zheng, Y., Jia, W.: An AODV-based anycast protocol in mobile ad hoc networks. In: Proceedings of the 14th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2003). Volume 1 (2003) 221–225
MLMH: A Novel Energy Efficient Multicast Routing Algorithm for WANETs*

Sufen Zhao1, Liansheng Tan1, and Jie Li2

1 Department of Computer Science, Central China Normal University, Wuhan 430079, PR China
{S.Zhao, L.Tan}@mail.ccnu.edu.cn
2 Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba Science City, Japan
[email protected]
Abstract. Energy efficiency is of vital importance for wireless ad hoc networks (WANETs). In order to keep the nodes active as long as possible, it is essential to maximize the lifetime of a given multicast tree. At the same time, hop count is an important metric for WANETs, and an efficient routing protocol should keep the hop count low. The problem of generating an optimal energy efficient routing for WANETs is NP-hard; a workable heuristic solution is therefore highly desirable. To capture the tradeoff between the lifetime and the hop count in the routing of a multicast tree in WANETs, this paper defines a new metric termed the energy efficiency metric (EEM) function. Theoretical analyses show that it fully characterizes the energy efficiency of WANETs. A distributed routing algorithm called Maximum Lifetime and Minimum Hop-count (MLMH) is then proposed, with the aim of extending the lifetime while minimizing the maximal hop count of a source-based multicast tree in WANETs. Simulation results give sound evidence that our algorithm successfully achieves a balance between the hop count and the lifetime of the multicast tree.
1 Introduction

In a wireless ad hoc network (WANET), each node in the network cooperates to provide networking facilities for various distributed tasks, and nodes are usually powered by a limited source of energy. The set of network links and their capacities is not pre-determined, because it depends on factors such as the distance between nodes, transmission power, hardware implementation, and environmental noise; in this respect a WANET differs from a wired network. Because the devices depend on battery power, it is important to find energy efficient routing paths for transmitting packets. In order to find optimal routing paths for WANETs, it is necessary to consider not only the cost of transmitting a packet, but also that of receiving, and even discarding,*
* The research of Sufen Zhao and Liansheng Tan has been supported by the National Natural Science Foundation of China under Grant No. 60473085. The work of Jie Li has been supported in part by JSPS under a Grant-in-Aid for Scientific Research.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 103 – 112, 2005. © Springer-Verlag Berlin Heidelberg 2005
a packet. From this viewpoint, the proportions of broadcast and point-to-point traffic used by the protocol should be considered carefully. However, in many ad hoc networks, the metric of actual interest is not the transmission energy of individual packets, but the total operational lifetime of the network. From a conceptual viewpoint, power-aware routing algorithms attempt to distribute the transmission load over the nodes in a more egalitarian fashion, even though such distribution drives up the total energy expenditure. If the total energy of the nodes is used up, links will break down and data transmission will fail, so maximizing the total operational lifetime of the network is more important. On the other hand, the hop count of the multicast tree also plays an important role in WANETs. It is well known that an ad hoc wireless network is unreliable and its links can break down at any time; a longer routing path takes more risks. Wireless links typically perform link-layer re-transmissions, and therefore choosing a path with a very large number of short hops can be counter-productive. In fact, as the hop count increases, the resulting increase in the total number of re-transmissions cannot be neglected. A good routing protocol should therefore have short routing paths. The energy-efficiency issue in wireless network design has received significant attention in the past few years, and several heuristic energy-efficient multicasting algorithms have been proposed. These algorithms include the Shortest Path Tree (SPT) algorithm, the Minimum Spanning Tree (MST) algorithm, and the Broadcasting Incremental Power (BIP) algorithm [5]. The MST heuristic applies Prim’s algorithm to obtain an MST, and then broadcasts messages over the tree rooted at the source node. The SPT heuristic applies Dijkstra’s algorithm to obtain a shortest path tree rooted at the source node. The BIP heuristic is a different version of Dijkstra’s algorithm for SPT.
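As a sketch of the MST heuristic mentioned above, Prim's algorithm can be run over a link-cost graph; the toy topology and cost values below are illustrative, not taken from [5]:

```python
import heapq

def prim_mst(nodes, cost, source):
    """Prim's algorithm: grow a minimum spanning tree from `source`.
    `cost[(u, v)]` is the symmetric link cost (e.g., transmission energy).
    Returns the tree as a dict mapping child -> parent."""
    parent = {source: None}
    # heap entries: (link cost, tree node, candidate node)
    heap = [(c, u, v) for (u, v), c in cost.items() if u == source]
    heapq.heapify(heap)
    while heap and len(parent) < len(nodes):
        c, u, v = heapq.heappop(heap)
        if v in parent:
            continue
        parent[v] = u                       # attach v to the tree via u
        for (a, b), cc in cost.items():     # consider edges leaving v
            if a == v and b not in parent:
                heapq.heappush(heap, (cc, a, b))
    return parent

# Toy 4-node network rooted at source "s"; costs are made symmetric below.
cost = {("s", "a"): 2, ("s", "b"): 5, ("a", "b"): 1, ("a", "c"): 4, ("b", "c"): 1}
cost.update({(v, u): c for (u, v), c in list(cost.items())})
tree = prim_mst({"s", "a", "b", "c"}, cost, "s")
assert tree == {"s": None, "a": "s", "b": "a", "c": "b"}
```

The SPT heuristic differs only in the priority used: Dijkstra's algorithm keys the heap on the total path cost from the source rather than on the single link cost.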
All these minimum total energy protocols can result in rapid depletion of energy at intermediate nodes, possibly leading to the network getting partitioned and interruption of the multicast service. In [2], Wang and Gupta proposed an algorithm called L-REMiT to maximize lifetime for WANETs. L-REMiT extends the lifetime of a multicast tree by extending the lifetime of the bottleneck node in the tree. Simulation results presented in [2] show the promising performance of L-REMiT. However, it does not consider hop count while switching the parent of the bottleneck node’s child. Since designing an energy efficient routing algorithm is NP-hard [3], an efficient heuristic solution is highly desirable. In this paper, we focus on multicast routing protocols for WANETs, and propose a novel method called Maximum Lifetime and Minimum Hop-count (MLMH) for maximizing the lifetime of source-based multicast trees in WANETs. The MLMH algorithm defines a new metric termed the energy efficiency metric (EEM) function. The EEM function is the weighted summation of the relative increments of lifetime and hop count, and it is shown to fully characterize the energy efficiency of WANETs. Theoretical results show that our algorithm improves on the SPT protocol significantly. The remainder of this paper is organized as follows. Section 2 describes the system model. Section 3 presents the energy efficiency problem definition. Section 4 proposes the novel algorithm MLMH to maximize the lifetime of a source-based multicast tree. Section 5 presents the simulation results that demonstrate the superiority of our algorithm. Finally, the conclusion is given in Section 6.
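The EEM idea described in this introduction — trading relative lifetime gain against relative hop-count growth — can be illustrated as follows. The weight `alpha` and the exact functional form here are assumptions for illustration only, since the precise EEM definition is given later in the paper:

```python
def eem(lifetime_gain: float, hop_increase: float, alpha: float = 0.5) -> float:
    """Illustrative energy efficiency metric: a weighted summation of the
    relative increment in tree lifetime and the relative increment in hop
    count. A larger value indicates a more attractive tree modification.
    `alpha` (an assumed parameter) balances the two objectives."""
    return alpha * lifetime_gain - (1.0 - alpha) * hop_increase

# A change that raises lifetime by 30% at the cost of 10% more hops
# scores better than one raising lifetime by 10% for 30% more hops.
assert eem(0.30, 0.10) > eem(0.10, 0.30)
```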
2 System Model

The wireless ad hoc network consists of several mobile hosts (nodes) and routers. The topology of the wireless ad hoc network is modeled by an undirected graph G = (V, E, Wx, We), where V is the set of nodes and E is the set of full-duplex communication links, with |V| = n and |E| = m. Wx: V → R+ is the node weight function, and We: E → R+ is the edge weight function. Here, we assume Wx defines the residual energy of node x, and We defines the Euclidean distance between two neighbor nodes i and j, denoted di,j; node i and node j are within the transmission range of each other when they keep their power at this level. Let Pi,j be the minimum energy needed on the link between nodes i and j for a data packet transmission. Here, we assume all packets are of the same size. Therefore, from [1] we know that

Pi,j = K (di,j)^θ + C,
(1)
where K is a constant depending on the properties of the antenna, θ is the propagation loss exponent, whose value is typically between 2 and 4, and C is a fixed component that accounts for the overheads of electronics and digital processing. For long range radios, C

50 Erlangs) or TA < TB, QVertex is higher than QBorder. But, when TB < TA < 50 Erlangs, the border placement is better. Note that 50 Erlangs corresponds to about 10% blocking probability, which is far beyond the normal network operation range. If we assume cell A is the hot spot, i.e., TA > TB, then the border placement approach is usually a good choice for seed ARSs. So, we may have the first rule of thumb as follows.
Rule of Thumb 1: Place the seed ARSs at cell borders.

In addition, it has been shown in [7] that, for an n-cell system, the maximum number of seed ARSs needed for each shared border of two cells is 3n − 4√n − 4.

Seed ARS vs. grown ARS. If additional ARSs are available, there are two approaches to placing them. One is to place them as seeds, according to what we discussed in Sec. 3.3, without any overlap with the existing ARSs. This approach intends to maximize the total effective ARS coverage. The other way is to let them grow from the seeds which are already there (see ARS 5 shown in Figure 3, assuming the border placement approach is adopted). The grown ARS is required to be within the coverage of at least one existing ARS so that they can relay traffic to each other. Thus, their coverage overlaps within some area, and not all of the area covered by the grown ARS will result in the increase
696
H. Wu et al.
[Figures: two surface plots of the quality of ARS coverage (Q) versus the traffic intensities TA and TB (Erlangs), comparing the vertex and border approaches, and the seed and grown ARS approaches.]

Fig. 4. Quality of ARS coverage: vertex placement vs. border placement

Fig. 5. Quality of ARS coverage: seed ARS vs. grown ARS
of the system’s Q value. To minimize the overlapped area and maximize the effective coverage of the grown ARS, we place it just within the transmission range of the existing seed ARS (i.e., we let the distance between the two ARSs be as large as possible while they can still communicate with each other). Accordingly, we can compute the additional coverage of the grown ARS (i.e., its coverage minus the overlapped area), which is

(πr² − 2((1/3)πr² − (√3/4)r²)) / (πr²) · S ≈ 0.61S,

where r is the radius of an ARS coverage area, and the increased Q value is
QGrow in ≈ 0.61 · S · Ta · (1 − bB )
(4)
assuming it grows inward cell A. Comparing it with QBorder (i.e., placing a new seed ARS at a cell border) in Figure 5, we can see that only when TB is very low and TA is much higher than TB does the grown ARS approach perform better than the seed ARS approach; therefore we have the second rule of thumb.

Rule of Thumb 2: Place an ARS as a seed if it is possible.

The direction of growing. If the additional ARSs cannot be placed as seeds because there is no free space at the shared boundaries of cells, we have to let them grow from some of the seeds. An ARS can grow inward cell A (see ARS 5 in Figure 3) or outward cell A (see ARS 6 in Figure 3). Both of them have the same ARS coverage (S). But since the ARSs cover different cells with different traffic intensities, they may result in different Q values. When an ARS grows inward cell A, its Q value (QGrow_in) is given in Equation 4. When an ARS grows outward cell A, its Q value is
(5)
Similarly, we compute the Q values of these two approaches and conclude the third rule of thumb.

Rule of Thumb 3: Grow an ARS toward the cell with the higher traffic intensity.

The three rules of thumb may serve as guidelines for ARS placement. More specifically, to optimize the system performance, the operators may first place ARSs at
Quality of Coverage (QoC) in Integrated Heterogeneous Wireless Systems
697
the shared borders of the cells. If there are additional ARSs, they may let them grow in the cell with higher traffic load. However, depending on the size of the cells and the coverage of the ARSs, there may be some exceptions. For example, when a number of seed ARSs have been deployed in a system, placing another seed ARS later may result in some overlap with the existing ARSs, and therefore result in a lower Q value. In this case, growing the additional ARSs may result in a better performance. Similarly, when there are already many ARSs growing in the cells with high traffic intensity, placing an ARS in the neighboring cells may be more beneficial.
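The 0.61 factor in Equations (4) and (5) follows from the geometry of two equal disks of radius r whose centers are one radius apart; a quick numerical check (the traffic and blocking values at the end are illustrative, with S normalized to 1):

```python
import math

def grown_ars_effective_fraction() -> float:
    """Two ARS disks of radius r with centers at distance d = r overlap in a
    lens of area 2*((1/3)*pi*r^2 - (sqrt(3)/4)*r^2); the grown ARS therefore
    adds only (pi*r^2 - lens)/(pi*r^2) of a full coverage area S.
    The radius r cancels, so any value works."""
    r = 1.0
    lens = 2.0 * (math.pi * r**2 / 3.0 - math.sqrt(3.0) * r**2 / 4.0)
    return (math.pi * r**2 - lens) / (math.pi * r**2)

frac = grown_ars_effective_fraction()
assert abs(frac - 0.61) < 0.005   # matches the ~0.61 in Eqs. (4) and (5)

# With the same fraction, growing toward the busier cell wins, as Rule 3 says
# (illustrative values: T_A = 45, T_B = 36 Erlangs, b_A = 5%, b_B = 2%).
Ta, Tb, bA, bB = 45.0, 36.0, 0.05, 0.02
q_in, q_out = frac * Ta * (1 - bB), frac * Tb * (1 - bA)
assert q_in > q_out
```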
4 Simulation and Discussions

To evaluate the performance of the various ARS placement approaches in terms of the system-wide (i.e., weighted average) request blocking and dropping probability, we have developed a simulation model using the GloMoSim simulator [10] and the PARSEC language [11]. The simulated system includes a cell A and six neighboring cells (see Figure 6), which are modelled as hexagons with a center-to-vertex distance of 2 km. We have assumed that 50 units of bandwidth are allocated per cell, and for simplicity, each connection requires 1 unit of bandwidth. In order to obtain converged statistical results, we have simulated 6,400 MHs uniformly distributed in the system, and run the simulation for 100 hours for each traffic intensity before collecting the results. The traffic intensity is measured in Erlangs, which is the product of the request arrival rate (Poisson distributed) and the holding time (exponentially distributed). In addition, we have used a location-dependent traffic pattern by default. More specifically, assuming cell A is the hot spot, the traffic intensity in cell B is about 80% of that in cell A. Six ARSs with a 500 m transmission range have been simulated in four scenarios (see Figure 6 (a)-(d)), which implement the different ARS placement approaches described in Section 3. Figures 6(a) and (b) show six seed ARSs placed according to the border and the vertex approaches, respectively, while in Figures 6(c) and (d), there are 3 seed ARSs placed at the borders and 3 additional ARSs growing from the seeds inward and outward cell A, respectively. We have obtained the Q values of all six ARSs for the different placement approaches from the simulation, and compared them with the analytical results in Figures 7-8. As we can see, the analytical results (in Figure 7) and simulation results (in Figure 8) show a very
[Figure: four hexagonal-cell layouts, cell A surrounded by six cells B, showing the ARS positions for each placement approach.]

Fig. 6. Four scenarios of ARS placement in the simulated system: (a) Border, (b) Vertex, (c) Grow-in, (d) Grow-out
[Figures: Q value vs. traffic intensity in cell A (40-50 Erlangs) for the Border, Vertex, Grow-inward, and Grow-outward approaches; Fig. 8 also distinguishes MH speeds of 0 m/s and 3 m/s.]

Fig. 7. Analytical results; Tb = 0.8Ta

Fig. 8. Simulation results; Tb = 0.8Ta
similar trend. The reason that the Q values obtained from the simulation are usually higher than those from the analysis is that the blocking probability without relaying is used in Equations 2 through 5, which is higher than the real blocking probability in iCAR (with relaying). In addition, as shown in Figure 8, the mobility of MHs has little effect on the Q values (although it does affect the connection dropping probability, as shown later). In all cases within the normal operation range of an iCAR system (e.g., the traffic intensity of cell A is from 40 to 50 Erlangs), the grown ARSs yield lower Q values than the seed ARSs, as we expected. However, the Q values of the border and the vertex approaches are very close, and when the traffic intensity is high, the vertex approach may result in higher Q values. This is because, even though the ARSs in the border approach still cover more active connections in such a situation, a large fraction of the covered connections is nonrelayable because of the high blocking probability in the neighboring cells. On the other hand, the covered connections in the vertex approach may be relayed to either of the two neighboring cells, and therefore yield higher Q values. As we discussed earlier, the real blocking probability of the cells in iCAR is lower than that used in the analysis; thus the intersection point of the curves representing the Q values of the border and the vertex approaches in the simulation occurs at a higher traffic intensity than that in the analysis (compare Figures 7 and 8).
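The no-relaying blocking probabilities used above (e.g., about 10% blocking at 50 Erlangs offered to a cell with 50 units of bandwidth) follow from the classical Erlang-B formula; a small check using the standard numerically stable recursion:

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability of an M/M/c/c loss system via the Erlang-B
    recursion: B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

# ~10% blocking at 50 Erlangs offered to 50 channels, as stated in the text.
assert 0.09 < erlang_b(50.0, 50) < 0.12
# Well inside the normal operating range the blocking is much lower.
assert erlang_b(40.0, 50) < 0.03
```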
More specifically, the border ARS placement has the lowest blocking rate among all of these approaches, which may be kept below 2% (the acceptable level) even when the traffic intensity is as high as 50 Erlangs. As a comparison (though the results are not shown), if six ARSs are randomly placed in the seven cells of the system (with the Q value being close to 0), the request blocking rate is from about 2% to above 10% when the traffic intensity of cell A increases from 40 to 50 Erlangs. Although the MHs mobility may affect the dynamics in relaying capability of an iCAR system due to switch-over, our results indicate that the blocking rates in all ARS
[Figures: average request blocking/dropping rate vs. traffic intensity in cell A (40-50 Erlangs) for the Random, Grow-outward, Grow-inward, Border, and Vertex approaches; Fig. 10 distinguishes MH speeds of 1.5 m/s and 15 m/s.]

Fig. 9. Call blocking rates for different ARS placement approaches, MH speed = 0 m/s

Fig. 10. Call dropping rates for different ARS placement approaches
placement approaches increase very little with MH mobility. On the other hand, MH mobility affects the connection dropping probability significantly (see Figure 10²). More specifically, the dropping probability increases from 0 to the order of 10⁻³ and 10⁻², respectively, when the maximum MH moving speed increases from 0 m/s to 1.5 m/s and 15 m/s. Although the seed ARS placement approaches (i.e., the border and the vertex approaches) still perform better than the grown ARS placement approaches in terms of connection dropping rate, the difference among them is not as obvious as that in terms of the connection blocking rate. In addition, note that the vertex approach has a lower connection dropping rate than that of the border approach. This is because, when an active MH moves from one cell (i) to another cell (j), although there is the same probability that the MH is covered by ARSs at the moment of crossing the shared border of the two cells, the ARS coverage in cell j is 2S/3 in the vertex approach, which is larger than that in the border approach (S/2). The larger ARS coverage implies a longer time during which the ARS can support the MH via relaying, and consequently results in a lower connection dropping rate.
This is because the amount of CBW determines the amount of traffic that can be relayed from cell A to its neighboring cells and consequently becomes the performance bottleneck, and placing an ARS outside cell A will increase the total amount of CBW (used to relay traffic from cell A to cell B) available to the ARS cluster, while placing inside cell A will not. Nevertheless, in a real system, only the connection requests that would be blocked without relaying, which is a small portion (e.g. about 5% of total requests if the initial blocking rate is 5%), will be supported by relaying although the relayable traffic (i.e., the Q value) may be much higher than that, and thus 2
² For simplicity, we assume that there is no priority given to the hand-off attempts over new connection attempts.
the assumption of having enough relaying bandwidth is valid in most situations, and the presented rules of thumb will be good guidelines for ARS placement.
5 Conclusion

In this paper, we have addressed the location management issue in heterogeneous networks. In particular, we have defined a new performance metric called Quality of Coverage (QoC), and used iCAR as a representative system. We have compared various placement strategies in terms of their QoC values, and provided three rules of thumb as guidelines for the placement of ARSs in iCAR. The performance of the proposed ARS placement strategies has been evaluated via both analysis and simulations. We expect that the concept of QoC, along with the results and guidelines, will be useful not only for managing ARS placement (and the limited mobility of ARSs) in iCAR, but also for the planning of, and routing in, other integrated heterogeneous wireless systems, including wireless systems with controllable mobile base stations or access points.
References

1. Y.D. Lin and Y.C. Hsu, “Multihop cellular: A new architecture for wireless communication,” in IEEE INFOCOM 2000, pp. 1273–1282, 2000.
2. http://www.3gpp.org/.
3. X.-X. Wu, B. Mukherjee, and S.-H. G. Chan, “MACA – an efficient channel allocation scheme in cellular networks,” in IEEE Global Telecommunications Conference (Globecom’00), vol. 3, pp. 1385–1389, 2000.
4. H. Wu, C. Qiao, S. De, and O. Tonguz, “Integrated cellular and ad-hoc relay systems: iCAR,” IEEE Journal on Selected Areas in Communications, special issue on Mobility and Resource Management in Next Generation Wireless Systems, vol. 19, no. 10, pp. 2105–2115, Oct. 2001.
5. G. Stuber, Principles of Mobile Communication. Kluwer Academic Publishers, 1996.
6. R. Kohno, R. Meidan, and L. Milstein, “Spread spectrum access methods for wireless communications,” IEEE Communications Magazine, vol. 33, no. 1, pp. 58–67, 1995.
7. C. Qiao and H. Wu, “iCAR: an integrated cellular and ad-hoc relay system,” in IEEE International Conference on Computer Communication and Networks, pp. 154–161, 2000.
8. A. Viterbi, CDMA: Principles of Spread Spectrum Communication. Addison-Wesley, 1996.
9. R. L. Freeman, Telecommunication System Engineering. John Wiley & Sons Inc., 1996.
10. X. Zeng, R. Bagrodia, and M. Gerla, “GloMoSim: A library for parallel simulation of large-scale wireless networks,” in Proc. Workshop on Parallel and Distributed Simulation, pp. 154–161, 1998.
11. R. Bagrodia, R. Meyer, M. Takai, Y. Chen, X. Zeng, J. Martin, B. Park, and H. Song, “Parsec: A parallel simulation environment for complex systems,” Computer, pp. 77–85, Oct. 1998.
ACOS: A Precise Energy-Aware Coverage Control Protocol for Wireless Sensor Networks

Yanli Cai1, Minglu Li1, Wei Shu2, and Min-You Wu1,2

1
Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
2 Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, New Mexico, USA
{cai-yanli, li-ml, wu-my}@cs.sjtu.edu.cn, [email protected]
Abstract. A surveillance application requires sufficient coverage of the protected region while minimizing the energy consumption and extending the lifetime of the sensor network. This can be achieved by putting redundant sensor nodes to sleep. In this paper, we propose a precise and energy-aware coverage control protocol, named Area-based Collaborative Sleeping (ACOS). The ACOS protocol, based on the net sensing area of a sensor, controls the mode of sensors to maximize the coverage, minimize the energy consumption, and extend the lifetime of the sensor network. The simulation shows that our protocol achieves better coverage of the surveillance area while waking fewer sensors than other state-of-the-art sleeping protocols.
1 Introduction

A wireless sensor network consists of a set of inexpensive sensors with wireless networking capability [1]. Applications of wireless sensor networks include battlefield surveillance, environment monitoring, and so on [2]. As sensors may be distributed arbitrarily, one of the fundamental issues in wireless sensor networks is the coverage problem. The coverage of a sensor network, measured by the fraction of the region covered, represents how well a region of interest is monitored. On the other hand, a typical sensor node, such as an individual mote, can only last 100-120 hours on a pair of AA batteries in the active mode [3]. Power sources of the sensor nodes are non-rechargeable in most cases. However, a sensor network is usually desired to last for months or years. Sleeping protocols to save energy are under intensive study, such as RIS [4, 5], PEAS [6] and PECAS [4]. These protocols present different approaches to utilizing resources, but need further improvement in coverage or efficient energy consumption. Here we propose a sleeping protocol, named Area-based Collaborative Sleeping (ACOS). This protocol precisely controls the mode of sensors, based on the net sensing area of a sensor, to maximize the coverage and minimize the energy consumption. The net sensing area of a sensor is the area of the region exclusively covered by the sensor itself. If the net sensing area of a sensor is less than a given threshold, the sensor will go to sleep. Collaboration is introduced to the protocol to balance

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 701 – 710, 2005. © Springer-Verlag Berlin Heidelberg 2005
the energy consumption among sensors. A performance study shows that ACOS achieves better coverage and a longer lifetime than other sleeping protocols. The rest of the paper is organized as follows. Section 2 discusses previous research. Section 3 describes the basic design of the protocol. Section 4 improves the baseline ACOS for better performance. Section 5 provides a detailed performance evaluation and comparison. We conclude the paper in Section 6.
2 Related Work

Different coverage methods and models have been surveyed in [7, 8, 9]. Three coverage measures are defined in [7]: area coverage, node coverage, and detectability. Area coverage represents the fraction of the region covered by sensors, node coverage represents the number of sensors that can be removed without reducing the covered area, and detectability reflects the capability of the sensor network to detect objects moving through it. Centralized algorithms to find exposure paths within the covered field are presented in [8]. In [9], the authors investigate how well a target can be monitored over a time period while it moves along an arbitrary path with an arbitrary velocity in a sensor network. Power conservation protocols such as GAF [10], SPAN [11] and ASCENT [12] have been proposed for ad hoc multi-hop wireless networks. They aim at reducing unnecessary energy consumption during the packet delivery process. In [13], a heuristic is proposed to select mutually exclusive sets of sensors such that each set can provide complete coverage. In [14], redundant sensors that are fully covered by other sensors are turned off to reduce power consumption, while the fraction of the area covered by sensors is preserved. Sleeping protocols such as RIS [4, 5], PEAS [6] and PECAS [4] have been proposed to extend the lifetime of sensor networks. In RIS, each sensor independently follows its own sleep schedule, which is set up during network initialization. In PEAS, a sensor sends a probe message within a certain probing range when it wakes up; any active sensor replies to a received probe message, and the probing sensor goes back to sleep if it receives replies to its probes. In PEAS, an active node remains awake continuously until it dies. PECAS extends PEAS so that every sensor remains in the active mode only for a limited duration and then goes to sleep.
3 Basic Protocol Design

In this section, we describe the basic design of the ACOS protocol. This protocol precisely controls the mode of sensors so that the coverage of the sensor network can be maximized and the energy consumption minimized.

3.1 Notations and Assumptions

We adopt the following notations and assumptions throughout the paper.
- Consider a set of sensors S = {s1, s2, …, sn}, distributed in a two-dimensional Euclidean plane.
- Sensor sj is referred to as a neighbor of another sensor si, or vice versa, if the Euclidean distance between si and sj is less than 2r.
- Assume that each sensor knows its own location [15, 16, 17, 18]. As shown in Section 3.2, relative locations [19] are enough for our protocol.
- A sensor has two power consuming modes: low-power mode and power-consuming mode. Power-consuming mode is also called active mode.
- The sensing region of each sensor is a disk, centered at the sensor, with radius r, its sensing range.
- The net sensing region of sensor si is the region in the sensing range of si but not in the sensing range of any other active sensor. The net sensing area or net area of si is the area of the net sensing region. The net area ratio, denoted as ai, is the ratio of si’s net sensing area to si’s maximal sensing area, πr².
- The net area threshold, denoted as φ, is a parameter between 0 and 1.
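ACOS computes the net area exactly from perimeter segments (Section 3.2), but the definition of the net area ratio ai above can also be conveyed by a simple Monte Carlo sketch (positions, radius, and sample count below are illustrative):

```python
import math
import random

def net_area_ratio(sensor, active_neighbors, r, samples=20000, seed=1):
    """Estimate a_i: the fraction of sensor i's sensing disk (radius r) that
    is NOT covered by any other active sensor. Samples points uniformly
    in the disk (sqrt on the radial coordinate gives uniform area density)."""
    rng = random.Random(seed)
    x0, y0 = sensor
    hits = 0
    for _ in range(samples):
        rad = r * math.sqrt(rng.random())
        ang = 2.0 * math.pi * rng.random()
        px, py = x0 + rad * math.cos(ang), y0 + rad * math.sin(ang)
        if all(math.hypot(px - x, py - y) >= r for (x, y) in active_neighbors):
            hits += 1
    return hits / samples

# With no active neighbor, the whole disk is net area, so a_i = 1.
assert net_area_ratio((0, 0), [], 1.0) == 1.0
# One active neighbor exactly one radius away removes a lens-shaped region;
# roughly 0.61 of the disk remains as net area.
a = net_area_ratio((0, 0), [(1.0, 0.0)], 1.0)
assert 0.57 < a < 0.65
```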
3.2 The Net Area Calculation

The shaded region with the bold boundary in Fig. 1 shows an example of the net sensing region of sensor s0. Before the detailed description of ACOS, a solution for computing the net area is presented here.
[Figure: sensor s0 surrounded by neighbors s1–s5; the boundary of s0's net sensing region passes through the points A, B, C, D, E, F.]

Fig. 1. The net sensing region of a sensor s0
We use the algorithm in [20] to find the boundaries of the net sensing regions inside a sensor; it is referred to as the perimeter-coverage algorithm. The perimeter-coverage algorithm takes polynomial time to find the coverage of the protected region by considering how the perimeter of each sensor’s sensing region is covered. For a sensor si, a segment of si’s perimeter is k-perimeter-covered if all points on the segment are in the sensing range of at least k sensors other than si itself. Consider s0 as shown in Fig. 1: we first find the 0-perimeter-covered segments of s0’s perimeter, the minor arc FA in this example. Then, within the sensing range of s0, we find the 1-perimeter-covered segments for each of s0’s neighbors, the minor arcs AB, BC, CD, DE, EF in this example. After all the segments are found, two segments are joined together if they have common end points. The closed boundary of each net sensing region is determined by a segment sequence. After finding the boundaries of each net sensing region, the area of a net sensing region can be computed by calculating the area of the polygon formed by its segment sequence, the polygon ABCDEF in this example, plus the area of the region between
Y. Cai et al.
each segment of arc and the corresponding chord. Here, each node only needs to know the relative locations of its neighbors, and determining the relative locations among sensors is easier than determining each node's absolute location.

3.3 Basic Protocol Design

For simplicity of description, this section presents the basic design of the ACOS protocol, called "baseline ACOS", leaving other intricate problems to be addressed in Section 4. Each sensor node has four states: Sleep, PreWakeUp, Awake, and Overdue. The Sleep state corresponds to the low-power mode; the PreWakeUp, Awake, and Overdue states belong to the active mode. PreWakeUp is a transient state and lasts for a short period of time, while the Awake and Overdue states may last for several minutes or hours. Every sensor remains in the Awake state for no more than T_Wake_Duration. The state transition diagram of the baseline ACOS is shown in Fig. 2.
Fig. 2. State transition diagram of baseline ACOS
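The area computation described in Section 3.2 — the polygon's area by the shoelace formula plus a circular-segment term for each boundary arc — can be sketched as follows. This is a hedged sketch: the boundary vertices and per-edge arc data are assumed to come from the perimeter-coverage algorithm of [20], which is not reproduced here, and the sign convention for adding or subtracting each segment is left to the caller.

```python
import math

def polygon_area(pts):
    """Shoelace formula for a simple polygon given as [(x, y), ...]."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def segment_area(chord_len, radius):
    """Area of the circular segment between a minor arc of the given
    radius and its chord."""
    half_angle = math.asin(min(1.0, chord_len / (2.0 * radius)))
    theta = 2.0 * half_angle  # central angle subtended by the chord
    return 0.5 * radius * radius * (theta - math.sin(theta))

def net_area(vertices, arcs):
    """vertices: boundary polygon, e.g. [A, B, C, D, E, F];
    arcs: list of (p, q, radius, sign) for each boundary edge that is an
    arc -- sign +1 if the arc bulges out of the polygon (segment added),
    -1 if it bulges into it (segment subtracted)."""
    area = polygon_area(vertices)
    for p, q, radius, sign in arcs:
        area += sign * segment_area(math.dist(p, q), radius)
    return area
```

Dividing the result by πr² gives the net area ratio ai used by the protocol.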
Consider any sensor si with a decreasing sleep timer T^i_sleep_left representing the time left before si wakes up again. Sensor si also keeps a decreasing wake timer T^i_wake_left, initialized to T_Wake_Duration when the sensor turns from the low-power mode to the active mode. The value of T_Wake_Duration − T^i_wake_left indicates how long the sensor has been in the active mode since it last woke up. Sensor si also maintains an active neighbor list nListi, collecting information from every message received. For any neighbor sk in nListi, sk's location and T^k_wake_left are stored in nListi. All the above timers decrease as time progresses, as shown in event e0 in Fig. 3. When si wakes up, its state changes from Sleep to PreWakeUp. It broadcasts a PreWakeUp_Msg to its neighbors within radius 2r and waits for Tw seconds. When a neighboring sensor sj in the Awake state receives this message, sj sends back a Reply_PreWakeUp_Msg including its location and T^j_wake_left. Upon receipt of a Reply_PreWakeUp_Msg from any neighbor sj, si extracts the location of sj and T^j_wake_left and stores them in nListi. At the end of Tw, si computes the net area ratio ai. If ai is less than φ, si is not contributing enough coverage and need not work at this moment; it therefore returns to the Sleep state and sleeps for a period equal to the minimum of all T^k_wake_left values in nListi. It is possible that several neighbors around an active sensor sj obtain its T^j_wake_left and all wake up at the same time. The consequence is that not only do they contend for the communication channel, but most of them may also decide to start working because they are unaware of each other. To avoid this situation, a random offset ε is added to the sleep time.
ACOS: A Precise Energy-Aware Coverage Control Protocol
The following event occurs for any sensor si:

Event e0: the clock of si ticks once
    if (si is in Sleep state) {
        si's sleep timer T^i_sleep_left = T^i_sleep_left − 1;
    } else {
        si's wake timer T^i_wake_left = T^i_wake_left − 1;
        Update the timers in the local neighbor list nListi: for any sk ∈ nListi, T^k_wake_left = T^k_wake_left − 1;
    }

The following event occurs when sensor si is in Sleep state:

Event e1: si's sleep timer T^i_sleep_left has decreased to zero
    Change to PreWakeUp state;
    Broadcast a PreWakeUp_Msg within radius 2r;
    Within Tw seconds, upon receipt of a Reply_PreWakeUp_Msg from neighbor sj, extract the location of sj and T^j_wake_left and store them into nListi;
    Compute the net area ratio ai;
    if (ai < φ) {
        Set T^i_sleep_left = Min{T^k_wake_left, for sk ∈ nListi} + ε, where ε is a random offset;
        Clear nListi and change back to Sleep state;
    } else {
        Change to Awake state and set timer T^i_wake_left = T_Wake_Duration;
        Broadcast a Wake_Notification_Msg including its location and T^i_wake_left within radius 2r;
    }

The following events occur only when sensor si is in Awake state:

Event e2: sensor si receives a PreWakeUp_Msg from sj
    Reply to sj with a Reply_PreWakeUp_Msg, including its location and T^i_wake_left;

Event e3: the timer T^i_wake_left of sensor si has decreased to zero
    Change to Overdue state;

The following events occur when sensor si is in Awake or Overdue state:

Event e4: sensor si receives a Sleep_Notification_Msg from sj
    Remove sj from nListi;

Event e5: sensor si receives a Wake_Notification_Msg
    Update nListi and compute the net area ratio ai;
    if (ai < φ) {
        Broadcast a Sleep_Notification_Msg within radius 2r;
        Set T^i_sleep_left = Min{T^k_wake_left, for sk ∈ nListi} + ε, where ε is a random offset;
        Clear nListi and change to Sleep state;
    }

Fig. 3. The events of baseline ACOS
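The events of Fig. 3 can be condensed into a small state-machine sketch. This is a simplified illustration, not the authors' implementation: the values of φ and T_Wake_Duration are example assumptions, and messaging plus the net-area computation are stubbed out.

```python
import random

PHI = 0.2             # net area threshold (example value)
WAKE_DURATION = 600   # T_Wake_Duration in clock ticks (assumed)

class Sensor:
    """Minimal sketch of the baseline ACOS state machine (events e0-e5)."""

    def __init__(self, ident):
        self.id = ident
        self.state = "Sleep"
        self.sleep_left = random.randint(1, WAKE_DURATION)
        self.wake_left = 0
        self.nList = {}   # neighbour id -> remaining wake time

    def tick(self):                        # event e0
        if self.state == "Sleep":
            self.sleep_left -= 1
        else:
            self.wake_left -= 1
            for k in self.nList:
                self.nList[k] -= 1
            if self.state == "Awake" and self.wake_left <= 0:
                self.state = "Overdue"     # event e3

    def try_wake(self, net_area_ratio):    # event e1 (after the Tw wait)
        if net_area_ratio < PHI:
            self.go_to_sleep()
        else:
            self.state = "Awake"
            self.wake_left = WAKE_DURATION
            # ... broadcast Wake_Notification_Msg within radius 2r

    def on_wake_notification(self, net_area_ratio):   # event e5
        if self.state in ("Awake", "Overdue") and net_area_ratio < PHI:
            # ... broadcast Sleep_Notification_Msg within radius 2r
            self.go_to_sleep()

    def on_sleep_notification(self, sender):          # event e4
        self.nList.pop(sender, None)

    def go_to_sleep(self):
        eps = random.randint(0, 5)   # random offset to de-synchronise wake-ups
        self.sleep_left = min(self.nList.values(), default=WAKE_DURATION) + eps
        self.nList.clear()
        self.state = "Sleep"
```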
If ai is greater than or equal to φ, si changes to the Awake state, initializes its wake timer T^i_wake_left, and broadcasts a Wake_Notification_Msg including its location and T^i_wake_left to its neighbors, as described by event e1 in Fig. 3. When si is in the Awake state and hears a PreWakeUp_Msg from sj, it replies to sj with a Reply_PreWakeUp_Msg, including its T^i_wake_left. Although a sensor si in the Overdue state is also in the active mode, it does not reply to PreWakeUp_Msg, so that si is not counted by newly woken sensors and is more likely to go to sleep soon; this is how energy consumption is balanced among sensors. This procedure is described in event e2 in Fig. 3. When si is in the Awake state and its wake timer T^i_wake_left has decreased to zero, it changes from the Awake to the Overdue state, as shown in event e3 in Fig. 3. When si is in the Awake or Overdue state and hears a Wake_Notification_Msg, it first updates its list nListi and recalculates the net area ratio ai. If ai is less than φ, si can go to sleep safely, so it broadcasts a Sleep_Notification_Msg to its neighbors, changes to the Sleep state, and sleeps for the minimum of all T^k_wake_left values in nListi; again, a random offset is added to the sleep time. This procedure is described in event e5 in Fig. 3. If si is in the Awake or Overdue state and hears a Sleep_Notification_Msg from sj, si removes sj's entry from nListi, as shown in event e4 in Fig. 3. Fig. 3 lists all the events of our protocol; the events drive a sensor from one state to another and precisely control its power-consuming mode.
4 Optimizations

Two problems are not addressed in the basic design of the ACOS protocol. The first is unawareness of dead neighbors. When a sensor si receives a Wake_Notification_Msg, it computes its net area ratio ai. The calculation of ai depends on the information stored in the local neighbor list nListi, which may be outdated, because some neighbors may have died from physical failure or energy depletion without notification. The second problem is sleep competition caused by a waking-up sensor. Consider a sensor si that decides to wake up after computing its net area ratio ai. It then broadcasts a Wake_Notification_Msg, and several neighbors may receive this message. Each of them computes its net area ratio without collaboration, and many of them may go to sleep. We call this situation multiple sleeps. In some cases multiple sleeps are needed to reduce overlap, but in other cases they should be avoided. In this section, we modify the baseline ACOS to solve the dead-neighbor problem and to reduce the effect of the multiple-sleeps problem by adding a new transient PreSleep state.

4.1 Dealing with Dead Neighbors

When sensor si receives a Wake_Notification_Msg and its net area ratio ai is less than φ, it changes to the PreSleep state and clears the current information in nListi. Then it broadcasts a PreSleep_Msg to its neighbors and waits for Tw seconds. When a neighbor sj in its Awake or Overdue state hears this message, sj sends back a
Reply_PreSleep_Msg including its location and T^j_wake_left. At the end of Tw, si re-computes the net area ratio ai′. If ai′ is greater than or equal to φ, some neighbors have died since the last time si woke up, and si should not go to sleep at the moment.

4.2 Dealing with Multiple Sleeps Caused by a Waking-Up Sensor

The protocol is enhanced by making the neighbors that are ready to sleep collaborate with each other. When sensor si receives a Wake_Notification_Msg from sj, it updates its net area ratio ai′ and broadcasts a SleepIntent_Msg, including ai′, to its neighbors. Within T′w seconds, it receives SleepIntent_Msg from those of its neighbors that also intend to sleep. At the end of T′w seconds, it selects the sensor sk with the minimum net area ratio among the neighbors from which a SleepIntent_Msg was received. If ai′ is greater than sk's net area ratio ak′, then si does not hold the minimum net area ratio and its neighbor sk may go to sleep; si then re-computes its net area ratio ai″, regarding sk as a sleeping node. If ai′ is less than ak′, si does hold the minimum net area ratio. If ai″ is less than φ, the sleep of sk does not greatly increase si's net area. So si can go to sleep relatively safely when ai′ < ak′ or ai″ < φ. In future work, we plan to find a more efficient strategy to select a set of sj's neighbors and let them sleep, in order to achieve better coverage while putting more of sj's neighbors to sleep.
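The arbitration in Section 4.2 can be sketched as a decision function. The names are hypothetical, and the callback for re-computing ai″ with sk treated as asleep is an assumption about how a concrete implementation would be wired up.

```python
def may_sleep(a_i, neighbor_intents, phi, recompute_without):
    """Sketch of the Section 4.2 sleep arbitration for node si.

    a_i: si's refreshed net area ratio a_i';
    neighbor_intents: {neighbor_id: a_k'} gathered from SleepIntent_Msg
    messages received within T'_w;
    phi: the net area threshold;
    recompute_without(k): callback returning a_i'' with neighbour k
    regarded as a sleeping node.
    """
    if not neighbor_intents:
        # No competing sleeper: fall back to the plain threshold test.
        return a_i < phi
    k = min(neighbor_intents, key=neighbor_intents.get)
    if a_i < neighbor_intents[k]:
        return True                    # si holds the minimum net area ratio
    # sk may sleep first; si sleeps only if sk's sleep adds little net area.
    return recompute_without(k) < phi
```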
5 Performance Evaluation

In this section, we compare our implementation of ACOS with three other protocols: RIS [4, 5], PEAS [6] and PECAS [7]. We evaluate the coverage of our protocol at different node densities in Section 5.1, and compare the coverage achieved with an equal number of active nodes for all four protocols in Section 5.2. In our simulation, the sensing range of each sensor is 20 meters, i.e. r = 20 m, and the communication range is 40 m. The sensors are uniformly distributed in a 400 m × 400 m region, with bottom-left coordinate (0, 0) and top-right coordinate (400, 400). In order to evaluate the relation between coverage and node density, the numbers of distributed sensors are 400, 800, 1600 and 3200, corresponding to densities of 1, 2, 4 and 8 per square of r × r; from now on, we abbreviate "square of r × r" as "r-square". In RIS, time is divided at each sensor into slots of equal length Tslot. Each Tslot is divided into two parts, an active period and a sleeping period; the active period lasts p·Tslot, where p depends on the application, and the sleeping period takes the rest of the slot. In PEAS, the probing range Rp is given by the application depending on the degree of robustness it needs, and a working node remains awake continuously until its physical failure or the depletion of its battery. In PECAS, every sensor remains in the active mode for a duration given by the parameter Work_Time_Dur each time it wakes up.
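The RIS schedule described above can be sketched as a one-line duty-cycle test. The per-node phase offset is an assumption; RIS variants differ in how slots are aligned across nodes.

```python
def ris_is_active(t, t_slot, p, offset=0.0):
    """Randomized Independent Scheduling sketch: each slot of length
    t_slot starts with an active period of p * t_slot, and the node
    sleeps for the remainder of the slot."""
    return (t + offset) % t_slot < p * t_slot
```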
5.1 The Coverage of ACOS

In Fig. 4(a), we can see that the number of active nodes rises sharply as the net area threshold decreases. When φ = 0, all nodes are active; however, even when φ is a small non-zero number, the number of active nodes is much smaller than the total number of nodes. Fig. 4(b) shows that the maximal coverage can be approached with far fewer active sensors than the total number. For example, when the node density is 8 per r-square, i.e. 3200 sensors in total, ACOS wakes up only 361 nodes but covers 98.5% of the whole region. Fig. 4(c) illustrates that the coverage percentage is approximately linear in the net area threshold φ.
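Coverage percentages such as those reported in this section can be measured, for instance, by sampling a regular grid of points against the active sensors' disks. This is a generic sketch of such a measurement, not the authors' simulation code; the grid step is an assumption.

```python
def coverage_percentage(active, r, width=400.0, height=400.0, step=2.0):
    """Estimate the covered fraction of a width x height region by testing
    a regular grid of sample points against the active sensors' disks."""
    r2 = r * r
    covered = total = 0
    y = step / 2.0
    while y < height:
        x = step / 2.0
        while x < width:
            total += 1
            if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in active):
                covered += 1
            x += step
        y += step
    return covered / total
```

A finer `step` trades running time for accuracy of the estimate.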
[Fig. 4 consists of three panels, each plotted for node densities of 1, 2, 4 and 8 per r-square: (a) the number of active nodes versus the net area threshold φ; (b) the coverage percentage versus the number of active nodes; (c) the coverage percentage versus the net area threshold φ.]
Fig. 4. The coverage over node density using ACOS protocol
5.2 Comparison of Coverage

We evaluate the coverage achieved with roughly equal numbers of active sensors in the case of 800 deployed sensors.
[Fig. 5 plots the coverage percentage against the number of active nodes for ACOS, PEAS, PECAS and RIS.]
Fig. 5. Coverage over the number of active sensors with 800 sensors deployed
As shown in Fig. 5, for the same number of active nodes, ACOS achieves more coverage than the other protocols. Moreover, as the number of active nodes increases, ACOS approaches the maximal achievable coverage much sooner than the other three protocols.
(a) ACOS, 203 active sensors, φ = 0.1, coverage percentage = 89.3%; (b) RIS, 202 active sensors, p = 0.2, coverage percentage = 72.5%; (c) PEAS, 208 active sensors, Rp = 1.0r, coverage percentage = 85.0%; (d) PECAS, 202 active sensors, Rp = 0.975r, coverage percentage = 84.6%
Fig. 6. Spatial distribution of working sensors under different protocols
Fig. 6 shows a typical snapshot of the spatial distribution of working sensors under the different protocols; the number of active nodes in each protocol is roughly 200. Fig. 6(a) shows that ACOS works quite well: there are no big holes and not much overlap. From Fig. 6(b) we can see that with the RIS protocol there are many sensing holes, and many sensors are densely clustered because there is no collaboration. Fig. 6(c) and Fig. 6(d) show that PEAS and PECAS perform well, but not as well as ACOS.
6 Conclusion In this paper, we consider a fundamental problem of keeping sufficient coverage of the protected region while minimizing energy consumption and extending the lifetime of sensor networks. We have developed a sleeping protocol ACOS, which controls the mode of sensors to optimize the usage of energy as well as to maximize the coverage. We evaluate our protocol on a simulator and compare it with other sleeping protocols. The results demonstrate that our protocol has better coverage of the surveillance area while waking fewer sensors than other state-of-the-art sleeping protocols. Acknowledgements. This research was supported partially by Natural Science Foundation of China grant #60442004.
References 1. G. Pottie and W. Kaiser. Wireless integrated network sensors, Communications of the ACM, 2000. 2. A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson. Wireless sensor networks for habitat monitoring. In First ACM International Workshop on Wireless Sensor Networks and Applications (WSNA), 2002. 3. Crossbow. Power management and batteries. Application Notes, available at http://www.xbow.com/Support/appnotes.htm, 2004.
4. Chao Gui and Prasant Mohapatra. Power Conservation and Quality of Surveillance in Target Tracking Sensor Networks. In ACM MobiCom, 2004. 5. Santosh Kumar, Ten H. Lai and József Balogh. On k-Coverage in a Mostly Sleeping Sensor Network. In ACM MobiCom, 2004. 6. F. Ye, G. Zhong, J. Cheng, S.W. Lu and L.X. Zhang. PEAS: a robust energy conserving protocol for long-lived sensor networks. In the 10th IEEE International Conference on Network Protocols (ICNP), 2002. 7. Benyuan Liu and Don Towsley. A Study of the Coverage of Large-scale Sensor Networks. In IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS), 2004. 8. S. Meguerdichian, F. Koushanfar, M. Potkonjak, and M. B. Srivastava. Coverage problems in wireless ad-hoc sensor networks. In IEEE Infocom, 2001. 9. S. Megerian, F. Koushanfar, G. Qu, and M. Potkonjak. Exposure in wireless sensor networks. In ACM Mobicom, 2001. 10. Y. Xu, J. Heidemann, and D. Estrin. Geography informed energy conservation for ad hoc routing. In ACM Mobicom, 2001. 11. B. Chen, K. Jamieson, and H. Balakrishnan. Span: An energy efficient coordination algorithm for topology maintenance in ad hoc wireless networks. In ACM Mobicom, 2001. 12. A. Cerpa and D. Estrin. Ascent: Adaptive self-configuring sensor networks topologies. In IEEE Infocom, 2002. 13. S. Slijepcevic and M. Potkonjak. Power efficient organization of wireless sensor networks. In IEEE Int'l Conf. on Communications (ICC), pages 472–476, 2001. 14. D. Tian and N. D. Georganas. A coverage-preserving node scheduling scheme for large wireless sensor networks. In WSNA, 2002. 15. P. Bahl and V. N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. In IEEE Infocom, 2000. 16. Koen Langendoen and Niels Reijers. Distributed localization in wireless sensor networks: a quantitative comparison. Computer Networks, pages 499–518, 2003. 17. R. L. Moses, D. Krishnamurthy and R. M. Patterson. A self-localization method for wireless sensor networks. EURASIP J. Appl. Signal Process., pages 348–358, 2003. 18. Lingxuan Hu and David Evans. Localization for Mobile Sensor Networks. In ACM MobiCom, 2004. 19. Neal Patwari, Alfred O. Hero, III, Matt Perkins, Neiyer S. Correal and Robert J. O'Dea. Relative Location Estimation in Wireless Sensor Networks. IEEE Trans. Signal Processing, 2003. 20. C. Huang and Y. Tseng. The coverage problem in a wireless sensor network. In WSNA, 2003.
Coverage Analysis for Wireless Sensor Networks

Ming Liu¹,², Jiannong Cao¹, Wei Lou¹, Li-jun Chen², and Xie Li²

¹ Department of Computing, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
² State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
Abstract. The coverage problem in wireless sensor networks (WSNs) is to determine the number of active sensor nodes needed to cover the sensing area. The purpose is to extend the lifetime of the WSN by turning off redundant nodes. In this paper, we propose a mathematical model for coverage analysis of WSNs. Based on the model, given the ratio of the sensing range of a sensor node to the range of the entire deployment area, the number of active nodes needed to reach the expected coverage can be derived. Unlike most existing work, our approach does not require knowledge of the locations of sensor nodes, and can thus considerably reduce the hardware cost and the energy that sensor nodes would otherwise spend deriving and maintaining location information. We have also carried out an experimental study by simulation; the analytical results are very close to the simulation results. The proposed method can be widely applied in designing protocols for sensor deployment, topology control and other issues in WSNs.
1 Introduction

Technological advances in sensors, embedded systems, and low-power wireless communications have made it possible to manufacture tiny wireless sensor nodes with sensing, processing, and wireless communication capabilities. These low-cost, low-power sensor nodes can be deployed to work together to form a wireless sensor network. The sensor nodes in a sensor network are able to sense the surrounding environment, carry out simple processing tasks, and communicate with the neighboring nodes within their transmission range. Through collaboration among sensor nodes, the sensed and monitored environment information (e.g. temperature, humidity) is transmitted to the base station for processing. A large-scale wireless sensor network can consist of tens of thousands of tiny sensor nodes, with densities of up to 20 nodes/m³. Such high density may result in comparatively large energy consumption due to contention for the communication channels, the maintenance of information about neighboring nodes, and other factors. A widely used strategy for reducing energy consumption while meeting the coverage requirement is to turn off redundant sensors by scheduling sensor nodes to work alternately [1,2]. The coverage problem in wireless sensor networks (WSNs) is to determine the number of active sensor nodes needed to cover the sensing area.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 711 – 720, 2005. © Springer-Verlag Berlin Heidelberg 2005
A broadly used strategy [3][4][5][6][7][8] is to determine the active nodes by using the location information of the sensor nodes and their neighborhoods. However, relying on complicated hardware such as GPS (the Global Positioning System) or a directional antenna greatly increases the hardware cost and energy consumption of sensor nodes; at the same time, the message transmission for, and calculation of, locations and directions also consumes a node's energy. Therefore, it is desirable for a solution to the coverage problem not to depend on any location information. In this paper, we propose a mathematical model for coverage analysis of WSNs that does not require location information. Based on the model, given the ratio of the sensing range of a sensor node to the range of the entire deployment area, the number of active nodes needed to reach the expected coverage can be derived. The proposed analytical method is based on the random deployment strategy, which is the easiest and cheapest way to deploy sensors [10]. Compared with similar work [13], which also analyzes coverage theoretically without using location information, our model is more general: [13] can be viewed as a special case of our work. Most applications do not require maximal area coverage, and a small number of blind points arising at certain intervals can be accepted; if the working nodes of a sensor network maintain a reasonable area coverage, most applications can be realized. Coverage can be regarded as a quality of service of a sensor network, used to evaluate its monitoring capability [9]. If the coverage fraction is below a certain threshold, the sensor network is considered unable to work normally. It is therefore significant to propose a simple method that can evaluate, in a statistical sense, whether the coverage fraction meets the application requirement without depending on location information.
This paper provides such a solution. The rest of the paper is organized as follows. We introduce related work in Section 2. In Section 3, we present the sensor network models and preliminary definitions. In Section 4, we analyze the relationship between the coverage fraction and the ratio of a sensor node's sensing range to the range of the entire deployment area. Numeric results and simulation results are provided in Section 5.
2 Related Work

Coverage is one of the important issues in sensor networks. Because sensor networks have different applications, there may be different definitions of coverage. We argue that, in the case of K-coverage, coverage in sensor networks can be simply described as follows: any point in the coverage area lies within the sensing range of at least K sensor nodes, where K ≥ 1. Wireless sensor networks are usually characterized by high node density and limited node energy. With the desired coverage fraction guaranteed, working-node density control algorithms and node scheduling mechanisms are used to reduce energy cost and thus extend the network lifetime. In [3] and [4], an approach is proposed to compute maximal cover sets: all the sensor nodes are divided into n disjoint cover sets, the sensor nodes in each cover set can independently perform the task of monitoring the desired area, and the cover sets take turns performing the monitoring task. In [3], Slijepcevic et al. have proved that the calculation of the maximal cover
set is an NP-complete problem. The two algorithms proposed in [3] and [4] are both centralized, so they are not suitable when there is a large number of sensor nodes. In addition, both algorithms rely on the location information of sensor nodes to compute the cover sets. In [1], Tian et al. propose a distributed coverage algorithm based on a node scheduling scheme. The off-duty eligibility rule proposed in this algorithm, which relies on the geographical information of sensor nodes and the AOA (Angle of Arrival) obtained through a directional antenna, determines the coverage relation between a node and its neighbors and then selects the working nodes. Obviously, sensor networks relying on GPS or AOA information are characterized by high cost and high energy consumption. In addition, the off-duty eligibility rule fails to consider that excessive overlap may be formed, so that the number of selected working nodes becomes very large and causes extra energy consumption. In [11], it has been shown that this node-scheduling algorithm has low efficiency. In [13], Gao et al. propose a mathematical method, not relying on location information, to describe redundancy. With this method, a sensor node can use the number of neighbors within its sensing range to calculate its own probability of being a redundant node. Since there is no need to be equipped with GPS or a directional antenna, the cost of sensor nodes can be kept under control. In addition, it becomes unnecessary to derive location information through message exchange, so the energy consumed by communication in the network is reduced. However, for most sensor nodes, the sensing hardware and the communication hardware are two fully independent parts, and the communication range is generally not equal to the sensing range.
Therefore, some specialized components are needed to determine the number of neighbors within the sensing range. As the above analysis suggests, most previously proposed coverage algorithms rely on outside equipment such as GPS, a directional antenna, or a positioning algorithm. In this case, both the cost and the energy consumption are increased; meanwhile, some problems remain unsolved: GPS-based protocols have to correct errors made in calculating location information, and GPS-based systems are unreliable in indoor environments, so other positioning systems need to be deployed. With some positioning algorithms, each node needs to exchange a large quantity of information with beacon nodes to calculate its location, and this also results in high power consumption. In [14], Stojmenovic makes a comprehensive analysis of location-based algorithms and points out that obtaining and maintaining location information causes great energy consumption. In this paper, we provide an effective mathematical method to evaluate the number of nodes needed to reach the expected coverage fraction. With this method, as long as the ratio of a node's sensing range to the range of the deployment area C is known, the relation between the number of sensor nodes in C and the expected coverage fraction can be derived by simple calculation. Therefore, our approach is applicable to many cases and can easily be adopted for problems such as sensor deployment and topology control.
3 Models and Assumptions

In this section, we first introduce the two models used in our research: the deployment model and the sensing model. We then give a few definitions to simplify the analysis in Section 4.

3.1 Deployment Model

In [10], the commonly used deployment strategies are studied: random deployment, regular deployment, and planned deployment. In the random deployment strategy, sensor nodes are distributed uniformly within the field. In the regular deployment strategy, sensors are placed in a regular geometric topology such as a grid. In the planned deployment strategy, the sensors are placed with higher density in areas where the phenomenon of interest is concentrated. In the planned deployment strategy, although sensors are deployed with a non-uniform density over the whole deployment area, within a small range the sensors are deployed approximately at random; in this sense, our analytical results for random deployment are also applicable to planned deployment. The analysis in this paper is based on the random deployment strategy, which is reasonable for application scenarios in which a priori knowledge of the field is not available. For convenience, we assume that sensor nodes are placed in a two-dimensional circular area C with radius R. We are not actually concerned about the shape of the deployment area, which can be circular or square, and the area C can represent a subset of the whole deployment area or the whole deployment area itself. Our research focuses on how to obtain the number of nodes required in C for the coverage of the sensor network to be guaranteed. We assume that sensor nodes are uniformly and independently distributed in the area C, and that no two sensors are deployed at exactly the same location.

3.2 Sensing Model

The analysis in this paper is based on the Boolean sensing model, which is broadly adopted in the study of sensor networks [1][2][12].
In the Boolean sensing model, each sensor has a fixed sensing range; a sensor can only sense the environment and detect events within its sensing range. In this paper all sensors are assumed to have the same sensing range r, with r ≤ R. A point is covered if and only if it lies within at least one sensor's sensing range. The deployment area is thus partitioned into two regions: the covered region and the vacant region. An arbitrary point in the covered region is covered by at least one sensor node, while the vacant region is the complement of the covered region. Some applications require a higher degree of accuracy in detecting objects, so that an arbitrary point in the covered region has to lie within the sensing ranges of k nodes at the same time; the analytical results in this paper can easily be extended to K-coverage.

3.3 Related Definitions

To facilitate later discussion, we introduce the following definitions:

Definition 1: Neighboring area. For an arbitrary point (x, y) ∈ C, its neighboring area is defined as ℵ(x, y) =
{(x′, y′) ∈ C | (x′ − x)² + (y′ − y)² ≤ r²}
Definition 2: The central area C′. We have C′ ⊂ C, and for an arbitrary point (x, y) ∈ C′, x² + y² < (R − r)².
Definition 3: Expected coverage fraction, denoted q. The expected coverage fraction of a sensor network is defined as the expected proportion of the covered region to the whole deployment region. For example, if an application requires the coverage to reach 85 percent of the whole region, the expected coverage fraction equals 0.85. If the expected coverage fraction is known, it can be used to calculate the number of nodes needed to cover the deployment area. As shown in Figure 1, for an arbitrary point, its neighboring area is the overlap of the circle of radius r centered at that point with the area C. The central area C′ is a circle concentric with C, with radius R − r. Obviously, the neighboring area of every point in C′ is the same, and its value is πr². For any point in C − C′, the neighboring area decreases as the point's distance from the center of C increases, and is less than πr².
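Definition 3 can be turned into a small calculator by anticipating the result derived in Section 4 for the R ≫ r case, q = 1 − (1 − r²/R²)^m. The function names below are ours, and the formulas ignore the edge effects discussed in Section 4.

```python
import math

def expected_coverage(m, r, R):
    """q = 1 - (1 - r^2/R^2)^m: expected coverage fraction when m nodes
    are placed uniformly at random in a disk of radius R, ignoring edge
    effects (the R >> r approximation)."""
    return 1.0 - (1.0 - (r / R) ** 2) ** m

def nodes_needed(q, r, R):
    """Smallest m whose expected coverage is at least q, under the same
    approximation."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - (r / R) ** 2))
```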
[Fig. 1 shows the deployment area C of radius R, the central area C′ of radius R − r, and a point at distance l from the center whose neighboring area is bounded by a circle of radius r.]
Fig. 1. Illustration of analysis
4 Analysis for Coverage For an arbitrary point ( x , y ) ∈ C , if there exists at least one sensor node in its neighboring area, the point is covered. Since the sensors in C are distributed randomly and uniformly, the probability that an arbitrary node falls on the point’s ( x , y )
neighboring area is p = ℵ( x, y )area / Carea .
Assume that m sensor nodes are deployed randomly in C. In the case of single-cover, the probability that an arbitrary point is covered equals the probability that at least one sensor node falls in its neighboring area, namely,

p_{(x,y)∈C} = C_m^1 p(1 − p)^(m−1) + C_m^2 p²(1 − p)^(m−2) + ··· + C_m^m p^m = Σ_{n=1}^{m} C_m^n p^n (1 − p)^(m−n) = 1 − (1 − p)^m    (1)
M. Liu et al.
Hence, any two points in C whose neighboring areas have the same size have the same probability of being covered. For every (x, y) ∈ C', the neighboring area is ℵ(x, y)_area = πr², so the probability that an arbitrary node falls in the neighboring area of a point of C' is p = ℵ(x, y)_area / C_area = πr²/πR² = r²/R². According to Formula 1, if m sensor nodes are randomly deployed in C, then for every (x, y) ∈ C' the probability of being covered is

p_{(x,y)∈C'} = Σ_{n=1}^{m} C_m^n (r/R)^{2n} (1 - r²/R²)^{m-n}        (2)
For each point in the marginal region C - C' of C, the neighboring area is less than πr²; in particular, each point on the edge of C has the smallest neighboring area, and thus the smallest probability of being covered. Denote by p_min the probability that a point on the edge of C is covered; then obviously p_min ≤ p_{(x,y)∈C} ≤ p_{(x,y)∈C'}. When R ≫ r, the area of C - C' can be ignored in the calculation. In this case, it can be approximately concluded that every point in C has the same probability of being covered:

p_{(x,y)∈C} ≈ 1 - (1 - r²/R²)^m

Since the probability that each point in C is covered is 1 - (1 - r²/R²)^m, the expected coverage fraction is

q = 1 - (1 - r²/R²)^m
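The approximation q = 1 - (1 - r²/R²)^m can be inverted to get the node count needed for a target coverage fraction. A minimal sketch (function names and the example parameters are ours, not the paper's):

```python
import math

def coverage_fraction(m, r, R):
    # q = 1 - (1 - r^2/R^2)^m: probability that a point is covered by at
    # least one of m sensors deployed uniformly in a disc of radius R,
    # ignoring boundary effects (valid when R >> r).
    return 1.0 - (1.0 - (r / R) ** 2) ** m

def nodes_needed(q, r, R):
    # Smallest m achieving expected coverage fraction >= q,
    # from m >= ln(1 - q) / ln(1 - r^2/R^2).
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - (r / R) ** 2))

m = nodes_needed(0.85, 10.0, 100.0)   # e.g. r = 10, R = 100, target q = 0.85
print(m, coverage_fraction(m, 10.0, 100.0))
```

With r/R = 0.1 and q = 0.85 this gives m = 189 nodes, illustrating how quickly the required count grows as the sensing range shrinks relative to the deployment area.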
When the ratio of r to R is not small enough to be ignored, using Formula 2 to calculate the expected coverage fraction can lead to an error beyond tolerance. Therefore, in order to evaluate q accurately, we have to compute the average probability of being covered over all the points in C. As shown in Figure 1, consider a point (x_1, y_1) at distance l from the center of circle C, with l > R - r. The neighboring area ℵ(x_1, y_1)_area is the area of the shadowed region:

ℵ(x_1, y_1)_area = 2( ∫_{l-r}^{(R²-r²+l²)/(2l)} √(r² - (y_1 - l)²) dy_1 + ∫_{(R²-r²+l²)/(2l)}^{R} √(R² - y_1²) dy_1 )

                 = (1/2)π(r² + R²) + r² arcsin((R²-r²-l²)/(2lr)) + ((R²-r²-l²)/(2l)) √(r² - (R²-r²-l²)²/(4l²))

                   - R² arcsin((R²-r²+l²)/(2lR)) - ((R²-r²+l²)/(2l)) √(R² - (R²-r²+l²)²/(4l²))          (3)
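Formula 3 coincides with the standard circle-circle intersection (lens) area. A quick numeric check (function name is ours) confirms that it reduces to πr² at l = R - r, where the sensing disc just fits inside C, and matches the two-circular-segment form of the lens area elsewhere:

```python
import math

def neighboring_area(l, R, r):
    # Area of the intersection of the sensing disc (radius r, centered at
    # distance l from the center of C) with C (radius R), per Formula 3;
    # valid for R - r <= l <= R (point in the marginal region C - C').
    a = (R * R - r * r - l * l) / (2 * l)
    b = (R * R - r * r + l * l) / (2 * l)
    return (0.5 * math.pi * (r * r + R * R)
            + r * r * math.asin(a / r) + a * math.sqrt(r * r - a * a)
            - R * R * math.asin(b / R) - b * math.sqrt(R * R - b * b))

# Sanity check: at l = R - r the sensing disc lies entirely inside C.
print(abs(neighboring_area(1.0, 2.0, 1.0) - math.pi))  # ~0
```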
From Formula 3, we obtain a general expression for the neighboring area of any point in C - C'. By integration, we can calculate the average size of the neighboring area over all the points of C - C', denoted ℵ(x, y)_{C-C'}:

ℵ(x, y)_{C-C'} = ∬_{C-C'} ℵ(x_1, y_1)_area dσ / (π[R² - (R-r)²]) = 2π ∫_{R-r}^{R} l ℵ(x_1, y_1)_area dl / (π[R² - (R-r)²])        (4)
And the average neighboring area over all the points of C is:

ℵ(x, y)_area = ( 2 ∫_{R-r}^{R} l ℵ(x_1, y_1)_area dl + πr²(R-r)² ) / R²        (5)
Hence, for all the points in C, the average probability of being covered, i.e., the expected coverage fraction in C, is:

q = 1 - (1 - ℵ(x, y)_area / (πR²))^m        (6)
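Formulas 4-6 can be evaluated with simple quadrature. The sketch below (our names; midpoint rule; clamping guards against floating-point rounding at the domain edges) computes the boundary-corrected expected coverage fraction and shows that it is below the R ≫ r approximation, as expected:

```python
import math

def lens_area(l, R, r):
    # Neighboring area of a point at distance l from the center of C
    # (Formula 3); for l <= R - r the sensing disc lies wholly inside C.
    if l <= R - r:
        return math.pi * r * r
    a = (R * R - r * r - l * l) / (2 * l)
    b = (R * R - r * r + l * l) / (2 * l)
    return (0.5 * math.pi * (r * r + R * R)
            + r * r * math.asin(max(-1.0, min(1.0, a / r)))
            + a * math.sqrt(max(0.0, r * r - a * a))
            - R * R * math.asin(max(-1.0, min(1.0, b / R)))
            - b * math.sqrt(max(0.0, R * R - b * b)))

def avg_neighboring_area(R, r, steps=20000):
    # Formula 5: area-weighted average of the neighboring area over C,
    # integrating 2*l*lens_area(l) over the margin R-r < l < R (midpoint rule).
    h = r / steps
    integral = sum(2 * (R - r + (i + 0.5) * h)
                   * lens_area(R - r + (i + 0.5) * h, R, r) * h
                   for i in range(steps))
    return (integral + math.pi * r * r * (R - r) ** 2) / (R * R)

def coverage_corrected(m, R, r):
    # Formula 6 with the average neighboring area.
    return 1.0 - (1.0 - avg_neighboring_area(R, r) / (math.pi * R * R)) ** m

R, r, m = 5.0, 2.0, 50
print(coverage_corrected(m, R, r), 1.0 - (1.0 - (r / R) ** 2) ** m)
```

Because the average neighboring area is smaller than πr², the corrected q is always smaller than the Formula 2 approximation, and the gap widens as r/R grows.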
Our previous discussion involves only single cover. In the case of k-cover, there must be at least k nodes in the neighboring area of an arbitrary point of the covered region. The probability of being covered is

p_{(x,y)∈C} = C_m^k p^k (1-p)^{m-k} + C_m^{k+1} p^{k+1} (1-p)^{m-k-1} + ⋯ + C_m^m p^m
            = Σ_{n=k}^{m} C_m^n p^n (1-p)^{m-n}        (7)
Once the radius of C (denoted as R) and the node’s sensing range (denoted as r) are determined, the probability that an arbitrary point in C is covered relates only to its neighboring area. Therefore, the above discussion based on the 1-cover is still applicable in dealing with multi-cover.
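Formula 7 is the upper tail of a binomial distribution, which is straightforward to evaluate directly. A small sketch (our names; the sample p is ours, corresponding to r/R = 0.2 inside the central area):

```python
import math

def k_cover_probability(m, p, k):
    # Formula 7: probability that at least k of the m sensors fall in a
    # point's neighboring area, where p = neighboring area / area of C.
    return sum(math.comb(m, n) * p ** n * (1 - p) ** (m - n)
               for n in range(k, m + 1))

p = 0.04   # e.g. (r/R)^2 with r/R = 0.2, for a point in C'
print(k_cover_probability(100, p, 1), k_cover_probability(100, p, 3))
```

For k = 1 this collapses to 1 - (1-p)^m, recovering Formula 1, and it decreases monotonically as the required degree of coverage k grows.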
5 Analysis and Evaluation

5.1 Numerical Results

As Table 1 shows, the larger the ratio of r to R, the smaller the average neighboring area over all the points in C. When R = r, the ratio of the average neighboring area to the maximal neighboring area is only 0.5781. But when r

the advantage of any probabilistic polynomial-time adversary in solving the Decisional Multi-linear Diffie-Hellman (DMDH) problem within time t is negligible, where this advantage is defined as the probability that the adversary is able to distinguish e(P, P, …, P)^{x_1 x_2 ⋯ x_{d+1}} from a random z ∈ G_2. In Section 4.1, we show that the key tree with d = 3, in which case the bilinear map is employed, is most applicable to group key management in dynamic peer groups.
3 New Protocols

3.1 Notations and Lemmas

System setup: Let G_1 be an additive group and G_2 a multiplicative group, and let e: G_1^d → G_2 be a d-multi-linear map on G_1 and G_2. Choose a generator P of G_1 and a map H: G_2 → Z_p^*. The system parameters are (G_1, G_2, P, H).
Efficient Group Key Management for Dynamic Peer Networks
Key pairing generation: We employ a d-ary key tree whose nodes are denoted by <L, v>, where L is the level and v the index within the level. Each node is associated with a key K and the blinded key BK = H(K)P. Assume that every member Mi (at node <L, v>) chooses a secret random number ri (mod p), and he knows every key along the path from <L, v> to the root <0, 0>, referred to as the key-path.

Lemma 1. The key K and blinded key BK of a node <L, v> can be computed recursively from its child nodes as follows:

K_{<L,v>} = e(BK_{<L+1,dv+1>}, …, BK_{<L+1,dv+d-1>})^{H(K_{<L+1,dv>})}
          = ⋯
          = e(P, P, …, P)^{H(K_{<L+1,dv>}) H(K_{<L+1,dv+1>}) ⋯ H(K_{<L+1,dv+d-1>})}        (1)

BK_{<L,v>} = H(K_{<L,v>}) P

If a child node is null, its blinded key BK should be replaced by that of a certain sibling node.

Lemma 2. Any authorized group member Mi can compute the group key K from its own key and the blinded keys on the co-path [3], which is the set of siblings of each node on Mi's key-path.

3.2 Protocols

In this section, protocols for join and leave events specified by a key tree are presented. Firstly, a framework of group applications is given in Fig 1.
Fig. 1. Framework of group applications
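To see why Lemma 2 holds, it helps to trace the derivation at one ternary node (d = 3, the bilinear case). The sketch below is NOT a cryptographic implementation: it replaces the pairing by plain modular exponentiation and publishes the hash value itself as the "blinded key", purely to check that each member reaches the same node key from its own secret plus its siblings' blinded keys. All names and constants are ours:

```python
import hashlib

P_MOD = (1 << 127) - 1   # toy prime modulus (insecure stand-in for |G2|)
g = 3                    # stand-in for e(P, P)

def H(key):
    # Hash-to-scalar stand-in for H: G2 -> Zp*.
    return int.from_bytes(hashlib.sha256(str(key).encode()).digest(), "big") % P_MOD

def pair(bk_a, bk_b):
    # Toy "bilinear map": e(H(Ka)P, H(Kb)P) modeled as g^(H(Ka)*H(Kb)).
    return pow(g, bk_a * bk_b, P_MOD)

# Three child keys of one node; the blinded keys are public.
K1, K2, K3 = 1111, 2222, 3333
BK1, BK2, BK3 = H(K1), H(K2), H(K3)

# Each member uses its own (secret) key plus the two siblings' blinded keys.
node_key_m1 = pow(pair(BK2, BK3), H(K1), P_MOD)
node_key_m2 = pow(pair(BK1, BK3), H(K2), P_MOD)
node_key_m3 = pow(pair(BK1, BK2), H(K3), P_MOD)
print(node_key_m1 == node_key_m2 == node_key_m3)  # True
```

All three computations yield g^{H(K1)H(K2)H(K3)}, the toy analogue of e(P, P)^{H(K1)H(K2)H(K3)} in Lemma 1; applying the rule up the tree gives every leaf the root key.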
Note that merge and partition protocols are based on join and leave protocols respectively, and the rekey operation is a special case of the leave protocol without any members leaving the group. Thus, we focus only on the join and leave protocols in this paper. In the following, we assume that (1) the underlying group communication
W. Wang, J. Ma, and S. Moon
system is resistant to fail-stop failures; (2) all communication channels are public and authentic; (3) any member can initiate the membership change protocols. Join Protocol. Assume that there are N members {M1,…, MN} in the group. A new member MN+1 first initiates the protocol by sending a join request message that contains its blinded key to the group members.
Fig. 2. Key trees before and after a join (leave)
After the request is accepted, if the key tree is full, MN+1 is inserted at the root node; otherwise the insertion node and the sponsor must be determined. The insertion node is the topmost leftmost node where the join does not increase the height of the key tree, and the sponsor is a random leaf node in the subtree rooted at the insertion node. When the insertion node has two or more child nodes, the sponsor inserts MN+1 under this insertion node. Otherwise, the sponsor creates a new intermediate node and a new member node, and promotes the new intermediate node to be the parent of both the insertion node and MN+1. After updating the tree, the sponsor picks a new secret key, computes the new blinded keys on the key-path of MN+1 and multicasts the new tree to all group members. An example of member M8 joining a group is given in Figure 2.

Leave Protocol. Assume that member Ml leaves the group. In this case, Ml's parent node is determined as the intermediate node and the sponsor is the random topmost leaf node of the subtree rooted at the intermediate node. If the degree of the intermediate node is not two, the sponsor needs only to delete the leaf node corresponding to Ml. Otherwise, the leaving node is deleted and the sibling of Ml is promoted to replace Ml's parent node. After updating the tree, the sponsor picks a new secret key, computes the new blinded keys on its own key-path and multicasts the new blinded keys to all group members, which allows them to recompute the new group key.
Assuming the setting of Figure 2, if member M8 leaves the group, the sponsor M7 deletes the leaving node and its parent node, and promotes the remaining sibling to replace the parent. After updating the tree, M7 selects a new random secret key, computes the new blinded key BK, and multicasts it to the group.
4 Scheme Analysis

4.1 Performance Analysis

This sub-section is devoted to the analysis of the computation, storage and communication costs of the presented protocols. Consider a fully balanced key tree with degree d and height h and its corresponding secure group, and let N = d^h be the number of group members.

Computation Costs. For each join/leave request, the member who requests the join/leave is called the requesting member, and the other members are non-requesting members. Apparently, for a join/leave request, the requesting member performs h+1 pairings and the sponsor h-1. The average computation overhead of a non-requesting member and the whole computation overhead of our scheme are given by the following expressions:

(1/(N-1)) Σ_{i=1}^{h} i (d-1) d^{h-i} = d/(d-1) - log_d N/(N-1)

( d/(d-1) - log_d N/(N-1) )(N-1) + (h-1) + (h+1) = d(N-1)/(d-1) + log_d N        (2)
Storage Costs. The storage is measured by the number of keys and blinded keys that need to be stored. In our group key management scheme, each member M should store the keys on its key-path and the blinded keys on its co-path. Table 1 summarizes the computation and storage overheads of the presented scheme.

Table 1. Computation and storage costs

                           requesting member   non-requesting member      sponsor       whole scheme
  computation cost  join   O(log_d N)          d/(d-1) - log_d N/(N-1)    O(log_d N)    O(dN/(d-1) + log_d N/2)
                    leave  0                   d/(d-1) - log_d N/(N-1)    O(log_d N)
  storage cost             O(d log_d N)                                                 O(d log_d N + N)

The dependence of the computation and storage costs on d is shown in Fig 3: the greater the degree d, the lower the computation complexity, but the storage requirement reaches its lowest point at d ≈ 3. Moreover, the bilinear pairing, such as the Weil pairing [6], corresponding to d = 3 has been widely applied in the cryptography literature. Thus we draw the following conclusion.
Conclusion: The ternary tree is most applicable to group key management in dynamic peer networks.
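The storage trade-off behind this conclusion is easy to reproduce: per-member storage grows roughly as d·log_d N = d·ln N / ln d, whose real minimum is at d = e ≈ 2.718, so the best integer degree is 3. A minimal sketch (names and the sample N are ours):

```python
import math

def per_member_storage(d, N):
    # Approximate per-member storage: d * log_d(N) keys and blinded keys
    # (key-path plus co-path in a full d-ary key tree), as in Table 1.
    return d * math.log(N) / math.log(d)

N = 3 ** 6   # e.g. 729 members
costs = {d: per_member_storage(d, N) for d in range(2, 9)}
best = min(costs, key=costs.get)   # integer degree minimizing storage
print(best)
```

The minimizer is d = 3 for every N, since the coefficient d/ln d takes its integer minimum there (2/ln 2 ≈ 2.885, 3/ln 3 ≈ 2.731, 4/ln 4 ≈ 2.885).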
Fig. 3. Computation and Storage Cost With Key Tree of Degree d. (A) Computation Cost and (B) Storage Cost.
Communication cost. Table 2 presents a numerical comparison of the three group key management schemes in terms of unicasts, multicasts, rounds and messages. In the leave protocol, the three schemes have the same communication cost, but in the join protocol our scheme provides the best performance, with TGDH in second place and KEY GRAPH in third.

Table 2. Communication cost
                        our scheme   TGDH   KEY GRAPH (group-oriented)
  join    unicast       0            0      2
          multicast     2            2      1
          round         2            2      3
          message       2            3      3
  leave   unicast       0            0      0
          multicast     1            1      1
          round         1            1      1
          message       1            1      1
Figure 4(A) and Figure 4(B) present a comparison of the computation and storage overheads among our scheme, TGDH and KEY GRAPH with d = 3, 4, 2, respectively. From the comparisons we can conclude that our scheme approximates KEY GRAPH and is much better than TGDH in computation complexity; in addition, it is a little better than TGDH but still much worse than KEY GRAPH in storage requirement. Although the pairing is computationally slower than modular exponentiation, its fast implementation has been studied actively in recent years.
Fig. 4. Performance Comparison. (A) Computation Cost and (B) Storage Cost.
4.2 Security Analysis

In this section, we first prove the correctness and group key freshness of the presented scheme. We then prove that the presented scheme provides the security requirements of group key secrecy, backward secrecy, forward secrecy and key independence.

Correctness. Correctness means that, in the presence of a passive adversary, all group members can compute the same group key.

Theorem 1. All leaf nodes obtain the same group key.

Proof. The proof is by induction on the height of the key tree; we denote a key tree of height h (= log_d N) by T_h. Basis: the case h = 1 is trivial. Induction Hypothesis: assume that Theorem 1 holds for arbitrary trees of smaller height.

An algorithm A distinguishes A_0 from B_0 with non-negligible probability if, for some k > 0 and sufficiently large m,

|Pr[A(A_0) = 1] - Pr[A(B_0) = 1]| > 1/m^k        (3)
Theorem 3. If the DMDH problem on the groups G_1 and G_2 is hard, there is no probabilistic polynomial-time algorithm that can distinguish A_0 from B_0 with non-negligible probability.

Proof. Let X_0 = (R_{n_0}, R_{n_0+1}, …, R_{n_0+n_1-1}), …, X_{d-1} = (R_{n_{d-1}}, R_{n_{d-1}+1}, …, R_{n_{d-1}+n_d-1}), where each X_i is associated with some node and a subtree T_i, and all of them have the same parent node. Firstly, with the knowledge of Lemma 1, A_L and B_L can be rewritten as:

- A_L := (view(L, R, T), y) for random y ∈ G_2
       = (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), H(K_{dv}^{L+1})P, H(K_{dv+1}^{L+1})P, …, H(K_{dv+d-1}^{L+1})P, y)
- B_L := (view(L, R, T), K(L, R, T))
       = (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), H(K_{dv}^{L+1})P, H(K_{dv+1}^{L+1})P, …, H(K_{dv+d-1}^{L+1})P, e(P, P, …, P)^{H(K_{dv}^{L+1}) H(K_{dv+1}^{L+1}) ⋯ H(K_{dv+d-1}^{L+1})})
Fact 1: There is no probabilistic polynomial-time algorithm that can distinguish A_L from B_L; distinguishing them is equivalent to solving the DMDH problem in G_1 and G_2. The proof is by contradiction and induction. Contradiction Hypothesis and Induction Basis: Assume that A_0 and B_0 can be distinguished in polynomial time.
Induction Hypothesis: Assume that there exists a polynomial-time algorithm that can distinguish A_L from B_L.

Induction Step: We show that this algorithm can be used to distinguish A_{L+1} from B_{L+1}, or can be used to solve the DMDH problem. Consider the following hybrid distributions:

- A_L^0 := A_L := (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), H(K_{dv}^{L+1})P, H(K_{dv+1}^{L+1})P, …, H(K_{dv+d-1}^{L+1})P, y)
…
- A_L^d := (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), r_0 P, r_1 P, …, r_{d-1} P, y)
- B_L^d := (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), r_0 P, r_1 P, …, r_{d-1} P, e(P, P, …, P)^{r_0 r_1 ⋯ r_{d-1}})
…
- B_L^0 := B_L := (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), H(K_{dv}^{L+1})P, H(K_{dv+1}^{L+1})P, …, H(K_{dv+d-1}^{L+1})P, e(P, P, …, P)^{H(K_{dv}^{L+1}) H(K_{dv+1}^{L+1}) ⋯ H(K_{dv+d-1}^{L+1})})

Since A_L^0 and B_L^0 can be distinguished in polynomial time, the passive adversary can distinguish at least one of the pairs (A_L^i, A_L^j), (A_L^i, B_L^j), (B_L^i, B_L^j) for some i, j ∈ [0, d].

A_L^0 and A_L^1 (and similarly A_L^i and A_L^j, A_L^i and B_L^j, B_L^i and B_L^j for i, j ∈ [0, d]): Suppose A_L^0 and A_L^1 can be distinguished in polynomial time, and suppose the passive adversary wants to decide whether P'_{L+1} = (view(L+1, X_0, T_0), r') is an instance of the DMDH problem or r' is a random number. To solve this, the passive adversary generates trees T_1, T_2, …, T_{d-1} of level L+1 with distributions X_1, X_2, …, X_{d-1}, respectively. Note that the passive adversary knows all secret and public information of T_1, T_2, …, T_{d-1}. Then, from P'_{L+1} and the pairs (T_1, X_1), (T_2, X_2), …, (T_{d-1}, X_{d-1}), he can generate the distribution:

P'_L = (view(L+1, X_0, T_0), view(L+1, X_1, T_1), …, view(L+1, X_{d-1}, T_{d-1}), r'P, H(K_{dv+1}^{L+1})P, …, H(K_{dv+d-1}^{L+1})P, y)

Now the passive adversary inputs P'_L to the distinguisher of A_L^0 and A_L^1. If P'_L is an instance of A_L^0 (A_L^1), then P'_{L+1} is an instance of A_{L+1}^0 (A_{L+1}^1), respectively. From the above analysis, a distinguisher for A_L and B_L yields either a distinguisher one level down or a DMDH solver. Consequently, there is no probabilistic polynomial-time algorithm that can distinguish A_0 from B_0 with non-negligible probability. #

Key Independence

Theorem 4. In the presence of a passive adversary, our group key management scheme provides the security requirements of group key secrecy, backward secrecy, forward secrecy and key independence.

Proof. The relationships among these properties are given in [3]; thus, we only need to show that backward secrecy and forward secrecy are provided by the proposed scheme. In the join event, the sponsor computes new blinded keys and, consequently, the previous root key is changed. Therefore, the view of the joining member M is exactly the same as that of a passive adversary. Clearly, all the new keys will contain M's contribution. This reveals that the probability of M deriving any previous keys of the
key tree is negligible. Hence, backward secrecy is provided in our scheme. Similarly, we can show that our protocols provide forward secrecy. Thus, the presented scheme provides the security requirements of group key secrecy, backward secrecy, forward secrecy and key independence. #

4.3 Statelessness

Based on the interdependency of rekey messages, group key management algorithms can be classified into stateless and stateful schemes. In a stateless protocol, rekey messages are independent of each other. In our scheme, once a group member M comes back on-line or has missed previous group keys, the sponsor corresponding to M sends the current key tree to it, from which M can compute the group key based on its own key and the blinded keys on its co-path. So, a member missing previous group keys need not contact other members to obtain keys that were transmitted in the past in order to derive the current group key.
5 Conclusions and Future Work

In this paper, we present an efficient scheme for distributed group key management in dynamic peer networks. The scheme supports dynamic membership events and satisfies the desired security requirements. Our analysis shows that the ternary key tree is most applicable to group key management in dynamic peer networks. As a result, we conclude that the new scheme is more efficient than the available ones in computation, storage and communication overheads. The limitation of the proposed scheme is that it is not yet a satisfactory solution for merge and partition events. SDR [7] based approaches can deal with the problem of a center sending a message to a group of users. We are planning to extend the proposed scheme to all membership events by modifying the SDR model.
References 1. Harney, H. and C. Muckenhirn. Group Key Management Protocol (GKMP) Specification. RFC 2093, July 1997 2. M. Steiner, G. Tsudik, and M. Waidner. Key agreement in dynamic peer groups. IEEE Transactions on Parallel and Distributed Systems, August 2000 3. Y. Kim, A. Perrig, and G. Tsudik. Simple and fault-tolerant key agreement for dynamic collaborative groups. Technical Report 2, USC Technical Report 00-737, August 2000 4. C. Wong, M. Gouda, and S. Lam. Secure group communications using key graphs. IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 8, NO. 1, FEBRUARY 2000 5. D. Naor, M. Naor, and J. Lotspiech. Revocation and tracing schemes for stateless receivers. In Advances in cryptology - CRYPTO, Santa Barbara, CA, August 2001. Springer-Verlag Inc. LNCS 2139, 2001, pp. 41-62 6. D. Boneh and M. Franklin. Identity based encryption from the Weil pairing. SIAM J. of Computing, Vol. 32, No. 3, pp. 586-615, 2003 7. D. Naor, M. Naor, and J. Lotspiech. Revocation and tracing schemes for stateless receivers. Lecture Notes in Computer Science, 2139:41–62, 2001
Improvement of the Naive Group Key Distribution Approach for Mobile Ad Hoc Networks

Yujin Lim¹ and Sanghyun Ahn²

¹
Department of Information Media, University of Suwon, Suwon, Korea [email protected] 2 School of Computer Science, University of Seoul, Seoul, Korea [email protected]
Abstract. Most mobile ad hoc network (MANET) applications are based on group communication and, because of the insecure characteristic of the wireless channel, multicast security is especially needed in MANET. Secure delivery of multicast data can be achieved with the use of a group key for data encryption. However, to support dynamic group membership, the group key has to be updated for each member join/leave and, consequently, a mechanism distributing the updated group key to members is required. The two major categories of group key distribution mechanisms proposed for wired networks are the naive and the tree-based approaches. The naive approach is based on unicast, so it is not appropriate for large group communication environments. On the other hand, the tree-based approach is scalable in terms of the group size, but requires a reliable multicast mechanism for group key distribution. In the sense that the reliable multicast mechanism requires a large amount of computing resources from mobile nodes, the tree-based approach is not that desirable for the small-sized MANET environment. However, recent studies on secure multicast mechanisms for MANET focus on the tree-based approach. Therefore, in this paper, we propose a new key distribution protocol, called the proxy-based key management protocol (PROMPT), which is based on the naive approach and reduces the message overhead of the naive approach by introducing the concept of the proxy node.
1
Introduction
The mobile ad hoc network (MANET) is a new paradigm for wireless communication among mobile devices (nodes). Within this network, mobile nodes can
This research was supported by the MIC(Ministry of Information and Communication), Korea, under the Chung-Ang University HNRC-ITRC (Home Network Research Center) support program supervised by the IITA (Institute of Information Technology Assessment). This work was supported by grant No. R01-2004-10372-0 from the Basic Research Program of the Korea Science & Engineering Foundation.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 763–772, 2005. c Springer-Verlag Berlin Heidelberg 2005
communicate without the help of fixed base stations and switching centers. Mobile nodes within each other's coverage can communicate directly via wireless channels, and those outside it can communicate with the help of intermediate mobile nodes. MANET was originally researched for communication in military environments. Recently, with the advent of mobile devices like portable computers, MANET has become a commercially viable solution. Most MANET applications, like disaster relief and information exchange within a conference or a lecture room, are based on group communication. However, because of the insecure characteristic of the wireless channel, security becomes a big issue and makes group communication within MANET more problematic. For security management within MANET, availability, confidentiality, integrity and authentication must be considered [1]. Secure delivery of multicast data can be achieved with the use of a group key for data encryption. However, to support dynamic group membership, the group key has to be updated for each member join/leave and, consequently, a mechanism distributing the updated group key to members is required. The two major categories of group key distribution mechanisms proposed for wired networks are the naive and the tree-based approaches. The naive approach is based on unicast, so it is not appropriate for large group communication environments. On the other hand, the tree-based approach is scalable in terms of the group size, but it requires a reliable multicast mechanism for group key distribution, a characteristic not well suited to the energy-constrained MANET environment. However, recent studies on secure multicast mechanisms for MANET focus on the tree-based approach (and most of them do not even consider mobility).
Since most MANET applications are expected to be small-sized ones and MANET nodes are mobile and constrained in computing capability and power, a simpler secure multicast mechanism is required for the practicality of MANET. Therefore, in this paper, we propose a simple secure multicast mechanism which adopts the naive approach and enhances it to reduce the group key update overhead of the naive approach by utilizing the broadcast capability of the wireless link and the neighbor information obtained from a MANET routing protocol such as AODV [2]. The rest of the paper is organized as follows: in Section 2, the schemes proposed for multicast security in the wired network and MANET are described, and the motivation of our work is presented. In Section 3, we describe our newly proposed key distribution protocol, the proxy-based key management protocol (PROMPT). In Section 4, the performance of PROMPT is shown, and Section 5 concludes this paper.
2
Related Work
Recently, group communication applications like video conferencing and push-based information services over the Internet have attracted much attention. These group-based applications are closely related to multicast mechanisms and
major issues in multicast include the provisioning of reliability and security. Multicast security implies that only allowed senders/receivers can send/receive group messages. Secure delivery of multicast data can be achieved by encrypting data and, for this, group key distribution among members is required. In order to provide multicast security, several aspects must be considered. First, group members must share a group key for data encryption and decryption. Second, join/leave secrecy must be provided. Join secrecy implies that a newly joining member is not able to decrypt messages transmitted ahead of its join. For this, whenever a new member joins, the group controller must update the group key, encrypt the new group key with the old group key, and multicast it to the group members. Leave secrecy implies that a left member is not able to decrypt messages transmitted after its leave. Hence, the group key must be updated for each member leave. However, unlike the join case, if the new group key is encrypted with the old group key, the left member can decrypt the new group key. Therefore, for leave secrecy, a private key is assigned to each member and, whenever a member leaves, the new group key is encrypted with each private key and sent to the corresponding member (i.e., the new group key is not exposed to the left member). For the distribution of an updated group key, two major approaches, the naive and the tree-based approaches, have been proposed [3]. In the naive approach, the group controller maintains one group key and N private keys for a group with N members. At the time of subscription, each member enters into the authentication procedure and receives the group key and its own private key from the group controller. In the naive approach, leave secrecy is guaranteed by the group controller unicasting the new group key, encrypted with each private key, to each corresponding member.
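The naive leave rekey described above can be sketched in a few lines. This is illustrative only: the "cipher" is a stand-in XOR, not real encryption, and all names are ours. The point is the cost structure, one fresh group key and one unicast ciphertext per remaining member, so O(N) messages per leave:

```python
import secrets

def xor_encrypt(key, data):
    # Stand-in cipher for illustration only (NOT secure encryption).
    return bytes(a ^ b for a, b in zip(key, data))

def naive_leave_rekey(private_keys, leaving):
    # private_keys: member_id -> 16-byte private key held by the controller.
    # Returns the new group key and one unicast ciphertext per remaining
    # member; the left member cannot recover the new key from any of them.
    new_group_key = secrets.token_bytes(16)
    unicasts = {m: xor_encrypt(k, new_group_key)
                for m, k in private_keys.items() if m != leaving}
    return new_group_key, unicasts

keys = {m: secrets.token_bytes(16) for m in ("m1", "m2", "m3")}
gk, msgs = naive_leave_rekey(keys, "m3")
print(len(msgs))  # one message per remaining member
```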
The advantages of the naive approach are simplicity, ease of implementation, and no requirement for a reliable multicast mechanism. However, the larger the group size is, the more overhead it requires, since the group controller transmits the new group key to each member via unicast. The tree-based approach aims to reduce the key update overhead of the naive approach by maintaining a logical key tree composed of the group key, subgroup keys and private keys at the group controller [4]. Subgroup keys help to reduce the number of messages generated for a key update at the group controller. A logical K-ary key tree structure is adopted for efficient key management. A member of a group with N members has to maintain all of the keys on the path from the root to itself in the key tree. The group controller has to maintain all keys in the key tree, including the group key, subgroup keys and N private keys. Therefore, the number of keys maintained at the group controller is (KN - 1) / (K - 1), and that at a group member is log_K N. In summary, in the tree-based approach, the number of keys maintained at a member is increased and the number of messages transmitted by the group controller is reduced. Hence, the tree-based approach is more scalable in terms of the group size than the naive approach. Due to this advantage of the tree-based approach, recently proposed secure multicast mechanisms for MANET
are focused on the tree-based approach and try to reduce its communication overhead [5] [6] [7] (some of the previously proposed MANET tree-based secure multicast mechanisms operate based on GPS information, which is not adequate for the MANET environment, and most of the non-GPS based mechanisms do not consider mobility at all; only [7] provides a moderate level of mobility). However, since the key distribution of the tree-based approach is based on multicast, a reliable multicast mechanism is required as a basic component, because members that do not receive the newly updated keys cannot participate in the group communication any further. In order to provide reliable multicast services, reliable multicast mechanisms have been extensively studied for the wired network. Recently, with the increase of interest in MANET, reliable multicast mechanisms for MANET have been proposed by many researchers [8] [9] [10] [11] [12]. The reliable multicast mechanism requires buffering at sources and/or receivers for the recovery of lost packets, so even in the wired network reliable multicast mechanisms overburden nodes. Also, the reliable multicast mechanism cannot be used for real-time applications due to the retransmission-based lost packet recovery. Especially in MANET, which uses wireless channels with a high bit error rate (BER) and mobile nodes, providing a reliable multicast service is more difficult than in the wired network. Moreover, with the energy and computing resource constraints of MANET nodes, using a complex reliable multicast mechanism is not preferable. Therefore, instead of using the tree-based approach requiring the reliable multicast mechanism, using the naive approach is more practical for the small-sized MANET environment, which does not have to be scalable.
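The key counts quoted above for the naive and tree-based approaches are easy to tabulate. A sketch (function names are ours) for a full K-ary key tree with N = K^h members:

```python
import math

def controller_keys(K, N):
    # Tree-based approach: group key + subgroup keys + N private keys,
    # i.e. all nodes of a full K-ary tree with N leaves: (K*N - 1)/(K - 1).
    return (K * N - 1) // (K - 1)

def member_keys(K, N):
    # Keys on the path from the root to a leaf (depth log_K N).
    return round(math.log(N) / math.log(K))

def naive_controller_keys(N):
    # Naive approach: one group key plus N private keys.
    return N + 1

print(controller_keys(2, 8), member_keys(2, 8), naive_controller_keys(8))
```

For a binary tree with 8 members this gives 15 keys at the controller versus 9 for the naive approach; the tree pays extra storage to cut the per-rekey message count from O(N) unicasts to O(K log_K N) multicast payload entries.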
Therefore, we propose a new key distribution protocol, called the proxy-based key management protocol (PROMPT), which is based on the naive approach and utilizes the broadcast characteristic of the wireless channel in order to reduce the message overhead of the naive approach. In this paper, we focus only on the group key distribution mechanism itself and not on other security-related issues such as malicious proxy nodes.
3
Proxy-Based Key Management Protocol (PROMPT)
In this section, we propose a new key distribution protocol, the proxy-based key management protocol (PROMPT), which aims to reduce the message overhead of the naive approach. In PROMPT, join secrecy is easily provided by multicasting the new group key encrypted with the old group key for each member join. For leave secrecy, the broadcast characteristic of the wireless channel is utilized to reduce the key update message overhead. Two basic operations, the first-hop grouping and the last-hop grouping, are newly defined for PROMPT. The first-hop grouping is performed by the source, which multicasts new key information to its neighboring group members. This is feasible since each node maintains its own neighbor list. In this operation, the source encrypts an updated group key with private
keys of its neighboring members, includes the newly encrypted group keys in an update message, and sends it via 1-hop flooding. The destination address of the update message is the group multicast address, the TTL field is set to 1, and the data field includes one or more [IP address of a neighboring member, new group key encrypted with that member's private key] pairs. The last-hop grouping is performed by a member with a number of neighboring members and also uses 1-hop flooding. For this operation, each member node has to know how many of its neighbor nodes are members. However, since the group membership information is managed only at the source, it is not possible for a node to know the membership of its neighbors. Therefore, to solve this problem, at the initial stage, the source unicasts the updated key information to each member not in the first-hop grouping, as in the naive approach. The group address is included in the IP option field of this new key message to allow each member node to learn the group membership of its neighbors. Once the source starts to send data after finishing the key update procedure, those member nodes with more than a pre-specified number, k, of neighboring members (where k is a system parameter determined at the session set-up stage of a multicast session) send PROXY packets to the source to let the source know that they are possible representatives (i.e., proxies) of their neighboring members. Within a PROXY packet, the list of neighboring members is included. The proxy node selection problem is the set covering problem, which is NP-hard [13], and, as a heuristic solution, a greedy approach is adopted in PROMPT. That is, the source receiving PROXY packets selects as proxy nodes those nodes with the largest number (greater than or equal to k) of neighboring members not covered yet. Figure 1 shows the formal description of the first-hop grouping at the source and the last-hop grouping at a proxy node.
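The greedy proxy selection described above can be sketched as follows (data layout and names are ours): the source repeatedly picks the candidate covering the most still-uncovered members, subject to the threshold k, and the leftover members fall back to unicast as in the naive approach:

```python
def select_proxies(candidates, members, k):
    # candidates: dict proxy_id -> set of neighboring member ids
    # (as reported in PROXY packets); members: all members to rekey.
    # Greedy set-cover heuristic: pick the candidate covering the most
    # uncovered members, as long as it covers at least k of them.
    uncovered, proxies = set(members), {}
    while True:
        best = max(candidates,
                   key=lambda c: len(candidates[c] & uncovered),
                   default=None)
        if best is None or len(candidates[best] & uncovered) < k:
            break
        proxies[best] = candidates[best] & uncovered
        uncovered -= candidates[best]
    return proxies, uncovered  # leftover members are rekeyed by unicast

neigh = {"a": {1, 2, 3, 4}, "b": {3, 4, 5}, "c": {6, 7}}
proxies, rest = select_proxies(neigh, {1, 2, 3, 4, 5, 6, 7, 8}, 2)
print(sorted(proxies), sorted(rest))
```

In the example, "a" and "c" become proxies covering members 1-4 and 6-7, while members 5 and 8 (reachable only below the threshold, or by no candidate) are served by unicast.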
Figure 2 shows the format of the key update packet sent by the source. The data field includes [IP address, encrypted group key] pairs for the proxy node and for each of its neighboring members, each group key encrypted with the corresponding node's private key. The destination field has the IP address of the proxy node. The IP option field has the following subfields:
– the multicast address, to indicate the group information
– the proxy bit, to let the receiving node know whether it is a proxy node
– the key bit, to indicate whether the packet carries user data or key information
– the number of [IP address, encrypted group key] pairs
– the length of an encrypted group key
Since the information used for the proxy node selection is collected during the previous group key update procedure, some of the neighboring nodes of a proxy node may not receive the key update message. In this case, the proxy node notifies the source of those non-receiving members so that the source can unicast the update message to them.
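A hypothetical encoding of this layout is sketched below; the field widths, byte ordering and flag-bit assignment are assumptions for illustration, not taken from the paper:

```python
import socket
import struct

def pack_key_update(mcast_addr, is_proxy, key_len, pairs):
    """Encode the assumed option subfields plus [IP, encrypted key] pairs."""
    flags = (1 if is_proxy else 0) | 0b10      # bit 0: proxy bit, bit 1: key bit
    header = struct.pack("!4sBBH", socket.inet_aton(mcast_addr),
                         flags, len(pairs), key_len)
    body = b"".join(socket.inet_aton(ip) + key for ip, key in pairs)
    return header + body

def unpack_key_update(packet):
    """Invert pack_key_update: recover the option fields and the key pairs."""
    mcast, flags, n, key_len = struct.unpack("!4sBBH", packet[:8])
    pairs, off = [], 8
    for _ in range(n):
        ip = socket.inet_ntoa(packet[off:off + 4])
        pairs.append((ip, packet[off + 4:off + 4 + key_len]))
        off += 4 + key_len
    return socket.inet_ntoa(mcast), flags, key_len, pairs

pkt = pack_key_update("224.1.1.1", True, 4,
                      [("10.0.0.1", b"AAAA"), ("10.0.0.2", b"BBBB")])
```

Carrying the key length in the header lets the receiver walk a variable number of fixed-size pairs, matching the subfields listed above.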
Y. Lim and S. Ahn
Fig. 1. The procedure of the first-hop grouping and the last-hop grouping
Fig. 2. The format of the key update packet
We can expect that PROMPT will show better performance for a dense group. For a dense group, the number of those nodes with more than a pre-specified number of neighboring members may be large and, if PROXY packets are sent almost simultaneously by members, the packet explosion problem may occur. To solve this problem, the node which has sent a PROXY packet to the source lets its neighbors know the fact in order to prevent them from sending their own PROXY packets to the source.
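This suppression rule can be sketched as follows; the function shape, threshold comparison and shared `suppressed` set are modeling assumptions:

```python
def maybe_send_proxy(node, member_neighbors, k, suppressed):
    """Send a PROXY only if this member is not already suppressed and has at
    least k neighboring members; a sent PROXY suppresses those neighbors so
    they do not send their own PROXY packets (avoiding the implosion)."""
    if node in suppressed or len(member_neighbors) < k:
        return False, suppressed
    return True, suppressed | set(member_neighbors)

sent, suppressed = maybe_send_proxy("m1", {"m2", "m3"}, 2, set())
blocked, _ = maybe_send_proxy("m2", {"m1", "m3"}, 2, suppressed)
```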
For a sparse group, the performance of PROMPT is expected to be similar to that of the naive approach. The reason for this is that in this case proxy nodes may not be selected and PROMPT works like the naive approach.
4
Performance Evaluation
To evaluate the performance of PROMPT, we have performed simulations using the GloMoSim simulator [14]. GloMoSim is a simulation package for wireless network systems written in the distributed simulation language PARSEC. The simulation environment consists of 50 mobile nodes in a 1000 m × 1000 m area; the transmission range of each node is set to a 250 m radius, and the channel capacity is set to 2 Mbps. The moving direction of each mobile node is chosen randomly. We apply the free space propagation model, in which the signal power decays as 1/d^2 for distance d, and assume IEEE 802.11 as the medium access control protocol. AODV [2] is used as the underlying unicast routing protocol, the source and receivers are randomly selected, and the total simulation time is set to 1000 seconds. Since those nodes with more than a pre-specified number (i.e., threshold k) of neighboring members send PROXY packets to the source, if the threshold is set to a small value, PROXY packet implosion can happen. On the other hand, if the threshold is set to a large value, the number of generated PROXY packets decreases and PROMPT degenerates into the naive approach. Therefore, an appropriate threshold value needs to be determined. Figure 3 shows the performance of PROMPT in terms of the number of transmitted packets (including PROXY and key update packets) for various threshold values (in this case, the group size is set to 15 and the pause time to 300 seconds). For smaller threshold values, more PROXY packets are generated since the possibility of being a proxy candidate node becomes higher ('PROMPT(control)' in figure 3 shows this result). For larger threshold values, more key packets are generated by the source since the possibility of the last-hop grouping decreases ('PROMPT(key)' in figure 3 shows this result). In that figure, the plot labeled 'PROMPT' is the result of summing up 'PROMPT(control)' and 'PROMPT(key)'. Overall, in this simulation environment the most appropriate threshold value is 5. Figures 4 and 5 show the performance of PROMPT for various node mobility values and group sizes with k = 5. As shown in figure 4, PROMPT outperforms the naive approach in terms of the number of transmitted packets, and the control overhead due to PROXY packets is kept almost constant. As the node mobility decreases (i.e., as the pause time increases), the number of transmitted packets increases since only successful transmissions are counted (lower mobility gives a higher probability of successful transmission).

Fig. 3. The performance for various thresholds

Fig. 4. The performance for various node mobility

Fig. 5. The performance for various group sizes
Figure 5 shows that, in terms of the number of transmitted packets, PROMPT outperforms the naive approach for all group sizes, and for larger group sizes PROMPT performs much better. The reason is that for larger groups the possibility of having neighboring members increases. In summary, PROMPT gives better performance in a dense group environment, and even in the sparse group case it performs similarly to the naive approach. Since most group communications in MANETs happen in a dense environment, we can say that PROMPT is appropriate for MANETs.
5
Conclusion
MANET is a new paradigm for wireless communication among mobile devices. Most MANET applications are based on group communication. However, due to the insecure characteristics of the wireless channel, secure delivery of multicast data becomes an important issue and, for this, the concept of the group key has been adopted. Since the group membership can change dynamically, the group key also has to be updated. Updated keys need to be distributed to group members, so a key distribution protocol is required for secure multicast. The naive and tree-based approaches are the most representative group key management schemes proposed for wired networks. However, since MANETs are small-sized and dense in nature and can hardly support a reliable multicast mechanism, the naive approach is preferable to the tree-based approach in this kind of environment. Therefore, in this paper, we proposed a key distribution protocol, the proxy-based key management protocol (PROMPT), which is based on the naive approach and tries to reduce the number of key distribution-related packets. PROMPT introduces the concept of the proxy node, which is a representative of its neighboring members, and utilizes the broadcast characteristic of the wireless channel to reduce the key distribution overhead. The performance evaluation has shown that PROMPT outperforms the naive approach in dense group and higher node mobility cases. Even in the worst case, PROMPT performs almost the same as the naive approach.
References
1. L. Zhou and Z. J. Haas, "Securing Ad Hoc Networks", IEEE Network, pp. 24-30, Nov. 1999.
2. C. Perkins, E. Belding-Royer, and S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing", IETF RFC 3561, July 2003.
3. M. J. Moyer, J. R. Rao, and P. Rohatgi, "A Survey of Security Issues in Multicast Communications", IEEE Network, pp. 12-23, Nov. 1999.
4. C. K. Wong, M. Gouda, and S. S. Lam, "Secure Group Communication Using Key Graphs", Proceedings of ACM SIGCOMM, 1998.
5. L. Lazos and R. Poovendran, "Energy-Aware Secure Multicast Communication in Ad-hoc Networks Using Geographic Location Information", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '03), vol. 4, pp. 201-204, April 2003.
6. M. Moharrum, R. Mukkamala, and M. Eltoweissy, "CKDS: An Efficient Combinatorial Key Distribution Scheme for Wireless Ad-Hoc Networks", IEEE International Conference on Performance, Computing, and Communications (IPCCC '04), pp. 631-636, April 2004.
7. S. Zhu, S. Setia, S. Xu, and S. Jajodia, "GKMPAN: An Efficient Group Rekeying Scheme for Secure Multicast in Ad-Hoc Networks", International Conference on Mobile and Ubiquitous Systems: Networking and Services (MOBIQUITOUS '04), pp. 42-51, Aug. 2004.
8. R. Chandra, V. Ramasubramanian, and K. Birman, "Anonymous Gossip: Improving Multicast Reliability in Mobile Ad-Hoc Networks", 21st International Conference on Distributed Computing Systems (ICDCS), pp. 275-283, April 2001.
9. S. Gupta and P. Srimani, "An Adaptive Protocol for Reliable Multicast in Mobile Multi-hop Radio Networks", IEEE WMCSA '99, pp. 111-122, Feb. 1999.
10. L. Klos and G. Richard III, "Reliable Group Communication in an Ad Hoc Network", IEEE International Conference on Local Computer Networks (LCN 2002), 2002.
11. A. Sobeih, H. Baraka, and A. Fahmy, "ReMHoc: A Reliable Multicast Protocol for Wireless Mobile Multihop Ad Hoc Networks", IEEE Consumer Communications and Networking Conference (CCNC), Jan. 2004.
12. K. Tang, K. Obraczka, S.-J. Lee, and M. Gerla, "Reliable Adaptive Lightweight Multicast Protocol", IEEE ICC 2003, May 2003.
13. T. H. Cormen, C. E. Leiserson, and R. L. Rivest, "Introduction to Algorithms", MIT Press, 1990.
14. UCLA Parallel Computing Laboratory and Wireless Adaptive Mobility Laboratory, "GloMoSim: A Scalable Simulation Environment for Wireless and Wired Network Systems", http://pcl.cs.ucla.edu/projects/domains/glomosim.html
RAA: A Ring-Based Address Autoconfiguration Protocol in Mobile Ad Hoc Networks

Yuh-Shyan Chen and Shih-Min Lin
Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
Abstract. The problem of dynamic IP address assignment is manifest in mobile ad hoc networks (MANETs), especially in 4G all-IP-based heterogeneous networks. In this paper, we propose a ring-based address autoconfiguration protocol to configure node addresses. This work aims at the decentralized ring-based address autoconfiguration (DRAA) protocol, which achieves low latency and reduces broadcast messages to lower control overhead. In addition, we introduce the centralized ring-based address autoconfiguration (CRAA) protocol to further diminish control overhead and to provide an even distribution of IP address resources. Both the DRAA and CRAA protocols are low-latency solutions because each node independently allocates partial IP addresses and does not need to perform duplicate address detection (DAD) during the node-join operation. Communication overhead is significantly lessened because the RAA (DRAA and CRAA) protocols use a logical ring, distributing address resources and retrieving invalid addresses with fewer control messages, solely by means of unicast messages. In particular, the CRAA protocol eliminates a large number of broadcast messages during network merging. The other important contribution is that the CRAA protocol distributes address resources evenly among the nodes of a network, which makes our solution suitable for large-scale networks. Finally, the performance analysis illustrates the achievements of the RAA protocols. Keywords: Autoconfiguration, IP address assignment, MANET, RAA, wireless IP.
1
Introduction
Multiple functions of the fourth-generation (4G) communication system are envisioned to be extensively used in the near future. 4G networks are all-IP-based heterogeneous networks, exploiting IP-based technologies to achieve integration among multiple access network systems, such as 4G core networks, 3G core networks, wireless local area networks (WLANs) and MANETs. In IP-based MANETs, users communicate with others without infrastructure and service charges. A MANET is made up of identical mobile nodes, each with a limited wireless transmission range to communicate with neighboring nodes. In
order to link nodes through more than one hop, multi-hop routing protocols - such as DSDV, AODV, DSR, ZRP and OLSR - are designed. These multi-hop routing protocols require each node to have its own unique IP address to transmit packets hop by hop toward the destination. Hence, for these routing protocols to operate correctly, a node must possess an IP address distinct from that of any other node. In recent years, many solutions for dynamic IP address assignment in MANETs have been proposed. According to their dynamic addressing mechanisms, we organize these solutions into the following four categories: all-agreement approaches [1][3][5], leader-based approaches [2][9], best-effort approaches [8][10] and buddy system approaches [4][7]. Among all-agreement approaches, [5] features a distributed, dynamic host configuration protocol for address assignment called MANETconf; its greatest communication overhead is produced during network merging because nodes in the networks must perform DAD. Among leader-based approaches, Y. Sun et al. [2] proposed a Dynamic Address Configuration Protocol (DACP). The biggest drawback of leader-based approaches is that the workload of the leader node is too heavy due to performing DAD for all joining nodes. Among best-effort approaches, the prophet address allocation protocol [10] makes use of an integer sequence of random numbers generated by a stateful function f(n) for conflict-free allocation. Prophet does not perform DAD, which reduces communication overhead during network merging, but nodes with the smaller network identifier (NID) change their IP addresses whether or not duplication occurs. Although Prophet brings the benefit of lower communication overhead, nodes with the smaller NID break all on-going connections. When two large networks merge, the impact of connection loss is significant. A. P. Tayal et al.
[7] proposed an address assignment scheme for automatic configuration (named AAAC), which applies the buddy system whenever resources run out and new nodes seek to join a network. In AAAC, during network merging every node broadcasts its IP address and address pool to the whole network, which incurs large communication overhead. To offer effective IP address assignment in a dynamic network environment, our addressing protocol pursues three goals, aiming at efficient, rapid IP address distribution as well as applicable address resource maintenance: 1) low latency, 2) low communication overhead and 3) evenness. Namely, low latency means that a requesting node promptly gets a unique address in the IP address assignment process; communication overhead is lessened to enhance network efficiency; and address resources are evenly distributed across nodes. The remaining sections of the paper are organized as follows. Section 2 presents protocol comparisons and the basic ideas of the RAA protocols. In Section 3, we present the details of the decentralized RAA protocol (DRAA) with regard to address resource maintenance and node behavior handling. In Section 4, the centralized RAA protocol (CRAA) is introduced. Section 5 shows the performance analysis. Finally, Section 6 draws conclusions for the paper.
2
Basic Ideas
In this section, we present both the conceptual differences among various protocols and the basic ideas of the RAA protocols. The key to determining the effectiveness of a dynamic IP address assignment protocol mainly lies in the latency of node joining, and we illustrate the comparison among various protocols in terms of node joining in Fig. 1. We consider mobile wireless networks where all nodes use IP addresses to communicate with others. Such a network can be modeled as follows. A mobile wireless network is represented as a graph G = (V, E), where V is the set of nodes and E is the edge set which gives the available communications. In a given graph G = (V, E), we denote by n = |V| the number of nodes in the network. The identifier (ID) and the related list of node u are represented as Nu and RLu respectively. The network identifier is represented as NID, a 2-tuple of the holder's ID and a random number, whose uniqueness distinguishes different networks. The ID of a node is also a 2-tuple. The successor, the predecessor and the second predecessor of node u are represented as Su, Pu and SPu. In Fig. 1(a) and (b), MANETconf and DACP are not conflict-free protocols, so they have to perform DAD during node joining, which increases the latency of node joining. On the contrary, the other protocols, such as Prophet, AAAC, DRAA and CRAA, are conflict-free, so they can assign a unique address to new nodes without DAD. Furthermore, AAAC, DRAA and CRAA are categorized as buddy system approaches, whose main difference is the evenness of address blocks across all nodes during node joining. In AAAC and DRAA, a new node Nj broadcasts a one-hop address request (AREQ) to its neighbors and awaits the very first address reply (AREP). As shown in Fig. 1(e), the CRAA protocol awaits the AREPs of all neighbors and picks the biggest address block for use, which effectively improves on the uneven distribution of address blocks in the former case.
Fig. 1. The comparison of various protocols during node joining: (a) MANETconf, (b) DACP, (c) Prophet, (d) DRAA/AAAC, (e) CRAA

In this paper, we draw on a novel technique developed in peer-to-peer (P2P) networks to provide a logical view for resource maintenance. It offers a logical network, allowing clients to share files in a peer-to-peer way. One of the distributed hash table (DHT) based approaches in P2P, known as Chord [6],
inspires us to utilize its logical view to perform address resource management. In the prototype of Chord, in order to keep load balancing, each node uses a hash function to produce a node identifier. Unlike Chord, our RAA protocols do not need any hash function to distribute address resources, because IP address resources differ from file resources. If a hash function were applied to distribute address resources evenly across the network, the complexity of address management would rise, making it unsuitable for MANETs. For this reason, the buddy system is combined with the RAA protocols to manage resources. The binary buddy system is a common, well-known type of buddy system starting with a resource size of 2^m units. When an application issues a request, a 2^m-unit block is split into two blocks of 2^(m-1) units each. However, resources in the binary buddy system are depleted rapidly: when a node owns 2^m addresses and allocates them to other nodes up to m times, its address resources will be consumed. In the traditional buddy system approach (AAAC), address resources are retrieved solely upon resource consumption. If the network changes frequently, resources often run out; if so, the latency of a new node's acquiring an available address block inevitably increases. In our solution, address resources are retrieved both upon resource consumption and during node joining. In the RAA protocols, each node records its logical neighbors' IDs on the related list (RL). The logical neighbors are the successor, the predecessor and the second predecessor. Notice that the RL is updated during node joining and network merging. If Ni exists in the RAA protocols, the successor (Si) is the node which allocated an address block to it and is also the first node in the clockwise course starting from Ni. The predecessor (Pi) and the second predecessor (SPi) are the first node and the second node in the anticlockwise course starting from Ni respectively.
The successor’s, the predecessor’s and the second predecessor’s IDs can be represented as SIDi , P IDi and SP IDi . The RL of Ni is RLi : {SIDi , P IDi , SP IDi }. Each node possesses its own anticlockwise-course free address block. During node leaving, only the successor in the clockwise course has to be informed; then it will retrieve the address block of the leaving node. Our solution achieves highly-efficient resource management and address allocation through the combination of logical ring and the buddy system.
3
Decentralized Ring-Based Address Autoconfiguration Protocol (DRAA)
Since nodes in MANETs exhibit joining, leaving, partitioning and merging behaviors, this section sheds light on how the DRAA protocol handles these behaviors. Before we introduce it, the state diagram of the RAA protocols is shown first to help describe the node behaviors. There are five states in the RAA protocols: INITIAL, STABLE, MERGE, HS and FINISH. When a new node intends to join a network, it enters the INITIAL state and waits for a free address block. A node enters the STABLE state when getting an address. If nodes are informed of network merging by some node, they enter the MERGE state for merging. If the holder leaves the network, other nodes enter the HS (holder selection) state
to select a new holder. If leaving the network, a node enters the FINISH state. According to these node behaviors, we describe how the DRAA protocol deals with joining and leaving. In the following paragraphs, we state the initiation of networks and specify node behaviors under the DRAA protocol. 3.1
Initial State
A node is in the INITIAL state before getting a usable address. At the start, the first node, Ni, enters a network and broadcasts a one-hop address request (AREQ) message. Ni starts an address request timer (AREQ Timer) to gather responses from other nodes. Once the timer expires and Ni has not received any response, Ni identifies itself as the first node (i.e. the holder) in the network. Ni randomly chooses an IP address and sets it into its ID. The holder uses its ID and a random number as the network identifier (NID). The NID is periodically broadcast to the whole network by the holder, enabling nodes to learn the NID of the network they are located in and to detect network partitioning and merging. 3.2
Node Joining
When aiming to enter a network, Nj needs to obtain an IP address and then checks whether Pj exists in the network in order to ensure the wholeness of address blocks. The joining procedure consists of two phases: 1) address requesting and 2) failed node checking. The address requesting phase is used to allocate an IP address to a new node, whereas the failed node checking phase is used to check whether the predecessor of the new node is alive. Address Requesting Phase. With Nj in the INITIAL state and intending to join the network in DRAA, the address requesting phase is triggered. It allocates an IP address and an address block to a new node, and involves an address request (AREQ) issued by the new node, an address reply (AREP) returned by neighbors of the new node, and an address reply acknowledgement (AREP ACK) sent by the new node to the neighboring node which was the first to transmit an AREP. Failed Node Checking Phase. After the address requesting phase, Nj uses the IP address to communicate with other nodes in the MANET and moves from the INITIAL state to the STABLE state. The failed node checking phase is used to check whether the predecessor of the new node is alive. The new node first sends an alive checking (ACHK) message to its predecessor. If the predecessor is alive, it replies with an ALIVE message to the new node. If the predecessor has failed, the new node sends an address retrieve (ARET) message to its second predecessor to retrieve the address block of the predecessor. The second predecessor then sends back an address retrieve acknowledgement (ARET ACK) to the new node to inform it which nodes are its new predecessor and second predecessor.
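As a toy illustration of the failed node checking exchange just described, the sketch below reduces the ACHK/ALIVE probe to a boolean and the ARET/ARET ACK exchange to a returned block; the function shape, block representation and which node keeps the retrieved block are assumptions of this sketch:

```python
def failed_node_check(pred_alive, pred_block, my_blocks):
    """Model the ACHK/ALIVE probe; on failure, the ARET/ARET_ACK exchange
    retrieves the predecessor's orphan address block."""
    if pred_alive:
        return my_blocks                 # predecessor answered ALIVE
    return my_blocks + [pred_block]      # orphan block retrieved

# blocks are (start, size) pairs, as in a binary buddy system
blocks = failed_node_check(False, (8, 8), [(0, 8)])
```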
3.3
Node Leaving
When a node leaves the network gracefully, it sends a LEAVE message to its successor, and the successor takes over the address block of the leaving node. Notice that during node leaving, the successor need not be a one-hop neighbor of the leaving node, which has to obtain a routing path to the successor from its routing table. If a node crashes without sending a LEAVE message, the orphan block of the crashed node is retrieved in the failed node checking phase. 3.4
Network Partitioning
The only responsibility of the holder node in the DRAA protocol is to broadcast its NID. If a node does not receive the NID within one broadcasting period, the node may have been partitioned from the network. But wireless communication is not reliable, and the NID may be missed due to packet loss. To reduce this impact, we define network partitioning as three broadcasting periods without receiving the NID. Once a node detects network partitioning, it performs the holder selection algorithm. Holder Selection. In the holder selection algorithm, we borrow the random backoff from the carrier sense multiple access with collision avoidance (CSMA/CA) protocol of the IEEE 802.11 standard and slightly modify the idea. When network partitioning is detected, the detecting node enters the holder selection (HS) state and chooses a random backoff timer (RB Timer) which determines how long the node must wait until it is allowed to broadcast its NID. The RB Timer reduces collisions among NID messages broadcast by nodes in the holder selection state. 3.5
Network Merging
Assume that two networks G and G', with identifiers NID_G and NID_G', intend to merge; network G has n nodes and network G' has n' nodes. In the DRAA protocol, all nodes have to broadcast their IDs for DAD during network merging. When all nodes have full information about the networks, they choose the holder broadcasting the bigger NID. If address duplication occurs, the duplicate nodes which have fewer TCP connections or are in the smaller network should rejoin the network. The benefits of choosing the node from the smaller network are quicker responses and fewer possible disconnections.
4
Centralized Ring-Based Address Autoconfiguration Protocol (CRAA)
With regard to low communication overhead and evenness, the centralized RAA protocol (named CRAA) is introduced. The differences between the DRAA and CRAA protocols lie in node joining and network merging. In the address requesting phase of node joining, the CRAA protocol selects the biggest free address block for use. Furthermore, the CRAA
protocol in the failed node checking phase can recover more than one failed node, because the holder in CRAA maintains the node list (NL), which contains all used IP addresses in the network. The NL is helpful when there are two or more consecutive node failures in a ring, so that the lost address resources can be retrieved with the help of the holder. During network merging, the NL reduces broadcast messages because only the holder's NL is exchanged. In the address requesting phase, the new node in the CRAA protocol waits for all neighboring nodes' AREP messages and selects the biggest free address block for use. The DRAA protocol can distribute valid addresses immediately, but the address blocks of the nodes will probably not be even in the long term. The advantage of the CRAA protocol is that it averages the number of free address blocks managed by each node. Since the holder maintains the NL of the network in the CRAA protocol, each node registers with the holder when it takes over a free address block: after a new node starts using a new IP address, it sends a registration request (RREQ) to the holder, which records the new node's address on the NL and sends a registration reply (RREP) back to the new node. By contrast with the DRAA protocol, during network merging in the CRAA protocol, when the holder receives the merging request (MREQ), it broadcasts its own NL. Each node receives the holders' NLs, merges them, modifies its own RL and chooses the bigger NID as its network identifier. This method removes much of the communication overhead during network merging. The NL is thus significant in CRAA, so we introduce the replication of the NL. The NL records all used addresses in the network and is a helpful information source for both lost address block retrieval and network merging. NL replicas are kept at the holder's successor, predecessor and second predecessor.
When the holder gets an RREQ message, it copies the newest NL to these three nodes to ensure the freshness of the NL.
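The CRAA joiner's choice of the biggest offered block, as described in this section, can be sketched as follows; the AREP representation is an assumption:

```python
def choose_block(areps):
    """areps: list of (neighbor_id, (start, size)) AREP offers. A CRAA joiner
    waits for all offers and picks the largest block; a DRAA joiner would
    simply take the first reply to arrive."""
    neighbor, block = max(areps, key=lambda offer: offer[1][1])
    return neighbor, block

neighbor, block = choose_block([("n5", (64, 4)), ("n9", (128, 16)), ("n2", (32, 8))])
```

Waiting for all AREPs costs extra latency (as the evaluation below shows) but evens out block sizes across the network.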
5
Performance Analysis
To make an appropriate protocol simulation, all protocols are implemented in C++. We select AAAC [7] and Prophet [10] to compare with our protocols; we do not compare with MANETconf and DACP, since AAAC and Prophet are conflict-free protocols while MANETconf and DACP are not. The simulation parameters are described below. The simulation time is 1500 seconds. The speed of nodes varies from 0 to 10 m/s, and the pause time is set to 0 for strict evaluation. The mobile nodes move according to the random waypoint mobility model. The network area is 1000 m × 1000 m and the number of nodes ranges from 50 to 300. The size of address blocks is set to 2^m IP addresses with m = 10 and 24. The maximal address block size of 24 bits is a suitable choice for large networks. In particular, in our simulation, the transmission range is 250 m. The underlying routing protocol is DSDV in all simulations. The network is initialized with a single node. Nodes join the network every 1 second and arriving nodes are placed randomly in the
Fig. 2. Average latency of address allocation, the number of broadcast/unicast messages and resource consumption times with the varied number of nodes at m=10 and 24
rectangular region. The beacon interval for the holder to broadcast its NID is set to 10 seconds. The maximum number of AREQ retries r is set to 3 [10], and the AREQ Timer is set to 2 seconds for quicker addressing. For strict evaluation, once all nodes are stable, node leaving and rejoining, network partitioning and network merging occur freely. The performance metrics of the simulation are given below.
– Latency: Latency of address allocation represents the average latency for a new node to obtain a unique IP address within the network. The shorter the latency, the better, since it means a new node can get a usable IP address more rapidly.
– Communication overhead: Communication overhead refers to the number of control messages transmitted during the simulation period, including unicast and broadcast messages. Normally, broadcast messages occupy more bandwidth than unicast messages do.
– Evenness: Evenness implies that address blocks should be evenly distributed over all nodes, which indicates that each node has the capability to assign address blocks to newly-joined nodes. The more evenly address blocks are allotted, the fewer the address resource consumption times at each node, from which we can determine whether address blocks are evenly distributed.
Fig. 2(a) and (e) depict the average latency of address allocation versus the number of nodes, with two address block sizes: 2^10 and 2^24 respectively. In general, the DRAA protocol has the shortest latency because it replies quickly to an address request and maintains orphan address blocks when a node joins the network. The CRAA protocol has longer latency than the AAAC and DRAA protocols because it awaits the AREPs of neighbors to select a
RAA: A Ring-Based Address Autoconfiguration Protocol
781
bigger address block. Prophet has the longest latency when the number of nodes is more than 150 because, during network merging, all nodes in the smaller NID have to rejoin the network and wait for the expiration of an AREQ Timer. When the number of nodes is 50, all protocols have longer latency because the network size is 1000m × 1000m and the transmission range is 250m; when a new node joins the network, there is a high probability that no node is within its transmission range, so the new node must await the AREQ Timer expiration. When the number of nodes is more than 150 at m=10 in Fig. 2(a), the latency of AAAC and DRAA increases with the node number because more resource consumption times in the whole network expand the number of address requesting phase retries. Fig. 2(b) and (f) show the number of broadcast messages versus the number of nodes, with two address block sizes: 2^10 and 2^24, respectively. Prophet has the fewest broadcast messages because it does not perform DAD during merging and all nodes with the smaller NID have to rejoin the network. The CRAA protocol has the second fewest broadcast messages because only the holder broadcasts the N L for DAD during network merging. The DRAA protocol has fewer broadcast messages than AAAC because the resource consumption times of DRAA are fewer than those of AAAC. AAAC has the fewest unicast messages in Fig. 2(c) and (g), which display the number of unicast messages versus the number of nodes with block sizes 2^10 and 2^24, because most of its control messages are handled with broadcast messages. In the CRAA protocol, more unicast messages are added to maintain the N L, so its unicast messages outnumber those in DRAA as well as AAAC. Unicast messages in Prophet are the most numerous because many nodes need to rejoin the network during network merging. Fig.
2(d) and (h) show the resource consumption times versus the number of nodes, with two address block sizes: 2^10 and 2^24, respectively. The DRAA protocol, during node joining, has the failed-node checking phase to retrieve orphan blocks, so its resource consumption times decrease. The CRAA protocol has evenly distributed resources because nodes in the CRAA protocol request address blocks from all neighbors and choose the biggest one to use. Although the way Prophet allots addresses differs from that of buddy-system approaches and does not guarantee that every allotted address is unique, f(n), which distributes addresses, does not exhibit resource consumption; to be fair, the resource consumption times of Prophet are set to 0. On the whole, when the address block size is big enough, the resource consumption times are significantly reduced, and it is the CRAA protocol whose resource consumption times are closest to 0.
6 Conclusions
This paper proposes two ring-based address autoconfiguration protocols in mobile ad hoc networks. Compared with existing address assignment protocols, the DRAA protocol successfully achieves low latency and low communication
782
Y.-S. Chen and S.-M. Lin
overhead, and the CRAA protocol further achieves low communication overhead and evenness of dynamic address assignment. RAA protocols use a logical ring to perform address allocation and resource management. The ring provides unique address assignment without DAD. The DRAA protocol tolerates one node's invalidity and restores a failed node without the help of the holder, while the CRAA protocol restores failed nodes with the help of the holder. Based on the above advantages, RAA protocols show high efficiency in address allocation as well as in resource management, and suitability for large-scale mobile ad hoc networks.
References

1. C. E. Perkins, J. T. Malinen, R. Wakikawa, E. M. Belding-Royer, and Y. Sun. Ad hoc Address Autoconfiguration. IETF Internet Draft, draft-ietf-manet-autoconf-01.txt, 2001. (Work in Progress).
2. Y. Sun and E. M. Belding-Royer. Dynamic Address Configuration in Mobile Ad hoc Networks. Technical report, Computer Science Department, UCSB, Mar. 2003.
3. Zero Configuration Networking. http://www.zeroconf.org/.
4. Mansoor Mohsin and Ravi Prakash. IP address assignment in a mobile ad hoc network. In Proceedings of IEEE Military Communications Conference (MILCOM 2002), volume 2, pages 856–861, Anaheim, CA, United States, 7-10 Oct. 2002.
5. Sanket Nesargi and Ravi Prakash. MANETconf: Configuration of hosts in a mobile ad hoc network. In Proceedings of the Twenty-first Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2002), volume 2, pages 1059–1068, 23-27 Jun. 2002.
6. I. Stoica, R. Morris, D. Liben-Nowell, D. Karger, M. Frans Kaashoek, F. Dabek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup protocol for internet applications. IEEE/ACM Transactions on Networking, pages 149–160, 2002.
7. Abhishek Prakash Tayal and L. M. Patnaik. An address assignment for the automatic configuration of mobile ad hoc networks. Personal and Ubiquitous Computing, volume 8, issue 1, pages 47–54, Feb. 2004.
8. Nitin H. Vaidya. Weak duplicate address detection in mobile ad hoc networks. In Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2002), pages 206–216, Lausanne, Switzerland, 9-11 Jun. 2002.
9. Kilian Weniger and Martina Zitterbart. IPv6 Autoconfiguration in Large Scale Mobile Ad-Hoc Networks. In Proceedings of European Wireless 2002, pages 142–148, Florence, Italy, Feb. 2002.
10. Hongbo Zhou, Lionel M. Ni, and Matt W. Mutka. Prophet address allocation for large scale MANETs. In Proceedings of the Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2003), volume 2, pages 1304–1311, San Francisco, CA, United States, Mar. 30-Apr. 3 2003.
Dual Binding Update with Additional Care of Address in Network Mobility

KwangChul Jeong, Tae-Jin Lee, and Hyunseung Choo

School of Information and Communication Engineering, Sungkyunkwan University, Suwon, 440-746 Korea
+82-31-290-7145
{drofcoms, tjlee, choo}@ece.skku.ac.kr
Abstract. In this paper, we propose an end-to-end route optimization scheme for nested mobile networks, which we refer to as Dual Binding Update (DBU). In general, nested mobile networks easily suffer from bi-directional pinball routing through hierarchically stacked mobile routers. To handle this, we provide a new binding update (BU) message that allows a Correspondent Node (CN) to keep an additional Care of Address (CoA). We also let intermediate Mobile Routers (MRs) maintain a routing table to forward packets inside the mobile network and replace the source address of the packet for reverse route optimization. We evaluate the DBU against existing schemes by analytical approaches. The results show that the DBU reduces the delay of route optimization significantly under various scenarios and also improves the average Round Trip Time (RTT) consistently for the nesting levels tested.
1 Introduction
As wireless networking technologies have advanced drastically and many electronic devices have gained the capability to communicate with their own IP addresses, users expect to be connected to the Internet from anywhere at any time. The IETF has standardized protocols such as Mobile IPv4 (MIP) and Mobile IPv6 (MIPv6) [3] to support seamless connectivity for mobile hosts. Recently, more and more users require seamless Internet services while on public transportation. And if a user moves with several wireless devices that communicate through the Internet, these moving devices constitute a Wireless Personal Area Network. Unfortunately, because Mobile IP is designed for continuous accessibility to mobile hosts with mobility transparency on IPv4 or IPv6, it does not provide a solution to these new demands. The continuous mobility of a network leads to movements of the nodes in the network. If all nodes are forced to run Mobile IP according to the movement of the network, the overhead causes futile consumption of network resources. Hence
This work was supported in parts by IITA IT Research Center and University Fundamental Research Program of Ministry of Information and Communication, Korea. Corresponding author.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 783–793, 2005. c Springer-Verlag Berlin Heidelberg 2005
an elected node called the Mobile Router (MR) [1] should act as a gateway on behalf of all nodes in the network for efficient resource management. The mobile network may have a complicated hierarchical architecture; this situation is referred to as a nested mobile network. According to the NEMO Basic Support Protocol (NBS) [8], all packets going through the nodes inside the nested mobile network must be tunneled to every Home Agent (HA) they pass by. To avoid this, the packet should be delivered directly to the Top Level Mobile Router (TLMR). However, because the Mobile Network Prefixes (MNPs) of the MRs in the nested mobile network are all different, the TLMR cannot route the packet to its destination. Hence we need a mechanism for end-to-end route optimization in the nested mobile network. The rest of this paper is organized as follows. We introduce the bi-directional pinball routing problem of nested mobile networks and review the existing NEMO schemes for route optimization in Section 2. Section 3 describes the dual binding update method of the DBU for end-to-end route optimization of nested mobile networks. In Section 4, we evaluate the performance of the proposed scheme compared with the existing ones. Finally, we conclude this paper in Section 5.
2 Related Works

2.1 NEMO Basic Support (NBS) [8]
In a NEMO network, the point of attachment can vary due to movement. Since every NEMO network has its own home network, mobile networks configure addresses using the prefix of their home. These addresses have a topological meaning while the NEMO network resides at home. When the NEMO network is away from home, a packet addressed to a Mobile Network Node (MNN) [1] still routes to the home network. The NBS is designed to preserve established communications between the MNNs and CNs while the NEMO network moves, and it creates bi-directional tunnels between the MNN HA and the MR CoA to support network mobility. The NBS, however, does not describe route optimization [8], since it is mainly designed around bi-directional tunnels. Hence if the mobile network has a nesting level (depth) of N, the packet is encapsulated N times. And when the MNN delivers a packet to the CN, all intermediate nodes replace the source address of the packet with their own CoAs to avoid ingress filtering, and the destination address with their HAs for packet tunneling; this causes the reverse pinball routing problem. The overhead of bi-directional pinball routing becomes more significant as the nesting level increases and the distance between the HAs of the MRs becomes longer. Therefore the NBS lacks scalability and promptness in a nested environment.
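The N-fold encapsulation can be made concrete with a small model. The sketch below is our illustration only (all names are invented, not from the NBS specification): each MR on the way out of the nested network wraps the packet toward its own HA, so a nesting depth of N produces N tunnel headers.

```python
# Illustrative model of NBS bi-directional tunneling (names are ours).
# Each MR on the way out of a nested mobile network tunnels the packet
# toward its own HA, so nesting depth N => N tunnel headers.

def nbs_encapsulate(inner_packet, mrs):
    """mrs: list of (mr_coa, mr_ha) pairs from the innermost MR to the TLMR."""
    packet = inner_packet
    for mr_coa, mr_ha in mrs:
        # The MR rewrites the outer source to its CoA (ingress filtering)
        # and addresses the tunnel to its Home Agent.
        packet = {"src": mr_coa, "dst": mr_ha, "payload": packet}
    return packet

def encapsulation_depth(packet):
    """Count how many tunnel headers wrap the original payload."""
    depth = 0
    while isinstance(packet.get("payload"), dict):
        depth += 1
        packet = packet["payload"]
    return depth

inner = {"src": "MNN_CoA", "dst": "CN", "payload": "data"}
outer = nbs_encapsulate(inner, [("MR2_CoA", "MR2_HA"), ("TLMR_CoA", "TLMR_HA")])
print(encapsulation_depth(outer))  # -> 2, one tunnel header per MR
```

With a nesting level of N the loop runs N times, which is exactly the N-fold encapsulation described above.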
2.2 Recursive Binding Update Plus (RBU+) [2]
Unlike the NBS, the RBU+ scheme operates under MIPv6 route optimization. Thus in RBU+, any node receiving a packet via its HA performs
Fig. 1. Route optimization in RBU+
the binding update (BU). After the CN sends the first packet to the Visited Mobile Node (VMN) [1], both the TLMR and the MR2 perform the BU as shown in Fig. 1(a). The RBU+ maintains the optimal route from the CN to the TLMR by updating its binding information recursively when it receives a BU message. For example, the CN makes (VMN HoA:VMN CoA) and (MR2 prefix:MR2 CoA) as in Fig. 1(a) out of (VMN HoA:MR2 CoA) by the recursive binding update, because the VMN CoA is configured with the MR2 prefix. The TLMR also delivers the BU to the CN as in Fig. 1(b), and the CN likewise makes (VMN HoA:MR2 CoA) and (TLMR prefix:TLMR CoA) out of (VMN HoA:MR2 CoA), since the MR2 CoA is configured with the TLMR prefix. Therefore the CN can deliver packets to the TLMR in which the VMN resides. However, the RBU+ must perform a recursive search for the recursive binding update whenever a BU message arrives, and the delay for route optimization becomes more serious as the nesting level increases.

2.3 Reverse Routing Header (RRH) [5]
The RRH scheme is based on a single tunnel between the first MR and its HA. The RRH records the addresses of intermediate MRs in the slots of a routing header when the MNN sends a packet to the CN. Fig. 2(a) shows an example of the RRH operation. When a VMN sends a packet to a CN, the MR2 records the source address of the RRH with its CoA to avoid ingress filtering. It also records the destination address of the RRH with its HA, and its HoA in the free slot of the RRH. The TLMR performs the same tasks when it receives the packet from the MR2. Then the packet is delivered to the MR2 HA, which receives the multiple slots together with the TLMR CoA. Finally the MR2 HA relays the packet to the CN according to the original routing header. Fig. 2(b) shows how the slot contents are used. When the CN sends packets to the VMN, they are routed through the MR2 HA. At this point, the MR2 HA records all intermediate nodes which the packets should traverse by using the TLMR CoA and the multiple slots. In other words, the RRH performs source routing with multiple slots, and it alleviates the pinball routing overhead with a single tunnel. However, it requires
Fig. 2. Route optimization in RRH
more slots as the depth of nesting increases, and because the RRH scheme suffers from an inevitable single tunnel, it still has potential overhead.
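The outbound slot recording described above can be sketched as follows. This is our simplified illustration, not the draft's exact header format; the field and variable names are invented.

```python
# Simplified sketch of Reverse Routing Header recording on the outbound
# path (field names are ours, not the draft's exact format).

def rrh_outbound(packet, mrs):
    """mrs: list of {'coa', 'hoa', 'ha'} dicts from the first MR up to the TLMR."""
    packet = dict(packet, slots=[packet["src"]])  # slot 0: original source
    packet["dst"] = mrs[0]["ha"]                  # single tunnel to the first MR's HA
    for mr in mrs:
        packet["src"] = mr["coa"]                 # rewrite source: ingress filtering
        packet["slots"].append(mr["hoa"])         # record the route for the return path
    return packet

pkt = rrh_outbound(
    {"src": "VMN_CoA", "dst": "CN"},
    [{"coa": "MR2_CoA", "hoa": "MR2_HoA", "ha": "MR2_HA"},
     {"coa": "TLMR_CoA", "hoa": "TLMR_HoA", "ha": "TLMR_HA"}],
)
# pkt now tunnels to MR2_HA with the traversed route recorded in pkt["slots"],
# which the HA can later use for source routing back toward the VMN.
```

Note how the slot list grows with the nesting depth, which is the scalability cost mentioned above.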
3 The Proposed Scheme

In this section, we propose a Dual Binding Update scheme that constitutes optimal routes by utilizing the BU message. We focus on the MIPv6-enabled VMN whose Home Address (HoA) is not derived from the MNP of its parent MR.

3.1 Dual Binding Update for End-to-End Route Optimization
The binding update that the MNN performs in MIPv6 can be divided into two types. The first occurs when the MNN detects its movement, and the second occurs when packets are delivered via its HA. In both cases, after the MNN sends the BU messages, each HA and CN records the (MNN HoA:MNN CoA) entry in its binding cache. However, packets cannot be routed through the optimal route based on this binding entry alone; hence the NBS incurs pinball routing to support nested mobile networks. In the DBU, the MNN sends the binding entry (MNN HoA:MNN CoA) when it performs BU to its HA, but it sends both (VMN HoA:VMN CoA) and the TLMR CoA when it performs BU to the CN.

[Figure: BU message format with flag bits A, H, L, K, M, R and the new T bit, followed by Reserved, Sequence #, and Lifetime fields]

Fig. 3. The T bit defined in the BU message
[Figure: (a) binding update for VMN_HA; (b) binding update for CN]
Fig. 4. Two types of BU message
So the CN can keep the optimal route from the CN to the TLMR. Fig. 3 shows the newly defined T bit in the BU message. If the T bit is set, a node receiving the BU message records the binding entry (VMN HoA:VMN CoA) including the TLMR CoA. In Fig. 4(a), the VMN sends the BU message to its HA with the T bit unset, and the VMN HA records only the (VMN HoA:VMN CoA) entry in its binding cache; otherwise, when the mobile network with which the TLMR is associated moves, a BU storm toward the HAs of the MRs would be generated. Fig. 4(b) shows the situation where the VMN sends the BU to the CN. In this case the T bit is set, and the CN records the (VMN HoA:VMN CoA:TLMR CoA) entry in its binding cache. Hence when the CN sends a packet to the VMN, it sends the packet to the TLMR directly, not via the intermediate MRs, thanks to the additional TLMR CoA entry. As shown in Fig. 5, every node which the BU message traverses replaces the source address with its CoA; unlike the NBS, it does not append an additional header. According to the NBS, when an MNN in a nested mobile network sends a packet to a CN, the packet is routed through each HA of the MRs, including its own HA, because every MR including the VMN records the source address with its CoA and the destination address with its HoA to avoid ingress filtering and to maintain the connection with the CN irrespective of the VMN's location. This reverse pinball routing causes serious overhead, which becomes more significant as the nesting level increases. When the CN maintains the TLMR CoA, it confirms the connection with the VMN since the source address of the packet is the TLMR CoA, and the VMN offers mobility transparency to the transport layer of the CN based on the Home Address Option (HAO). So the proposed DBU solves the reverse pinball routing problem. The following algorithm describes the operation when a node receives the BU message according to the mechanism mentioned above.
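The algorithm listing itself is not reproduced here; the following Python sketch is our reconstruction of the described behavior (all names are ours): a receiver of a BU with the T bit set caches the TLMR CoA alongside (HoA, CoA), while an HA receiving a BU with the T bit unset caches only (HoA, CoA).

```python
# Sketch (our reconstruction) of BU processing in the DBU scheme.
# With the T bit set, the CN caches the TLMR CoA alongside (HoA, CoA),
# so later packets can be sent directly to the TLMR.

binding_cache = {}

def on_binding_update(hoa, coa, t_bit, tlmr_coa=None):
    if t_bit:
        # CN-side entry: (VMN HoA : VMN CoA : TLMR CoA)
        binding_cache[hoa] = {"coa": coa, "tlmr_coa": tlmr_coa}
    else:
        # HA-side entry: (VMN HoA : VMN CoA) only, avoiding BU storms
        # toward the HAs when the whole nested network moves.
        binding_cache[hoa] = {"coa": coa}

def next_hop_for(hoa):
    entry = binding_cache.get(hoa)
    if entry is None:
        return None                              # fall back to routing via the HA
    return entry.get("tlmr_coa", entry["coa"])   # prefer the direct TLMR route

on_binding_update("VMN_HoA", "VMN_CoA", t_bit=True, tlmr_coa="TLMR_CoA")
print(next_hop_for("VMN_HoA"))  # -> TLMR_CoA
```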
The route optimization schemes described above do not address route optimization from the TLMR to the VMN. When the TLMR receives a packet from the CN through the optimal route, it has no clue how to route the packet to the VMN, since it does not maintain any address information for the VMN. Hence
we propose a route optimization scheme from the TLMR to the VMN by utilizing the BU message. Every VMN entering a new mobile network sends a BU message to its HA and CNs. All MRs above the VMN cache the VMN HoA and the source address of the packet while the BU messages go through them. So when the TLMR receives a packet whose destination is the VMN HoA, it is able to relay that packet to the proper intermediate child MRs and finally to the right VMN. In the RBU+ scheme [2], an additional mechanism is needed to send a packet from the TLMR to the VMN; in particular, in the Route Request Broadcasting mechanism, all intermediate MRs must relay request messages to their child nodes to find the destination node, so the delay of packet delivery grows as the nesting level increases.

3.2 More Mobility Concerns
We can classify the mobility of a nested mobile network into three types: 1) the VMN moves; 2) a partial network moves; 3) the entire network including the TLMR moves. In types 2 and 3, the CN is temporarily disconnected from the VMN of the nested mobile network, because the CN sends packets to the TLMR CoA according to its binding cache. In this paper, we consider this problem and propose a mechanism to handle it.
[Figure: 1) Adv, 2) Registration, 3) Ack, 4) BU / Packet Delivery; panels (a) and (b)]
Fig. 5. The case where an entire network moves
As shown in Fig. 5, the entire network including the TLMR1 moves to the network under AR2. The CN communicating with the VMN cannot recognize the movement of the VMN, so when the CN sends packets to the TLMR1, the VMN cannot receive them. We propose two methods to settle this problem. The first can be applied to the case where a partial network moves: the TLMR1 sends the BU message to all CNs communicating with its child nodes. These messages exchange only the TLMR CoA entry, not the (HoA:CoA) entry. This method has the advantage of short delay, but at high cost. The other method uses an extension of the BU message. When the VMN delivers the BU message to the CN, it inserts an additional TLMR HoA in
the binding entry. Hence the CN maintains the TLMR HoA together with the TLMR CoA. If the CN cannot send the packet to the TLMR CoA, it sends the packet to the TLMR HoA temporarily. Then, after the TLMR receives the packet through its HA, it can perform the BU to the CN or relay the packet for a while. This method prevents the TLMR from bursting binding updates, but it suffers from relatively large delay and packet losses.
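The second method amounts to a simple fallback rule at the CN, sketched below. This is our illustration; `choose_destination` and `reachable` are invented placeholders, not part of the DBU specification.

```python
# Illustrative fallback at the CN (the second method): try the cached
# TLMR CoA first, and fall back to the TLMR HoA (i.e. routing via the
# TLMR's HA) when the CoA is unreachable. 'reachable' is a placeholder
# for whatever reachability check the CN applies.

def choose_destination(entry, reachable):
    """entry: binding-cache dict with 'tlmr_coa' and 'tlmr_hoa' keys."""
    if reachable(entry["tlmr_coa"]):
        return entry["tlmr_coa"]      # optimal direct route
    return entry["tlmr_hoa"]          # temporary route via the TLMR's HA

entry = {"tlmr_coa": "TLMR_CoA", "tlmr_hoa": "TLMR_HoA"}
print(choose_destination(entry, reachable=lambda addr: False))  # -> TLMR_HoA
```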
4 Performance Evaluation
In this section, we analytically evaluate our proposed scheme against the NBS, RBU+, and RRH in terms of delay. First, we consider the delay until a VMN receives a packet through an optimal route after it enters the mobile network with which a TLMR is associated. Second, we evaluate the Round Trip Time (RTT), including the NBS, when a CN exchanges a number of packets with the VMN for various nesting levels.

4.1 Analysis Environment
We assume the following evaluation environment:

Assumption 1) One MR has t̄ nodes on average.
Assumption 2) Each node under the same parent MR is an MR with probability α and a VMN with probability 1−α.

Let T denote the total number of nodes which a single TLMR constitutes. Summing over the nesting levels,

T = t̄(1−α) + t̄α(t̄(1−α) + t̄α(t̄(1−α) + ···)) = t̄(α−1)/(t̄α−1) = t̄(1−α)/(1−t̄α)

Assuming t̄ ≥ 1 and T = t̄(α−1)/(t̄α−1) ≥ 1, we require t̄α < 1. Since we also assume that the CN communicates with an arbitrary VMN, we need the average number of VMNs at each nesting level:

Nesting level i: t̄^i α^(i−1) (1−α)

Therefore the probability that the VMN communicating with the CN resides at nesting level i is

P_VMN(i) = t̄^i α^(i−1) (1−α) / T = (1−t̄α)(t̄^(i−1) α^(i−1)) = (1−t̄α)(t̄α)^(i−1)

When we consider the routing paths through which the CN communicates with the VMN, we can classify them into paths outside the TLMR and inside the TLMR. Modeling the evaluation environment as a random network, we assume the distances between any two neighboring entities outside the TLMR are equal, and define ω as the one-hop delay outside the TLMR. We likewise assume the distances among nodes inside the TLMR are equal, and define ϕ as the one-hop delay inside the TLMR.
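The closed forms above can be checked numerically. The short script below is ours; it confirms that with t̄α < 1 the level probabilities P_VMN(i) form a geometric distribution that sums to 1 and matches the per-level VMN counts normalized by T.

```python
# Numerical check (ours) of Section 4.1: with t̄α < 1, the nesting-level
# probabilities P_VMN(i) = (1 − t̄α)(t̄α)^(i−1) sum to 1 and equal the
# per-level VMN counts t̄^i α^(i−1) (1−α) divided by T.

t_bar, alpha = 4.0, 0.125          # gives t̄α = 0.5 < 1
ta = t_bar * alpha

T = t_bar * (1 - alpha) / (1 - ta)  # total nodes under one TLMR

def p_vmn(i):
    return (1 - ta) * ta ** (i - 1)

def vmns_at_level(i):
    return t_bar ** i * alpha ** (i - 1) * (1 - alpha)

# P_VMN(i) equals the level-i VMN count normalized by T ...
assert abs(p_vmn(3) - vmns_at_level(3) / T) < 1e-12
# ... and the probabilities sum to 1 (geometric series).
print(round(sum(p_vmn(i) for i in range(1, 200)), 6))  # -> 1.0
```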
4.2 Comparison of Delay for Route Optimization

Users connecting to the Internet in wireless networks expect to be provided various services with guaranteed delay. Therefore, when we consider network mobility, which frequently forms nested mobile networks, it is especially important to deliver packets via the optimal route as soon as possible. We evaluate the delay of route optimization when the CN communicates with an arbitrary VMN under the given assumptions. The delay of route optimization is the time until the VMN receives a packet through the optimal route after the CN initiates communication, and it consists of the delay from the CN to the TLMR, the delay from the TLMR to the VMN, and the binding update delay. Here ω represents the one-hop routing delay between nodes outside the TLMR, and c(i) the number of hops the packet traverses outside the TLMR until the optimal route is established; ϕ represents the one-hop routing delay between nodes inside the TLMR, and b(i) the number of hops the packet traverses inside the TLMR until the optimal route is established.
The users connecting the Internet in wireless networks expect to be provided various services with guaranteed delay. Therefore when we consider the network mobility which forms a nested mobile network frequently, it is especially important to deliver the packet via the optimal route as soon as it is possible. We evaluate the delay of route optimization when the CN communicates with the voluntary VMN for the given Assumptions. The delay of route optimization is calculated as the time until the VMN receives the packet through optimal route after the CN initiates the communication. And it consists of the delay from the the CN to TLMR, from the TLMR to VMN, and binding update delay. The ω represents the one hop routing delay between the nodes outside the TLMR, c(i) represents the number of hops which the packet traverses outside the TLMR until the optimal route constitutes. And ϕ represents the one hop routing delay between nodes inside the TLMR, b(i) represents the number of hops which the packet traverses inside the TLMR until the optimal route constitutes. 4.2.1 The Delay of RBU+ Route Optimization In the RBU+, any node performs the BU whenever a packet arrives via its HA since the it is operated based on the MIPv6 route optimization. Hence when the CN sends the packet to the VMN, several nodes perform the BU concurrently. However we focus on the delay, we only evaluate the BU delay for the VMN which is the largest among them. We can define the number of routing hops which the packet routes via outside the TLMR in terms of the nesting levels. , i+2 + c(i − 1) − (i + 1) RBUCN −T LMR (i) = i + 2 + 2 , i+2 + c(i − 1) + 1 = 2 After the TLMR receives the packet, all of intermediate MRs need relay it to the VMN. In this respect, we can also define the number of routing hops which the packet routes via inside the TLMR for the nesting levels. 
RBUT LMR−MN N (i) = i(i + 2) When we consider that the VMN sends the BU message, we can define the number of hops which it passes by for the nesting levels. RBUMN N −T LMR (i) = RBUT LMR−CN (i) =
i(i + 1) 2
(i + 2)(i + 3) −1 2
Finally, the average total delay for RBU+ route optimization is as follows. RBUT OT AL =
∞ i=1
: ; (1 − t¯α)(t¯α)i−1 ω · c(i) + b(i)
Dual Binding Update with Additional Care of Address
=
791
8 % $, i+2 + c(i − 1) + 1 (1 − t¯α)(t¯α)i−1 ω · 2 i=1 $ 8& 8 $ i(i + 1) (i + 2)(i + 3) + ϕ · i(i + 2) + −1 +ω· 2 2
∞
4.2.2 The Delay of RRH Route Optimization
The RRH is based on a single tunnel between the TLMR and the HA of the first MR, so it involves continuous potential overhead. We can simply define the number of routing hops outside and inside the TLMR which the packet passes for nesting level i:

RRH_CN−TLMR(i) = (i+2) + 2,  RRH_TLMR−MNN(i) = 2i

And we define the required number of routing hops when the VMN performs the BU for nesting level i as follows:

RRH_MNN−TLMR(i) = i,  RRH_TLMR−CN(i) = 2

Finally, the total average delay for RRH route optimization is

RRH_TOTAL = Σ_{i=1}^{∞} (1−t̄α)(t̄α)^(i−1) [ω·c(i) + ϕ·b(i)]
          = Σ_{i=1}^{∞} (1−t̄α)(t̄α)^(i−1) [ ω·((i+2)+2) + ϕ·(2i+i) + ω·2 ]
4.2.3 The Delay of DBU Route Optimization
The DBU shortens the delay of route optimization through reverse route optimization compared with the RBU+. And since the proposed scheme does not have to pass the HA of the first MR after route optimization, it is more efficient than the RRH. We can define the number of hops outside and inside the TLMR which the packet passes through for nesting level i:

DBU_CN−TLMR(i) = i + 3,  DBU_TLMR−MNN(i) = 2i

And we define the required number of hops when the VMN performs the BU for nesting level i:

DBU_VMN−TLMR(i) = i,  DBU_TLMR−CN(i) = 1

As a result, the average total delay for the proposed route optimization is

DBU_TOTAL = Σ_{i=1}^{∞} (1−t̄α)(t̄α)^(i−1) [ω·c(i) + ϕ·b(i)]
          = Σ_{i=1}^{∞} (1−t̄α)(t̄α)^(i−1) [ ω·(i+3) + ϕ·(2i+i) + ω ]
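Truncating the geometric series gives a quick numerical comparison of the three totals. The script below is ours: the base case c(0) = 0 for the recursive RBU+ hop count and the floor in its formula are our assumptions in reading the original, but the resulting qualitative ordering DBU < RRH < RBU+ matches Fig. 6 for the paper's parameters (t̄α = 0.5, ω = 30, ϕ = 1).

```python
# Numerical comparison (ours) of the three closed-form delay totals,
# truncating the geometric series at i = 60. The recursion base c(0) = 0
# for the RBU+ CN->TLMR hop count and the floor in its formula are our
# assumptions in reading the garbled original.
from functools import lru_cache

T_BAR_ALPHA = 0.5   # t̄α, as in the paper's evaluation
OMEGA, PHI = 30, 1  # one-hop delays outside / inside the TLMR

@lru_cache(maxsize=None)
def rbu_cn_tlmr(i):
    if i == 0:
        return 0                           # assumed base case
    return (i + 2) // 2 + rbu_cn_tlmr(i - 1) + 1

def rbu_delay(i):
    outside = rbu_cn_tlmr(i) + i * (i + 1) // 2 + (i + 2) * (i + 3) // 2 - 1
    return OMEGA * outside + PHI * i * (i + 2)

def rrh_delay(i):
    return OMEGA * ((i + 2) + 2) + PHI * (2 * i + i) + OMEGA * 2

def dbu_delay(i):
    return OMEGA * (i + 3) + PHI * (2 * i + i) + OMEGA

def total(delay, n=60):
    ta = T_BAR_ALPHA
    return sum((1 - ta) * ta ** (i - 1) * delay(i) for i in range(1, n + 1))

# Qualitative ordering matches Fig. 6: DBU < RRH < RBU+.
print(total(dbu_delay) < total(rrh_delay) < total(rbu_delay))  # -> True
```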
[Fig. 6 panels: (a) delay of route optimization vs. t̄·α for RRH, RBU+, and DBU; (b) delay of route optimization for various one-hop delays ω outside the TLMR; (c) delay of route optimization vs. nesting level i; (d) average RTT vs. nesting level i for NBS, RBU+, RRH, and DBU]
Fig. 6. Comparison: delay of route optimization for t̄·α
Fig. 6 shows the evaluation results under various parameters. In Fig. 6(a), when t̄·α approaches 1, the total delay of route optimization for the RRH and RBU+ increases drastically. In general, the routing distance between nodes outside the TLMR is larger than that inside the TLMR. We vary the value of ω and evaluate the delay of route optimization while fixing ϕ to 1 and t̄·α to 0.5, as in Fig. 6(b). The result shows that the delay gap becomes more significant as ω increases. Fig. 6(c) shows the average delay for route optimization when the MNN resides at nesting level i. Since we fix t̄·α to 0.5, there is little chance that the nesting level exceeds 10. The result indicates that the DBU is superior to the RRH and the RBU+ within reasonable scopes. We also evaluate average RTTs, including the NBS, when the CN exchanges packets 30 times with the VMN under various nesting levels, with ω fixed to 30. The result shows that the average RTTs of all schemes increase linearly with different slopes. But as shown in Fig. 6(d), the average RTT of RBU+ exceeds that of RRH starting from nesting level 6, because the RBU+ takes longer to constitute the optimal route as the nesting level increases.
5 Conclusion
This paper proposes a Dual Binding Update scheme for end-to-end route optimization in NEMO environment. The DBU defines a new T bit in a BU message, and CNs maintain an additional TLMR CoA when the T bit is set. And another mechanism is needed to relay the packets to the correct MNN. So all nodes that
the BU message traverses keep the MNN HoA and the CoA of a child node for end-to-end route optimization. The MNNs can also constitute reverse optimal routes by using the TLMR CoA and the HAO. The RBU+ provides end-to-end route optimization, but it does not offer reverse optimization, and it takes more time as the nesting level increases. Although the RRH avoids pinball routing, it requires additional multiple slots in the BU message for source routing; besides, the RRH involves potential overhead due to its inevitable single tunnel. In our DBU, all nodes can communicate with MNNs with minimum delay. Our evaluation results show the advantages of the DBU compared to the other existing schemes.
References

1. C. Ng, P. Thubert, H. Ohnishi, E. Paik, "Taxonomy of Route Optimization Models in the NEMO Context," IETF, draft-thubert-nemo-ro-taxonomy-04, February 2005, Work in progress.
2. C. Hosik, P. Eun Kyoung, and C. Yanghee, "RBU+: Recursive Binding Update for End-to-End Route Optimization in Nested Mobile Networks," HSNMC 2004, LNCS 3079, pp. 468-478, 2004.
3. D. Johnson, C. Perkins, and J. Arkko, "Mobility Support in IPv6," IETF, RFC 3775, June 2004.
4. M. Watari, T. Ernst, "Route Optimization with Nested Correspondent Nodes," IETF, draft-watari-nemo-nested-cn-01, February 2005, Work in progress.
5. P. Thubert, M. Molteni, "IPv6 Reverse Routing Header and its Application to Mobile Networks," IETF, draft-thubert-nemo-reverse-routing-header-05, March 2005, Work in progress.
6. R. Wakikawa, S. Koshiba, K. Uehara, J. Murai, "ORC: Optimized Route Cache Management Protocol for Network Mobility," Telecommunications, ICT 2003, 10th International Conference, vol. 2, pp. 1194-1200, 23 Feb.-1 March 2003.
7. T. Clausen, E. Baccelli, R. Wakikawa, "NEMO Route Optimisation Problem Statement," IETF, draft-clausen-nemo-ro-problem-statement-00, October 2004.
8. V. Devarapalli, R. Wakikawa, A. Petrescu, P. Thubert, "Network Mobility (NEMO) Basic Support Protocol," IETF, RFC 3963, January 2005.
Optimistic Dynamic Address Allocation for Large Scale MANETs

Longjiang Li¹ and Xiaoming Xu²

¹ Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai 200030, P.R. of China
² Department of Automatic Control, Shanghai Jiaotong University, Shanghai 200030, P.R. of China
{E_llj, xmxu}@sjtu.edu.cn
Abstract. In order to allow truly spontaneous and infrastructureless networking, autoconfiguration of mobile node addresses is important for the practical use of most MANETs. Traditional methods such as DHCP cannot be extended to MANETs because MANETs may operate in a stand-alone fashion and their topologies may change rapidly and unpredictably. Diverse schemes have been proposed to solve this problem; some apply Duplicate Address Detection (DAD) algorithms to autoconfigure the address of each node in a MANET. However, the multi-hop broadcast used by DAD results in high communication overhead. Therefore, a new autoconfiguration algorithm is proposed in this article, which combines the enhanced binary split idea of the Dynamic Address Allocation Protocol (DAAP) with a pseudo-random algorithm to construct the interface ID of the IPv6 address. The allocation process is distributed and does not rely on multi-hop broadcast, so, as our simulation study shows, the algorithm is suitable for large scale MANETs.
1 Introduction

Mobile ad hoc networks (MANETs) are self-organizing wireless networks in which each mobile node is capable of routing data packets. Before proper routing is possible, every node must be configured with a unique address. In MANETs, preconfiguration is not always possible and has some drawbacks. Furthermore, there is no central infrastructure managing all nodes in a MANET, so an autoconfiguration protocol is required to provide dynamic allocation of node addresses [10]. The autoconfiguration protocols proposed so far can be classified into stateless and stateful approaches. Most stateful approaches rely on an allocation table [1]. M. Günes and J. Reibel [6] have proposed an address autoconfiguration protocol utilizing a centralized allocation table. MANETconf [5], Boleng's protocol [7], and the Prophet Allocation protocol [4] utilize a distributed common allocation table. The protocol proposed by M. Mohsin and R. Prakash [9] utilizes multiple disjoint allocation tables. The weakness is that most of these approaches rely on reliable state synchronization in the presence of packet loss and network merging, which may consume a considerable amount of bandwidth [1]. In contrast, the stateless approaches usually need a DAD algorithm to cope with address conflicts. Perkins et al. [10] have proposed an autoconfiguration protocol following the stateless approach. However, the broadcast used in DAD usually results in high communication overhead and poor scalability. Two other approaches, weak DAD (WDAD) [11] and passive DAD (PDAD) [12], need to change the routing protocol. The protocol HCQA [13] combines elements of both stateful and stateless approaches, but incurs more complexity. To overcome the aforementioned drawbacks, we propose a new address autoconfiguration algorithm, namely the optimistic dynamic address allocation algorithm (ODAA), which combines the enhanced binary-split idea of the Dynamic Address Allocation Protocol (DAAP) [2][3] with a pseudo-random algorithm [15] to greatly reduce the overhead of address configuration. This paper is structured as follows. The next section introduces the basic idea of the proposed approach. Section 3 discusses its characteristics and gives a succinct comparison with two known algorithms, which shows the superiority of the proposed algorithm. Section 4 presents simulation results, which agree with our analysis. Section 5 concludes the paper.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 794-803, 2005. © Springer-Verlag Berlin Heidelberg 2005
2 Basic Idea

We assume that each allocation process in the MANET starts from a single node. Each such process is referred to as an address domain, and the first node in a domain is called the domain initiator. In order to generate unique addresses in a domain, we define an allocation function based on an idea similar to the Dynamic Address Allocation Protocol (DAAP) [2][3]. Furthermore, a local decision policy is proposed to propagate address resources between neighboring nodes. Since each address holds a particular ability, assigned by the allocation function, to generate new addresses, we refer to such an address as a meta address. When a node needs to leave the MANET, it can transfer its address, encapsulated in an address message, to any one of its neighboring nodes for reuse.

2.1 Structure of Meta Address

We use a 3-tuple (address, power, DID) to represent a meta address, where address is its identification in the domain, power indicates its ability to generate new addresses, and DID is the domain identification. The address is unique within a domain, and the combination of DID and address identifies a global address. The DID must be generated with a pseudo-random algorithm consistent with [21] and is propagated to new nodes during the course of allocation. Because the DID is a random number, if the number of bits for the DID is large enough, two domains will have different DIDs with high probability. The probability that two or more DIDs collide can be approximated by the formula [15]: P = 1 - exp(-N^2 / 2^(L+1)), where P is the probability of collision, N is the number of interconnected DIDs, and L is the length of the DID in bits. For example, in IPv6, the 64-bit interface ID can be coded as the combination of DID and address.
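As a quick numerical check of this birthday-style approximation (our own sketch; the bit lengths and domain counts below are illustrative, not prescribed by the paper):

```python
import math

def did_collision_prob(n_domains: int, did_bits: int) -> float:
    """Approximation P = 1 - exp(-N^2 / 2^(L+1)) for the chance that
    at least two of N random L-bit DIDs collide."""
    return 1.0 - math.exp(-(n_domains ** 2) / 2.0 ** (did_bits + 1))

# Even with 1000 coexisting domains, a 32-bit DID rarely collides.
p = did_collision_prob(1000, 32)
print(f"{p:.6f}")  # prints 0.000116
```

This illustrates why a sufficiently long random DID makes inter-domain address collisions negligible without any coordination between domains.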
Suppose that the scope of address is [0, M-1], where M is a power of 2 (M = 2^K). If there is only one domain initiator in a MANET, every node in the MANET can obtain a unique address belonging to one domain, i.e., the DID plays the role of the network ID. Otherwise, multiple DIDs may compete or coexist in the same MANET. When we consider address allocation only within a single domain, all nodes have the same DID, so for convenience we may neglect the DID and use a 2-tuple (address, power) to represent an address. We denote the scope of address as S, i.e., S = [0, M-1]. The allocation function, say f, is defined as follows. 1) f: S×T -> S×T, where T = [0, K] is a set of integers; 2) for an input (address, power), f(address, power) = (address | 2^(K-power), power - 1); 3) a meta address can call f to generate a new address only if its power is greater than zero, and a meta address reduces its power by 1 each time it calls f successfully. Note that "|" is the bit-wise OR operation; e.g., for a node with (address, power) = (010, 1) and K = 3, f(010, 1) = (110, 0) (see Figure 1). Note that, here, address is coded in binary. A meta address whose power is zero is prohibited from calling f.
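The allocation function defined above can be sketched in Python (an illustrative sketch with K = 3 as in the example; the function and variable names are ours):

```python
K = 3  # the address space is [0, 2**K - 1]

def allocate(address: int, power: int):
    """Binary-split allocation: a meta address (address, power) spawns a
    new meta address by setting bit (K - power); both the new address and
    the allocator end up with power - 1."""
    if power <= 0:
        raise ValueError("a meta address with power 0 cannot allocate")
    new_address = address | (1 << (K - power))
    return (new_address, power - 1), (address, power - 1)

# Reproduces the paper's example f(010, 1) = (110, 0):
new, updated_self = allocate(0b010, 1)
print(format(new[0], "03b"), new[1])  # prints: 110 0
```

Since every call sets a bit that no other meta address in the domain can set, the addresses generated this way never collide within the domain, which is exactly why no duplicate address detection broadcast is needed.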
Fig. 7. Cost ratio of the TEBU scheme against MIPv6 handover (CO_TEBU / CO_MIPv6 vs. PMR (p); pedestrian and vehicle cases)

Fig. 8. Cost ratio of the TEBU scheme against FMIPv6 (CO_TEBU / CO_FMIPv6 vs. PMR (p); pedestrian and vehicle cases)
S. Ryu and Y. Mun

Fig. 7 and 8 show the variation of the cost ratio against MIPv6 handover and FMIPv6, respectively, when the radius of a cell is 100 meters. In the pedestrian case, the variation of the cost ratio is high at low PMR, since the handover process is performed infrequently. In Fig. 7 and 8, the higher the PMR, the better the performance, and the ratios converge to limits as the PMR increases beyond 450. By (4), the mean cost ratio is 0.21; therefore, compared to MIPv6 handover, the TEBU scheme gains a 79% improvement. By (5), the mean cost ratio is 0.79, so the TEBU scheme saves 21% of the cost against FMIPv6.
5 Conclusions and Future Study

A handover occurs when an MN that maintains a home IP address in MIPv6 moves. The handover process causes a long latency, the handover latency, during which the MN can neither receive nor send packets. In MIPv6, the handover latency is a critical issue. FMIPv6 has been studied to reduce the handover latency, but the registration latency is still long. We propose a tentative and early binding update (TEBU) scheme to reduce the handover latency; in particular, the TEBU message is sent to the HA in advance, before the layer 2 handover. We have shown improved performance by comparing costs, namely the cost ratio of the TEBU scheme against MIPv6 and FMIPv6. Compared to MIPv6 handover, TEBU gains a 79% improvement, and it gains a 21% improvement compared to FMIPv6. Currently, FMIPv6 is being standardized in the IETF. The TEBU scheme can help improve the performance of FMIPv6; therefore, when the TEBU scheme is used with FMIPv6, the performance of the handover process will be improved.
Acknowledgments This work was supported by the Soongsil University Research Fund.
The Tentative and Early Binding Update for Mobile IPv6 Fast Handover
A Simulation Study to Investigate the Impact of Mobility on Stability of IP Multicast Tree

Wu Qian1, Jian-ping Wu2, Ming-wei Xu1, and Deng Hui3

1 Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
{wuqian, xmw}@csnet1.cs.tsinghua.edu.cn
http://netlab.cs.tsinghua.edu.cn
2 Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
[email protected]
3 Hitachi (China) Investment, Ltd., Beijing 100004, China
[email protected]
Abstract. Mobile users expect the same kinds of applications as static users, including various IP multicast applications. In a mobile environment, the multicast tree must cope not only with the dynamic group membership problem but also with mobile nodes' position changes. In this paper, we study the stability of the IP multicast tree in a mobile environment. We define a stability factor and investigate how various elements of the network and of mobility affect it. It is shown that the stability factor is mainly dominated by three elements, namely the ratio of the number of mobile nodes to the network size, the mobility model, and the mobile multicast scheme. These results can give useful references for designing new mobile multicast schemes in the future.
1 Introduction
Mobile users expect the same kinds of applications as static users, including attractive IP multicast applications. In the meantime, with the merit of efficient multi-destination delivery, IP multicast saves network bandwidth and releases the source from the burden of replication. This efficiency is especially valuable for mobile networks, which usually use wireless infrastructure and face the problem of scarce bandwidth. In a mobile environment, multicast must deal not only with dynamic group membership but also with dynamic member location. The current multicast protocols were developed implicitly for static members and do not consider the extra requirements of supporting mobile nodes. Every time a member changes its location, keeping track of it and reconstructing the multicast tree will involve extreme
This work is supported by the Natural Science Foundation of China (No. 60373010), the National 973 Project Fund of China (No. 2003CB314801), and a cooperative research project on Mobile IPv6 Multicast between Hitachi (China) and Tsinghua University.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 836-845, 2005. © Springer-Verlag Berlin Heidelberg 2005
overhead, while leaving the tree unchanged will result in an inefficient, sometimes incorrect, delivery path. Mobile IP [5] provides two basic approaches to support mobile multicast, i.e., bi-directional tunneling (BT) and remote subscription (RS). Most of the other proposed solutions are based on them [2, 3]. The main problem of BT-based approaches is poor multicast packet delivery efficiency [2, 3]; sometimes the multicast would even degrade to unicast. On the contrary, RS-based approaches maintain most of the merits of the multicast mechanism. The main issue of these approaches is the overhead of maintaining the multicast delivery tree, as joining and leaving behaviors occur much more frequently in mobile networks. The purpose of this paper is to investigate the impact of mobility on the stability of the multicast tree. We implement our study both on a flat and on a hierarchical RS-based scheme. The study focuses on simulation-based evaluation because mobile multicast protocols are still an emerging area with little deployment, and simulation can provide researchers with a number of significant benefits, including repeatable scenarios, isolation of parameters, and exploration of a variety of metrics. In this paper, we investigate how the various elements of network and mobility impact the stability of the multicast tree. These elements include the mobile multicast scheme, the network size, the move speed, the mobility model, and the power range. From the abundant data obtained, it is shown that the stability factor, defined in this paper as the average number of link changes per update, is mainly dominated by three elements, namely the ratio of the number of mobile nodes to the network size, the mobility model, and the mobile multicast scheme, while the effect of speed and the AR's power range remains slight.
These results can give some useful references when we design new mobile multicast schemes in the future, and remind us to pay enough attention to the stability problem in the design procedure. The rest of the paper is organized as follows. In Section 2 we introduce former research on stability and our extension. In Section 3 mobile multicast schemes are introduced. Section 4 presents our simulation environment and methodology. In Section 5, a set of experiments is carried out and analytical results are presented. Finally, we conclude this paper and introduce future work.
2 Stability of Multicast Tree
One of the major points of interest in a multicast tree is the stability problem. Besides the dynamics of topology changes, multicast also offers the possibility of joining and leaving a group at any time. This activity requires the multicast tree to be dynamically updated. If these changes occur too often, the tree may become unstable, resulting in undesirable routing overhead.

2.1 Former Research on Stability
Van Mieghem [1] studies how the number of links in a multicast tree changes as the number of multicast users in a group changes. In his paper, the stability of a multicast tree is defined as follows.
Definition 1. In a shortest path tree with m different group members uniformly distributed in a graph containing N nodes, consider the number of links in the tree that change after one multicast group member leaves the group. If we denote this quantity by ΔN(m), then the average number of changes equals E[ΔN(m)]. The situation where E[ΔN(m)] ≤ 1 may be regarded as a stable regime.

The definition is based on the assumption that either no or one group member can leave at a single instant of time. The author carried out his research on a specific class of random graphs, called RGU (random graphs with N nodes, links chosen independently with probability p, and link metrics w uniformly distributed on [0,1] or exponentially distributed). The research shows that for RGU, when m is larger than 0.3161N ≈ N/3, the condition E[ΔN(m)] ≤ 1 is satisfied and the multicast tree is stable. In addition, it also quantifies the common belief that minimal spanning trees are less stable than shortest path trees.

2.2 Stability Problem in Mobile Networks and Our Motivation
With the character of mobility, the multicast tree encounters many more joining and leaving events in a mobile network than in a static one, so it is important to investigate the stability of the multicast tree in a mobile environment. Because the coverage areas of wireless networks can overlap, a mobile node (MN) can change its position in three manners: connect, disconnect, and handoff. In multicast applications, these complex position change manners result in complicated transformations of the multicast tree, so the assumption of Definition 1 cannot comprehensively reflect the behavior of the multicast tree in a mobile environment. In this article we expand Definition 1 into a more universal definition. Definition 2. In a multicast tree with m different group members uniformly distributed in a graph containing N nodes, consider the number of links in the tree that change after one multicast group membership update (a join, leave, or leave-join event) occurs. We denote this quantity by ΔN(m). The stability of the multicast tree may be measured by the average number of changes, E[ΔN(m)]. We call this value the Stability Factor α. The situation where α ≤ 1 may be regarded as a stable regime. Clearly, the smaller α, the more stable the tree is. Our study focuses on simulation-based evaluation and qualitative analysis. There are three main reasons. First, as mentioned in [1], very few types of graphs allow the stability of the multicast tree to be computed. Second, the topology of the Internet is currently not sufficiently known to be categorized as a type or an instance of a class of graphs. Third, mobile multicast is still an emerging area with little deployment, and simulation can provide researchers with a number of significant benefits, including repeatable scenarios, isolation of parameters, and exploration of a variety of metrics.
Although it is impossible to simulate all kinds of graphs, and there exist differences between the simulation and the real world, the simulation can still give us first-order estimates and useful references when designing a mobile multicast scheme.
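To make Definition 2 concrete, the sketch below recomputes a source-based shortest path tree after each membership update on a toy grid topology and averages the number of link changes per update. This is our own illustration, not the paper's simulator; the topology and event sequence are hypothetical.

```python
from collections import deque

def spt_edges(adj, source, members):
    """Edges of the source-based shortest path tree that carry traffic,
    i.e. edges on the shortest paths from the source to the members."""
    parent = {source: None}
    q = deque([source])
    while q:                      # BFS, since all links have equal weight
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    edges = set()
    for m in members:
        v = m
        while parent[v] is not None:
            edges.add(frozenset((v, parent[v])))
            v = parent[v]
    return edges

def stability_factor(adj, source, initial_members, events):
    """Definition 2: average number of tree-link changes per membership
    update; events is a list of ('join', node) / ('leave', node)."""
    members = set(initial_members)
    tree = spt_edges(adj, source, members)
    changes = 0
    for op, node in events:
        members.add(node) if op == 'join' else members.discard(node)
        new_tree = spt_edges(adj, source, members)
        changes += len(tree ^ new_tree)   # links added plus links removed
        tree = new_tree
    return changes / len(events)

# Toy 3x3 grid of multicast routers; the source sits in one corner.
n = 3
adj = {(r, c): [] for r in range(n) for c in range(n)}
for r in range(n):
    for c in range(n):
        for dr, dc in ((1, 0), (0, 1)):
            if r + dr < n and c + dc < n:
                adj[(r, c)].append((r + dr, c + dc))
                adj[(r + dr, c + dc)].append((r, c))

alpha = stability_factor(adj, (0, 0), {(0, 2)},
                         [('join', (2, 2)), ('leave', (2, 2))])
print(alpha)  # 4.0: each join/leave of the far corner changes 4 tree links
```

Here the join and the leave of the far-corner member each add or remove the four links of its shortest path, so α = 4, an unstable regime by Definition 2.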
3 Mobile Multicast Schemes
Bi-directional tunneling (BT) and remote subscription (RS) are the two basic multicast schemes introduced by Mobile IP for mobile environments. Most of the other proposed solutions are based on them and attempt to fix some of their drawbacks [2, 3]. In the BT scheme, the MN performs all multicast operations through its home agent, including joining/leaving groups and receiving multicast packets, so routing inefficiency and bandwidth waste are the main drawbacks. Because BT and its derivatives [2, 3] weaken the primary characteristics of multicast, more attention has been paid to RS-based approaches, and we focus our research on them. We choose two typical ones: the original flat RS scheme and the hierarchical MobiCast [4] scheme.

3.1 Remote Subscription (RS)
In the RS scheme, the MN resubscribes to the multicast group whenever it attaches to a new access network. The MN resubscribes using its new care-of address through the local multicast router, just like a static node in the foreign network. Obviously, multicast packets are delivered along the shortest paths, so this scheme maintains the main merits of the multicast mechanism. RS faces some new problems, and the major one is stability. In this scheme, both the multicast delivery tree and the multicast group membership must be updated after a handoff, which results in network and computation overhead. If many mobile nodes move quickly, this causes many leaving and joining events, and consequently a serious stability problem for the multicast tree.

3.2 MobiCast
MobiCast is a hierarchical RS-based scheme. This solution divides the network into domains, and each domain has a Domain Foreign Agent (DFA). MobiCast focuses on the intra-domain multicast technique, while the inter-domain method is directly chosen from RS or BT. The DFA subscribes to the multicast group on behalf of the MNs and manages multicast in its domain. For every multicast group, the DFA provides a unique translated multicast address in the domain. Multicast packets are first delivered to the DFA; the DFA then changes the group address to the translated multicast address. The Base Stations (BSs) in the domain subscribe to the translated multicast group and forward packets to the MNs after translating them back. To achieve fast handoff within the domain, an MN's affiliated BS informs the physically adjacent BSs to subscribe to the corresponding translated multicast group and buffer multicast packets. These BSs form a virtual domain called a Dynamic Virtual Macro-cell (DVM). Compared with RS, MobiCast hides the MN's movement from the outside and avoids updates of the main multicast delivery tree. Its other advantages are reduced handoff latency and packet loss. But because a mass of unnecessary multicast packets are forwarded to adjacent BSs that may have no group members, one of the main drawbacks of MobiCast is bandwidth waste, which is critical in a mobile environment. Moreover, every time a mobile handoff occurs, several BSs join the translated multicast group while others leave, so the other main drawback is the significant multicast protocol cost and multicast tree updates within the domain.
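The DFA's per-group address translation can be sketched as a simple mapping (our own toy model; the class, the local prefix, and the address format are illustrative assumptions, since real MobiCast rewrites IP headers):

```python
class DFA:
    """Toy model of MobiCast's DFA: each global multicast group gets a
    unique translated (domain-local) group address, allocated once and
    reused for every MN in the domain."""

    def __init__(self, local_prefix="ff15::"):
        self.local_prefix = local_prefix   # hypothetical domain-local scope
        self.translation = {}              # global group -> translated group
        self.next_id = 1

    def subscribe(self, global_group: str) -> str:
        # The DFA subscribes once on behalf of all MNs in the domain;
        # later subscriptions to the same group reuse the mapping.
        if global_group not in self.translation:
            self.translation[global_group] = f"{self.local_prefix}{self.next_id:x}"
            self.next_id += 1
        return self.translation[global_group]

dfa = DFA()
t1 = dfa.subscribe("ff1e::1234")
t2 = dfa.subscribe("ff1e::1234")   # same group -> same translated address
print(t1, t1 == t2)
```

Because handoffs inside the domain only touch this local mapping and the translated group, the main delivery tree outside the domain never sees the MN move.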
4 Simulation Environment and Methodology
In our simulation, the multicast group members are all mobile nodes, and updates of the multicast tree are caused entirely by position changes of mobile members. We record the total number of link changes of the multicast tree and the total number of position change events; the ratio of the two is the Stability Factor defined in Definition 2.

4.1 Investigating Elements
– Mobile Multicast Scheme: The manner of maintaining the multicast tree differs significantly between mobile multicast schemes. In our simulation we investigate the RS scheme and the MobiCast scheme; for MobiCast, we study both the single-domain and multi-domain situations.
– Number of MNs & Network Size: [1] shows that the number of MNs, denoted by m, and the network size, denoted by N, are the most important elements impacting the stability of the multicast tree.
– Mobility Model: The mobility model determines when and how an MN moves. The models used in the simulation are described below:
• Random Waypoint (RWP for short) [7]: In this model an MN begins by staying at a randomly chosen location for a certain period of time. Once this time expires, it chooses a random destination in the simulation area and a speed uniformly distributed between [minSpeed, maxSpeed]. The MN then travels toward the new destination at the selected speed. Upon arrival, the MN pauses for a random period before starting the process again. This model can mimic wandering in an area. In our simulation the pause time is uniformly chosen between 0 and 5 s, and the speed is random between 0 and 40 m/s.
• Gauss-Markov [8]: This mobility model imitates random movement without sudden stops and sharp turns. An MN's new speed and direction are correlated with its former speed and direction and with their mean values throughout the simulation. A tuning parameter α, where 0 ≤ α ≤ 1, is used to vary the randomness; the smaller α, the greater the randomness. We choose α = 0.75 in our simulation, the mean direction to be 90 degrees initially, and the mean speed fixed at 40 m/s.
• Exhibition [9]: An MN in this model chooses a destination from among a fixed set of exhibition centers and then moves toward that center with
a fixed speed uniformly chosen between [minSpeed, maxSpeed]. Once a node is within a certain distance of the center, it pauses for a given time and then chooses a new center. This model mimics people visiting a museum. Our simulation uses 10 centers placed uniformly. When an MN travels to a center, it stops when it is within 20 meters of the center and then pauses for a time between 0 and 10 s. The speed of a node is random between 0 and 40 m/s.
– Move Speed: One of the main characteristics of mobility is the move speed. The faster the MN, the more probable a position change event becomes.
– Power Range of AR: The power range reflects the service area of an AR. When the power range is small, the MN is prone to change its serving AR more frequently.

4.2 Network Model and Methodology
The simulation is built on OMNeT++ [6], a discrete event simulator. The topology in our simulation is a mesh network in which each node acts as the multicast router of a local network and also as an AR for MNs. Generally, the size of this mesh network is 10*10, the power range is set to a square of 100*100 meters for simplicity (so that only the handoff change manner occurs), and the distance between two nearby ARs is 100 meters. For the MobiCast scheme, there are an additional 1 or 4 DFA routers in the topology. We change the size of the mesh network or the power range of the AR when we investigate their impact on stability. For simplicity, there is only one multicast group with one fixed source, and the group members are all mobile nodes. We use a source-based shortest path tree to deliver multicast packets. Because all the links have the same weight, the shortest path tree in our simulation is also a minimal spanning tree. Initially, mobile nodes are randomly located in the mesh. The number of mobile nodes varies from 5 to 80. We run each simulation for 500 seconds.
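The Random Waypoint behavior described in Section 4.1 can be sketched as a time-stepped generator (our own simplified sketch; the parameter names and the square area are illustrative and not taken from the paper's OMNeT++ setup):

```python
import random

def random_waypoint(area=1000.0, min_speed=0.0, max_speed=40.0,
                    max_pause=5.0, steps=1000, dt=0.1, seed=1):
    """Yield (x, y) positions of one MN under Random Waypoint: pause at a
    point, pick a uniform random destination and speed, travel there in a
    straight line, pause again, and repeat."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area), rng.uniform(0, area)
    pause = rng.uniform(0, max_pause)
    dest = speed = None
    for _ in range(steps):
        if pause > 0:                       # waiting at a waypoint
            pause -= dt
        else:
            if dest is None:                # choose next waypoint and speed
                dest = (rng.uniform(0, area), rng.uniform(0, area))
                speed = rng.uniform(min_speed, max_speed)
            dx, dy = dest[0] - x, dest[1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            step = speed * dt
            if dist <= step:                # arrived: pause, then restart
                x, y = dest
                dest, pause = None, rng.uniform(0, max_pause)
            else:
                x += dx / dist * step
                y += dy / dist * step
        yield x, y

track = list(random_waypoint())
```

Feeding such a track through the AR grid (each MN served by the AR whose 100*100 m square it occupies) yields exactly the handoff events whose tree-link changes the stability factor counts.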
5 Results
In this section, we illustrate how mobility impacts the stability of the IP multicast tree. The stability is measured by the Stability Factor α defined in Definition 2: the smaller α, the more stable the tree is.

5.1 The Impact of Mobile Multicast Scheme

Fig. 1 illustrates how the stability factor varies with the number of MNs under different mobile multicast schemes. The schemes include RS, MobiCast with one domain (or DFA), and MobiCast with four domains (or DFAs). The size of the mesh network is 10*10 and the speed of each MN is random between 0 and 40 m/s. Fig. 1(a) is obtained under the RWP model and 1(b) under the Exhibition model.

Fig. 1. The Impact of Mobile Multicast Schemes: (a) comparing under the RWP model; (b) comparing under the Exhibition model

All the curves in Fig. 1 show that the stability factor decreases with m (the number of MNs), which accords with our intuition. They also show that the stability factor of the MobiCast scheme changes more sharply than that of RS, and that MobiCast is less stable when m is small but more stable when m is big enough. This phenomenon is due to the data redundancy mechanism used in MobiCast to achieve fast and nearly lossless intra-domain handoff. In MobiCast, when the number of MNs is small, every mobile handoff causes several ARs to join the multicast group while others leave. But when there are many mobile nodes in the network, most of the ARs already lie on the multicast tree, and a handoff event does not cause many changes. As an improved scheme, MobiCast achieves attractive performance in some aspects but aggravates the stability problem of the multicast tree, which keeps it from wide use.

5.2 The Impact of Network Size

In order to know how the network size impacts the stability factor, we run simulations in three networks of different sizes: 6*6, 10*10 and 12*12. The other simulation settings are the same. Fig. 2 illustrates the impact of network size in the RS and MobiCast schemes. We can see that the bigger the network, the less stable the tree is. This is because MNs are more dispersed in a big network, and the probability of moving to an AR that already has an MN in its range is small. Another result shown in Fig. 2 is that although the absolute values of the stability factor differ a lot, the curves of how it varies with the number of MNs are much alike.

Fig. 2. The Impact of Network Size: (a) the RS scheme; (b) the MobiCast scheme (4 domains)

5.3 The Impact of Mobility Model
Fig. 3 illustrates how the mobility model impacts the stability factor. Fig. 3(a) and 3(b) are the results obtained in the RS and MobiCast schemes, respectively. We compare three mobility models: RWP, Gauss-Markov and Exhibition. For the Gauss-Markov model, the tuning parameter is set to 0.75, which means the movement of an MN is strongly influenced by its past movement. As illustrated by Fig. 3, the mobility model does have some impact on stability. The order of stability of the multicast tree is Gauss-Markov > Exhibition > RWP, while the order of randomness is the reverse; that is, a more random model results in a less stable tree. This can be explained by noting that the mobility model determines how the MNs are distributed in the network, and greater randomness results in greater dispersion.
Fig. 3. The Impact of Mobility Model: (a) the RS scheme; (b) the MobiCast scheme (4 domains)
5.4 The Impact of Move Speed
We investigate the impact of move speed on stability in the RS scheme under different mobility models. Fig. 4(a) shows how the stability factor varies with the average speed of the MNs when the number of MNs is 30; the variation is investigated in both the RWP and the Exhibition model. Fig. 4(b) presents three curves of the stability factor's variation with the number of MNs when the average move speed in the Gauss-Markov model is 3, 10 and 40 m/s.

Fig. 4. The Impact of Move Speed: (a) the RWP and Exhibition models; (b) the Gauss-Markov model with different average speeds

We observe that the move speed has little impact on the stability factor. This result can be explained as follows. When the speed of an MN is higher, position change events occur more often, which causes the multicast tree to be updated more frequently. But the number of link changes increases too, so the average number of changes per update, named the stability factor in our definition, remains similar.

5.5 The Impact of the Power Range of AR
Fig. 5 illustrates how the power range of the AR impacts the stability factor. We investigate the RS scheme, with the power range of the AR varied between 50 and 100 meters. Fig. 5(a) and 5(b) are the results obtained under the RWP and the Exhibition model, respectively. From the figure we can see that the power range of the AR also has little impact on the stability factor, although when the power range is small the MN is prone to change its serving AR more frequently. The reason is the same as described in Section 5.4.

Fig. 5. The Impact of the Power Range of AR: (a) comparing under the RWP model; (b) comparing under the Exhibition model
6 Conclusion and Future Work
In this paper, we study the stability of the IP multicast tree in a mobile environment. We carry out our study on both the original RS scheme and the hierarchical
A Simulation Study to Investigate the Impact of Mobility
845
MobiCast scheme. The paper focuses on a simulation-based evaluation of how various network and mobility elements affect tree stability. These elements include the mobile multicast scheme, the network size, the move speed, the mobility model and the power range. From the abundant data obtained, it is shown that the stability factor defined in this paper is mainly dominated by three elements, namely the ratio of the number of mobile nodes to the network size, the mobility model and the mobile multicast scheme, while the effect of speed and the power range of the AR remains slight. Although simulation cannot fully reflect the real world, the simulation results can still give first-order estimates and useful references when designing a mobile multicast scheme. Sometimes the position change of a multicast source causes the whole multicast tree to be updated. In the future, we will devote more effort to investigating the stability problem when the multicast sender is itself mobile.
Fast Handover Method for mSCTP Using FMIPv6
Kwang-Ryoul Kim1, Sung-Gi Min1, and Youn-Hee Han2
1 Dept. of Computer Science and Engineering, Korea University, Seoul, Korea
{biofrog, sgmin}@korea.ac.kr
2 Samsung AIT, Giheung-si, Kyounggi-do, Korea
[email protected]
Abstract. In this paper, we propose a fast handover method for mSCTP using FMIPv6. By using FMIPv6 as the handover procedure in mSCTP, handover performance can be significantly improved. First, mSCTP can quickly add a new network address to the correspondent node, since FMIPv6 provides the New Care-of Address (CoA) without closing the connection to the current network. Second, mSCTP can determine when it has to change the primary IP address using a trigger from FMIPv6. This trigger indicates that the mobile node has completely joined the network of the New Access Router (NAR) and confirms that the MN can receive data through the NAR. We present integrated handover procedures that maximize the handover performance of mSCTP combined with FMIPv6. We implement the integration of mSCTP and FMIPv6 in our test bed and verify the operation of the suggested handover procedures by analyzing the experimental results.
1 Introduction
There are increasing requirements for communication in wireless environments, and mobility is one of the most important aspects of future IP networks. The functionality of mobile communication can be divided into three aspects: mobility management, location management and handover management. Mobility management allows a mobile node (MN) to connect to a correspondent node (CN) while it is moving. When the MN enters a new network, it finds the New Access Router (NAR) and obtains a new address from the NAR. After this, the MN configures the new IP address for the expected communication. Location management is used to find the MN and make a connection from the CN to the MN. It keeps the location information of the MN up to date each time the MN enters a new wireless network. Handover management provides continuous communication while the MN moves during communication. Using handover, the MN can change wireless access networks seamlessly, minimizing service disruption and packet losses.
Corresponding author.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 846–855, 2005. c Springer-Verlag Berlin Heidelberg 2005
Mobile IP (MIP) and Mobile IPv6 (MIPv6) are the most representative protocols that provide mobility and location management while supporting handover during mobile communication. Since the management of mobile communication is processed in the network layer, many currently used applications need not be modified to support mobile communication. However, MIP requires specific routers, such as the Home Agent (HA) and Foreign Agent (FA), that perform mobility control and location management [1],[2]. The Session Initiation Protocol (SIP) can provide mobile communication in the application layer [3]. SIP is used for establishing or closing multimedia sessions and supports unicast and multicast connections. SIP can provide handover by sending an updated INVITE message when the MN has moved to a new network, and provides location management using the REGISTER message, which registers new location information with the SIP registrar server. Fast Handover for Mobile IPv6 (FMIPv6) supports mobile communication in another respect. FMIPv6 was proposed to reduce the handover latency of MIPv6 so that MIPv6 can be used for real-time traffic such as voice and video [4]. FMIPv6 therefore focuses on handover management and does not provide any other functions for mobile communication. The Mobile Stream Control Transmission Protocol (mSCTP) has recently been proposed to provide mobility management and handover management in the transport layer without requiring specific routers such as the HA or FA. However, mSCTP does not provide location management, and it is hard to achieve low handover latency using mSCTP alone [5],[6]. Protocols such as MIP and SIP provide all mobile communication functionalities, whereas some protocols focus on specific aspects of mobile communication and must be used together with other mobility protocols.
Currently, there are various requirements in mobile communication, especially regarding handover performance, and a single mobile communication protocol cannot satisfy all of them. Low handover latency is essential to support real-time applications in a mobile environment. Therefore, FMIPv6 is expected to be widely adopted to improve handover performance in mobile communication environments. In this paper, we propose to use FMIPv6 with mSCTP to improve handover performance. We show that FMIPv6 can process handover with low latency and can serve as an efficient lower layer that supports handover for mSCTP. We also describe handover procedures that integrate both the FMIPv6 and mSCTP procedures. We perform experiments in our test bed to verify the operation of the suggested handover procedures. The handover performance of mSCTP using FMIPv6 is presented by comparing handover latencies.
2 Transport Layer Mobility Support in mSCTP
SCTP has been approved by the IETF as a new reliable general-purpose transport layer protocol, and it shares many characteristics with TCP, providing a reliable, session-oriented transport service [7]. SCTP also has additional features, such as multi-streaming and multi-homing, which support more varied application needs. With the multi-homing feature, a single SCTP endpoint can have multiple IP addresses for redundancy. In [8], the multi-homing feature is extended with Dynamic Address Reconfiguration (DAR), which lets SCTP add or delete an IP address and change the primary IP address while an end-to-end session is active. The DAR extension enables SCTP to support the mobility of a moving terminal. mSCTP is, in fact, SCTP with the DAR extension, and it supports soft handover in the transport layer. When the MN moves to a new network after it has been associated with the CN using mSCTP, the MN obtains a New Care-of Address (NCoA) from the network, and mSCTP sends an ASCONF-AddIP message to the CN to register the NCoA. The mSCTP in the MN changes the primary IP address after it considers that the MN has joined the new network and, if the previous CoA becomes inactive, deletes it from the CN by sending an ASCONF-DeleteIP message. By reconfiguring the addresses of the MN as it moves, the session associated between the MN and the CN is not disturbed, and data can be transmitted continuously while the MN is moving. From the mobility management point of view, which focuses on how the MN can connect to the CN for communication, mSCTP handles mobility in an efficient way. If the MN needs to communicate with the CN, it can transfer data by initiating an mSCTP session between them. Specific routers, such as the HA and FA of MIP, are unnecessary, and mSCTP provides mobility following an end-to-end principle. However, mSCTP does not have a location manager by itself; if the CN needs to set up a session to the MN, it must find information about the MN using a location manager. Once the session has been established, in both cases, mSCTP controls the handover and no other protocol is needed to support it.
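The DAR-based handover described above can be illustrated with a small sketch. The class and method names below are hypothetical stand-ins for an mSCTP implementation (not a real SCTP API); "sending" an ASCONF message to the CN is simulated by recording it.

```python
# Hypothetical sketch of the mSCTP DAR handover sequence: AddIP -> SetPrimary -> DeleteIP.
class MsctpEndpoint:
    def __init__(self, primary):
        self.addresses = [primary]
        self.primary = primary
        self.sent = []                      # ASCONF messages "sent" to the CN

    def _asconf(self, kind, addr):
        self.sent.append((kind, addr))      # stand-in for transmitting to the CN

    def add_ip(self, ncoa):                 # ASCONF-AddIP: register the NCoA
        self.addresses.append(ncoa)
        self._asconf("AddIP", ncoa)

    def set_primary(self, addr):            # ASCONF-SetPrimaryAddress
        assert addr in self.addresses
        self.primary = addr
        self._asconf("SetPrimary", addr)

    def delete_ip(self, pcoa):              # ASCONF-DeleteIP: drop the old CoA
        self.addresses.remove(pcoa)
        self._asconf("DeleteIP", pcoa)

mn = MsctpEndpoint("1.1.1.1")
mn.add_ip("2.2.2.2")        # NCoA obtained in the new subnet
mn.set_primary("2.2.2.2")   # MN considered attached to the new network
mn.delete_ip("1.1.1.1")     # previous CoA went inactive
print(mn.sent)
```

The session itself stays up throughout; only the address set and the primary path change, which is what makes this a soft handover.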
From the handover perspective, mSCTP sends the ASCONF-AddIP message after it obtains the NCoA from the NAR, but mSCTP has no explicit information about when to change the primary IP address, which is very important to handover performance. mSCTP resides in the transport layer, so it has no exact movement or location information about the MN. If mSCTP makes an improper decision when changing the primary IP address, it incurs heavy processing overhead during handover and degraded handover performance. Therefore, to improve the handover performance of mSCTP, careful consideration of when to change the primary IP address is required. In the current mSCTP specification, changing the primary IP address remains a main open issue [6]. To support real-time applications in mobile communication, a guarantee of low handover latency is required, and such a guarantee cannot be satisfied by a single mobility protocol. In the case of MIP, it is hard to satisfy real-time applications with MIP alone, so protocols such as FMIPv6, which meet much tighter handover requirements, have been proposed.
mSCTP can also be used with other mobility protocols for location management or handover management. It is desirable for mSCTP to interoperate with and use the features of other protocols if they can provide better handover performance or process a specific aspect of handover more efficiently.
3 Fast Handover in mSCTP Using FMIPv6
In this section, we discuss the handover procedures and issues in current mSCTP in more detail. We then describe the integrated handover procedure that uses FMIPv6 with mSCTP for fast handover.

3.1 Current Handover Issues in mSCTP
When the MN moves from one location to another, mSCTP adds the new address by sending the ASCONF-AddIP message after it acquires the new IP address from the NAR, but there are many possibilities for when it sends the ASCONF-DeleteIP message or, more importantly, when it changes the primary IP address using ASCONF-SetPrimaryAddress. Currently, the MN can change the primary IP address in the following cases [6].
1. Immediately after the MN receives the ASCONF-ACK for ASCONF-AddIP from the CN.
2. Using explicit handover information from layer 2 or the physical layer.
3. Using a handover trigger from an upper layer.

(Figure: the MN moves from Subnet A (1.1.1.1) to Subnet B (2.2.2.X); after receiving an RA and performing L2 association, it sends ASCONF-AddIP 2.2.2.2, ASCONF-SetPrimary 2.2.2.2 and ASCONF-DeleteIP 1.1.1.1 to the CN (3.3.3.3), each answered by an ASCONF-ACK; a service disruption occurs while the MN is disconnected from the PAR)
Fig. 1. The typical handover procedures in mSCTP
In the first case, mSCTP changes the primary IP address after it has registered the NCoA with the CN. The handover performance of this method depends on the state of the handover at the link layer. If mSCTP changes the primary IP address after the link layer has successfully joined the new network, then mSCTP can receive data through the NAR, changing the primary path is valid, and the handover in mSCTP succeeds. However, mSCTP does not have correct movement information about the MN, so it is possible that the MN obtains an NCoA when it has not actually moved to the new location. In that case, receiving the ASCONF-ACK from the CN does not confirm that the MN is in the new location, and changing the primary IP address can cause heavy handover overhead. Fig. 1 describes the first case. The MN obtains the NCoA from the NAR via a Router Advertisement (RA) message and registers the NCoA with the CN by sending ASCONF-AddIP. When the MN receives the ASCONF-ACK for AddIP from the CN, it sends ASCONF-SetPrimaryAddress to change the primary IP address. In this case, the handover latency consists of the latency for the MN to obtain the NCoA by receiving an RA from the NAR and the latency to update this address by sending ASCONF-AddIP and ASCONF-SetPrimaryAddress. The RA is distributed periodically by the NAR, so the latency for obtaining the NCoA can be as long as the RA distribution period. In the second case, it has been shown that the handover performance of mSCTP using link layer information, such as radio signal strength, is better than without it [9]. Here mSCTP can process handover with much more precise information about the link layer, which improves handover performance. However, to use information from the link layer or physical layer in the transport layer, the transport layer must have another independent component that receives information from the lower layers and stores it; it must then evaluate this information and determine when to add a new address and change the primary IP address.
However, receiving, storing and evaluating lower layer handover information at the transport layer is not simple, and it does not conform to the layered protocol architecture. In the third case, the upper layer makes the handover decision and sends a trigger to mSCTP. The upper layer has information about the handover beyond the movement of the MN. In particular, if there are multiple wireless networks, the upper layer makes the handover decision from one network to another, considering bandwidth, cost and other performance parameters. This kind of handover is called vertical handover, and it can be triggered without any movement of the MN. Thus, it is desirable to perform handover whenever mSCTP receives a handover request from the upper layer.

3.2 Using FMIPv6 for Fast Handover in mSCTP
FMIPv6 was proposed to improve the handover performance of MIPv6. When the MN performs handover using MIP, there are two latencies: the IP connectivity latency and the binding update latency. The IP connectivity latency consists of the movement detection latency and the NCoA configuration latency in the new subnet area. The binding update latency is the time for the MN to send a Binding Update to the HA and update the binding information on the HA.
With FMIPv6, the MN can obtain the NCoA without closing the connection to the current link when it discovers a new access point. This process eliminates the IP connectivity latency. The RtSolPr and PrRtAdv messages are used for this purpose. FMIPv6 creates a tunnel between the PAR and the NAR using the Fast Binding Update (FBU) message, and the MN sends a Fast Neighbor Advertisement (FNA) message to the NAR when it attaches to the NAR. With this FBU and FNA process, the MN can reduce the binding update latency significantly and minimize packet losses during handover. If we consider the handover procedures of mSCTP, the IP connectivity latency still exists in the same way, and the latency to update the NCoA using ASCONF-AddIP and change the primary IP address can be considered binding update latency. So FMIPv6 can also be used with mSCTP to improve handover performance. As shown in Fig. 2, when FMIPv6 is used for handover processing in mSCTP, FMIPv6 processes the handover information from the link layer or physical layer. FMIPv6 receives information from the link layer and uses it to send the RtSolPr message. If FMIPv6 acquires the NCoA through the RtSolPr and PrRtAdv message exchange, this NCoA can be used directly by mSCTP to send the ASCONF-AddIP message. As a result, the IP connectivity latency in mSCTP can be reduced, and the MN can start the handover process without a long IP connectivity latency. After FMIPv6 sends the FNA message to the NAR, which means that the MN has attached to the NAR, FMIPv6 immediately triggers mSCTP that it can
(Figure: the MN moves from Subnet A (1.1.1.1) to Subnet B (2.2.2.2); after receiving PrRtAdv, mSCTP sends AddIP 2.2.2.2 to the CN (3.3.3.3) over the Internet; an L3 trigger to mSCTP prompts SetPrimary 2.2.2.2; finally DeleteIP 1.1.1.1 is sent)
Fig. 2. Handover procedures in mSCTP with FMIPv6
(Figure: message sequence among the MN (1.1.1.1), PAR (1.1.1.X), NAR (2.2.2.X) and CN (3.3.3.3): RtSolPr/PrRtAdv, ASCONF-AddIP (2.2.2.2)/ASCONF-ACK, FBU, HI/HAck, FBack, data buffering and tunneling, L3 trigger, FNA, ASCONF-SetPrimaryAddress (2.2.2.2)/ASCONF-ACK, data transfer, ASCONF-DeleteIP (1.1.1.1)/ASCONF-ACK)
Fig. 3. Integrated handover procedures in mSCTP with FMIPv6
change the primary IP address. Receiving the trigger indicating that FMIPv6 has sent the FNA message to the NAR confirms that the MN has already attached to the subnet of the NAR. So, with this trigger, the MN always changes the primary IP address after it has attached to the subnet of the NAR. The integrated handover procedures using FMIPv6 with mSCTP, which maximize handover performance, are shown in Fig. 3. The moment the MN receives the PrRtAdv message from the PAR, it can acquire the network address of the NAR, and the MN can register the network address of the NAR in the address list of the CN with the ASCONF-AddIP message before moving to the location of the NAR. FMIPv6 in the MN generates a trigger immediately after it sends the FNA message to the NAR. Using this trigger, mSCTP sends the ASCONF-SetPrimaryAddress message to change the primary IP address. As shown in Fig. 3, using FMIPv6 with mSCTP, mSCTP achieves improved handover performance for the following reasons.
1. mSCTP can obtain the NCoA and register it with the CN without IP connectivity latency, thanks to the FMIPv6 RtSolPr/PrRtAdv address acquisition process.
2. mSCTP can change the primary IP address with confirmation that the MN has attached to the subnet of the NAR.
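The trigger-driven coupling described above can be sketched as an event handler that maps FMIPv6 events to mSCTP DAR actions. The event names and the handler interface below are illustrative assumptions of ours, not a real FMIPv6 or SCTP API.

```python
# Hypothetical sketch of the integrated procedure of Fig. 3:
# FMIPv6 events drive the mSCTP DAR actions.
def integrated_handover(events):
    """events: sequence of (name, address) pairs in arrival order."""
    actions = []
    for name, addr in events:
        if name == "PrRtAdv":            # NCoA known before the MN moves
            actions.append(("ASCONF-AddIP", addr))            # register NCoA early
        elif name == "FNA_sent":         # MN has attached to the NAR's subnet
            actions.append(("ASCONF-SetPrimaryAddress", addr))
        elif name == "PCoA_inactive":    # old care-of address no longer usable
            actions.append(("ASCONF-DeleteIP", addr))
    return actions

seq = [("PrRtAdv", "2.2.2.2"), ("FNA_sent", "2.2.2.2"),
       ("PCoA_inactive", "1.1.1.1")]
print(integrated_handover(seq))
```

The key design point is that the SetPrimaryAddress action is gated on the FNA event, so the primary path is never switched before attachment to the NAR is confirmed.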
Since FMIPv6 processes the handover sequence and sends a trigger to mSCTP, mSCTP does not need to process information from the link layer or physical layer to determine when to change the primary IP address. Through this, mSCTP has simple and correct information about the lower layer handover state and can make clear handover decisions corresponding to that state.
4 Experiment and Result
In this section, we present the experimental results of the suggested handover procedure. Fig. 4 shows the network setup of the experimental environment. Two client systems act as the MN and the CN. The CN is a fixed PC, and the MN uses a Lucent Orinoco USB Client Silver WLAN adapter. On both client systems we ported Linux kernel 2.6.12.2, which includes the lksctp module [10] for SCTP with the DAR extension, and integrated it with the FMIPv6 implementation ported in our previous project. The PAR and NAR are typical routers based on Linux kernel 2.4.20 that support IPv6 routing. We also use two common Linksys Wireless-G APs. We performed the experiment following the handover processes shown in Fig. 1 and Fig. 2.

(Figure: test bed topology; CN eth0 (3ffe:102::100) on a hub with PAR eth1 (3ffe:102::2) and NAR eth1 (3ffe:102::1); PAR eth0 (3ffe:100::1) and NAR eth0 (3ffe:101::1) each connect through a hub to a Linksys AP; MN PCoA is 3ffe:100::202:2dff:fe20:e436)
Fig. 4. Experiment test bed set up

Fig. 5(a) shows the handover latency in mSCTP without FMIPv6. Each handover result shows three different latencies. The first bar (CONN-ADDIP) represents the latency from the link-up event until the MN sends the ASCONF-AddIP message. The second bar (CONN-ACK) represents the latency from the link-up event until the MN receives the ASCONF-ACK message for ASCONF-SetPrimaryAddress. The third bar (DISC-ACK) represents the total handover latency from the link-down event until the MN receives the ASCONF-ACK message for ASCONF-SetPrimaryAddress. In the handover results of mSCTP without FMIPv6, we can see that the total handover latency is mainly caused by the CONN-ADDIP latency. During the CONN-ADDIP interval, the MN configures the NCoA using the RA that is periodically sent by the NAR. We use an RA sending period of up to 10 seconds, so the CONN-ADDIP latency varies widely from a few seconds to more than 10 seconds.

(Figure: latency (sec) over 20 handovers; panel (a) mSCTP, with bars CONN-ADDIP, CONN-ACK and DISC-ACK; panel (b) mSCTP with FMIPv6, with bars CONN-PRIMARY, CONN-ACK and DISC-ACK)
Fig. 5. Handover latencies

Fig. 5(b) shows the handover latency of mSCTP using FMIPv6. The first bar (CONN-PRIMARY) represents the latency from link-up until the MN sends ASCONF-SetPrimaryAddress. The second bar (CONN-ACK) represents the latency from link-up until the MN receives the ASCONF-ACK for the ASCONF-SetPrimaryAddress message. The third bar (DISC-ACK) represents the total handover latency from link-down until the MN receives the ASCONF-ACK for the ASCONF-SetPrimaryAddress message. We notice that the CONN-PRIMARY latency is very low, taking only a few ms. From this result, we can see that the MN can acquire the NCoA very quickly by using FMIPv6, and that it takes very little time to send the ASCONF-SetPrimaryAddress message. However, the CONN-ACK latency takes more than 1 second. We found that this latency was caused by the delay that exists in IPv6 before sending the first packet, due to DAD, NUD and NS/NA processing, not by the round-trip delay between ASCONF-SetPrimaryAddress and ASCONF-ACK. While implementing the MN and CN for the experiment, we did not optimize this processing for performance, so it exists in both handover procedures in our experiment. Note that the handover performance of the suggested procedure is greatly improved by pre-acquisition of the NCoA, and the variation of handover latency is relatively smaller than in mSCTP without FMIPv6.
5 Conclusion
We have introduced a fast handover method for mSCTP using FMIPv6. Using FMIPv6 as the handover procedure, mSCTP can quickly add the NCoA to the CN without closing the connection to the PAR, and it can change the primary IP address after the MN has completely joined the subnet of the NAR, using the trigger from FMIPv6. Therefore, handover latency can be reduced significantly, and mSCTP can process handover with an explicit trigger from FMIPv6. mSCTP does not have to maintain or evaluate information from the link layer or physical layer directly. Since mSCTP resides in the transport layer and FMIPv6 in the network layer, handover triggering between mSCTP and FMIPv6 is simple and efficient. We have presented the integrated handover procedures of FMIPv6 and mSCTP to optimize handover performance and performed experiments on our test bed. The experimental results show that the handover latency in mSCTP is mainly caused by the NCoA acquisition process, which depends on the RA, and that FMIPv6 can improve the handover performance significantly by pre-acquisition of the NCoA.
References
1. C. Perkins: IP Mobility Support for IPv4, RFC 3344, Aug. 2002
2. D. Johnson, C. Perkins, J. Arkko: Mobility Support in IPv6, RFC 3775, June 2004
3. H. Schulzrinne, E. Wedlund: Application-Layer Mobility Using SIP, Mobile Computing and Communications Review, Vol. 4, No. 3, July 2000
4. R. Koodli: Fast Handovers for Mobile IPv6, RFC 4068, Oct. 2004
5. W. Xing, H. Karl, A. Wolisz: M-SCTP: Design and Prototypical Implementation of an End-to-End Mobility Concept, Proc. of 5th Intl. Workshop The Internet Challenge: Technology and Applications, Berlin, Germany, Oct. 2002
6. S. J. Koh, Q. Xie: Mobile SCTP (mSCTP) for Internet Mobility, IETF Internet Draft, draft-sjkoh-msctp-00.txt, Mar. 2005, work in progress
7. R. Stewart, Q. Xie: Stream Control Transmission Protocol, RFC 2960, Oct. 2000
8. R. Stewart, M. Ramalho, Q. Xie, M. Tuexen, P. Conrad: Stream Control Transmission Protocol (SCTP) Dynamic Address Reconfiguration, IETF Internet Draft, draft-ietf-tsvwg-addip-sctp-11.txt, Feb. 2005, work in progress
9. M. Chang, M. Lee: A Transport Layer Mobility Support Mechanism, ICONS 2004, LNCS 3090
10. The Linux Kernel SCTP project, http://lksctp.sourceforge.net/
An Analytical Comparison of Factors Affecting the Performance of Ad Hoc Network*
Xin Li, Nuan Wen, Bo Hu, Yuehui Jin, and Shanzhi Chen
Broadband Network Research Center, State Key Laboratory of Networking and Switching, Beijing University of Posts and Telecommunications
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract. In this paper an analytical model is proposed to investigate and quantify the effects and interactions of node mobility, network size and traffic load on the performance of ad hoc networks using AODV, in terms of cost, average end-to-end delay and throughput. The analytical results reveal that, contrary to the traditional view, the performance of ad hoc networks is much more sensitive to traffic load and network size than to node mobility. The capacity of ad hoc networks relies on the collective impact of all three factors, not on any one alone. Furthermore, NS-2 based simulations are carried out to verify the theoretical model.
1 Introduction
The development of dynamic routing protocols is one of the central challenges in the design of ad hoc networks. An accurate understanding of the nature of the network is the cornerstone of successful routing protocol design, but little work has been carried out to systematically analyze the impacts or interactions of factors such as node mobility, network size and traffic load. It has been a rooted belief that a volatile topology imposes the most significant impact, and as a result tremendous efforts have been focused on mechanisms that can catch up with a high degree of node mobility. Studies [1][2][3] comparing different routing protocols have given some insights into the nature of ad hoc networks. It is commonly accepted that there is no generally perfect ad hoc routing protocol; to achieve the best performance, existing routing protocols must be adapted to the given scenarios. In this paper, an analytical model is proposed to facilitate qualifying and quantifying the impacts and correlation of node mobility, network size and traffic load. The original version of AODV [4] is selected to represent the inherent nature of reactive routing protocols, but the framework of this model is not strictly AODV-oriented; it could also be used to analyze other protocols.
* This project is supported by the national 863 high-tech program of China (Project No. 2005AA121630), the National Natural Science Foundation of China (Grant Nos. 90204003 and 60472067), the National Basic Research Program of China (Grant No. 2003CB314806), and Beijing Municipal Key Disciplines XK100130438.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 856 – 865, 2005. © Springer-Verlag Berlin Heidelberg 2005
NS-2 [5] based simulations are also carried out to verify the theoretical analysis. Both analytical and simulation results show that, contrary to the rooted belief that ad hoc networks suffer the most from node mobility, the collective impact of network size and traffic load is much more decisive for network capacity. In the development of routing protocols, more attention should be paid to improving network scalability and alleviating congestion. The rest of this paper is organized as follows. In Section 2, preliminary notions, assumptions and performance metrics are briefly presented. Section 3 gives a detailed introduction to the new model. Analytical and simulation results are presented in Sections 4 and 5, respectively. Finally we conclude this paper in Section 6.
2 Preliminaries
2.1 Basic Notions
Suppose nodes X and Y have been within the transmission range of each other during the time interval (t0, t], and at time t X needs to transmit a packet to Y; then the link (X, Y) can be in one of the following states:
Link Broken: Y has moved out of the transmission range of X.
Link Un-operational: Y is still within the transmission range of X, but the packet transmission from X to Y fails.
Def. 1 Link Failure: link (X, Y) fails if it is broken or un-operational.
Def. 2 Route Failure: a route {X1, X2, ..., Xk} fails if a packet encounters a link failure before it reaches Xk.
2.2 Assumptions
1) The ad hoc network is deployed within a square field. Network size is measured by the border length b in terms of hops.
2) The route length in hops between any two given nodes is uniformly distributed between 1 and the network diameter.
3) Traffic load is generated by a number of CBR sources with transmission rate λ and fixed packet size. UDP is selected as the transport layer protocol.
4) All nodes move in accordance with the random waypoint model [6]. IEEE 802.11 [7] is used as the MAC layer protocol. Due to the memoryless nature of the random waypoint model, the hold time of links between adjacent nodes is exponentially distributed with mean μ.
5) To guarantee full connectivity in the network, the minimum number of neighbors of any node should be no less than π [8], which requires the number of nodes (N) to be no less than (π+1)b²/π. Considering the density waves [6] of node distribution, N is defined to be:

N = 2 ceil((π + 1) b² / π)    (1)
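Eq. (1) can be evaluated directly; a minimal sketch (the border length values are arbitrary illustrations):

```python
# Sketch of Eq. (1): node count guaranteeing full connectivity, doubled to
# account for the density waves of the random waypoint node distribution.
import math

def num_nodes(b):
    """b: border length of the square field, in hops."""
    return 2 * math.ceil((math.pi + 1) * b * b / math.pi)

print(num_nodes(5))   # e.g. b = 5 hops
```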
6) Link un-operation is mainly caused by noise, obstacles, network congestion and multiple access interference. In our model, only high contention rates, congestion and the hidden terminal problem, which are proportional to the traffic load, are taken into account. So the probability of link un-operation (pu) is taken as the measure of traffic load.
7) The original version of AODV without expanding ring search is adopted in this model.
2.3 Performance Metrics
Cost: the average amount of data bytes and control bytes generated to successfully deliver a data byte. Each hop-wise transmission of routing and data bytes is taken into account.
Throughput: the ratio of the data packets delivered to the destination to those generated by the CBR sources.
Average end-to-end delay: this includes all possible delays caused by buffering during route discovery, retransmission delays caused by route failure, and propagation time. For simplicity and clarity, the average transfer time over one hop, DL, is selected as the measure of delay.
3 Analytical Model
3.1 General Results
Lemma 1. The maximum route length within the ad hoc network is:

Lmax = ceil(√2 · b)    (2)

The average route length within the ad hoc network is:

EL = ceil((1 + ceil(√2 · b)) / 2)    (3)
Function ceil(A) rounds the elements of A to the nearest integer greater than or equal to A. Proof: according to assumptions 2) the result is straightforward. Theorem 1. The probability that a packet is successfully transmitted over a link connecting two common nodes is: pL = (1 − pu )e
−
1
λμ
(4)
Proof: Due to the memoryless nature of the exponential distribution, the hold time of a link between common nodes after the end of the last packet transmission, say t, is independent of the link hold time before t and is exponentially distributed with mean μ. Thus the probability that a link break occurs before a transmission is given by:

p_B = P(t < 1/λ) = 1 − e^(−1/(λμ))
An Analytical Comparison of Factors Affecting the Performance of Ad Hoc Network
To successfully transmit a packet over a link connecting common nodes, two preconditions must be satisfied: the link is non-broken and operational. So the probability that a link has not failed is:

p_L = (1 − p_u)·(1 − p_B) = (1 − p_u)·e^(−1/(λμ))
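Eq. (4) is straightforward to evaluate numerically; a small sketch (names ours), assuming λ in packets/s and μ in seconds:

```python
import math

def p_link(p_u: float, lam: float, mu: float) -> float:
    # Eq. (4): probability that a packet crosses one link successfully.
    # p_u : probability the link is un-operational (traffic-load measure)
    # lam : packet transmission rate (packets/s); mu : mean link hold time (s)
    p_b = 1.0 - math.exp(-1.0 / (lam * mu))  # link breaks before the next packet
    return (1.0 - p_u) * (1.0 - p_b)

# e.g. p_u = 0.1, 10 packets/s, mean link hold time 100 s
print(p_link(0.1, 10, 100))
```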
Lemma 2. The average number of links successfully passed by a packet along a route before an error occurs is:

q = (1/L_max)·Σ_{H=1}^{L_max} Σ_{k=1}^{H} k·p_L^(k−1)·(1 − p_L) − 1    (5)

Proof: For a route of H hops, let the number of links (including the failed link) passed by a packet before it encounters a link failure be Q_H; thus

P(Q_H = k) = p_L^(k−1)·(1 − p_L)

The expected value of Q_H is:

E[Q_H] = Σ_{k=1}^{H} k·P(Q_H = k) = Σ_{k=1}^{H} k·p_L^(k−1)·(1 − p_L)

According to assumption 2), the probability that the route length between any given pair of nodes equals H is 1/L_max. Hence for any packet in the network, the average number of links it passes before encountering an error (including the failed link), denoted Q, is:

E[Q] = E{E[Q_H]} = Σ_{H=1}^{L_max} P(H)·E[Q_H] = (1/L_max)·Σ_{H=1}^{L_max} Σ_{k=1}^{H} k·p_L^(k−1)·(1 − p_L)

And the average number of links successfully passed by a packet along a route before an error occurs is:

q = E[Q] − 1 = (1/L_max)·Σ_{H=1}^{L_max} Σ_{k=1}^{H} k·p_L^(k−1)·(1 − p_L) − 1
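Eq. (5) reduces to a finite double sum over p_L and the border length b; a sketch under the paper's assumptions (the function name is ours):

```python
import math

def avg_links_before_failure(p_L: float, b: int) -> float:
    # Eq. (5): average number of links successfully passed before a failure.
    L_max = math.ceil(math.sqrt(2) * b)  # Eq. (2)
    total = sum(k * p_L ** (k - 1) * (1.0 - p_L)
                for H in range(1, L_max + 1)
                for k in range(1, H + 1))
    return total / L_max - 1.0
```

Note that for p_L = 0 every packet fails on its very first link, and the formula indeed gives q = 0.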
Lemma 3. The average number of route failures a given packet encounters before it is successfully delivered to the destination is:

z = (1/L_max)·Σ_{H=1}^{L_max} 1/p_L^H − 1    (6)

Proof: Let Z_H be a random variable describing the number of routing attempts needed to successfully deliver a packet to its final destination through a route of H hops. Z_H has a geometric distribution given by:

P(Z_H = k) = (1 − p_SH)^(k−1)·p_SH,    p_SH = p_L^H

where p_SH is the probability that a packet is successfully routed to its final destination through H hops. Thus the expected value of Z_H is 1/p_SH, and the expected value of Z is:

E[Z] = E{E[Z_H]} = Σ_{H=1}^{L_max} P(H)/p_SH = (1/L_max)·Σ_{H=1}^{L_max} 1/p_L^H

So the average number of route failures before a packet is successfully delivered is:

z = E[Z] − 1 = (1/L_max)·Σ_{H=1}^{L_max} 1/p_L^H − 1
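Likewise, Eq. (6) is a finite sum; a minimal sketch (names ours):

```python
import math

def avg_route_failures(p_L: float, b: int) -> float:
    # Eq. (6): average number of route failures before successful delivery.
    L_max = math.ceil(math.sqrt(2) * b)  # Eq. (2)
    return sum(1.0 / p_L ** H for H in range(1, L_max + 1)) / L_max - 1.0

# With perfectly reliable links (p_L = 1) no route ever fails:
print(avg_route_failures(1.0, 6))  # 0.0
```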
Theorem 2. Let the number of data bytes in a packet be η; the average traffic generated to successfully route a packet to its final destination using AODV is:

C = (z·(104q + 48E_L + 52N) + 64E_L) / η    (7)
Proof: Let C_LS, C_ERR, C_REQ and C_REP be the one-hop cost at the network layer of delivering a data packet, route error message, route request message and route reply message respectively. The traffic generated to successfully route a data packet is composed of the following costs. The cost of successfully transmitting a data packet through the route: E_L·C_LS. The cost of failed data transmissions: z·q·C_LS. The cost of informing the source about route failures: z·q·C_ERR. The cost of route discovery, including the cost incurred by the source to broadcast the RREQ and by the destination to send back the RREP: z·(N·C_REQ + E_L·C_REP). We also assume that each source sends out packets continuously, so the cost incurred by the first round of route discovery can be neglected. The overall cost of successfully routing a packet is therefore:

C′ = z·(q·C_ERR + q·C_LS + E_L·C_REP + N·C_REQ) + E_L·C_LS
When using AODV, C_ERR, C_REQ and C_REP are 40 bytes, 50 bytes and 48 bytes respectively. To obtain the byte efficiency, C′ is divided by the number of payload bytes η. Thus the result is straightforward.

Theorem 3. When using AODV, the throughput of the network is:

T = L_max / Σ_{H=1}^{L_max} (1/p_L^H)    (8)
Proof: By definition, throughput is the reciprocal of the average number of routing attempts needed to successfully deliver a packet to its final destination, i.e. 1/E[Z]. Based on Lemma 3, the result is straightforward.

Theorem 4. When using AODV, the average end-to-end delay is:

D = 2qz·D_L + E_L·D_L + 2zE_L·D_L    (9)
Proof: When using AODV, a packet may encounter several route failures before it is successfully delivered. Using Lemma 2, the average time consumed both to transmit a packet from the source to the failed link and to inform the source of the failure is q·D_L. In the route discovery process, although the RREQ is flooded throughout the network, the average time for the RREQ to reach the destination is E_L·D_L, and the same holds for the RREP being routed back; thus the average duration of one route discovery cycle is 2E_L·D_L. Using Lemma 3, the average delay incurred by handling route failures before a packet is delivered is z·(2q·D_L + 2E_L·D_L). Finally, the average time of successfully routing a packet, E_L·D_L, is added. So Theorem 4 is proved.
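Putting Theorems 2-4 together, all three metrics can be evaluated from the model parameters alone; a sketch under the paper's assumptions (function and variable names ours; delay is expressed in units of D_L, and the default η = 64 matches the 64-byte packets assumed in the next section):

```python
import math

def aodv_metrics(p_u: float, lam: float, mu: float, b: int, eta: float = 64.0):
    # Eqs. (1)-(9): cost C, throughput T and delay D (in units of D_L).
    L_max = math.ceil(math.sqrt(2) * b)                     # Eq. (2)
    E_L = math.ceil((1 + L_max) / 2)                        # Eq. (3)
    N = 2 * math.ceil((math.pi + 1) * b**2 / math.pi)       # Eq. (1)
    p_L = (1.0 - p_u) * math.exp(-1.0 / (lam * mu))         # Eq. (4)
    q = sum(k * p_L ** (k - 1) * (1.0 - p_L)
            for H in range(1, L_max + 1)
            for k in range(1, H + 1)) / L_max - 1.0         # Eq. (5)
    z = sum(1.0 / p_L ** H for H in range(1, L_max + 1)) / L_max - 1.0  # Eq. (6)
    C = (z * (104 * q + 48 * E_L + 52 * N) + 64 * E_L) / eta            # Eq. (7)
    T = L_max / sum(1.0 / p_L ** H for H in range(1, L_max + 1))        # Eq. (8)
    D = 2 * q * z + E_L + 2 * z * E_L                                   # Eq. (9), D_L = 1
    return C, T, D
```

As a sanity check, with an ideal channel (p_u = 0, μ → ∞) there are no route failures, so z = 0, C = E_L, T = 1 and D = E_L·D_L.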
4 Analytical Results

In this section the impacts of node mobility, network size and traffic load on network performance are evaluated and compared using the model above. To facilitate quantifying the effects of the different factors, the CBR sources are assumed to send 64-byte packets at a transmission rate of 10 packets per second.

4.1 Analysis on Cost

Fig. 1a gives cost as a function of average link hold time and network size. It can be observed that cost remains rather stable when μ varies from 10 seconds to infinity; a steep rise occurs only when μ is less than 5 seconds. That corresponds to an extremely high and unrealistic node speed, so the impact of node mobility is not as prominent as has long been estimated. Both Fig. 1a and Fig. 1b demonstrate that network size and traffic load can impose a much greater impact than node speed: small variations in either factor can cause dramatic fluctuations in network performance. According to Theorem 2, the overhead incurred by flooding the network with RREQs constitutes the great majority of the cost. In light of assumption 5) and Theorem 2, the route discovery overhead is proportional to z and b², while Lemma 3 indicates that the number of retransmissions per packet is mainly decided by b and p_u. So network size and traffic load can be much more influential than node mobility.

Fig. 1c gives more insight into the interaction of b and p_u. It shows that neither b nor p_u alone dominates the network performance. In small networks cost can be kept at a low level even under heavy traffic load, while extremely low traffic load can remarkably improve network scalability. But when both factors exceed certain thresholds, network capacity becomes highly vulnerable to any slight increment in either factor.

4.2 Analysis on Average End-to-End Delay and Throughput

Fig. 2 depicts the relative impacts of node mobility, traffic load and network size on the average end-to-end delay.
It shows that the average end-to-end delay is not sensitive to node mobility either. Another interesting observation is that, compared with cost, both the absolute value of the delay and its pace of variation remain moderate. The reason is that when handling a route failure, the cost incurred by flooding the RREQ across the network has a magnitude of 2b² bytes, whereas for delay, the time incurred by
Fig. 1. Analysis on cost: (a) cost as a function of μ and b with p_u set to 0.1; (b) cost as a function of μ and p_u with b set to 6 hops; (c) cost as a function of b and p_u with μ set to 100 s.

Fig. 2. Analysis on the average end-to-end delay: (a) delay as a function of μ and b with p_u set to 0.1; (b) delay as a function of μ and p_u with b set to 6 hops; (c) delay as a function of b and p_u with μ set to 100 s.

Fig. 3. Analysis on throughput and retransmission times: (a) throughput as a function of μ and b with p_u set to 0.1; (b) throughput as a function of μ and p_u with b set to 6 hops; (c) retransmission times as a function of b and p_u with μ set to 100 s.
one route discovery is only around 2E_L, i.e. about √2·b. This suggests that by examining the average number of retransmissions per packet, i.e. Z, more insight into the correlation between the different factors can be gained.

Just as observed in Fig. 1 and Fig. 2, Fig. 3a and Fig. 3b show that b and p_u impose a much greater impact on throughput than node mobility does. So we can conclude that node mobility is not a decisive factor in network performance. Fig. 3c illustrates that the average number of retransmissions has almost the same characteristics as cost and delay, but the curves in this figure are steeper than their
counterparts in Fig. 1c and Fig. 2c. The reason is that z is more directly influenced by network size and traffic load: even within the range where cost and delay remain rather stable, obvious variations in the number of retransmissions can still be observed.
5 Simulation Results

5.1 The Simulation Model

NS-2 based simulations were also carried out to verify the theoretical model. The simulation configurations are the same as those used in the theoretical analysis. Source-destination pairs are spread randomly over the network. For a network with N nodes, ceil(N/5), ceil(1.25N/5) and ceil(1.5N/5) sources are used to emulate three different traffic loads in the second set of simulations. Two more configurations, ceil(1.75N/5) and ceil(2N/5), are also adopted in the third set of experiments. In our simulations node mobility is measured by the average speed, with the pause time set to 0 seconds. Five node speed configurations, 2 m/s, 6 m/s, 10 m/s, 14 m/s and 30 m/s, are selected to represent a series of descending link hold times; how to calculate μ as a function of node speed is beyond the scope of this paper. Three network size configurations with the network border length set to 4, 5 and 6 hops respectively are adopted, with the number of nodes in each field decided according to assumption 5). Simulations are run for 400 simulated seconds for each scenario. Each data point represents an average of at least five runs with identical traffic load, network size and average node speed, but different randomly generated mobility scenarios. Note that in all the simulations the average end-to-end delay is measured in seconds; this does not affect the comparison between analytical and simulation results.

5.2 Simulation Results
The first set of experiments examines the effects of average node speed and network size. To keep the traffic load at the same level, the number of sources is set to ceil(N/5) in each scenario. It can be observed in Fig. 4 that even when the node speed is raised to 14 times its initial value, the network performance suffers only a slight degradation of no more than 25%, whereas a one-hop increment in the network border length can lead to an increase of more than 80%.

Fig. 4. The relative impact of node speed and network size: (a) cost, (b) delay and (c) throughput as functions of average node speed and network size (b = 4, 5, 6) with the traffic load set to ceil(N/5).
Fig. 5. The relative impact of node speed and traffic load: (a) cost, (b) delay and (c) throughput as functions of average node speed (2-30 m/s) and traffic load (20, 24 and 29 sources) with b set to 6 hops.
Fig. 6. The relative impact of network size and traffic load: (a) cost, (b) delay and (c) retransmission times as functions of network size (b = 4, N = 44; b = 5, N = 66; b = 6, N = 96) and traffic load (ceil(N/5) to ceil(2N/5) sources) with the average node speed set to 4 m/s.
In the second set of experiments the correlation between traffic load and node mobility is investigated, with the network border length uniformly set to 1500 meters in each scenario. Although we cannot determine exactly which value of p_u the 20-, 24- and 29-source configurations correspond to, Fig. 5a, Fig. 5b and Fig. 5c still reveal that network performance suffers more from increased traffic load than from boosted node speed. Fig. 5 also shows that when the node speed exceeds 10 m/s, the network performance is dramatically degraded by increased node mobility. The same trend can be observed in Fig. 4, although it is less obvious when b is set to 4 or 5. In Section 4 our analytical results illustrated that the impact of node mobility increases sharply as the average link hold time decreases below a certain threshold, and that the pace is proportional to the network size, which coincides with the simulation results.

Fig. 4 and Fig. 5 also illustrate that the influence of traffic load is not as dramatic as that of network size, and Fig. 6 further confirms this. This also corresponds to the observations from Fig. 1c and Fig. 2c. Based on our analytical model the reason is rather straightforward: a larger network means a longer average route length, a longer end-to-end delay and more routing overhead, especially route discovery overhead, which directly contributes to the performance degradation. At the same time, as shown in Fig. 3c and Fig. 6c, a longer route length leads to more route failures and retransmissions, further deteriorating the situation.
6 Conclusions

In this paper an analytical model is proposed to investigate and quantify the impacts and interactions of network size, node mobility and traffic load on the performance of ad hoc networks. The original version of AODV is selected to represent the inherent nature of reactive routing protocols. NS-2 based simulations are also carried out to verify the validity of the theoretical model. Both the analytical and simulation results reveal that the performance of ad hoc networks depends on the collective impact of the different factors rather than on any one alone, and that network size and traffic load impose a much greater impact than node mobility. Network performance can start to saturate even when the average link hold time is still very low. These conclusions also suggest that routing protocol design in ad hoc networks is highly scenario-oriented: it is an art of balancing and compromising, and before setting out to solve problems it is better to find out what the problem actually is.
References

1. J. Hsu, S. Bhatia, M. Takai, R. Bagrodia and M.J. Acriche, "Performance of mobile ad hoc networking routing protocols in realistic scenarios," MILCOM 2003, IEEE, Volume 2, pp. 1268-1273, 13-16 Oct. 2003.
2. J. Broch, D.A. Maltz, D.B. Johnson, Y.-C. Hu, and J. Jetcheva, "A performance comparison of multi-hop wireless ad hoc network routing protocols," MOBICOM'98, Dallas, Texas, 1998.
3. S.R. Das, R. Castañeda, and J. Yan, "Simulation-based performance evaluation of routing protocols for mobile ad hoc networks," Mobile Networks and Applications, vol. 5, issue 3, September 2000.
4. C.E. Perkins and E.M. Royer, "Ad Hoc On-demand Distance Vector Routing," Proc. 2nd IEEE Wksp. Mobile Comp. Sys. and Apps., pp. 90-100, Feb. 1999.
5. K. Fall and K. Varadhan, Eds., ns notes and documentation, 1999; available from http://www-mash.cs.berkeley.edu/ns/.
6. T. Camp, J. Boleng, and V. Davies, "A survey of mobility models for ad hoc network research," Wireless Comm. and Mobile Computing (WCMC), 2(5): 483-502, 2002.
7. IEEE, "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," IEEE Std. 802.11-1997, 1997.
8. G. Ferrari and O.K. Tonguz, "Minimum number of neighbors for fully connected uniform ad hoc wireless networks," IEEE International Conference on Communications (ICC'04), Paris, France, 2004.
Maximum Throughput and Minimum Delay in IEEE 802.15.4

Benoît Latré¹, Pieter De Mil¹, Ingrid Moerman¹, Niek Van Dierdonck², Bart Dhoedt¹, and Piet Demeester¹

¹ Department of Information Technology (INTEC), Ghent University - IBBT - IMEC, Gaston Crommenlaan 8, bus 201, B-9050 Gent, Belgium. Tel. +32 9 331 49 00, Fax +32 9 331 48 99, [email protected]
² Ubiwave NV, Warandestraat 3, B-9240 Zele, Belgium. Tel. +32 52 45 39 80, Fax +32 52 45 39 89
Abstract. This paper investigates the maximum throughput and minimum delay of the new IEEE 802.15.4-standard. This standard was designed as a highly reliable and low-power protocol working at a low data rate and offers a beaconed and unbeaconed version. We will give the exact formulae for a transmission between one sender and one receiver for the unbeaconed version as this one has the least overhead. Further, the influence of the different address schemes, i.e. no addresses or the use of long and short addresses, is investigated. It is shown that the maximum throughput is not higher than 163 kbps when no addresses are used and that the maximum throughput drops when the other address schemes are used. Finally, we will measure the throughput experimentally in order to validate our theoretical analysis.
1 Introduction
The market for wireless devices has experienced a significant boost in the last few years, and new applications are emerging rapidly. Several new protocols have been proposed, such as IEEE 802.11g and IEEE 802.16. However, these protocols focus on achieving higher data rates in order to support high bit rate applications for as many users as possible. On the other hand, there is a growing need for low data rate solutions which provide high reliability for activities such as controlling and monitoring. Furthermore, these applications often use simple devices which are not capable of handling complex protocols. In order to cope with this problem, a new standard was defined at the end of 2003: IEEE 802.15.4 [1]. The goal of the IEEE 802.15.4 standard is to provide a low-cost, highly reliable and low-power protocol for wireless connectivity among inexpensive, fixed and portable devices such as sensor networks and home networks [2, 3]. This last type of network is commonly referred to as a Wireless Personal Area Network (WPAN). The standard works in the 2.4 GHz range -the same range as 802.11b/g
Corresponding author.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 866-876, 2005. © Springer-Verlag Berlin Heidelberg 2005
and Bluetooth- and defines a physical layer and a MAC sublayer. The standard is used by the ZigBee Alliance [4] to build reliable, cost-effective and low-power networks. In this paper, we investigate the maximum throughput and minimum delay of 802.15.4, both analytically and experimentally. All the information needed to obtain these results can be found in the standard [1]; this paper offers the exact formulae for the calculations in order to give an overview and an easy way to calculate the maximum throughput without the need to completely understand the standard. The paper is organized as follows. Section 2 gives a brief technical overview of 802.15.4. In Section 3, the maximum throughput is calculated. The analysis of the results is given in Section 4 and experimental validation is done in Section 5. Finally, Section 6 concludes the paper.
2 Technical Overview
The new IEEE 802.15.4 defines 16 channels in the 2.4 GHz band, 10 channels at 915 MHz and 1 channel at 868 MHz. The 2.4 GHz band is available worldwide and operates at a raw data rate of 250 kbps. The 868 MHz channel is specified for operation in Europe with a raw data rate of 20 kbps, and in North America the 915 MHz band is used at a raw data rate of 40 kbps. All of these channels use DSSS. The standard further specifies that each device shall be capable of transmitting at least 1 mW (0 dBm), but the actual transmit power may be lower or higher. Typical devices are expected to cover a 10-20 m range. The MAC sublayer supports different topologies: a star topology with a central network coordinator, a peer-to-peer topology (i.e. a tree topology) and a combined topology with interconnected stars (clustered stars). All topologies use CSMA/CA to control access to the shared medium. All devices have 64-bit IEEE addresses, but short 16-bit addresses can be assigned. In order to achieve low latencies, IEEE 802.15.4 can operate in an optional superframe mode. In this mode, beacons are sent by a dedicated device, called a PAN coordinator (PAN = Personal Area Network), at predetermined intervals, which can vary from 15 ms to 245 seconds. The time between these beacons is split into 16 slots of equal size and divided into two groups, the contention access period (CAP) and the contention free period (CFP), in order to provide the data with quality of service (QoS). The time slots in the CFP are called guaranteed time slots (GTS) and are assigned by the PAN coordinator. Channel access in the CAP is contention based (CSMA/CA). When a device wishes to transmit data, it waits for a random number of back off periods and subsequently checks whether the medium is idle. If so, the data is transmitted; if not, the device backs off once again, and so on.
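The back-off-and-probe procedure described above can be sketched as follows. This is a simplified illustration, not the normative algorithm; the defaults macMinBE = 3, macMaxBE = 5 and macMaxCSMABackoffs = 4 are the values specified in the standard, and the actual waiting time is not modeled:

```python
import random

def unslotted_csma_ca(channel_is_idle, mac_min_be=3, mac_max_be=5,
                      mac_max_csma_backoffs=4):
    # Wait a random number of back off periods, then probe the channel.
    # On a busy channel, back off again with a doubled (capped) window.
    # channel_is_idle: callable returning True when the medium is idle.
    be = mac_min_be
    for _ in range(mac_max_csma_backoffs + 1):
        backoff_periods = random.randint(0, 2**be - 1)  # waiting itself not modeled
        if channel_is_idle():
            return True   # channel acquired: transmit the frame
        be = min(be + 1, mac_max_be)
    return False          # channel access failure

print(unslotted_csma_ca(lambda: True))   # True
```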
As the MAC sublayer needs a finite amount of time to process data received from the PHY, transmitted frames are followed by an Inter Frame Space (IFS) period. The length of the IFS depends on the size of the frame that has just been transmitted. Long frames will be followed by a Long IFS (LIFS) and
868
B. Latr´e et al.
Fig. 1. Frame sequence in 802.15.4, showing the back off period and that long frames are followed by a long inter frame space (LIFS) and short frames by a short inter frame space (SIFS).

Fig. 2. Frame structure of IEEE 802.15.4. The PHY protocol data unit (PPDU) consists of a synchronisation header (5 bytes), a PHY header (1 byte) and the PHY Service Data Unit (PSDU, up to 127 bytes). The PSDU carries the MAC Protocol Data Unit (MPDU): frame control (2 bytes), sequence number (1 byte), address info (0-20 bytes), payload (variable) and frame check sequence (2 bytes).
short frames by a Short IFS (SIFS). An example of a frame sequence using acknowledgments (ACKs) is given in figure 1. If no ACKs are used, the IFS follows the frame immediately. The packet structure of IEEE 802.15.4 is shown in figure 2. The size of the address info field can vary between 0 and 20 bytes, as both short and long addresses can be used and a return acknowledgment frame does not contain any address information at all. Additionally, the address info field can contain the 16-bit PAN identifiers of both the sender and the receiver; these identifiers can only be omitted when no addresses are sent. The payload of the MAC Protocol Data Unit is variable, with the limitation that a complete MAC frame (MPDU or PSDU) may not exceed 127 bytes.
3 Theoretical Calculations

3.1 Assumptions
The maximum throughput of IEEE 802.15.4 is defined as the number of data bits coming from the upper layer (i.e. the network layer) that can be transmitted per unit of time.
Maximum Throughput and Minimum Delay in IEEE 802.15.4
869
Hence, we are only interested in the throughput at the MAC layer according to the OSI protocol stack. In this paper, only the unslotted version of the protocol (i.e. without superframes) in the 2.4 GHz band is examined: the 2.4 GHz band provides the most channels at the highest data rate, and the unslotted version has the least overhead. Hence, CSMA with a back off scheme is used. The maximum throughput is calculated between only one sender and only one receiver, located close to each other. Therefore, we assume that there are no losses due to collisions, no packets are lost due to buffer overflow at either sender or receiver, the sending node always has sufficient packets to send, and the BER is zero (i.e. we assume a perfect channel).

3.2 Calculations
The maximum throughput (TP) is calculated as follows. First the delay of a packet is determined. This overall delay accounts on the one hand for the delay of the data being sent and on the other hand for the delay caused by all the elements of the frame sequence depicted in figure 1 (back off scheme, sending of an acknowledgement, ...). In other words, the overall delay is the time needed to transmit one packet. This overall delay is then used to determine the throughput:

TP = 8·x / delay(x)    (1)

In this formula, x represents the number of bytes received from the upper layer, i.e. the payload bytes from figure 1. The delay each packet experiences can be formulated as:

delay(x) = T_BO + T_frame(x) + T_TA + T_ACK + T_IFS(x)    (2)

The following notations are used:

T_BO = back off period
T_frame(x) = transmission time for a payload of x bytes
T_TA = turn around time (192 μs)
T_ACK = transmission time for an ACK
T_IFS = IFS time

For the IFS, SIFS is used when the MPDU is smaller than or equal to 18 bytes; otherwise, LIFS is used (SIFS = 192 μs, LIFS = 640 μs). The different times are expressed as follows.

Back off period:

T_BO = BO_slots · T_BO_slot    (3)

BO_slots = number of back off slots
T_BO_slot = time for one back off slot (320 μs)
870
B. Latr´e et al.
The number of back off slots is a random number uniform in the interval (0, 2^BE − 1), with BE the back off exponent, which has a minimum of 3. As we only assume one sender and a BER of zero, BE will not change. Hence, the number of back off slots can be represented by the mean of the interval: (2³ − 1)/2, or 3.5.

Transmission time of a frame with a payload of x bytes:

T_frame(x) = 8·(L_PHY + L_MAC_HDR + L_address + x + L_MAC_FTR) / R_data    (4)

L_PHY = length of the PHY and synchronization header in bytes (6)
L_MAC_HDR = length of the MAC header in bytes (3)
L_address = length of the MAC address info field
L_MAC_FTR = length of the MAC footer in bytes (2)
R_data = raw data rate (250 kbps)
L_address incorporates the total length of the MAC address info field, including the PAN identifiers of both the sender and the destination if addresses are used. The length of one PAN identifier is 2 bytes.

Transmission time for an acknowledgement:

T_ACK = 8·(L_PHY + L_MAC_HDR + L_MAC_FTR) / R_data    (5)
If no acknowledgements are used, T_TA and T_ACK are omitted in (2). Summarizing, we can express the throughput and delay using the following formulae:

TP = 8·x / (a·x + b)    (6)
delay(x) = a·x + b    (7)
In these equations, a and b depend on the IFS used (SIFS or LIFS, which follows from the frame length) and on the length of the addresses used (64 bit, 16 bit or no addresses). The parameter a expresses the delay needed to send one data byte; the parameter b is the time needed for the protocol overhead of sending one packet. The different values for a and b can be found in Table 1.

Table 1. Overview of the parameters for equations (6) and (7)

nr of address bits        a (s/byte)   b (s)
0 bits,  ACK              0.000032     0.002656
0 bits,  no ACK           0.000032     0.002112
16 bits, ACK              0.000032     0.002912
16 bits, no ACK           0.000032     0.002368
64 bits, ACK              0.000032     0.003296
64 bits, no ACK           0.000032     0.002752
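Eqs. (1)-(5) and the parameters of Table 1 can be cross-checked numerically; a sketch (function names ours), assuming the mean back off of 3.5 slots derived above and the LIFS case:

```python
R_DATA = 250_000                 # raw data rate, bps
T_BO = 3.5 * 320e-6              # mean back off period, Eq. (3)
T_TA = 192e-6                    # turn around time
SIFS, LIFS = 192e-6, 640e-6
L_PHY, L_MAC_HDR, L_MAC_FTR = 6, 3, 2

def delay(x: int, l_address: int = 8, ack: bool = True) -> float:
    # Eq. (2): overall per-packet delay for a payload of x bytes.
    t_frame = 8 * (L_PHY + L_MAC_HDR + l_address + x + L_MAC_FTR) / R_DATA  # Eq. (4)
    mpdu = L_MAC_HDR + l_address + x + L_MAC_FTR
    t_ifs = SIFS if mpdu <= 18 else LIFS
    d = T_BO + t_frame + t_ifs
    if ack:
        d += T_TA + 8 * (L_PHY + L_MAC_HDR + L_MAC_FTR) / R_DATA  # Eq. (5)
    return d

def throughput(x: int, l_address: int = 8, ack: bool = True) -> float:
    return 8 * x / delay(x, l_address, ack)  # Eq. (1)

# 16-bit addresses (l_address = 8 bytes incl. PAN identifiers), max payload 114 bytes:
print(round(throughput(114, 8, ack=False)))  # 151596
print(round(throughput(114, 8, ack=True)))   # 139024
```

For payloads in the LIFS regime these functions reproduce the a·x + b form of Table 1 (a = 32 μs per byte; b = 2368 μs and 2912 μs for 16-bit addresses without and with ACKs).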
4 Analysis
In this section, we analyze the throughput and bandwidth efficiency of IEEE 802.15.4 and discuss the lower delay limit. Several scenarios are considered: an address length of 64 bits, an address length of 16 bits, or no address info at all, in all cases with or without the use of ACKs. The bandwidth efficiency is expressed as:

η = TP / R_data    (8)

The results can be found in figures 3 and 4, where figure 3 gives the useful bitrate and figure 4 the bandwidth efficiency. In the figures, the payload size represents the number of bytes received from the upper layer. In section 2 it was mentioned that the maximum size of the MPDU is 127 bytes; consequently, the number of data bytes that can be sent in one packet is limited. This can be seen in the figures: when the address length is set to 2 bytes (or 16 bits), the maximum payload size is 114 bytes. This can be calculated as follows: MPDU = L_MAC_HDR + L_address + L_MAC_FTR + payload, where L_address equals 2·2 bytes + 2·2 bytes for the PAN identifiers and the short addresses respectively. Putting the correct values into the formula for the MPDU gives 114 bytes as the maximum payload length. When the long address structure is used (64 bits), 102 data bytes can be put into one packet. If no addresses are used, the PAN identifiers can be omitted, which means that L_address is zero; the maximum payload then becomes 122 bytes.

In general, we see that the useful bitrate and the bandwidth efficiency grow as the number of payload bytes increases. The same remark was made when investigating the throughput of IEEE 802.11 [5] and is to be expected, as all packets have the same overhead irrespective of the length of the packet. Further, the small bump in the graph for an address length of 16 bits at 6 bytes, figure 3(b), is caused by the transition from SIFS to LIFS: at that point the MPDU becomes larger than 18 bytes.
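The payload limits quoted above follow directly from the 127-byte MPDU cap; a one-line sketch (the function name is ours):

```python
MPDU_MAX, L_MAC_HDR, L_MAC_FTR = 127, 3, 2

def max_payload(l_address: int) -> int:
    # MPDU = L_MAC_HDR + l_address + payload + L_MAC_FTR <= 127 bytes
    return MPDU_MAX - L_MAC_HDR - L_MAC_FTR - l_address

# no addresses; 16-bit (8 bytes incl. PAN ids); 64-bit (20 bytes incl. PAN ids)
print(max_payload(0), max_payload(8), max_payload(20))  # 122 114 102
```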
In all cases, the bandwidth efficiency increases when no ACK is used, which is to be expected as less control traffic is sent. In figures 3 and 4 we have only shown the graphs for short and long addresses; the graphs for the scenario without addresses are similar, with the understanding that the maximum throughput is higher when no addresses are used. Those graphs were omitted for reasons of clarity. A summary can be found in Table 2, where the maximum bitrate and bandwidth efficiency of the several scenarios are given. We can see that under optimal circumstances, i.e. using no addresses and without ACKs, an efficiency of 64.9% can be reached. If acknowledgements are used, an efficiency of merely 59.5% is obtained. Using the short address further lowers the maximum bitrate by about 4%. The worst result is an efficiency of only 49.8%, which is reached when the long address is used with acknowledgements. The main reason for these low results is that the length of the MPDU is limited to 127 bytes: the number of overhead bytes is relatively large compared to the number of useful bits (MPDU payload). This short packet
Fig. 3. Useful bitrate as a function of the number of payload bytes for the different address schemes. The graph on the right (b) shows a snapshot of the left graph for an address size of 16 bits; the transition from SIFS to LIFS can be seen clearly.
70
Bandwidth efficiency (%)
60
50
40
30
20
address 16 bits + ACK address 16 bits no ACK address 64 bits + ACK address 64 bits no ACK
10
0
0
20
40
60 80 Payload size (bytes)
100
120
Fig. 4. Bandwidth efficiency of IEEE 802.15.4
length was chosen in order to limit the number of collisions (small packets are used) and to improve fair use of the medium. Further, the main application area
Maximum Throughput and Minimum Delay in IEEE 802.15.4
873
Table 2. Maximum bitrate and maximum efficiency of IEEE 802.15.4 for different address lengths

nr of address bits           maximum bitrate (bps)   maximum efficiency (%)
0 bits     ACK               147,780                 59.5
           no ACK            162,234                 64.9
16 bits    ACK               139,024                 55.6
           no ACK            151,596                 60.6
64 bits    ACK               124,390                 49.8
           no ACK            135,638                 54.8
Fig. 5. Minimum delay for varying payload sizes for the short and long address
of this standard focuses on the transmission of small quantities of data, hence the small data packets. Figure 5 gives the minimum delay each packet experiences. We immediately notice that the delay is a linear function of the number of payload bytes, as long as we assume a payload of more than 6 bytes for the short address scheme. The jump in the graph for the short address length is caused by the IFS-mechanism. In table 3, the minimum delay is given for the different scenarios. For the maximum payload, the minimum delay is the same for all the scenarios. Indeed, the MPDU is set to the maximum of 127 bytes. However, as can be seen in figure 5, the maximum number of payload bits differs when the short or long address is used.
5 Experimental Results
In order to validate the theoretically obtained maximum throughput, we experimentally measure the throughput between two radios using the IEEE 802.15.4
Table 3. Minimum delay in ms for a payload of zero bits and a payload of the maximum number of bits

                                  delay (ms)
nr of address bits           payload = 0 bits   payload = maximum
0 bits     ACK               2.21               6.56
           no ACK            1.66               6.02
16 bits    ACK               2.46               6.56
           no ACK            1.92               6.02
64 bits    ACK               3.30               6.56
           no ACK            2.75               6.02
Fig. 6. Comparison between analytical and experimental results when short addressing and acknowledgments are used
specification. For our experiments, we used the 13192 DSK (Developer's Starter Kit) of Freescale Inc., which uses the MC13192 radio chip of Freescale Inc. [9]. This radio works at 2.4 GHz, and the included software implements the IEEE 802.15.4 standard. In order to minimize interference caused by other occupants of the 2.4 GHz band, we used channel 16 (the highest channel), as this channel does not overlap with any of the channels of IEEE 802.11 [6]. Figure 6 compares the theoretically and experimentally obtained results when a short address and acknowledgments are used. We see that the experimental curve is lower than the one obtained analytically; the relative difference between the two curves is steady at about 11%. However, the two graphs have the same shape. We fitted the experimental curve with (7) and obtained the following values for a and b, respectively: 0.0000324 and 0.00359. The analytical values can be found in Table 1 (16-bit address and ACK used):
0.000032 and 0.00291. We see that the main difference lies in the part that is independent of the number of bytes sent. This indicates that an extra delay or processing time needs to be added to each packet. The duration of this extra delay is about 680 μs (b is expressed in seconds: 0.00359 − 0.00291 = 0.00068 s). Another experiment was done in which the long address was used without an ACK. Again, a lower throughput than theoretically expected was achieved, now with a difference of about 9%. The fitted values for a and b are 0.00003201 and 0.003271, respectively. As in the previous situation, the extra delay is independent of the number of bits sent and amounts to about 520 μs. The time difference in the two situations is comparable; the extra delay is probably caused by the processing of the software on the devices.
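The comparison above is easy to reproduce numerically. The sketch below assumes, as the fitted values suggest, that Eq. (7) has the linear form T(p) = a·p + b seconds for a payload of p bytes, so that the useful bitrate is 8p/T(p); the constants are the ones quoted in the text:

```python
def useful_bitrate(p, a, b):
    """Useful bitrate (bps) for a payload of p bytes, delay model T = a*p + b (s)."""
    return 8 * p / (a * p + b)

a_ana, b_ana = 0.000032, 0.00291    # analytical, 16-bit address + ACK (Table 1)
a_exp, b_exp = 0.0000324, 0.00359   # experimental fit from the text

# Per-packet processing overhead, independent of the payload size
extra_delay_us = (b_exp - b_ana) * 1e6
print(round(extra_delay_us))                      # about 680 microseconds
print(round(useful_bitrate(114, a_ana, b_ana)))   # bitrate at maximum payload
```

Note that the analytical model at the maximum 114-byte payload gives T = 0.000032 · 114 + 0.00291 ≈ 6.56 ms, matching the maximum-payload delay in Table 3.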
6 Conclusion
The maximum throughput and minimum delay were determined under the condition that there is only one radio sending and one radio receiving. The next step in analyzing the performance of IEEE 802.15.4 would be to introduce more transmitters and receivers that can hear each other. It is expected that the maximum overall throughput, i.e., the throughput achieved by all the radios together, will fall, as the different radios have to access the same medium; this results in collisions and longer back-off periods, which in turn cause lower throughput and larger delays. Another issue is the performance of the slotted version of IEEE 802.15.4 and the use of varying duty cycles; such a study was done in [7]. As mentioned in Sections 2 and 4, IEEE 802.15.4 works in the 2.4 GHz band, the same band as WiFi (IEEE 802.11) and Bluetooth (IEEE 802.15.1). Consequently, these technologies will cause interference when used simultaneously. The interference between WiFi and 802.15.4 was investigated in [6] and [8]. It was concluded that WiFi interference is detrimental to a WPAN using 802.15.4; however, if the distance between the IEEE 802.15.4 and IEEE 802.11b radios exceeds 8 meters, the interference of IEEE 802.11b is almost negligible. In this paper, we have presented exact formulae for determining the maximum theoretical throughput of the unbeaconed version of IEEE 802.15.4. It was concluded that this throughput varies with the number of data bits in the packet and that a maximum throughput of 163 kbps can be achieved. Generally, the bandwidth efficiency is rather low due to the small packet size imposed by the standard.
Acknowledgements This research is partly funded by the Belgian Science Policy through the IAP V/11 contract, by The Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) through the contracts No. 020152, No. 040286 and a PhD grant for B. Latr´e, by the Fund for Scientific Research - Flanders (F.W.O.-V., Belgium) and by the EC IST integrated project MAGNET (Contract no. 507102).
References

1. IEEE Std. 802.15.4: IEEE Standard for Wireless Medium Access Control and Physical Layer Specifications for Low-Rate Wireless Personal Area Networks, 2003
2. Callaway, E., et al.: "Home Networking with IEEE 802.15.4: A Developing Standard for Low-Rate Wireless Personal Area Networks", IEEE Communications Magazine, Vol. 40, No. 8, pp. 70-77, Aug. 2002
3. Gutierrez, J.A., Naeve, M., Callaway, E., Bourgeois, M., Mitter, V., Heile, B.: "IEEE 802.15.4: A Developing Standard for Low-Power, Low-Cost Wireless Personal Area Networks", IEEE Network, Vol. 15, No. 5, Sep./Oct. 2001, pp. 12-19
4. ZigBee Alliance, www.zigbee.org
5. Xiao, Y., Rosdahl, J.: "Throughput and Delay Limits of IEEE 802.11", IEEE Communications Letters, Vol. 6, No. 8, Aug. 2002, pp. 355-357
6. Shin, S.Y., Choi, S., Park, H.S., Kwon, W.H.: "Packet Error Rate Analysis of IEEE 802.15.4 under IEEE 802.11b Interference", Wired/Wireless Internet Communications 2005, LNCS 3510, May 2005, pp. 279-288
7. Zheng, J., Lee, M.J.: "Will IEEE 802.15.4 Make Ubiquitous Networking a Reality?: A Discussion on a Potential Low Power, Low Bit Rate Standard", IEEE Communications Magazine, Vol. 42, No. 6, Jun. 2004, pp. 140-146
8. Golmie, N., Cypher, D., Rebala, O.: "Performance Evaluation of Low Rate WPANs for Sensors and Medical Applications", Proceedings of the Military Communications Conference (MILCOM 2004), Oct. 31 - Nov. 3, 2004
9. Freescale Inc., http://www.freescale.com/ZigBee
On the Capacity of Hybrid Wireless Networks in Code Division Multiple Access Scheme Qin-yun Dai1, Xiu-lin Hu1, Zhao Jun2, and Yun-yu Zhang1 1
Department of Electronic and Information Engineering, Huazhong University of Science & Technology, Wuhan, Hubei 430074, China
[email protected]
2 Shanghai Branch, China Netcom Corporation Ltd., Pudong, Shanghai 201203, China
[email protected]
Abstract. The hybrid wireless network is a novel network model in which a sparse network of base stations is placed within an ad hoc network. The throughput capacity of hybrid wireless networks is considered in order to evaluate the performance of this network model. In this paper, we propose a general framework for analyzing the capacity of hybrid wireless networks under a code division multiple access scheme. We then derive analytical expressions for the capacity of hybrid wireless network systems under some assumptions. Finally, simulation results show that the hybrid wireless network can be a tradeoff between centrally controlled networks and ad hoc networks.
1 Introduction

Throughput capacity is an important parameter for evaluating the performance of wireless networks; it denotes the long-term achievable data transmission rate that a network can support. The network architecture is one of the important factors influencing capacity performance. Wireless network architectures can be roughly divided into two categories [1], as shown in Fig. 1. A widely used architecture is a network centrally controlled by base stations, where every node communicates with others through the base stations, i.e., a centrally controlled network. An alternative is the ad hoc architecture, where each node has the same capabilities: two nodes wishing to communicate do so directly or use nodes lying between them to route their packets. Recently, a hybrid wireless network model was proposed in [2], formed by placing a sparse network of base stations in an ad hoc network. In a hybrid wireless network, data may be forwarded in a multi-hop fashion or through base stations. In the case of a centrally controlled network, the capacity performance is analyzed on a per-cell basis by considering the uplink and downlink, which has been well studied over the last decades. The throughput capacity of ad hoc networks has also been discussed widely. In fact, the physical layer of wireless networks is undergoing tremendous development because of recent advances in signal processing and multiple antenna systems, and it has a significant impact on the capacity performance.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 877 – 885, 2005. © Springer-Verlag Berlin Heidelberg 2005
878

Q.-y. Dai et al.

Fig. 1. Two wireless network architecture models are given. The left part shows the sketch map of a centrally controlled network and the right part is regarded as an ad hoc network. A, B, C, D, E denote different nodes in a network. BS is the abbreviation for a base station. The rectangle in the right part is considered as a barrier during the transmission process.
With a better coding scheme, such as complex network coding [3] (including, for example, multiple access and broadcast), network capacity could be improved under the same physical constraints. In spread spectrum, multi-packet reception (MPR) can be obtained by assigning multiple codes to a single receiver. In this paper, we focus on the capacity of hybrid wireless networks. Liu [4] considered two different routing strategies and studied the scaling behavior of the throughput capacity of a hybrid wireless network under a point-to-point model. A well-known property of the code division multiple access (CDMA) scheme is the possibility of receiving multiple packets at the same time. We propose a general framework for analyzing the capacity of hybrid wireless networks in the CDMA scheme using a Markov chain approach, and derive analytical expressions for the hybrid wireless network capacity under some assumptions. Simulation experiments show that CDMA is an attractive choice of technology for improving the capacity performance of hybrid wireless networks.
2 The Hybrid Wireless Network Model

2.1 Network Component

A hybrid wireless network consists of two components. The first component is an ad hoc network comprising n_n nodes in a network region. The second component is a sparse network of n_B base stations, which are placed within the ad hoc network in a regular pattern. The ad hoc network includes static and mobile nodes, where the static nodes are distributed uniformly. The mobile nodes are randomly distributed at time t = 0; later, they move under the uniform mobility model, so that the positions of the mobile nodes at time t are independent of each other and the steady-state distribution of the mobile nodes is uniform [5].
2.2 Routing Strategy

In a hybrid wireless network, there are two transmission modes: ad hoc mode and infrastructure mode. In the ad hoc mode, data are forwarded from the source to the destination in a multi-hop fashion without using base stations. In the infrastructure mode, data are forwarded through base stations. We adopt the routing strategy proposed by Liu [4]: if the destination is located in the same cell as the source node, data are forwarded in the ad hoc mode; otherwise, data are forwarded in the infrastructure mode.

2.3 Transmission Mode Description in Code Division Multiple Access Scheme

In the infrastructure mode, multiple nodes transmit packets to each other through base stations. We assume a time division duplex (TDD) system with equal-sized uplink and downlink packets, each occupying one time slot. Nodes are half-duplex: they are always in the receiving mode during the downlink period and, similarly, in the transmitting mode during the uplink period. A slotted ALOHA random access protocol is used by all nodes in the uplink: whenever a node has a new packet to transmit, it sends the packet in the earliest available uplink time slot. If the packet is not successfully received by the base station, the node retransmits the packet with a fixed probability in each successive uplink slot until a successful transmission occurs. Each node in the network transmits packets in the uplink using a unique spreading code, which is assumed to be randomly generated. We assume that the receiver at the base station is a bank of matched filters, while the base station uses orthogonal codes for packets intended for different nodes in the downlink, so that each receiving node always successfully receives its packets and the transmission success of a packet depends on the uplink reception alone. In the ad hoc mode, nodes transmit to each other directly through a common channel by which all nodes are fully connected.
The same slotted ALOHA random access protocol used in the infrastructure mode is also employed by all nodes. The transceiver at each node is also half-duplex. Every node uses a unique code to spread its transmitted packets. In order to receive packets from any potential node, we assume that each node knows all possible spreading codes and that the receiver at each node is also a bank of matched filters.

2.4 Assumptions

The classical analysis of slotted ALOHA by Kleinrock [6] is used in this paper: a node that needs to retransmit a packet is said to be in the backlogged state; otherwise a node is in the unbacklogged state. To simplify the analysis, we ignore noise and assume that errors in a packet are caused by multiple-access interference (MAI) alone. The following five assumptions about the hybrid wireless network are made [7].

Assumption 1: Nodes generate packets according to independent Poisson processes with equal arrival rate.
Assumption 2: There is immediate feedback about the status of a transmission.
Assumption 3: There is no buffer at any node, i.e., each node can hold at most one packet at a time.
Assumption 4: Probability s_ki is the probability that the receiver successfully detects i out of k colliding packets in a time slot.
Assumption 5: Each node has equal probability to transmit to every other node.
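The access rule of Sections 2.3-2.4 can be illustrated with a tiny Monte Carlo sketch (ours, not part of the paper's analytical model): here a single success probability stands in for the reception probabilities s_ki of Assumption 4, and p_r is the fixed retransmission probability of a backlogged node.

```python
import random

def slots_until_delivery(success_prob, p_r, rng):
    """Uplink slots needed to deliver one packet: first attempt in the
    earliest slot, then retransmission with probability p_r per slot."""
    slots = 1
    delivered = rng.random() < success_prob
    while not delivered:
        slots += 1
        if rng.random() < p_r:          # backlogged node retransmits
            delivered = rng.random() < success_prob
    return slots

rng = random.Random(42)
trials = [slots_until_delivery(0.8, 0.5, rng) for _ in range(10_000)]
avg = sum(trials) / len(trials)
# expected value is 1 + (1 - 0.8) / (0.5 * 0.8) = 1.5 slots
```

The empirical average matches the closed-form expectation 1 + (1 − s)/(p_r · s) of this simplified model, a warm-up for the full Markov chain analysis that follows.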
3 Performance Analyses

For an n_n-node network, the Markov chain is characterized by an (n_n + 1) × (n_n + 1) transition matrix P = [p_{nk}], with p_{nk} being the probability that the network state goes from n to k in one transition. Next, we characterize the Markov chain for a hybrid wireless network by obtaining the transition matrix of the system.

3.1 Characterization of Hybrid Wireless Networks

In the infrastructure mode, the network state changes every two time slots (packets are transmitted during the uplink time slot and received in the downlink time slot), and p^I_{nk} represents the probability that the network state goes from n to k in two time slots. To obtain the state transition matrix P_I = [p^I_{nk}], we first define the reception matrix S for the base station:

S = \begin{pmatrix}
s_{10} & s_{11} & 0 & \cdots & 0 \\
s_{20} & s_{21} & s_{22} & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
s_{n_n 0} & s_{n_n 1} & s_{n_n 2} & \cdots & s_{n_n n_n}
\end{pmatrix} ,   (1)
where s_{jk} is the probability that the base station successfully demodulates k out of j packets. The computation of the packet success probability follows [8]. Let k be the total number of packets in a slot and N be the spreading gain; the bit error rate x is [9]

x = Q\!\left( \sqrt{ \frac{3N}{k-1} } \right),   where   Q(y) = \frac{1}{\sqrt{2\pi}} \int_y^{\infty} e^{-t^2/2}\, dt .

Under the assumption that errors occur independently in a packet, the packet success probability is p_I(k) = \sum_{i=0}^{e_b} \binom{L_p}{i} x^i (1-x)^{L_p - i}, where e_b is the number of bit errors that can be corrected by coding and L_p is the length of a packet. Then

s_{kn} = \binom{k}{n} p_I(k)^n \left(1 - p_I(k)\right)^{k-n} .   (2)
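These reception probabilities translate directly into code. The sketch below implements the Gaussian-approximation BER, the packet success probability, and s_kn; the values of N, L_p and e_b are illustrative, not taken from the paper, and the single-packet case (k = 1, no interference) is mapped to x = 0 since noise is ignored.

```python
import math

def q_func(y):
    # Q(y) = (1/sqrt(2*pi)) * integral_y^inf exp(-t^2/2) dt
    return 0.5 * math.erfc(y / math.sqrt(2))

def ber(k, N):
    # x = Q(sqrt(3N/(k-1))) for k simultaneous packets, spreading gain N
    if k <= 1:
        return 0.0          # no multiple-access interference, noise ignored
    return q_func(math.sqrt(3 * N / (k - 1)))

def packet_success(k, N, L_p, e_b):
    # p_I(k) = sum_{i=0}^{e_b} C(L_p, i) x^i (1-x)^(L_p - i)
    x = ber(k, N)
    return sum(math.comb(L_p, i) * x**i * (1 - x)**(L_p - i)
               for i in range(e_b + 1))

def s_kn(k, n, N, L_p, e_b):
    # Eq. (2): probability of n successes out of k colliding packets
    p = packet_success(k, N, L_p, e_b)
    return math.comb(k, n) * p**n * (1 - p)**(k - n)

# Sanity check: each row of the reception matrix S sums to one
row_sum = sum(s_kn(5, n, N=16, L_p=256, e_b=2) for n in range(6))
```

Since Eq. (2) is a binomial distribution in n, the row sum over n = 0 … k equals one, which the last line verifies numerically.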
Let Q_a^I(k, n) be the probability that k unbacklogged nodes transmit packets in a given uplink slot and Q_r^I(k, n) the probability that k backlogged nodes transmit:

Q_a^I(k, n) = \binom{n_n - n}{k} (1 - p_a^I)^{n_n - n - k} (p_a^I)^k ,   Q_r^I(k, n) = \binom{n}{k} (1 - p_r)^{n-k} p_r^k ,

where p_a^I = 1 - e^{-2\lambda/n_n} is the probability that at least one packet arrives at an unbacklogged node during two slots for the Poisson arrival with rate λ, and p_r is the retransmission probability for a backlogged node during the uplink slot. The transition probability, with s_{00} defined to be one, is

p^I_{nk} = \begin{cases}
\sum_{y=n-k}^{n} \sum_{x=0}^{n_n - n} s_{(x+y),\,[x+(n-k)]}\, Q_r^I(y, n)\, Q_a^I(x, n), & 0 \le k < n \\
\sum_{x=k-n}^{n_n - n} \sum_{y=0}^{n} s_{(x+y),\,[x-(k-n)]}\, Q_a^I(x, n)\, Q_r^I(y, n), & n \le k \le n_n .
\end{cases}   (3)

The stationary distribution of the network state \{q_n^I\}_{n=0}^{n_n} can be obtained by solving the balance equation q^I = q^I P_I, where q^I = [q_0^I, q_1^I, \ldots, q_{n_n}^I] and \sum_n q_n^I = 1.
r11
0
r21
r22
rn n 1
rn n 2
L 0 · ¸ L 0 ¸. M ¸ ¸ L rn n n n ¸¹
(4)
where rjk is the probability that k out of j packets in the time slot are received by their intended receivers in the network. Theorem 1 [7]: Under assumption 1-5, given total L ≤ n n packets are transmitted in a time slot, the probability that there are n ≤ L successfully received packets by their intended receivers in the network is given by
L
rLn = ¦l =n
min( l , n n − L )
¦
J = min( J , l )
§n n − L· ¨¨ ¸ J ¸¹ q Ll © (n n − L )l
. (5)
§ · J l! ¨ ¸ × ¦ × ¨ ¦ ∏ d L ,a i , b j ¸ J F L a ! a ! a ! ¨ ¦ b j =n i =1 ¸ J ¦ j=1 a j =l 1 2 © j=1 ¹
Where
L −( a i − bi )
¦
k = bi
a j = 1,2L l, b j = 0,1,L a j
and
§ L ·§ n − L · ¸¸ q Ll = ¨¨ ¸¸¨¨ n © l ¹© n n − l ¹
l
§ L −1 · ¨¨ ¸¸ © n n −1¹
L−l
, d L ,a , b = i
i
§ a i ·§ L − a i · ¨¨ ¸¸¨¨ ¸¸ . Similar to the infrastructure mode, the transition prob ability © b i ¹© k − b i ¹s LK §L· ¨¨ ¸¸ ©k¹ n −n ¦n ¦ n r(x + y )[x +( n −k ) ]Q ar (y, n )Q aa (x, n ) 0 ≤ k < n . ° p ank = ® nyn =−nn− k nx =0 a a °¯¦x = k −n ¦ y =0 r( x + y )[x −( k −n ) ]Q a ( x , n )Q r ( y, n ) n ≤ k ≤ n n
(6)
§n − n· n −n −k ¸¸(1 − p aa ) n (p aa )k , Q ar (k, n ) = §¨¨ nk ·¸¸(1 − p r )n −k p kr are probabilities that k Q aa (k , n ) = ¨¨ n © k ¹ © ¹
packets are transmitted by unbacklogged and backlogged nodes in one time slot, respectively, and paa and pr are packet transmission probabilities for unbacklogged and backlogged nodes in one time slot, respectively, r00 is also define to be one,
with p_a^a = 1 - e^{-\lambda/n_n}. Similar to the infrastructure mode, the stationary distribution \{q_n^a\}_{n=0}^{n_n} of the ad hoc mode can be obtained by solving the Markov chain balance equation q^a = q^a P_a, where q^a = [q_0^a, q_1^a, \ldots, q_{n_n}^a] and \sum_n q_n^a = 1.
3.2 Throughput of Hybrid Wireless Networks

In the infrastructure mode, given network state n, the number of packets successfully received by their intended receivers in two time slots is N_n = \sum_{k=1}^{n_n} p_k^I \sum_{l=0}^{k} l\, s_{kl}, where p_k^I = \sum_{x=0}^{k} Q_a^I(x, n)\, Q_r^I(k-x, n) is the probability that in total k packets are transmitted in the uplink time slot. Because the throughput \beta_I(n) and the average throughput \beta_I are defined per time slot, \beta_I(n) = N_n / 2 and \beta_I = E(\beta_I(n)) = \sum_{n=0}^{n_n} \beta_I(n)\, q_n^I, where q_n^I is the stationary distribution of the network state Markov chain.

In the ad hoc mode, \beta_a(n) and the average throughput \beta_a are \beta_a(n) = \sum_{k=1}^{n_n} p_k^a \sum_{l=0}^{k} l\, r_{kl} and \beta_a = E(\beta_a(n)) = \sum_{n=0}^{n_n} \beta_a(n)\, q_n^a, where p_k^a is the probability that in total k packets are transmitted in one time slot in the ad hoc mode.

Theorem 2: According to our model in Section 2, the average throughput of the hybrid wireless network in the CDMA scheme is \beta_H = (1 - 1/n_B)\, \beta_I + (1/n_B)\, \beta_a.

Proof: In our model, the static nodes of the hybrid wireless network are distributed uniformly and the steady-state distribution of the mobile nodes is uniform. According to the routing policy in Section 2, the probability that the destination is located in the same cell as the source node is 1/n_B, so the probability that data are forwarded in the ad hoc mode is 1/n_B; similarly, the probability that data are forwarded in the infrastructure mode is 1 - 1/n_B. Because the average throughputs of the infrastructure mode and the ad hoc mode in the CDMA scheme are \beta_I and \beta_a, respectively, the result follows. ∎
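Theorem 2 reduces to a one-line computation once β_I and β_a are known; the throughput values below are arbitrary placeholders, not results from the paper.

```python
def hybrid_throughput(beta_i, beta_a, n_b):
    """beta_H = (1 - 1/n_B) * beta_I + (1/n_B) * beta_a (Theorem 2)."""
    return (1 - 1 / n_b) * beta_i + (1 / n_b) * beta_a

# With n_B = 3 as in the experiments, two thirds of the traffic (on average)
# is carried in the infrastructure mode.
beta_h = hybrid_throughput(3.0, 1.5, 3)   # placeholder beta_I = 3.0, beta_a = 1.5
```

As a sanity check, when β_I = β_a the mix is irrelevant and β_H equals that common value for any n_B.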
4 Experimental Results

In this section, experimental results on the throughput capacity of a hybrid wireless network in the CDMA model, together with the corresponding comparisons, are given. For the three network models, i.e., the hybrid wireless network, the centrally controlled network, and the ad hoc network, we set n_B = 3 and n_n = 50. Results on the throughput capacity of the three network models are obtained by simulating traffic transmission under the same physical parameters, such as the data arrival rate, the bit error rate and so on. Fig. 2 shows that the hybrid wireless network model is a tradeoff between a centrally controlled network and an ad hoc network: the throughput capacity of a hybrid wireless network is slightly smaller than that of the centrally controlled network and far larger than that of the ad hoc network.
Fig. 2. Comparison of the average throughput capacity in three different network models under the same physical parameters
Fig. 3. Comparison of average throughput capacity between point-to-point and CDMA (nB=1)
Fig. 3 compares the throughput capacity of a hybrid wireless network under the point-to-point model and under the CDMA scheme. For the two models, the variation of the throughput is analyzed as the number of nodes in the network changes from 10 to 50. We observe that the throughput capacity in the CDMA scheme is larger than that of the point-to-point model, and the capacity advantage of the CDMA scheme becomes more distinct as the number of base stations increases. The throughput decreases towards some value as the number of nodes increases. We believe the above results could be
Fig. 4. Influence of the Poisson arrival rate on the throughput capacity of a hybrid wireless network in the CDMA scheme (λ = 0.1, 0.5, 1.0)
Fig. 5. Influence of the number of base stations on the throughput capacity of a hybrid wireless network in the CDMA scheme (nB = 1, 3, 5, 8, 10, 20)
improved if an appropriate routing policy is selected, which could keep the throughput performance at a high, constant value with no downward trend. Next, results on the influence of the Poisson arrival rate λ on the throughput performance of a hybrid wireless network in the CDMA scheme are obtained. On the whole, the throughput performance improves as λ increases. From Fig. 4, we observe that, when n_B remains constant, the throughput capacity increases to some value as n_n grows and then remains steady. We conjecture that the hybrid wireless network architecture has some theoretical limit on capacity performance under the given conditions, significantly influenced by the number of base stations n_B, although quantitative conclusions are not presented in this paper. Finally, the influence of different values of n_B on the throughput capacity of a hybrid wireless network in the CDMA scheme is also given. The experimental data in Fig. 5 are consistent with intuition: the throughput capacity increases as n_B grows and eventually reaches a steady state as the number of nodes in the network increases. The experimental results suggest that the capacity of the hybrid wireless network, i.e., the load it can carry, is determined once n_B is determined; it could be improved further through advanced signal processing or other means, but any capacity increment is bounded by the theoretical limit depending on n_B.
References

1. Toumpis, S., Goldsmith, A.J.: Some Capacity Results for Ad Hoc Networks. Allerton Conference on Communication, Control and Computing, Vol. 2 (2000) 775-784
2. Dousse, O., Thiran, P., Hasler, M.: Connectivity in Ad Hoc and Hybrid Networks. IEEE INFOCOM'02, Vol. 2 (2002) 1079-1088
3. Gastpar, M., Vetterli, M.: On the Capacity of Wireless Networks: The Relay Case. IEEE INFOCOM'02, Vol. 3 (2002) 1577-1586
4. Liu, B., Liu, Z., Towsley, D.: On the Capacity of Hybrid Wireless Networks. IEEE INFOCOM'03, Vol. 2 (2003) 1543-1552
5. Bansal, N., Liu, Z.: Capacity, Delay and Mobility in Wireless Ad Hoc Networks. IEEE INFOCOM'03 (2003) 1553-1563
6. Kleinrock, L., Lam, S.S.: Packet Switching in a Multi-access Broadcast Channel: Performance Evaluation. IEEE Transactions on Communications, Vol. 23 (1975) 410-423
7. Bao, J.Q., Tong, L.: A Performance Comparison Between Ad Hoc and Centrally Controlled CDMA Wireless LANs. IEEE Transactions on Wireless Communications, Vol. 4 (2002) 829-841
8. Morrow Jr., R.K., Lehnert, J.S.: Bit-to-Bit Error Dependence in Slotted DS/SSMA Packet Systems with Random Signature Sequences. IEEE Transactions on Communications, Vol. 37 (1989) 1052-1061
9. Lehnert, J.S., Pursley, M.B.: Error Probabilities for Binary Direct-Sequence Spread-Spectrum Communications with Random Signature Sequences. IEEE Transactions on Communications, Vol. 35 (1987) 87-98
Performance Evaluation of Existing Approaches for Hybrid Ad Hoc Networks Across Mobility Models Francisco J. Ros, Pedro M. Ruiz, and Antonio Gomez-Skarmeta DIIC, University of Murcia, Spain {fjrm, pedrom, skarmeta}@dif.um.es
Abstract. There is an ongoing effort in the research community to efficiently interconnect Mobile Ad hoc Networks (MANETs) with fixed ones like the Internet. Several approaches have been proposed within the MANET working group of the Internet Engineering Task Force (IETF), but there is still no clear evidence about which alternative is best suited for each mobility scenario, nor about how mobility affects their performance. In this paper, we answer these questions through a simulation-based performance evaluation across mobility models. Our results show the performance trade-offs of existing proposals and the strong influence that the mobility pattern has on their behavior.
1 Introduction and Motivation
Mobile ad hoc networks consist of a number of mobile nodes which organize themselves in order to communicate with each other wirelessly. These nodes have routing capabilities which allow them to create multihop paths connecting nodes which are not within radio range. Such networks are extremely flexible and self-configurable, and they do not require the deployment of any infrastructure for their operation. However, the idea of facilitating the integration of MANETs and fixed IP networks has gained a lot of momentum within the research community. In such integrated scenarios, commonly known as hybrid ad hoc networks, mobile nodes are viewed as an easily deployable extension to the existing infrastructure. Some ad hoc nodes are gateways which can be used by other nodes to seamlessly communicate with hosts in the fixed network. Within the IETF, several solutions have been proposed to deal with the interconnection of MANETs to the Internet. One of the first proposals, by Broch et al. [1], is based on an integration of Mobile IP and MANETs employing a source routing protocol. MIPMANET [2] followed a similar approach based on AODV, but it only works with Mobile IPv4 because it requires foreign agents (FAs). In general, these approaches are tightly coupled with specific types of routing protocols, and therefore their applicability is restricted. The proposals which are receiving more attention within the IETF and the research community in general are those from Wakikawa et al. [3] and Jelger

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 886–896, 2005. © Springer-Verlag Berlin Heidelberg 2005
et al. [4], which define different gateway discovery functions and address allocation schemes. Another interesting proposal is that from Singh et al. [5], which describes a hybrid gateway discovery procedure partially based on the previous schemes. Many works in the literature have reported the strong impact that mobility has on the performance of MANETs; thus, mobility is a central aspect of our evaluations. In particular, we have employed three well-known mobility models (Random Waypoint, Gauss-Markov and Manhattan Grid) to investigate in depth the inter-relation between the Internet interconnection mechanism and the mobility of the network. An in-depth survey of the Random Waypoint and Gauss-Markov models (and others) can be found in [6], while the Manhattan Grid model is defined in [7]. The main novelty of this paper is the investigation of the performance of the Internet connectivity solutions which are receiving more attention within the IETF. To the best of our knowledge, such a study has not been done before. In the authors' opinion, this paper sheds some light on the performance implications of the main features of each approach, presenting simulation results which provide valuable information to interworking protocol designers. Moreover, these results can be used to properly tune the parameters of a given solution depending on the mobility pattern of the network, which can also be useful for hybrid MANET deployers. The remainder of the paper is organized as follows: Sect. 2 provides a global view of the most important current interworking mechanisms. The results of the simulations are shown in Sect. 3. Finally, Sect. 4 gives some conclusions and draws some future directions.
2 Analysis of Current Proposals
In this section we explore the most significant features of the main MANET interconnection mechanisms nowadays, namely those from Wakikawa et al., Jelger et al. and Singh et al. We refer to these solutions using the surname of their first author from now on. Table 1 summarizes the main features provided by each solution.

Table 1. Summary of features of well-known existing proposals

Feature                          Wakikawa    Jelger      Singh
Proactive/Reactive/Hybrid        P/R         P           H
Multiple Prefixes                Yes         Yes         No
Stateless/Stateful               Stateless   Stateless   n/a
DAD                              Yes         No          n/a
Routing Header/Default Routing   RH          DR          RH/DR
Restricted Flooding              No          Yes         No
Load Balancing                   No          No          Yes
Complete Specification           Yes         Yes         No
F.J. Ros, P.M. Ruiz, and A. Gomez-Skarmeta

2.1 Address Allocation
Nodes requiring global connectivity need a globally routable IP address if we want to avoid solutions like Network Address Translation (NAT). There are basically two alternatives for address allocation: addresses may be assigned by a centralized entity (stateful auto-configuration) or generated by the nodes themselves (stateless auto-configuration). The stateful approach is less suitable for ad hoc networks since partitions may occur, although it has also been considered in some works [8]. Both “Wakikawa” and “Jelger” specify a stateless auto-configuration mechanism based on network prefixes advertised by gateways: a node concatenates an interface identifier to one of those prefixes in order to generate its IP address. Currently, “Singh” does not deal with these issues.

2.2 Duplicate Address Detection
Once a node has an IP address, it may check whether the address is already being used by another node. If so, the address should be deallocated and the node should try to obtain another one. This procedure is known as Duplicate Address Detection (DAD), and can be performed by asking the whole MANET whether an address is already in use: when a node receives such a request for an IP address which it owns, it replies to the originator to notify the duplication. This simple mechanism is suggested by “Wakikawa”, but it does not work when network partitions and merges occur. Because of this, and the low likelihood of address duplication when IPv6 interface identifiers are used, “Jelger” prefers to avoid the DAD procedure. The main drawback of the DAD mechanism is the control overhead it introduces in the MANET, especially if the procedure is repeated periodically to detect duplications when a partitioned MANET merges.

2.3 Gateway Discovery
The network prefix information is delivered within the messages used by the gateway discovery function. This is arguably the hottest topic in hybrid MANET research, since it is the feature that has received the most attention so far. Internet gateways are responsible for disseminating control messages which advertise their presence in the MANET, and this can be accomplished in several different ways. “Wakikawa” defines two mechanisms: a reactive one and a proactive one. In the reactive version, when a node requires global connectivity it issues a request message which is flooded throughout the MANET. When a gateway receives this request, it sends a reply which creates reverse routes to the gateway on its way back to the originator. The proactive approach of “Wakikawa” is based on the periodic flooding of gateway advertisement messages, allowing mobile nodes to create routes to the Internet in an unsolicited manner. Of course, this solution heavily increases the gateway discovery overhead because the gateway messages are periodically sent to the whole MANET.
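To make the contrast concrete, the sketch below models the two discovery modes as handlers on a gateway. The class and message names are ours for illustration, not taken from the drafts.

```python
from dataclasses import dataclass

# Message and class names below are illustrative, not from the drafts.
@dataclass
class GatewaySolicitation:
    originator: str

@dataclass
class GatewayAdvertisement:
    gateway: str
    prefix: str

class Gateway:
    """Contrast of the reactive and proactive discovery modes."""

    def __init__(self, name, prefix):
        self.name = name
        self.prefix = prefix
        self.sent = []  # (delivery, target, advertisement) records

    def on_solicitation(self, msg):
        # Reactive mode: reply only when a node floods a solicitation;
        # the reply installs reverse routes on its way back.
        adv = GatewayAdvertisement(self.name, self.prefix)
        self.sent.append(("reply", msg.originator, adv))
        return adv

    def on_timer(self):
        # Proactive mode: periodically flood an unsolicited advertisement
        # to the whole MANET.
        adv = GatewayAdvertisement(self.name, self.prefix)
        self.sent.append(("flood", "*", adv))
        return adv

gw = Gateway("gw1", "2001:db8:1::/64")
gw.on_solicitation(GatewaySolicitation("node7"))
gw.on_timer()
```

The reactive handler transmits only on demand, while the timer handler generates periodic network-wide floods; the overhead consequences of this difference are quantified in Sect. 3.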
Performance Evaluation of Existing Approaches
In order to limit the overhead of proactive gateway discovery, “Jelger” proposes a restricted flooding scheme based on the property of prefix continuity: a MANET node only forwards the gateway discovery messages which it uses to configure its own IP address. This property guarantees that every node shares the same prefix as its next hop towards the gateway, so that the MANET is divided into as many subnets as there are gateways. When “Jelger” is used with a proactive routing protocol, a node creates a default route when it receives a gateway discovery message and uses that message to configure its own global address. But if the approach is integrated with a reactive routing protocol, then a node must perform a route discovery, to avoid breaking the on-demand operation of the protocol. The “Singh” approach introduces a new scenario where gateways are mobile nodes one hop away from a wireless access router. Nodes employ a hybrid gateway discovery scheme, since they can either request gateway information or receive it proactively. The first node which becomes a gateway is known as the “default gateway”, and it is responsible for the periodic flooding of gateway messages. The remaining gateways are called “candidate gateways” and only send gateway information when they receive a request message.

2.4 Routing Traffic to the Internet
The way traffic is directed to the Internet also differs across approaches. “Wakikawa” prefers using IPv6 routing headers to route data packets to the selected gateways. This introduces more overhead due to the additional header, but it is a flexible solution because nodes may dynamically change the selected gateway without changing their IP address, which helps maximize the IP address lifetime. In contrast, “Jelger” relies on default routing, i.e., nodes send Internet traffic using their default route and expect the remaining nodes to correctly forward the data packets to the suitable gateway. “Singh” uses both alternatives: default routing is employed when nodes want to route traffic through their “default gateway”, but they can also use routing headers to send packets to a “candidate gateway”.

2.5 Load Balancing
“Singh” introduces an interesting feature which does not appear in the other proposals: a traffic balancing mechanism. Internet gateways could advertise, within the gateway discovery messages, a metric of the load passing through them. MANET nodes could use this information to make a more informed gateway selection than one based only on the number of hops to the gateway. Unfortunately, no detailed explanation of this procedure is provided in the current specification.
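Since the draft leaves the procedure unspecified, the following sketch shows one plausible way a node could combine hop count with an advertised load metric when selecting a gateway. The record fields and the weighted-sum metric are our assumptions, not part of the “Singh” specification.

```python
# Hypothetical gateway records of the form {"name", "hops", "load"};
# the field names and the linear metric are our own illustration.
def select_gateway(gateways, load_weight=0.5):
    """Pick a gateway by minimizing hops plus a weighted load metric.

    With load_weight = 0 this degenerates to the usual minimum-hop
    selection; larger weights favor lightly loaded gateways.
    """
    return min(gateways, key=lambda g: g["hops"] + load_weight * g["load"])

gws = [
    {"name": "gw1", "hops": 2, "load": 8.0},  # close but congested
    {"name": "gw2", "hops": 4, "load": 1.0},  # farther but idle
]
assert select_gateway(gws, load_weight=0)["name"] == "gw1"  # hop count only
assert select_gateway(gws, load_weight=1)["name"] == "gw2"  # load-aware
```

The weight would have to be tuned per scenario, which is exactly the kind of parameter tuning the mobility-dependent results of Sect. 3 inform.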
3 Performance Evaluation
To assess the performance of “Wakikawa” and “Jelger”, we have implemented them within version 2.27 of the ns-2 network simulator (The Network Simulator, http://www.isi.edu/nsnam/ns/). The gateway selection function uses in both cases the criterion of minimum distance to the gateway, in order to obtain a fair comparison between the two approaches. “Singh” has not been simulated because its current specification is not complete enough, and therefore it has not yet captured the attention of the research community. In addition, we have also implemented the OLSR protocol according to the latest IETF specification2. We have set up a scenario consisting of 25 mobile nodes using 802.11b at 2 Mb/s with a radio range of 250 m, 2 gateways, and 2 nodes in the fixed network. These nodes are placed in a rectangular area of 1200 x 500 m2. 10 active UDP sources have been simulated, each sending a constant bit rate of 20 Kb/s using 512-byte packets. The gateways are located in the upper right and lower left corners, so that paths are long enough to convey useful information. In addition, we use the two routing schemes being considered for standardization within the IETF: OLSR [9] as a proactive scheme, and AODV [10] as a reactive one. This will help us determine not only the performance of the proposals, but also the type of routing protocol for which each is most suitable under different mobility scenarios. The case of OLSR with reactive gateway discovery has not been simulated because in OLSR the routes to every node in the MANET (including the gateways) are already computed proactively, so there is no need to reactively discover the gateways. In both AODV and OLSR we activated the link-layer feedback. Movement patterns have been generated using the BonnMotion3 tool, creating scenarios with the Random Waypoint, Gauss–Markov and Manhattan Grid mobility models. Random Waypoint is the most widely used mobility model in MANET research because of its simplicity. Nodes select a random speed and destination in the simulation area and move toward that destination.
Then they stop for a given pause time and repeat the process. The Gauss–Markov model bases node movements on previous ones, so that there are no abrupt changes of speed and direction. Finally, Manhattan Grid models the simulation area as a city section crossed only by vertical and horizontal streets; nodes are only allowed to move along these streets. All simulations have been run for 900 seconds, with speeds randomly chosen between 0 m/s and (5, 10, 15, 20) m/s. The Random Waypoint and Manhattan Grid models have employed a mean pause time of 60 seconds, although the former has also been simulated with 0, 30, 60, 120, 300, 600 and 900 seconds of pause time in the case of 20 m/s as maximum speed. The Manhattan Grid scenarios have been divided into 8x3 blocks, which allows MAC-layer visibility among nodes on opposite streets of the same block.

3.1 Packet Delivery Ratio
The Packet Delivery Ratio (PDR) is mainly influenced by the routing protocol under consideration, although Internet connectivity mechanisms also have an impact.

2 Code available at http://ants.dif.um.es/masimum/.
3 Developed at the University of Bonn, http://web.informatik.uni-bonn.de/IV/Mitarbeiter/dewaal/BonnMotion/.

Fig. 1. PDR in Random Waypoint model using different pause times (maximum speed = 20 m/s)

Similarly to previous simulations of OLSR in the literature, we can see in Fig. 1 that as mobility increases in the Random Waypoint model, OLSR offers much lower performance than AODV. The reason is that OLSR has a higher convergence time than AODV as the link break rate increases. In addition, according to RFC 3626, when link-layer feedback informs OLSR about a broken link to a neighbor, the link is marked as “lost” for 6 seconds; during this time, packets using the link are dropped. This behavior also affects the routes towards Internet gateways, which is why the PDR is so low in the OLSR simulations. In the case of OLSR, “Jelger” performs surprisingly worse than the proactive version of “Wakikawa”. Given that “Jelger” has a lower gateway discovery overhead, we expected the results to be the other way around. The reason is that “Jelger” is strongly affected by the mobility of the network. After carefully analyzing the simulations, we found that the selection of next hops and gateways makes the topology created by “Jelger” very fragile under mobility. The problem is that the restrictions imposed by prefix continuity in “Jelger” concentrate the traffic on a specific set of nodes. In AODV this problem is not so dramatic because AODV, rather than marking a neighbor as lost, starts finding a new route immediately. So, we can conclude that although prefix continuity has very interesting advantages (as we will see), it has to be carefully designed to avoid traffic concentration and to provide quick reactions to topological changes. Regarding AODV, we can see how proactive “Wakikawa” offers a better PDR than the remaining solutions at high speeds. This is due to the proactive dissemination of information, which updates routes to the Internet as soon as they break.
“Jelger” and reactive “Wakikawa” behave very much the same because the former is designed to create routes on-demand when it is integrated within a reactive routing protocol (although proactive flooding of gateway information is still performed).
Fig. 2. PDR obtained from different mobility models for different maximum speeds: (a) 5 m/s; (b) 15 m/s
Fig. 3. Cause of packet drops for different mobility models: (a) MAC (5 m/s); (b) Full Queue (5 m/s); (c) No Route (5 m/s); (d) MAC (15 m/s); (e) Full Queue (15 m/s); (f) No Route (15 m/s)
One of our goals is to analyze whether the results are consistent across mobility models. Figure 2 shows a comparison between the Random Waypoint, Gauss–Markov and Manhattan Grid mobility models with maximum speeds of 5 m/s and 15 m/s. Figures for other maximum speeds showed a similar trend (they are not included due to space constraints). At first sight we can point out an interesting fact: the mobility model can heavily influence the resulting PDR, but the results seem to be consistent across mobility models. That is, “Jelger” continues to offer a lower PDR than “Wakikawa” when they are integrated within an OLSR network, and AODV does not change its PDR very much regardless of the Internet interconnection mechanism and the mobility model used. In fact, however, each mobility model influences every approach in a different way, exposing its strengths and drawbacks. We can see this better through a more in-depth analysis of the causes of packet drops, as we explain below.
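For reference, the Random Waypoint behavior used above (pick a random destination and speed, travel, pause, repeat) can be sketched in a few lines. The function signature and leg representation are ours; the actual traces were generated with BonnMotion.

```python
import random

def random_waypoint(area=(1200.0, 500.0), vmax=20.0, pause=60.0,
                    legs=3, rng=None):
    """Generate (start, destination, speed, pause) legs of one node's
    Random Waypoint walk over a rectangular area. Defaults mirror the
    scenario above (1200 x 500 m, speeds up to 20 m/s, 60 s pauses)."""
    rng = rng or random.Random(42)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    out = []
    for _ in range(legs):
        # Pick a random destination and speed, travel there, then pause.
        dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
        speed = rng.uniform(0.0, vmax)
        out.append(((x, y), dest, speed, pause))
        x, y = dest  # the next leg starts where this one ended
    return out

trace = random_waypoint()
```

Gauss–Markov differs in that each leg's speed and direction are correlated with the previous leg's, and Manhattan Grid constrains the destinations to street segments; both would replace the independent uniform draws above.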
The Gauss–Markov model presents the highest link break rate of all the simulated mobility models when the maximum speed is high. However, it provokes very few link losses at low speeds. Because this mobility model does not perform abrupt changes in speed and direction, when a node picks a high speed it is very likely to continue travelling at high speed, making links break more often; just the opposite occurs when the node initially chooses a low speed. That sheds some light on the results of Fig. 2, where it is worth pointing out that the PDR dramatically decreases in OLSR as the maximum available speed of the Gauss–Markov model increases. As we said previously, “Jelger” is less robust against frequent topology changes than “Wakikawa”, which is why this behavior of the Gauss–Markov model impacts its performance more. Figure 3 clearly outlines this, because the number of drops due to the absence of a suitable route towards the Internet grows significantly at high speeds in the Gauss–Markov model. Moreover, the number of packet drops due to the MAC layer not being able to deliver a packet to its destination (because of a link break) also increases. The mobility model has a lower influence on AODV than on OLSR, because the former is able to adapt easily to changing topologies. On the other hand, the Manhattan Grid model does not cause many link breaks because node mobility is very restricted. Instead, nodes tend to form groups, increasing contention at the link layer. This is why this model makes the PDR of OLSR and AODV very similar, improving the results of the former. In addition, the performance of “Jelger” and “Wakikawa” also tends to converge, since “Jelger” is very sensitive to the link breaks which this model lacks (see Fig. 3). The Manhattan Grid mobility model fills up interface queues because of MAC-layer contention, while it does not cause many drops due to link breaks (MAC and No Route drops).
As a note, the results obtained with this mobility model depend on the number of blocks used (we have used a fixed configuration, though). In addition, we can see from Fig. 3 that OLSR is not prone to packet drops due to a full interface queue, since it does not buffer data packets before sending them. Some of these drops appear in “Wakikawa” because of its uncontrolled flooding, which creates more layer-2 contention than “Jelger”. In the case of AODV, queues fill up because data packets are buffered while a route is being discovered. This effect is less evident in proactive “Wakikawa” because Internet routes are periodically refreshed.

3.2 Gateway Discovery Overhead
Finally, we evaluate the overhead of the gateway discovery function of each of the proposals. As we can see in Fig. 4, AODV simulations result in a higher gateway overhead as the mobility of the network increases in the Random Waypoint model. This is due to the increase in the link break rate, which makes ad hoc nodes find a new route to the Internet as soon as their default route breaks. We can clearly see that proactive “Wakikawa” generates the largest amount of Internet-gateway messages due to its periodic flooding through the whole network. Reactive “Wakikawa” shows the minimum gateway overhead thanks to its reactiveness. “Jelger” sits between the other two, due to its limited periodic flooding.

Fig. 4. Gateway discovery overhead in the Random Waypoint model using different pause times (maximum speed = 20 m/s)

Fig. 5. Gateway discovery overhead obtained from different mobility models for different maximum speeds: (a) 5 m/s; (b) 15 m/s

As expected, the gateway discovery overhead of the Internet connectivity mechanisms combined with OLSR remains almost unaffected by network mobility. This is due to the fact that Internet connectivity messages are periodically sent out by OLSR without reacting to link breaks, so the gateway control overhead is not heavily affected by mobility. Figure 4 shows that “Jelger” always maintains a lower overhead than proactive “Wakikawa” due to the forwarding restriction imposed by prefix continuity; the difference remains almost constant independently of the mobility of the network. The number of messages due to the gateway discovery function in the OLSR simulations does not vary much regardless of the mobility model used (Fig. 5). The mobility model does not seem to significantly impact the overhead of any of these approaches, except in the case of the Manhattan Grid model, which tends to equalize the results of “Jelger” and “Wakikawa” when they are integrated within OLSR. This is due to the higher contention caused by this mobility model, which reduces the number of control messages that can be sent in “Wakikawa”. The gateway discovery overhead of AODV is strongly affected by the mobility model but, as happened with the PDR, the results remain consistent across mobility models. The Manhattan Grid model causes the fewest link breaks, and therefore the overhead is low for all AODV solutions. The Gauss–Markov model causes little overhead at low speeds (few link breaks) but a lot of overhead at higher speeds (many link breaks). The Random Waypoint mobility model sits in between the others.
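A back-of-the-envelope comparison (our simplification, ignoring MAC losses and message timing) illustrates why restricted flooding helps: with full flooding every node reforwards every gateway's advertisement, while prefix continuity partitions the MANET so that each node forwards only one gateway's messages.

```python
def flooding_forwards(n_nodes, n_gateways, restricted=False):
    """Approximate advertisement forwards per dissemination period.

    Full flooding: each of the n_gateways advertisements is reforwarded
    by every node, so the cost scales as n_gateways * n_nodes.
    Prefix continuity (restricted): each node forwards only the message
    of the gateway whose prefix it uses, so the per-gateway subtrees
    partition the MANET and the total stays near n_nodes.
    """
    if restricted:
        return n_nodes
    return n_gateways * n_nodes

# The simulated scenario: 25 mobile nodes and 2 gateways.
assert flooding_forwards(25, 2) == 50             # proactive "Wakikawa"
assert flooding_forwards(25, 2, restricted=True) == 25  # "Jelger"
```

With only 2 gateways the factor is modest, which is consistent with the roughly constant gap between “Jelger” and proactive “Wakikawa” in Fig. 4; the advantage of restricted flooding would grow with the number of gateways.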
4 Conclusions, Discussion and Future Work
In this paper we have conducted a simulation-based study of the current approaches for interconnecting MANETs and fixed networks. This study has evaluated their performance, and has shown how different mobility models influence the behavior of each solution in different ways. Our results show that, depending on the scenario to be modeled, every solution has its strong and weak points. “Jelger” is better suited to mobility patterns where few link breaks occur, like the Gauss–Markov (at low speeds) and Manhattan Grid mobility models; in those cases it offers a good PDR with a reduced gateway discovery overhead. However, we have seen that although prefix continuity offers an interesting mechanism of limited flooding, it has to be carefully designed in order to avoid routes which are fragile under changing topologies. On the other hand, the reactive and proactive versions of “Wakikawa” are more suitable for high-mobility scenarios. The Random Waypoint and Gauss–Markov (at high speeds) mobility models generate a large number of link breaks, but the “Wakikawa” solution is able to perform quite well under these circumstances. Nevertheless, it is also clear that proactive gateway discovery needs a constrained flooding mechanism to avoid the huge overhead associated with the discovery of gateways. In our opinion, this result points to the need for new adaptive schemes able to adapt to the mobility of the network. In addition to adaptive gateway discovery and auto-configuration, there are other areas on which we plan to focus our future work. These include, among others, improved DAD (Duplicate Address Detection) mechanisms, efficient support of DNS, discovery of application and network services, network authentication and integrated security mechanisms.
Acknowledgment

Part of this work has been funded by the Spanish MCYT by means of the “Ramon y Cajal” work programme, the ICSI Call for Spanish Technologists, and the SAM (MCYT, TIC2002-04531-C04-03) and I-SIS (2I04SU009) projects.
References

1. J. Broch, D.A. Maltz, D.B. Johnson, “Supporting Hierarchy and Heterogeneous Interfaces in Multi-Hop Wireless Ad Hoc Networks.” Proceedings of the Workshop on Mobile Computing held in conjunction with the International Symposium on Parallel Architectures, Algorithms, and Networks, IEEE, Perth, Western Australia, June 1999.
2. U. Jonsson, F. Alriksson, T. Larsson, P. Johansson, G.Q. Maguire, Jr., “MIPMANET: Mobile IP for Mobile Ad Hoc Networks.” IEEE/ACM Workshop on Mobile and Ad Hoc Networking and Computing, pp. 75–85, Boston, MA, USA, August 1999.
3. R. Wakikawa, J.T. Malinen, C.E. Perkins, A. Nilsson, and A. Tuominen, “Global Connectivity for IPv6 Mobile Ad Hoc Networks,” Internet-Draft “draft-wakikawa-manet-globalv6-03.txt”, Oct. 2003.
4. C. Jelger, T. Noel, and A. Frey, “Gateway and Address Autoconfiguration for IPv6 Ad Hoc Networks,” Internet-Draft “draft-jelger-manet-gateway-autoconf-v6-02.txt”, Apr. 2004.
5. S. Singh, J.H. Kim, Y.G. Choi, K.L. Kang, and Y.S. Roh, “Mobile Multi-gateway Support for IPv6 Mobile Ad Hoc Networks,” Internet-Draft “draft-singh-manet-mmg-00.txt”, June 2004.
6. T. Camp, J. Boleng, and V. Davies, “A Survey of Mobility Models for Ad Hoc Network Research,” Wireless Communications & Mobile Computing (WCMC): Special Issue on Mobile Ad Hoc Networking: Research, Trends and Applications, vol. 2, no. 5, pp. 483–502, 2002.
7. “Selection Procedures for the Choice of Radio Transmission Technologies of the UMTS (TS 30.03 v3.2.0)”, 3GPP, April 1998.
8. H.W. Cha, J.S. Park, and H.J. Kim, “Extended Support for Global Connectivity for IPv6 Mobile Ad Hoc Networks,” Internet-Draft “draft-cha-manet-extended-support-globalv6-00.txt”, Oct. 2003.
9. T. Clausen and P. Jacquet, Eds., “Optimized Link State Routing Protocol (OLSR),” IETF RFC 3626, October 2003.
10. C. Perkins, E. Belding-Royer, and S. Das, “Ad hoc On-Demand Distance Vector (AODV) Routing,” IETF RFC 3561, July 2003.
UDC: A Self-adaptive Uneven Clustering Protocol for Dynamic Sensor Networks

Guang Jin and Silvia Nittel

Department of Spatial Information Science and Engineering, University of Maine, Orono, ME 04469, USA
{jin, nittel}@spatial.maine.edu
Abstract. The constrained resources of sensor networks challenge researchers to design resource-efficient protocols. Clustering protocols are efficient in supporting aggregation queries in sensor databases. This paper presents a novel clustering protocol, named UDC (spatially Uneven Density Clustering), to prolong the lifetime of sensor networks. Unlike other clustering protocols, UDC forms distributed sensor nodes into spatially uneven clusters according to local network conditions. In short, under UDC the nodes near the central base are grouped into smaller clusters, while distant nodes are clustered into larger groups to save resources. Our simulation results show that UDC can extend the lifetime of sensor networks up to twice as long as other clustering protocols do.
1 Introduction

Recent successes in nano-scale sensing devices, low-power wireless communication and the miniaturized manufacture of computing devices have led to the technology of sensor networks, which enables us to explore the physical world at a level of detail that cannot easily be obtained in traditional ways. Consisting of a set of sensor nodes, each with different sensors attached and connected to the others through wireless radio links, sensor networks provide a platform to monitor the physical world for different types of applications. Several resources, such as limited battery sources and limited communication bandwidth, however, constrain sensor networks. Those limitations require an efficient protocol to minimize the resource consumption of sensor networks and provide a reliable platform to other applications. For example, SMECN, SPIN-2, SAR, the Directed Diffusion paradigm, etc. are some of the proposed energy-aware communication protocols designed for wireless sensor networks [1]. Sensor database management systems (sensor DBMSs), such as TinyDB [2] and Cougar [3], also address the energy saving problem by minimizing the amount of data transmitted within networks. For example, in TAG [4] sensor nodes process raw readings in-network into small amounts of descriptive information to save precious resources. Clustering protocols, such as LEACH [5] and HEED [6], are ideal to support aggregation queries while prolonging the lifetime of sensor networks. Under a clustering protocol, several sensor nodes are elected as cluster heads that collect raw readings from member nodes, process them and return the processed results back to users. The rotation between the two roles of head and non-head member reduces the average resource consumption of sensor nodes and extends the lifetime of the network.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 897–906, 2005.
© Springer-Verlag Berlin Heidelberg 2005

Based on a model for the energy consumption rate of sensor nodes, we propose a novel protocol, named UDC (spatially Uneven Density Clustering), to support aggregation queries, which adapts clusters to local network conditions. For example, sensor nodes near the central base can have direct communication links with the central base, while faraway nodes can establish larger clusters to reduce energy consumption. UDC forms clusters according to different conditions, such as sensor nodes' locations and cluster sizes. As a result, UDC can prolong the lifetime of networks while maintaining high network quality compared with other approaches, which has been confirmed by our simulation results. The remainder of the paper is organized as follows. Section 2 introduces related work on sensor databases and clustering protocols. A model of cluster characteristics is introduced in Section 3, based on which we present the UDC protocol in Section 4. We present our simulation results and compare UDC with other approaches in Section 5. Finally, Section 6 draws the conclusion and discusses future work.
2 Background

In a typical sensor network application scenario, a powerful central base exists to receive queries from users, disseminate them into the network and receive readings. Although the central base can be a powerful machine, even with unlimited energy, the sensor nodes are devices characterized by constrained resources (e.g., limited energy sources, communication bandwidth and range) [7], which forces all applications over sensor networks to be resource conservative.

2.1 Sensor Database

Several “sensor databases” or “device databases”, such as TinyDB and Cougar, have recently been established. Thus a sensor network can not only monitor the physical world, but also respond to users' queries over it. In most cases, users prefer statistical or other descriptive information rather than raw readings from sensor networks, which inspires researchers to study the in-network processing of aggregation queries. TAG [4] is a framework to support aggregation queries. In TAG, some sensor nodes take the responsibility to process raw readings from other nodes and aggregate them into summarized information to avoid redundant resource overheads.

2.2 Protocol

Routing protocols in sensor networks are a widely researched area, challenged by factors such as energy awareness, lightweight computation, fault tolerance, scalability and the topology of the network [8]. The main categories are flat, clustering (hierarchical) and adaptive protocols [9]. The first type includes routing mechanisms based on flat multi-hop communication. SAR [10] implements a multicast approach, creating a tree initiated from source to destination nodes. Directed Diffusion [11] introduces a data-centric model, an alternative to conventional address-based routing methods. The second category of protocols, hierarchical or clustering approaches, is the basis for aggregation queries. LEACH [5] uses a randomized election algorithm to cluster nodes in sensor networks. HEED [6] improves the efficiency by using an additional cluster-head election procedure among head candidates. In the third type of routing protocols, namely adaptive routing, information is disseminated among every node; the idea is to allow users to query any node in the network. The SPIN-1 and SPIN-2 [12] family of protocols utilizes data negotiation and adaptive algorithms. To reduce bandwidth overhead, nodes only exchange metadata instead of actual data during negotiation. As all the nodes exchange information, the current energy levels in the network are known.
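As an illustration of the randomized election used by LEACH [5], the sketch below implements its well-known round-based threshold; the bookkeeping that excludes recent heads from eligibility is omitted for brevity.

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for a node that has not been a
    cluster head during the last 1/p rounds (p = desired fraction of
    heads, r = current round number)."""
    return p / (1.0 - p * (r % int(1.0 / p)))

def elect_heads(node_ids, p, r, rng):
    # Each eligible node draws a uniform random number and self-elects
    # as cluster head for this round if the draw is below the threshold.
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

heads = elect_heads(range(100), p=0.05, r=0, rng=random.Random(1))
```

Note the rotation property: the threshold starts at p and rises to 1 by round 1/p − 1, so every node serves as head exactly once per cycle on average, which is what spreads the head's extra energy burden across the network.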
3 Cluster Characteristics

Clustering methods have been studied over the decades and are useful in many fields, such as data mining and artificial intelligence [13], to discover the relations within data. The computation costs of many well-established algorithms (e.g., k-means and g-means), however, reduce their applicability in sensor networks. A model to analyze the energy consumption of sensor nodes can help us design resource-conservative clustering protocols.

3.1 An Energy Model of a Cluster

Generally, clustering protocols divide time into units called rounds. In each round, a sensor node can either be a cluster head or be a non-head member of some cluster, based on the choice made by the clustering protocol. A non-head member only sends its reading to its cluster head, while the cluster head aggregates the raw readings from member nodes and returns the processed result back to the central base. Furthermore, since the message size in most aggregation queries does not increase through the processing procedure, it will be assumed constant (i.e., k bits) in our model.

Table 1. Parameters of the energy consumption model

Parameter  Description
Etx        Energy consumption rate for transmitting data.
fx         Energy consumption rate for amplifying the signal to a nearby location.
amp        Energy consumption rate for amplifying the signal to a faraway location.
d0         Distance threshold for the signal amplifiers.
The characteristics of low-energy radio [5, 6] enable an energy cost model (Table 1) that helps us understand the behavior of clustered sensor nodes. To send k bits of data and amplify the signal so that it can be detected by a remote sensor at distance d, a node consumes ESD(k, d) = Etx · k + εfx · d² energy if the sink is located nearby. If the sink is far away (i.e., d is larger than a threshold d0), several sensor nodes have to act as routers to relay messages, and the distance has a quartic effect on energy consumption,
as ESD(k, d) = Etx · k + εamp · d⁴. A sensor node spends ERD energy to receive k bits of data from other nodes, defined as ERD(k) = Etx · k. If a cluster has m nodes, the distance between its cluster head and the central base is d, and the message length is k bits, then the energy consumption rate of its cluster head is

ERCH(d, m) = ERD[(m − 1) · k] + ESD(k, d).    (1)

In (1), the choice between the nearby and faraway forms of ESD is made based on the cluster head's location. Although a cluster head consumes extra energy to receive messages from its members, the burden of a non-head node is lessened, since a non-head node only needs to return its reading to its nearby cluster head. If the distance between a non-head node and its cluster head is d, we can define its energy consumption rate ERMEM as

ERMEM(d) = ESD(k, d).    (2)
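As a minimal sketch (ours, not the authors' implementation), the model of Table 1 and Eqs. (1)–(2) can be coded directly. One caveat: the printed equations show the amplifier terms without the factor k, but the per-bit units of the parameters suggest they scale with message length, as in [5]; the sketch follows the per-bit reading.

```python
# Sketch of the radio energy model of Section 3.1 (illustrative, not the
# authors' code). Constants follow the common settings of [5, 6].
E_TX = 5e-9            # J/bit, radio electronics (send or receive)
EPS_FX = 10e-12        # J/bit/m^2, amplifier for a nearby sink
EPS_AMP = 0.0013e-12   # J/bit/m^4, amplifier for a faraway sink
D0 = 75.0              # m, distance threshold between the two modes

def e_send(k, d):
    """E_SD(k, d): energy to send k bits over distance d."""
    if d < D0:
        return E_TX * k + EPS_FX * k * d ** 2
    return E_TX * k + EPS_AMP * k * d ** 4

def e_recv(k):
    """E_RD(k): energy to receive k bits."""
    return E_TX * k

def er_cluster_head(k, d, m):
    """Eq. (1): per-round cost of a head with m cluster members, at
    distance d from the central base."""
    return e_recv((m - 1) * k) + e_send(k, d)

def er_member(k, d_head):
    """Eq. (2): per-round cost of a non-head member whose head is d_head away."""
    return e_send(k, d_head)
```

The branch in e_send captures the d² / d⁴ switch at the threshold d0 that the text describes.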
3.2 Benefit of Clustering

The discussion above rules out naive methods (e.g., fixed cluster heads, or all nodes with direct links to the central base) in the constrained environment of sensor networks. In a clustering protocol, on the other hand, all sensor nodes rotate between the roles of cluster head and non-head member. Over the whole life of a sensor node, the energy consumption rate can be averaged as

ERSN(d, m) = ERMEM(D̄) · (m − 1)/m + ERCH · 1/m,    (3)
where m is the number of sensor nodes in the cluster, d is the distance between the cluster head and the central base, and ERMEM(D̄) indicates this node's average cost as a non-head member while other nodes act as cluster heads. From (3) we can see that although a sensor node in a larger cluster consumes more energy when it is the cluster head, it also has more chances to act as a non-head member during its lifetime. The average energy consumption rate, ERSN, should play an important role in clustering protocols. Similar models have been developed in [5, 6] to find an optimal probability of electing a cluster head, i.e., 1/m in (3). One deficiency of previous approaches is that they assume the probability is predefined. In fact, the optimal probability of being a cluster head depends on the sensor node's location and the distances between neighboring nodes. For example, if a sensor node is located very near the central base, it is optimal for the node to establish a direct communication link with the central base, since direct communication might be cheaper than using another cluster head as a mediator. Distant sensor nodes may create bigger clusters to lower ERSN. However, as a cluster grows by adding more sensor nodes, the cost of being a non-head member, ERMEM, also increases because of the growing distance between the cluster head and non-head members. The changing layout of mobile sensor networks brings further difficulties to clustering protocols based on a fixed probability. Besides the weakness that a global probability cannot capture locally optimal choices, a fixed probability may result in an improper clustering pattern, especially in a large
network. If we set a global fixed probability, pCH, for nodes to be cluster heads, the number of cluster heads follows the binomial distribution Prob(k) = C(n, k) · pCH^k · (1 − pCH)^(n−k), where k is the number of cluster heads after the clustering procedure and n is the total number of nodes. The variance of this distribution is n · pCH · (1 − pCH). As we can see, the number of cluster heads produced by a fixed-probability clustering procedure varies more as the number of sensor nodes increases, which means a small chance of generating an optimal number of clusters in large networks.
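The growth of this spread is easy to see numerically; the following sketch (our illustration, with pCH = 0.05 chosen arbitrarily) computes the mean and standard deviation of the cluster head count for increasing network sizes:

```python
import math

def head_count_stats(n, p_ch):
    """Mean and standard deviation of the number of self-elected heads
    when each of n nodes elects itself independently with probability
    p_ch (the binomial model above)."""
    mean = n * p_ch
    std = math.sqrt(n * p_ch * (1 - p_ch))
    return mean, std

# The absolute spread grows with network size even though p_ch is fixed,
# so large networks rarely hit an optimal cluster count exactly.
for n in (100, 400, 1600):
    mean, std = head_count_stats(n, 0.05)
    print(f"n={n}: mean {mean:.0f} heads, std {std:.1f}")
```

The standard deviation scales with the square root of n, which is the quantitative content of the claim above.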
4 UDC Protocol

A good clustering protocol should adapt node clusters to local network conditions. Expensive centralized algorithms can produce better clusterings by analyzing all nodes' locations and the requirements of queries, but at the cost of excessive communication overhead. Furthermore, the changing layouts of mobile sensor networks bring additional difficulties to any centralized approach. Distributed approaches like UDC can adapt to local conditions at the expense of exchanging collaborative messages between neighboring nodes. To lessen the burden on sensor nodes, a compact message representing clusters, rather than detailed information about individual sensor nodes, is preferable for clustering protocols.

4.1 Cluster Bounding Circle

UDC uses a compact, fixed-length message, named the cluster bounding circle (CBC), to represent clusters. A CBC is a circle centered at the cluster head, whose diameter is the longest distance between the cluster head and its non-head members. The message for CBC i includes the location of its cluster head, CHi, the diameter of the CBC, di, and the number of sensor nodes it contains, mi. Similar to (3), UDC estimates the energy consumption rate for the cluster head of CBC i as

ERCH(|CHi|, mi, di) = (1/mi) [ERMEM(di) · (mi − 1) + ERCH(|CHi|, mi)],    (4)
where |CHi| indicates the distance between the cluster head and the central base. Rather than the average cost ERMEM(D̄) of (3), UDC uses the upper bound ERMEM(di) in (4) to estimate the cost incurred when this cluster head acts as a non-head member, since in a dynamic environment ERMEM(D̄) is hard to estimate.

After exchanging CBC messages, cluster heads may find themselves in different relations with other clusters. If CBC1 contains CBC2, head 1 uses ERCH(|CH1|, m1 + m2, d1) to estimate its new cost of combining cluster 2 into cluster 1. Since adding cluster 2 into cluster 1 does not increase the diameter of cluster 1 but does decrease the chance that head 1 acts as a cluster head, the new cost is cheaper. In other situations, where CBC1 does not contain CBC2, the diameter of cluster 1 changes by adding cluster 2. Sensor node 1 expects that adding cluster 2 increases the cost of being a non-head member due to the enlarged diameter of the new cluster; hence node 1 uses the estimate ERCH(|CH1|, m1 + m2, d2 + |CH1 − CH2|), where |CH1 − CH2| indicates the distance between cluster heads 1 and 2. The expected new cost might still decrease if the changed chance of being a cluster head is more influential than the change of diameter.
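The containment test and cost comparison just described can be sketched as follows. This is our illustration, not the authors' code: the constants and base location are assumed from the simulation settings of Section 5, and the amplifier terms are scaled by k as in [5].

```python
import math

# Illustrative constants (radio model of Section 3.1, values from Table 2).
E_TX, EPS_FX, EPS_AMP, D0, K = 5e-9, 10e-12, 0.0013e-12, 75.0, 2048
BASE = (50.0, 125.0)   # central base location, assumed

def e_send(d):
    """Energy to send one k-bit message over distance d."""
    amp = EPS_FX * K * d ** 2 if d < D0 else EPS_AMP * K * d ** 4
    return E_TX * K + amp

def er_head_hat(dist_base, m, diameter):
    """Eq. (4): estimated per-node rate for a CBC head, using the
    upper-bound member cost e_send(diameter) in place of the
    hard-to-know average."""
    head_cost = E_TX * (m - 1) * K + e_send(dist_base)   # Eq. (1)
    return (e_send(diameter) * (m - 1) + head_cost) / m

def should_merge(cbc1, cbc2):
    """Head 1 decides whether absorbing cluster 2 lowers its estimated
    rate. A CBC is (head_xy, diameter, member_count)."""
    (xy1, d1, m1), (xy2, d2, m2) = cbc1, cbc2
    gap = math.dist(xy1, xy2)
    # If CBC2 lies inside CBC1, the diameter is unchanged; otherwise it
    # grows to d2 + |CH1 - CH2|, as in Section 4.1.
    new_diam = d1 if gap + d2 <= d1 else d2 + gap
    old = er_head_hat(math.dist(xy1, BASE), m1, d1)
    new = er_head_hat(math.dist(xy1, BASE), m1 + m2, new_diam)
    return new < old

# A far-off head gains by absorbing a contained neighboring cluster ...
print(should_merge(((50, 25), 10, 5), ((52, 25), 3, 3)))
# ... while a head sitting next to the base rejects a distant one.
print(should_merge(((50, 120), 5, 2), ((90, 120), 5, 4)))
```

The two example calls reproduce the trade-off in the text: far from the base, the expensive head-to-base link dominates and merging pays; near the base, the enlarged diameter dominates and merging does not.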
4.2 Description of Algorithms

Reforming clusters may cause extra energy consumption; therefore UDC only allows cluster heads to reform clusters. In short, if a cluster head chooses to join another cluster, all of its member nodes join the new cluster and change their cluster head to the new head. Figure 1 illustrates the clustering algorithm of the UDC protocol. In each round, all sensors first choose themselves as their own cluster heads. Then they enter a loop. At the beginning of the loop, all cluster head nodes prepare CBC messages based on local cluster information and exchange them. Cluster head nodes then evaluate the received CBC messages to decide whether it is worthwhile to enlarge the cluster by absorbing other clusters, based on the discussion above. If enlarging itself would be more expensive, the cluster head node and all its members exit the loop and end the clustering procedure. Otherwise the cluster heads elect new cluster heads among themselves by a sub-clustering procedure, subclustering(), which can introduce the multi-hop idea into UDC. This procedure is not crucial to UDC, since the clusters can reform themselves based on CBC messages. In UDC, a cluster head can accept or reject join requests from other clusters by analyzing CBC messages. Further splitting operations might form better clusters, but UDC omits them to avoid unnecessary energy consumption. After clusters are formed, no changes occur until the next round.

While other protocols assign a predefined global probability to elect cluster heads, UDC adapts sensor nodes to different cluster densities based on local network conditions. Figure 2 shows an example of a clustered layout produced by the UDC protocol, where the star indicates the location of the central base, the crosses indicate the locations of cluster heads, the dots indicate non-head nodes, and the dotted lines indicate the com-
myHead = myID
do {
    nextRound = true
    myCBC = prepare(myCurrentClusterInfo)
    broadcast(myCBC)
    othersCBCs[] = receive()
    isWorthyToEnlarge = evaluate(othersCBCs[])
    if (!isWorthyToEnlarge) nextRound = false
    else {
        tentativeHead = subclustering()
        if (myID == tentativeHead) {
            do {
                newMemCBC = receive()
                if (isWorthyToAdd(newMemCBC)) confirmSender(newMemCBC.sender)
            } until no one wants to join me
        } else {
            sendTo(myCBC, tentativeHead)
            waitForConfirm()
            if (confirmed) {
                informMembers(ChangeHeadTo, tentativeHead)
                toBeMember()
                nextRound = false
            }
        }
    }
} while (nextRound)
Fig. 1. Algorithm description of the UDC protocol
Fig. 2. An example (clustered layout on the 100 × 100 region)

Fig. 3. Plots of two skewed layouts: (a) 121 points, (b) 200 points
munication links within clusters. Sensor nodes near the central base tend to create small clusters, while distant nodes need to create larger clusters to reduce energy consumption. The self-adaptive characteristic of UDC is especially attractive for mobile sensor networks, since the layout of sensor nodes cannot easily be determined in advance. Our simulation results, presented in Section 5, validated our expectation that UDC outperforms current clustering protocols.
5 Simulation Results

We implemented UDC in Java and simulated sensor network behavior by treating each sensor node as a thread. We tested the UDC protocol, LEACH, and HEED in different network layouts. The first layout is a grid-like network with 121 nodes. In real applications, however, a mobile sensor network cannot stay in a perfect grid form all the time; more realistically it is skewed, as shown by Fig. 3. In the first skewed layout, Fig. 3(a), we used a normal distribution along both the x and y axes, centered at 50 with variance 25, and generated 121 points. Similarly, in the second skewed layout, two clustered groups of 150 and 50 points were generated, centered at (25, 25) and (75, 75) respectively, as shown in Fig. 3(b). We implemented a HEED-like algorithm as the subclustering() in UDC. This subclustering() uses a cluster head discovery procedure as HEED does, but uses only distance as the cost metric, while HEED also uses

Table 2. Parameter settings in simulation

Parameter              Value
Network region         from (0, 0) to (100, 100)
Central base           at (50, 125), (50, 150) or (50, 175)
Threshold distance d0  75 m
Etx                    5 nJ/bit
εfx                    10 pJ/bit
εamp                   0.0013 pJ/bit
Data packet size (k)   2048 bits
Initial energy         0.5 J
Cprob                  5%
CHprob                 30%
range                  25 m
degree-based weights. In each round, all sensor nodes are clustered according to the different clustering protocols, and then consume an amount of energy based on the simple model defined in Section 3.1. Most of our parameters are the same as in [5, 6]; several parameters are used only by LEACH and HEED. Cprob indicates the tentative cluster head probability within range in HEED, and CHprob is the global cluster head probability in LEACH. We set CHprob = 30% according to the optimal choice defined in [5]. We ignore the data processing cost because communication is much more expensive than on-board computation; furthermore, the data processing requirement depends on applications and query types, and can be folded into Etx. Our simulation program measures the remaining energy of each node and records the node death rates under the different protocols. To be fair to all clustering protocols, the network scenario is single-hop: all sensor nodes can establish direct links with the central base.

5.1 Lifespan of Sensor Networks

Figure 4 plots the rounds at which the first and last sensor nodes die in the 121-node grid network for different locations of the central base. As we expected, UDC prolongs the time until the first node ceases working. UDC allows all sensor nodes to work up to twice as long as under the other clustering protocols, as shown by Fig. 4(a). This metric is one of the most important requirements of sensor networks, since it indicates how long a network can perform at full capacity. As the distance between the sensor network and the central base grows, the lifetime of the network decreases. Compared with the other clustering methods, the decreasing trend of the round at which the first node dies is flatter in UDC than in the other protocols, but the decreasing trend of the time at which all nodes deplete their energy is steeper, as illustrated by Fig. 4(b).
Figure 5 shows the results for the skewed layouts and compares them with the grid layout with the central base at (50, 150). UDC still outperforms the others in terms of the time until the first node ceases working, as shown by Fig. 5(a). As we can see from Fig. 5, however, a predefined global probability does not work well across different network layouts. For example, LEACH works better than HEED in the skewed network of 121 nodes, but is outperformed by HEED in the grid network with the same number of nodes and the same parameter settings.

On the other hand, the lifespan of a sensor network is a dynamic process, as shown by Fig. 6, which records the number of alive nodes in each round in the grid-like network of 121 nodes with the central base at (50, 175). It reveals that the sensor nodes in UDC
Fig. 4. Lifetime of the 121-node grid layout: (a) first node dies, (b) last node dies

Fig. 5. Lifetime of skewed sensor networks: (a) first node dies, (b) last node dies
Fig. 6. The life span of the 121-node grid layout

Fig. 7. Quality of sensor networks: (a) threshold = 0, (b) threshold = 60
die “suddenly”, while sensor nodes under other clustering protocols die more gradually. This is reasonable: sensor nodes in UDC spend a lot of energy to keep all nodes alive, while under the other clustering protocols the surviving nodes are relieved of some burden by the death of other nodes. In UDC, when the first node ceases working, the remaining nodes are also close to their end. The “sudden death” property of UDC may cause a problem in a network with high sensor node density, since the redundant nodes may also be depleted by the UDC protocol's effort to keep all sensor nodes alive, as shown in Fig. 5(b).

5.2 Quality of Networks

It is hard to define the lifetime of a sensor network, since a network consists of more than one sensor node. The time at which the first node or the last node stops functioning cannot describe the behavior of a sensor network perfectly. A quality description of a sensor network may be more reasonable than a simple “lifetime”. The quality of a sensor network, however, depends on the application. For example, some applications require all sensors to work, while for others half of the sensor nodes being alive is enough. Hence a simple quality metric describing the number of alive nodes in service can be defined as

Q = Σ_{i=0}^{∞} CountAliveNodes(i), if CountAliveNodes(i) > threshold.    (5)
In (5), CountAliveNodes(i) counts the number of alive sensor nodes in round i. The measurement Q answers the question: “During its lifetime, how many alive sensor nodes in total can the network provide to satisfy a service, given a threshold setting?” Compared with the lifetime of a network, Q measures a quantity of service provided by the sensor network to satisfy applications. The quality of the 121-node grid network with different threshold settings was measured for different central base locations and is shown in Fig. 7. UDC outperforms the other protocols, as shown by Fig. 7(a) for threshold = 0, where a single alive node is satisfactory. With a bigger and more realistic threshold, the improvement is more significant, as shown by Fig. 7(b).
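Eq. (5) is straightforward to compute from a per-round trace of alive-node counts; the traces below are invented purely for illustration.

```python
def network_quality(alive_per_round, threshold=0):
    """Q of Eq. (5): total alive-node count over the rounds in which the
    network still clears the application's threshold."""
    return sum(n for n in alive_per_round if n > threshold)

# Toy traces (made up): a gradually degrading network vs. a
# "sudden death" profile like UDC's.
gradual = [100, 90, 70, 40, 10, 0]
sudden = [100, 100, 100, 95, 0, 0]
print(network_quality(gradual, threshold=0))    # every alive node counts
print(network_quality(sudden, threshold=60))    # strict service threshold
```

Note that at threshold = 60 the “sudden death” trace scores 395 against 260 for the gradual one, which is exactly why a profile that keeps all nodes alive wins for larger thresholds.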
6 Conclusion and Future Work

This paper presents a novel protocol, UDC, designed for dynamic sensor networks to support aggregation queries. Compared with other clustering protocols, UDC benefits
from adapting clusters to local network conditions, and significantly prolongs the lifetime of the network with high quality. Although a sensor network under the UDC protocol tends to undergo a “sudden death” after some point in time, this property also suggests a direction for the next version of UDC: reducing the death rate of nodes to further enhance the quality of the network.
References
1. Akyildiz, I., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A survey on sensor networks. IEEE Commun. Mag. 40 (2002) 102–114
2. Hellerstein, J.M., Hong, W., Madden, S., Stanek, K.: Beyond average: Toward sophisticated sensing with queries. In: IPSN. (2003) 63–79
3. Demers, A., Gehrke, J., Rajaraman, R., Trigoni, N., Yao, Y.: The Cougar project: a work-in-progress report. SIGMOD Rec. 32 (2003) 53–59
4. Madden, S., Franklin, M.J., Hellerstein, J.M., Hong, W.: TAG: a tiny aggregation service for ad-hoc sensor networks. SIGOPS Oper. Syst. Rev. 36 (2002) 131–146
5. Heinzelman, W.R., Chandrakasan, A.P., Balakrishnan, H.: An application-specific protocol architecture for wireless microsensor networks. IEEE Transactions on Wireless Communications (2002) 660–670
6. Younis, O., Fahmy, S.: Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach. IEEE INFOCOM 2004 (2004)
7. Culler, D., Estrin, D., Srivastava, M.: Overview of sensor networks. Computer (2004) 41–48
8. Tilak, S., Abu-Ghazaleh, N.B., Heinzelman, W.: A taxonomy of wireless micro-sensor network models. SIGMOBILE Mob. Comput. Commun. Rev. 6 (2002) 28–36
9. Karaki, A.J., Kamal, A.: Chapter 6. In: Handbook of Sensor Networks: Compact Wireless and Wired Sensing Systems. 1st edn. CRC Press (2005)
10. Conner, W.S., Chhabra, J., Yarvis, M., Krishnamurthy, L.: Experimental evaluation of synchronization and topology control for in-building sensor network applications. In: WSNA '03: Proceedings of the 2nd ACM International Conference on Wireless Sensor Networks and Applications, New York, NY, USA, ACM Press (2003) 38–49
11. Intanagonwiwat, C., Govindan, R., Estrin, D.: Directed diffusion for wireless sensor networks. IEEE/ACM Transactions on Networking 11 (2003)
12. Heinzelman, W.R., Kulik, J., Balakrishnan, H.: Adaptive protocols for information dissemination in wireless sensor networks. In: Mobile Computing and Networking. (1999) 174–185
13. Han, J., Kamber, M.: Data Mining: Concepts and Techniques. 1st edn. Morgan Kaufmann (2001)
A Backoff-Based Energy Efficient Clustering Algorithm for Wireless Sensor Networks

Yongtao Cao¹, Chen He¹, and Jun Wang²

¹ Department of Electronics Engineering, Shanghai Jiao Tong University, China
[email protected]
² Department of Communication Engineering, Nanjing University of Posts and Telecommunications, China
Abstract. Wireless sensor networks have emerged recently as an effective way of monitoring remote or inhospitable physical environments. One of the major challenges in devising such networks is how to organize a large number of sensor nodes without the coordination of any centralized access point. Clustering not only conserves limited system resources but also serves as an effective self-organization tool. In this paper, we present a distributed clustering algorithm based on an adaptive backoff strategy. By adaptively adjusting the wakeup rate of the exponential distribution, a node with higher residual energy is more likely to be elected clusterhead. We also take advantage of a contention-based channel access method to ensure that clusterheads are well scattered. Simulation experiments illustrate that our algorithm significantly prolongs network life compared with the conventional approach.
1 Introduction

Wireless sensor networks have recently attracted intensive attention from both academic and industrial fields owing to their vast potential in military and commercial applications, emergency relief, etc. [1]. Because of the absence of centralized control, deployed sensors should be capable of self-organizing to construct the whole network topology. Hierarchical (clustering) techniques are an efficient tool for organizing large-scale networks such as the Internet or cellular networks. Clustering and data aggregation can also reduce bandwidth and energy consumption, which is vital for resource-constrained sensor networks.

Several clustering algorithms have been proposed for wireless sensor networks in recent years. Heinzelman et al. [2] proposed a distributed algorithm, LEACH, to form one-hop clusters, in which sensors elect themselves as clusterheads with some probability and advertise their decisions to their one-hop neighbors. Assuming that sensors are uniformly distributed in the working region, the authors also present an analytical model to compute the optimal number of clusterheads. In [3], Bandyopadhyay et al. assume that sensors are distributed according to a homogeneous spatial Poisson

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 907–916, 2005.
© Springer-Verlag Berlin Heidelberg 2005
process. By taking advantage of results in stochastic geometry, they obtain the optimal probability with which a sensor node chooses itself as a clusterhead, and the maximum number of hops allowed from a sensor to its clusterhead. Younis et al. [4] propose a clustering protocol, HEED, which periodically selects clusterheads according to a hybrid of residual energy and a secondary parameter. Among the protocols mentioned above, LEACH, due to its simplicity, effectiveness, and low time complexity, has been well studied and has become a baseline for evaluating clustering performance in sensor networks. In this paper, we first point out that LEACH may result in the fast death of some sensor nodes because of its randomness. Then we propose a distributed load-balancing clustering algorithm based on an adaptive backoff strategy. Simulation experiments show that this algorithm prolongs the network life compared with LEACH.
2 Analysis of LEACH

In this section, we first define “network life”. The network life of a wireless sensor network can be defined as the time elapsed until the first node dies, the last node dies, or a fraction of nodes die. In many applications, each working node is critical for the whole system, so the network lifetime is defined as the shortest lifetime of any node in the network, i.e., Ts = min {Ti, i ∈ V}, where V is the set of nodes in the network, Ti is the lifetime of node i, and Ts represents the network lifetime. Since sensor networks often work in hazardous or hostile environments, it is difficult or impossible to recharge the batteries of sensor nodes. How to prolong the network life is therefore the first consideration in wireless sensor networks.

In clustered sensor networks, clusterheads are responsible for data fusion within each cluster and directly transmit the aggregated data to the remote base station (BS). With clustering and data compression, the network payload is greatly reduced, i.e., battery energy can be considerably saved. Among the clustering protocols proposed recently, LEACH is a typical representative. LEACH is a dynamic distributed clustering protocol, since it depends only on local information. Its clustering process terminates in a constant number of iterations (O(1) time complexity) regardless of the network diameter or the number of nodes. Moreover, by rotating clusterheads, LEACH attempts to evenly distribute the energy load among all nodes. However, for the following reasons, LEACH may result in the faster death of some sensor nodes, i.e., a shorter network life. Our simulation experiments show that LEACH is not as load-balancing as expected. Two reasons explain this result:

A. Some nodes become “forced clusterheads” and have to communicate directly with the remote base station

Although [2] and [3] compute the optimal number of clusterheads for LEACH, the number of clusterheads produced by LEACH does not always equal
the expected optimal value, owing to its randomness. For example, suppose k clusterheads are expected among N nodes. Then the self-electing probability is p = k/N. Let X be the number of clusterheads elected; X is a discrete random variable following a binomial distribution with parameters N and p, i.e.,

Pr(X = m) = C(N, m) · p^m · (1 − p)^(N−m).

When too few clusterheads are elected, it is very likely that there is no self-elected clusterhead in a certain node's proximity. That node then has to become a “forced clusterhead” and communicate directly with the BS, which is often located far away from the node's working region. Even if LEACH produces the target number of clusterheads, these clusterheads may be scattered unevenly, i.e., most clusterheads clump in some regions while few are located in other regions [5]. The uneven distribution of clusterheads also leads to “forced clusterheads”. Although LEACH attempts to rotate the CH role among sensor nodes, so that each node communicates directly with the BS only once in a working cycle, the randomness unfortunately undermines this attempt. Simulation results show that the probability that a node contacts the BS directly only once in a 20-round cycle is small, while the case where a node communicates directly with the BS more than eight times accounts for about 50%. It is well known that radio communication with low-lying antennae and near-ground channels has an exponential path loss, i.e., the minimum output power required to transmit a signal over a distance d is proportional to d^n, 2 ≤ n ≤ 4. Since transmitting data directly to the remote BS is very energy-consuming, a clustering algorithm should avoid involving too many nodes in such long-distance communications.

B. LEACH determines a node's “role” without taking its residual energy into consideration

LEACH determines clusterheads according to a predefined probability¹.
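The binomial argument above is easy to check numerically; the sketch below (our illustration, with n = 100 nodes and p = 0.05, i.e., 5 expected heads, in the spirit of the simulation setup) computes how often LEACH-style self-election leaves the network badly short of clusterheads.

```python
from math import comb

def prob_heads(n, p, m):
    """Pr(X = m): exactly m of n nodes self-elect with probability p."""
    return comb(n, m) * p ** m * (1 - p) ** (n - m)

n, p = 100, 0.05        # 100 nodes, k = 5 heads expected
p_none = prob_heads(n, p, 0)
p_under3 = sum(prob_heads(n, p, m) for m in range(3))
print(f"Pr(no head at all) = {p_none:.4f}")
print(f"Pr(fewer than 3 heads) = {p_under3:.4f}")
```

With these numbers, a round has roughly a 12% chance of electing fewer than 3 heads, in which case many nodes end up as forced clusterheads.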
Although a sensor node that has been elected clusterhead once has no chance to become a clusterhead again within a working cycle, it may still consume much more energy than other nodes, for example if it has to serve as a “forced CH” in each round, or if it has to manage too many nodes because of the uneven distribution of sensor nodes. However, in the next working cycle these nodes with low residual energy have the same opportunity to be elected clusterhead, which will deplete their battery energy very soon.
3 A Clustering Algorithm with an Adaptive Backoff Strategy

In this section, we present a distributed algorithm based on an adaptive backoff strategy. The primary goal of our approach is to prolong the network lifetime, i.e.,

¹ Although LEACH provides an alternative that considers nodes' residual energy when determining the self-electing probability, it is unrealistic in large-scale sensor networks, since each node would need to obtain the energy information of every other node throughout the whole network. We therefore ignore that method in this paper, as it has no practical meaning.
the time elapsed until the first node dies. The scheme we propose not only maintains the desirable features of LEACH but also helps to evenly distribute the energy load among sensor nodes.

We model a sensor network by a unit disk graph G = (V, E), the simplest model of ad hoc or sensor networks. Each node v ∈ V represents a sensor node in the network. There is an edge (u, v) ∈ E if and only if u and v are located within each other's transmission range; in this case we say that u and v are neighbors. The set of neighbors of a node v is denoted by N(v). Nodes can send and receive messages from their neighbors. We also assume that the radio channel is symmetric.

3.1 Clustering Algorithm
Our distributed clustering algorithm is shown in Fig. 1, and the parameters used in the algorithm are listed in Table 1. The operation of our algorithm is also divided into rounds, similar to LEACH. Each round begins with a cluster-forming phase. In this phase, nodes are initially in the waiting mode. Each node i waits for an initiator timer drawn from an exponential distribution, i.e., f(ti) = λi e^(−λi ti), where λi = λ0 · Ei_residual / Emax; Ei_residual is the estimated current residual energy of node i, and Emax is a reference maximum energy. When the timer fires, the node first sends a bid to compete for the channel. If node A wins the channel, it elects itself as a clusterhead and broadcasts an ADV CH message to all its neighbors. The neighbors that receive the message stop their timers and decide to join the cluster that A initializes. If a node simultaneously receives more than one ADV CH message, i.e., it falls within the range of more than one self-elected clusterhead, it uses the node ID or its distance to those clusterheads to break ties (the distance can be determined from signal attenuation). When a node B decides to join a certain cluster, it broadcasts a JOIN message, JOIN(myID, myHEAD), and terminates the algorithm. On receiving a JOIN(u, t) message, node v checks whether it has previously sent an ADV CH message. If this is the case, it checks whether node u wants to join v's cluster (v = t). If node v has not sent an ADV CH message, it records u's decision. If all of v's neighbors have decided their roles, i.e., have joined a certain cluster before v's timer fires, v has to become a forced CH and stop its timer. When TCF, the maximal cluster-forming time, elapses, each node that has not yet decided its role places a bid for the channel. The nodes that successfully broadcast

Table 1. Parameters used in the algorithm

Parameter   Description
Γ           the set of IDs of my one-hop neighbors
Eresidual   the estimated current residual energy of the node
Emax        the fully charged battery energy
λ0          the initial wakeup rate
TCF         the maximal cluster-forming time
Fig. 1. Algorithm pseudo-code
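The energy-biased initiator timer at the heart of this phase can be sketched as follows. This is our illustration, not the authors' code; λ0 = 56 wakeups/sec follows the simulation settings of Section 4, and Emax = 2 J (the initial battery energy there) is an assumption.

```python
import random

LAMBDA_0 = 56.0    # initial wakeup rate (wakeups/sec), Section 4 setting
E_MAX = 2.0        # fully charged battery energy in J, assumed

def wakeup_timer(e_residual, rng):
    """Draw node i's initiator timer from f(t) = lam * exp(-lam * t)
    with lam = LAMBDA_0 * e_residual / E_MAX: the more residual energy,
    the earlier the timer tends to fire."""
    lam = LAMBDA_0 * e_residual / E_MAX
    return rng.expovariate(lam)

rng = random.Random(1)
full = sum(wakeup_timer(2.0, rng) for _ in range(10_000)) / 10_000
half = sum(wakeup_timer(1.0, rng) for _ in range(10_000)) / 10_000
print(f"mean timer at full charge: {full:.4f} s, at half charge: {half:.4f} s")
```

The sampled means track 1/λ, so a fully charged node fires on average twice as early as a half-charged one, which is how the protocol biases clusterhead election toward high-energy nodes.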
the ADV CH messages become CHs, while the others join a certain cluster. Then another round of cluster formation begins, until all nodes are clustered.

3.2 Discussion
For our algorithm, we obtain the following properties:

Lemma 1. Our algorithm is fully distributed.

Proof. A node makes its decision depending only on local information: it attempts to become a clusterhead when its timer fires, or joins a cluster when it successfully receives an ADV CH message.

Lemma 2. The cluster-forming phase lasts for at most TCF.

Proof. When TCF elapses, even if a node's timer has not yet fired, the node has to stop the timer and bid to become a clusterhead. Thus TCF is an upper bound on the execution time of our algorithm.

Lemma 3. Our algorithm has O(1) message complexity per node, i.e., the total message complexity is O(n).

Proof. During the execution of our algorithm, each node in the network sends at most one ADV CH message or JOIN message.

Lemma 4. There are no neighboring clusterheads, i.e., the clusterheads are well scattered.

Proof. Our approach, in which a sensor node contends to become a clusterhead, is based on a control-channel broadcast access method similar to [6], where only one node wins in its neighborhood; in consequence, the elected heads are well scattered.

Lemma 5. When the wakeup rate λ ≥ max{e, −ln(1 − ε)/TCF}, our clusterhead selection algorithm performs dynamic load balancing for the wireless sensor network.

Proof. From the exponential distribution f(t) = λe^(−λt), fixing f(t) = μ, μ ∈ [0, 1], we get t(λ) = −(1/λ) ln(μ/λ). The first-order derivative of this function is dt/dλ = (1/λ²)(1 − ln(λ/μ)); it is obvious that when λ ≥ μe, dt/dλ ≤ 0, i.e., the function is monotonically decreasing. So when we choose

λ ≥ max_{μ∈[0,1]} (μe) = e,    (1)
the algorithm will ensure that a node with more residual energy has a greater opportunity to become a clusterhead, since its timer is more likely to fire before those of its neighbors with lower battery energy. We also want the network to elect a sufficient number of clusterheads in one round before the time T_CF elapses; for instance, 50% of the nodes may be expected to initialize cluster formation within one minute. Thus the selection of λ should
satisfy the following inequality: P{t > T_CF} ≤ 1 − ε, where ε is the expected percentage. Therefore, in this case

∫_{T_CF}^{+∞} f(t) dt ≤ 1 − ε  ⇒  λ ≥ −ln(1 − ε)/T_CF.  (2)

Based on (2), we can calculate that a λ of 0.012 ensures that 50% of the nodes initialize the cluster-forming process within one minute (ε = 0.5, T_CF = 60 sec). From (1) and (2), we can conclude:

λ_min = max{e, −ln(1 − ε)/T_CF}.  (3)
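As a numerical illustration (our own sketch in Python, not the authors' code), the following computes the bound of Eq. (2), the λ_min of Eq. (3), and checks the monotonicity used in the proof of Lemma 5: for λ ≥ μe the firing time t(λ) = −(1/λ) ln(μ/λ) is non-increasing, so nodes with higher wakeup rates (more residual energy) tend to bid earlier.

```python
import math

def lambda_min(eps: float, t_cf: float) -> float:
    """Minimum wakeup rate of Eq. (3): max(e, -ln(1 - eps) / T_CF)."""
    return max(math.e, -math.log(1.0 - eps) / t_cf)

def firing_time(lam: float, mu: float) -> float:
    """t at which the exponential density lam * exp(-lam * t) equals mu."""
    return -math.log(mu / lam) / lam

# Eq. (2) alone: eps = 0.5, T_CF = 60 s gives lambda ≈ 0.0116 ≈ 0.012 ...
print(round(-math.log(1.0 - 0.5) / 60.0, 4))   # 0.0116
# ... so the bound of Eq. (1) dominates and lambda_min = e ≈ 2.718.
print(lambda_min(0.5, 60.0))                   # ≈ 2.718

# Lemma 5: for any density level mu, t(lam) is non-increasing once lam >= mu*e.
mu = 0.8
ts = [firing_time(mu * math.e * (1 + 0.5 * i), mu) for i in range(10)]
assert all(a >= b for a, b in zip(ts, ts[1:]))
```

The monotonicity check mirrors the derivative argument: every sampled λ lies at or above μe, so the firing times come out sorted in decreasing order.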
4
Simulation Results
We conduct simulation experiments to compare the performance of the proposed algorithm with LEACH. Both algorithms are implemented in Microsoft Visual C++. The entire simulation is conducted in a 100m ∗ 100m region between (x = 0, y = 0) and (x = 100, y = 100). 100 nodes with 2J initial energy are randomly spread in this region. Two nodes are said to have a wireless link between them if they are within communication range of each other. The performance is simulated with the communication range of the nodes set to 25 meters. Initially, each node is assigned a unique node ID and x, y coordinates within the region. The base station is located at (50, 175). We assume the simple radio model proposed by Heinzelman et al. [2]. The transmission range R is set to 25m. For LEACH, we set the optimal value k = 5, and for our algorithm we set the initial wakeup rate λ0 = 56 wakeups/sec. Moreover, the residual battery energy is discretized into 20 levels, so the minimum wakeup rate λ_min equals 2.8, which satisfies inequality (3) (we assume ε = 0.5, T_CF = 60 sec). Simulation experiments proceed in rounds. In each round, each ordinary node, if it has enough residual energy to function properly, collects sensor data and sends a packet to its CH or the BS. We call such a packet an "effective data packet". We first measure how many times a node communicates directly with the BS during a 20-round cycle under LEACH and under our algorithm. Fig. 2 shows that, compared with LEACH, our algorithm greatly reduces the number of times a node contacts the BS directly, since the contention-based head advertising ensures there are no neighboring clusterheads, i.e. clusterheads are well distributed. We also compare the network lifetime of our algorithm with that of LEACH, where network lifetime is the time until the first node dies. Fig. 3 illustrates that our algorithm greatly improves the network lifetime over LEACH.
This is because LEACH's randomness may lead to some "heavy-burdened" nodes whose battery energy is very likely to be depleted much faster. In contrast, head selection in our algorithm is primarily based on the nodes' residual energy: nodes with more residual energy have a higher probability of becoming CHs, which provides good load balance among sensor nodes. In addition, the contention-based head advertising ensures that clusterheads are well distributed, which further lessens the burden on some nodes and prolongs the network lifetime.
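For reference, the "simple radio model" of Heinzelman et al. [2] assumed in the simulation is the usual first-order model; the sketch below is an illustration with commonly quoted constants (E_elec = 50 nJ/bit, ε_amp = 100 pJ/bit/m²) that are assumptions here, not values stated in this paper.

```python
# First-order radio model sketch (parameter values assumed, not from this paper).
E_ELEC = 50e-9      # J/bit, electronics cost for transmitting or receiving
EPS_AMP = 100e-12   # J/bit/m^2, transmit-amplifier cost (d^2 path loss)

def tx_energy(bits: int, d: float) -> float:
    """Energy to transmit `bits` over distance d."""
    return E_ELEC * bits + EPS_AMP * bits * d * d

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return E_ELEC * bits

# A 2000-bit packet over the 25 m communication range used in the simulation:
print(tx_energy(2000, 25.0))   # ≈ 2.25e-4 J
```

Under this model, direct transmissions to a distant BS dominate a node's energy budget, which is why reducing direct-to-BS contacts matters.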
Fig. 2. Distribution of the number of times one node communicates directly with the BS in a 20-round cycle using LEACH (100 nodes, k_opt = 5) vs. our algorithm (100 nodes, λ_min = 2.8)
Fig. 3. Network lifetime using LEACH (100 nodes, k_opt = 5) vs. our algorithm (100 nodes, λ_min = 2.8)
We then study the relationship between network lifetime and effective sensor data. The simulation results in Fig. 4 show that our algorithm produces 50% more effective sensor data than LEACH over time, since under the latter many nodes have to spend a large amount of energy communicating with the BS. Our algorithm effectively avoids this problem, so the limited energy is saved to send more effective sensor data. Finally, we compare the energy efficiency of our algorithm with that of LEACH. Fig. 5 shows the total number of effective sensor data packets sent by network nodes for a
Fig. 4. Number of surviving nodes per given amount of effective data packets sent using LEACH (100 nodes, k_opt = 5) vs. our algorithm (100 nodes, λ_min = 2.8)
Fig. 5. Amount of effective data packets sent per given amount of energy using LEACH (100 nodes, k_opt = 5) vs. our algorithm (100 nodes, λ_min = 2.8)
given amount of energy. The result illustrates that our algorithm sends much more effective sensor data for a given amount of energy than LEACH, i.e. our algorithm is more energy-efficient.
5
Conclusion
Clustering is one of the fundamental problems in wireless sensor networks. LEACH provides many advantageous features to meet the requirements of the
severe resource constraints in sensor networks. However, due to its randomness, LEACH is not as load-balanced as expected. Both theoretical analysis and simulation results show that too many nodes have to communicate directly with the base station, which consumes a large amount of the nodes' limited battery energy. To solve this problem, we propose a new distributed clustering algorithm which not only uses an adaptive backoff strategy to realize load balance among sensor nodes, but also introduces a contention-based message broadcast method to ensure there are no neighboring clusterheads. Simulation results also indicate that the proposed algorithm greatly reduces the number of "forced clusterheads" and efficiently prolongs the network lifetime.
References
1. G. J. Pottie and W. J. Kaiser: Wireless Integrated Network Sensors. Commun. ACM, Vol. 43, No. 5, pp. 51-58, May 2000.
2. W. B. Heinzelman, A. P. Chandrakasan, H. Balakrishnan: An Application-Specific Protocol Architecture for Wireless Microsensor Networks. IEEE Trans. on Wireless Communications, Vol. 1, No. 4, pp. 660-670, Oct. 2002.
3. S. Bandyopadhyay and E. J. Coyle: An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks. Proc. IEEE INFOCOM 2003.
4. O. Younis and S. Fahmy: Distributed Clustering in Ad-hoc Sensor Networks: A Hybrid, Energy-Efficient Approach. Proc. IEEE INFOCOM 2004.
5. L. Zhao, X. Hong and Q. Liang: Energy-Efficient Self-Organization for Wireless Sensor Networks: A Fully Distributed Approach. Proc. IEEE GLOBECOM 2004.
6. T. C. Hou and T. J. Tsai: An Access-Based Clustering Protocol for Multihop Wireless Ad Hoc Networks. IEEE JSAC, Vol. 19, No. 7, pp. 1201-1210, 2001.
Energy-Saving Cluster Formation Algorithm in Wireless Sensor Networks

Hyang-tack Lee¹, Dae-hong Son¹, Byeong-hee Roh¹, S. W. Yoo¹, and Y. C. Oh²

¹ Graduate School of Information and Communication, Ajou University, San 5 Wonchon-dong, Youngtong-Gu, Suwon, 443-749, Korea
{hlee, ajouzam, bhroh, swyoo}@ajou.ac.kr
² Samsung Electronics Corporation, Suwon, Korea
[email protected]
Abstract. In this paper, we propose an efficient energy-saving cluster formation algorithm (ECFA) with sleep mode. ECFA achieves energy-efficient routing through the following two properties. First, ECFA reconfigures clusters with fair cluster formations, in which all nodes in a sensor network can consume their energies evenly. To achieve fair cluster regions, ECFA does not require any information on the location and energy of nodes. Second, by letting nodes very close to a just-elected cluster head enter sleep mode, ECFA can reduce unnecessary energy consumption. The performance of ECFA is compared with LEACH and LEACH-C for both static and mobile nodes.
1
Introduction
Recently, sensor networks have become an important tool for monitoring a region of interest. Designing sensor networks such that sensors utilize their energies effectively is one of the most important issues for the efficient operation of these networks. Much work on energy-aware design for sensor networks has been carried out [1][2][3]. Among these, LEACH (Low-Energy Adaptive Clustering Hierarchy) has been proposed as a distributed cluster-based routing scheme [4]. In LEACH, cluster heads, which relay data from the sensor nodes in a certain region called a cluster to the BS (base station), are periodically elected to prevent a specific sensor node from consuming its residual energy rapidly. However, since the cluster heads are elected in a distributed and probabilistic way, there exist possibilities of poor cluster formations, in which cluster heads are located very close to each other. LEACH-C (LEACH-Centralized) [5] has been proposed to solve the poor cluster formation problem of LEACH. In LEACH-C, each node sends information on its location and residual energy to the BS. Using this information, the BS constructs clusters as optimally as it can and broadcasts the cluster information to all nodes in the network. LEACH-C is more effective than LEACH from the cluster-formation viewpoint, but it consumes much more energy than LEACH because all nodes have to communicate with the BS at each round, and it requires additional

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 917-926, 2005.
© Springer-Verlag Berlin Heidelberg 2005
overhead for each sensor node to know its location information through an additional communication technique such as GPS. In this paper, we propose an efficient energy-saving cluster formation algorithm (ECFA) with sleep mode. ECFA achieves energy-efficient routing through the following two properties. First, ECFA improves energy efficiency by preventing the possibility of poor cluster formations as in LEACH. ECFA produces clusters with fair cluster regions such that all the sensors in a sensor network can utilize their energies equally. To achieve fair cluster regions, ECFA does not require any information on the location and energy of each node. Second, by letting nodes very close to a just-elected cluster head enter sleep mode, ECFA reduces the unnecessary active period, which is one of the main causes of energy dissipation [6]. The performance of ECFA is compared with LEACH and LEACH-C for both static and mobile nodes. The rest of the paper is organized as follows. In Section 2, some background on the LEACH algorithm and its generic problems is given. Our proposed scheme, ECFA, is described in Section 3. In Section 4, experimental results are given. Finally, we draw conclusions in Section 5.
2
Problem Definition
In LEACH, the timeline is divided into rounds, and each round consists of a Set-up Phase and a Steady-state Phase [4]. Clusters are reconfigured in the Set-up Phase. Then, in the Steady-state Phase, actual data transmission is done from the nodes to the cluster head, and then to the BS. The Set-up Phase itself consists of three sub-phases: the Advertisement, Cluster Set-up and Schedule Creation Phases. In the Advertisement Phase, each node decides whether it is elected as a cluster head or not. Then, in the Cluster Set-up Phase, all nodes except the cluster heads choose their cluster head, and cluster reconfiguration is finished. Finally, the TDMA schedule for data transmission in the network is arranged in the Schedule Creation Phase. For electing cluster heads in the Advertisement Phase, the n-th sensor node chooses a random number between 0 and 1 and compares it with a threshold value T(n) given by

T(n) = P / (1 − P · (r mod 1/P))  if n ∈ G,  and  T(n) = 0  otherwise,  (1)

where P is the desired percentage of cluster heads, r is the current round, and G is the set of nodes that have not been cluster heads in the last 1/P rounds. If the random number chosen by the n-th sensor node is less than T(n), the node is elected as a cluster head for the corresponding round. According to Eq. (1), LEACH ensures that every node becomes a cluster head exactly once during 1/P consecutive rounds. There are two basic problems in LEACH with the above head election procedure.
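The threshold of Eq. (1) is straightforward to evaluate; the following is an illustrative sketch (hypothetical code, not part of LEACH's implementation) showing how T(n) grows over a 20-round cycle for P = 0.05:

```python
def leach_threshold(P: float, r: int, in_G: bool) -> float:
    """LEACH cluster-head threshold, Eq. (1):
    T(n) = P / (1 - P*(r mod 1/P)) for nodes still in G, 0 otherwise."""
    if not in_G:
        return 0.0
    return P / (1.0 - P * (r % round(1.0 / P)))

# With P = 0.05 the threshold rises over the 1/P = 20-round cycle,
# so every node in G is eventually elected exactly once.
print(leach_threshold(0.05, 0, True))    # 0.05
print(leach_threshold(0.05, 19, True))   # ≈ 1.0
```

At r = 19 the threshold reaches 1, so any node that has not yet served is forced to become a head, which is exactly the "exactly once per 1/P rounds" guarantee stated above.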
First, since cluster heads are elected in a probabilistic way using only Eq. (1), there is no way to control the formation of clusters: both good and poor cluster formations are possible. In good cluster formations, the elected sensor nodes are evenly distributed over the region of the sensor network, so all sensor nodes consume their energy evenly on average. On the other hand, in poor cluster formations, where adjacent nodes are elected as cluster heads, sensor nodes with longer distances to their cluster head consume much more energy than those with shorter distances. In addition, collisions can occur frequently in the network due to the short distance between cluster heads. Though LEACH-C [5] has been proposed to overcome the poor cluster formation problem, it requires additional overhead for each node to know its location and to deliver its location and energy information to the BS. This overhead results in much higher energy consumption than LEACH. Second, in LEACH, all nodes have to stay awake in the Advertisement Phase. This causes unnecessary energy consumption. For example, nodes that are close to the first elected cluster head and will become members of the cluster governed by that head do not need to participate in the following head election procedures or listen to signals from other cluster heads.
3
Energy-Saving Cluster Formation Algorithm with Sleep Mode
In this paper, we propose an energy-saving cluster formation algorithm (ECFA) with sleep mode to solve the two problems of LEACH described in the previous Section. Fig. 1 shows the basic timeline of the proposed ECFA operation, in which timelines are divided into rounds as in LEACH. The Advertisement Phase of ECFA consists of K stages, where K denotes the predefined number of cluster heads in the network. Note that there is only one stage in the Advertisement Phase of LEACH. This means that there are K chances for each node to become a cluster head in each round of ECFA, while only one in LEACH.
Fig. 1. Timeline of ECFA operation
In ECFA, nodes within a certain range from each cluster head elected at each stage do not take part in the cluster head election at the next stages of the corresponding round. The detailed procedure for the cluster head election will be
explained later. Let C(r, s) be the number of nodes that can take part in the cluster head election process at stage s in round r, where s = 0, 1, ..., K−1 and r = 0, 1, 2, ... . Ideally, the average number of nodes belonging to the range covered by each cluster head is C(r, 0)/K, where C(r, 0) is the number of nodes that can be candidates for cluster heads at the beginning of round r. Then, we have

C(r, s) = C(r, 0) · (1 − s/K),  (2)

where C(r, 0) = N − K · (r mod N/K) and N is the total number of nodes. Let P_n(r, s) and E_CH(r, s) be the probability that the n-th node becomes a cluster head and the expected number of nodes elected as cluster heads, respectively. To obtain the ideal situation where one cluster head is elected at each stage, P_n(r, s) and E_CH(r, s) should satisfy the following condition:

E_CH(r, s) = Σ_{n=1}^{N} P_n(r, s) = 1.  (3)
Let G be the set of nodes that have not been elected as cluster heads in the last N/K rounds, and let P_{n∈G}(r, s) be the probability that an arbitrary node n (n ∈ G) becomes a cluster head at time (r, s). The nodes that have not been cluster heads have identical P_n(·) at each stage. Then, we can rewrite Eq. (3) as

E_CH(r, s) = C(r, s) · P_{n∈G}(r, s) = 1.  (4)

From Eq. (4), we have

P_{n∈G}(r, s) = 1 / C(r, s).  (5)
Note that Eq. (5) is the probability that exactly one node is elected as the cluster head at a certain stage. Substituting Eq. (2) into Eq. (5), we obtain the threshold for the n-th node to become a cluster head at stage s in round r:

T(n, s) = 1 / [{N − K · (r mod N/K)} · (1 − s/K)]  if n ∈ G,  and  T(n, s) = 0  otherwise.  (6)

With the threshold of Eq. (6), the cluster head election procedure of the proposed ECFA for the n-th sensor node at stage s in round r is shown in Fig. 2. At the beginning of the stage, the n-th node, if eligible to become a cluster head, selects a value between 0 and 1 at random and compares it with the threshold T(n, s) of Eq. (6). If the selected random number is less than T(n, s), the node is elected as a cluster head for the corresponding stage. It then broadcasts an advertisement message to the neighbor nodes within a certain range. The neighbor nodes that receive the advertisement message estimate the distance d from themselves to the cluster head from the received signal power. The operations of the neighbor nodes are differentiated according to the distance d as follows.
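As an illustration (hypothetical code, not the authors' implementation), Eq. (6) reduces to 1/C(r, s), so with the simulation parameters N = 100 and K = 5 used later, the expected number of heads elected per stage is C(r, s) · T(n, s) = 1:

```python
def ecfa_threshold(in_G: bool, N: int, K: int, r: int, s: int) -> float:
    """ECFA threshold of Eq. (6): 1 / C(r, s) for nodes in G, else 0,
    where C(r, s) = (N - K*(r mod N/K)) * (1 - s/K)."""
    if not in_G:
        return 0.0
    candidates = (N - K * (r % (N // K))) * (1.0 - s / K)
    return 1.0 / candidates

# N = 100 nodes, K = 5 heads: at round 0, stage 0 the expected number of
# elected heads is C(0,0) * T = 100 * (1/100) = 1, i.e. one head per stage.
print(ecfa_threshold(True, 100, 5, 0, 0))   # 0.01
print(ecfa_threshold(True, 100, 5, 0, 4))   # ≈ 0.05
```

The threshold grows stage by stage because the candidate pool C(r, s) shrinks, keeping the per-stage expectation at one head.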
procedure ClusterHeadElection_ECFA(n, r, s)
begin
 1: if (s = 0) then
 2:   initialize cluster head list
 3:   state = ACTIVE
 4: end-if
 5: if (state = ACTIVE) then
 6:   choose a random number v in [0, 1]
 7:   if (v < T(n, s)) then
 8:     broadcast a cluster head advertisement message
 9:     state = SLEEP
10:   else
11:     goto S1
12:   end-if
13: else if (state = LISTEN) then
14: S1: wait and listen for a cluster head advertisement message
15:   if (a cluster head advertisement message is received) then
16:     estimate the distance d from the cluster head
17:     add the cluster head to the cluster head list
18:     if (d ≤ Ro) then
19:       state = SLEEP
20:     else if (Ro < d ≤ R) then
21:       state = LISTEN
22:     end-if
23:   end-if
24: end-if
end procedure

Fig. 2. Cluster head election procedure of ECFA
i) 0 ≤ d ≤ Ro. Neighbor nodes within this area sleep until the beginning of the Cluster Set-up Phase to reduce unnecessary active periods. Since these nodes are asleep, they neither take part in the next cluster head election processes nor listen to signals from other cluster heads.
ii) Ro < d ≤ R. Neighbor nodes within this range do not take part in the cluster head election at the next stages. Though they do not participate in the head election process, they still listen to signals from other cluster heads. After all cluster heads are selected, these nodes choose the cluster head with the strongest signal power as their cluster head during the Cluster Set-up Phase.
iii) d > R. The nodes take part in the head election process at the next stage.

Determining Ro and R. Consider a sensor network of size M × M. Assume an ideal situation in which the sensor nodes are uniformly distributed over the sensor network and the K elected cluster heads are evenly arranged. Then, we have the following approximation:

M² = KπRo².  (7)
where Ro is the radius of the advertisement region exclusively covered by each cluster head in the above ideal situation. However, when some of the cluster heads are located near the edges of the network, this becomes unsuitable because the advertisement regions covered by those clusters are smaller than M²/K, and the ideal condition of Eq. (7) is not satisfied. To solve this problem, we need to consider a value R somewhat larger than the Ro of Eq. (7). For the upper limit of R, consider the extreme case where K = 1 and the cluster head is elected at a border of the sensor network. In this case, if R = 2Ro, the cluster head can cover the whole sensor network. On the other hand, as K increases to infinity, Ro becomes small enough to satisfy the ideal situation of Eq. (7). Likewise, the radius R that we are trying to find is highly related to the area of the sensor network as well as the number of clusters. Let Ra be the average radius of the sensor network. For a sensor network of size M × M, it can be approximated by M² = π · Ra². If we define the radius R ≡ (1 + Ro/Ra) · Ro, then, since Ro/Ra = √(1/K), we have

R = (1 + √(1/K)) · Ro.  (8)
Note that the factor in front of Ro on the right side of Eq. (8) ranges between 1 and 2. This is consistent with the intuitions used in the derivation of R: R = 2Ro when K = 1, and R → Ro when K → ∞.
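Eqs. (7) and (8) can be evaluated directly; a minimal sketch (our own, assuming the 100 m × 100 m field and K = 5 clusters used later in Section 4):

```python
import math

def sleep_and_listen_radii(M, K):
    """Ro from Eq. (7), M^2 = K*pi*Ro^2, and R from Eq. (8),
    R = (1 + sqrt(1/K)) * Ro."""
    Ro = M / math.sqrt(K * math.pi)
    R = (1.0 + math.sqrt(1.0 / K)) * Ro
    return Ro, R

# For the 100 m x 100 m field with K = 5 clusters, this gives
# Ro ≈ 25.2 and R ≈ 36.5, matching the rounded values 25 and 37.
Ro, R = sleep_and_listen_radii(100.0, 5)
print(round(Ro, 1), round(R, 1))   # 25.2 36.5
```

This also makes the limiting behavior easy to confirm: the same function with K = 1 returns R = 2Ro exactly.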
4
Simulation Results
The simulation has been carried out using the ns-2 network simulator [7][8]. For the simulation, a sensor network of size 100m × 100m with 100 arbitrarily distributed sensor nodes has been considered. We let the number of clusters be 5, reconfigure the cluster formation every 20 seconds, and place the BS at (50, 75). In addition, the same simulation environment and radio energy model as in [5] have been used; that is, the initial energy of each node is 2J and the spreading factor is 8. According to Eqs. (7) and (8), the values Ro = 25 and R = 37 have been used for ECFA.

Effect of Sleep Range Ro on the System Time. To observe the effect of sleep mode, we carried out simulations varying the sleep range Ro between 0 and R. In particular, to investigate how efficiently the energy of each node is consumed, Fig. 3 shows the system time during which all nodes are alive; that is, the y-axis of Fig. 3 indicates the elapsed time until the first dead node appears. We can see from Fig. 3 that as the sleep range Ro increases to around 25, this time also increases. However, as the sleep range increases beyond 25, it tends to decrease. This is because when the sleep range is larger than Ro, some nodes use more energy for data transfer due to the long distance between them and their cluster heads. This indicates that the value of 25 for the sleep range Ro given by Eq. (7) is adequate. Note that a zero sleep range means that sleep mode is not used.
Fig. 3. Elapsed system time until all the nodes are alive
Performances for Static Nodes' Case. Let us define the system lifetime as the time until there is no active node in the network. In Fig. 4(a), the system lifetime performances are compared. ECFA(0) means ECFA without sleep mode, i.e. Ro = 0; for ECFA, Ro = 25 is used in the simulation. From Fig. 4(a), we can see that LEACH-C shows the shortest system lifetime, because each node
Fig. 4. Performances for static nodes: (a) number of nodes alive along time, (b) number of data signals received at BS over time and (c) per given amount of energy
is required to maintain its location and energy information and to communicate this information to the BS for forming clusters. On the other hand, ECFA and ECFA(0) show better system lifetime performance than LEACH. In particular, from the result that ECFA outperforms ECFA(0), we can see that the sleep mode further improves the system lifetime. Though the system lifetime of LEACH-C is shorter than that of the other schemes, from the viewpoint of the amount of delivered data, LEACH-C shows the best performance, as shown in Fig. 4(b) and (c). In Fig. 4(b), the total number of data signals received at the BS during the system lifetime is compared. Though LEACH-C has the shortest system lifetime, it delivers many more data signals than the other schemes. This is because LEACH-C configures optimal cluster formations with global knowledge of the network, so that once clusters are configured it consumes less energy for delivering data between cluster heads and their member nodes. This is illustrated by Fig. 4(c), in which LEACH-C shows the most delivered data signals per given energy. We also see that ECFA and ECFA(0) show lower data delivery performance than LEACH-C, but much better than LEACH, and that ECFA performs better than ECFA(0). Compared with LEACH, ECFA has more opportunities to deliver data, since it keeps the nodes alive longer by reducing the possibility of poor cluster formations. In addition, by using sleep mode, ECFA achieves a longer system lifetime and better data delivery performance. On the other hand, since ECFA does not require any knowledge of the network, it cannot obtain optimal cluster formations as LEACH-C does. However, ECFA keeps the system lifetime much longer than LEACH-C.
Performances for Mobile Nodes' Case. We also carried out simulations for an environment in which the sensor nodes are moving. For the simulation, we used the Random Waypoint model presented in [9]. At every second, each node chooses a destination randomly and moves toward it with a velocity uniformly chosen from the range [0, Vmax], where Vmax is the maximum allowable velocity for all mobile nodes [10]. The same parameters as in [10] are applied to the simulation. The energy consumption due to the mobility of each node is not considered in our simulation; by doing so, the intrinsic operational performances of the compared schemes can be assessed.

Fig. 5. Performances for mobile nodes: (a) average lifetime of a node (b) total amount of transmitted data

In Fig. 5(a), system lifetime performances for the mobile nodes' case are shown as Vmax varies between 1 and 9 m/sec. Fig. 5(a) shows that ECFA keeps the network alive longer than LEACH and LEACH-C. We can also see that ECFA with sleep mode outperforms ECFA(0) without sleep mode. In particular, the system lifetime of LEACH-C decreases as Vmax increases, while the other schemes depend very little on the variation of Vmax. LEACH-C forms clusters as optimally as it can based on global knowledge of the network. This cluster formation is very effective when the velocities of the nodes are so low that there are only slight changes in the whole network topology. However, it may not hold when the nodes move so fast that a large number of nodes end up very far from their cluster heads and the global network topology changes significantly from that at the time the clusters were formed. Therefore, the lifetime of LEACH-C shortens as the nodes move faster. In ECFA and LEACH, the nodes are not required to send information at the beginning of the rounds. Thus, under the same conditions, ECFA and LEACH dissipate less energy than LEACH-C. Moreover, ECFA makes the average lifetime of a node longer over all velocities, since ECFA is more energy-saving and forms clusters more efficiently than LEACH. Fig. 5(b) shows the total number of data signals transmitted during the lifetimes of the compared schemes. As the velocity increases, the amount of delivered data signals tends to decrease. At low velocities, below 5 m/s, LEACH-C shows the best data delivery performance. However, as the velocity increases beyond 5 m/s, ECFA shows better performance than LEACH-C because the lifetime of LEACH-C is abruptly shortened at high velocities.
Even though the lifetime of LEACH is longer than that of LEACH-C, LEACH cannot send more data because it cannot form clusters as efficiently as LEACH-C.
5
Conclusion
In this paper, we proposed an efficient energy-saving cluster formation algorithm (ECFA) with sleep mode. ECFA not only reconfigures clusters with fair cluster formations, in which all nodes in a sensor network can consume their energies evenly, but also reduces unnecessary active periods, one of the main causes of energy dissipation. The simulation results showed that the system lifetime of ECFA outperforms those of other schemes such as LEACH and LEACH-C both when the nodes move and when they do not. For data throughput, LEACH-C showed the best performance, and ECFA did better than LEACH in the static case. However, ECFA was shown to outperform the other schemes when the nodes move fast. Likewise, ECFA achieves better lifetime performance than both LEACH-C and LEACH under both static and mobile node environments. Moreover, ECFA can be implemented with lower complexity than LEACH-C, since ECFA does not require nodes to maintain and send information on their locations and energies to the BS. We conclude that ECFA is well suited to situations in which a long system operation time is required and a medium amount of sensing data must be exchanged continuously, whether the sensor nodes are moving or not.

Acknowledgement. This work was supported in part by MIC and IITA through the IT Leading R&D Support Project, Korea.
References
1. Edgar H. Callaway: Wireless Sensor Networks: Architectures and Protocols. Auerbach Publications, August 2003.
2. Sameer Tilak, Nael B. Abu-Ghazaleh, and Wendi Heinzelman: "A Taxonomy of Wireless Micro-Sensor Network Models," ACM Mobile Computing and Communications Review, Vol. 6, No. 2, pp. 28-36, April 2002.
3. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci: "A Survey on Sensor Networks," IEEE Communications Magazine, Vol. 40, No. 8, pp. 102-114, August 2002.
4. W. Heinzelman, A. Chandrakasan, and H. Balakrishnan: "Energy-Efficient Communication Protocol for Wireless Microsensor Networks," Proceedings of the Hawaii Conference on System Sciences, January 2000.
5. W. Heinzelman, A. Chandrakasan, and H. Balakrishnan: "An Application-Specific Protocol Architecture for Wireless Microsensor Networks," IEEE Transactions on Wireless Communications, Vol. 1, No. 4, October 2002.
6. W. Ye, J. Heidemann, and D. Estrin: "Medium Access Control With Coordinated Adaptive Sleeping for Wireless Sensor Networks," IEEE/ACM Transactions on Networking, Vol. 12, No. 3, June 2004.
7. UCB/LBNL/VINT: "Network Simulator ns-2," http://wwwmash.cs.berkeley.edu/ns.
8. W. Heinzelman, A. Chandrakasan, and H. Balakrishnan: uAMPS ns Code Extensions, http://wwwmtl.mit.edu/research/icsystems/uamps/leach.
9. F. Bai, N. Sadagopan, A. Helmy: "The IMPORTANT Framework for Analyzing the Impact of Mobility on Performance of Routing for Ad Hoc Networks," Ad Hoc Networks, Vol. 1, No. 5, pp. 383-404, November 2003.
10. L. Breslau, D. Estrin, K. Fall, S. Floyd, J. Heidemann, A. Helmy, P. Huang, S. McCanne, K. Varadhan, Y. Xu, H. Yu: "Advances in Network Simulation," IEEE Computer Magazine, Vol. 33, No. 5, pp. 59-67, May 2000.
RECA: A Ring-Structured Energy-Efficient Cluster Architecture for Wireless Sensor Networks Guanfeng Li and Taieb Znati Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, U.S.A {ligf, znati}@cs.pitt.edu
Abstract. Clustering schemes have been proposed to prolong the lifetime of Wireless Sensor Networks (WSNs). It is also desirable that the energy consumption be evenly dispersed throughout the network. This paper presents RECA: a Ring-structured Energy-efficient Clustering Architecture. In RECA, nodes take turns being the cluster-head and make local decisions on the fair-share length of their duty cycle according to their remaining energy and that of the rest of the nodes within the same cluster; consequently, they deplete their energy supply at approximately the same time regardless of the initial amount of energy in their batteries. RECA avoids the tight synchronization problem of LEACH, and our preliminary results show that our scheme can achieve 50% - 150% longer network lifetime than LEACH depending on the initial energy level.
1
Introduction
Wireless Sensor Networks (WSNs) are autonomous systems which are often deployed in an uncontrolled manner, such as being dropped from an unmanned plane. The resulting network is characterized as an infrastructure-less, immobile wireless network. Applications employing these networks often take the form of one such network and one or a few data observers, called base stations (BSs). Recent advances in VLSI technology have made individual sensors smaller in size and more powerful in computation and radio communication, and "anytime, anywhere" computing is becoming more and more realistic. However, the development of high-energy batteries still lags far behind electronic progress. How to efficiently use the energy-constrained battery remains a prominent research problem in WSNs. Various techniques have been developed to address energy consumption problems in different aspects of WSNs. A comprehensive list of technologies to conserve energy in WSNs can be found in [1]. Among these technologies, clustering is
This work is supported by NSF awards ANI-0325353, ANI-0073972, NSF-0524634 and NSF-0524429.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 927–936, 2005. c Springer-Verlag Berlin Heidelberg 2005
particularly promising and has received much attention in the research community. In a clustered network, nodes are organized into non-overlapping groups. A special node in each cluster, called the cluster-head, assumes extra responsibilities, e.g., fusing data collected by the nodes inside the same cluster and routing inter-cluster data packets. Because of their geographical proximity, the sensed data from nodes within one cluster usually exhibit high correlation; a cluster-head can therefore collect data from its member nodes, aggregate them to remove redundancy, and send one single packet to the BS. The number of transmissions is reduced, and hence energy is conserved. Despite the numerous advantages of clustering schemes, care must be taken to avoid their undesirable effects. If a node is fixed as the cluster-head during the whole network lifetime, its battery can be depleted very quickly because of the extra work a cluster-head has to perform. To prolong the network lifetime, it is therefore desirable that the role of cluster-head rotate among nodes. To this end, there are two major schemes for switching the role of cluster-head among the network nodes. The first is self-election and bully: the operation of the system is divided into rounds; in each round, nodes randomly elect themselves as cluster-heads and "bully" the neighboring nodes into becoming their cluster members. The second is probe-and-switch: each node sleeps for a random time and wakes up to probe the surrounding environment; if it perceives that there are not enough cluster-heads around itself, it becomes a cluster-head. The inherent randomness in the cluster-head assignments of these schemes, however, results in uneven workload among the network nodes and thus leads to nodes dying prematurely. Clustering algorithms must strive to achieve balanced energy consumption in order to prolong the network lifetime. Our proposed work, RECA, takes a node's residual energy and its workload into consideration.
The basic operation of RECA is divided into rounds. In each round, nodes within a cluster deterministically take turns assuming the functionality of the cluster-head, each for a time duration proportional to its residual energy. Energy dissipation is thus evenly distributed among all the nodes, thereby effectively prolonging the network lifetime.
2 Related Work
One pioneering work in this literature is LEACH [2]. In this work, the operation of the network is divided into rounds. At the beginning of each round, each node generates a random number and compares it to a threshold. If the generated random number is less than the threshold, the node becomes a cluster-head; otherwise, it joins the nearest self-elected cluster-head to form a cluster. The threshold is computed in such a way that nodes elect themselves to be cluster-heads with increasing probability if they have not been a cluster-head in recent rounds. However, a recent study of the formation of clusters in LEACH [3] reveals that, due to the stochastic way of the cluster-head election, the number of cluster-heads varies over a very large range. Specifically, when the total number of nodes and the desired percentage of cluster-heads are small, there are no nodes electing
themselves as cluster-heads in some rounds. Furthermore, LEACH requires that nodes be at least loosely synchronized in order to start the cluster-head election at approximately the same time. To improve LEACH's performance, Handy, Haase, and Timmermann introduced an energy factor into the cluster-head election process [4]: if a candidate for cluster-head has more residual energy in its battery, it has a better chance of becoming a cluster-head. Their addition to LEACH helps distribute energy consumption evenly to some extent; however, the election of a cluster-head remains largely random. Another improvement of LEACH was presented in PEGASIS [5]. In this work, nodes form a chain and send data only to their neighbors on the chain. Nodes take turns being the cluster-head, which fuses the data for the entire chain and sends the result to the BS. Although PEGASIS can outperform LEACH in network lifetime, its performance in distributing energy consumption is worse, as pointed out in [6]. GAF [7], ASCENT [8] and PEAS [9] are routing protocols employing the idea of a special node acting as a cluster-head in a geographical vicinity. Energy conservation is achieved by turning off the radio transmitters of certain nodes. In GAF, nodes are assumed to be equipped with location-aware devices. The whole network field is divided into grids, and sensors within the same grid are considered "equivalent" in routing a packet. As long as there is a node awake in a grid to route packets, the other nodes can go to sleep to save energy. In ASCENT, nodes probe the surrounding environment to make a local decision on whether or not to switch state to become a cluster-head. Similar work can be found in PEAS. However, all of these schemes suffer from unbalanced workload among the sensors because of the lack of a schedule for active cluster-heads to switch back to the inactive state. In extreme cases, a cluster-head is forced to stay active until it dies if no other node is willing to take over the role of cluster-head.
Our proposed work, RECA, differs from the above works in that cluster-head management is deterministic to ensure fairness among cluster nodes. Nodes assume cluster-head functionality for a period of time proportional to their residual energy. Consequently, energy dissipation in the proposed scheme is evenly distributed among all the nodes, thereby effectively prolonging the network lifetime. Furthermore, cluster-head functionality rotates deterministically among the cluster nodes; this deterministic aspect of the scheme eliminates the re-clustering cost incurred by other clustering schemes. Finally, the requirement for tight synchronization is relaxed considerably, since cluster formation happens only once, at the initialization phase of the network.
3 RECA: A Ring-Structured Energy-Efficient Clustering Architecture
The basic operation of RECA is divided into two distinct phases: Energy-Balanced Cluster Formation and Cluster-Head Management. The first phase is executed only once; its objective is to produce a set of energy-balanced clusters.
In the second phase, the role of cluster-head rotates among cluster nodes, using the residual energy of a node to determine the length of the period during which it serves as the cluster-head.

3.1 Energy-Balanced Cluster Formation
In this phase, sensors are grouped into clusters with approximately the same total energy. We first obtain initial clusters using minimum transmission power; we then balance the energy among the initial clusters. In contrast to other clustering schemes, this phase is executed only once throughout the whole lifetime of the network, so re-clustering cost is avoided.

Initial Cluster Formation. To minimize the intra-cluster communication cost later on in the system operation, one-hop clusters using minimum transmission power are formed. The algorithm used to obtain the initial clusters is outlined in Algo. 1. First, the expected number of nodes in one cluster is estimated a priori as γ = (N × π × R²)/A, where N is the total number of nodes in the network, A is the area that the network covers, and R is the minimum transmission range. The cluster organizer election process is divided into time slots. In each slot, each node that has not elected itself as a cluster organizer or associated itself with a cluster generates a random number in [0, 1] and compares it to a threshold h = min(2^r × 1/γ, 1),

Algorithm 1. Init_Clusters: Obtain the initial clusters
Define:
  γ: the expected number of nodes in one cluster
  t_slot: duration of one time slot
Init_Clusters: executed at each node
  for r = 0 to ⌈log₂ γ⌉ do
    if not a cluster organizer nor a cluster member then
      generate a number α between 0 and 1
      if α < min(2^r × 1/γ, 1) then
        announce itself as a cluster organizer
        exit
      end if
    end if
    listen for a duration of t_slot
    if not a cluster organizer then
      if one or more cluster organizer announcements are heard then
        choose the cluster with the best SNR to join
      end if
    end if
  end for
where r is the slot number. If the randomly generated number is less than h, the node becomes a cluster organizer and announces this using minimum transmission power; otherwise, it listens for cluster organizer announcements. If it hears any, it associates itself with the cluster whose announcement has the best signal-to-noise ratio (SNR). Nodes that associated with a cluster in previous slots also listen to new announcements in order to adjust their cluster membership: if a node finds a closer newly elected cluster organizer, it re-associates itself with the new one. After ⌈log₂ γ⌉ time slots, each node in the network is either a cluster organizer or a cluster member. Each cluster organizer then computes the total energy in its cluster and exchanges this information with the other cluster organizers. After receiving this information from all the other clusters, a cluster organizer can compute the average energy Eavg for one cluster in the network.

Balancing Energy on Initial Clusters. Because the cluster organizer election is based on a random self-election mechanism, there is no guarantee on the total energy in one cluster. A group of energy-balanced clusters can be achieved by applying the steps shown in Algo. 2 to the initial clusters. A cluster is said to be acceptable if its total energy falls in the range [Eavg − δ, Eavg + δ], where δ is the maximum allowed variance of the total energy in a cluster. The algorithm starts from the cluster with the minimum total energy. If this cluster is not acceptable, its cluster organizer broadcasts a recruiting message to neighboring clusters. Nodes hearing this message reply with their node ID and energy information if they belong to clusters that have not yet had the chance to balance their total energy. The cluster organizer then selects the combination of nodes that makes the total energy fall in the acceptable range (or as close
Algorithm 2. Balance_Clusters: Balancing energy among clusters
Define:
  S: the set of clusters obtained from Algorithm 1
  ci: a cluster, ci ∈ S
  Eavg: average total energy per cluster
  δ: maximum allowed variance of total energy for one cluster
Balance_Clusters: executed at the cluster organizer for cluster ci
  if ci has the least total energy in S then
    recruit nodes from neighboring clusters in S s.t. total energy falls in [Eavg − δ, Eavg + δ]
    ask all the other cluster organizers to remove ci from S
  end if
  if ci has members deprived by other clusters then
    recompute the total energy
    update with all the other cluster organizers
  end if
as possible) and broadcasts the ID list of the selected nodes. The selected nodes then leave their current clusters and join the new cluster. To avoid oscillation, this cluster is not touched again. The clusters that have had nodes deprived recompute their total energy and send it to the rest of the cluster organizers. The cluster with the least energy then starts a new round, until all clusters have either had a chance to adapt their membership or were acceptable in the first place. During this process, nodes drift from energy-rich clusters to energy-poor clusters. After the cluster membership is stable, each node announces its cluster membership along with its initial energy level and records this information from the nodes within the same cluster. Nodes in the same cluster then organize themselves into a logical ring in increasing order of their IDs.
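As a concrete illustration, the two algorithms above can be sketched in Python. This is a minimal, centralized stand-in for the distributed message exchange; the function names, the nearest-organizer approximation of best-SNR association, the greedy recruiting order, and the self-promotion fallback for still-undecided nodes are our assumptions, not the paper's:

```python
import math
import random

def init_clusters(positions, gamma, rng=random.Random(0)):
    """Sketch of Algorithm 1: slotted self-election of cluster organizers.
    positions: node id -> (x, y); gamma: expected cluster size N*pi*R^2/A.
    Best-SNR association is approximated by joining the nearest organizer."""
    organizers, members = set(), {}          # members: node -> its organizer
    undecided = set(positions)
    for r in range(max(1, math.ceil(math.log2(gamma)))):
        h = min(2 ** r / gamma, 1.0)         # threshold doubles each slot
        for n in list(undecided):
            if rng.random() < h:
                organizers.add(n)
                undecided.discard(n)
        # undecided nodes join, and existing members may re-associate
        # with a closer, newly elected organizer
        for n in list(undecided) + list(members):
            if organizers:
                members[n] = min(organizers,
                                 key=lambda o: math.dist(positions[n], positions[o]))
                undecided.discard(n)
    for n in undecided:                      # safeguard (our assumption):
        organizers.add(n)                    # leftover nodes promote themselves
    clusters = {o: [o] for o in organizers}
    for n, o in members.items():
        clusters[o].append(n)
    return clusters

def balance_clusters(clusters, energy, delta):
    """Sketch of Algorithm 2: the poorest untouched cluster greedily recruits
    nodes from still-adjustable clusters until its total energy enters
    [E_avg - delta, E_avg + delta]."""
    e_avg = sum(energy.values()) / len(clusters)
    total = lambda c: sum(energy[n] for n in clusters[c])
    active = set(clusters)                   # clusters still adjustable
    while active:
        c = min(active, key=total)           # poorest adjustable cluster
        if total(c) < e_avg - delta:
            pool = sorted((n for o in active if o != c for n in clusters[o]),
                          key=lambda n: energy[n])
            for n in pool:
                if total(c) >= e_avg - delta:
                    break
                donor = next(o for o in active if o != c and n in clusters[o])
                if len(clusters[donor]) > 1:  # never empty a donor cluster
                    clusters[donor].remove(n)
                    clusters[c].append(n)
        active.discard(c)                    # avoid oscillation
    return clusters
```

The greedy subset pick stands in for the paper's "combination of nodes" selection; an exact subset-sum search would match the acceptable range more tightly at higher cost.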
3.2 Cluster-Head Management
The operation of a RECA system within one cluster is divided into rounds; each round lasts for a duration R, a predefined system parameter. The value of R should be long enough to hide the system control overhead and short enough to avoid a node dying during its active duty cycle. All nodes within a cluster take turns being the cluster-head, each for a fair-share duty cycle in one round. Nodes that are not the cluster-head use minimum transmission power to send data to the cluster-head. A node u computes its fair-share cluster-head duty cycle du as follows:

du = (Eu / Σw Ew) × R,

where Eu is the energy level of node u and the sum runs over all nodes w in the same cluster as u. In addition to computing the duty cycle, each node computes and maintains two timers, t0 and t1. Timer t0 is used to mark the beginning of a node's duty cycle as the cluster-head. The value of t0 for node u is

t0 = (Σ{v: v's ID < u's ID} Ev / Σw Ew) × R.

In general, nodes that travel rapidly in the network may degrade the cluster quality, because they alter the node distribution in their clusters and make the clusters unstable, possibly long before the end of TO. However, research efforts on clustering should not be restricted to the arena of static or quasi-stationary networks where node movements are rare and slow. Rather, for those applications where TO is not much longer than TC, we propose in this work an efficient protocol that generates clusters in ad hoc networks with mild to moderate node mobility. One such example is fast and efficient command and control in military applications, where nodes can move frequently. In our model for sensor networks, though, the sensor nodes are assumed to be quasi-stationary and all nodes have similar capabilities. Nodes are location-unaware and are left unattended after deployment. Recharging is assumed not to be possible; therefore, energy-efficient sensor network protocols are required for energy conservation and prolonging network lifetime.
For clustering, in particular, every node can act as both a source and a server (clusterhead). A node may fail if its energy resource is depleted, which motivates the need to rotate the clusterhead role in some fair manner among all neighboring nodes for load balancing and overall network longevity. The problem of clustering is then defined as follows. For an ad hoc or sensor network with node set V, the goal is to identify a set of clusterheads that covers the whole network. Each and every node v in V must be mapped into exactly one cluster, and each ordinary node in a cluster must be able to communicate directly with its clusterhead. The clustering protocol must be completely distributed, meaning that each node independently makes its decisions based only on local information. Further, the clustering must terminate quickly and execute efficiently in terms of processing complexity and message exchange. Finally, the clustering algorithm must be resistant to moderate mobility (in ad hoc networks) and at the same time render energy efficiency, especially for sensor networks.
3 DECA Clustering Algorithm
J.H. Li, M. Yu, and R. Levy

The DECA algorithm structure is somewhat similar to that presented by Lin and Gerla [8], in that each node broadcasts its decision as the clusterhead in the neighborhood based on some local information and a score function. In [8] the score is computed based on node identifiers, and each node holds its message transmission until all its neighbors with better scores (lower IDs) have done so. Each node stops its execution of the protocol when it knows that every node in its closed neighborhood (including itself) has transmitted. HEED [12] utilizes node residual energy as the first criterion and a cost function as the secondary criterion to compute the score, and each node probabilistically propagates tentative or final clusterhead announcements depending on its probability and connectivity. The execution of the protocol at each node terminates when the probability of self-election, which is doubled in every iteration, reaches 1. It is assumed in [8] that the network topology does not change during the algorithm execution, and therefore it is valid for each node to wait until it overhears every higher-scored neighbor transmitting. With some node mobility, however, this algorithm can hang: it is quite possible that an initial neighbor leaves the transmission range of a node, say v, so that v cannot overhear its transmission; v then has to wait endlessly according to the stopping rule. A similar assumption exists in HEED. Under node mobility, HEED will not hang, since each node terminates according to its probability-doubling procedure. However, we observe that the rounds of iterations are not necessary and can potentially harm the clustering performance due to the possibly excessive number of transmitted announcements. We emphasize an important insight for distributed clustering: nodes with better scores should announce themselves earlier than those with worse scores. In this work, we utilize a score function that captures node residual energy, connectivity and identifier. Each node does not need to hold its announcement until its better-scored neighbors have done so; each node does not need to overhear every neighbor in order to stop; and each node transmits only one message, rather than going through rounds of iterations of probabilistic message announcement. Given that communication consumes far more energy in sensor nodes than sensing and computation, such savings on message transmissions lead to better energy efficiency.
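The score-before-delay ordering can be illustrated with a small helper. This is a sketch: the paper fixes the score form w1·E + w2·C + w3·I but not the exact delay normalization, so a simple linear inversion into [0, Dmax] is assumed here, with all three inputs taken as pre-scaled to [0, 1]:

```python
def announcement_delay(residual_energy, connectivity, node_id,
                       weights=(0.6, 0.3, 0.1), d_max=0.010):
    """Higher score -> earlier clusterhead announcement.
    The weight split (highest weight on residual energy) and the linear
    mapping of score into a delay in [0, d_max] seconds are illustrative."""
    w1, w2, w3 = weights
    score = w1 * residual_energy + w2 * connectivity + w3 * node_id
    return (1.0 - score) * d_max
```

With these assumptions, a node at full energy announces almost immediately, while a depleted node defers toward Dmax, so better-scored nodes win the contention without any iteration.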
3.1 DECA Operation
Each node periodically transmits a Hello message to identify itself, and based on such Hello messages each node maintains a neighbor list. Define the score function at each node as score = w1·E + w2·C + w3·I, where E stands for node residual energy, C stands for node connectivity, I stands for the node identifier, and the three weights satisfy w1 + w2 + w3 = 1. We put a higher weight on node residual energy in our simulations. The computed score is then used to compute the delay before this node announces itself as the clusterhead: the higher the score, the sooner the node transmits. The computed delay is normalized between 0 and a certain upper bound Dmax, which is a key parameter that needs to be carefully selected in practice, like the DIFS parameter in IEEE 802.11. In our simulations we choose Dmax = 10 ms and the protocol works well. After the clustering starts, the procedure terminates after time Tstop, another key parameter whose selection needs to take node computation capability and mobility into consideration. In the simulations we choose Tstop = 1 s. The distributed clustering algorithm at each node is illustrated in the pseudocode fragments below. Essentially, clustering is done periodically, and at each clustering
A DECA for Ad Hoc and Sensor Networks
epoch, each node either immediately announces itself as a potential clusterhead or holds its announcement for some delay time.

I. Start-Clustering-Algorithm()
1  myScore = w1·E + w2·C + w3·I;
2  delay = (1000 − myScore)/100;
3  if (delay < 0)
4      then broadcastCluster(myId, myCid, myScore);
5  else
6      delayAnnouncement();
7  Schedule clustering termination.

II. Receiving-Clustering-Message(id, cid, score)
1  if (id == cid)
2      then if (myCid == UNKNOWN)
3          then if (score > myScore)
4              then myCid = cid;
5                  cancelDelayAnnouncement();
6                  broadcastCluster(myId, myCid, score);
7      elseif (score > myScore)
8          then if (myId == myCid)
9              then needConversion = true;
10         else
11             convertToNewCluster();

III. Finalize-Clustering-Algorithm()
1  if (needConversion)
2      then if (!amIHeadforAnyOtherNode())
3          then convertToNewCluster();
4  if (myCid == UNKNOWN)
5      then myCid = myId;
6      broadcastCluster(myId, myCid, myScore);

On receiving such a clustering message, a node checks whether the node ID and cluster ID embedded in the received message are the same; equal node and cluster IDs mean that the message was transmitted by a clusterhead. Further, if the receiving node does not belong to any cluster and the received score is better than its own, the node simply joins the advertised cluster and cancels its delayed announcement. If the receiving node currently belongs to some other cluster and the received score is better than its own score, two cases are considered. First, if the current node belongs to a cluster with itself as the head, receiving a better-scored message means that this node may need to switch to the better cluster. However, caution is needed before switching, since the current node, as a clusterhead, may already have other nodes affiliated with it. Therefore, inconsistencies can
occur if it rushes to switch to another cluster. In our approach, we simply mark the necessity of switching (line 9 in Phase II) and defer it to the finalizing phase, where the node makes sure that no other nodes are affiliated with it as their clusterhead before the switch can occur. If, instead, the current node receiving a better-scored message is not itself a clusterhead, then, as an ordinary node, it can immediately convert to the new cluster; this is the second case (line 11 in Phase II). It is critical to note that the switch process mandates that a node leave its current cluster before joining a new one. In the finalizing phase, which each node is forced to enter after Tstop, each node checks whether it needs to convert. Further, each node checks whether it already belongs to a cluster and initiates a new cluster with itself as the head if it does not.

3.2 Correctness and Complexity
The protocol described above is completely distributed. To prove the correctness of the algorithm, we need to show that 1) the algorithm terminates; 2) every node eventually determines its cluster; and 3) within a cluster, any two nodes are at most two hops away.

Theorem 1. Eventually DECA terminates.

Proof. After the clustering starts, the procedure stops receiving messages after time Tstop and enters the finalizing phase, after which the algorithm terminates.

Note that in order for DECA to outperform the related protocols presented in [8] and [12] under node mobility, it is critical to design the key parameters Dmax and Tstop appropriately, taking node computation and mobility patterns into consideration. With carefully designed parameters, a node need not wait (possibly in vain, as in [8]) to transmit or terminate, nor does it go through rounds of probabilistic announcements. In HEED, every iteration takes time tc, which should be long enough to receive messages from any neighbor within the transmission range. We can choose Dmax to be roughly comparable to (probably slightly larger than) tc, and DECA can then generally terminate faster than HEED.

Theorem 2. At the end of Phase III, every node has determined its cluster, and only one cluster.

Proof. Suppose a node has not determined its cluster when entering Phase III. Then the condition at line 4 holds and the node creates a new cluster, claiming itself as the clusterhead. So every node can determine its cluster. Now we show that every node selects only one cluster. A node determines its cluster by one of the following three methods: first, it claims itself as the clusterhead; second, it joins a cluster with a better score when its own cluster is undecided; and third, it converts from one cluster to another.
The first two methods do not make a node join more than one cluster, and the switch procedure checks for consistency and mandates that a non-responsible node (a node not serving as head of a cluster) leave its previous cluster before joining the new one. As a result, no node can appear in two clusters.
One may argue that Theorem 2 does not suffice for clustering purposes. For example, one could easily invent an algorithm in which every node creates a new cluster and claims itself as the clusterhead; Theorem 2 would obviously hold. However, our algorithm does much better than such trivial clustering: most clusters are formed by executing lines 4 to 6 in Phase II, i.e., by joining clusters with better-scored heads. This is due to the fact that the initial order of clusterhead announcements is strictly determined by the score function.

Theorem 3. When clustering finishes, any two nodes in a cluster are at most two hops away.

Proof. The proof is based on the mechanisms by which a node joins a cluster. A node, say v, joins a cluster with head w only if v can receive an announcement from w with a better score. In other words, all ordinary nodes are within one hop of the clusterhead, and the theorem follows.

To show that the algorithm is energy-efficient, we prove that the communication and time complexity is low.

Theorem 4. In DECA, each node transmits only one message during the operation.

Proof. In the broadcastCluster method, a Boolean variable iAlreadySent (not shown in the pseudocode) ensures that each node cannot send more than once. Now we show that each node eventually transmits. In Phase I, when nodes start the clustering, each node either transmits immediately or schedules a delayed transmission, which is either executed or cancelled at line 5 in Phase II. Since the cancellation is immediately followed by a transmission, each node eventually transmits.

Theorem 5. The time complexity of the algorithm is O(|V|).

Proof. From the Phase II operations, each received message is processed in a fixed number of computation steps without any loop. By Theorem 4, each node sends only one message, and therefore there are only |V| messages in the system. Thus the time complexity is O(|V|).
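Abstracting the timers away, the net effect of the three phases on a static graph, and the properties proven above, can be sketched as follows. This is our simplification, not the paper's code: processing nodes in descending score order stands in for the score-derived delays, and a clusterhead's announcement is assumed audible exactly one hop away:

```python
def deca_cluster(scores, neighbors):
    """scores: node -> score; neighbors: node -> set of one-hop neighbors.
    Returns node -> clusterhead id: one cluster per node (Theorem 2) with
    every ordinary member one hop from its head (Theorem 3)."""
    cid = {}
    for u in sorted(scores, key=scores.get, reverse=True):
        if u in cid:
            continue                 # already joined a better-scored head
        cid[u] = u                   # u's delay expires: it announces itself
        for v in neighbors[u]:
            # undecided lower-scored neighbors cancel their own delayed
            # announcement and join u (Phase II, lines 4-6)
            if v not in cid and scores[v] < scores[u]:
                cid[v] = u
    for u in scores:                 # finalizing phase (Phase III):
        cid.setdefault(u, u)         # anyone still undecided heads itself
    return cid
```

On a five-node chain with strictly decreasing scores, the best-scored node captures its neighbor, the next undecided node starts the second cluster, and so on, so every member sits one hop from its head.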
4 Performance Evaluation
We evaluate the DECA protocol using an in-house simulation tool called the agent-based ad-hoc network simulator (NetSim). In our simulations, random graphs are generated so that nodes are randomly dispersed in a 1000 m × 1000 m region, and each node's transmission range is bounded by 250 m. We investigate the clustering performance under different node mobility patterns, with node speeds ranging from 0 to 50 m/s. For each speed, every node takes the same maximum speed, and a large number of random graphs are generated. Simulations are run and results are averaged over these random graphs.
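The random-graph setup described above can be reproduced in a few lines. This is a sketch: NetSim itself is in-house, so the uniform placement and the symmetric distance-based edge rule are assumptions matching only the stated parameters:

```python
import math
import random

def random_topology(n, area=1000.0, tx_range=250.0, seed=42):
    """n nodes placed uniformly in an area x area square; two nodes are
    neighbors iff their distance is at most tx_range."""
    rng = random.Random(seed)
    pos = {i: (rng.uniform(0, area), rng.uniform(0, area)) for i in range(n)}
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) <= tx_range:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return pos, nbrs
```

Generating many such graphs with different seeds and averaging the clustering metrics over them mirrors the evaluation methodology described above.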
Fig. 1. Ratio of number of clusters. Static scenario. (Curves: total number of clusters and single-node clusters, for Krishna, HEED, Lin&Gerla, and DECA.)

Fig. 2. Ratio of number of clusters. Maximum speed 0.1 m/s.
In general, for any clustering protocol, it is undesirable to create single-node clusters. Single-node clusters arise when a node is forced to represent itself (because it received no clusterhead messages). While many other protocols generate many single-node clusters as node mobility becomes more aggressive, our algorithm shows much better resilience. We have considered the following metrics for performance comparison: 1) the average overhead (in number of protocol messages); 2) the ratio of the number of clusters to the number of nodes in the network; 3) the ratio of single-node clusters to the number of nodes in the network; and 4) the average residual energy of the selected clusterheads. We first look at static scenarios, where nodes do not move, and quasi-stationary scenarios, where the maximum node speed is bounded at 0.1 m/s. We choose the protocol of Lin & Gerla [8] (LIN) as a representative of general clustering protocols, and Krishna's algorithm [7] (KRISHNA) to represent dominating-set-based clustering protocols. For energy-aware protocols, we choose HEED [12] to compare with DECA. From Fig. 1 (static scenario) and Fig. 2 (0.1 m/s maximum speed) it is easy to observe that KRISHNA has the worst clustering performance, with the highest cluster-to-nodes ratio, while DECA and LIN perform best; HEED performs in between. Fig. 3, which combines Fig. 1 and Fig. 2, shows that all four protocols perform consistently under (very) mild node mobility. In fact, with the maximum node speed set to 0.1 m/s, both LIN and DECA perform exactly as in their static scenarios, while HEED and KRISHNA degrade only slightly.
Fig. 3. Ratio of number of clusters. Static scenario and maximum speed 0.1 m/s put together.

Fig. 4. Average number of transmissions per node, and the DECA/HEED ratio.
During our simulations, as we increased the maximum node speed, both LIN and KRISHNA failed to generate clusters. This is expected. In LIN, a node will not transmit its message until all of its better-scored neighbors have done so, and the algorithm will not terminate if a node does not receive a message from each of its neighbors; node mobility can make the holding node wait forever. In KRISHNA, in order to compute clusters, each node needs accurate information about the entire network topology, facilitated by network-wide link-state updates, which are themselves extremely vulnerable to node mobility. In contrast, we found that both HEED and DECA are quite resilient to node mobility, in that they can generate decent clusters even when each node can potentially move independently of the others. The following figures compare the performance of DECA and HEED under different node mobility. Fig. 4 shows that for DECA the number of protocol messages for clustering remains one per node regardless of node speed, as proven in Theorem 4. For HEED, the number of protocol messages is roughly 1.8 for every node speed, so a node running DECA transmits about 56% as many messages as one running HEED (shown as DECA/HEED in Fig. 4). The fact that HEED incurs more message transmissions is due to the possibly many rounds of iterations (especially when node power is reduced), in which each node in every iteration can potentially send a message to claim itself as a candidate clusterhead [12]. Reducing the number of transmissions is of great importance, especially in sensor networks, since it renders better energy efficiency and fewer packet collisions (e.g., with a CSMA/CA-type MAC as in IEEE 802.11).

Fig. 5. Ratio of clusters to total number of nodes in network

Fig. 6. Ratio of single-node clusters to total number of nodes in network

Fig. 5 and Fig. 6 illustrate the ratios of the number of clusters and of single-node clusters to the total number of nodes in the network. In both cases, DECA outperforms HEED. Note that both DECA and HEED perform quite consistently under different maximum node speeds, and this is no coincidence: a node in both DECA and HEED stops trying to claim itself as a potential clusterhead after some initial period (the delayed announcement in DECA, the rounds of iterations in HEED) and enters the finalizing phase. As a result, the local information gathered, which serves as the basis for clustering, is essentially what can be gathered
Fig. 7. Average clusterhead energy
A DECA for Ad Hoc and Sensor Networks
947
within the somewhat invariant initial period, which leads to consistent behavior under different node mobility. Further, we compare DECA and HEED with respect to the (normalized) average clusterhead energy in Fig. 7. Again, both DECA and HEED perform quite consistently, and DECA outperforms HEED with about twice the average clusterhead residual energy. This is in accordance with Fig. 4, where DECA consistently incurs fewer message transmissions than HEED. In sensor networks, each node sending fewer messages while achieving the intended goal, as in DECA, usually means energy efficiency and longer node lifetime. In addition, HEED may possess another undesirable feature in its protocol operation. Over time, each node's energy fades, leading to a smaller transmission probability in HEED for each node, which implies more rounds of iterations. As a result, more announcements could be sent and more energy consumed, which in turn could lead to more messages sent and more energy consumed in the next round of clustering. In future work we will analyze HEED and run more extensive simulations to see whether such an amplifying effect really exists. DECA, on the contrary, does not possess this potential drawback even with energy fading, since each node only sends one message during the operation.
5 Related Work
Das and Sivakumar et al. [10] identified a subnetwork that forms a minimum connected dominating set (MCDS). Each node in the subnetwork is called a spine node and keeps a routing table that captures the topological structure of the whole network. The main drawback of this algorithm is that it still needs a non-constant number of rounds to determine a connected dominating set [11]. In [11], the authors proposed an efficient localized algorithm that can quickly build a backbone directly in ad hoc networks. This approach uses a localized algorithm called the marking process, where hosts interact with others in a restricted vicinity. The algorithm is simple, which greatly eases its implementation, and has low communication and computation cost, but it tends to create small clusters. Similar to [8], Basagni [3] proposed to use node weights instead of lowest IDs or node degrees in clusterhead decisions, where the weight is defined by mobility-related parameters such as speed. Basagni [4] further generalized the scheme by allowing each clusterhead to have at most k neighboring clusterheads, and described an algorithm for finding a maximal weighted independent set in wireless networks. One of the first protocols to use clustering for network longevity is the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol [6]. In LEACH, a node elects to become a clusterhead randomly, according to a target number of clusterheads in the network and its own residual energy, and the energy load is evenly distributed among the sensors in the network. In addition, when possible, data are compressed at the clusterhead to reduce the number of transmissions. A limitation of this scheme is that it requires all current clusterheads to be able to transmit directly to the sink.
6 Conclusion and Future Work
In this paper we present a distributed, efficient clustering algorithm that is resilient to node mobility and at the same time energy efficient. The algorithm terminates quickly, has low time complexity, and generates non-overlapping clusters with good clustering performance. Our approach is applicable to both mobile ad hoc networks and energy-constrained sensor networks. The clustering scheme provides a useful service that can be leveraged by different applications to achieve scalability. It can be observed that in DECA the dispersed delay timers for clusterhead announcement assume the existence of a global synchronization system. While this might not be a problem for many (military) ad hoc network applications, for sensor networks synchronization can be trickier. It would be an interesting research direction to study time synchronization protocols combined with clustering protocols in sensor networks, with the goal of providing the maximum degree of functionality and flexibility with minimum energy consumption. Further, it would be interesting to observe how much of an improvement DECA can maintain over HEED as the transmission range varies.
References

1. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "Wireless sensor networks: A survey," Computer Networks, vol. 38, no. 4, pp. 393-422, March 2002.
2. D. J. Baker, A. Ephremides, and J. A. Flynn, "The design and simulation of a mobile radio network with distributed control," IEEE Journal on Selected Areas in Communications, vol. SAC-2, no. 1, pp. 226-237, January 1984.
3. S. Basagni, "Distributed clustering for ad hoc networks," in Proceedings of the 1999 International Symposium on Parallel Architectures, Algorithms, and Networks (ISPAN'99).
4. S. Basagni, D. Turgut, and S. K. Das, "Mobility-adaptive protocols for managing large ad hoc networks," in Proceedings of the IEEE International Conference on Communications (ICC 2001), June 11-14, 2001, pp. 1539-1543.
5. B. N. Clark, C. J. Colbourn, and D. S. Johnson, "Unit disk graphs," Discrete Mathematics, vol. 86, pp. 165-177, 1990.
6. W. R. Heinzelman, A. Chandrakasan, and H. Balakrishnan, "Energy-efficient communication protocol for wireless microsensor networks," in Proceedings of the 33rd Annual Hawaii International Conference on System Sciences (HICSS 2000), January 4-7, 2000, pp. 3005-3014.
7. P. Krishna, N. H. Vaidya, M. Chatterjee, and D. K. Pradhan, "A cluster-based approach for routing in dynamic networks," ACM SIGCOMM Computer Communication Review, vol. 27, no. 2, pp. 49-64, 1997.
8. C. R. Lin and M. Gerla, "Adaptive clustering for mobile wireless networks," IEEE Journal on Selected Areas in Communications, vol. 15, no. 7, pp. 1265-1275, September 1997.
9. A. B. McDonald and T. Znati, "A mobility-based framework for adaptive clustering in wireless ad hoc networks," IEEE Journal on Selected Areas in Communications, vol. 17, no. 8, pp. 1466-1487, August 1999.
10. R. Sivakumar, B. Das, and V. Bharghavan, "Spine-based routing in ad hoc networks," ACM/Baltzer Cluster Computing Journal, vol. 1, pp. 237-248, November 1998, Special Issue on Mobile Computing.
11. J. Wu and H. Li, "On calculating connected dominating sets for efficient routing in ad hoc wireless networks," Telecommunication Systems, Special Issue on Mobile Computing and Wireless Networks, vol. 18, no. 1/3, pp. 13-36, September 2001.
12. O. Younis and S. Fahmy, "HEED: A hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks," IEEE Transactions on Mobile Computing, vol. 3, no. 4, October-December 2004.
A Novel MAC Protocol for Improving Throughput and Fairness in WLANs

Xuejun Tian (1), Xiang Chen (2), and Yuguang Fang (2)

(1) Department of Information Systems, Faculty of Information Science and Technology, Aichi Prefectural University, Aichi, Japan
[email protected]
(2) Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, U.S.A.
[email protected], [email protected]
Abstract. Many schemes have been proposed to enhance the throughput or fairness of the original IEEE 802.11 standard; however, they either fail to consider throughput and fairness together, or do so with complicated algorithms. In this paper, we propose a new MAC scheme that dynamically optimizes each active node's backoff process. The key idea is to enable each node to adjust its Contention Window (CW) to approach the optimal one that maximizes throughput. Meanwhile, when the network reaches steady state in the saturated case, i.e., under heavy traffic load, all nodes maintain approximately identical CWs, which guarantees a fair share of the channel among all nodes. Through simulation comparisons with previous schemes, we show that our scheme can greatly improve throughput whether the network is saturated or not, while maintaining good fairness.
1 Introduction
Wireless local area networks (WLANs) have become increasingly popular and widely deployed in recent years. Currently, the IEEE 802.11 MAC standard includes two channel access methods: a mandatory contention-based one called the Distributed Coordination Function (DCF) and an optional centralized one called the Point Coordination Function (PCF). Due to its inherent simplicity and flexibility, the DCF mode is preferred and has attracted most research attention. Meanwhile, PCF is not supported by most current wireless cards and may result in poor performance when working alone or together with DCF, as shown in [1][2]. In this paper, we focus on DCF. Since all nodes in a WLAN share a common wireless channel with limited bandwidth, an efficient and fair medium access control (MAC) scheme is highly desirable. However, for the 802.11 DCF, there is room for improvement in terms of both efficiency ([3][4][5]) and fairness. Cali et al. pointed out in [6] that, depending on the network configuration, DCF may deliver a throughput much lower than the theoretical throughput limit.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 950-965, 2005.
© Springer-Verlag Berlin Heidelberg 2005

Meanwhile, as
demonstrated in [7], the fairness as well as the throughput of the IEEE 802.11 DCF can significantly deteriorate when the number of nodes increases. Although extensive research has been conducted to improve throughput ([8][9][10][6][11][12][13][14]) or fairness ([6][15]), except in [11] these two performance indexes are rarely considered together. In this paper, we aim to enhance both throughput and fairness for DCF at the same time by proposing a novel MAC scheme called DOB. Compared to the original 802.11 DCF and previous enhancement approaches, this scheme has the following distinguishing features:

– Unlike [6], which relies on accurate on-line estimation of the number of active nodes, we use a simple and accurate measure called the average idle interval, which is easily obtained and reflects the network traffic load, to Dynamically Optimize the Backoff algorithm (hence the name DOB).
– It is known that in the 802.11 DCF, each node exponentially increases its contention window (CW) upon a collision and resets it after a successful transmission. Although this is designed to avoid collisions, the drastic changes in CW lead to neither fast collision resolution nor high throughput [11]. In contrast, DOB enables each node to keep a quasi-stable CW that oscillates around an optimal value, leading to a throughput close to the maximum. More specifically, the current CW is decreased if it is greater than the optimal CW and increased otherwise.
– Since each node in the network maintains its CW around the optimal value, all nodes have equal opportunities to seize the channel. As a result, fairness is improved compared to the original DCF.

The remainder of this paper is organized as follows. In Section 2, we describe the IEEE 802.11 MAC protocol and then discuss the related work. We elaborate on our key idea and the theoretical analysis for improvement in Section 3. Then, we present our proposed DOB scheme in detail in Section 4.
Section 5 gives the performance evaluation and discusses the simulation results. Finally, concluding remarks are given in Section 6.
2 Preliminaries
In this section, we discuss the related work. In particular, we focus on Cali's work ([6]) and FCR (Fast Collision Resolution, [11]), as these two schemes resolve MAC collisions by dynamically adjusting the contention window.

2.1 Related Work
Considerable research efforts on the IEEE 802.11 DCF have been expended on theoretical analysis and throughput improvement ([7][9][10][6][11][16]). In [7], Bianchi used a Markov chain to model the binary exponential backoff procedure. By assuming that the collision probability of each node's transmission is constant and independent of the number of retransmissions, he derived the saturated throughput of the IEEE 802.11 DCF. In [8], Bharghavan analyzed and improved the
performance of the IEEE 802.11 MAC. Although the contention information appended to transmitted packets can help in collision resolution, its transmission increases the traffic load, and the resulting delay makes the scheme insensitive to traffic changes. Kim and Hou developed a model-based frame scheduling algorithm to improve the protocol capacity of the 802.11 [16]. In this scheme, each node sets its backoff timer in the same way as in the IEEE 802.11; however, when the backoff timer reaches zero, the node waits an additional amount of time before accessing the medium. Though this scheme improves the efficiency of medium access, the calculation of the additional time is complicated, since the number of active nodes must be accurately estimated. Cali et al. [6] studied the 802.11 protocol capacity by using a p-persistent backoff strategy to approximate the original backoff of the protocol. In addition, they showed that given a certain number of active nodes and an average frame length, there exists an average contention window that maximizes throughput. Based on this analysis, they proposed a dynamic backoff tuning algorithm to approach the maximum throughput. It is important to note that the performance of the tuning algorithm depends largely on accurate estimation of the number of active nodes. However, in practice there is no simple and effective run-time estimation algorithm, due to the distributed nature of the IEEE 802.11 DCF. Meanwhile, a complicated algorithm ([6]) would impose a significant computation burden on each node and be insensitive to changes in traffic load. Fairness is another important issue in MAC protocol design for WLANs. In a shared-channel wireless network, throughput and fairness essentially conflict with each other, as shown in [17]. The analysis in [7] demonstrated that the fairness as well as the throughput of the IEEE 802.11 DCF can significantly deteriorate when the number of nodes increases.
Several research works addressed this issue ([6][15]). In [6], the number of active nodes needs to be estimated, as mentioned above; in [15], only the initial contention window is adjusted, so the contention window is not optimized. As will be shown later, our proposed DOB preserves the advantages and overcomes the deficiencies of [6] and FCR. While relying on dynamic tuning of CW, it need not estimate the number of active nodes, as is the case in [6]. Compared to the original IEEE 802.11 or FCR, since each node keeps its CW close to the same optimal value rather than re-initializing it to the minimum, DOB can maintain fairness and keep the network operating with less fluctuation. Consequently, the network always works in a quasi-stable state: the nodes with a CW smaller than the optimal one increase it, and the nodes with a greater CW decrease it.
3 Design Motivation and Analysis

3.1 Motivation
In the IEEE 802.11 MAC, an appropriate CW is the key to providing throughput and fairness. A small CW results in a high collision probability, whereas a large CW results in wasted idle time slots. In [6], Cali et al. showed that given the
number of active nodes, there exists an optimal CW that leads to the theoretical throughput limit, and when the number of active nodes changes, so does this optimal CW. Since in practice the number of active nodes always changes, letting each node attain and keep using the corresponding optimal CW requires estimating the number of active nodes. However, this is not an easy task in a network environment where a contention-based MAC protocol is used. To get around this difficulty, we are motivated to find another effective measure that also leads us to the optimal CW and hence the maximal throughput. We therefore focus on the average idle interval of the channel between two consecutive busy periods (due to transmissions or collisions), which each node observes locally. It has two merits. One is that, without complex computation, each node can obtain the average idle interval online by observation, which is quite simple since the DCF is in fact built on physical and virtual carrier-sensing mechanisms. In the following, we derive the relationship between the average idle interval and throughput through analysis. For simplicity, we assume the frame length is constant; later on, we will show that the performance is not sensitive to a variable frame length.

3.2 Analytical Study
In [6], the IEEE 802.11 DCF is analyzed under the assumption that, in each time slot, each node contends for the medium with the same probability p, subject to p = 1/(E[B] + 1), where E[B] is the average backoff timer and equals (E[CW] − 1)/2 for DCF. Strictly speaking, this assumption does not hold, because every node may use a different CW. Since our DOB enables each node to settle on a quasi-stable CW, we assume for simplicity that all nodes use the same, fixed CW. Consequently, we have

p = 2/(CW + 1)    (1)
as all the expectation signs E can be removed. We assume every node is active, i.e., it always has packets to transmit, and for every packet transmission the initial backoff timer is uniformly selected from [0, CW − 1]. Each virtual backoff time slot may be idle, busy due to a successful transmission, or busy due to a collision. Accordingly, we denote by t_slt, T_s, and T_col the time durations of the three types of virtual slots, respectively, and by p_idl, p_s, and p_col the associated probabilities. Thus, we can express the above probabilities as follows:

p_idl = (1 − p)^n,  p_s = np(1 − p)^(n−1),  p_col = 1 − p_idl − p_s    (2)
where n is the number of active nodes. Thus, the throughput is expressed as

ρ = T p_s / (t_slt p_idl + T_col p_col + T_s p_s)
  = T / (t_slt p_idl/p_s + T_col p_col/p_s + T_s)    (3)
where T is the transmission time of one packet, which can be obtained by subtracting the overhead from T_s. In the above formula, the term p_idl/p_s can be thought of as the average number of idle slots per successful transmission, and the term p_col/p_s as the average number of collisions per successful transmission. If we denote by L_idl the average idle interval, it can be expressed as

L_idl = p_idl / (1 − p_idl)    (4)
Considering Equations (1) and (2), this equation can be further written as

L_idl = 1 / ((1 + 2/(CW − 1))^n − 1)
      = 1 / ( n·(2/(CW − 1)) + ... + C(n, i)·(2/(CW − 1))^i + ... + (2/(CW − 1))^n )    (5)

where C(n, i) denotes the binomial coefficient.
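To make Equations (1)–(5) concrete, here is a small sketch in Python (ours, not part of the paper) that computes the slot probabilities and the average idle interval for a common CW, and checks the large-CW approximation L_idl ≈ (CW − 1)/(2n) discussed next:

```python
# Sketch of Equations (1)-(5): per-slot probabilities and average idle
# interval when all n nodes use the same contention window CW.
# Variable names are ours; this is illustrative, not the paper's code.

def slot_probabilities(cw, n):
    p = 2.0 / (cw + 1)                    # Eq. (1): access probability
    p_idl = (1.0 - p) ** n                # Eq. (2): all nodes silent
    p_s = n * p * (1.0 - p) ** (n - 1)    # Eq. (2): exactly one sender
    p_col = 1.0 - p_idl - p_s             # Eq. (2): two or more senders
    return p_idl, p_s, p_col

def avg_idle_interval(cw, n):
    p_idl, _, _ = slot_probabilities(cw, n)
    return p_idl / (1.0 - p_idl)          # Eq. (4); equals Eq. (5)

cw, n = 1024, 50
exact = avg_idle_interval(cw, n)
approx = (cw - 1) / (2.0 * n)             # the large-CW linear form
print(exact, approx)                      # the two values are close
```

For CW = 1024 and n = 50 the exact and approximate values agree to within a few percent, consistent with the approximation used in the analysis.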
In Equation (5), we can see that when CW is large enough, L_idl = (CW − 1)/(2n). As a matter of fact, this is the case when the network traffic load is heavy: to effectively avoid collisions, the optimal CW is then large enough for the approximation L_idl = (CW − 1)/(2n) to hold in our DOB, which is also verified through simulations. With Equations (2), (3), and (5), we can express the throughput as a function of L_idl, as shown in Fig. 1. Several important observations can be made. First, we find that every curve follows the same pattern; namely, as the average idle
Fig. 1. Throughput vs. average idle interval (n = 10 and n = 100; frame lengths 10, 20, 50, and 100)
Fig. 2. Average idle interval vs. Contention Window (n = 10 and n = 100, frame length 100)
interval L_idl increases, the throughput first rises quickly, and then decreases relatively slowly after reaching its peak. Second, although the optimal value of L_idl that maximizes throughput differs for different frame lengths, it varies in a very small range, which hereafter is called the optimal range of L_idl. Finally, this optimal value is almost independent of the number of active nodes. Therefore, L_idl is a suitable measure of network throughput.

Fig. 2 shows the relationship between L_idl and the contention window CW, as revealed in Equation (5). It can be observed that L_idl is almost a linear function of CW when CW is larger than a certain value. Specifically, in the optimal range of L_idl for aggregate throughput, say L_idl in [3, 8], we can estimate L_idl using the linear approximation

L_idl = (CW − 1)/(2n) − α    (6)

where α is a constant. Since we are interested in tuning the network to work at maximal throughput, given this nice linear relationship we can achieve the goal by adjusting the size of CW. In other words, each node can observe the average idle interval locally and adjust its backoff window accordingly such that the network throughput is maximized. Clearly, the above results hold when all nodes have the same CW. In reality, different nodes may have different CWs that fluctuate around the optimal CW. Next, we give a theorem stating that, given the average idle interval, i.e., given the idle probability p_idl, the achieved throughput will not be lower than the minimum throughput obtained under the condition that all nodes have the same CW.

Theorem: Given that the probability that a slot is idle is P and the number of active nodes is n, the throughput is minimal in the case p_1 = p_2 = ... = p_i = ... = p_n. Due to the fact that CW = 2/p_i − 1, it follows that each node has the same CW.
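As a quick numerical illustration of the theorem (our own check, not the omitted induction proof): fixing the slot idle probability P = Π_i(1 − p_i) and comparing equal versus unequal access probabilities shows that the equal case yields the smallest success probability:

```python
# Numerical illustration of the theorem: with the idle probability
# P = prod_i(1 - p_i) held fixed, the success probability
# p_s = sum_i p_i * prod_{j != i}(1 - p_j) is smallest when all p_i
# are equal. The probability values below are arbitrary illustrative picks.

def idle_prob(probs):
    prod = 1.0
    for p in probs:
        prod *= 1.0 - p
    return prod

def success_prob(probs):
    total = 0.0
    for i, p in enumerate(probs):
        others = 1.0
        for j, q in enumerate(probs):
            if j != i:
                others *= 1.0 - q
        total += p * others
    return total

P = 0.81
equal = [0.1, 0.1]               # (1 - 0.1)^2 = 0.81
skewed = [0.05, 1 - P / 0.95]    # 0.95 * (P / 0.95) = 0.81 as well
print(idle_prob(equal), idle_prob(skewed))      # both equal P
print(success_prob(equal), success_prob(skewed))
# the equal-probability case gives the smaller success probability
```

Since both configurations leave the same idle probability (hence the same observed L_idl), the equal-CW case is the worst case for throughput, which is the point the theorem makes.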
This theorem can be proved by mathematical induction; we omit the proof due to the page limit. The theorem reveals two important points. First, if we keep the average idle interval at the value corresponding to the peak throughput shown in Fig. 1, the achieved throughput will not be less than that peak, since the peak throughput is derived under the condition that all nodes have the same CW. Second, it shows that there is a tradeoff between throughput and fairness: if all nodes keep the same CW, which means the channel is fairly shared among all nodes, throughput is sacrificed. By detecting the average idle interval, each node can adjust its current CW around the optimal CW at runtime. Assume the observed current average idle interval is l_idl and the optimal CW corresponding to the optimal L_io is CW_o. Given Equation (6), we can estimate CW_o as

CW_o = (CW_c − 1)·(L_io + α)/(l_idl + α) + 1    (7)
where CW_c is the current CW. Clearly, we obtain the optimal CW while avoiding the difficult task of estimating the number of active nodes. We can then adjust the current CW based on Equation (7) so as to approach the optimal CW and hence tune the network to deliver high throughput. In the following, we give the tuning algorithm in detail.
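As a sketch, the tuning rule of Equation (7) amounts to a one-line update; here α = 0.5 and L_io = 5.9 are the values used later in the paper's simulations, while the function name and example numbers are ours:

```python
# Sketch of the CW tuning rule of Equation (7). ALPHA and L_IO follow
# the values used later in the paper (alpha ~ 0.5, L_io = 5.9); the
# function name and the example windows are ours.

ALPHA = 0.5   # the constant alpha of Equation (6)
L_IO = 5.9    # the target (optimal) average idle interval

def estimate_optimal_cw(cw_current, l_idl_observed):
    """Estimate CW_o from the locally observed average idle interval."""
    return (cw_current - 1) * (L_IO + ALPHA) / (l_idl_observed + ALPHA) + 1

# A long observed idle interval means the channel is under-used, so the
# window shrinks; a short one means heavy contention, so it grows.
print(estimate_optimal_cw(256, 12.0))   # idle interval too long -> CW shrinks
print(estimate_optimal_cw(256, 3.0))    # idle interval too short -> CW grows
```

Note that observing l_idl = L_io leaves the window unchanged, which is exactly the fixed point the tuning aims for.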
4 DOB Scheme
In this section, we describe the dynamically optimizing backoff (DOB) scheme, in which each node adapts its backoff process according to the observed average idle interval l_idl, which reflects the network traffic load. The goal is that each node always uses a contention window close to the optimal CW. Unlike the IEEE 802.11, DOB divides a node's backoff process into three stages, namely stages 0, 1, and 2. When a node starts backoff, depending on the initially chosen backoff timer (BT), it may enter stage 1 or 2. After a collision, a node may enter stage 0. At each stage, the node decrements its backoff timer when an idle time slot is detected. The node contends for the channel only when its BT reaches 0 and it is at stage 2. At the end of stages 0 and 1, a backoff node refreshes its CW based on the average idle interval l_idl observed in the previous stages, according to the formula revised from (7), so that a high aggregate throughput can be achieved. We introduce two parameters K_h and K_l as thresholds for the observed channel idle interval l_idl: K_h corresponds to high traffic load and K_l to low traffic load. When the observed l_idl is lower than K_h, the node increases its CW, and when l_idl is larger than K_l, the node decreases its CW. K_h and K_l are defined as follows:

K_h = K_h − (CW_i − 1)/CW_ct    (8)
K_l = K_l − (CW_i − 1)/CW_ct    (9)

where the K_h and K_l on the right-hand sides take the base values given later in Table 1.
Here, the range defined by [K_h, K_l] corresponds to the optimal range around L_io used in formula (7). The term (CW_i − 1)/CW_ct, where CW_i
is the current CW of node i and CW_ct is a constant common to all nodes, is introduced to enhance short-term fairness. This term can be understood as follows. In Equation (7), the new CW depends on the old one for every node. Ideally, each node would have the same CW when the network enters the steady state in the saturated case; in reality, when a new active node initializes its CW to CW_min, different from the CWs used by other nodes, and begins to transmit, or when the traffic load changes, the CWs of different nodes change and may differ for a short time, which degrades fairness. Independent of the average idle interval, the term −(CW_i − 1)/CW_ct makes nodes with a larger CW more likely to decrease it and nodes with a smaller CW more likely to increase it. We also define a modified version of L_io, denoted by L_c:

L_c = L_io + 0.5 − (CW_i − 1)/CW_ct    (10)
Here, we approximate α as 0.5. Obviously, we have K_h < L_io < K_l. Equation (7) then becomes

CW_o = (CW_c − 1)·L_c/(l_idl + 0.5) + 1    (11)
If we assume that finally CW_o = CW_c in the stable state and the CWs converge to the same value, we can express this value as

CW = (L_io + 0.5) / (1/(2n) + 1/CW_ct) + 1    (12)
which is verified by the average CW obtained in the following simulations. From the above equation, we can see that CW slips from the optimal value shown in Equation (6) because of the introduction of CW_ct, which represents a tradeoff between fairness and aggregate throughput. For instance, CW_ct = 250 when the number of nodes is less than or equal to 100. When the number of active nodes in the network is more than 100, as far as throughput is concerned, we can set a larger CW_ct to keep the CW in Equation (12) closer to the optimal value in order to increase throughput; however, this could slightly degrade fairness, as each node adapts its CW at a slower pace. To avoid measuring the average channel idle interval over too short a term, a node observes idle slots to calculate the average idle interval over at least an Observation Window (OW), i.e., a certain number of idle slots. DOB adopts different backoff processes for new transmissions and for retransmissions after a collision. In the case of a new transmission, a node, say node A, follows Algorithm I:

1. Node A uses its current CW if it has one; otherwise it selects CW_min as the current CW. It sets its backoff timer (BT) to uniform[0, CW−1] and, depending on the BT value, enters backoff stage 1 or 2.
2. At the end of stage 1, based on the observed l_idl:
   i) if l_idl > K_l, node A starts transmission immediately and decreases its CW as newCW = (CW − 1)·L_c/(l_idl + 0.5) + 1, as derived from Equation (7);
   ii) if l_idl < K_h, node A increases its CW as newCW = (CW − 1)·L_c/(l_idl + 0.5) + 1 and resets its BT as BT = uniform[0, newCW − CW]. Then node A enters backoff stage 2.

In the case of a collision, node A follows Algorithm II:

1. Node A sets its BT as BT = uniform[0, 2CW+1] without changing its CW, and enters stage 0.
2. At the end of stage 0, based on the observed l_idl:
   i) if l_idl > K_l, node A still uses the current CW and resets BT as BT = uniform[0, CW−1];
   ii) if l_idl < K_h, node A increases its CW as newCW = (CW − 1)·L_c/(l_idl + 0.5) + 1 and resets its BT as BT = uniform[0, newCW − 1]. Then node A enters stage 1 and follows step 2 in Algorithm I.
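The CW-refresh step shared by both algorithms (Equations (8)–(11)) can be sketched as follows; the parameter values follow the simulation settings of Table 1, the idle-interval observation in the loop is a toy stand-in for real channel monitoring, and the stage/backoff-timer machinery is omitted:

```python
# Sketch of DOB's CW refresh (Eqs. (8)-(11)), applied at the end of
# backoff stages 0 and 1. Threshold and constant values follow the
# simulation settings (Table 1); the toy idle-interval observation
# below stands in for real channel monitoring.

K_H, K_L, L_IO = 5.8, 6.0, 5.9   # base thresholds and target interval
CW_CT = 250                       # fairness constant CW_ct

def refresh_cw(cw, l_idl):
    """Return the refreshed CW given the observed average idle interval."""
    shift = (cw - 1) / CW_CT      # per-node shift from Eqs. (8)-(10)
    k_h, k_l = K_H - shift, K_L - shift
    l_c = L_IO + 0.5 - shift      # Eq. (10), with alpha ~ 0.5
    if l_idl > k_l or l_idl < k_h:
        # Eq. (11): shrinks CW when the channel is too idle,
        # grows it when the channel is too busy
        return (cw - 1) * l_c / (l_idl + 0.5) + 1
    return cw                     # within the optimal range: keep CW

n = 25          # assumed number of saturated nodes
cw = 16.0       # a new node starts from CW_min
for _ in range(10):
    # toy observation from Eq. (6): l_idl ~ (CW - 1)/(2n) - 0.5
    l_idl = max(0.1, (cw - 1) / (2 * n) - 0.5)
    cw = refresh_cw(cw, l_idl)

converged = (L_IO + 0.5) / (1.0 / (2 * n) + 1.0 / CW_CT) + 1   # Eq. (12)
print(cw, converged)   # the iterate settles close to the Eq. (12) value
```

With every node running this refresh, the individual CWs drift toward the common value of Equation (12), which is the mechanism behind the fairness claim.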
5 Performance Evaluation
In this section, we evaluate the performance of DOB through simulations carried out in OPNET [18]. For comparison purposes, we also present simulation results for the IEEE 802.11 DCF and FCR. In all simulations, we consider the basic MAC scheme; in other words, the RTS/CTS mechanism is not used. The DCF-related and DOB-related parameters are set as in Table 1. We assume each node is a Poisson source with the same arrival rate, which is increased until saturation in the simulations. As shown below, DOB exhibits better performance than the IEEE 802.11 and FCR in terms of throughput and fairness.

Table 1. Network configuration and backoff parameters

Parameter           Value      Parameter  Value
SIFS                10 μsec    Kh         5.8
DIFS                50 μsec    Kl         6.0
Slot Length         20 μsec    Lio        5.9
aPreambleLength     144 bits   OW         15
aPLCPHeaderLength   48 bits    CWct       250
Bit rate            1 Mbps
Fig. 3. Throughput vs. offered traffic with 10 nodes (DOB, FCR, and IEEE 802.11; packet sizes 2048 and 5120 bits)

Fig. 4. Throughput vs. offered traffic with 50 nodes (DOB, FCR, and IEEE 802.11; packet sizes 2048 and 5120 bits)
5.1 Throughput
First, we present the throughput obtained for the three schemes, i.e., DOB, FCR, and the IEEE 802.11, under different offered loads. Figs. 3 and 4 show the throughput results when the number of nodes is 10 and 50, respectively. In each figure, we also consider two different frame lengths: 2048 bits and 5120 bits. Note that the
frame length is the size of the payload data and does not include the MAC overhead, which is one reason the simulation results are lower than the theoretical values. It can be observed that when the traffic load is low, say lower than 0.6, the throughputs of DOB and the IEEE 802.11 are almost the same and equal to the offered load, whereas the throughput of FCR is lower. This can be explained as follows. For both DOB and the 802.11, since the offered load is low, MAC collisions are few and all the offered traffic can get through. In FCR, a node sets its CW to the minimum after a successful transmission, while the other nodes enlarge their CWs; in this way, once a node obtains the medium, it can probably transmit continuously. But FCR is not efficient in the non-saturated case, since a node does not have enough packets for continuous transmission. This observation is more pronounced as the number of nodes increases. When the traffic load becomes heavy and the network enters saturation, we see that the throughput of the 802.11 first increases with the traffic load, then slightly decreases after reaching its peak, and finally stabilizes at a certain value. This phenomenon is due to the fact that the maximum throughput of the 802.11 is larger than its saturated throughput. For both DOB and FCR, the throughput first increases with the traffic load and then becomes stable. In the stable state, the 802.11 yields the lowest throughput among the three schemes. FCR is much better than the 802.11 because it resolves collisions in the saturated case faster and more efficiently, and when a node seizes the channel it will continuously transmit with a very high probability, resulting in high channel utilization. Even so, our DOB outperforms FCR, though the difference in throughput dwindles as the number of nodes becomes large.
Fig. 5. Simulation results of throughput with 10 nodes (DOB, FCR, and IEEE 802.11 MAC, each with MinCW=16, MaxCW=1024)
Fig. 6. Simulation results of throughput with 50 nodes (DOB, FCR, and IEEE 802.11 MAC, each with MinCW=16, MaxCW=1024)
We also compare the throughputs of the three schemes as a function of the average packet size in the saturated case, where each node always has packets in its buffer waiting for transmission, as shown in Fig. 5 and Fig. 6, which correspond to networks of 10 and 50 nodes, respectively. In all cases, we find that the throughput increases with the average packet size. This is because in the saturated case, given the number of nodes, the probabilities p_idl, p_s and p_col are constant; according to Equation (3), the throughput then gets larger as T gets larger while the overhead stays the same. We also find that in all cases our DOB achieves the highest throughput. To sum up, the throughput performance of DOB is the best in both the non-saturated and the saturated case. This is attributed to the fact that DOB uses the average idle interval to adapt the CW towards the optimal CW, which efficiently resolves collisions and leads to high throughput. Compared with FCR, DOB overcomes FCR's inefficiency in the non-saturated case while being slightly more efficient at dealing with collisions in the saturated case.
5.2
Fairness
To evaluate the fairness of DOB, we adopt the following commonly accepted Fairness Index (FI) [19]:

FI = ( Σ_i T_i/φ_i )^2 / ( n · Σ_i (T_i/φ_i)^2 )    (13)

where T_i is the throughput of flow i and φ_i is the weight of flow i (here we assume all nodes have the same weight in the simulation). According to Equation (13), FI ≤ 1,
962
X. Tian, X. Chen, and Y. Fang
[Figure: instantaneous contention window (slots) vs. simulation time (0-80 s) for nodes 14, 18 and 21; 50 nodes, frame length 2048]

Fig. 7. Changes of CW in simulation with 50 nodes
[Figure: average contention window (slots) vs. simulation time (0-100 s) for nodes 14, 18 and 21; 50 nodes, frame length 2048]

Fig. 8. Average CW in simulation with 50 nodes
where equality holds only when all T_i/φ_i are equal. Normally, a higher FI means better fairness. Before comparing the fairness indices of DOB and the IEEE 802.11 (note that we do not include FCR, since it depends on an additional scheduling algorithm to achieve good fairness), we show how each node's CW changes in the course of the simulation. Fig. 7 shows the instantaneous change of the CW for three
[Figure: fairness index vs. number of nodes (10-100) for DOB and IEEE 802.11, measured over 10 s and 20 s periods]

Fig. 9. Fairness index
nodes that are randomly selected from a network of 50 nodes. We see that while the CWs fluctuate from time to time, they have close average values, as illustrated in Fig. 8. In Fig. 8, the average CW is about 450, close to the value of 464.286 obtained from Equation (12). Since DOB ensures that all nodes use roughly the same CW, close to the optimal value, it can be expected to achieve better fairness. (For FCR, because of the high probability of consecutive transmissions after a node gains the medium, FCR itself can adversely affect fairness without an additional fair scheduling mechanism.) Here, we compare the fairness index of DOB with that of IEEE 802.11, as shown in Fig. 9. Fig. 9 shows that the fairness of DOB within 10 s and 20 s periods is significantly improved over that of the IEEE 802.11. It can also be seen that as the number of nodes rises, the fairness drops quickly for 802.11, whereas for DOB the fairness only slightly decreases. This is because the CWs used by the DOB nodes are all close, as shown in Figs. 7 and 8.
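The fairness comparison above rests on Equation (13). As a quick illustration, Jain's index can be computed directly from per-flow throughputs; the sketch below is generic helper code of ours, not taken from the paper:

```python
def fairness_index(throughputs, weights=None):
    """Jain's fairness index, Equation (13):
    FI = (sum_i T_i/phi_i)^2 / (n * sum_i (T_i/phi_i)^2)."""
    if weights is None:
        weights = [1.0] * len(throughputs)  # equal weights, as in the simulation
    x = [t / w for t, w in zip(throughputs, weights)]
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

print(fairness_index([1.0, 1.0, 1.0, 1.0]))  # equal shares -> 1.0
print(fairness_index([1.0, 0.0, 0.0, 0.0]))  # one flow starves the rest -> 0.25
```

The index equals 1 exactly when every weighted throughput is equal, matching the equality condition stated after Equation (13).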
6
Conclusion
In this paper, we first showed that, under the assumption that all nodes use the same fixed contention window, an index called the average channel idle interval can indicate the network throughput and has a simple relationship with the optimal contention window CW that leads to the maximal throughput. Meanwhile, if all nodes use the same CW, they will fairly share the common channel. Both analysis and simulation show that our scheme has the following advantages. First, as shown in the analysis, the average channel idle interval is a suitable index for each node to grasp the network traffic situation, and it is insensitive to changes in packet length or in the number of active nodes. Each node
only needs to adjust its backoff process based on the observed average channel idle interval, avoiding the difficult task of estimating the number of active nodes in a changing network environment as required in [6]. Second, compared with the original IEEE 802.11, DOB achieves much higher throughput in the saturated case; compared with FCR, DOB overcomes its inefficiency in the non-saturated case. This is attributed to the fact that in DOB, each node always adjusts its backoff to approach the optimal CW, which leads to high throughput. Finally, DOB achieves good fairness since, when the network becomes stable, all nodes maintain almost identical average CWs.

This research was partially supported by the Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Exploratory Research, 17650020, 2005.
References

1. M. A. Visser and M. E. Zarki: Voice and data transmission over an 802.11 wireless network. IEEE PIMRC (1995)
2. J. Y. Yeh and C. Chen: Support of multimedia services with the IEEE 802.11 MAC protocol. Proc. ICCE2 (2002)
3. L. Bononi, M. Conti and E. Gregori: Runtime Optimization of IEEE 802.11 Wireless LANs Performance. IEEE Trans. Parallel Distrib. Syst. 15(1) (2004) 66-80
4. R. Bruno, M. Conti and E. Gregori: Distributed Contention Control in Heterogeneous 802.11b WLANs. WONS, St. Moritz, Switzerland, January 2005
5. Q. Pang, S. C. Liew, J. Y. B. Lee and V. C. M. Leung: Performance evaluation of an adaptive backoff scheme for WLAN. Wireless Communications and Mobile Computing 5(8) (2004) 867-879
6. F. Cali, M. Conti and E. Gregori: Dynamic tuning of the IEEE 802.11 protocol to achieve a theoretical throughput limit. IEEE/ACM Transactions on Networking 8(6) (2000) 785-799
7. G. Bianchi: Performance analysis of the IEEE 802.11 distributed coordination function. IEEE JSAC 18(3) (2000)
8. V. Bharghavan: Performance evaluation of algorithms for wireless medium access. IEEE International Computer Performance and Dependability Symposium IPDS'98 (1998) 142-149
9. Y. C. Tay and K. C. Chua: A capacity analysis for the IEEE 802.11 MAC protocol. ACM/Baltzer Wireless Networks 7(2) (2001)
10. J. H. Kim and J. K. Lee: Performance of carrier sense multiple access with collision avoidance protocols in wireless LANs. Wireless Personal Communications 11(2) (1999) 161-183
11. Y. Kwon, Y. Fang and H. Latchman: A novel MAC protocol with fast collision resolution for wireless LANs. IEEE INFOCOM (2003)
12. J. Weinmiller, H. Woesner, J. P. Ebert and A. Wolisz: Analyzing and tuning the distributed coordination function in the IEEE 802.11 DFWMAC draft standard. Proc. MASCOT (1996)
13. H. Wu, Y. Peng, K. Long, S. Cheng and J. Ma: Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement. IEEE INFOCOM 2 (2002) 599-607
14. H. S. Chhaya and S. Gupta: Performance modeling of asynchronous data transfer methods of IEEE 802.11 MAC protocol. Wireless Networks 3 (1997) 217-234
15. P. Yong, H. Wu, S. Cheng and K. Long: A new Self-Adapt DCF Algorithm. IEEE GLOBECOM 2 (2002)
16. H. Kim and J. Hou: Improving protocol capacity with model-based frame scheduling in IEEE 802.11-operated WLANs. Proc. of ACM MobiCom (2003)
17. T. Nandagopal, T.-E. Kim, X. Gao and V. Bharghavan: Achieving MAC layer fairness in wireless packet networks. Proc. ACM MOBICOM (2000) 87-98
18. OPNET Modeler. http://www.opnet.com
19. R. Jain, A. Durresi and G. Babic: Throughput Fairness Index: An Explanation. ATM Forum/99-0045 (1999)
Optimal Control of Packet Service Access State for Cdma2000-1x Systems

Cai-xia Liu¹, Yu-bo Tan², and Dong-nian Cheng

¹ PLA Information Engineering University, Henan, P.R. China, 450002
² PDL, Computer School, NUDT, Changsha, Hunan, P.R. China, 410073
[email protected]
Abstract. Packet data service access state control is an effective wireless resource control scheme for 3G systems. This paper proposes an optimal control mechanism for the packet data service access state, taking an integrated performance function of the mean packet waiting time W, the mean saving in signaling overhead S and the mean channel utilization U as the target function. By introducing a specific IBP model to better capture the characteristics of WWW traffic over cdma2000-1x systems, a system performance model is established. By analyzing the relationships between the service model parameters and the state transition control timer, this paper provides a practical reference for setting these system parameters.
1
Introduction
The proportion of data services is gradually increasing in new-generation mobile communication systems. In order to utilize wireless resources effectively and satisfy the needs of new services, IS-2000 systems introduced a new packet data access control mechanism suited to the bursty characteristics of packet data services, which uses four states to control the access process; the state transition process is shown in Fig. 1. Newly arrived packets trigger the transitions from the dormancy, suspended or control hold state to the active state, while upper-layer signaling together with the expiration timers controls the transitions between the other states. The expiration timers comprise the active timer Tactive, the control hold timer Thold and the suspended timer Tsuspended.
Fig. 1. Packet service states transition process

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 966–975, 2005.
© Springer-Verlag Berlin Heidelberg 2005
If a data user is in the active state, the base station (BS) starts Tactive after finishing transmitting the user's packets. If new packets arrive before Tactive expires, the BS stops Tactive. Otherwise, when Tactive expires, the system releases the traffic channels and the user transfers into the control hold state, while Thold is started. If Thold expires, the user enters the suspended state and Tsuspended is started. Similarly, if Tsuspended expires, the user transfers into the dormancy state. Thus, the values of the three timers determine how the BS marks a user's access state. For example, if the value of Tactive is too large, the user will occupy the traffic channels for a longer time, which may reduce channel utilization. On the other hand, a larger timing value can avoid rebuilding traffic channels repeatedly before sending packets, which not only reduces the packet waiting time but also saves the signaling overhead of rebuilding traffic channels. Conversely, a too-small timing value may decrease the traffic channel occupation time and improve channel utilization, but may increase the packet waiting time and signaling overhead. So it is very important to choose an appropriate timing value by jointly considering the critical system performance factors, such as channel utilization, packet waiting time and signaling overhead. At present, there are no rules to determine the timing values of Tactive, Thold and Tsuspended, and no methods for choosing them in the technology specification for cdma2000-1x systems. Reference [3] analyzed the thresholds of the MAC state transition timers based on a self-similar packet traffic model.
However, according to the mathematical description of "self-similarity", only traffic aggregated from a large number of ON/OFF sources approaches self-similar behavior, and the total service flux at a BS evidently does not possess this "large number" characteristic. This paper introduces a new method to determine the timing value based on a specific IBP (Interrupted Bernoulli Process) model, taking an integrated performance function of the mean packet waiting time W, the mean saving in signaling overhead S and the mean channel utilization U as the target function. The IBP model better captures the characteristics of packet data traffic over cdma2000-1x systems.
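The timer-driven transitions described above can be summarized as a small state machine. The sketch below is our own illustrative model of that behaviour (names are ours; the IS-2000 specification is the authoritative source for the exact rules):

```python
# Illustrative four-state access control: packets always promote a session
# to ACTIVE; each expiring timer (T_active, T_hold, T_suspended) demotes it
# one step: ACTIVE -> CONTROL_HOLD -> SUSPENDED -> DORMANT.
ACTIVE, CONTROL_HOLD, SUSPENDED, DORMANT = (
    "active", "control_hold", "suspended", "dormant")

DEMOTION = {ACTIVE: CONTROL_HOLD, CONTROL_HOLD: SUSPENDED, SUSPENDED: DORMANT}

def on_packet_arrival(state):
    """A newly arrived packet moves any session (back) into the active state."""
    return ACTIVE

def on_timer_expiry(state):
    """An expiring timer demotes the session one step; dormant stays dormant."""
    return DEMOTION.get(state, DORMANT)
```

For example, a dormant session that receives a packet becomes active again, and three successive timer expiries bring it back to dormant.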
2
Packet Service Model
2.1
Packet Service Characteristics
Generally, depending on the traffic source, packet data services may be divided into many categories, and about 80 percent of packet data traffic will be WWW traffic, so this paper mainly considers the characteristics of the WWW forward data service. The WWW forward service is generally described using a three-layer model: session layer, packet call layer and packet layer. One complete WWW service process of a user is called a "session", and one page request is called a "packet call". During transmission, the objects of each page request are packed into many
Fig. 2. Three-layer model of WWW service
Fig. 3. Sketch map of packet arrival
transmission layer packets. The interval between two packet calls is the user's reading time, as shown in Fig. 2. Usually, the state during which a page request is sent and the page is downloaded to the user is called the active state. The state in which the user is reading the downloaded page is called the dormancy state. Other packet services, such as FTP and WAP, have similar active and dormancy characteristics.
2.2
Packet Service Model
Based on references [4] and [5], which analyzed the traffic characteristics in cdma2000-1x systems, we adopt the IBP process to describe the arrival characteristics of packet service. The IBP is the discrete form of the IPP (Interrupted Poisson Process) [7], in which time is divided into slots of equal length and any slot is either in an ON period or in an OFF period. In any slot of an ON period, a packet arrives with probability a, and no packets arrive in OFF periods. Denote by p the transition probability that a slot in an ON period is followed by a slot in an OFF period, and by q the transition probability that a slot in an OFF period is followed by a slot in an ON period. If Don and Doff respectively denote the mean number of slots in an ON period and in an OFF period, then

Don = 1/p,  Doff = 1/q    (1)

We use the following IBP process to describe the WWW traffic flow in cdma2000-1x systems: (1) A packet call's duration conforms to a geometric distribution with mean Don, in units of one slot length; (2) the average number of packets in a packet call is Nd; (3) the length of the reading period between two adjacent packet calls, denoted tpc, conforms to a geometric distribution with mean Dpc, and no packets are assumed to arrive in reading periods; (4) the arrival interval of two adjacent packets in one packet call, denoted tp, approximately conforms to a geometric distribution with mean Dd when the number of slots in one packet call is large enough, where the arrival interval denotes the number of slots in which no packets arrive between two adjacent packets (as shown in Fig. 3).
Here, the length of a slot may be 20 ms, 40 ms or 80 ms, depending on the size of the packet. From the IBP process described above, we can deduce the following: (1) the probability that the reading period (OFF period) includes k slots, denoted ftpc(k), is ftpc(k) = p1(1 − p1)^(k−1), where p1 is the transition probability that a slot in an OFF period is followed by a slot in an ON period; from formula (1), p1 = 1/Dpc. (2) The probability that the packet arrival interval is k slots, denoted ftp(k), is ftp(k) = p2(1 − p2)^k, where p2 is the packet arrival probability in each slot of an ON period and p2 = 1/(Dd + 1).
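The IBP model above is straightforward to simulate. The following sketch is our own illustrative code (parameter names follow the text), generating a per-slot arrival indicator with p = 1/Don and q = 1/Dpc from formula (1):

```python
import random

def ibp_arrivals(num_slots, a, p, q, start_on=True, seed=0):
    """Simulate an Interrupted Bernoulli Process.

    Each slot is ON or OFF. In an ON slot a packet arrives with
    probability a; no packets arrive in OFF slots. p is the ON->OFF
    transition probability and q the OFF->ON probability, so the mean
    ON/OFF durations are D_on = 1/p and D_off = 1/q (formula (1)).
    Returns a list with a 0/1 arrival indicator per slot.
    """
    rng = random.Random(seed)
    on = start_on
    arrivals = []
    for _ in range(num_slots):
        arrivals.append(1 if (on and rng.random() < a) else 0)
        # State transition at the slot boundary.
        if on:
            on = rng.random() >= p   # stay ON with probability 1 - p
        else:
            on = rng.random() < q    # switch to ON with probability q
    return arrivals
```

For example, with Don = 500 and Dpc = 350 slots (values used later in the paper's evaluation), one would call `ibp_arrivals(num_slots, a, p=1/500, q=1/350)`.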
3
Performance Model
This section builds the integrated performance target function, which depends on the mean packet waiting time (W), the mean saving in signaling overhead (S) and the mean channel utilization (U). Clearly U, W and S all depend on Tactive, where 0 ≤ U ≤ 1 and Tactive denotes a number of slots. For simplicity, we define S as the probability of not needing to rebuild the traffic channel when a packet arrives, so 0 ≤ S ≤ 1. Usually, when a packet arrives at a queue in the BS, if its traffic channel already exists and no other packets are waiting in the queue, the BS will transmit the packet at the beginning of the next slot [2]. Since a packet's arrival instant is random within a slot, it is reasonable to assume the arrival instant is uniformly distributed over the slot interval, so the average minimum waiting time for any packet at a BS is half a slot length. If W is defined as the average waiting time in units of the slot length, then 0 < 1/(2W) ≤ 1. Thus formula (2) restricts the value of F to [0, 1], where ω1, ω2 and ω3 are three weights satisfying ω1 + ω2 + ω3 = 1:

F = ω1·U + ω2·S + ω3·(1/(2W))    (2)

Formula (2) will be used as the system target function; by tuning the values of ω1, ω2 and ω3, the target function can be adapted to systems that emphasize different performance parameters. Our objective is to maximize U, maximize S and minimize W (that is, to maximize 1/(2W)). However, optimizing the three parameters individually would result in three different (perhaps conflicting) operating points. A better optimization function is derived by combining all three parameters, so our objective is to:

Maximize( ω1·U + ω2·S + ω3·(1/(2W)) )

This ensures that the three parameters collectively optimize the overall system performance. In the following, we derive expressions for the three parameters U, S and W, and then derive the Tactive that optimizes the overall system performance.
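Given the bounds 0 ≤ U ≤ 1, 0 ≤ S ≤ 1 and 0 < 1/(2W) ≤ 1 (since W ≥ 0.5 slot), weights summing to one keep F in [0, 1]. A minimal sketch of the target function of formula (2), as an illustrative helper of ours rather than code from the paper:

```python
def target_f(u, s, w, w1=1/3, w2=1/3, w3=1/3):
    """Integrated target F = w1*U + w2*S + w3*(1/(2W)) of formula (2).
    u, s lie in [0, 1]; w is the mean packet waiting time in slots (w >= 0.5)."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * u + w2 * s + w3 * (1.0 / (2.0 * w))

# Best possible operating point (U = S = 1, W = half a slot) gives F = 1.
print(round(target_f(1.0, 1.0, 0.5), 6))
```

Tilting the weights toward, say, ω1 lets a deployment that cares most about channel utilization rank candidate Tactive values accordingly.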
3.1
Expression for S
Suppose A and B are two packets belonging to the same session, and B arrived at the BS just before A. If tA−B is the arrival interval between A and B and tB is the time the system needs to send B, then, when tA−B − tB ≤ Tactive, the traffic channel for the session is still active when A arrives. We assume packets at the BS need not be split when sent, i.e., the system finishes sending a packet within one slot, so tB = 1. The relationships among the various time parameters are shown in Fig. 3. Packet A may be the first packet arriving in an ON period, meaning the ON period is activated by the arrival of A; then packet B is the last packet that arrived before the previous ON period ended, i.e., the next slot entered an OFF period after B arrived. A may also be a packet arriving in any slot after an ON period was activated, in which case B is a packet that arrived before the ON period ended. If A is the first packet of an ON period, then tA−B equals the length of an OFF period, tpc, and if (tpc − 1) ≤ Tactive, the traffic channel for the session is still active when A arrives. If A arrives in any slot after an ON period was activated, then tA−B = tp, and if (tp − 1) ≤ Tactive, the traffic channel is likewise still active for A. Moreover, a packet arriving at the BS is the first packet of an ON period with probability 1/Nd, and some other packet with probability (Nd − 1)/Nd. So S, the probability of not needing to rebuild the traffic channel when a packet arrives, may be expressed as formula (3):

S = ((Nd − 1)/Nd)·(1 − (1 − p2)^(Tactive+2)) + (1/Nd)·(1 − (1 − p1)^(Tactive+1))    (3)

3.2
Expression for U
Assume that all sessions have the same bandwidth requirement, denoted C (bps); that all packets arriving at a BS have the same size, C·Tf/8 (bytes); and that the mean number of packet calls in the duration of a session is Npc. Then a session's average duration, denoted L, is L = Npc·(Don + Dpc). Define U as a traffic channel's utilization while the channel is occupied by a session. From the foregoing analysis, the average number of slots the system occupies for sending packets during a session, denoted Tu, is Tu = Nd·Npc. The amount of data sent is the same in every slot that has packets to send, namely C·Tf bits. However, during the active period of a session there may be slots with no packets to send while the session still occupies the traffic channel, which lowers the channel's utilization. Obviously the value of Tactive determines the channel's utilization. (1) Tactive = 0 means the system releases the traffic channel occupied by a session as soon as it finishes sending a packet of the session, so, ignoring the channel
release time, the number of additionally occupied slots equals zero, i.e., Te = 0. (2) Tactive > 0 means the session additionally occupies the traffic channel for (Dd1 − 1) slots if 1 < tp ≤ Tactive + 1, and for Tactive slots if tp > Tactive + 1, where Dd1 is the average packet arrival interval when 1 < tp ≤ Tactive + 1. So the average number of additionally occupied slots in an ON period of a session, denoted Ton, is:

Ton = (Nd − 1)·{(1 − p2)^2·[1 − (1 − p2)^Tactive]·(Dd1 − 1) + (1 − p2)^(Tactive+2)·Tactive},  (Tactive ≥ 1)

where Dd1 is expressed as:

Dd1 = (1 − p2)^2 + (1 − p2)^2·(1 − (1 − p2)^Tactive)/p2 − (Tactive + 1)·(1 − p2)^(Tactive+2),  (Tactive ≥ 1)

In the same way, the average number of additionally occupied slots in an OFF period of a session, denoted Toff, is:

Toff = (1 − p1)·[1 − (1 − p1)^Tactive]·(Dpc1 − 1) + (1 − p1)^(Tactive+1)·Tactive,  (Tactive ≥ 1)

where Dpc1 is the average length of an OFF period when 1 < tpc ≤ (Tactive + 1), expressed as:

Dpc1 = (1 − p1) + (1 − p1)·(1 − (1 − p1)^Tactive)/p1 − (Tactive + 1)·(1 − p1)^(Tactive+1),  (Tactive ≥ 1)
It’s easy to get the average number of the slots being additionally occupied in the duration of a session when Tactive ≥ 1, denoted by Te . Te = Npc · (Ton + Tof f ) So, the traffic channel’s utility, U , may be expressed as formula (4). Tu = U= Tu + Te
*
Nd ·Npc Nd ·Npc +Npc ·(Ton +Tof f )
1
$ =
, (Tactive ≥ 1) 1 , (Tactive = 0)
Nd Nd +Ton +Tof f
(4) 3.3
Expression for W
As described above: (1) a packet has to wait on average 1/2 slot at the BS before being sent, provided the buffer is empty and the traffic channel is active when the packet arrives; (2) if the traffic channels have been released, the packet has to request that they be rebuilt, provided there are free resources in the system. From Section 3.2, the occupation ratio of a traffic channel while a session is active is:

p3 = (Tu + Te)/L = { (Nd + Ton + Toff)/(Don + Dpc),  (Tactive ≥ 1);  Nd/(Don + Dpc),  (Tactive = 0) }
On the assumption that the wireless resources are fully occupied by M sessions when a packet belonging to a dormant session arrives, the probability that some traffic channel is released in any subsequent slot is p4 = 1 − p3^M. From the definition of the geometric distribution, the probability that there are no free resources until the (k+1)-th (k = 0, 1, 2, ...) slot after a packet has arrived is p4·(1 − p4)^k, which means the packet's mean waiting time for free resources is (1 − p4)/p4 slots. Supposing Ts slots are needed to rebuild a traffic channel, the mean waiting time of a packet belonging to a dormant session is 0.5 + (1 − p4)/p4 + Ts. Thus, the mean waiting time for any packet arriving at the BS is:

W = 0.5·S + (0.5 + (1 − p4)/p4 + Ts)·(1 − S)    (5)

3.4
Determination of the Optimal Tactive
From formulas (3), (4) and (5), the integrated performance function of U, S and 1/(2W) can be obtained. It is clear that the right-hand side of formula (2) depends on Tactive, and the value of Tactive that maximizes F can be derived from the derivative condition in formula (6):

d( ω1·U + ω2·S + ω3·(1/(2W)) ) / dTactive = 0    (6)

4
Performance Analysis
For convenience, we assume the parameters other than Tactive in the expressions of F, S, U and W take values according to Table 1. Figures 4-6 show the relationships between S, U, 1/(2W) and Tactive, respectively. Intuitively, the relationships between the three parameters and Tactive are directly affected by Nd, Dd and Dpc, which are determined by the statistical characteristics of packet service in cdma2000-1x systems. Figures 4-6 show that Dd has a large effect on the values of S, U and 1/(2W) when Tactive takes values in (0-50), and that when Tactive > 20 the values of U and 1/(2W) are strongly affected by Nd.

Table 1. Parameters Evaluation

parameter                        value
Nd (unit: slot)                  20, 30, 40
Dd (unit: slot)                  2, 4, 6
Dpc (unit: slot)                 250, 350, 500
Don (unit: slot)                 500
Ts (unit: slot)                  5
M (the maximum session number)   5
ω1, ω2, ω3                       1/3, 1/3, 1/3
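Since Tactive is an integer number of slots, the maximizer of F can also be found by a direct sweep instead of solving the derivative condition (6) in closed form. The sketch below implements S from formula (3), but, purely for illustration, replaces the U and 1/(2W) contributions with a hypothetical linear occupancy penalty; the function names and the penalty coefficient are ours, not the paper's:

```python
def saving_probability(t_active, n_d, p1, p2):
    """S from formula (3): probability that an arriving packet finds its
    traffic channel still active, for an active timer of t_active slots."""
    return ((n_d - 1) / n_d) * (1 - (1 - p2) ** (t_active + 2)) \
         + (1 / n_d) * (1 - (1 - p1) ** (t_active + 1))

def best_t_active(target, t_max=200):
    """Brute-force substitute for condition (6): the integer T_active in
    [0, t_max] that maximizes the given target function."""
    return max(range(t_max + 1), key=target)

# Table 1 values: Dpc = 350 -> p1 = 1/350; Dd = 2 -> p2 = 1/(2+1); Nd = 30.
p1, p2, n_d = 1 / 350, 1 / 3, 30
# Toy target: S minus a hypothetical per-slot occupancy penalty standing in
# for the U and 1/(2W) terms of formula (2).
toy_target = lambda t: saving_probability(t, n_d, p1, p2) - 0.002 * t
print(best_t_active(toy_target))  # S saturates quickly, so a small T_active wins
```

This mirrors the qualitative behaviour reported below: F first rises with Tactive, peaks, and then declines as the timer keeps channels occupied for too long.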
Fig. 4. Nd varies: S, U, 0.5/W as functions of Tactive
Fig. 5. Dd varies: S, U, 0.5/W as functions of Tactive
Fig. 6. Dpc varies: S, U, 0.5/W as functions of Tactive
Fig. 7. Nd varies: F as a function of Tactive
Fig. 8. Dd varies: F as a function of Tactive
Fig. 9. Dpc varies: F as a function of Tactive
Figures 7-9 depict the relationships between the system performance target function and Tactive when Nd, Dd and Dpc vary, respectively. Without loss of generality, we assume for now that ω1, ω2 and ω3 all take the value 1/3.
Figures 7-9 show the same phenomena: (1) when Tactive is small, F increases as Tactive increases and reaches a maximum value; (2) after F reaches the maximum, F decreases as Tactive keeps increasing. From the foregoing analysis, the value of Tactive that maximizes F is the target value that optimizes the system performance. Figures 7-9 also show that the system performance and the optimal Tactive are clearly affected by Nd and Dd: when Tactive takes values below 20, the system performance improves as Nd increases while the optimal Tactive varies little; when Tactive takes values below 100, the system performance decreases as Dd increases while the optimal Tactive increases; over the whole value range, the system performance varies little as Dpc varies, and in particular, when Tactive < 20, the system performance is almost unaffected by Dpc, as is the optimal Tactive. It can thus be seen that when the packet service statistical parameters Nd and Dd change in different cdma2000-1x systems, the optimal Tactive should be chosen anew, while if the other parameters change, the optimal Tactive need not be adjusted. Furthermore, from formula (2) it is clear that the optimal Tactive will take different values when the weights ω1, ω2 or ω3 change.
5
Conclusions
Packet data service access state control is an effective wireless resource control scheme for 3G systems, matched to the bursty characteristics of packet data services. This paper proposed an optimizing control mechanism for the packet data service access state by taking the integrated performance function of the mean packet waiting time W, the mean saving in signaling overhead S and the mean channel utilization U as the target function. By introducing a specific IBP model to better capture the characteristics of WWW traffic over cdma2000-1x systems, we established a system performance model based on the target function, introduced a calculation method for Tactive and analyzed the relationships between the service model parameters and Tactive. From the performance analysis we drew several conclusions: (1) in different value ranges, Nd and Dd have a large effect on the system performance and on the optimal Tactive; (2) when Nd or Dd changes, especially Dd, the system needs to newly select the optimal Tactive value. The service model adopted in this paper accords with the WWW service characteristics of cdma2000-1x systems, and using the integrated performance of U, S and W as the system performance target function has general significance. The analysis method in this paper applies in the same way to Thold and Tsuspended.
References

1. B. Mah, "An Empirical Model of HTTP Network Traffic," Proceedings of INFOCOM '97, 1997.
2. 3GPP2 A.S0001-A, "3GPP2 Access Network Interfaces Interoperability Specification," November 30, 2000.
3. Mainak Chatterjee and Sajal K. Das, "Optimal MAC State Switching for cdma2000 Networks," Proceedings of IEEE INFOCOM 2002, Vol. 1, pp. 400-406.
4. ITU: US TG 8/1, Radio Communication Study Group, "The radio cdma2000 RTT Candidate submission," Technical Report TR 45-5, Jun. 1998.
5. C. Comaniciu, N. B. Mandayam, D. Famolari, P. Agrawal, "Wireless Access to the World Wide Web in an Integrated CDMA System," IEEE Transactions on Wireless Communications, May 2003.
6. M. E. Crovella and A. Bestavros, "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes," IEEE/ACM Transactions on Networking, Vol. 5, No. 6, December 1997, pp. 835-846.
7. R. O. Onvural, "Asynchronous Transfer Mode Networks: Performance Issues," Artech House, 1994.
8. Y. S. Rao and A. Kripalani, "cdma2000 mobile radio access for IMT2000," IEEE International Conference on Personal Wireless Communication, 1999, pp. 6-15.
9. C. Comaniciu, N. B. Mandayam, D. Famolari, P. Agrawal, "Wireless Access to the World Wide Web in an Integrated CDMA System," IEEE Transactions on Wireless Communications, May 2003.
10. Peng Peng, "A Critical Review of Packet Data Services in Wireless Personal Communication Systems," Virginia Tech ECPE 6504: Wireless Networks and Mobile Computing, Spring 2000. http://fiddle.visc.vt.edu/courses/ecpe6504-wireless/projects
A Cross-Layer Optimization for Ad Hoc Networks*

Yuan Zhang, Wenwu Wu, and Xinghai Yang

School of Information Science and Engineering, Jinan University, Jinan 250022, China
{yzhang, wuww, ise_yangxh}@ujn.edu.cn
Abstract. The lack of an established infrastructure and the hostile nature of the wireless channel make the design of ad hoc networks a challenging task. The cross-layer design methodology, which has been strongly advocated in recent years, essentially aims to overcome the sub-optimality introduced by designing each layer in isolation. This paper explores one aspect of such optimizations, namely using multiple antennas at each node for receiving and transmitting data using directional beams. By jointly analyzing the MAC layer parameters and then choosing a suitable routing protocol, improvements over earlier published schemes are obtained. We also perform in-depth simulations to characterize the performance under various scenarios.
1 Introduction

The layering principle, which was originally proposed for wired networks, simplifies design and implementation and allows alternative layer implementations. After its huge success in the Internet, it was naturally extended to wireless networks as well. However, the characteristics of wireless networks, with their low link capacity and high bit error rates, differ from wired networks in several ways. In particular, concerns unique to a wireless ad hoc mobile environment include energy efficiency, link stability under mobility, routing in the absence of global location knowledge, and network scalability. As mentioned in [1], due to the many unique challenges posed by such networks, the traditional approach of optimizing performance by separately optimizing different layers of the OSI model may not be optimal. In order to obtain the best results, it might be necessary to perform optimization using information available across many layers. Such techniques are known as cross-layer design. Cross-layer design is able to improve network performance [2]. One example of the coupling, which has been addressed in [3,4,5], is between routing in the network layer and access control in the medium access control sub-layer. Using theoretical upper bounds on performance given in terms of capacity regions, [6] evaluates the effects of various design choices, with an emphasis on the MAC sub-layer. In [7] the cross-layer design addresses the joint problem of power control and scheduling for multi-hop wireless networks with QoS; it takes SINR and a minimum rate as constraints to minimize the total transmit power over the links. *
This work is supported by Science & Technology Foundation of Jinan University under Grant No. Y0520 and Y0519.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 976 – 985, 2005. © Springer-Verlag Berlin Heidelberg 2005
A Cross-Layer Optimization for Ad Hoc Networks
977
This paper in particular explores one aspect of such optimizations: the use of multiple antennas at each node for receiving and transmitting data using directional beams. With the current move towards higher frequency bands, the size of such multiple-element antennas is no longer a constraint, making these systems an interesting and viable alternative to current single-element omnidirectional antennas. In addition, to obtain the best performance, selecting the correct parameters at the MAC layer and a suitable routing protocol is also essential.
The rest of this paper is organized as follows. Section 2 discusses the physical layer and some of the characteristics and problems of multiple-antenna systems. Section 3 looks at the MAC layer, while Section 4 focuses on routing. Section 5 provides simulation results and characterizes the performance of the various schemes. Finally, Section 6 presents the conclusions and briefly outlines future work.
2 Physical Layer (PHY)
At the physical layer, two distinct decisions need to be made: the choice of antenna system and the choice of air interface. The choice of air interface for an ad hoc wireless system is surveyed in [8] and in numerous other papers in the literature. It will not be discussed here, however, since the selection of an air interface is based on a number of factors other than those considered in this paper; it is assumed that the factors considered here are orthogonal to those involved in selecting the air interface. This paper instead presents a more detailed discussion of the antenna systems.
Directional antennas can be classified into two main categories: switched beam antennas and steered beam antennas. In [9], the author presents some of the first results on using beamforming to achieve improved throughput and reduced end-to-end delay in a static ad hoc setting. The author argues that the performance observed is similar for static and dynamic environments since the same routing and MAC protocols are used. However, this is not true. At the physical layer, in a static environment, once the direction of a transmission has been determined, a node can cache this information; for any subsequent transmissions to/from this node, the beam can be directed in this direction (except in the event of node failure). In a dynamic environment, however, any of the scenarios below can occur:
1. The position of the receiver changes between different transmissions
This is a likely scenario even in a low-mobility, low-data-rate network. One way to overcome stale directions to receivers in a node's cache is to use HELLO packets, as described in [9, 10]. However, this can quickly impose an unnecessary overhead on the entire system. In addition, in a high-data-rate or otherwise densely populated network, this can lead to congestion and consequently a reduction in throughput.
An alternative (used in the simulations conducted for this paper) is to attempt to communicate with the receiver using the direction stored in the cache. After a predetermined number of failed attempts (4 in the simulations), the node either reverts to an omnidirectional transmission or starts sweeping each sector sequentially. Thus, if a node wants to communicate with a neighbor and does not
978
Y. Zhang, W. Wu, and X. Yang
find its direction in the cache, it initiates a RREQ by sweeping through each sector (for both beam-switching and beam-steering antennas).
2. The position of the receiver changes between different packets of the same transmission
This is a possibility in a moderate-to-high-mobility environment with moderate-to-large data transfers. The approach described in (1) above can be used to combat this problem. In addition, the ACKs transmitted by the receiver can be used to determine the direction of arrival and thus select the appropriate antenna for transmission.
3. The position of the receiver changes during a single packet transmission
Given the transfer rate (11 Mbps) and the typical packet size in 802.11 networks, this is not a very likely scenario for most ad hoc networks and so will not be considered.
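The cache-with-fallback behavior described in (1) and (2) above can be sketched as follows. This is a simplified model: `DirectionCache`, the `try_send` callback, and the sector count are hypothetical names introduced for illustration, while the retry limit of 4 matches the value used in the simulations.

```python
from dataclasses import dataclass, field

MAX_DIRECTIONAL_ATTEMPTS = 4  # retry limit used in the paper's simulations

@dataclass
class DirectionCache:
    """Per-node cache of the last-known beam sector toward each neighbor."""
    entries: dict = field(default_factory=dict)  # neighbor_id -> sector index

    def record(self, neighbor, sector):
        self.entries[neighbor] = sector

    def transmit(self, neighbor, try_send, sectors=8):
        """Attempt a directional send via the cached sector; on repeated
        failure (or a cache miss), sweep every sector sequentially.

        try_send(sector) is a hypothetical callback returning True when the
        handshake with `neighbor` succeeds on that beam.
        """
        sector = self.entries.get(neighbor)
        if sector is not None:
            for _ in range(MAX_DIRECTIONAL_ATTEMPTS):
                if try_send(sector):
                    return sector
        # Cached direction missing or stale: sweep each sector in turn
        # (the paper's other option is a single omnidirectional transmission).
        for s in range(sectors):
            if try_send(s):
                self.record(neighbor, s)
                return s
        return None
```

A node with a stale cache entry thus pays at most four failed directional attempts before the sweep refreshes the entry.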
3 Medium Access Control Layer (MAC)
Implementing a directional antenna system is intimately connected to the MAC layer, which should not only support directional transmission but also leverage the additional information to improve performance. A number of papers in the literature address this problem [10,11,12,13]. However, most of the protocols discussed in these papers are adapted from the IEEE 802.11 wireless standard, and in general they do not differ much in their proposed schemes. Consequently, leveraging the work done in these papers, the aspects of the MAC implemented in our simulations will be discussed and its merits and demerits evaluated. To support directional transmission, two main modifications need to be made to the MAC protocol.
1. Every node needs to maintain a list of its neighbors as well as the directions at which they are present with respect to the node. This list can be maintained proactively using HELLO packets, or by performing a 360° sweep to locate a node when there is a need to transmit to it.
2. A Directional NAV (DNAV) table needs to be included in the MAC layer [13]. This maintains a list of those directions in which transmissions need to be deferred due to existing communications in that area. In [13], this angle was set equal to the beamwidth of each directional antenna based on some assumptions made by the authors.
Some of the salient features of the MAC protocol implemented in this paper include:
• Directional RTS/CTS
The source node transmits its RTS using a directional beam in the supposed direction of the receiver. The receiver transmits the CTS back using a directional beam. The source repeats the RTS up to 4 times (a design parameter) before transmitting in omnidirectional mode.
• Omnidirectional Idle/Receive
All nodes listen to the channel in omnidirectional mode when not transmitting or receiving. This ensures that they can receive an RTS/CTS from any direction and also maintain an up-to-date DNAV table.
• Directional Data Exchange
Once the RTS/CTS handshake has been completed, the data is exchanged using directional beams.
In [13], a number of problems have been identified stemming from the use of a directional MAC protocol. While many of these problems, such as deafness and tradeoffs between spatial reuse and collisions, are inherent to directional transmissions, one problem identified there can be resolved using steerable antennas. By choosing a beam pattern with just enough gain to reach the destination, but not much farther, the interference caused to nodes in the vicinity of the destination can be significantly reduced. This has the added advantage of reducing the energy required for transmission, as will be illustrated in the simulations.
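The DNAV bookkeeping described in point 2 above can be illustrated with a minimal sketch. The class and method names are hypothetical, and the blocking angle is taken equal to the antenna beamwidth, as in [13].

```python
class DNAV:
    """Directional NAV: per-direction deferral entries.

    Each entry records the center angle (degrees) of an ongoing communication
    and the time at which the reservation expires.
    """
    def __init__(self, beamwidth=45.0):
        self.beamwidth = beamwidth      # blocking angle, set to the beamwidth
        self.entries = []               # list of (center_angle_deg, expiry_time)

    def set_nav(self, angle, now, duration):
        """Block a direction for `duration`, e.g. on overhearing an RTS/CTS."""
        self.entries.append((angle % 360.0, now + duration))

    def can_transmit(self, angle, now):
        """A directional transmission toward `angle` must defer if it falls
        inside any unexpired blocked sector."""
        angle %= 360.0
        for center, expiry in self.entries:
            if expiry <= now:
                continue
            # Shortest angular distance between the beam and the blocked center.
            diff = abs((angle - center + 180.0) % 360.0 - 180.0)
            if diff <= self.beamwidth / 2.0:
                return False
        return True
```

With a 45° beamwidth, a reservation centered at 90° defers any transmission within 90° ± 22.5° until the entry expires, while beams pointing elsewhere may proceed, which is exactly the spatial-reuse gain directional MACs aim for.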
4 Routing
As mentioned earlier, most of the effort in relation to directional antennas has been at the MAC layer; much less work has gone into developing routing protocols that leverage directional information at the routing level. Some of the work done in this area is presented below.
In [10], the authors evaluate the performance of DSR over their directional MAC (called DiMAC). They choose to exploit the greater range of directional antennas rather than their energy conservation property. In keeping with this, they propose a few optimizations to the traditional DSR algorithm. If DSR is implemented directly over a directional MAC, the destination node will likely reply to the first route request packet that reaches it. The authors optimize DSR by making the destination wait for a short period to hear from multiple routes before choosing the path with the lowest hop count. It can be argued, however, that this is not necessarily optimal, because the lowest hop count might imply transmitting over longer distances. This has the dual disadvantage of consuming more energy and causing more interference to other nodes in the neighborhood of these transmissions. Alternatively, it could be argued that a lower hop count means fewer nodes are involved in the communication, leaving more nodes free to process other routes. Hence, choosing which route to reply to is a tradeoff that depends on factors such as energy constraints and traffic density in the network.
Another optimization proposed in [10] is to have every intermediate node forward Route Request packets to only a fraction of the nodes in its neighborhood. This can reduce congestion in the system and conserve energy without adversely affecting throughput or delay. However, in a highly dynamic situation this strategy may not be effective and could lead to performance degradation.
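The destination-side choice just described (wait briefly, collect route requests, then reply to one) can be sketched as follows. The record format is hypothetical, and `choose_route_energy` illustrates the energy-aware alternative raised in the tradeoff discussion rather than the scheme of [10] itself.

```python
def choose_route(collected_requests):
    """Reply to the lowest-hop-count route among the RREQs gathered during
    the destination's short wait window, as proposed in [10]."""
    return min(collected_requests, key=lambda r: len(r["path"]))

def choose_route_energy(collected_requests, distance):
    """Energy-aware alternative: minimize the longest single-hop distance
    instead of the hop count, favoring shorter (cheaper) hops."""
    def max_hop(r):
        p = r["path"]
        return max(distance(a, b) for a, b in zip(p, p[1:]))
    return min(collected_requests, key=max_hop)

# Hypothetical collected requests, each carrying its source route:
reqs = [
    {"path": ["S", "A", "B", "D"]},
    {"path": ["S", "C", "D"]},
    {"path": ["S", "E", "F", "G", "D"]},
]
best = choose_route(reqs)  # the 2-hop route via C
```

Swapping the key function is all it takes to move along the tradeoff between hop count and per-hop transmission distance.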
In [11], the authors describe a network-aware MAC and routing protocol to achieve load balancing using directional antennas. The concept of zone-disjoint transmissions is used to reduce overlap of communications (both in terms of nodes as well as antenna
beams) and hence improve transmission. However, for this protocol to be implemented effectively, a number of tables containing location and routing-related information need to be proactively exchanged. The authors show that their combined MAC and routing protocol performs well in a static scenario compared to the DSR protocol; however, they noted that performance dropped under mobility.
Based on some of the results discussed above, it would seem that a suboptimal but easy approach is to use an appropriate routing protocol with directional antennas and a directional MAC, achieving performance similar to that of routing protocols specially modified for such situations. A number of papers [14,15] compare various routing protocols and analyze their characteristics. Some of the conclusions derived by their authors are included below.
• For most scenarios, one of AODV or DSR typically performs best, or at least on the same order as any protocol specifically tailor-made for such a scenario.
• In low-mobility, low-to-moderate traffic conditions, DSR tends to have the best performance due to its route caching and promiscuous listening features.
• In high-mobility or heavily congested networks, AODV works well since it has a very low control overhead.
• In large networks, DSR does not scale well. Consequently, protocols such as LANMAR or ZRP, which are based on hierarchical routing, are preferable.
Based on these conclusions, in the simulations performed, AODV and/or DSR will be used as the routing protocols without any modifications.
Fig. 1. Delay/Jitter and Throughput for Scenario 1
5 Performance Evaluation
In this section, the performance of the design discussed in the previous sections is presented, along with the advantages and disadvantages of the system. The simulations were performed using the QualNet simulator, version 3.7 [16].
5.1 Simulation Model
In these simulations, it is assumed that the gain of the directional antenna (compared to an omnidirectional antenna) is 15 dB. The 802.11b radio was used, which has an omnidirectional range of 250 m. The two-ray propagation model was used. Node mobility is simulated using the random waypoint model. Constant shadowing with an average of 4 dB is assumed. Fading has not been taken into account in any of the simulations. The 802.11 DCF was used as the MAC layer protocol.
5.2 Performance Criteria
The performance criteria used to evaluate the various schemes include:
1) Average end-to-end delay: This includes all possible delays caused by buffering during route discovery, queuing at the interface queue, retransmission delays at the MAC, and propagation and transmission delays. This parameter helps in determining whether packets will be delivered within given time constraints.
2) Throughput: The number of bits received by the server per second (excluding control data).
3) Signals Transmitted: The total number of signals transmitted per node. It can be used as an approximate indicator of energy consumed.
4) Packets Dropped due to Retransmission Limit: The number of packets dropped due to lack of acknowledgement from the receiver. This can be used as a measure of how well next-hop nodes are chosen to avoid interference with other communications. The retransmission limit was set to 4 for these simulations.
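For concreteness, the first two criteria can be computed from a per-packet trace; the trace format below is a hypothetical simplification, not QualNet's output format.

```python
def summarize(trace, duration):
    """Compute (average end-to-end delay, throughput) from a data-packet trace.

    trace: list of (send_time, recv_time_or_None, payload_bits); recv_time is
    None for packets that were never delivered. Control packets are assumed
    to be excluded from the trace, matching criterion 2.
    """
    delivered = [(s, r, b) for (s, r, b) in trace if r is not None]
    if not delivered:
        return 0.0, 0.0
    # End-to-end delay averaged over delivered packets only.
    avg_delay = sum(r - s for (s, r, _) in delivered) / len(delivered)
    # Received payload bits per second over the measurement window.
    throughput = sum(b for (_, _, b) in delivered) / duration
    return avg_delay, throughput
```

Criteria 3 and 4 are simple per-node counters maintained by the MAC and need no post-processing.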
Average end-to-end delay: This includes all possible delays caused by buffering during route discovery latency, queuing at the interface queue, retransmission delays at the MAC, and propagation and transmission delays. This parameter helps in determining whether packets will be delivered within given time constraints. Throughput: This is number of bits received by the server per second (excluding control data). Signals Transmitted: This is the total number of signals transmitted per node. It can be used as an approximate indicator of energy consumed. Packets Dropped due to Retransmission Limit: This is the number of packets that were dropped due to lack of acknowledgement from the receiver. This can be used as a measure of how well the next-hop nodes are chosen to avoid interference with other communications. The retransmission limit was set to 4 for these simulations. Signals Transmitted
250
Packets Dropped due to Retransmission Limit 9 8 7 6 5 4 3 2 1 0
200 150 100 50 0 AODV- DSR-Omni AODVDSRAODVDSROmni Switched Switched Steerable Steerable Protocol - Antenna
AODVOmni
DSR-Omni
AODVSwitched
DSRSwitched
AODVDSRSteerable Steerable
Protocol - Antenna
Fig. 2. Signals Transmitted and Packets dropped due to Retransmission Limit
5.3 Simulation Results
Scenario 1
This scenario models a low-density, moderate-mobility, moderate-traffic environment. 36 nodes were randomly placed in a 1500 × 1500 m area. Up to
40% of the nodes were involved in transmissions at any given time. The random waypoint mobility model was used, with node speeds varying between 0 and 10 m/sec and pause times of up to 30 sec. The DNAV angle was chosen to be 45°. The results are illustrated in fig. 1, from which a number of deductions can be drawn:
• DSR tends to outperform AODV for any type of antenna system, for both delay/jitter and throughput. This can be explained by the nature of this scenario, as described earlier, which enables DSR to perform better since it can cache routes.
• Steerable antennas provide the best performance in terms of both delay and throughput. This can be explained by the fact that steerable antennas have many patterns to choose from; hence, it is possible to choose an antenna pattern that either allows transmissions to more distant nodes or reduces interference with neighboring transmissions.
• The throughput of directional antennas is at least on the order of that of omnidirectional antennas, and generally better.
• Beam switching using AODV seems to have the best throughput in this scenario (though it has the second-highest delay/jitter characteristics). This can be explained by the fact that some of the gains of DSR over steerable beams were negated by the low control overhead of AODV.
Fig. 2 illustrates the signals transmitted per node as well as the number of packets dropped due to the retransmission limit. As can be seen, beam steering requires the lowest total number of signals to be transmitted. This again can be explained by the fact that, of the three antenna techniques, beam steering typically provides the best pattern to a particular destination; as a result, fewer packets are dropped due to mobility or interference. In addition, the pattern of signals transmitted is consistent with the number of packets dropped due to the retransmission limit. DSR, which uses both larger control packets and route caching, has a greater number of packets dropped. This is because route caching can cause the wrong node to be chosen as the next hop (the problem of stale routes in DSR, as discussed in [9, 10]). In short, DSR is not as energy-efficient as AODV, even though it has better delay-throughput characteristics.
Scenario 2
This scenario models a moderate-to-heavy-density, moderate-to-high-mobility and moderate-to-heavy-traffic environment. 100 nodes were randomly placed in a 1500 × 1500 m area. Between 10 and 50% of the nodes were involved in transmissions at any given time. The random waypoint mobility model was used, with node speeds varying between 0 and 10 m/sec and pause times of up to 10 sec. The DNAV angle was chosen to be 45°.
The trends observed in fig. 3 are similar to those observed for Scenario 1, with one main exception: in most cases, the performance of AODV is very similar to that of DSR. This is because, as mobility increases, the low overhead of AODV overcomes the route caching advantage of DSR. It is also interesting to note that both directional antenna systems have almost similar performance, which is almost twice as good as the omnidirectional case. It was expected that the ability of a beam-steering antenna to "follow" the destination would
enable it to perform better than beam switching. However, this is not the case. A possible explanation is that the expiry time chosen for direction-of-neighbor information at the MAC layer was optimized for this particular scenario; hence, the beam-switched antenna did not transmit too many signals in the wrong direction, resulting in similar throughputs. This hypothesis is further borne out by fig. 4: the number of signals transmitted and packets dropped due to the retransmission limit are almost the same for the beam steering and beam switching cases, supporting the argument made earlier. These graphs also clearly illustrate the advantages that AODV has over DSR in a high-mobility situation: in all cases, AODV requires fewer signals to be transmitted and has only a fraction of the number of packets dropped, for the reasons given earlier. Also, both directional antenna systems require half or fewer signals to be transmitted compared to the omnidirectional case, illustrating the energy savings that can be obtained from these schemes.
Fig. 3. Delay/Jitter and Throughput for Scenario 2
Fig. 4. Signals Transmitted and Packets Dropped for Scenario 2
6 Conclusions
Based on the simulations conducted, a number of conclusions can be drawn:
• Considerable gains can be obtained from the deployment of directional antennas.
• Directional antennas can be used either for bridging network partitions or for reducing energy consumption.
• In [10], it was concluded that in static environments directional antennas do not perform well in dense networks. However, the results obtained here do not bear out this conclusion: as shown above, considerable gains can be obtained by deploying directional antennas even in dense networks and under mobility.
• The choice of routing protocol, based on the network's characteristics, is important for extracting maximum gains.
• Beam switching typically performs a little worse than beam steering, though its overall performance is on the same order. However, since it is easier to implement, beam switching can be considered a "cheaper alternative" to a full adaptive beam steering solution.
• The DNAV angle used for directional 802.11-based MACs should be set close to the beamwidth of the antenna pattern for optimal performance.
• Conventional routing protocols for ad hoc networks using directional MACs show performance gains similar to those reported for routing protocols optimized for directional transmissions.
While a number of useful results have been obtained, many research challenges remain in this area. Foremost would be the development of a routing protocol that can harness the power of directional antennas without significantly increasing the overhead (and thus negating any gains obtained). In addition, the MAC protocol can be optimized to better support long-range transmissions to bridge network partitions. It would also be interesting to investigate whether a single node could support multiple communications using different elements of a sectored antenna; the tradeoffs in terms of energy consumption and interference versus reduced latency would be useful to observe. Finally, efforts should be made to translate information on network congestion, as determined by packets dropped or other metrics, into usable information at the application layer, so that in order to meet constraints such as latency or throughput, factors such as coding and compression could be adjusted accordingly.
References
[1] Goldsmith, A. J., Wicker, S. B.: Design Challenges for Energy-Constrained Ad Hoc Wireless Networks. IEEE Wireless Commun. Mag., 9 (2002) 8-27
[2] Shakkottai, S., Rappaport, T. S., Karlsson, P. C.: Cross-layer Design for Wireless Networks. IEEE Commun. Mag., 41 (2003) 74-80
[3] Ayyagari, D., Michail, A., Ephremides, A.: Unified Approach to Scheduling, Access Control and Routing for Ad-hoc Wireless Networks. IEEE VTC, 1 (2000) 380-384
[4] Barrett, C., Marathe, A., Marathe, M. V., Drozda, M.: Characterizing the Interaction between Routing and MAC Protocols in Ad-hoc Networks. ACM MobiHoc (2002) 92-103
[5] Santhanam, A. V., Cruz, R. L.: Optimal Routing, Link Scheduling and Power Control in Multi-hop Wireless Networks. IEEE INFOCOM, 1 (2003) 702-711
[6] Toumpis, S., Goldsmith, A. J.: Performance, Optimization and Cross-layer Design of Media Access Protocols for Wireless Ad hoc Networks. IEEE ICC, 3 (2003) 2234-2240
[7] Kozat, U. C., Koutsopoulos, I., Tassiulas, L.: A Framework for Cross-layer Design of Energy-efficient Communication with QoS Provisioning in Multi-hop Wireless Networks. IEEE INFOCOM, 23 (2004) 1446-1456
[8] Leiner, B., Nielson, D., Tobagi, F.: Issues in Packet Radio Network Design. Proceedings of the IEEE, 75 (1987) 6-20
[9] Ramanathan, R.: On the Performance of Ad Hoc Networks with Beamforming Antennas. ACM MobiHoc (2001) 95-105
[10] Choudhury, R. R., Vaidya, N. H.: Impact of Directional Antennas on Ad Hoc Routing. ICPWC (2003) 590-600
[11] Roy, S., Saha, D., Bandyopadhyay, S., Ueda, T., Tanaka, S.: A Network-Aware MAC and Routing Protocol for Effective Load Balancing in Ad Hoc Wireless Networks with Directional Antenna. MobiHoc (2003) 88-97
[12] Nasipuri, A., Ye, S., You, J., Hiromoto, R. E.: A MAC Protocol for Mobile Ad Hoc Networks Using Directional Antennas. IEEE WCNC, 3 (2000) 23-28
[13] Choudhury, R. R., Yang, X., Ramanathan, R., Vaidya, N.: Using Directional Antennas for Medium Access Control in Ad Hoc Networks. MobiCom (2002) 59-70
[14] Royer, E. M., Toh, C. K.: A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks. IEEE Personal Communications, 6 (1999) 46-55
[15] Perkins, C. E., Royer, E. M., Das, S. R., Marina, M. K.: Performance Comparison of Two On-demand Routing Protocols for Ad Hoc Networks. IEEE Personal Communications, 8 (2001) 16-28
[16] QualNet Simulator, Version 3.7, Scalable Network Technologies, www.scalable-networks.com
A Novel Media Access Control Algorithm Within Single Cluster in Hierarchical Ad Hoc Networks
Dongni Li¹ and Yasha Wang²
¹ School of Computer Science Technology, Beijing Institute of Technology, Beijing 100081, China, [email protected]
² Institute of Software Engineering, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China, [email protected]
Abstract. Media Access Control (MAC) is one of the most critical issues for wireless ad hoc networks. In a hierarchical ad hoc network, all the nodes in the same cluster share the wireless channel. Current MAC algorithms can hardly adapt to both light and heavy traffic loads, and thus cannot perform well under the markedly and frequently changing traffic loads of ad hoc networks, in which nodes keep moving in and out of clusters. A multi-token MAC (MTM) algorithm is proposed in this paper, for use within a single cluster in hierarchical ad hoc networks. It automatically compromises between CSMA/CA in IEEE 802.11 and the token scheduling in the cluster-head-gateway switching routing (CGSR) algorithm. Simulation results show that in an ad hoc network with active nodes moving in and out of clusters, MTM gives a better throughput ratio and average packet delay.
1 Introduction
In hierarchical ad hoc networks, nodes are aggregated into clusters. All nodes in a cluster can communicate with a cluster-head and possibly with each other [17]. Media Access Control (MAC) is one of the most important issues in ad hoc networks. Across clusters, frequency or code division can be adopted to access the wireless channel; within a cluster, special MAC algorithms should be used to allocate the shared channel among competing nodes [6], [17].
Usually, there are two types of MAC algorithms in wired or wireless local area networks, referred to as competition algorithms [8], [9], [10], [11] and queue algorithms [2], [4], [5] in this paper. In a competition algorithm, nodes attempting to transmit randomly compete for the shared channel. If a node wins the competition, it has the right to transmit data at once; otherwise it participates in the next competition after waiting a random amount of time. A typical example of the competition algorithms is carrier-sense multiple access with collision avoidance (CSMA/CA) in IEEE 802.11. In a queue algorithm, all the nodes are polled in some order, and a node transmits only in its turn. A typical example of the queue algorithms is the one proposed for the cluster-head-gateway switching routing (CGSR) algorithm [17], hereafter referred to as TSC (token scheduling in CGSR), which is similar to the Token Ring in IEEE 802.5.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 986 – 995, 2005. © Springer-Verlag Berlin Heidelberg 2005
A Novel Media Access Control Algorithm Within Single Cluster
987
Generally, when the network load is light, the competition algorithms perform better than the queue type with respect to the aggregate throughput over all flows in the network [3], [1], because under this situation the channel is mostly idle: in a competition algorithm, nodes can transmit without any delay [14], [7], whereas nodes in a queue algorithm have to wait for their turn even if the channel is available. On the other hand, when the network load is heavy, the queue algorithms give better aggregate throughput. This is because a heavy load results in many collisions in competition algorithms, while the queue algorithms let the nodes transmit in turn under the control of a scheduling scheme, thus avoiding collisions and increasing channel utilization [12], [13].
Generally, when designing a local area network, the number of nodes and the network traffic are estimated first, and the result is used to decide which type of MAC algorithm to deploy. Unfortunately, this method is not suitable for an ad hoc network, mainly for the following two reasons:
1. The distribution of nodes in an ad hoc network is often uneven, resulting in different node densities in different clusters (i.e., the number of nodes contained in each cluster differs). A cluster with more nodes generally has a heavier load. If we select the same MAC algorithm for all clusters in advance, whether competition or queue, the performance of the whole network cannot be optimized. On the other hand, selecting different MAC algorithms for different clusters according to their loads does not solve the problem either: in ad hoc networks, nodes may move to other clusters, two clusters may merge into one, or one cluster may split into two, all of which may change cluster node densities. A MAC algorithm chosen in advance cannot always meet the requirements of such changes.
2. If we implement two algorithms on every node, i.e., a competition algorithm and a queue algorithm, and make each node switch between them according to the current cluster load, new problems arise. First, the two-MAC-algorithm configuration increases the complexity of each node's protocol stack; second, when a node switches from one algorithm to the other, the sudden change in the MAC layer may have a negative impact on the application's QoS (quality of service); third, when the load of a cluster is between light and heavy, the choice of algorithm is hard to make.
In order to solve the above-mentioned problems in ad hoc networks, a novel MAC algorithm is proposed in this paper. It compromises between CSMA/CA and TSC according to the varying network load, and can be deployed by all the clusters in a hierarchical ad hoc network. Under light traffic load it behaves more like CSMA/CA; as the traffic load increases, it tends towards TSC. It should be noted that MTM is used within a single cluster, while across clusters CDMA or FDMA can be deployed, which is beyond the scope of this paper.
The rest of the paper is organized as follows. Section 2 presents our proposed multi-token MAC algorithm (hereafter referred to as MTM). Section 3 discusses some key elements of MTM. Section 4 describes the simulation model and discusses the simulation results. Section 5 concludes the paper.
988
D. Li and Y. Wang
2 Proposed Multi-token MAC Algorithm

MTM is proposed to solve the aforementioned problems within a single cluster in which all nodes share the same channel. We model a hierarchical ad hoc network as described in [17]: all nodes in a cluster are one hop from the cluster-head and can communicate with it. Only the nodes holding tokens have a chance to transmit, and the tokens are controlled by the cluster-head.

Initially, every node in the cluster is allocated a token, and the nodes (the cluster-head as well as the cluster members) begin to measure the network load in the cluster. The network load can be in one of three states: heavy, normal, and light. Based on the cluster-head's measurements, if the network load is heavy, the cluster-head removes tokens until the load state becomes normal or only one token remains active in the cluster; conversely, if the network load is light, the cluster-head inserts more tokens into the cluster until the load becomes normal or every node holds a token. Consider the extreme situations: if every node has a token, MTM is effectively CSMA/CA, while if only a single token is active in the cluster, MTM is effectively TSC.

MTM can be divided into two parts: determination of the network load state, and management of tokens.

2.1 Determination of Network Load State

MTM allows multiple tokens to exist at the same time, and CSMA/CA is deployed among the competing token holders. As specified in CSMA/CA, in case of a collision a node backs off for a random amount of time. If collisions happen more and more frequently, the back-off time gets longer as a consequence; this means that the network load is getting heavier and the transmission delay longer. We therefore calculate, for each node, a weighted average of the number of collisions it encountered recently, and take it as the measure of the network load state. Every node in the cluster maintains the following set of variables:

1. Collision number (denoted a_i), which records the number of collisions the node encountered while attempting to transmit a frame: a_0 denotes the number of collisions encountered while transmitting the current frame, a_1 the number while transmitting the previous frame, and in general a_i the number while transmitting the i-th most recent frame.

2. Collision number window (denoted w), which denotes the number of a_i values contributing to the weighted average of the number of collisions.
3. Weight vector (denoted V). V = (θ_0, θ_1, ..., θ_{w−1}), where θ_0 ≥ θ_1 ≥ ... ≥ θ_{w−1} ≥ 0 and Σ_{i=0}^{w−1} θ_i = 1.

4. Weighted average of the number of collisions (denoted C), which denotes how many collisions the node encountered, on average, while attempting to transmit a frame:

   C = Σ_{i=0}^{w−1} a_i · θ_i    (1)
A Novel Media Access Control Algorithm Within Single Cluster
5. Threshold of light load (denoted Tl) and threshold of heavy load (denoted Th). Obviously Th > Tl > 0. Both denote numbers of collisions during the transmission of a frame.
While transmitting frames, the node counts the number of collisions. Whenever the node has performed a successful transmission, or the transmission has been canceled (because the number of retransmissions exceeded the retry limit), it calculates C according to (1). If C ≥ Th, the network load state is considered heavy; if C ≤ Tl, it is considered light; otherwise it is considered normal.
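As a rough illustration of this detection rule, the weighted-average computation and the threshold comparison can be sketched as follows (variable names and the example window, weights, and thresholds are ours, not the paper's):

```python
def load_state(collisions, weights, t_light, t_heavy):
    """Classify the network load from the last w per-frame collision counts.

    collisions[0] is the count for the current frame, collisions[i] the
    count for the i-th most recent frame; C follows equation (1).
    """
    assert abs(sum(weights) - 1.0) < 1e-9          # weights must sum to 1
    c = sum(a * t for a, t in zip(collisions, weights))  # weighted average C
    if c >= t_heavy:
        return "heavy"
    if c <= t_light:
        return "light"
    return "normal"

# Example: w = 4, decreasing weights, thresholds Tl = 1 and Th = 3.
state = load_state([5, 4, 3, 2], [0.4, 0.3, 0.2, 0.1], 1, 3)
```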
[Figure: token-management flowchart. Under heavy load, if more than one token is active, the cluster-head removes a token; if a non-time-limiting token exists in the cluster, it sends a control message to that token's holder, changing the token into a time-limiting token. Under light load, tokens are inserted.]
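The token-management policy described in Section 2 can likewise be sketched (a minimal rendering of the stated rules only; the control messaging and time-limiting tokens are omitted):

```python
def adjust_tokens(state, active_tokens, num_nodes):
    """Return the new token count after one detection round (a sketch).

    Per Section 2, the cluster-head removes tokens under heavy load
    (down to a single token) and inserts tokens under light load
    (up to one token per node); under normal load it does nothing.
    """
    if state == "heavy" and active_tokens > 1:
        return active_tokens - 1   # remove a token
    if state == "light" and active_tokens < num_nodes:
        return active_tokens + 1   # insert a token
    return active_tokens           # normal load, or at a boundary
```

At the boundaries the two extremes named in the text appear: `active_tokens == num_nodes` behaves like CSMA/CA, while `active_tokens == 1` is pure token scheduling.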
Node A:
Direction  Action     Starting time       Duration time
A->B       Sending    0                   10+Ack_Duration
A->C       Sending    40+2Ack_Duration    20+Ack_Duration

Node B:
Direction  Action     Starting time       Duration time
A->B       Receiving  0                   10+Ack_Duration
B->C       Sending    10+Ack_Duration     30+Ack_Duration

Node C:
Direction  Action     Starting time       Duration time
B->C       Receiving  10+Ack_Duration     30+Ack_Duration
A->C       Receiving  40+2Ack_Duration    20+Ack_Duration

Node D:
Direction  Action     Starting time       Duration time
D->E       Sending    60+3Ack_Duration    40+Ack_Duration

Node E:
Direction  Action     Starting time       Duration time
D->E       Receiving  60+3Ack_Duration    40+Ack_Duration
Fig. 3. Scheduling tables of Nodes
3.1 Distributed Arrangement of Transmission Periods of Nodes
In this phase, we build a scheduling table at each node in order to arrange the transmission periods of all nodes. We assume that all nodes are fully connected and time-synchronized, so that all PS nodes can wake up at almost the same TBTT. At the TBTT, each node stays awake for an ATIM window interval. In the ATIM window, each node can send two kinds of frames: the ATIM frame, defined as ATIM(sender, receiver, duration time), and the ATIM ACK frame, defined as ATIM ACK(starting time, duration time). Each node keeps a separate timer that records the beginning of the available transmission time. If a node has buffered frames for a PS node, it sends that PS node an ATIM frame within the ATIM window period, whose duration field indicates how long the remaining transmission will be. On receiving the ATIM frame, the indicated PS node responds with an ATIM ACK carrying the starting time and duration to the sender of the ATIM frame, completing the reservation of the data frame transmissions.

Our main idea is illustrated in detail by Fig. 2. In the ATIM window, five PS nodes send ATIM frames, but only four are announced successfully, in sequence: ATIM(A, B, 10), ATIM(B, C, 30), ATIM(A, C, 20), and ATIM(D, E, 40). On receiving ATIM(A, B, 10), node B constructs ATIM ACK(0, 10+ACK duration), where ACK duration equals 2SIFS+ACK, and replies to A. Meanwhile, all nodes learn that the channel is busy during (0, 10+ACK duration) and update their timers to 10+ACK duration. On receiving ATIM(B, C, 30), C constructs ATIM ACK(10+ACK duration, 30+ACK duration) and sends it back to B; all nodes update their timers to 40+2ACK duration. Then, on receiving ATIM(A, C, 20), C constructs ATIM ACK(40+2ACK duration, 20+ACK duration) and sends it to A; all nodes update their timers to 60+3ACK duration. At last, on receiving ATIM(D, E, 40), E constructs ATIM ACK(60+3ACK duration, 40+ACK duration) and sends it to D; all nodes update their timers to 100+4ACK duration. Now all nodes can decide their transmission periods and build their own scheduling tables, as shown in Fig. 3.
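The timer bookkeeping in this example can be reproduced with a short simulation (a sketch; `ACK_DUR` stands for the 2SIFS+ACK duration and is given a symbolic unit value here):

```python
ACK_DUR = 1  # stands in for 2*SIFS + ACK; symbolic unit for illustration

def schedule(atim_frames):
    """Assign each announced (sender, receiver, duration) a start time.

    Mirrors the example: every node tracks a single timer marking the
    beginning of available transmission time and advances it after each
    successful ATIM / ATIM ACK exchange.
    """
    timer = 0
    tables = {}
    for sender, receiver, dur in atim_frames:
        entry = (f"{sender}->{receiver}", timer, dur + ACK_DUR)
        tables.setdefault(sender, []).append(("Sending",) + entry)
        tables.setdefault(receiver, []).append(("Receiving",) + entry)
        timer += dur + ACK_DUR   # all nodes update their timers
    return tables

tables = schedule([("A", "B", 10), ("B", "C", 30), ("A", "C", 20), ("D", "E", 40)])
```

Running this on the four announced ATIM frames yields exactly the start times quoted above: 0, 10+ACK duration, 40+2ACK duration, and 60+3ACK duration.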
IEE-MAC: An Improved Energy Efficient MAC Protocol

3.2 Transmission of Data Frames for PS Nodes
At the end of the ATIM window, each PS node that successfully transmitted or received an ATIM frame during the ATIM window stays awake to exchange its data frames, and then enters the doze state again according to its individual scheduling table. However, if the gap between the current ending time and the next starting time is small (specifically, in our simulation, less than two SIFS durations), the node does not enter the doze state.

We continue the above example to explain this operation. After the ATIM window and one SIFS time, A and B wake up and exchange the frames announced in the ATIM window. When the exchange of its traffic is complete, A goes to the doze state. Then C wakes up according to its schedule, and B continues to transmit data frames to C. As noted above, after receiving data frames from A, B does not enter the doze state, because the gap between its current ending time and its next starting time in B's scheduling table is too small. After completing the transmission to C, B goes to the doze state. At the same time, A wakes up again and exchanges data frames with C, following the same rule as B and C. Then, A and C enter the doze state until the end of the beacon interval, since there are no further entries in their scheduling tables. At last, D and E wake up and exchange their data frames; when the exchange of their traffic is complete, D and E go to the doze state.

3.3 Adjusting the ATIM Window Size
In the PSM of IEEE 802.11, the ATIM window has a fixed size. Thus, nodes may stay awake for the whole ATIM window unnecessarily, because they are not allowed to enter the doze state until this fixed awake duration ends. In the worst case, when no node has data packets to send, all nodes stay awake and simply waste their energy. In our proposed scheme, each node measures how long the channel has been continuously idle during the ATIM window. If a node senses that the channel has been idle for more than DIFS + CWmax/2, we assume that no node has buffered frames to send. As a result, all nodes can end the ATIM window and enter data transmission or the doze state according to their individual schedules. Using this method, we can dynamically adjust the ATIM window according to the actual traffic load of the network, conserve more power at PS nodes, and improve the network throughput.
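The early-termination check amounts to a single comparison (a sketch; the numeric DIFS and CWmax values in the example are illustrative 802.11-style figures, not taken from the paper):

```python
def atim_window_can_end(idle_time, difs, cw_max):
    """End the ATIM window early once the channel has stayed idle
    longer than DIFS + CWmax/2, i.e. no node has buffered frames."""
    return idle_time > difs + cw_max / 2

# Illustrative DSSS-style values in microseconds: DIFS = 50,
# CWmax = 1023 slots of 20 us each (our assumption for the example).
DIFS_US = 50
CWMAX_US = 1023 * 20
```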
4 Performance Evaluation
To evaluate the performance of the proposed IEE-MAC protocol, we implement the following in ns-2 with the CMU wireless extensions [15]: IEE-MAC, the energy-efficient MAC protocol proposed by Wu et al. [13] (hereafter referred to as Wu-MAC), the dynamic power saving mechanism (DPSM) proposed in [11], the PSM scheme of IEEE 802.11, and IEEE 802.11 without any power saving mechanism. We refer to the last two schemes as PSM and NO-PSM, respectively. In our simulations, we evaluate the above protocols in terms of the following two metrics.
B. Gao, Y. Yang, and H. Ma
– Aggregate throughput in the network.
– Total data delivered per unit of energy consumption (Kbits delivered per joule): this metric measures the amount of data delivered per joule of energy.

4.1 Simulation Model
For convenient comparison, we use a simulation model similar to that of [11]. The duration of each simulation is 20 seconds in a wireless LAN. Each flow transmits CBR (Constant Bit Rate) traffic, and the traffic rate is varied across simulations. The channel bit rate is 2 Mbps and the packet size is fixed at 512 bytes. To measure energy consumption, we use 1.65 W, 1.4 W, 1.15 W, and 0.045 W as the power consumed by the wireless network interface in the transmit, receive, and idle modes and in the doze state, respectively. Each node starts with enough energy that it will not run out during the simulations. All simulation results are averages over 30 runs. The length of the beacon interval is 100 ms; the ATIM window size in 802.11 is 5 ms [9]; our protocol uses a flexible ATIM window size.

Based on the above experimental setting, we carry out two groups of experiments. In the first group, we vary the total network load in order to observe its effect on throughput and energy consumption; the simulated network loads are 10%, 20%, 40%, and 60%, measured as a fraction of the 2 Mbps channel bit rate. In the second group, we change the number of nodes to explore its effect on throughput and energy consumption; the number of nodes is chosen to be 10, 20, 30, 40, and 50. In all cases, half of the nodes transmit packets to the other half.

4.2 Simulation Results
Figure 4 plots the aggregate throughput against the network load with a fixed number of nodes (30). NO-PSM has the best throughput, since its nodes stay awake at all times and it has no extra overhead from the ATIM scheme. The throughput of IEE-MAC and Wu-MAC is very close to that of NO-PSM, because both methods decrease the possibility of frame collisions. However, the throughput of PSM degrades when the network load is high, since a highly loaded network needs more time for data transmission while PSM spends extra channel capacity on the ATIM window. In addition, the performance gap between NO-PSM and DPSM widens as the network load increases, because the impact of collisions grows as well.

Fig. 4. Aggregate throughput vs. network load
Fig. 5. Aggregate data delivered per joule vs. network load

Figure 5 shows the dependence of the total data delivered per joule on the network load. As in Fig. 4, the number of nodes is fixed at 30. We can see clearly that our protocol has the best energy efficiency among all these protocols. In particular, the energy efficiency of NO-PSM is the worst of all the protocols, since its nodes are always awake and there are no energy savings from the doze state. However, the energy efficiency of NO-PSM increases as the network load grows, because more energy is used for transmitting frames rather than wasted in the idle state. The energy efficiency of PSM is better than that of NO-PSM under low and medium network load. When the network load is low, the improvement of PSM over NO-PSM is slight, since a node cannot enter the doze state if it has even one packet to transmit or receive. As the network load increases, the energy gain from PSM becomes larger, because more of the awake time is used to transmit or receive data frames. Finally, when the network load is high, the energy efficiency of PSM degrades to even lower than that of NO-PSM; the reason is again that a highly loaded network needs more time for data transmission, while PSM spends extra channel capacity on the ATIM window. The energy gains of Wu-MAC and DPSM also shrink as the network load grows, for the same reason. However, our proposed IEE-MAC has the best performance in all cases, because our mechanism avoids overhearing and collisions and thus decreases energy consumption; in addition, it adjusts the ATIM window adaptively, so PS nodes are not kept awake unnecessarily.

Figure 6 illustrates the effect of the number of nodes on throughput at a fixed network load (40%) when using our proposed IEE-MAC, Wu-MAC, DPSM, PSM, and NO-PSM schemes, respectively. NO-PSM performs best, since its nodes stay awake at all times. The throughput of PSM and DPSM decreases rapidly as the number of nodes in the network grows, especially for large network sizes, since more contending nodes cause more packet collisions and the ATIM window consumes extra channel capacity. On the contrary, as shown in Fig. 6, the throughput of our IEE-MAC and of Wu-MAC degrades only slightly as the number of nodes increases, because these two schemes employ a scheduled-transmission mechanism that significantly decreases the possibility of frame collisions.
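The per-joule metric behind Fig. 5 follows directly from the per-mode power values given in Section 4.1 (a sketch; the time split in the example is invented for illustration):

```python
# Power draw (watts) per radio mode, as used in the simulations.
POWER = {"tx": 1.65, "rx": 1.4, "idle": 1.15, "doze": 0.045}

def kbits_per_joule(delivered_kbits, seconds_per_mode):
    """Total data delivered per joule of energy consumed."""
    energy = sum(POWER[m] * t for m, t in seconds_per_mode.items())  # joules
    return delivered_kbits / energy

# A node that delivered 2000 Kbits while spending 5 s transmitting,
# 5 s receiving, 2 s idle, and 8 s dozing (hypothetical split):
eff = kbits_per_joule(2000, {"tx": 5, "rx": 5, "idle": 2, "doze": 8})
```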
The aggregate data delivered per joule for these five protocols versus the number of nodes in the network is depicted in Fig. 7. Our proposed IEE-MAC always performs better than the other four protocols. In particular, the data delivered per joule of PSM and NO-PSM decreases significantly as the number of nodes increases, because there are more collisions among the nodes and the nodes need more time to finish their data transmissions. The energy efficiency of Wu-MAC experiences a similar degradation as the number of nodes increases. However, the reason is different from the case of PSM and NO-PSM: in Wu-MAC, more energy is wasted on overhearing as the number of nodes increases. On the contrary, the energy efficiency of IEE-MAC remains stable in most cases and degrades only slightly at very large network sizes. This occurs because our mechanism eliminates overhearing and collisions, and conserves energy efficiently when the network size is large.

Fig. 6. Aggregate throughput vs. the number of nodes
Fig. 7. Aggregate data delivered per joule vs. the number of the nodes
5 Conclusions
We have presented an improved energy efficient MAC (IEE-MAC) protocol for IEEE 802.11-based ad hoc networks. The proposed protocol makes four main contributions. First, to avoid unnecessary frame collisions and backoff waiting time in data frame transmissions, the protocol schedules all nodes to transmit their frames to PS nodes sequentially. Second, the protocol avoids overhearing, so that nodes save energy while other nodes are transmitting. Third, IEE-MAC allows a node to enter the doze state as soon as it has finished transmitting its packets, rather than staying awake until the end of the beacon interval. Last, to conserve more power at PS nodes and improve channel utilization, the protocol uses an adaptive strategy to dynamically adjust the ATIM window size according to the actual traffic load. Simulation results show that IEE-MAC achieves the best performance in terms of throughput and energy efficiency compared with the other energy efficient MAC protocols mentioned previously.
References

1. Singh, S., Woo, M., Raghavendra, C.S.: Power-aware routing in mobile ad hoc networks. In: MobiCom '98: Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, ACM Press (1998) 181–190
2. Xu, Y., Heidemann, J., Estrin, D.: Geography-informed energy conservation for ad hoc routing. In: Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking, Rome, Italy, USC/Information Sciences Institute, ACM (2001) 70–84
3. Chang, J.H., Tassiulas, L.: Energy conserving routing in wireless ad-hoc networks. In: INFOCOM (1). (2000) 22–31
4. Monks, J., Bharghavan, V., Hwu, W.-m.W.: A power controlled multiple access protocol for wireless packet networks. In: INFOCOM. (2001) 219–228
5. Feeney, L.M., Nilsson, M.: Investigating the energy consumption of a wireless network interface in an ad hoc networking environment. In: IEEE INFOCOM. (2001)
6. Tseng, Y.C., Hsu, C.S., Hsieh, T.Y.: Power-saving protocols for IEEE 802.11-based multi-hop ad hoc networks. Comput. Networks 43 (2003) 317–337
7. Zorzi, M., Rao, R.: Is TCP energy efficient? In: Proceedings of the Sixth IEEE International Workshop on Mobile Multimedia Communications. (1999)
8. Krashinsky, R., Balakrishnan, H.: Minimizing energy for wireless web access with bounded slowdown. In: MobiCom 2002, Atlanta, GA (2002)
9. LAN MAN Standards Committee of the IEEE Computer Society: IEEE Std 802.11-1999, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specification. IEEE (1999)
10. Woesner, H., Ebert, J.P., Schlager, M., Wolisz, A.: Power saving mechanisms in emerging standards for wireless LANs: The MAC level perspective. IEEE Personal Communications 5 (1998) 40–48
11. Jung, E.S., Vaidya, N.H.: An energy efficient MAC protocol for wireless LANs. In: INFOCOM. (2002)
12. Choi, J.M., Ko, Y.B., Kim, J.H.: Enhanced power saving scheme for IEEE 802.11 DCF based wireless networks. In: PWC. (2003) 835–840
13. Wu, S.L., Tseng, P.C.: An energy efficient MAC protocol for IEEE 802.11 WLANs. In: CNSR 2004. (2004) 137–145
14. Ye, W., Heidemann, J., Estrin, D.: Medium access control with coordinated adaptive sleeping for wireless sensor networks. IEEE/ACM Trans. Netw. 12 (2004) 493–506
15. The CMU Monarch Project: The CMU Monarch Project's wireless and mobility extensions to ns.
POST: A Peer-to-Peer Overlay Structure for Service and Application Deployment in MANETs Anandha Gopalan and Taieb Znati Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, U.S.A {axgopala, znati}@cs.pitt.edu
Abstract. Ad-hoc networks are an emerging technology with immense potential. Providing support for large-scale service and application deployment in these networks, however, is crucial to making them a viable alternative. The lack of infrastructure, coupled with the time-varying characteristics of ad-hoc networks, brings about new challenges for the design and deployment of applications. This paper addresses these challenges and presents a unified, overlay-based service architecture to support large-scale service and application deployment in ad-hoc networks. We discuss the main functionalities of the architecture and describe the algorithms for object registration and discovery. Finally, we evaluate the proposed architecture using simulations; the results show that the architecture performs well under different network conditions.
1 Introduction
Advances in wireless technology and portable computing along with demands for greater user mobility have provided a major impetus towards development of an emerging class of self-organizing, rapidly deployable network architectures referred to as ad-hoc networks. Ad-hoc networks, which have proven useful in military applications, are expected to play an important role in commercial settings where mobile access to a wired network is either ineffective or impossible. Despite their advantages, large-scale deployment of services and applications over these networks has been lagging. This is due to the lack of an efficient and scalable architecture to support the basic functionalities necessary to enable node interaction. Several challenges must be addressed in order to develop an effective service architecture to support the deployment of applications in a scalable manner. These challenges are related to the development of several capabilities necessary to support the operations of nodes, which include: Object registration and discovery, Mobile node location, and Traffic routing and forwarding. Node mobility, coupled with the limitation of computational and communication resources, brings about a new set of challenges that need to be addressed
This work has been supported by NSF award ANI-0073972.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1006–1015, 2005. © Springer-Verlag Berlin Heidelberg 2005
in order to enable an efficient and scalable architecture for service and application deployment in MANETs. In addition to object information, peers must also register their location and mobility information to facilitate peer1 interaction. This information, however, changes dynamically due to peer mobility. Efficient mechanisms must, therefore, be in place to update this information as peers move.

The main contribution of this paper is a novel Peer-to-peer Overlay STructure (POST) that allows for service and application deployment in MANETs. This paper takes a unique peer-to-peer based approach to providing a scalable, robust and efficient framework, along with the protocols and algorithms that allow for large-scale service and application deployment in MANETs. POST is efficient in that it does not require nodes to maintain routing information. Bootstrapping in POST does not require knowledge of other POST nodes: a node only needs to know the hash function used and the location of the zone virtual manager service in order to register and query for objects.

The main components of POST are zones, virtual homes, and mobility profiles. A zone is a physical area in the network that acts as an information database for objects in the network. The zones are organized as a virtual DHT-based structure that enables object location through distributed indexing. The DHT uses a virtual structure that is tightly coupled to the physical structure of the network to locate the peers where object information is stored. The virtual home of a node is the physical area where the node is most likely located. When a node is mobile, it leaves its mobility information behind with select proxy nodes within its virtual home. This information constitutes the mobility profile of the node and consists of its expected direction and speed of travel.

The rest of the paper is organized as follows: Section 2 reviews related work; Section 3 describes the network characteristics used in POST; Section 4 presents the components of the system architecture and the algorithms used; Section 5 discusses the simulations and the ensuing results; and Section 6 concludes the paper and identifies areas of future work.
2 Related Work
Service discovery for ad-hoc networks is still a very new area of research. Some protocols for service location and discovery have been developed for LANs, namely the Service Location Protocol (SLP) [2] and the Simple Service Discovery Protocol (SSDP) [16]. SLP relies on agents to search for and locate services in the network: a user agent searches for services on behalf of users, a service agent advertises services on behalf of a server, and a directory agent collects the advertisements sent out by the service agents. SSDP uses a specific protocol and port number to search for and locate services in the network; HTTP over UDP is used on the reserved local multicast address 239.255.255.250, along with the SSDP port, while searching for services. Neither SLP nor SSDP can be used directly in MANETs, due to their reliance on an existing network structure. 1
We will use the term node and peer interchangeably in this paper.
CAN [12] provides a distributed, Internet-scale hash table. The network is divided into zones according to a virtual co-ordinate system, where each node is responsible for a virtual zone. Given (key, value) pairs, CAN maps the key to a point P in the co-ordinate system using a uniform hash function; the corresponding (key, value) pair is stored at the node that owns the zone containing P. We differ from CAN in that several nodes in a zone hold object information. Also, a zone is not split when a new node arrives, so that overhead is avoided, and mobility is incorporated by using the mobility profile management base.

The Landmark routing hierarchy [11] provides a set of algorithms for routing in large, dynamic networks. Nodes in this hierarchy have a permanent node ID and a landmark address that is used for routing. The landmark address consists of a list of the IDs of nodes along the path from this node to a well-known landmark node. Location service is provided in the landmark hierarchy by mapping node IDs to addresses: a node X chooses its address server by hashing its node ID, and the node whose value matches or is closest to the hash value is chosen as X's address server. GLS [6] provides a distributed location information service in mobile ad-hoc networks; combined with geographic forwarding, it can be used to achieve routing in the network. A node X "recruits" a node that is "closest" to its own ID in the ID space to act as its location server. POST differs from GLS and the Landmark scheme by using a group of nodes (selected from within a zone) to act as the object location server. The information is stored in such a manner that only k out of N fragments are necessary to reconstruct it. This increases the robustness of POST, since the information remains available even after the failure of some nodes.

[5, 4, 14] use the concept of home regions. Each node is mapped to an area (using a hash function) in the network that is designated as its home region. The home region holds the location information about the mobile nodes that map to this location. A node updates its location information by sending updates to its home region. In our scheme, a node does not keep updating its virtual home; rather, it leaves a trail behind that other nodes can use to locate it. Ekta [3] integrates distributed hash tables into MANETs and provides an architecture for constructing distributed applications and services. POST differs by not using Pastry [13] as the DHT; the DHT is constructed in a manner that allows it to take advantage of the available location information, and POST also takes node mobility into consideration. [8, 15] provide a basis for resource discovery in MANETs. Our main contribution compared to these works is the use of a DHT-based system for resource discovery in MANETs.
3 POST: Network Characteristics

3.1 Zones and Virtual Homes (VHs)
Consider an ad-hoc network covering a specific geographical area, denoted by Λ. We perceive this area to be divided into zones, as shown in Fig. 1. A zone, for example, can be an administrative domain where geographical proximity facilitates
Fig. 1. Partitioning the network into zones
communication between nodes of the zone. A zone Zi is uniquely characterized by an identifier and contains one or more neighborhoods. A neighborhood is defined as a physical area within a zone of the network. Each zone also has a zone virtual manager service (ZVMS) that, given a zone id, returns the physical co-ordinates of the zone. The ZVMS is collaboratively provided by a group of nodes within the zone.

The zones form a virtual DHT-based distributed database that holds object information. The virtual DHT maps objects into zones, where the object information is stored. The novelty of this approach is that the virtual DHT structure is tightly coupled to the physical structure of the network: each physical zone is responsible for holding information about the objects that map to it. The database is thus a fixed-size hash table with one entry per zone in the network. To ensure that hashing object keys to zones creates no hot spots in the network, the hash function is chosen to be a uniform hash function.

The virtual home of a node is the neighborhood within a zone where the node is most likely to be located, and is defined by its physical co-ordinates. The VH of a node can change over time, depending on the node's mobility. For example, a user may be at home at time t1 and at the office at time t2, in which case the new virtual home is characterized by the co-ordinates of the office. A user can choose to provide this information while registering, thus facilitating location of the user. Hence, the virtual home of a node refers to the actual physical location of the node and also reflects user behavior. A node is characterized by a unique identifier and its virtual home. Each node knows the physical location of the neighborhood where the ZVMS is provided.

A node A, upon departure from its VH, leaves behind some information so that other nodes can use it to locate A. This involves building the MPMB (mobility profile management base), which comprises a set of proxy nodes responsible for holding A's mobility information. This information contains the expected direction and speed of travel of A and is called the mobility profile.

Each object is characterized by its object id and a unique key. Any node wishing to register or query for an object O in the network must first calculate a hash value (using a system-wide hash function) based on the key associated with O. This hash value yields the zone id of the zone responsible for holding information about O.

Zone Virtual Manager Service: Within a zone, the zone virtual manager service is mapped to a neighborhood and is collaboratively provided by the set of nodes whose VHs map to this neighborhood. Given a zone id, the ZVMS returns the physical co-ordinates of the corresponding zone.
Table 1. Key range

Zone    Interval    Lower Limit           Upper Limit
...     ...         ...                   ...
zi      Ii          (i − 1) · τ/αz        (i · τ/αz) − 1
...     ...         ...                   ...
zαz     Iαz         (αz − 1) · τ/αz       (αz · τ/αz) − 1
Zone Key Management: A zone is responsible for holding information about the objects whose keys hash to that zone, so an algorithm is needed to assign keys to zones. Let the entire key space be of size τ. τ is divided into αz intervals, where αz is the total number of zones in the network. Each interval contains α keys, where α = τ/αz. Let the zones be z1, z2, ..., zαz, and let I1, I2, ..., Iαz be the intervals, each containing a set of keys. In POST, each zone zi is responsible for the keys corresponding to interval Ii. This is expressed in Table 1.
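Table 1's interval assignment is straightforward to compute (a sketch with τ and αz as parameters; we assume αz divides τ evenly, as α = τ/αz suggests):

```python
def zone_for_key(key, tau, num_zones):
    """Map a key in [0, tau) to the zone responsible for it, per Table 1:
    zone z_i owns keys (i-1)*tau/num_zones .. i*tau/num_zones - 1,
    with zones numbered from 1."""
    alpha = tau // num_zones        # keys per interval
    return key // alpha + 1

# With tau = 1024 keys and 8 zones, each zone owns 128 consecutive keys.
```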
4 POST: System Architecture and Services

4.1 Object Registration and Discovery
Object Registration: A peer A managing a collection of objects registers an object by hashing the object key to obtain a hash value, which maps to the zone id of the zone Z that A must register with. Node A queries its ZVMS to get the physical co-ordinates of Z, then sends Z a message with its virtual home, the object key, and the other attributes related to O. The nodes in Z collaboratively register this information. Algorithm 1 details the process by which node A, holding an object O with key ki, registers this information (H = hash function).

Object Discovery: Object discovery is performed in a manner similar to object registration. A peer A wishing to locate an object in the network first calculates the hash value by hashing the object key. This hash value maps to the zone id of a zone Z. Node A queries its ZVMS to get the physical co-ordinates of Z and, using directional routing, sends Z a request for information about the location of the object. The nodes in Z reply with a list of peers that own this resource. Algorithm 2 details the process by which node A discovers information about an object O with key ki.
Algorithm 1. Object Registration
(1) Calculate Zid = H(ki)
(2) Query ZVMS to get the co-ordinates of the zone Z with id Zid
(3) Use directional routing to send a message to Z to register
(4) Nodes in Z register the object information
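Algorithms 1 and 2 share the same hash-then-route skeleton, which can be sketched as follows (the ZVMS lookup and directional routing are replaced by in-memory stubs, and the hash function and zone count are our own illustrative choices, since the paper does not fix them):

```python
import hashlib

NUM_ZONES = 64  # illustrative; the paper leaves the zone count open

def zone_id(key):
    """A system-wide uniform hash H mapping an object key to a zone id."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_ZONES

def register_object(key, my_vh, zvms, route):
    """Algorithm 1: hash the key, look up the zone's co-ordinates via the
    ZVMS, then send the registration via directional routing (stubbed)."""
    coords = zvms(zone_id(key))              # steps (1)-(2)
    route(coords, ("REGISTER", key, my_vh))  # steps (3)-(4)

def discover_object(key, zvms, route):
    """Algorithm 2: same skeleton; the zone's nodes reply with owners."""
    coords = zvms(zone_id(key))
    return route(coords, ("QUERY", key))

# In-memory stand-ins for the ZVMS and directional routing:
_store = {}
def _zvms(zid):
    return zid  # "co-ordinates" are just the zone id in this sketch
def _route(coords, msg):
    if msg[0] == "REGISTER":
        _store.setdefault(coords, []).append(msg[2])
        return None
    return _store.get(coords, [])

register_object("song.mp3", "VH-A", _zvms, _route)
owners = discover_object("song.mp3", _zvms, _route)
```

Because registration and discovery apply the same hash H to the same key, both operations always land in the same zone.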
Algorithm 2. Object Discovery
(1) Calculate Zid = H(ki)
(2) Query ZVMS to get the co-ordinates of the zone Z with id Zid
(3) Use directional routing to send a query to Z with ki
(4) Nodes in Z reply with a list of peers that own the object

4.2
Mobile Node Location and Peer Interaction
Mobile Node Location: Node mobility is incorporated by using a node's mobility profile. Consider the situation when a node A leaves its current virtual home. Node A has some knowledge about its intended destination and its direction and speed of travel; it leaves behind this information, called the mobility profile, with selected proxy nodes that act as the MPMB. Using this mobility profile, other nodes can locate node A. To recruit proxy nodes, node A sends out a single broadcast message at its highest power (to ensure maximum broadcast radius) and waits for replies from the other nodes. The mobility profile is encoded in a manner such that k out of N (the number of replies) fragments are enough to reconstruct it. This ensures that the mobility profile is available even after a few proxy nodes have left the VH. An encoding scheme that can be used for this purpose is defined in [10]. Algorithm 3 details the steps involved in building the MPMB.

Algorithm 3. Building the MPMB
(1) Broadcast a message to recruit proxy nodes
(2) Encode the mobility profile
(3) Send the encoded parts to the nodes that replied
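The k-of-N property above is the defining feature of the encoding. As a stand-in illustration only (our own choice, not the scheme from [10]), a Shamir-style split over a prime field gives exactly that property for an integer-encoded profile:

```python
import random

# Toy k-of-N encoding via polynomial secret sharing over a prime field.
# Any k of the n fragments reconstruct the secret; fewer reveal nothing.
PRIME = 2**61 - 1  # a Mersenne prime larger than any encoded profile value

def encode(secret, k, n):
    """Split an int-encoded mobility profile into n fragments (k needed back)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def decode(fragments):
    """Lagrange interpolation at x = 0 recovers the secret from any k fragments."""
    secret = 0
    for i, (xi, yi) in enumerate(fragments):
        num = den = 1
        for j, (xj, _) in enumerate(fragments):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

A real deployment would use the erasure code from [10]; the point here is only that losing N − k proxy nodes still leaves the profile recoverable.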
Fig. 2. Mobile Peer Location (messages 1-4 exchanged between node C, node S's virtual home, and node S)
Peer Interaction: Once a node discovers the resources available in the network, it tries to interact with the peer containing the required resource. Algorithm 4 and figure 2 detail the procedure by which a node C obtains a resource from a node S that is mobile.
A. Gopalan and T. Znati
Algorithm 4. Handling peer mobility
(1) C receives VH(S) from the zone holding the object information
(2) Using directional routing, C sends messages towards VH(S) (msg 1 in the figure)
(3) case S is in VH(S):
(4)   A connection is established between C and S
(5) case S is currently not in VH(S):
(6)   The nodes in VH(S) reply with the mobility profile of S, the tuple [t0, V(t0), D(t0), PV(t), PD(t)] (msg 2)
(7)   C uses the mobility profile of S to determine the current position of S and sends messages in this direction (msg 3)
(8)   S, upon receiving the messages from C, acknowledges them (msg 4) and initiates a conversation with C
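Step (7) requires C to estimate S's current position from the profile [t0, V(t0), D(t0), PV(t), PD(t)]. A minimal sketch of that estimation, under our own simplifying assumption that the predictors PV and PD are plain callables giving speed (m/s) and direction (radians) t time units after departure:

```python
import math

# Sketch of step (7) of Algorithm 4: dead-reckoning S's position from its
# mobility profile. The fixed integration step is our assumption.
def predict_position(home, t0, now, pv, pd, step=1.0):
    """Integrate predicted speed/direction from departure time t0 to `now`.

    home: (x, y) of the virtual home at departure; pv(t), pd(t): the
    profile's speed and direction predictors."""
    x, y = home
    t = 0.0
    while t0 + t < now:
        dt = min(step, now - (t0 + t))
        v, d = pv(t), pd(t)
        x += v * math.cos(d) * dt   # advance along the predicted heading
        y += v * math.sin(d) * dt
        t += dt
    return (x, y)
```

C would then aim its messages (msg 3) at the returned coordinates rather than at VH(S).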
The components of the mobility profile in Algorithm 4 are: t0: starting time, V(t0): expected starting speed, D(t0): expected initial direction, PV(t): predictor for the speed t time units after departure, PD(t): predictor for the direction t time units after departure.

4.3 Traffic Forwarding Algorithm
This section gives a brief description of the forwarding algorithm used in POST (for more details, refer to [1]). Consider the scenario where a source S attempts to route traffic to a destination D. To limit flooding in the network, traffic is sent in a truncated cone shape towards D, as shown in figure 3. Nodes in zone 1 have the highest priority to forward the traffic. If no nodes are currently available in zone 1, the transmission area is expanded to include zone 2 after a timeout. This prioritizes the neighboring nodes such that nodes more in line with the direction of the destination have higher priority to forward the message, thereby reducing the delay traffic suffers on its way towards the destination. Nodes that receive the message sent by S calculate their priorities, based on which they decide whether to forward the message. Furthermore, upon hearing a transmission within the zone, the remaining eligible nodes drop the message. As the message progresses toward its destination, the highest-priority node calculates a new cone and re-iterates the process.
Fig. 3. Directional Routing (truncated cone from source S towards destination D, with inner zone 1 and outer zone 2)
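The zone test behind this prioritization can be sketched as a check of a neighbor's angular deviation from the S-D line. The specific half-angle thresholds below are our assumptions; the paper only states that zone 1 (closest to the line towards D) outranks zone 2.

```python
import math

# Sketch of the truncated-cone forwarding zones of Sec. 4.3.
ZONE1_ANGLE = math.pi / 8   # assumed half-angle of the inner cone (zone 1)
ZONE2_ANGLE = math.pi / 4   # assumed half-angle of the expanded cone (zone 2)

def forwarding_zone(src, dst, neighbor):
    """Return 1, 2, or None for a neighbor's forwarding-priority zone."""
    bearing = math.atan2(dst[1] - src[1], dst[0] - src[0])
    to_n = math.atan2(neighbor[1] - src[1], neighbor[0] - src[0])
    # signed angular difference wrapped to [-pi, pi], then absolute deviation
    dev = abs((to_n - bearing + math.pi) % (2 * math.pi) - math.pi)
    if dev <= ZONE1_ANGLE:
        return 1
    if dev <= ZONE2_ANGLE:
        return 2
    return None   # outside the cone: do not forward
```

A receiving node would map its zone to a back-off priority, so a zone-1 node forwards first and zone-2 nodes forward only after the timeout the text describes.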
5 Simulation and Results
The protocol was implemented in the Glomosim network simulator [9] and tested under different network scenarios as part of a sensitivity analysis of POST. In the experiments, the hit rate and the response times for object registration and discovery were measured for networks of static, low-mobility and high-mobility nodes, while varying the network density. The channel characteristics used were TWO-RAY (the receiving antenna sees two signals, a direct-path signal and a signal reflected off the ground) and FREE-SPACE (signal propagation in the absence of any reflections or multipath). The mobility models used were NO-MOBILITY (static) and the Random Trip mobility model [7]. The speed of the nodes was varied to simulate a network of slow-moving nodes (5 m/s) and a network of fast-moving nodes (25 m/s). The number of nodes was varied from 100 to 500, and these nodes were placed in a network grid of size 2800x2800 m, further divided into zones of size 400x400 m. Traffic statistics were collected at the sender to evaluate the response time. To evaluate the hit rate, we measure the percentage of packets that reach the destination. Each data point represents the value averaged over 10 independent experimental runs. The first set of experiments (figures 4(a), 4(b)) was performed by varying the channel characteristic. The second set of experiments (figures 5(a), 5(b)) was performed to measure the response time for object registration and discovery. From figure 4(a), we conclude that in all cases the hit rate increases and reaches 100% as the network density increases. The hit rate is always lower for the high-mobility case when compared to the static and low-mobility cases. This can be attributed to the fact that node mobility affects the number of nodes in a zone that are available to hold information about the objects and the mobility profiles.
We can further observe that, as a result, the hit rate for a sparse network is lower.

Fig. 4. Hit Rate (%) vs. Number of Nodes under (a) TWO-RAY and (b) FREE-SPACE channel characteristics (curves: Static, Low Mobility, High Mobility)

From figure 4(b), we conclude that the hit rate is higher than under TWO-RAY. This is because the transmission range of a node using the FREE-SPACE channel is much higher than with TWO-RAY. This increase in transmission radius leads to a higher number of nodes that can be reached, and hence to the availability of more nodes that hold the information about the objects and the mobility profiles.

Fig. 5. Response Time (ms) vs. Number of Nodes for (a) object registration and (b) object discovery (curves: Static, Low Mobility, High Mobility)

From figure 5(a), we conclude that from a slightly dense to a very dense network the response time remains almost constant. For a sparsely populated network, the response time is significantly higher. This is due to the fact that the paucity of nodes in the network leads to a lower probability of finding a node in the direction of the destination to forward the traffic. We can also observe that the response time for a high-mobility network is higher (though not by much). This is because node mobility causes frequent changes to the zone membership. From figure 5(b), we conclude that the response time for object discovery is very similar to that observed for object registration. This can be attributed to the fact that the protocol followed for object discovery is very similar to that of object registration.
6 Conclusion and Future Work
The major contribution of this paper is a novel Peer-to-peer Overlay STructure (POST) for service and application deployment in MANETs. The proposed framework is scalable, robust and efficient. POST is efficient since nodes do not maintain and update routing tables, and bootstrapping does not require knowledge of other POST nodes: to register and query for objects, a node only needs to know the hash function used in the network and the ZVMS, which maps zone ids to zones. Object registration and discovery are achieved by hashing the key associated with an object to obtain the zone id of the zone associated with it. POST is scalable, since the hash function ensures that the database containing the object information is spread across the various zones in the network. A zone key management protocol was developed to map object keys to the corresponding zones in the network. The mobility of nodes is incorporated into POST by using each node's mobility profile. There is much potential for future work in this area. Security needs to be incorporated into this service architecture at various levels, be it object registration and discovery or the handling of mobility profiles by the MPMB.
References
1. A. Gopalan, T. Znati and P. K. Chrysanthis. Structuring Pervasive Services in Infrastructureless Networks. In Proc. IEEE International Conference on Pervasive Services (ICPS '05), 2005.
2. E. Guttmann, C. Perkins, J. Veizades and M. Day. Service Location Protocol, Version 2. RFC 2608, 1999.
3. H. Pucha, S. M. Das and Y. Charlie Hu. Ekta: An Efficient DHT Substrate for Distributed Applications in Mobile Ad Hoc Networks. In Proc. of the 6th IEEE Workshop on Mobile Computing Systems and Applications, December 2004.
4. J. P. Hubaux, J. Y. Le Boudec, M. Vetterli and Th. Gross. Towards self-organizing mobile ad-hoc networks: the terminodes project. IEEE Communications Magazine, 39(1):118–124, January 2001.
5. I. Stojmenovic. Home agent based location update and destination search schemes in ad hoc wireless networks. Technical Report TR-99-10, Computer Science, SITE, University of Ottawa, Sep. 1999.
6. J. Li, J. Jannotti, D. De Couto, D. Karger and R. Morris. A Scalable Location Service for Geographic Ad-Hoc Routing. In Proc. Mobicom, pages 120–130, August 2000.
7. Jean-Yves Le Boudec and Milan Vojnovic. Perfect Simulation and Stationarity of a Class of Mobility Models. In Proceedings of IEEE INFOCOM, 2005.
8. Jivodar B. Tchakarov and Nitin H. Vaidya. Efficient Content Location in Mobile Ad Hoc Networks. In Proc. IEEE International Conference on Mobile Data Management (MDM 2004), January 2004.
9. L. Bajaj, M. Takai, R. Ahuja, R. Bagrodia and M. Gerla. Glomosim: A scalable network simulation environment. Technical Report 990027, UCLA, 1999.
10. Gretchen Lynn. ROMR: Robust Multicast Routing in Mobile Ad-Hoc Networks. PhD Thesis, University of Pittsburgh, December 2003.
11. Paul F. Tsuchiya. The Landmark Hierarchy: A New Hierarchy for Routing in Very Large Networks. In Proc. ACM SIGCOMM, pages 35–42, August 1988.
12. Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp and Scott Shenker. A scalable content-addressable network. In Proceedings of ACM SIGCOMM, 2001.
13. Antony Rowstron and Peter Druschel. Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems. Lecture Notes in Computer Science, 2218:329–350, 2001.
14. Seung-Chul M. Woo and Suresh Singh. Scalable Routing Protocol for Ad Hoc Networks. Journal of Wireless Networks, 7(5), Sep. 2001.
15. U. Kozat and L. Tassiulas. Network Layer Support for Service Discovery in Mobile Ad Hoc Networks. In Proceedings of IEEE INFOCOM, 2003.
16. Yaron Y. Goland, Ting Cai, Paul Leach and Ye Gu. Simple Service Discovery Protocol/1.0. Internet Draft, Oct. 1999.
An Efficient and Practical Greedy Algorithm for Server-Peer Selection in Wireless Peer-to-Peer File Sharing Networks

Andrew Ka Ho Leung and Yu-Kwong Kwok

Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
{khleung, ykwok}@eee.hku.hk
Abstract. Toward a new era of "Ubiquitous Networking", where people are interconnected anywhere and at any time via the wired and wireless Internet, we have witnessed an increasing level of impromptu interaction among human beings in recent years. One important aspect of these interactions is Peer-to-Peer (P2P) networking, which is becoming a dominant traffic source in the wired Internet. In these Internet overlay networks, users are allowed to exchange information through instant messaging and file sharing. Unfortunately, most of the previous work on P2P networking proposed in the literature is designed for the traditional wired Internet, without much regard to important issues pertinent to wireless communications. In this paper, we attempt to provide some insight into P2P networking with respect to a wireless environment. We focus on P2P file sharing, already a hot application in the wired Internet, which will be equally important in the wireless counterpart. We propose a greedy server-peer selection algorithm to decide from which peer a client should download files, so that the level of fairness in the whole network is increased and the expected service life of the whole file sharing network is extended. We also propose a new performance metric called Energy-Based Data Availability (EBDA), which is important for improving the effectiveness of a wireless P2P file sharing network.

Keywords: wireless networking, P2P systems, file sharing, energy efficiency, fairness, greedy algorithm.
1 Introduction
It is reported in a recent survey [32] that Peer-to-Peer (P2P) applications generate one-fifth of the total Internet traffic, and it is believed that this share will continue to grow. Furthermore, an ISP (Internet Services Provider) solution company [29] reported that the four hottest P2P file sharing applications are BitTorrent
This research was supported by a grant from the Research Grants Council of the HKSAR under project number HKU 7157/04E. Corresponding author.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1016–1025, 2005. © Springer-Verlag Berlin Heidelberg 2005
[1] (which accounts for 53% of all P2P traffic), eDonkey2000 [5] (24%), FastTrack [6] (19%) and Gnutella [10] (4%). Indeed, given the recent rapid development of high-speed wireless communication technologies, including 3G, post-3G and WLAN, it is widely envisioned that file sharing over wireless P2P networks will naturally be the next trend. An efficient wireless P2P network is reckoned to be a key component of the next-generation ubiquitous and pervasive mobile computing platform. However, such a wireless P2P network is likely to be energy constrained in nature, since mobile wireless devices are mostly battery operated and batteries have limited life. Running P2P applications on top of such a network requires developers to incorporate energy conservation ideas into their design. In this paper, we propose a server-peer selection rule to increase the performance of a wireless P2P file sharing network, using a greedy algorithm to select a suitable peer from which to download a requested file when multiple peers have that file. Different from previous work, the mobility factor and the transmit power constraints of wireless networks are explicitly incorporated into our design. The concept of "data availability" has been used in [22] and [20] to study the provision of information resources in P2P networks. We introduce a new performance metric, namely "energy-based data availability", for wireless P2P networks. Our view is that, in an energy-constrained network, "data availability" should refer not only to the amount of file resources possessed or shared, but also to the energy levels of the entities who hold those resources. It makes no sense to have a wireless P2P network where some users hold a large number of popular file objects that other users frequently request (e.g., popular music or the latest movie trailer) but are themselves at low battery level.
All these valuable resources would be lost when those users' energy is exhausted. Our greedy algorithm can significantly increase the energy efficiency in terms of fairness: we avoid having any single peer's energy level drop too much and deviate too far from the others. In Section 2, we provide some background on P2P networking. In Section 3, we describe our approach to increasing energy efficiency using a greedy server-peer selection algorithm. In Sections 4 and 5 we describe our simulation platform and the simulation results, respectively. We summarize our conclusions in Section 6.
2 Background and Related Work
Apart from data availability, the architecture of the file sharing system is another aspect of P2P networking that has received much attention, including the famous "first generation" centralized system Napster [23], which uses a centralized database to maintain a directory of file resources in the Internet, and Gnutella [10] and KaZaA [14], which use decentralized searching. More advanced searching algorithms can be found in the so-called "second generation" P2P system architectures, including Chord [17], Pastry [28], Tapestry [33] and CAN [27], which use a distributed hash table (DHT) for searching and ask each peer to hold a subset of the whole routing information set. BitTorrent (BT) [1] is another common decentralized P2P protocol deployed in the Internet. However, none of the above designs is tailor-made for a wireless medium. Therefore, in order to achieve the goal of ubiquitous wireless P2P file sharing, new design criteria must be taken into account. Our work focuses on one of the key design issues, namely the energy constraint. Different from the above work, our goal is not to find a searching algorithm or a routing protocol, but to propose a "server-peer selection algorithm" to select the server-client pairs in file sharing, assuming that the file search result is already available. We believe our work will be effective in extending P2P file sharing from the traditional wired Internet to a wireless environment in the future.
3 Our Proposed Approach

3.1 Definition
Consider a P2P file sharing application running on top of a mobile wireless ad hoc network with N users. We denote the set of users by:

U = {ni | i = 1, 2, 3, ..., N}    (1)

We use Ei to denote the remaining energy level (battery level) of a mobile user ni. We then use a binary vector Vi to denote the file objects that user ni possesses:

Vi = {δik | k = 1, 2, 3, ..., M}, i = 1, 2, 3, ..., N    (2)

where

δik = 1 if user i has file object fk; 0 if user i does not have file object fk    (3)

We assume that there are in total M different file objects in the network:

F = {fj | j = 1, 2, 3, ..., M}    (4)

In this P2P file sharing platform, we quantify the popularity of each file object with respect to a particular user and represent it using a weight. Different files have different popularity in the network; some file objects (e.g., the latest pop music) rank higher in the minds of some users. Thus, we represent the weight (rank) of an object fk in user ni's mind by wik. Since different file objects can have different weight values for different people, we need an N-by-M weight matrix W to denote the weights of all file objects for all users:

W = [ w11 ... w1M ; ... ; wN1 ... wNM ]    (5)
The weight measure, which relates to the taste of a particular user (e.g., "favorite music" or "favorite singer"), is quite subjective. We can make the ranking of files more objective by restricting each user to give their weight values on some scale, say, "How many stars (five stars is the maximum) do you give this song?" Rankings like this are commonly found in online forums and fan sites. It should be noted that, although the calculation of W requires heavy computation and its value can change dynamically as users' tastes and community trends change, a single user need not calculate it. As shown in later sections, each user only has to do a local calculation on the client-peer's and server-peer's weight values for file objects when running our greedy algorithm. Now we define the energy-based data availability, EBDA, of a user as:

Di = Ei Σ_{k=1}^{M} wik δik    (6)

and the EBDA of the whole network as:

D = Σ_{i=1}^{N} (Ei Σ_{k=1}^{M} wik δik)    (7)

where

δik = 1 if user i has file object fk; 0 if user i does not have file object fk    (8)
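Equations (6) and (7) translate directly into code. A minimal sketch, with variable names following the text (E is the vector of energy levels, W the weight matrix, V the ownership matrix of δ values):

```python
# Per-user and network-wide EBDA, Eqs. (6)-(7).
def user_ebda(E, weights, owns):
    """D_i = E_i * sum_k (w_ik * delta_ik)."""
    return E * sum(w * d for w, d in zip(weights, owns))

def network_ebda(E, W, V):
    """D = sum_i D_i; W is the N x M weight matrix, V the N x M 0/1 matrix."""
    return sum(user_ebda(Ei, wi, vi) for Ei, wi, vi in zip(E, W, V))
```

Note how the multiplication by Ei zeroes out a user's entire contribution as the battery drains, which is exactly the "resources die with the holder" interpretation given below.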
We interpret this performance metric in the following way. Firstly, the "availability" of file objects in a P2P network depends on how many copies of each file exist in the network; this is captured by the δ terms in our metric. Secondly, different files have different popularity in the network; some file objects (e.g., the latest pop music) rank higher in the minds of some users. Our metric takes this into account and represents it by the weights w. In the definition of EBDA, the sum of all file object weights of each user is scaled (multiplied) by that user's energy level, i.e., the weight sum of user ni is multiplied by Ei. This multiplicative factor corresponds to the fact that all file objects a user holds would be rendered worthless when the energy of that user is exhausted. Consider a P2P network in which the same number of copies of the same set of file objects is distributed over two groups of users, each with the same N users. A higher value of Σ_{i=1}^{N} Di for a group implies that (1) the file object resources possessed by the whole group are more "durable", so more future sharing is possible before the energy of the holders is exhausted, and (2) on average, the holder of a file can keep their favorite files longer.

3.2 Greedy Algorithm for Server-Peer Selection
In the last section we saw that EBDA is a more meaningful performance metric for a wireless P2P network than the traditional metrics used in wired networks. But how can we control this performance metric in a P2P network? Firstly, we note that it is undesirable to control which user may request a file; this restricts the freedom of users and possibly discourages peers' participation. Secondly, we should not prohibit a peer from responding to a request posted by another peer; this restricts the free flow of information and data. At the least, we allow all search results and responses to keep flowing. Thus, we let the client-peer who asks for the file decide from which server-peer to download it; that is, we control who delivers the file object. Given that a number of peers reply to a file object request and possess the requested file object, we select the one that leads to the most positive (or least negative) change of EBDA to transfer the file to the requesting peer. In particular, when a peer nc issues a request for a file object fj, let ns be the peer that possesses fj and transmits this file to nc. We determine the change of total EBDA after the sharing action as follows. Let Dc and Ds denote the EBDA of nc and ns before sharing fj, respectively:

Dc = Ec Σ_{k=1}^{M} wck δck = Ec Oc,  Ds = Es Σ_{k=1}^{M} wsk δsk = Es Os    (9)

where Oc = Σ_{k=1}^{M} wck δck and Os = Σ_{k=1}^{M} wsk δsk.

When ns transmits a file fj to nc, both the transmit and the receive actions consume power. Let P be the transmit power of ns, αP the power consumption of the receiver on nc, r the transmission bit rate (assumed to be the same for all users), and S(fj) the file size of fj. Then the energy reduced on ns and nc is e = P S(fj)/r and αe = αP S(fj)/r, respectively. Let D'c and D's denote the EBDA of nc and ns after sharing fj:

D'c = (Ec − αe)(Oc + wcj),  D's = (Es − e)Os    (10)

Thus, the change of total EBDA after this sharing is:

ΔD = D'c + D's − Dc − Ds
   = (Ec − αe)(Oc + wcj) + (Es − e)Os − Ec Oc − Es Os
   = Ec wcj − e(Os + αOc + αwcj)    (11)
Traditionally, a node is selected as the server-peer if it owns the requested file object and is closest to the requesting node (smallest hop count). Our idea is to add the EBDA metric to this server-peer selection process: we construct a server-set S consisting of all nodes that own the requested file object, and from this set we select the node ns that is closest to the requesting node in terms of hop count and for which, if the file were sent from it to the requesting node, the change of EBDA would be least negative or most positive. This node ns is the selected server-peer. Simply put, our idea is to add the EBDA metric to the traditional server-peer selection process.
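Equation (11) and the selection rule above can be sketched as follows. This is an illustrative implementation of the paper's rule; the candidate-tuple representation and the exact tie-breaking order (hop count first, then ΔD) are our assumptions.

```python
# Sketch of the greedy server-peer selection rule based on Eq. (11).
def delta_ebda(Ec, Es, Oc, Os, w_cj, file_size, rate, P, alpha):
    """Change in total EBDA if server s sends file j to client c (Eq. (11))."""
    e = P * file_size / rate          # server-side transmit energy, e = P*S(fj)/r
    return Ec * w_cj - e * (Os + alpha * Oc + alpha * w_cj)

def select_server(candidates, Ec, Oc, w_cj, file_size, rate, P, alpha):
    """candidates: list of (hop_count, Es, Os) tuples for peers owning fj.
    Prefer the smallest hop count; break ties by the most positive ΔD."""
    return min(candidates,
               key=lambda c: (c[0], -delta_ebda(Ec, c[1], Oc, c[2],
                                                w_cj, file_size, rate, P, alpha)))
```

With equal hop counts, the rule favors the candidate with the smaller Os, i.e., the peer holding fewer weighted objects, which is the load-spreading behavior discussed in Section 5.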
Fig. 1. Format of Request Packet (fields: Frame Control, Duration, Dest. Address, Source Address, EBDA Tuple, FCS)
Fig. 2. Format of Response Packet (fields: Frame Control, Duration, Dest. Address, Source Address, Server Rank, FCS)
3.3 Implementation Issues
We use a common wireless ad hoc network standard, IEEE 802.11b Wireless LAN [13], as the platform to investigate the implementation aspects of our algorithm. Firstly, the client-peer nc requesting the file fj should transmit a request packet containing an EBDA tuple < file ID of fj, Ec, Oc, wcj, S(fj) > so that each peer nd that receives this request packet and possesses fj can evaluate the value of ΔD(nd, nc) if it acts as the server-peer. In order to calculate e, nd needs the file size of fj (which is given in the request packet), the transmit power P and the bit rate r. In IEEE 802.11b there is no power control such as in CDMA IS-95 [31]; each peer (e.g., a user holding a PDA with a WLAN adapter) uses a known fixed power. Common transmit powers range from 13 dBm to 30 dBm, depending on the brand of the WLAN adapter [15, 3]. The formats of the request packet and the response packet, modified from the IEEE 802.11 MAC ACK packet [13], are shown in Figure 1 and Figure 2. After calculating ΔD(nd, nc), each peer nd replies to nc by transmitting a response packet containing a server rank, whose value is given by the ΔD value calculated in Section 3.2. The client nc then decides from whom to download the file and directly requests that server-peer. The download process afterward is the same as an ordinary file download in a conventional WLAN network. For the sake of simplicity, we assume a one-hop search scenario, where the file request is not re-transmitted by receivers within the transmission range of nc to peers multiple hops away; this rules out any influence of scalability or route-failure problems. Contention mechanisms such as CSMA/CA and random back-off in IEEE 802.11b are a prerequisite for running our algorithm, so that the response packets from the peers nd ∈ Nj do not collide upon receiving a file object request from nc. This technique of modifying the MAC layer packet has already been used by Zhu et al. to make WLAN relay-enabled with the rDCF MAC layer [34]. Trustworthiness of nodes is also an important issue: "What if a user cheats?" or "What if a user gives wrong values of its remaining energy level or weights?" To address this issue we rely on other research efforts on "reputation systems", which are designed to evaluate the trustworthiness and behaviour of nodes in ad hoc networks [8]. Modifications of these systems could be used to increase the efficiency of the greedy server-peer selection protocol.
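The EBDA tuple carried in the request packet of Figure 1 could be serialized as a fixed byte layout. The paper does not fix field widths, so the format below (32-bit file id and size, 32-bit floats for Ec, Oc and wcj) is entirely our assumption:

```python
import struct

# Hypothetical wire layout for the EBDA tuple of the request packet (Fig. 1).
# Field widths are our assumption; the paper leaves them unspecified.
EBDA_FMT = "!IfffI"   # file id, Ec, Oc, w_cj, S(fj) in bytes (network byte order)

def pack_ebda_tuple(file_id, Ec, Oc, w_cj, size):
    """Serialize the tuple < file ID, Ec, Oc, w_cj, S(fj) > for the MAC frame."""
    return struct.pack(EBDA_FMT, file_id, Ec, Oc, w_cj, size)

def unpack_ebda_tuple(payload):
    """Recover the tuple at a candidate server-peer nd."""
    return struct.unpack(EBDA_FMT, payload)
```

A receiving peer nd would unpack these five fields and feed them, together with its own Es and Os, into the ΔD computation of Section 3.2.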
4 Simulation Platform

4.1 Mobility Model and Energy Consumption Model
In our simulations, 100 mobile users are assumed to be scattered randomly in a 400 m × 400 m area initially; they are allowed to move according to a random-walk-like model. The original idea of the random walk was first investigated by K. Pearson in a letter to the English journal Nature in 1905 [25, 11], and random walks have been used to study P2P networks in [9]. Briefly speaking, a random walk considers the position of a moving object after n equal-length movements, each in an independent, random direction. In our simulations we modified the random walk model so that each user moves in a particular direction for a random period of time until it changes to another random direction. The velocities of all users are not changed simultaneously; at a particular time instant only a random subset of the users decide to change their direction of motion. Also, the velocities of the users are assumed to be Gaussian distributed with a mean of 0.83 m/s [21]. Now we define the energy model used in this paper. The wireless devices are assumed to have three possible modes of operation: Transmit, Receive and Idle. The energy consumption ratio of the three modes is set to 1 : 0.6 : 0.4, as indicated by the experimental measurements of Feeney and Nilsson [7]. So the energy consumption of a node is set to PTx TTx + PRx TRx + PIDLE TIDLE, where the three P terms represent the power consumption in Transmit, Receive and Idle modes, respectively (excluding the exchange of control packets).

4.2 Wireless Channel Characteristics
The mobile devices in the simulations have wireless transmission parameters similar to those of the IEEE 802.11b WLAN adapters commonly available in the market, which operate in the 2.4 GHz ISM band, where the transmit power generally ranges from 13 dBm to 30 dBm depending on the brand of the adapter. We set the transmit power to 20 dBm in our simulations. For simplicity we assume that the transmission rate of all users is fixed at 1 Mbps. At this bit rate the receiver sensitivity is usually around −90 dBm; we set it to −91 dBm, the same as the Orinoco 802.11b Gold PC card [24]. For radio propagation, we adopt the Okumura-Hata model, which is commonly used in the literature [26], to estimate the path loss.

4.3 File Searching and Fetching Model
The sizes of the file objects are less than 5 MB; file objects can be MP3 songs, ring tones for mobile phones, short movie trailers in relatively low resolution, etc. Each peer is assumed to be a PDA running WLAN. We assume a file searching engine that can locate a file and give the path as long as there exists a path between the requesting node and the server-peer. There are in total 60 different file objects in the network. The behavior of users in the simulations is governed by a weight matrix for the different objects and an aggressiveness matrix for the peers: "who requests a file" and "what file object is being requested" are decided according to these two matrices. Requests are not generated uniformly at random; instead, a Zipf distribution is used for the popularity of objects and roulette-wheel selection for generating the requests.
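The request-generation step described above can be sketched as follows. The Zipf exponent s = 1 is our own choice; the paper does not state it.

```python
import random

# Sketch of request generation: Zipf popularity over the M = 60 objects,
# then roulette-wheel selection of the requested object.
def zipf_weights(m, s=1.0):
    """Normalized Zipf probabilities for m objects ranked 1..m."""
    w = [1.0 / (rank ** s) for rank in range(1, m + 1)]
    total = sum(w)
    return [x / total for x in w]

def roulette_select(weights, rng=random):
    """Pick an index with probability proportional to its weight."""
    r = rng.random()
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1   # guard against floating-point rounding
```

Drawing one index per simulated request reproduces the heavy-tailed pattern in which a few popular objects attract most of the traffic.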
5 Performance Results
The performance gain of using the greedy algorithm is two-fold. First, we observe that the standard deviation of the energy levels of all nodes becomes smaller (see Figure 3). We interpret this as follows. In the algorithm described above, among all the potential server-peers we select the one with the most positive or least negative change of EBDA. This means we are, in a way, selecting the server-peer with the smallest number of objects on hand to share (consider Equation (11)). This server-peer is expected to be asked to share files less frequently (because it has fewer files to donate). This implicitly allocates the server role among different nodes more evenly, as long as the request is satisfied. The more even distribution results in a less deviated set of energy levels across nodes. We regard this as energy-sense fairness.

Fig. 3. Standard deviation of energy levels of nodes over time, with and without the greedy algorithm
Secondly, the more even distribution of the server task also avoids the existence of busy servers and thus slightly increases the file request success ratio. The only drawback of using the greedy algorithm is that the delay is longer (on average 15 seconds more), since the greedy algorithm represents an extra criterion besides hop count in the selection of the server-peer.
6 Conclusions
We have proposed a greedy server-peer selection algorithm that can increase the energy efficiency of the network. Our work sheds some insight into a yet-to-be-explored area, the wireless P2P file sharing network, which is believed to be an important interaction platform in next-generation wireless communication. A new performance metric, namely energy-based data availability, has been presented. With simulations we demonstrate that the performance of the network is improved in terms of fairness and file request success ratio.
References
1. BitTorrent, http://bitconjurer.org/BitTorrent/, 2004.
2. C. Bram, "Incentives build robustness in BitTorrent," available at: http://bitconjurer.org/BitTorrent/bittorrentecon.pdf, 2004.
3. Chris De Herrera, "802.11b Wireless LAN PC Cards," http://www.cewindows.net/peripherals/pccardwirelesslan.htm, 2004.
4. E. Cohen, A. Fiat and H. Kaplan, "Associative search in peer to peer networks: harnessing latent semantics," Proc. IEEE INFOCOM 2003, vol. 2, pp. 1261–1271, Mar.–Apr. 2003.
5. eDonkey2000, http://www.edonkey2000.com/, 2004.
6. FastTrack, http://www.slyck.com/ft.php, 2004.
7. L. M. Feeney and M. Nilsson, "Investigating the energy consumption of a wireless network interface in an ad hoc networking environment," Proc. IEEE INFOCOM 2001, vol. 3, pp. 1548–1557, Apr. 2001.
8. S. Ganeriwal and M. B. Srivastava, "Reputation-based framework for high integrity sensor networks," Proc. ACM SASN 2004, pp. 66–77, Oct. 2004.
9. C. Gkantsidis, M. Mihail and A. Saberi, "Random Walks in Peer-to-Peer Networks," Proc. IEEE INFOCOM 2004, vol. 1, pp. 120–130, Mar. 2004.
10. Gnutella, http://www.gnutella.com/, 2004.
11. B. D. Hughes, Random Walks and Random Environments, Oxford Science Publications, 1995.
12. ICQ, http://www.icq.com/, 2004.
13. IEEE, "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications," IEEE Std 802.11b-1999, 1999.
14. KaZaA, http://www.kazaa.com/, 2004.
15. S. Kishore, J. C. Chen, K. M. Sivalingam and P. Agrawal, "A comparison of MAC protocols for wireless local networks based on battery power consumption," Proc. IEEE INFOCOM 1998, vol. 1, pp. 150–157, Mar.–Apr. 1998.
16. J. Kulik, W. Rabiner and H. Balakrishnan, "Adaptive protocols for information dissemination in wireless sensor networks," Proc. 5th ACM/IEEE MobiCom, pp. 174–185, Aug. 1999.
17. I. Stoica, R. Morris, D. Liben-Nowell, D. R. Karger, M. F. Kaashoek, F. Dabek and H. Balakrishnan, "Chord: a scalable peer-to-peer lookup protocol for Internet applications," IEEE/ACM Trans. Networking, vol. 11, issue 1, pp. 17–32, Feb. 2003.
18. Z. Li and M. H. Ammar, "A file-centric model for peer-to-peer file sharing systems," Proc. 11th Int'l Conf. Network Protocols, pp. 28–37, Nov. 2003.
19. Z. Li, E. W. Zegura and M. H. Ammar, "The effect of peer selection and buffering strategies on the performance of peer-to-peer file sharing systems," Proc. IEEE MASCOTS 2002, pp. 63–70, Oct. 2002.
20. X. Liu, G. Yang and D. Wang, "Stationary and adaptive replication approach to data availability in structured peer-to-peer overlay networks," Proc. ICON 2003, pp. 265–270, Sept.–Oct. 2003.
An Efficient and Practical Greedy Algorithm for Server-Peer Selection
1025
21. J. G. Markoulidakis, G. L. Lyberopoulos, D. F. Tsirkas and E. D. Sykas, “Mobility modeling in third-generation mobile telecommunications systems,” IEEE Personal Communications, vol. 4, no. 4, pp. 41–56, Aug. 1997. 22. M. D. Mustafa, B. Nathrah, M. H. Suzuri and M. T. Abu Osman, “Improving data availability using hybrid replication technique in peer-to-peer environments,” AINA 2004, vol. 1, pp.593–598, Mar. 2004. 23. “Napster”, http://www.napster.com/, 2004 24. Orinoco, “Orinoco 802.11b Gold PC card data sheet”, http://www.proxim.com/ products/ wifi/11b/, 2004. 25. K. Pearson, “The problem of the random walk,” Nature, vol. 72, p. 294, July 1905. 26. T. S. Rappaport, Wireless communication, principle and practice, Prentice Hall, 1996. 27. S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, “A scalable content-addressable network,” Proc. ACM SIGCOMM 2001, Aug. 2001. 28. A. Rowstron and P. Druschel, “Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems,” IFIP/ACM Int’l Conf. Distributed Systems Platforms (Middleware), pp. 329–350, Nov. 2001. 29. Slyck.com, http://www.slyck.com/news.php?story=574, 2004. 30. Sony CLIE handheld, http://sonyelectronics.sonystyle.com /micros/clie/, 2004. 31. TIA/EIA Interim Standard 95, July 1993. 32. The Washington Times Online, http://www.washtimes.com/technology/ 20040303-094741-3574r.htm, 2004. 33. B. Y. Zhao, L. Huang J. Stribling, S. C. Rhea, A. D. Joseph and J. D. Kubiatowicz, “Tapestry: a resilient global-scale overlay for service deployment,” IEEE. Journal Selected Areas in Communications, vol. 22, issue. 1, pp. 41–53, Jan. 2004. 34. H. Zhu and G. Cao, “rDCF: A relay-enabled medium access control protocol for wireless ad hoc networks,” Proc. IEEE INFOCOM 2005, Mar. 2005.
Can P2P Benefit from MANET? Performance Evaluation from Users’ Perspective

Lu Yan

Turku Centre for Computer Science (TUCS), FIN-20520 Turku, Finland
[email protected]
Abstract. With the advance in mobile wireless communication technology and the increasing number of mobile users, peer-to-peer computing, in both academic research and industrial development, has recently begun to extend its scope to address problems relevant to mobile devices and wireless networks. This paper is a performance study of peer-to-peer systems over mobile ad hoc networks. By comparing different settings for the peer-to-peer overlay and the underlying mobile ad hoc network, we show that a cross-layer approach performs better than separating the overlay from the access network.
1 Introduction
Peer-to-Peer (P2P) computing is a networking and distributed computing paradigm which allows the sharing of computing resources and services by direct, symmetric interaction between computers. With the advance in mobile wireless communication technology and the increasing number of mobile users, peer-to-peer computing, in both academic research and industrial development, has recently begun to extend its scope to address problems relevant to mobile devices and wireless networks. Mobile Ad hoc Networks (MANET) and P2P systems share key characteristics, namely self-organization and decentralization, and both need to solve the same fundamental problem: connectivity. Although it seems natural and attractive to deploy P2P systems over MANET given this common nature, the special characteristics of mobile environments and the diversity of wireless networks bring new challenges for research in P2P computing. Currently, most P2P systems run on the wired Internet, relying on application layer connections among peers that form an application layer overlay network. In MANET, an overlay is also formed dynamically via connections among peers, but without requiring any wired infrastructure. The major differences between P2P and MANET that concern us in this paper are: (a) P2P generally refers to the application layer, whereas MANET generally refers to the network layer, a lower layer dealing with network access issues. The immediate result of this layer partition is a difference in packet transmission methods: the P2P overlay is a unicast network with virtual broadcast consisting of numerous single unicast packets, while the MANET overlay always performs physical broadcasting. (b) Peers in a P2P overlay are usually assumed to be static nodes
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1026 – 1035, 2005. © Springer-Verlag Berlin Heidelberg 2005
though no a priori knowledge of their arrival and departure is assumed, whereas peers in MANET are usually mobile nodes whose connections are constrained by physical factors such as limited battery energy, bandwidth, and computing power. The above similarities and differences between P2P and MANET lead to interesting but challenging research on P2P systems over MANET. Although both P2P and MANET have recently become popular research areas, owing to the wide deployment of P2P applications over the Internet and rapid progress in wireless communication, little research has been done on the convergence of the two overlay network technologies. In fact, the scenario of P2P systems over MANET seems feasible and promising. Possible applications include car-to-car communication in a field-range MANET, an e-campus system for mobile e-learning applications in a campus-range MANET on top of IEEE 802.11, and a small applet running on mobile phones or PDAs enabling mobile subscribers to exchange music, ring tones and video clips via Bluetooth.
2 Background and State-of-the-Art
Since both P2P and MANET have become popular only in recent years, research on P2P systems over MANET is still at an early stage. The first documented system is Proem [1], a P2P platform for developing mobile P2P applications, but it is rather rough and supports only IEEE 802.11b in ad hoc mode. 7DS [2] is another early attempt to enable P2P resource sharing and information dissemination in mobile environments, but it is more a P2P architecture proposal than a practical application. In a recent paper [3], Passive Distributed Indexing was proposed to improve the search efficiency of P2P systems over MANET, and in ORION [4], a Broadcast over Broadcast routing protocol was proposed. The above works focused on either P2P architecture or routing scheme design; how efficient such approaches are, and what performance users actually experience, still need further investigation. Previous work on the performance of P2P over MANET was mostly based on simulation, and no concrete analytical model was introduced. Performance issues of such systems were first discussed in [5], which simply reports experimental results without further analysis. A survey of such systems appears in [6], though no further conclusions were derived, and a sophisticated experiment and discussion of P2P communication in MANET can be found in [7]. Recently, B. Bakos et al. at Nokia Research analyzed a Gnutella-style query engine on mobile networks with different topologies [8], and T. Hossfeld et al. at Siemens Labs conducted a simulative performance evaluation of mobile P2P file sharing [9]. However, all of these works fall into the category of practical experience reports; no performance models are proposed.
3 Performance Evaluation of P2P over MANET
Many routing protocols exist for P2P networks and MANET respectively. For instance, one can find a very substantial survey of P2P routing schemes from HP
Labs in [10], and the US Naval Research Laboratory publishes a survey of ongoing MANET routing schemes in [11]; all of these schemes fall into two basic categories: broadcast-like and DHT-like. More specifically, most early P2P search algorithms, such as those in Gnutella [12], Freenet [13] and Kazaa [14], are broadcast-like, while some recent P2P search schemes, as in eMule [15] and BitTorrent [16], employ some features of DHT. On the MANET side, most on-demand routing protocols, such as DSR [17] and AODV [18], are basically broadcast-like. We therefore introduce approaches that integrate these protocols in different ways according to these categories.
3.1 Broadcast over Broadcast
The most straightforward approach is to employ a broadcast-like P2P routing protocol at the application layer over a broadcast-like MANET routing protocol at the network layer. Intuitively, in this setting, every routing message broadcast to the virtual neighbors at the application layer results in a full broadcast to the corresponding physical neighbors at the network layer. The scheme is illustrated in Figure 1 with a search example: peer A in the P2P overlay is trying to find a particular piece of information, which is actually available at peer B. Due to the broadcast mechanism, the search request is transmitted to A's neighbors, and recursively to all members of the network, until a match is found or the request times out. Here the blue lines represent the routing path at the application layer. We then map this search process onto the MANET overlay, where mobile node A0 corresponds to peer A in the P2P overlay, and B0 to B in the same way. Since the MANET overlay also employs a broadcast-like routing protocol, the request from node A0 is flooded (broadcast) to directly connected neighbors, which themselves flood their neighbors, and so on, until the request is answered or a maximum number of flooding steps occur.
The route-establishing lines in the network layer are highlighted in red, where we can see that there is little overlap between the routes of the two layers even though both employ broadcast-like protocols. We studied a typical broadcast-like P2P protocol, Gnutella [19], in previous work [20]. This is a pure P2P protocol, as shown in Figure 2, in which no advertisement of shared resources (e.g. a directory or index server) occurs. Instead, each request from a peer is broadcast to directly connected peers, which themselves
Fig. 1. Broadcast over Broadcast
Fig. 2. Broadcast-like P2P Protocol
Fig. 3. Broadcast-like MANET Protocol
broadcast this request to their directly connected peers, and so on, until the request is answered or a maximum number of broadcast steps occur. It is easy to see that this protocol requires a lot of network bandwidth and does not prove very scalable. The complexity of this routing algorithm is O(n) [21, 22]. Generally, most on-demand MANET protocols, like DSR [23] and AODV [24], are broadcast-like in nature [25]. Previously, one typical broadcast-like MANET protocol, AODV, was studied in [26]. As shown in Figure 3, in that protocol each node maintains a routing table only for active destinations: when a node needs a route to a destination, a path discovery procedure is started, based on a RREQ (route request) packet; the packet does not collect a complete path (with the IDs of all involved nodes) but only a hop count; when the packet reaches a node that has the destination in its routing table, or the destination itself, a RREP (route reply) packet is sent back to the source (through the path that has been set up by the RREQ packet), which inserts the destination in its routing table and records the neighbour from which the RREP was received as the preferred neighbour for that destination. Simply speaking, when a source node wants to send a packet to a destination and does not know a valid route, it initiates a route discovery process by flooding a RREQ packet through the network. AODV is a pure on-demand protocol, as only nodes along a path maintain routing information and exchange routing tables. The complexity of that routing algorithm is O(n) [27]. This approach is probably the easiest to implement, but its drawback is also obvious: the routing path of the request message is not the shortest path between source and destination (e.g. the red line in Figure 1), because the virtual neighbors in the P2P overlay are not necessarily also physical neighbors in the MANET overlay; in fact these nodes might be physically far away from each other.
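The flooding behavior common to both layers, Gnutella-style query broadcast and AODV route discovery, can be sketched as a TTL-limited flood over an adjacency list. The topology, peer names and TTL below are illustrative assumptions; the visited set stands in for the duplicate suppression (message IDs) that the real protocols use.

```python
# TTL-limited flooding search, a minimal sketch of the broadcast-like behavior
# shared by Gnutella-style overlays and AODV route discovery. The topology,
# peer names and TTL are illustrative assumptions, not taken from either
# protocol specification.

def flood_search(graph, source, is_target, ttl):
    """Return (node that answered, messages sent) or (None, messages sent)."""
    frontier = [source]
    visited = {source}
    messages = 0
    for _ in range(ttl):                  # one flooding step per TTL unit
        next_frontier = []
        for node in frontier:
            for neigh in graph[node]:
                if neigh in visited:
                    continue              # drop duplicate copies of the request
                visited.add(neigh)
                messages += 1
                if is_target(neigh):
                    return neigh, messages
                next_frontier.append(neigh)
        frontier = next_frontier
    return None, messages

# Peer A searches for an item that only peer B holds (cf. Figure 1).
topology = {
    "A": ["C", "D"],
    "C": ["A", "D", "E"],
    "D": ["A", "C", "B"],
    "E": ["C"],
    "B": ["D"],
}
hit, cost = flood_search(topology, "A", lambda n: n == "B", ttl=3)
```

Even with duplicate suppression, every reachable node may be visited before a match is found, which is the O(n) per-layer cost discussed in the text.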
Therefore, the resulting routing algorithm complexity of this broadcast over broadcast scheme is unfortunately O(n²) even though each layer's routing algorithm complexity is O(n). It is not practical to deploy such a scheme because of the serious scalability problem caused by the double broadcast; and taking energy consumption into consideration, which is critical for mobile devices, the double broadcast also costs a great deal of energy, making the scheme infeasible in cellular wireless data networks.
3.2 DHT over Broadcast
The scalability problem of broadcast-like protocols has long been observed, and many revisions and improvement schemes have been proposed [28, 29, 30]. To overcome the scaling problems of broadcast-like protocols, where data placement and overlay network construction are essentially random, a number of proposals focus on structured overlay designs. Distributed Hash Tables (DHT) [31] and their varieties [32, 33, 34] seem to be promising routing algorithms for overlay networks. It is therefore interesting to consider a second approach: employing a DHT-like P2P routing protocol at the application layer over a broadcast-like MANET routing protocol at the network layer. The scheme is illustrated in Figure 4 with the same search example. Compared with the previous approach, the difference lies in the P2P overlay: in a DHT-like protocol, files are associated with keys (e.g. produced by hashing the file name); each node in the system handles a portion of the hash space and is responsible for storing a certain range of keys. After a lookup for a certain key, the system returns the identity (e.g. the IP address) of the node storing the object with that key. The DHT functionality allows nodes to put and get files based on their key. Therefore, routing is a location-deterministic distributed lookup (e.g.
the blue line in Figure 4). DHT was first proposed by Plaxton et al. [35], without the intention of addressing P2P routing problems. DHT soon proved to be a useful substrate for large distributed systems, and a number of projects have been proposed to build Internet-scale facilities
Fig. 4. DHT over Broadcast
Fig. 5. DHT-like P2P Protocol
layered above DHTs, among them Chord [31], CAN [32], Pastry [33] and Tapestry [34]. As illustrated in Figure 5, all of them take a key as input and route a message to the node responsible for that key. Nodes have identifiers taken from the same space as the keys. Each node maintains a routing table consisting of a small subset of the nodes in the system. When a node receives a query for a key for which it is not responsible, it routes the query towards the neighbor whose hashed identifier makes the most progress in resolving the query. In such a design, for a system with n nodes, each node has O(log n) neighbors, and the complexity of the DHT-like routing algorithm is O(log n) [36]. Additional work is required to implement this approach, partly because a DHT requires periodic maintenance (it is, in effect, an Internet-scale hash table, or a large distributed database); since each node maintains a routing table (i.e. hashed keys) of its neighbors according to the DHT algorithm, every node join or leave triggers a reassignment of the nearest keys between nodes. This DHT over Broadcast approach is obviously better than the previous one, but it still does not solve the shortest path problem present in the Broadcast over Broadcast scheme. Though the P2P overlay algorithm complexity is optimized to O(log n), the mapped message routing in the MANET overlay still proceeds in broadcast fashion with complexity O(n); the resulting algorithm complexity of this approach is as high as O(n log n). This approach still requires a lot of network bandwidth, and hence does not prove very scalable, but it is efficient in limited communities, such as a company network.
3.3 Cross-Layer Routing
A further step beyond the Broadcast over Broadcast approach would be a Cross-Layer Broadcast.
Due to the similarity of broadcast-like P2P and MANET protocols, the second broadcast can be skipped if the peers in the P2P overlay are mapped directly onto the MANET overlay; the result of this approach is a merge of the application layer and the network layer (i.e. the virtual neighbors in the P2P overlay coincide with the physical neighbors in the MANET overlay).
Fig. 6. Cross-Layer Broadcast
Fig. 7. Cross-Layer DHT
The scheme is illustrated in Figure 6, where the advantage of this cross-layer approach is obvious: the routing path of the request message is the shortest path between source and destination (e.g. the blue and red lines in Figure 6), because the virtual neighbors in the P2P overlay are de facto physical neighbors in the MANET overlay due to the merge of the two layers. Thanks to the nature of broadcast, the algorithm complexity of this approach is O(n), making it suitable for deployment in relatively large-scale networks, but still not feasible at Internet scale. It is also possible to design a Cross-Layer DHT (Figure 7) following a similar inspiration; its algorithm complexity would be optimized to O(log n) thanks to the merit of DHT, which is held to be efficient even at Internet scale. The difficulty with that approach is implementation: there is no off-the-shelf DHT-like MANET protocol as far as we know, though recently some research projects, like Ekta [37], have proposed a DHT substrate for MANET. Table 1. How efficiently can a user find a specific piece of data?
Approach                   Efficiency   Scalability   Implementation
Broadcast over Broadcast   O(n²)        N.A.          Easy
DHT over Broadcast         O(n log n)   Bad           Medium
Cross-Layer Broadcast      O(n)         Medium        Difficult
Cross-Layer DHT            O(log n)     Good          N.A.
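The O(log n) lookup behind the DHT rows of Table 1 can be sketched as a Chord-style greedy search on an identifier ring. The ring size and node identifiers below are illustrative assumptions, not taken from any of the cited systems; each node's finger table holds O(log n) entries, and each hop strictly shortens the clockwise distance to the key.

```python
# Chord-style greedy lookup on a 2**M identifier ring (a hedged sketch; the
# ring size and the node identifiers are illustrative assumptions).

M = 6                                     # identifier space: 2**6 = 64 positions
RING = 2 ** M
NODES = sorted([1, 12, 23, 35, 48, 60])   # example node identifiers

def successor(ident):
    """First node clockwise from ident, i.e. the node responsible for it."""
    for n in NODES:
        if n >= ident:
            return n
    return NODES[0]                       # wrap around the ring

def dist(a, b):
    """Clockwise distance from a to b on the ring."""
    return (b - a) % RING

def finger_table(node):
    """Successors of node + 2**i for i = 0..M-1, as in Chord."""
    return [successor((node + 2 ** i) % RING) for i in range(M)]

def lookup(node, key, hops=0):
    """Route greedily toward the node responsible for key, counting hops."""
    succ = successor((node + 1) % RING)   # this node's ring successor
    if dist(node, key) <= dist(node, succ):
        return succ, hops                 # key lies in (node, succ]: done
    best = min(finger_table(node), key=lambda f: dist(f, key))
    return lookup(best, key, hops + 1)    # each hop strictly nears the key

owner, hops = lookup(1, 54)               # node 60 is responsible for key 54
```

With 6 nodes and a 64-slot ring the example resolves in 2 forwarding hops, consistent with the logarithmic bound.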
As an answer to the question in our paper title, we show that the cross-layer approach performs better than separating the overlay from the access networks, based on the comparison of the four approaches, with their different settings for the peer-to-peer overlay and the underlying mobile ad hoc network, in Table 1.
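The asymptotic entries of Table 1 can be made concrete with a back-of-the-envelope calculation; the network sizes below are illustrative assumptions and the constants are omitted, so only the relative growth between sizes is meaningful.

```python
import math

# Illustration of the asymptotic lookup costs in Table 1 for a hypothetical
# network of n nodes; constants are omitted, so only relative growth matters.

def lookup_cost(approach, n):
    return {
        "Broadcast over Broadcast": n * n,
        "DHT over Broadcast": n * math.log2(n),
        "Cross-Layer Broadcast": n,
        "Cross-Layer DHT": math.log2(n),
    }[approach]

# Growing the network from 2**10 to 2**20 nodes multiplies the lookup cost by:
growth = {a: lookup_cost(a, 2 ** 20) / lookup_cost(a, 2 ** 10)
          for a in ("Broadcast over Broadcast", "DHT over Broadcast",
                    "Cross-Layer Broadcast", "Cross-Layer DHT")}
```

A thousandfold larger network costs the Cross-Layer DHT only twice as much per lookup, while the Broadcast over Broadcast scheme pays about a millionfold more, which is why only the former remains plausible at Internet scale.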
4 Concluding Remarks
In this paper, we studied peer-to-peer systems over mobile ad hoc networks with a comparison of different settings for the peer-to-peer overlay and the underlying mobile ad hoc network. We showed that a cross-layer approach performs better than separating the overlay from the access networks. We hope our results will provide useful guidelines for mobile operators, value-added service providers and application developers to design and dimension mobile peer-to-peer systems, and serve as a foundation for our long-term goal [38].
References
1. G. Kortuem, J. Schneider, D. Preuitt, T. G. C. Thompson, S. Fickas, Z. Segall. When Peer-to-Peer Comes Face-to-Face: Collaborative Peer-to-Peer Computing in Mobile Ad hoc Networks. In Proc. 1st International Conference on Peer-to-Peer Computing (P2P 2001), Linkoping, Sweden, August 2001.
2. M. Papadopouli and H. Schulzrinne. A Performance Analysis of 7DS: a Peer-to-Peer Data Dissemination and Prefetching Tool for Mobile Users. In Advances in wired and wireless communications, IEEE Sarnoff Symposium Digest, 2001, Ewing, NJ.
3. C. Lindemann and O. Waldhorst. A Distributed Search Service for Peer-to-Peer File Sharing in Mobile Applications. In Proc. 2nd IEEE Conf. on Peer-to-Peer Computing (P2P 2002), 2002.
4. A. Klemm, Ch. Lindemann, and O. Waldhorst. A Special-Purpose Peer-to-Peer File Sharing System for Mobile Ad Hoc Networks. In Proc. IEEE Vehicular Technology Conf., Orlando, FL, October 2003.
5. S. K. Goel, M. Singh, D. Xu. Efficient Peer-to-Peer Data Dissemination in Mobile Ad-Hoc Networks. In Proc. International Conference on Parallel Processing (ICPPW '02), IEEE Computer Society, 2002.
6. G. Ding, B. Bhargava. Peer-to-peer File-sharing over Mobile Ad hoc Networks. In Proc. 2nd IEEE Conf. on Pervasive Computing and Communications Workshops. Orlando, Florida, 2004.
7. H. Y. Hsieh and R. Sivakumar. On Using Peer-to-Peer Communication in Cellular Wireless Data Networks. In IEEE Transactions on Mobile Computing, vol. 3, no. 1, January-March 2004.
8. B. Bakos, G. Csucs, L. Farkas, J. K. Nurminen. Peer-to-peer protocol evaluation in topologies resembling wireless networks. An Experiment with Gnutella Query Engine. In Proc. International Conference on Networks, Sydney, Oct. 2003.
9. T. Hossfeld, K. Tutschku, F. U. Andersen, H. Meer, J. Oberender. Simulative Performance Evaluation of a Mobile Peer-to-Peer File-Sharing System. Research Report 345, University of Wurzburg, Nov. 2004.
10. D. S. Milojicic, V. Kalogeraki, R. Lukose, K. Nagaraja, J. Pruyne, B.
Richard, S. Rollins, Z. Xu. Peer-to-Peer Computing. Technical Report HPL-2002-57, HP Labs.
11. MANET Implementation Survey. Available at http://protean.itd.nrl.navy.mil/manet/survey/survey.html
12. Gnutella: http://www.gnutella.com/
13. Freenet: http://freenet.sourceforge.net/
14. Kazaa: http://www.kazaa.com/
15. eMule: http://www.emule-project.net/
16. BitTorrent: http://bittorrent.com/
17. DSR IETF draft v1.0. Available at http://www.ietf.org/internet-drafts/draft-ietf-manet-dsr-10.txt
18. AODV IETF draft v1.3. Available at http://www.ietf.org/internet-drafts/draft-ietf-manet-aodv-13.txt
19. Clip2. The Gnutella protocol specification v0.4 (document revision 1.2). Available at http://www9.limewire.com/developer/gnutella protocol 0.4.pdf, Jun 2001.
20. L. Yan and K. Sere. Stepwise Development of Peer-to-Peer Systems. In Proc. 6th International Workshop in Formal Methods (IWFM'03). Dublin, Ireland, July 2003.
21. M. Ripeanu, I. Foster and A. Iamnitchi. Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design. In IEEE Internet Computing, vol. 6(1), 2002.
22. Y. Chawathe, S. Ratnasamy, L. Breslau, S. Shenker. Making Gnutella-like P2P Systems Scalable. In Proceedings of ACM SIGCOMM, 2003.
23. D. B. Johnson, D. A. Maltz. Dynamic Source Routing in Ad-Hoc Wireless Networks. In Mobile Computing, Kluwer, 1996.
24. C. E. Perkins and E. M. Royer. The Ad hoc On-Demand Distance Vector Protocol. In Ad hoc Networking. Addison-Wesley, 2000.
25. F. Kojima, H. Harada and M. Fujise. A Study on Effective Packet Routing Scheme for Mobile Communication Network. In Proc. 4th Intl. Symposium on Wireless Personal Multimedia Communications, Denmark, Sept. 2001.
26. L. Yan and J. Ni. Building a Formal Framework for Mobile Ad Hoc Computing. In Proc. International Conf. on Computational Science (ICCS 2004). Krakow, Poland, June 2004. LNCS 3036, Springer-Verlag.
27. E. M. Royer and C. K. Toh. A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks. In IEEE Personal Communications, April 1999.
28. Q. Lv, S.
Ratnasamy and S. Shenker. Can Heterogeneity Make Gnutella Scalable? In Proc. 1st International Workshop on Peer-to-Peer Systems (IPTPS '02), Cambridge, MA, March 2002.
29. B. Yang and H. Garcia-Molina. Improving Search in Peer-to-Peer Networks. In Proc. Intl. Conf. on Distributed Computing Systems (ICDCS), 2002.
30. Y. Chawathe, S. Ratnasamy, L. Breslau, and S. Shenker. Making Gnutella-like P2P Systems Scalable. In Proc. ACM SIGCOMM 2003, Karlsruhe, Germany, August 2003.
31. I. Stoica, R. Morris, D. Karger, F. Kaashoek and H. Balakrishnan. Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications. In Proc. ACM SIGCOMM, 2001.
32. S. Ratnasamy, P. Francis, M. Handley, R. Karp and S. Schenker. A scalable content-addressable network. In Proc. Conf. on applications, technologies, architectures, and protocols for computer communications, ACM, 2001.
33. A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems. In Proc. IFIP/ACM International Conference on Distributed Systems Platforms (Middleware), Heidelberg, Germany, pages 329-350, November 2001.
34. B. Y. Zhao, L. Huang, J. Stribling, S. C. Rhea, A. D. Joseph, and J. Kubiatowicz. Tapestry: A Resilient Global-scale Overlay for Service Deployment. In IEEE Journal on Selected Areas in Communications, January 2004, Vol. 22, No. 1.
35. C. Plaxton, R. Rajaraman, A. Richa. Accessing nearby copies of replicated objects in a distributed environment. In Proc. ACM SPAA, Rhode Island, June 1997.
36. S. Ratnasamy, S. Shenker, I. Stoica. Routing Algorithms for DHTs: Some Open Questions. In Proc. 1st International Workshop on Peer-to-Peer Systems, March 2002.
37. H. Pucha, S. M. Das and Y. C. Hu. Ekta: An Efficient DHT Substrate for Distributed Applications in Mobile Ad Hoc Networks. In Proc. 6th IEEE Workshop on Mobile Computing Systems and Applications, December 2004, UK.
38. L. Yan, K. Sere, X. Zhou, and J. Pang. Towards an Integrated Architecture for Peer-to-Peer and Ad Hoc Overlay Network Applications. In Proc. 10th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS 2004), May 2004.
Research on Dynamic Modeling and Grid-Based Virtual Reality

Luliang Tang 1,2,* and Qingquan Li 1,2

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMRS), Wuhan University, Wuhan, China
[email protected]
2 Research Center of Spatial Information and Network Communication (SINC), Wuhan University, Wuhan, China
[email protected]
Abstract. It is widely held that the next-generation Internet technology is grid computing, which supports the sharing and coordinated use of diverse resources in dynamic virtual organizations built from geographically and organizationally distributed components. Grid computing is characterized by strong computing ability and high-bandwidth information exchange [1]. Globus presented the Open Grid Services Architecture (OGSA), which is centered on grid services [3]. According to the characteristics of Grid-based Virtual Reality (GVR) and the current development of grid computing, this paper puts forward a Grid-Oriented Distributed Network Model (GDNM) for GVR, whose dynamic virtual groups correspond to the Virtual Organizations in OGSA services. The GDNM is more advantageous for distributed database consistency management, makes it more convenient for virtual group users to acquire GVR data, and allows the dynamic virtual groups to use grid resources and communicate with each other more easily and directly. The architecture of GVR designed in this paper is based on OGSA and web services, which makes it more convenient to use grid services and realize GVR. This paper also puts forward a method of virtual-environment Object-Oriented Dynamic Modeling (OODM) based on Problem Solving (PS), which is applied to dynamic digital terrain and dynamic object modeling, and presents the implementation of GVR and the interfaces of its Grid Services.
1 Introduction
Processing, visualizing and integrating useful information from various sources plays an increasingly important role in modern society; the information sources may be widely distributed, the data processing requirements can be highly variable, diverse types of resources are required, and heavy processing demands are placed upon researchers [5]. Grid technology is a major cornerstone of today’s computational science and engineering, and provides a powerful medium to achieve the integration of large amounts
* Corresponding author.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1036 – 1042, 2005. © Springer-Verlag Berlin Heidelberg 2005
of experimental data and computational resources, from simple parameters to highly distributed networks, into the complex interactive operations that allow researchers to efficiently run their experiments by optimizing overhead and performance. The next-generation Internet based on grid computing makes it possible to build Distributed Virtual Reality (DVR), which can realize mass-data dynamic modeling, distributed virtual geographic environments, and collaborative GIS.
2 Grid Computing and Middleware
2.1 Grid Computing
The Grid has been defined as “flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions and resources (a virtual organization)” and “a computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities” [1]. The first grid test bed was set up with 1000M bandwidth in the 1990s, and more and more large grid projects were started, such as I-WAY, Globus, Legion and the Global Grid Forum (GGF) in America, CERN DataGrid, UNICORE and MOL in Europe, Nimrod/G and EcoGrid in Australia, and Ninf and Bricks in Japan [1]. In China, Li Sanli started his “ACI” grid research, and the Chinese Academy of Sciences started the state “863” project named “Vega Grid” in 1999. China National Grid (CNGrid) was started with many grid nodes in 2002, and the China Grid Forum (CGF) came into existence in 2003 [6].
2.2 Middleware and the Globus Toolkit
The Globus Toolkit, one of the largest collaborative efforts in grid computing research [2, 8], is a software toolkit that allows developers to program grid-based applications, and it is becoming the de facto standard for grid system research and applications. The resource manager is the most important function of the Globus Toolkit as a grid middleware; it is divided into two services, the resource broker and the job manager. Figure 1 displays the workflow of the Globus Toolkit grid middleware.
Fig. 1. Basic principle of Globus Toolkit Grid Middleware
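The broker / job-manager split described above can be sketched generically. The classes, resource names and capacity figures below are hypothetical illustrations for exposition only and do not reflect the actual Globus Toolkit API.

```python
# Generic sketch of the resource broker / job manager split described above.
# These classes and names are hypothetical assumptions, not the Globus API.

class ResourceBroker:
    """Matches a job's requirements against advertised resources."""
    def __init__(self, resources):
        self.resources = resources        # e.g. {"nodeA": {"cpus": 4}, ...}

    def select(self, needed_cpus):
        for name, info in self.resources.items():
            if info["cpus"] >= needed_cpus:
                return name
        return None                       # no resource satisfies the request

class JobManager:
    """Starts and tracks jobs on the resource chosen by the broker."""
    def __init__(self):
        self.log = []

    def submit(self, resource, job):
        self.log.append((resource, job))
        return f"{job} scheduled on {resource}"

broker = ResourceBroker({"nodeA": {"cpus": 4}, "nodeB": {"cpus": 16}})
target = broker.select(needed_cpus=8)
result = JobManager().submit(target, "render-scene")
```

The separation mirrors Figure 1: the broker decides where a job runs, the job manager handles how it runs there.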
3 Distributed Virtual Reality and Grid-Based Virtual Reality
3.1 Distributed Virtual Reality
With the development of the Internet and the demand for widely distributed resources and data processing, DVR (Distributed Virtual Reality) has become a hot research topic. The standard for DVR has evolved from the Distributed Interactive Simulation (DIS) of SIMNET to the High Level Architecture (HLA), which includes three parts: the OMT (Object Model Template), the Rules, and the Interface Specification. WorldToolKit (WTK) is a development tool for virtual reality, and Vega is another virtual reality development toolkit. The University of Alberta in Canada developed MR, the English company Division developed dVS, the Swedish distributed information systems lab developed DIVE (Distributed Interactive Virtual Environment), and the US Naval Postgraduate School developed NPSNET-IV [4]. The network data model of DVR has three types: the centralized network model, the distributed network model and the replicated network model.
3.2 Grid-Based Virtual Reality
The development of GVR depends on the development of network technology and distributed computing, just as the development of GIS depends, to a certain extent, on the development of computer science and technology. A network based on grid computing makes it possible to build GVR, which can realize mass-data dynamic modeling, build distributed virtual geographic environments, support collaborative GIS and interactive cooperation, and make tele-immersion and visual systems a reality. GVR would change the old “humans/computers” interaction mode into a “humans/computers/humans cooperation” interaction mode.
GVR can be applied to distributed dynamic modeling, military simulation, collaborative virtual environments (CVE), and tele-immersion, for example distributed dynamic modeling of military environments, cooperative training, battle-scene distribution, resource layout, information services, log services, and virtual remote libraries; this paper analyzes and puts forward such applications of GVR.
4 The Architecture of GVR

4.1 Grid-Oriented Distributed Network Model of GVR

In this paper, a network model of GVR is designed, named the Grid-Oriented Distributed Network Model (GDNM); Figure 2 displays this model. GDNM is composed of virtual groups and a GVR server; virtual groups can access each other directly through the grid, and the GVR server is responsible for the database consistency of the distributed virtual group servers. A virtual group is composed of group clients and a group server, and group clients acquire the scene database from their own group server without accessing the databases of other group servers or the GVR server. Virtual groups are dynamic and correspond to the Virtual Organizations of the grid OGSA service; group clients can also access the group server scene database from the GVR server.
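The GDNM described above can be sketched as a simple data model (an illustrative sketch; the class names are ours, not the paper's): each virtual group holds its own group-server scene database for its clients, while the GVR server replicates updates so the distributed group databases stay consistent.

```python
class VirtualGroup:
    """A dynamic group: one group server (scene database) plus its clients."""
    def __init__(self, name):
        self.name = name
        self.scene_db = {}      # the group server's scene database
        self.clients = []

    def join(self, client):
        self.clients.append(client)

    def get_scene(self, key):
        # clients read scenes from their own group server only
        return self.scene_db.get(key)

class GVRServer:
    """Keeps the distributed group-server scene databases consistent."""
    def __init__(self):
        self.groups = []

    def register(self, group):
        self.groups.append(group)

    def replicate(self, key, scene):
        # push an update to every virtual group's scene database
        for group in self.groups:
            group.scene_db[key] = scene

gvr = GVRServer()
g1, g2 = VirtualGroup("vg1"), VirtualGroup("vg2")
gvr.register(g1)
gvr.register(g2)
gvr.replicate("terrain-0", {"tris": 1024})
# both group servers now hold consistent copies of the scene
```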
Research on Dynamic Modeling and Grid-Based Virtual Reality
Fig. 2. The Grid-Oriented Distributed Network Model of GVR

Fig. 3. The Architecture of GVR
4.2 The Architecture of GVR

The architecture of GVR is composed of four tiers: the GVR Resource Layer, the GVR Control Layer, the GVR Layer and the Virtual Group Clients Layer. The GVR Resource Layer includes the grid-based network, high-performance computer servers, scene databases and various other resources. The GVR Control Layer is responsible for security management, information service, data management and resource management. The GVR Layer is composed of the GVR server and the GVR virtual group servers; the GVR server is responsible for GVR data management and for the consistency of the databases that are distributed in the virtual group servers, and the GVR virtual groups acquire the GVR scene
data from the GVR server and are responsible for the GVR group client users carrying out the distributed simulation (Figure 3). The architecture of GVR is based on the Open Grid Services Architecture (OGSA), which is centered on Web Services. Being OGSA-oriented, this architecture can utilize grid services efficiently and decreases conflicts with the grid environment.

4.3 Object-Oriented Dynamic Modeling Based on Problem Solving

Modeling is an important task in Virtual Reality (VR), and there are many software packages that can carry out static object modeling, such as 3DMax, Maya, MultiGen Creator and so on. GVR is based on grid computing, which makes it possible to practice dynamic object modeling in a distributed environment. Terrain models can be classified into two types: the Digital Elevation Model (DEM) and the Triangulated Irregular Network (TIN) [9]. Generally speaking, a DEM is easier to construct than a TIN, but for a multi-resolution model, the DEM structure may suffer terrain tearing between tiles of different resolutions. Although a TIN works well in the multi-resolution case, the complex algorithms for transforming between models of different resolutions impede its use in real-time applications. Because the architecture of GVR is OGSA-oriented, i.e. a service-oriented architecture, researchers can pay attention to the problem itself rather than to the type and origin of resources, because grid computing supplies a Problem Solving Environment. A Problem Solving Environment (PSE) provides the user a complete and integrated environment for problem composition, solution, and analysis [10]. The environment should provide an intuitive interface to the available grid resources, which abstracts the complexities of accessing grid resources by providing a complete suite of high-level tools designed to tackle a particular problem [11].
Dynamic digital terrain in GVR applies the method of Object-Oriented Dynamic Modeling (OODM) based on Problem Solving (PS). The digital terrain model is represented by a triangulated irregular network (TIN), and the TIN creation becomes the problem: GVR puts this problem to the grid middleware, and the Grid Service finds and acquires the computing resources in the Problem Solving Environment.
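The problem-solving flow just described (TIN creation posed as a named problem, the middleware finding a resource for it) might look like the following (a hedged sketch with hypothetical names; `create_tin` is only a placeholder, not a real triangulation):

```python
class ProblemSolvingEnvironment:
    """Maps a named problem to a registered solver (the grid's resource)."""
    def __init__(self):
        self.solvers = {}

    def register(self, problem_name, solver):
        self.solvers[problem_name] = solver

    def solve(self, problem_name, data):
        # the middleware "finds and acquires" the resource for the problem
        return self.solvers[problem_name](data)

def create_tin(points):
    # placeholder for a real TIN triangulation; here we only report that
    # n points yield at most 2n - 5 triangles (a standard planar bound)
    return {"points": len(points), "max_triangles": 2 * len(points) - 5}

pse = ProblemSolvingEnvironment()
pse.register("tin-creation", create_tin)
result = pse.solve("tin-creation", [(0, 0), (1, 0), (0, 1), (1, 1)])
```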
5 Implementation of GVR

The task management middleware and the resource management middleware are important to GVR, as they decide the success or failure of the whole system; we now discuss these two parts' technical implementation in detail. The task management middleware takes charge of all the tasks, including task management and dispatch, decomposition, distribution, result merging, report generation, etc. The mission of the task management middleware in this system is mainly completed by task objects. After receiving a user's application job, the task management middleware dispatches a task object to the job manager; every task object has object-oriented characteristics. Users cannot operate on resources directly except through task objects. Different task objects have different missions (including searching, displaying, suspending, resuming and stopping an operation), which are decided by the task management middleware when they are generated.
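The task-object mechanism can be sketched as follows (an abstract sketch of the described behaviour, with names of our own choosing): the middleware decomposes a job into task objects whose missions are fixed at creation time, then dispatches each to the job manager.

```python
class TaskObject:
    """A task object created by the task management middleware.

    Its mission (search, display, suspend, resume, stop, ...) is fixed
    at creation time; users act on resources only through such objects.
    """
    def __init__(self, mission):
        self.mission = mission
        self.state = "created"

    def dispatch(self, job_manager):
        job_manager.append(self)        # handed over to the job manager
        self.state = "dispatched"
        return self

class TaskManagementMiddleware:
    def __init__(self):
        self.job_manager = []           # stand-in for the real job manager

    def handle_job(self, missions):
        # decompose a user job into task objects and dispatch each one
        return [TaskObject(m).dispatch(self.job_manager) for m in missions]

tmm = TaskManagementMiddleware()
tasks = tmm.handle_job(["search", "display", "suspend"])
```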
Fig. 4. Implementation of the GridService portType

Table 1. The Interface of GridService

Interface            Operation
GridService          FindServiceData()
                     SetTerminationTime()
                     Destroy()
NotificationSource   SubscribeToNotificationTopic()
                     UnSubscribeToNotificationTopic()
NotificationSink     DeliverNotification()
Registry             RegisterService()
                     UnRegisterService()
Factory              CreateService()
PrimaryKey           FindByPrimaryKey()
                     DestroyPrimaryKey()
HandleMap            FindByHandle()
The resource management middleware deals with the problems of how to describe resources in the resource registry center and how to publish, find and bind service resources. Figure 4 displays the implemented API interface of the grid service; users may build grid applications in C, Java or VS.NET. The OGSI supplies the GridService interface with API operations such as FindServiceData(), SetTerminationTime() and Destroy(). OGSI supplies the user other interfaces such as NotificationSource, NotificationSink, Registry, Factory, PrimaryKey and HandleMap (Table 1).
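The GridService operations of Table 1 can be mirrored in an abstract sketch (illustrative only; real OGSI bindings are generated from WSDL and differ in detail, and the service-data contents here are made up):

```python
import time

class GridService:
    """Abstract sketch of the OGSI GridService portType operations."""
    def __init__(self, service_data):
        self.service_data = service_data        # queryable service metadata
        self.termination_time = None
        self.destroyed = False

    def FindServiceData(self, name):
        # query one named element of the service data
        return self.service_data.get(name)

    def SetTerminationTime(self, seconds_from_now):
        # schedule the service instance's termination
        self.termination_time = time.time() + seconds_from_now
        return self.termination_time

    def Destroy(self):
        # explicitly destroy the service instance
        self.destroyed = True

svc = GridService({"handle": "gvr/scene-service", "status": "active"})
status = svc.FindServiceData("status")
svc.SetTerminationTime(3600)
svc.Destroy()
```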
6 Conclusion

According to the characteristics of Grid-based Virtual Reality (GVR) and the development of current grid computing, this paper put forward the Grid-Oriented Distributed Network
Model for GVR, whose dynamic virtual groups correspond to the Virtual Organizations of the OGSA service. The GDNM is more advantageous for distributed database consistency management, is more convenient for virtual group users acquiring GVR data, and lets the dynamic virtual groups utilize grid resources and communicate with each other more easily and directly. The architecture of GVR designed in this paper is based on OGSA and Web Services, and it keeps to the "five-tier hourglass structure" of OGSA. This architecture is more convenient for utilizing grid services and decreases conflicts with the grid environment. This paper also presents the implementation of GVR and the interfaces of the Grid Service. The next-generation network based on grid computing brings many new ideas and changes to our society, and many grid network researchers have described the blueprint of the grid network infrastructure and good prospects for its application in the GIS field. GVR, however, is a new issue that faces many researchers in the computer, virtual reality and GIS fields, and it is only in an experimental period at present. GVR is a sort of project that involves researchers from different regions, different users, different platforms, different subjects and different fields, and it will bring us more challenges.
Acknowledgement

This work has been supported by the Open Research Fund of LIESMARS, No. (03)0404.
References

1. I. Foster and C. Kesselman. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, Los Altos, CA, 1998. www.mkp.com/book_catalog/1-55860-475-8.asp
2. What is the Globus Toolkit? http://www.globus.org/
3. I. Foster, C. Kesselman, J. Nick, S. Tuecke. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. June 22, 2002.
4. Zhang Maojun. Virtual Reality System. Beijing: Science Publisher, 2001.
5. William E. Johnston. Computational and data Grids in large-scale science and engineering. Future Generation Computer Systems 18 (2002) 1085–1100.
6. Dou Zhihui, Chen Yu, Liu Peng. Grid Computing. Beijing: Tsinghua University Publisher, 2002.
7. Gong Jianhua, Lin Hui. Study on distributed virtual Geo-environments. Journal of Image and Graphics, Vol. 6 (A), No. 9, 2001.
8. I. Foster, C. Kesselman. The Globus Project: A Status Report. Proc. IPPS/SPDP '98 Heterogeneous Computing Workshop, pp. 4-18, 1998.
9. Abdelguerfi, M., Wynne, C., Cooper, E., Roy, L. and Shaw, K. Representation of 3-D elevation in terrain database using hierarchical triangulated irregular networks: a comparative analysis. Int. J. Geographical Information Science, 1998, Vol. 12, No. 8, pp. 853-873.
10. Walker, D.W., Li, M., Rana, O.F., Shields, M.S., Huang, Y. The software architecture of a distributed problem-solving environment. Concurrency: Practice and Experience 12(15) (2000) 1455-1480. http://www.cs.cf.ac.uk/User/David.W.Walker/papers/psearch01.ps
11. Hakki Eres, Graeme Pound, Zhouan Jiao, et al. Implementation of a Grid-Enabled Problem Solving Environment in Matlab. In: P.M.A. Sloot et al. (Eds.): ICCS 2003, LNCS 2660, pp. 420-429, 2003.
Design of Wireless Sensors for Automobiles

Olga L. Diaz–Gutierrez and Richard Hall

Dept. Computer Science & Engineering, La Trobe University, Melbourne 3086, Victoria, Australia
[email protected], [email protected]

Abstract. Automobile manufacturers require sense data to analyse and improve the driving experience. Currently, sensors are physically wired to both data collectors and the car battery, so the number of wires scales linearly with the number of sensors. We design alternative power and communications subsystems to minimise these wires' impact on the test environment.
1 Introduction
Like many other industries, the multinational automobile industry uses radio frequency (RF) applications [1], specifically to assist driving and vehicle diagnosis [2, 3]. Such applications are supported in modern vehicles via the standard provision of hardware infrastructure consisting of the two–wire serial controller area network (CAN) bus platform [4]. All vehicle applications are required by international standards to have undergone exhaustive testing under statistically sound empirical experiments that involve the collection of sense data [5]. The development of wireless sensor networks in automobiles is thus a natural progression of typical sensing and RF devices for this industry. Wireless sensors (WS) are desirable in an automobile test environment for three reasons. First, WS are smaller since cable ports are unnecessary. Second, they eliminate the time required to painstakingly connect all sensors to power and data cables in vehicles. Consequently, WS can be more quickly deployed and easily moved around, improving both the spatial resolution of sense data and fault tolerance of the sensor network [6, 7]. Finally, the number of cables that can be safely crammed into an automobile cabin (along with sensors) during test driving is limited (see Figure 1) .
Fig. 1. Example test vehicle cabin view (courtesy Siemens VDO Automotive Pty. Ltd.)
This paper is organised as follows. In Section 2 we present our WS hardware design. Our proposed WS collaboration method for the automobile-testing environment sensor network will be discussed in Section 3. This network lifetime is bounded by the average power consumption of individual WS, discussed in Section 4. Finally we describe future work in Section 5.

X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1043–1050, 2005. © Springer-Verlag Berlin Heidelberg 2005
2 Wireless Sensor Architecture: Hardware Design
We adopted a typical WS architecture [6, 8] consisting of a power unit, a sensing module, a radio communications module, and a microcontroller that performs analog-to-digital conversion (A/D) as well as software operations, including aggregating data into packets at the transmitting end, and filtering and interpolating data at the receiving WS (see Figure 2). WS power units typically have operating voltages ranging from 3.0–3.6V. We decided to give the tester the option to power the WS either from small batteries as per normal usage [6] or from the automobile battery, if convenient. Thus we incorporated a TLE4274 regulator to convert the automobile battery voltage down to within the WS operating voltage (see Figure 3). In order to minimise power drain, the sensing module (see Figure 4) is switched on just before a measurement is performed and off immediately afterwards (rapid switching has no negative impact on the components [9]). As usual, this measurement, represented as an analog signal (produced by a real-world phenomenon sensing device), is stabilised for input to the A/D [10, 11]. The WS is designed to be phenomenon-independent: automobile manufacturers can study any phenomenon of interest with these WS so long as they have a measuring device that outputs an appropriate analog signal. Being unsure about the RF operating environment within automobiles of different makes, we wanted the radio communications module to meet two design
Fig. 2. Architecture of a Wireless Sensor
Fig. 3. Power Module Schematics
Fig. 4. Sensing Module Design
Fig. 5. Wireless Sensor RF Antenna Schematics
criteria. First, we wanted the option of being able to communicate across multiple bandwidths, so we selected two compact and low-power RF chips: the nRF905 transceiver for the 433/868/915MHz band [12] and the nRF2401 transceiver for the 2.4GHz band [13]. Second, we wanted two antenna options that could be selected where necessary (see Figure 5): both PCB antennas (compact, with lower power requirements, but hard to tune [14, 15, 16]) and external antennas (vice versa [17]). We hope that the ability to switch between different RF frequencies and antennas will assist communications, particularly between the engine bay and the cabin across the thick metal plate firewall standard in all vehicles. The two requirements for our microcontroller were minimal power drain (to prolong WS life) and built-in Serial Peripheral Interface (SPI) functionality (to assist data logging onto the CAN bus [18]). The chip we selected was the Atmel ATmega88: it is low-powered and has a fast stand-by-mode to active-mode transition time (so the WS draws power for the shortest amount of time during measurements and transmission). It has a built-in brown-out detector circuit, to alert the sensor network user when WS batteries need replacing. It also has a useful built-in A/D, which we will use to convert sensor input to a 10-bit digital value (zero represents 0 volts and 2^10 - 1 represents the power source voltage, the maximum possible voltage [18]). The meaning of this value is derived by mapping the voltage into the measurement scale for the real-world phenomenon being sensed.
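The voltage-to-measurement mapping just described can be sketched as follows (a hedged example: the 3.3V reference and the temperature scale are illustrative assumptions of ours, not values from the paper):

```python
def adc_to_measurement(adc_value, v_ref=3.3, scale_min=-40.0, scale_max=125.0):
    """Map a 10-bit A/D reading to a physical measurement.

    0 represents 0 V and 2**10 - 1 represents the reference (power
    source) voltage; the voltage is then mapped linearly onto the
    measurement scale of the sensed phenomenon. The reference voltage
    and temperature scale here are illustrative assumptions only.
    """
    voltage = adc_value / (2**10 - 1) * v_ref   # ADC counts -> volts
    fraction = voltage / v_ref                  # 0.0 .. 1.0 of full scale
    return scale_min + fraction * (scale_max - scale_min)

# adc_to_measurement(0) maps to scale_min; adc_to_measurement(1023)
# maps to scale_max; intermediate counts interpolate linearly.
```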
In this section we discussed the design of an individual wireless sensor. In the automobile test environment however, it is necessary to consider how many of these sensors will simultaneously sense and communicate data.
3 Wireless Sensor Network
Our WS will use the Time Division Multiple Access (TDMA) communications protocol, as there is an insufficient frequency band to support FDMA and the chosen transceivers do not support CDMA [19]. The use of TDMA means that only the WS actively transmitting and receiving data are in the (relatively high-power) communications on-mode; thus all but two of the WS are in communications stand-by-mode at any instant. We assume that a single WS (the master) operates as a constant data receiver (star topology network) and is powered by the vehicle battery (so it is unnecessary to consider power usage for the master WS). In addition to acting as a central receiver, it synchronises when each slave WS can begin data transmission using a unique req-trans message to each slave. Our two transceiver options both produce RF packets with a payload bit length of 240 [12, 13], and in the first instance we decided to use fifteen 16-bit samples per packet (3 bits to identify the 5 A/D channels, 10 bits of sense data, and 3 bits left over per sample). A window of 10ms has been allowed for simple packet processing by the receiver, and a minimum TDMA slot of 5ms has been allowed. We now calculate the maximum number of WS that a TDMA network with these values can support.

NoDatabits_packet = No. of data bits in a packet = 240
Measurement_bitsize = Size of a measurement (number of bits) = 16
Samples_interval = Sample time interval, user defined (in seconds)
Sensors_number = Number of sensors, user defined
TDMA_threshold = Minimum time needed for each transmitter when using the TDMA protocol = 5ms
MCU_threshold = Minimum time needed for the master (MCU) to process each packet received = 10ms

Using the variables above, the following can be calculated:
Samples_packet = No. of samples per packet = NoDatabits_packet / Measurement_bitsize = 240 / 16 = 15
TXInterval_avg = Average transmission interval
TDMA_MCU = Average window time for each sensor, for both TDMA transmission and processing at the master (ms)

Every sensor transmits a packet once 15 samples have been taken, which are measured at a rate of Samples_interval. The average transmission interval is thus:

TXInterval_avg = Samples_packet × Samples_interval

If sensors transmit a packet every TXInterval_avg, then each sensor is allocated a window in that interval to implement TDMA. To calculate the size of the time slot, the interval is divided by the number of nodes in the network. The factor of 1,000 converts seconds to milliseconds:
Fig. 6. System Bottleneck: Master VS Protocol Capabilities
TDMA_MCU = (TXInterval_avg × 1000) / Sensors_number    (1)
where TDMA_MCU > TDMA_threshold and TDMA_MCU > MCU_threshold. The maximum number of WS our network can support depends on the interval at which samples are taken (Samples_interval). Once a suitable interval has been chosen (based on the input rate of change), we use equation (1) to determine the number of nodes (Sensors_number) and the TDMA slot. For instance, if a sample interval of 1 second is required, then the maximum number of nodes the master can process is 1,400 (with a TDMA_MCU of 10.714ms), since the master's 10ms threshold has been reached (Figure 6B). In this section we discussed the way that our wireless sensors collaborate and showed that relatively high spatial resolution (>1,000 WS nodes with respect to the size of an automobile) can be achieved using the proposed design. However, if sensors that run off their own batteries draw power too quickly, the task of regularly replacing over a thousand batteries might be prohibitive in the testing environment, thus practically limiting spatial resolution. Therefore, we consider sensor network lifetime with respect to power consumption.
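The slot sizing of equation (1) can be sketched numerically (a hedged example; the function names are ours, and the constants are those given in the text):

```python
SAMPLES_PER_PACKET = 15      # 240-bit packet / 16-bit samples
TDMA_THRESHOLD_MS = 5.0      # minimum slot per transmitter
MCU_THRESHOLD_MS = 10.0      # minimum master processing time per packet

def tdma_slot_ms(samples_interval_s, sensors_number):
    """Equation (1): average TDMA window per sensor, in milliseconds."""
    tx_interval_avg = SAMPLES_PER_PACKET * samples_interval_s  # s per packet
    return tx_interval_avg * 1000.0 / sensors_number

def slot_is_feasible(slot_ms):
    """A slot must exceed both the TDMA and the master (MCU) thresholds."""
    return slot_ms > TDMA_THRESHOLD_MS and slot_ms > MCU_THRESHOLD_MS

# With a 1 s sample interval and 1,400 nodes, the slot is ~10.714 ms,
# just above the master's 10 ms processing threshold.
```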
4 Sensor Network Lifetime
The three main sources of current consumption in our WS are the sensing module, the radio module and the processor module (see Figure 2). These modules are considered with respect to both WS operating modes: on-mode and off-mode. The current averages over time for each module and mode were analysed (calculations not shown), then all results were summed to generate an overall WS average current consumption:

Average Current per Module (μA)          On–Mode    Off–Mode
Sensing Module                            47.475     0
Radio Module                               5.369     2.232
Processor Module                         400.737    15.247
Total Average Current = AvI_ON/OFF       453.581    17.479
In addition, since we intend to use a standard 500mAh battery, which is incapable of supplying the current required by a WS as its charge decreases, an estimated 20% of the theoretical capacity is deducted [20]. Thus, to calculate the average WS lifetime (Hours), the following formula is used:

Bat_size = Electric current provided by the battery for an hour = 500mAh
AvI_ON = Average WS current during on-mode = 453.581μA
AvI_OFF = Average WS current during off-mode = 17.479μA
T_ON = Percentage of time that WS are in on-mode (%)

Hours = ( Bat_size / ( AvI_ON × T_ON/100 + AvI_OFF × (100 − T_ON)/100 ) ) × 0.8
Assuming WS are active constantly, battery life is 1.2 months. More realistically, if the sensor network was made active in an automobile testing environment for 10% of the time, the network could operate for a little more than 9 months.
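The lifetime figures above can be reproduced from the formula (a hedged sketch; the 30-day month used for the conversion is our own assumption):

```python
BAT_SIZE_UAH = 500_000.0   # 500 mAh battery, in microamp-hours
AVI_ON_UA = 453.581        # average on-mode current (uA)
AVI_OFF_UA = 17.479        # average off-mode current (uA)
DERATING = 0.8             # only ~80% of theoretical capacity is usable [20]

def lifetime_hours(t_on_percent):
    """Average WS lifetime in hours for a given on-mode duty cycle (%)."""
    avg_current = (AVI_ON_UA * t_on_percent / 100.0
                   + AVI_OFF_UA * (100.0 - t_on_percent) / 100.0)
    return BAT_SIZE_UAH * DERATING / avg_current

def lifetime_months(t_on_percent, hours_per_month=24 * 30):
    return lifetime_hours(t_on_percent) / hours_per_month

# Constantly active (100%): ~1.2 months; active 10% of the time: ~9 months.
```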
5 Conclusion
In this paper we presented the hardware design of a wireless sensor network that could be used to assist automobile manufacturers to collect large amounts of real-world data in an automobile testing environment. This design is relatively flexible; our WS will be able to use multiple bandwidths, antennas, power sources, and sensor types. There are a number of questions remaining, however, about how well this design will perform in the field. The automobile testing environment is much less than ideal for implementing a wireless star network topology. There may be multiple RF devices transmitting simultaneously, causing intermodulation distortion. The same signal will be received multiple times as the signal waves propagate (reflect, diffract, etc.) around objects in the vehicle. The antennas may also be placed close to conducting surfaces, which may cause noisy signals. It will take some experimenting to characterise the operational range of a WS in an automobile environment. We intend to test this design in the first instance using electronic temperature sensors (thermistors), placing 60 WS in a Siemens VDO test vehicle. These tests will allow us to assess whether WS deliver reductions in automobile test setup time with respect to wiring, and whether WS lead to improvements with respect to the spatial resolution of the data. It is essential that TDMA time slots are allocated accurately and that WS transmit only during their respective slots, to avoid interference. However, since external oscillators are expensive in terms of power consumption, a less accurate, internal microcontroller oscillator will be used. Thus, due to the higher clock error rate, re-synchronisation must be performed periodically in order to achieve the expected communication performance. Several re-synchronisation algorithms will be trialled to determine the most suitable.
Acknowledgement. We would like to thank Siemens VDO Automotive Pty Ltd for sponsoring this project, particularly Shaun Murray and Martin Gonda, whose assistance with design development has been exhaustive. We would also like to thank Darrell Elton and Paul Main from the Department of Electronic Engineering LTU, for providing enormous practical assistance in electronics design.
References

1. CHANG, K. In: RF and Microwave Wireless Systems. John Wiley & Sons, Inc (2000)
2. KOPETZ, H.: (Automotive electronics – present state and future prospects) http://www.nzdl.org/cgi-bin/cstrlibrary?e=d-0cstr--00-0-0-014-Document---0-1l--1-en-50---20-about-RF+sensors+automotive--001-001-0isoZz8859Zz-1-0&cl=search&d=HASH015d59d7e6c34232e3473dcd.1&hl=0&gc=0>=1.
3. ERIKSSON, L., S.BRODEN: High performance automotive radar. In: Microwave Journal. Volume 39. (1996) 24 – 38
4. VRBA, M.S.P.B.R., ZEZULKA, F.: Chapter 10: Introduction to industrial sensor networking. In: Handbook of Sensor Networks: Compact Wireless and Wired Sensing Systems, CRC Press (2005) 10:5
5. FLINK, J. In: The Automobile Age. The MIT Press (1988) 1 – 26, 358 – 376
6. HASSANEIN, Q.W.H., XU, K.: Chapter 9: A practical perspective on wireless sensor networks. In: Handbook of Sensor Networks: Compact Wireless and Wired Sensing Systems, CRC Press (2005)
7. ROMER, K.: Tracking real–world phenomena with smart dust. In: Wireless Sensor Networks – First European Workshop, Berlin, Germany, John Wiley & Sons, Inc (2004)
8. PAPAVASSILIOU, S., ZHU, J.: Chapter 15: Architecture and modeling of dynamic wireless sensor networks. In: Handbook of Sensor Networks: Compact Wireless and Wired Sensing Systems, CRC Press (2005) 15:3
9. Burr–Brown Corporation: MicroPower, Single–Supply OPERATIONAL AMPLIFIERS MicroAmplifier Series. (1999) http://www.fulcrum.ru/Read/CDROMs/TI-2001.June/docs/sbos088.pdf.
10. BOYLESTAD, R.: 21.5: R–C Low–Pass Filter. In: Introductory Circuit Analysis. 9 edn. Prentice Hall (2000) 916 – 923
11. SEDRA, A., SMITH, K.: 3.8: Limiting and Clamping Circuits. In: Microelectronic Circuits. 4 edn. Oxford University Press, Inc (1998) 195
12. Nordic Semiconductor ASA: Single Chip 433/868/915 MHz Transceiver nRF905 Product Specification. 1.2 edn. (2005) http://www.nordicsemi.no/files/Product/data_sheet/nRF905rev1_2.pdf.
13. Nordic Semiconductor ASA: Single Chip 2.4 GHz Transceiver nRF2401A Product Specification. (2004) http://www.nordicsemi.no/files/Product/data_sheet/nRF2401A_rev1_0.pdf.
14. Nordic Semiconductor ASA: Quarterwave printed monopole antenna for 2.45GHz. Technical report (2003) http://www.nordicsemi.no/files/Product/white_paper/PCB-quarterwave-2_4GHz-monopole-jan05.pdf.
15. Nordic Semiconductor ASA: nAN900–04: nRF905 RF and antenna layout. Technical report (2004) http://www.nordicsemi.no/files/Product/applications/nAN900-04_nRF905_RF_and_antenna_layout_rev2_0.pdf.
16. Nordic Semiconductor ASA: nAN24–01: nRF2401 RF layout. Technical report (2004) http://www.nordicsemi.no/files/Product/applications/nAN24-01rev2_0.pdf.
17. Nordic Semiconductor ASA: nAN24–05: nRF24E1 wireless hands–free demo. Technical report (2003) http://www.nordicsemi.no/files/Product/applications/nAN24-05_rev1_2.pdf.
18. Atmel Corporation: ATmega48/88/168 Product Specification. E edn. (2005) http://www.atmel.com/dyn/resources/prod_documents/doc2545.pdf.
19. RAZAVI, B.: 4: Multiple Access Techniques and Wireless Standards. In: RF Microelectronics. Prentice Hall Communications Engineering and Emerging Technologies. PRENTICE HALL PTR (1998) 103 – 110
20. WRIGHT, S.R.D.S.L.F.P., RABAEY, J.: Power sources for wireless sensor networks. In: Wireless Sensor Networks – First European Workshop, Berlin, Germany, John Wiley & Sons, Inc (2004)
Mobile Tracking Using Fuzzy Multi-criteria Decision Making

Soo Chang Kim1, Jong Chan Lee2, Yeon Seung Shin1, and Kyoung-Rok Cho3

1 Converged Access Network Research Team, ETRI, Korea
[email protected]
2 Dept. of Computer Information Science, Kunsan National Univ., Korea
3 Dept. of Information & Comm. Eng., Chungbuk National Univ., Korea

Abstract. In a microcell- or picocell-based system the frequent movements of mobiles bring excessive traffic into the network. A mobile location estimation mechanism can facilitate both efficient resource allocation and better QoS provisioning through handoff optimization. In this study, we propose a novel mobile tracking method based on Multi-Criteria Decision Making (MCDM), in which uncertain parameters such as the PSS (Pilot Signal Strength), the distance between the mobile and the base station, the moving direction, and the previous location are used in the decision process through an aggregation function from fuzzy set theory. Through numerical results, we show that our proposed mobile tracking method provides better performance than the conventional method using only the received signal strength.
1 Introduction There will be a strong need for the mobile terminal tracking in the next generation mobile communication systems. The location of a Mobile Terminal must be found out, e.g., in wireless emergency calls already in the near future. It is of great importance to the efficiency of next generation mobile communication systems to know the exact position of the moving mobile user in order to reduce the number of paging messages and cell handover messages. Handover efficiency will be an important aspect in next generation mobile communication systems because it affects directly to the switching road and QoS, particularly in combined micro cell of Pico cell networks. Many methods and systems have been proposed based on radio signal strength measurement of a mobile object's transmitter by a set of base stations. Time of arrival (TOA) of a signal from a mobile to neighboring base stations are used in [1], but this scheme has two problems. First, an accurate synchronization is essential between all sending endpoints and all receiving ones in the system. An error of 1 μs in synchronization results to 300 m error in location. Secondly this scheme is not suitable for the microcellular environment because it also assumes LOS environment. Time difference of arrival (TDOA) of signals from two base stations is considered in [2]. TOA scheme and TDOA scheme have been studied for IS-95B where PN code of CDMA system can be used for the location estimation. Enhanced Observed Time Difference (E-OTD) is a TDOA positioning method based on OTD feature already existing in GSM. The mobile measures arrival time of signals from three or more cell sites in a X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1051 – 1058, 2005. © Springer-Verlag Berlin Heidelberg 2005
network. In this method the position of the mobile is determined by trilateration [3]. E-OTD, which relies upon the visibility of at least three cell sites to calculate a position, is not a good solution for rural areas where cell-site separation is large. However, it promises to work well in areas of high cell-site density and indoors. In this study, to enhance estimation accuracy, we propose a scheme based on MCDM which considers multiple parameters: the signal strength, the distance between the base station and the mobile, the moving direction, and the previous location. This process is based on a three-step location estimation which can determine the mobile position by gradually reducing the area of the mobile position [4]. Using MCDM, the estimator first estimates the locating sector in the sector estimation step, then estimates the locating zone in the zone estimation step, and finally estimates the locating block in the block estimation step.
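The trilateration mentioned for E-OTD can be sketched as follows (a hedged illustration with made-up site coordinates, not the paper's method): subtracting pairs of circle equations (x − x_i)² + (y − y_i)² = r_i² eliminates the quadratic terms and leaves a 2×2 linear system for (x, y).

```python
def trilaterate(sites, ranges):
    """Locate a point from three known sites and measured ranges.

    Subtracting the circle equation of site 1 from those of sites 2 and 3
    gives two linear equations in (x, y), solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = sites
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    b2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a11 * a22 - a12 * a21          # zero if the sites are collinear
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)

# Example: three sites at (0,0), (4,0), (0,4); a mobile at (1,2)
# produces ranges (5**0.5, 13**0.5, 5**0.5) and is recovered.
```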
2 Estimation Procedure

Figure 1 shows how our scheme divides a cell into many blocks based on the signal strength and then estimates stepwise, using MCDM, the optimal block in which the mobile is located.

Fig. 1. Sector, Zone and Block
The location of a mobile within a cell can be defined by dividing each cell into sectors, zones and blocks and relating these to the signal level received by the mobile at each point. This is done automatically in three phases: sector definition, zone definition and block definition. The location definition block is then constructed from these results. These phases are performed at system initialization, before the location estimation is executed. The sector definition phase divides a cell into sectors and assigns a sector number to the blocks belonging to each sector. The zone definition phase divides each sector into zones and assigns a zone number to the blocks belonging to each zone. The block definition phase assigns a block number to each block. In order to indicate the location of each block within a cell, a 2-dimensional vector (d, a) is assigned to each block. After the completion of this phase each block has a set of block information.
Mobile Tracking Using Fuzzy Multi-criteria Decision Making
1053
The set of block information is called the block object. The block object contains the following information: the sector number, the zone number, the block number, the vector data (d, a), the maximum and minimum values of the average PSS for an LOS block, the compensated value for an NLOS block, and a bit indicating "node" or "edge".

    class blockObject {
    private:
        unsigned int sector_num;
        unsigned int zone_num;
        unsigned int block_num;
        double d, a;                  // vector data (d, a)
        int los_sig_min, los_sig_max; // min/max of the average PSS for an LOS block
        int nlos_sig;                 // compensated value for an NLOS block
        bool node_edge;               // bit indicating "node" or "edge"
    public:
        // ...
    };
3 Mobile Tracking Based on MCDM
In our study, the received signal strength, the distance between the mobile and the base station, the previous location, and the moving direction are considered as decision parameters. The received signal strength has been used in many schemes, but it has very irregular profiles due to the effects of the radio environment. The distance is considered because it can explain the block allocation plan; however, it may also be inaccurate due to the effect of multi-path fading, etc., and is not sufficient by itself. We consider the previous location: it is normally expected that the estimated location should be near the previous one, so if the estimated location is too far from the previous one, the estimation may be regarded as inaccurate. We also consider the moving direction. Usually the mobile is most likely to move forward, less likely to move rightward or leftward, and least likely to move backward more than one block. A low-speed mobile (a pedestrian) has a smaller moving radius and a more complex moving pattern, while a high-speed mobile (a motor vehicle) has a larger radius and a simpler pattern.

3.1 Membership Function
A membership function with a trapezoidal shape is used for determining the membership degree of the mobile, because it provides a more versatile degree between the upper and the lower limits than a membership function with a step-like shape. Let us define the membership functions for the pilot signal strengths (PSSs) from neighboring base stations. The membership function of PSS_i, μ_R(PSS_i), is given by Eq. (1), where PSS_i is the signal strength received from base station i, s_1 is the lower limit, and s_2 is the upper limit.
1054
S.C. Kim et al.
\mu_R(PSS_i) = \begin{cases} 0, & PSS_i < s_1 \\ 1 - \dfrac{PSS_i - s_1}{s_2 - s_1}, & s_1 \le PSS_i \le s_2 \\ 1, & PSS_i > s_2 \end{cases}    (1)
Now we define the membership function of the distance. The membership function of the distance, μ_R(D_i), is given by Eq. (2), where D_i is the distance between base station i and the mobile [4].

\mu_R(D_i) = \begin{cases} 1, & D_i < d_1 \\ 1 - \dfrac{D_i - d_2}{d_1 - d_2}, & d_1 \le D_i \le d_2 \\ 0, & D_i > d_2 \end{cases}    (2)
The membership function of the previous location of the mobile, μ_R(L_i), is given by Eq. (3), where L_i is its current location, E_1, …, E_4 define the previous location, and g_i is the physical difference between them [4].

\mu_R(L_i) = \begin{cases} 0, & L_i < E_1 \\ 1 - \dfrac{L_i - E_1}{g_i}, & E_1 \le L_i \le E_2 \\ 1, & E_2 \le L_i \le E_3 \\ 1 - \dfrac{L_i - E_3}{g_i}, & E_3 \le L_i \le E_4 \\ 0, & L_i > E_4 \end{cases}    (3)
The membership function of the moving direction, μ_R(C_i), is given by Eq. (4), where C_i is the moving direction of the mobile, PSS_1, …, PSS_4 are the pilot signal strengths, and o_i is the physical difference between the previous location and the current one.

\mu_R(C_i) = \begin{cases} 0, & C_i < PSS_1 \\ 1 - \dfrac{C_i - PSS_1}{o_i}, & PSS_1 \le C_i \le PSS_2 \\ 1, & PSS_2 \le C_i \le PSS_3 \\ 1 - \dfrac{C_i - PSS_3}{o_i}, & PSS_3 \le C_i \le PSS_4 \\ 0, & C_i > PSS_4 \end{cases}    (4)
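The trapezoidal shape described above can be sketched as a small helper. This is an illustrative sketch, not the paper's code: the breakpoints a–d are hypothetical stand-ins for thresholds such as s1/s2 in Eq. (1) or E1–E4 in Eq. (3), and the edges use the conventional rising/falling linear form.

```cpp
#include <cassert>
#include <cmath>

// Conventional trapezoidal membership: rises linearly on [a,b], is 1 on
// [b,c], falls linearly on [c,d], and is 0 outside [a,d]. The breakpoints
// a..d are illustrative; in the paper, thresholds such as s1/s2 or E1..E4
// play this role.
double trapezoid(double x, double a, double b, double c, double d) {
    if (x <= a || x >= d) return 0.0;
    if (x >= b && x <= c) return 1.0;
    if (x < b) return (x - a) / (b - a);  // rising edge
    return (d - x) / (d - c);             // falling edge
}
```

For example, with breakpoints (0, 10, 20, 30), an input halfway up the rising edge yields a membership degree of 0.5.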
3.2 Location Estimation
Most MCDM approaches treat the decision problem in two consecutive steps: aggregating all the judgments with respect to all the criteria per decision alternative, and ranking the alternatives according to the aggregated criterion. Our approach also uses this two-step decomposition [5, 6]. Let J_i (i ∈ {1, 2, …, n}) be a finite number of alternatives to be evaluated against a set of criteria K_j (j = 1, 2, …, m). Subjective assessments are given to determine (a) the degree to which each alternative satisfies each criterion, represented as a fuzzy
matrix referred to as the decision matrix, and (b) how important each criterion is for the problem evaluated, represented as a fuzzy vector referred to as the weighting vector. Each decision problem involves n alternatives and m linguistic attributes corresponding to m criteria. Thus, the decision data can be organized in an m × n matrix. The decision matrix for the alternatives is given by Eq. (5).
\mu = \begin{bmatrix} \mu_R(PSS_{11}) & \mu_R(D_{12}) & \mu_R(L_{13}) & \mu_R(C_{14}) \\ \mu_R(PSS_{21}) & \mu_R(D_{22}) & \mu_R(L_{23}) & \mu_R(C_{24}) \\ \mu_R(PSS_{31}) & \mu_R(D_{32}) & \mu_R(L_{33}) & \mu_R(C_{34}) \\ \vdots & \vdots & \vdots & \vdots \\ \mu_R(PSS_{n1}) & \mu_R(D_{n2}) & \mu_R(L_{n3}) & \mu_R(C_{n4}) \end{bmatrix}    (5)
The weighting vector for the evaluation criteria can be given using linguistic terminology with fuzzy set theory [5, 6]. It is a finite set of ordered symbols representing the weights of the criteria with the following linear ordering: very high ≥ high ≥ medium ≥ low ≥ very low. The weighting vector W is represented as Eq. (6).
W = (W_{PSS}, W_D, W_L, W_C)    (6)
3.2.1 Sector Estimation Based on Multi-criteria Parameters
The decision parameters considered in the sector estimation step are the signal strength, the distance and the previous location. The mobile is estimated to be located in the sector neighboring the base station whose total membership degree is the largest. The sector estimation is performed as follows.
Procedure 1. Membership degrees are obtained using the membership functions for the signal strength, the distance and the previous location.
Procedure 2. The membership degrees obtained in Procedure 1 for the base stations neighboring the present station are totalized using the fuzzy connective operator, as shown in Eq. (7):

μ_i = μ_R(PSS_i) · μ_R(D_i) · μ_R(L_i)    (7)

We obtain Eq. (8) by imposing weights on μ_i. The reason for weighting is that the parameters used may differ in their importance:

ωμ_i = μ_R(PSS_i) · W_PSS + μ_R(D_i) · W_D + μ_R(L_i) · W_L    (8)

where W_PSS is the weight for the received signal strength, W_D for the distance, and W_L for the previous location. Also W_PSS + W_D + W_L = 1, with W_PSS = 0.5, W_D = 0.3 and W_L = 0.2.
Procedure 3. Blocks with the estimated sector number are selected from all the blocks within the cell for the next estimation step. The selection is done by examining the sector number in the block object information.
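The weighted aggregation and the "largest total membership" selection of the sector step can be sketched as follows. The struct and function names are assumptions for illustration; only the weights 0.5/0.3/0.2 and the form of Eq. (8) come from the text.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-base-station membership degrees for the three sector-step criteria.
struct Memberships {
    std::vector<double> pss;   // mu_R(PSS_i)
    std::vector<double> dist;  // mu_R(D_i)
    std::vector<double> loc;   // mu_R(L_i)
};

// Combine the criteria with the weighted sum of Eq. (8) and return the
// index of the base station (i.e., the neighboring sector) whose total
// membership degree is largest. Default weights follow the paper.
std::size_t estimateSector(const Memberships& m,
                           double wPss = 0.5, double wD = 0.3, double wL = 0.2) {
    std::size_t best = 0;
    double bestScore = -1.0;
    for (std::size_t i = 0; i < m.pss.size(); ++i) {
        double wmu = m.pss[i] * wPss + m.dist[i] * wD + m.loc[i] * wL;  // Eq. (8)
        if (wmu > bestScore) { bestScore = wmu; best = i; }
    }
    return best;
}
```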
3.2.2 Track Estimation Based on Multi-criteria Parameters
The decision parameters considered in the track estimation step are the signal strength, the distance and the moving direction. From the blocks selected in the sector estimation step, this step estimates the track of blocks in one of which the mobile is located, using the following algorithm.
Procedure 1. Membership degrees are obtained using the membership functions for the signal strength, the distance and the moving direction.
Procedure 2. The membership degrees obtained in Procedure 1 are totalized using the fuzzy connective operator, as shown in Eq. (9):

μ_i = μ_R(PSS_i) · μ_R(D_i) · μ_R(C_i)    (9)

We obtain Eq. (10) by imposing weights on μ_i:

ωμ_i = μ_R(PSS_i) · W_PSS + μ_R(D_i) · W_D + μ_R(C_i) · W_C    (10)

where W_PSS is assumed to be 0.6, W_D 0.2 and W_C 0.2.
Procedure 3. Blocks which belong to the track estimated above are selected for the next step. This is done by examining the track number of the blocks selected in the sector estimation.

3.2.3 Block Estimation Based on Multi-criteria Parameters
From the blocks selected in the track estimation step, this step uses the following algorithm to estimate the block in which the mobile may be located.
Procedure 1. Membership degrees are obtained using the membership functions for the signal strength, the distance and the moving direction.
Procedure 2. The membership degrees obtained in Procedure 1 are totalized using the fuzzy connective operator, as shown in Eq. (11):

μ_i = μ_R(PSS_i) · μ_R(D_i) · μ_R(C_i)    (11)

We obtain Eq. (12) by imposing weights on μ_i:

ωμ_i = μ_R(PSS_i) · W_PSS + μ_R(D_i) · W_D + μ_R(C_i) · W_C    (12)

where W_PSS is assumed to be 0.6, W_D 0.1 and W_C 0.3.
Procedure 3. The selection is done by examining the block number of the blocks selected in the track estimation.
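The stepwise narrowing performed by Procedure 3 of each step can be sketched with two small filters over the candidate blocks. The types and function names are hypothetical; only the per-step weight sets — (0.5, 0.3, 0.2) for the sector step, (0.6, 0.2, 0.2) for the track step and (0.6, 0.1, 0.3) for the block step — come from the text.

```cpp
#include <cassert>
#include <vector>

// A candidate block, carrying the numbers from its block object.
struct Block {
    unsigned sector, track, block;
};

// Keep only the blocks belonging to the estimated sector.
std::vector<Block> filterBySector(const std::vector<Block>& bs, unsigned sec) {
    std::vector<Block> out;
    for (const Block& b : bs)
        if (b.sector == sec) out.push_back(b);
    return out;
}

// Keep only the blocks belonging to the estimated track.
std::vector<Block> filterByTrack(const std::vector<Block>& bs, unsigned trk) {
    std::vector<Block> out;
    for (const Block& b : bs)
        if (b.track == trk) out.push_back(b);
    return out;
}
```

Each step thus works on an ever smaller candidate set: the sector filter feeds the track step, and the track filter feeds the final block selection.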
4 Performance Analysis
In our simulation we assume that low-speed mobiles (pedestrians) make up 60% of the total population in the cell and high-speed mobiles (vehicles) 40%. One half of the pedestrians are assumed to be still and the other half moving. Privately owned cars make up 60% of the vehicles, taxis 10% and public transportation 30%. Vehicles move forward, turn left or right, and make U-turns. The moving velocity is assumed to have a uniform distribution. The walking speed of pedestrians is 0~5 km/h, the speed of private cars and taxis 30~100 km/h, and that of buses 10~70 km/h.
The speed is assumed to be constant during walking or driving. In order to reflect more realistic information in our simulation, it is assumed that the signal strength is sampled every 0.5 s, 0.2 s, 0.1 s, 0.1 s and 0.05 s for speeds of ≤10 km/h, ≤20 km/h, ≤50 km/h, ≤70 km/h and ≤100 km/h, respectively. If CT is too small, we cannot obtain enough samples to calculate the average signal strength. Figure 2 shows the estimation results of three schemes for the situation where the high-speed mobile moves along a straight or curved sector boundary area. In this figure the horizontal and vertical axes represent the relative location of the area observed and the path generated in this simulation. Results are shown for AP (Area Partitioning), VA (Virtual Area) [4] and MCDM, from left to right. As can be seen, AP sometimes selects faulty locations far away from the generated path; inaccurate results in the sector estimation stage propagate into the track and block estimation. VA has better accuracy on curved paths. In our understanding this may be attributed to the fact that the average of the pilot signal strengths sampled by a high-speed mobile passing through two sectors falls into the range of PSS values of the sector boundary area. The performance of MCDM is less affected during a left or right turn. A left or right turn causes abrupt signal distortion, but its effect on the estimation can be compensated for by using information on the previous location and the distance to the base station.
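The speed-dependent sampling schedule above can be written as a small lookup. The values are taken from the text; the function name is an assumption.

```cpp
#include <cassert>

// Sampling interval of the pilot signal strength as a function of mobile
// speed (values from the simulation setup): faster mobiles are sampled
// more often.
double samplingIntervalSec(double speedKmh) {
    if (speedKmh <= 10.0) return 0.5;
    if (speedKmh <= 20.0) return 0.2;
    if (speedKmh <= 50.0) return 0.1;
    if (speedKmh <= 70.0) return 0.1;
    return 0.05;  // speeds up to 100 km/h
}
```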
[Figure: three panels, each a 1000 m × 1000 m area, showing the generated path and the locations estimated by (a) AP, (b) VA and (c) MCDM]
Fig. 2. The estimation results on the move
We compare our scheme, MCDM, with VA [4], E-OTD and TDOA in Figure 3. In this figure the mobile keeps its y position at 1000 m and traverses the x axis from x = 0 m to x = 2000 m; the y axis shows how the drms varies with the x position of the mobile. Drms (distance root mean square) is the root-mean-square value of the distances from the true location of the position fixes in a collection of measurements. To obtain the estimated values for comparison, we take the average of 20 values for each mobile position. We assume an NLOS environment in which the signal level of the mobile may change abruptly due to shadowing. The results show that the performance of MCDM is least affected by abrupt changes of signal level; MCDM gives the most accurate result. This may well be attributed to the fact that it imposes less weight on the received signal strength in NLOS areas and, instead, greater weights on the other parameters: the distance between the mobile and the base station, the previous location, and the moving direction.

[Figure: drms versus the x distance covered by a mobile at y = 1000 m, for MCDM, VA, E-OTD and TDOA]
Fig. 3. Comparison of the estimation accuracy
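The drms measure used in Figure 3 can be sketched directly from its definition. The `Point` type and function name are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Drms: the root of the mean squared distance between each position fix
// and the true location (the comparison above averages 20 fixes per
// mobile position).
double drms(const std::vector<Point>& fixes, Point truth) {
    double sumSq = 0.0;
    for (const Point& p : fixes) {
        double dx = p.x - truth.x, dy = p.y - truth.y;
        sumSq += dx * dx + dy * dy;  // squared distance of this fix
    }
    return std::sqrt(sumSq / fixes.size());
}
```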
5 Conclusions
In this study, we proposed an MCDM-based mobile tracking method for estimating the mobile location more accurately by considering multiple parameters: the signal strength, the distance between the base station and the mobile, the moving direction, and the previous location. We have demonstrated that our scheme increases the estimation accuracy when the mobile moves along a boundary area. The effect of weight factor variations on the estimation performance of our scheme and the determination of the optimal weights should be the subject of a future study. Further research is also required on implementation and on applications to handoff and channel allocation strategies.
References
1. T. Nypan and O. Hallingstad, "Cellular Positioning by Database Comparison and Hidden Markov Models," PWC 2002, pp. 277-284, Oct. 2002.
2. T. S. Rappaport, J. H. Reed and B. D. Woerner, "Position Location Using Wireless Communications on Highways of the Future," IEEE Communications Magazine, pp. 33-41, Oct.
3. Y. A. Spirito, "On the Accuracy of Cellular Mobile Station Location Estimation," IEEE Trans. Veh. Technol., vol. 50, no. 3, pp. 674-685, 2001.
4. J. C. Lee and Y. S. Mun, "Mobile Location Estimation Scheme," SK Telecommunications Review, vol. 9, no. 6, pp. 968-983, Dec. 1999.
5. C. Naso and B. Turchiano, "A Fuzzy Multi-Criteria Algorithm for Dynamic Routing in FMS," IEEE ICSMC 1998, vol. 1, pp. 457-462, Oct. 1998.
6. C. H. Yeh and H. Deng, "An Algorithm for Fuzzy Multi-Criteria Decisionmaking," IEEE ICIPS 1997, pp. 1564-1568, 1997.
Pitfall in Using Average Travel Speed in Traffic Signalized Intersection Networks
Bongsoo Son¹, Jae Hwan Maeng¹, Young Jun Han¹, and Bong Gyou Lee²
¹ Dept. of Urban Planning and Eng., Yonsei Univ., Seoul, Korea {sbs, mjray, hizune}@yonsei.ac.kr
² Graduate School of Information, Yonsei Univ., Seoul, Korea [email protected]
Abstract. For the effective use and management of urban traffic control systems, it is necessary to collect and process traffic data and produce traffic information. Average travel speed is typically used to classify the traffic conditions of signalized intersection networks. The purpose of this paper is to resolve the pitfall caused by using the average travel speed estimated by the conventional technique for signalized intersection networks. To do this, the paper suggests criteria for selecting the speed data to be used in the final estimation of travel speed. The key point is to check the relevancy of the travel times of the vehicles traveled during the same evaluation time period.
1 Introduction
A number of delays, such as stopped delay, approach delay, travel-time delay and time-in-queue delay, may occur at a signalized intersection during the same time period. These delays are mainly dependent upon the coordination of the green times of traffic signals relatively closely spaced in the signalized intersection network [1]. Thus, it is important for traffic engineers to coordinate the green times in order to improve the vehicles' movement through the set of signals. This is often called "signal progression" in traffic engineering. Figure 1 shows the time-space diagram for the signal progression. In the figure the yellow intervals are not shown due to the scale. If a vehicle were to travel at the speed limit (or free-flow travel speed), it would arrive at each of the signals just as they turn green; this is indicated by the heavy dashed line. There is a window of green in the figure, called the "bandwidth", which is a measure of how large a platoon of vehicles can be passed without stopping. It is noteworthy that the efficiency of a bandwidth is defined as the ratio of the bandwidth to the cycle length, expressed as a percentage; an efficiency of 40% to 55% is considered good. In fact, it is almost impossible to coordinate all green times for all approaches in a signalized intersection network such as the one shown in Figure 1. Due to this fact, some vehicles inherently experience delays under the same traffic condition. More conventionally, average travel speed is typically used for classifying the traffic conditions of signalized intersection networks.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1059 – 1064, 2005. © Springer-Verlag Berlin Heidelberg 2005
Theoretic methods for travel
speed estimation are common in the traffic literature, but treatments of how to account for the delay caused by traffic signals are less common. The conventional method is to simply calculate the average travel speed of all vehicles traveled during the same time period [2]. However, if we estimate the average travel speed of all vehicles traveled during the same time period, it may misrepresent the traffic condition as well as the traffic information.
Fig. 1. Vehicle trajectory and “bandwidth” in a signal progression
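The bandwidth efficiency defined in the introduction is a direct ratio; it can be sketched as follows (the function names are assumptions, only the definition and the 40–55% "good" range come from the text).

```cpp
#include <cassert>

// Efficiency of a signal progression: the ratio of the green "window"
// (bandwidth) to the cycle length, expressed as a percentage.
double bandwidthEfficiency(double bandwidthSec, double cycleLengthSec) {
    return 100.0 * bandwidthSec / cycleLengthSec;
}

// Per the text, an efficiency of 40% to 55% is considered good.
bool isGoodProgression(double bandwidthSec, double cycleLengthSec) {
    double e = bandwidthEfficiency(bandwidthSec, cycleLengthSec);
    return e >= 40.0 && e <= 55.0;
}
```

For example, a 30-second green window in a 60-second cycle gives an efficiency of 50%, within the good range.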
This paper is an attempt to resolve the pitfall caused by the use of average travel speed in urban signalized intersection networks. To do so, it employs the vehicles' trajectories in the time-space diagram. Son et al. [3] first employed such trajectories for predicting bus arrival times in signalized intersection networks. Most recently, Kim et al. [4] proposed a heuristic algorithm, developed from the trajectories, for estimating travel speed in signalized intersection networks. However, that algorithm is limited in avoiding the pitfall caused by using the average travel speed during non-congested time periods.
2 Vehicles' Trajectories
Figure 2 illustrates the trajectories that five vehicles take as time passes in a signalized intersection network. In the figure, trajectory type I is associated with vehicles that arrived at traffic signal i during the red time period. Trajectory type II represents vehicles that arrived at the signal during the period between the end of the red time and the beginning of the green time and experienced delay in passing the traffic signal.
Trajectory type III is related to vehicles that passed the traffic signal without any delay. The three types of trajectories indicate that the travel speeds in a signalized intersection network are widely different and greatly dependent upon whether or not vehicles wait at traffic signals. In other words, the travel speed in a signalized intersection network varies significantly depending upon the state of the signals (i.e., green time or red time) as well as the coordination of the green times of the traffic signals [4].
Fig. 2. Time-space diagram for vehicles’ trajectories in traffic signalized network
As mentioned above, the conventional method estimates the average travel speed of all vehicles passing a roadway section in a signalized intersection network over some specified time period (e.g., every 5 to 15 minutes), where the speed is the inverse of the time taken by the vehicles to traverse the roadway distance. However, as can be seen from Figure 2, the waiting times at signalized intersections cause major differences in the estimation of each vehicle's travel speed. The conventional method is therefore definitely limited in reflecting the variation of the vehicles' trajectories in the estimation of travel speed.
3 Pitfalls of Average Travel Speed
Field speed data were collected by the floating car method for both congested and non-congested time periods. A total of ten passenger cars were assigned to depart from upstream of "Intersection 1" to "Intersection 5" at 1-minute intervals. The study site, consisting of five signalized intersections, is a 2.5 km stretch of a major arterial linking the western region and the old downtown area of Seoul. It consists of 6- to 8-lane sections, and its speed limit is 60 km/h.
Table 1 shows a set of sample speed data calculated from each vehicle's travel time measured in the field. In the table, the speed data are grouped into 5-minute evaluation time periods.

Table 1. Travel speed data (km/h) of the ten floating vehicles

Non-congested period    Congested period
{45, 43}                {16, 13, 12, 14, 12}
{22, 25, 29, 35}        {13, 10, 11}
{29, 35, 43, 46}        {8, 11}

The speed values measured under the non-congested traffic condition fluctuated from 22 km/h to 46 km/h. With the exception of the first 5-minute evaluation time period, the fluctuation was severe. These results are not surprising, since the speed variation is caused by both the state of the traffic signals and the coordination of their green times. (To better follow the discussion, refer to Figure 2.) For the first time period, the average of the speed data is reasonable for representing reality, but the averages of the other two time periods are not, since the vehicles' speeds vary even under the same traffic condition. It is therefore no wonder that the travel speed of each vehicle shows a considerable discrepancy from the average speed estimated for the same evaluation time period. If we estimate the average travel speed of all vehicles traveled during the same evaluation time period in the manner of the conventional method, it may misrepresent the traffic condition as well as the traffic information. More specifically, for example, there is enough unused green time during a given green phase under non-congested traffic conditions. If some vehicles arrive just after the end of the green time, they must wait until the signal turns green again. Thus, the average travel speed results in a lower value due to the red times included during the same time period. Consequently, the average travel speed may not be appropriate for representing non-congested traffic conditions.
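The conventional averaging, and the spread that exposes the pitfall, can be sketched as follows (the function names are assumptions; the sample data in the usage below are taken from Table 1).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// The conventional method: simply average the speeds of all vehicles that
// traveled during the same evaluation period.
double meanSpeed(const std::vector<double>& speedsKmh) {
    return std::accumulate(speedsKmh.begin(), speedsKmh.end(), 0.0)
           / speedsKmh.size();
}

// The spread (max - min) of the speeds in the period: large under
// non-congested conditions, so the average misleads there.
double speedSpread(const std::vector<double>& speedsKmh) {
    auto mm = std::minmax_element(speedsKmh.begin(), speedsKmh.end());
    return *mm.second - *mm.first;
}
```

For the second non-congested period of Table 1, {22, 25, 29, 35}, the mean is 27.75 km/h with a spread of 13 km/h, while the congested period {13, 10, 11} has a spread of only 3 km/h.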
For this case, a somewhat higher speed value among all the speeds measured during the same time period is more appropriate for representing the real traffic condition than the average value. Most of the vehicles traveling in a congested signalized intersection network experience all of stopped delay, approach delay, travel-time delay and time-in-queue delay at the signalized intersections. This case will not be misinterpreted if we use the average travel speed estimated by the conventional method, since most of the speeds collected during the same time period remain at a similar level. Therefore, the data associated with the congested traffic condition do not fluctuate severely, and the average speed of all vehicles traveled during the same evaluation time period seems reasonable for representing reality.
4 Method for Selecting Reasonable Speed Values
The most significant shortcoming of the conventional technique is that the delays caused by traffic signals are overlooked in estimating the average travel speed. For the congested traffic condition, we can simply take the average of all vehicles' speeds as the reasonable speed, since each vehicle's speed does not differ much from the average travel speed of all the vehicles. However, a problem occurs when the traffic condition is not congested and the traffic signals have not coordinated all green times well for the vehicles. The main task of this paper is to figure out a reasonable speed value for representing the real traffic condition as well as the traffic information disseminated to travelers. To do this, the paper carefully reviews two cases of traffic situations that illustrate the pitfall.
Case 1. The traffic signals coordinated all green times well, so the vehicles moved efficiently through all the traffic signals without any unnecessary stopping and delay. In this case, the difference in travel times under the non-congested traffic condition is mainly caused by stopped delay at the signalized intersections rather than by approach delay, travel-time delay or time-in-queue delay. All of the speed values collected in this case during the same time period would be less than the speed limit, and the speed values might fluctuate severely, since at most 40% to 55% of all green times could be coordinated. Consequently, all travel times of the vehicles would be larger than the free-flow travel time (i.e., the travel distance divided by the speed limit), but smaller than the sum of the free-flow travel time and the total red times of all the traffic signals passed. Theoretically, the reasonable travel time in this case seems to be close to the sum of the free-flow travel time and half of the total red times of all the traffic signals passed.
Therefore, in order to define a reasonable speed value, we first have to check whether the speed values collected during the same time period meet the above-mentioned travel time range, and then select relatively higher speed values among them, since lower speed values are caused by the signal progression, not by the traffic condition.
Case 2. The traffic signals have not coordinated all green times well, so the vehicles could not move efficiently through all the traffic signals even though the traffic condition was not congested. In this case, the vehicles could not maintain higher speeds, so their speed values collected during the same time period would not fluctuate severely. However, this case makes things complex, since some of the speed values would be very low even though the traffic condition is not congested. Such a case will be misinterpreted if we use the average travel speed estimated by the conventional method. In order to solve this problem, we have to check whether all travel times of the vehicles traveled during the same time period are reasonable. For this case, the reasonable travel time should be somewhat less than the sum of the free-flow travel time and the total red times of all the traffic signals passed, since the efficiency of the bandwidth would be much less than 40%. Therefore, it seems reasonable to select the highest value among the speed values which are relevant to the above-mentioned travel
time range. Otherwise, we may misinterpret this non-congested traffic condition as congested by using the lower speed values.
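The selection criterion of Cases 1 and 2 can be sketched as follows. This is an illustrative sketch: a measured speed is treated as plausible when the travel time it implies lies between the free-flow travel time and the free-flow time plus the total red time of the signals passed, and the highest plausible speed is taken as representative. All concrete values passed in a call are assumptions of the caller, not field data.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Select a representative speed for a non-congested period: keep only the
// speeds whose implied travel time falls in [tFree, tFree + totalRedTimeH],
// then return the highest of them.
double selectRepresentativeSpeed(const std::vector<double>& speedsKmh,
                                 double distanceKm,
                                 double speedLimitKmh,
                                 double totalRedTimeH) {
    const double tFree = distanceKm / speedLimitKmh;  // free-flow travel time (h)
    const double tMax  = tFree + totalRedTimeH;       // worst case: every red hit
    double best = 0.0;
    for (double v : speedsKmh) {
        double t = distanceKm / v;                    // implied travel time (h)
        if (t >= tFree && t <= tMax) best = std::max(best, v);
    }
    return best;  // 0.0 signals that no speed fell in the plausible range
}
```

For the 2.5 km study site with a 60 km/h limit and a hypothetical 150 s of total red time, only the 35 km/h observation of the second non-congested period of Table 1 falls inside the plausible travel time range, so 35 km/h would be selected rather than the 27.75 km/h average.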
5 Conclusions
From careful investigation of the field speed data, it was confirmed that the travel speed values measured under the non-congested traffic condition fluctuated severely, and that the speed variation is caused by both the state of the signals and the coordination of their green times. Thus, it is not reasonable to estimate the travel speed for representing reality simply by using the average speed of all vehicles traveled during the same evaluation time period. This paper presents two cases that illustrate the pitfall and suggests a method for selecting a reasonable speed value among all speed data collected during the same time period. The key point is to check the relevancy of the travel times of the vehicles traveled during the same time period. To do this, this paper has suggested the basis of criteria for selecting the speed data to be used for the final estimation of travel speed. The criteria seem to be theoretically sound and promising.
References
1. W. R. McShane and R. P. Roess, Traffic Engineering, Prentice-Hall, Inc., 1990.
2. B. Son and S. Lee, "A Development of Evaluation Method for Road Performance," The 4th Conference of Eastern Asia Society for Transportation Studies, 24-27 October, Hanoi, Vietnam, 2001.
3. B. Son, H. Kim, C. Shin, and S. Lee, "Bus Arrival Time Prediction Method for ITS Application," KES 2004, LNAI Vol. 3215, pp. 88-94, 2004.
4. H. Kim, B. Son, S. Lee and S. Oh, "Heuristic Algorithm for Estimating Travel Speed in Traffic Signalized Networks," LNCS Vol. 3415, 2005.
Static Registration Grouping Scheme to Reduce HLR Traffic Cost in Mobile Networks
Dong Chun Lee
Dept. of Computer Science, Howon Univ., 727, Wolha-Ri, Impi, KunSan, ChonBuk, South Korea
[email protected]
Abstract. This paper proposes a static registration grouping scheme that solves the Home Location Register (HLR) bottleneck caused by terminals' frequent registration area (RA) crossings and distributes the registration traffic across the local signaling transfer point (LSTP) areas. The RAs in an LSTP area are grouped statically, in order to remove the signaling overhead and to mitigate the regional STP (RSTP) bottleneck.
1 Introduction
Universal Mobile Telecommunication System (UMTS) and cdma2000 are the two major standards for third generation (3G) mobile telecommunication, or IMT-2000 [13, 21]. Many operators are committed to deploying UMTS and/or cdma2000-based 3G networks. Evolving from the existing 2G networks, construction of effective 3G networks is critical for provisioning future mobile services. The standard commonly used in North America is the Electronics Industry Association/Telecommunications Industry Association (EIA/TIA) Interim Standard 95 (IS-95), and in Europe the GSM [2]. IS-95 and GSM have a structural drawback: as the number of users increases, the HLR becomes the bottleneck. A number of works have been reported to reduce the HLR bottleneck problem. In [8], a location forwarding strategy is proposed to reduce the signaling costs for location registration. A local anchoring scheme is introduced in [5]. Under these schemes, signaling traffic due to location registration is reduced by eliminating the need to report location changes to the HLR. Location update and paging subject to delay constraints are considered in [9]. When an incoming call arrives, the residing area of the terminal is partitioned into a number of sub-areas, and then these sub-areas are polled sequentially. By increasing the delay time needed to connect a call, the cost of location update is reduced. A hierarchical database system architecture is introduced in [1]. A queuing model of a three-level hierarchical database system is illustrated in [6], [10]. These schemes can reduce the signaling traffic due to both location registration and call delivery using the properties of call locality and local mobility. The above schemes are proposed to reduce the costs of location registration or call tracking.
X. Jia, J. Wu, and Y. He (Eds.): MSN 2005, LNCS 3794, pp. 1065 – 1072, 2005. © Springer-Verlag Berlin Heidelberg 2005
The general caching scheme is effective when the call requests to the callee
from one RA are very frequent. This implies that its effectiveness is good enough only when the degree of call locality is extremely high. However, this is not realistic in wireless environments. Also, there exists a consistency problem between the cached information and the entry in the callee's VLR, and the mobility patterns are limited. In terms of registration, it is the same as the IS-95 scheme. As for the LA scheme, it is cost effective in reducing the HLR access traffic; however, there is a trade-off between the registration cost and the call tracking cost.
2 Proposed Scheme I define post VLR, PVLR which keeps the callee's current location information as long as the callee moves within its LSTP area. If a terminal crosses the LSTP area, the VLR which serves the new RA is set to a new PVLR. If the terminal moves within the LSTP area, it is registered at its own PVLR not HLR. If the terminal moves out from the area, it is registered at the HLR. In case that a terminal is switched on, the VLR which serves the terminal's current RA is PVLR and the VLRs which serve the intermediate RAs in terminal's moving route within the LSTP area report the terminal's location information to the PVLR. We note that we don't have to consider where the callee is currently. It is because the PVLR keeps the callee's current location as long as the callee moves within its LSTP area. Therefore, without the terminal movements into a new LSTP area, the registration at HLR does not occur. I statically group the VLRs in LSTP area in order to localize the HLR traffic. It is also possible to group the RAs dynamically regardless of the LSTP area. Suppose that the PVLR and the VLR which serves the callee's RA belong to the same dynamic group but are connected to the physically different LSTPs. In this case, I should tolerate the additional signaling traffic even though the caller and callee belong to the same dynamic group. A lot of signaling messages for registering user locations and tracking calls is transmitted via RSTP instead of LSTP. If the cost of transmitting the signaling messages via RSTP is large enough compared to that via LSTP, dynamic grouping method may be degrade the performance although it solves the Ping-Pong effect. Furthermore, it is critical in case that the RSTP is bottlenecked. A location registration in HLR occurs only when a terminal moves out of its LSTP area. The RA crossings within LSTP area generate the registrations in PVLR. The followings describe the registration steps. (1) (2)
(1) The terminal which has moved to a new RA requests a registration at the VLR which serves the new RA.
(2) The new VLR inquires of the old VLR the id of the current PVLR, and the old VLR replies to the new VLR with an ACK message in which the id is piggybacked. If the terminal sends the new VLR a message containing the id of its PVLR, these query messages can be omitted, and the new VLR only sends a registration message to the PVLR. If the id of its PVLR does not exist in the entry, the new VLR regards the terminal as having moved out of its LSTP area, and the new VLR becomes the new PVLR of the terminal. Therefore, the new VLR should determine whether the old RA belongs to its LSTP area or not.
Static Registration Grouping Scheme to Reduce HLR Traffic Cost
If it belongs: the new VLR sends the location information of the terminal to the PVLR, and then a registration cancellation message is sent to the old VLR by the new VLR or the PVLR.
If it does not belong: after the new VLR transmits the location information of the terminal to the HLR, the HLR transmits a registration cancellation message to the old PVLR, and the old PVLR subsequently sends a cancellation message to the old VLR. In this case, the new VLR plays the role of the new PVLR.
Fig. 1 shows the message flow of the location registration according to whether the PVLR changes.
[Figure: registration message flows among the old serving system (VLR), the HLR, the PVLR, and the new serving system (VLR, MSC), using the REGNOT, REQPVLR, REGCANC, QUALREQ, and PROFREQ messages. (a) Before the PVLR is changed. (b) After the PVLR is changed.]

Fig. 1. Location registration in the proposed scheme
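The registration decision made by the new VLR can be sketched as follows. This is a minimal sketch, not the paper's implementation: the class and method names are mine, the VLR-to-LSTP mapping is abstracted into a lookup table, and one VLR is assumed to serve one RA as in the paper's model.

```python
# Sketch of the proposed registration logic (hypothetical names).
# Registrations go to the PVLR while the terminal stays inside one LSTP
# area; the HLR is updated only when the terminal crosses LSTP areas.

class Network:
    def __init__(self, lstp_of_vlr):
        self.lstp_of_vlr = lstp_of_vlr   # VLR id -> LSTP area id
        self.pvlr_of = {}                # terminal id -> current PVLR (a VLR id)
        self.hlr = {}                    # terminal id -> PVLR recorded at the HLR
        self.location = {}               # PVLR's view: terminal id -> serving VLR
        self.hlr_updates = 0             # counts costly HLR registrations

    def register(self, terminal, new_vlr):
        pvlr = self.pvlr_of.get(terminal)
        if pvlr is not None and self.lstp_of_vlr[pvlr] == self.lstp_of_vlr[new_vlr]:
            # Same LSTP area: register at the PVLR only; the HLR is untouched.
            self.location[terminal] = new_vlr
        else:
            # LSTP crossing (or power-on): the new VLR becomes the new PVLR,
            # the HLR is updated, and cancellation messages would follow.
            self.pvlr_of[terminal] = new_vlr
            self.location[terminal] = new_vlr
            self.hlr[terminal] = new_vlr
            self.hlr_updates += 1

net = Network({"v1": "A", "v2": "A", "v3": "B"})
net.register("t", "v1")   # power-on: HLR update, v1 becomes the PVLR
net.register("t", "v2")   # move within LSTP area A: PVLR update only
net.register("t", "v3")   # cross into LSTP area B: HLR update, v3 is the new PVLR
```

In this toy run only the power-on and the crossing into area B touch the HLR, which is the point of the scheme: intra-LSTP movements are absorbed by the PVLR.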
If the caching scheme is applied to call tracking, the most important thing is how to keep the ICR low enough that call tracking using the cache is cost effective compared to not using it. If the proposed scheme is used with the caching scheme, consistency can be maintained as long as the terminal crosses RAs within the LSTP area. As for the hierarchical level at which the cache is located, the MSC level is not desirable considering general call patterns; it is effective only for users whose working and residential areas are regionally very limited, e.g., one or two RAs.
3 Performance Analysis

For the numerical analysis, the terminal-moving probability should be computed. I adopt the hexagon model as the geometrical RA model, since it is commonly used for modeling RAs. Generally, it is assumed that one VLR serves one RA.
D.C. Lee
3.1 RA Model

As shown in Fig. 2, the RAs in an LSTP area can be grouped. There are 1, 7, and 19 RAs in the circle 0, circle 1, and circle 2 areas, respectively. Terminals in the RAs strictly inside the circle n area still remain in the circle n area after their first RA crossing; that is, terminals inside the circle cannot leave their LSTP area by crossing an RA boundary once, while terminals in the RAs that meet the line of the circle in the figure can move out of their LSTP area. The number of moving-out terminals can be calculated simply: intuitively, the terminals in the arrow-marked areas of Fig. 2 move out. Using the number of outside edges of the arrow-marked polygons, the number of terminals which move out of the LSTP area is computed as follows.

(Total no. of outside edges in arrow-marked polygons / (No. of edges of a hexagon × No. of RAs in LSTP area)) × No. of terminals in LSTP area    (1)
Fig. 2. The hexagonal RA model
For example, an arrow-marked polygon in the hexagon model has 2 or 3 outside edges, so 2/6 or 3/6 of the terminals in the corresponding RA which meets the line of the circle move out of the LSTP area. In the case of circle 2 in Fig. 2, the number of RAs is 19, and the number of terminals which move out of the LSTP area is given by (no. of terminals in LSTP area) × (5/19). The number of VLRs in an LSTP area represented as circle n can be generalized as follows:

No. of VLRs in LSTP area = 1 + 3n(n + 1)  (where n = 1, 2, …)    (2)

The rate of terminals which move out of the LSTP area can be generalized as follows:

Rmove_out, No. of VLRs in LSTP area = (2n + 1) / (1 + 3n(n + 1))  (where n = 1, 2, …)    (3)
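Equations (2) and (3) can be cross-checked against a direct edge count in the sense of Eq. (1): the boundary of circle n consists of 6 corner RAs exposing 3 outside edges each and 6(n − 1) side RAs exposing 2 each. A small sketch (function and variable names are mine, not from the paper):

```python
from fractions import Fraction

def vlrs_in_lstp_area(n):
    # Eq. (2): number of RAs/VLRs inside circle n of the hexagonal model.
    return 1 + 3 * n * (n + 1)

def move_out_rate(n):
    # Eq. (3): fraction of terminals leaving the LSTP area in one RA crossing.
    return Fraction(2 * n + 1, vlrs_in_lstp_area(n))

def move_out_rate_by_edges(n):
    # Eq. (1): count the outside edges of the boundary RAs directly.
    # 6 corner RAs expose 3 outside edges each; 6*(n-1) side RAs expose 2 each.
    outside_edges = 6 * 3 + 6 * (n - 1) * 2
    return Fraction(outside_edges, 6 * vlrs_in_lstp_area(n))

for n in (1, 2, 3):
    assert move_out_rate(n) == move_out_rate_by_edges(n)

print(vlrs_in_lstp_area(2), move_out_rate(2))   # circle 2: 19 RAs, rate 5/19
```

For circle 2 the direct count gives 6·3 + 6·2 = 30 outside edges, and 30 / (6 × 19) = 5/19, matching the worked example in the text.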
In the RA model, the rate of terminal movements within the LSTP area, Rmove_in, No. of VLRs in LSTP area, is (1 − Rmove_out, No. of VLRs in LSTP area). Two RAs are said to be locally related when they belong to the same LSTP area, and remotely related when they belong to different LSTP areas. The terminal's RA crossings
should be classified according to the local and remote relations in the following schemes.

1) Proposed Scheme
o Relation of the PVLR-serving RA and the callee's last visiting RA
o Relation of the callee's last visiting RA and the callee's current RA

2) LA Scheme
o Relation of the LA-serving RA and the callee's last visiting RA
o Relation of the callee's last visiting RA and the callee's current RA

I define the probabilities that a terminal moves within the LSTP area and crosses the LSTP area as P(local) and P(remote), respectively. The above relations are determined according to the terminal's moving patterns. P(local) and P(remote) can be written as follows:

P(local) = Rmove_in, No. of VLRs in LSTP area,  P(remote) = Rmove_out, No. of VLRs in LSTP area    (4)
P(local) and P(remote) vary according to the number of VLRs in the LSTP area. Table 1 shows these probabilities for a single RA crossing when the number of VLRs in the LSTP area is 7, 9, 19, and 25. Suppose that the number of RAs in the LSTP area is 7. If a terminal moves into a new LSTP area in its nth movement, the terminal is located in one of the outside RAs (6 RAs) of the new LSTP area. If the terminal then moves into a new RA in its (n + 1)th movement, Rmove_in and Rmove_out are both 3/6. If the terminal's (n + 1)th movement occurs within the LSTP area, the two probabilities for its (n + 2)th movement are 4/7 and 3/7, respectively; otherwise, the terminal moves into a new LSTP area again and the two probabilities are both 3/6.

Table 1. P(local) & P(remote) according to the number of VLRs in LSTP area

Number of VLRs in group (LSTP area)    7      9      19      25
P(local)                               4/7    6/9    14/19   20/25
P(remote)                              3/7    3/9    5/19    5/25
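For the hexagonal groupings, the table columns follow directly from Eq. (3): P(remote) = (2n + 1)/(1 + 3n(n + 1)) and P(local) = 1 − P(remote), so the 19-VLR column's P(remote) must equal 1 − 14/19 = 5/19. A quick check for the 7- and 19-VLR columns (the 9- and 25-VLR columns come from a different, non-hexagonal grouping and are not covered by this formula; function names are mine):

```python
from fractions import Fraction

def p_remote(n):
    # Eq. (3) for a hexagonal LSTP area of 1 + 3n(n+1) VLRs (n = 1 -> 7, n = 2 -> 19).
    return Fraction(2 * n + 1, 1 + 3 * n * (n + 1))

def p_local(n):
    # Eq. (4) with R_move_in = 1 - R_move_out.
    return 1 - p_remote(n)

print(p_local(1), p_remote(1))   # 7-VLR column: 4/7 and 3/7
print(p_local(2), p_remote(2))   # 19-VLR column: 14/19 and 5/19
```

The two probabilities sum to 1 for every n, as they must for a single RA crossing.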
Once a terminal moves into a new LSTP area, the pairs of two probabilities due to the next movement of the terminal according to the number of RAs in LSTP area are (3/6,3/6), (5/8,3/8), (7/12,5/12), and (11/16, 5/16), respectively. Therefore, I should classify the terminal moving patterns and compute the probabilities to evaluate the traffic costs correctly. To evaluate the performance, I define the Signaling Costs (SCs) as follows.
SC1: Cost of transmitting a message from one VLR to another VLR through the HLR
SC2: Cost of transmitting a message from one VLR to another VLR through the RSTP
SC3: Cost of transmitting a message from one VLR to another VLR through the LSTP
I evaluate the performance according to the relative values of SC1, SC2, and SC3, which are needed for location registration. Intuitively, we can assume SC3 < SC2 < SC1 or SC3 ≪ SC2