A survey on communication networks for electric system automation


Computer Networks 50 (2006) 877–897 www.elsevier.com/locate/comnet

A survey on communication networks for electric system automation

V.C. Gungor a,*, F.C. Lambert b

a Broadband and Wireless Networking Laboratory, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, United States
b National Electric Energy Testing, Research and Applications Center, Georgia Institute of Technology, Atlanta, GA 30332, United States

Received 18 January 2006; accepted 26 January 2006; available online 21 February 2006

Responsible Editor: I.F. Akyildiz

Abstract

In today's competitive electric utility marketplace, reliable and real-time information has become the key factor for reliable delivery of power to the end-users, profitability of the electric utility, and customer satisfaction. The operational and commercial demands of electric utilities require a high-performance data communication network that supports both existing functionalities and future operational requirements. Since such a communication network constitutes the core of electric system automation applications, the design of a cost-effective and reliable network architecture is crucial. In this paper, the opportunities and challenges of a hybrid network architecture are discussed for electric system automation. More specifically, Internet based Virtual Private Networks, power line communications, satellite communications and wireless communications (wireless sensor networks, WiMAX and wireless mesh networks) are described in detail. The motivation of this paper is to provide a better understanding of the hybrid network architecture that can serve heterogeneous electric system automation application requirements. In this regard, our aim is to present a structured framework for electric utilities who plan to utilize new communication technologies for automation and hence, to make the decision-making process more effective and direct.

© 2006 Elsevier B.V. All rights reserved.

Keywords: Electric system automation; Internet based Virtual Private Network; Power line communication; Satellite communication; Wireless sensor networks; Wireless mesh networks; WiMAX

1. Introduction

* Corresponding author. Tel.: +1 404 894 5141; fax: +1 404 894 7883. E-mail addresses: [email protected] (V.C. Gungor), [email protected] (F.C. Lambert).

Electric utilities, particularly in urban areas, continuously encounter the challenge of providing reliable power to end-users at competitive prices. Equipment failures, lightning strikes, accidents, and natural catastrophes all cause power disturbances

1389-1286/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2006.01.005


and outages and often result in long service interruptions. Electric system automation, which is the creation of a reliable and self-healing electric system that rapidly responds to real-time events with appropriate actions, aims to maintain uninterrupted power service [6]. The operational and commercial demands of electric utilities require a high-performance data communication network that supports both existing functionalities and future operational requirements. Therefore, the design of the network architecture is crucial to the performance of the system. Recent developments in communication technologies have enabled reliable remote control systems, which have the capability of monitoring the real-time operating conditions and performance of electric systems. These communication technologies can be classified into four classes: Power Line Communication, Satellite Communication, Wireless Communication, and Optical Fiber Communication. Each communication technology has its own advantages and disadvantages that must be evaluated to determine the best communication technology for electric system automation. In order to avoid possible disruptions in electric systems due to unexpected failures, a highly reliable, scalable, secure, robust and cost-effective communication network between substations and a remote control center is vital [11,14]. This high performance communication network should also guarantee very strict Quality of Service (QoS) requirements to prevent possible power disturbances and outages [10]. When the communication requirements of electric system automation are considered, the Internet can offer an alternative communication network to remotely control and monitor substations in a cost-effective manner with its already existing communication infrastructure. However, the Internet cannot guarantee the very strict QoS requirements that automation applications demand, since data communication in the Internet is based on a best-effort service paradigm [32].
Furthermore, when a public network like the Internet is utilized to connect the substations to a remote control center, security concerns arise. In this context, Internet based Virtual Private Network (Internet VPN) technologies, which are transforming the Internet into a secure high speed communication network, constitute the cornerstone for providing strict QoS guarantees of electric system automation applications [7]. Internet VPN technologies offer a shared communication network backbone in which the cost of the network

is spread over a large number of users while simultaneously providing the benefits of a dedicated private network. Therefore, Internet VPN technology as a high speed communication core network can be utilized to enable minimum cost and highly reliable information sharing for automation applications. Although Internet VPN technologies can provide the necessary reliable communication for substations in urban areas, this may not be the case for substations in remote rural locations, where a high speed communication core network, e.g., the Internet, might not exist. Therefore, when the individual communication capabilities and locations of electric systems are taken into account, it is appropriate to consider the overall communication infrastructure as a hybrid network, as shown in Fig. 1. This hybrid network consists of two separate parts:

• High speed communication core network: It can be either a private network or a public network. Due to several technical advantages [32], an Internet based Virtual Private Network can be considered as a cost-effective high speed communication core network for electric system automation.

• Last mile connectivity: It represents the challenge of connecting the substations to the high speed communication core network. The communication technologies for last mile connectivity can be classified as: (i) Power line communication, (ii) Satellite communication, (iii) Optical fiber communication, and (iv) Wireless communication. Each communication alternative for last mile connectivity introduces its own advantages and disadvantages.

Many researchers and several international organizations are currently developing the required communication technologies and international communication standards for electric system automation. In Fig. 2, a summary of these communication system development activities is presented [14].
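The two-part classification above lends itself to a first-pass screening table. The following sketch is purely illustrative and not from the paper: it tags each last-mile option with the qualitative properties the survey attributes to it, and the field names and filter function are hypothetical.

```python
# Illustrative screening table (hypothetical names and fields): the last-mile
# options surveyed here, tagged with the concerns the text raises for each.
LAST_MILE_OPTIONS = {
    "power_line": {
        "coverage": "extensive (existing power lines)",
        "concerns": ["noise", "open circuits", "attenuation", "regulation"],
    },
    "satellite": {
        "coverage": "global (incl. remote rural areas)",
        "concerns": ["long GEO delay", "weather-dependent channel", "cost"],
    },
    "optical_fiber": {
        "coverage": "where fiber is installed",
        "concerns": ["high installation cost"],
    },
    "wireless": {
        "coverage": "local/regional",
        "concerns": ["EMI", "bandwidth limits", "eavesdropping"],
    },
}

def options_without(concern: str) -> list:
    """Return the options whose listed concerns do not include `concern`."""
    return [name for name, props in LAST_MILE_OPTIONS.items()
            if concern not in props["concerns"]]

# Example: screen out options where channel noise is a listed concern.
print(options_without("noise"))
```

A utility would of course replace such coarse tags with measured figures from field trials; the point is only that the survey's qualitative comparison can be organized as data before a detailed evaluation.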
Despite the considerable amount of ongoing research, there still remain significant challenges for the research community in addressing both the benefits and shortcomings of each communication technology. Since a cost-effective data communication network constitutes the core of automation applications, in this paper the opportunities and challenges of a hybrid network architecture are described for automation applications. More specifically, Internet based Virtual Private Networks, power line communications, satellite communications and wireless communications (wireless sensor networks, WiMAX, and wireless mesh networks) are discussed in detail. The motivation of this paper is to provide a better understanding of the hybrid network architecture that can serve heterogeneous electric system automation application requirements. In this respect, our aim is to present a structured framework for electric utilities who plan to utilize new communication technologies for automation and hence, to make the decision-making process more effective and direct.

Fig. 1. The overall communication network architecture for electric system automation.

Fig. 2. Summary of communication system development activities for electric utilities.

The remainder of the paper is organized as follows. In Section 2, the benefits and open research challenges of Internet based Virtual Private Networks are discussed for electric system automation. In Section 3, both advantages and disadvantages of alternative communication technologies are described for last mile connectivity. In Sections 4 and 5, the opportunities and challenges of wireless sensor networks, wireless mesh networks and WiMAX are explained, respectively. Finally, the paper is concluded in Section 6.

2. Internet based Virtual Private Networks

Recent advances in Internet technology and Internet-ready IEDs (Intelligent Electronic Devices) have enabled cost-effective remote control systems, which make it feasible to support multiple automation application services, e.g., remote access to IED/relay configuration ports, diagnostic event information, video for security or equipment status assessment in substations, and automatic metering. While traditional private Supervisory Control and Data Acquisition (SCADA) systems constitute the core communication network of today's electric utility systems, Internet based Virtual Private Network (Internet VPN) technology provides an alternative cost-effective high speed communication core network for remote monitoring and control of the electric system. Specifically, Internet VPN technology is a shared communication network architecture, in which the cost of the network is spread over a large number of users while simultaneously providing both the functionalities and the benefits of a dedicated private network. Therefore, the main objective of an Internet VPN for electric system automation is to provide the required cost-effective high performance communication between IEDs and a remote control center over a shared network infrastructure, with the same policies and service guarantees that the electric utility experiences within its dedicated private communication network. In order to achieve this

objective, the Internet VPN solution should provide the following essential performance attributes:

• Quality of Service (QoS): Internet technology itself cannot guarantee the very strict QoS requirements that utility applications demand, since data communication in the Internet is mainly based on a best-effort service paradigm. In this respect, the QoS capabilities of Internet VPN technologies ensure the prioritization of mission critical or delay sensitive traffic and manage network congestion under varying traffic conditions over the shared network infrastructure.

• Reliability: The communication network should be able to operate continuously over an extended period of time, even in the presence of network element failures or network congestion. To achieve this, the communication network should be properly designed with the objective of no losses under all working conditions, and with the ability to handle failures gracefully. Service providers support Service Level Agreements (SLAs), which define the specific terms and performance metrics regarding availability of network resources, and offer the Internet VPN subscriber a contractual guarantee for network services and network uptime. Therefore, Internet VPN technology should deliver data in a reliable and timely manner for automation applications.

• Scalability: Since the number of substations and remote devices is large and growing rapidly, the communication system must be able to deal with very large network topologies without an exponential increase in the number of operations for the communication network. Thus, the designed hybrid network architecture should scale well to accommodate new communication requirements driven by customer demands.
• Robustness: In order to avoid deteriorating communication performance under changing network traffic conditions, the dimensioning process that assigns bandwidth to the virtual links of the Internet VPN should be based not only on the main bandwidth demand matrix, but also on other possible bandwidth demand matrices, so as to provide a safe margin in network dimensioning and avoid congestion [28]. In case network congestion cannot be avoided with the current traffic, low priority non-critical data traffic should be blocked so that the most critical data can be transmitted with QoS guarantees [26]. This way, additional bandwidth for high


priority data becomes available to enable the real-time communication of critical data, which is particularly important in case of alarms in electric systems.

• Security: Security is the ability to support secure communication between a remote control center and field devices, keeping the communication safe from external denial of service (DoS) attacks and intrusion. When a public network like the Internet is utilized to connect the field devices to a remote control center, security concerns can arise. Hence, an Internet VPN has to provide secure data transmission across the existing shared Internet backbone and thus protect sensitive data so that it remains confidential across the shared network.

• Network management: In order to meet the communication requirements of automation applications, electric utilities demand flexible and scalable network management capabilities. The primary network management capabilities of Internet VPN include: (i) bandwidth provisioning, (ii) installing security and QoS policies, (iii) supporting Service Level Agreements, (iv) fault identification and resolution, (v) addition and removal of network entities, (vi) change of network functions, and (vii) accounting, billing and reporting. In addition to these capabilities, Internet VPN technology can enable rapid implementation and possible modifications of the communication network at a reasonable cost. Therefore, Internet VPN technology with effective network management approaches provides a flexible, cost-effective solution that can be easily adapted to the future communication requirements that utility automation applications demand.

Despite the extensive research in Internet VPN technologies [32], there are still several open research issues, e.g., efficient resource and route management mechanisms and inter-domain network management, that need to be addressed for automation applications.
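The robustness attribute above calls for blocking low-priority flows under congestion so that headroom remains for critical automation traffic. A minimal sketch of such priority-based admission control follows; the class, capacity figures, and priority labels are hypothetical illustrations, not part of the paper:

```python
# Minimal admission-control sketch (illustrative only): non-critical flows
# may not dip into the bandwidth reserve kept free for critical automation
# data such as alarms. All capacities and priorities are hypothetical.
class PriorityAdmissionControl:
    CRITICAL, NORMAL, LOW = 0, 1, 2  # smaller number = higher priority

    def __init__(self, capacity_kbps: int, reserve_kbps: int):
        self.capacity = capacity_kbps   # total virtual-link capacity
        self.reserve = reserve_kbps     # headroom reserved for critical data
        self.used = 0

    def admit(self, rate_kbps: int, priority: int) -> bool:
        """Admit a flow if it fits within the limit for its priority class."""
        if priority == self.CRITICAL:
            limit = self.capacity
        else:
            limit = self.capacity - self.reserve
        if self.used + rate_kbps <= limit:
            self.used += rate_kbps
            return True
        return False  # blocked: preserves headroom for critical traffic

ac = PriorityAdmissionControl(capacity_kbps=1000, reserve_kbps=200)
print(ac.admit(700, PriorityAdmissionControl.NORMAL))    # fits under the 800 kbps limit
print(ac.admit(200, PriorityAdmissionControl.LOW))       # would exceed 800 kbps, blocked
print(ac.admit(250, PriorityAdmissionControl.CRITICAL))  # critical may use the reserve
```

A production scheme would also preempt already-admitted low-priority flows when an alarm burst arrives; the sketch only shows the admission-time decision.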
In the current literature, two distinct and complementary VPN architectures, based on Multi Protocol Label Switching (MPLS) and IP Security (IPsec) technologies, are emerging to form the predominant communication framework for the delivery of high performance VPN services [32]. In Fig. 3, we compare the advantages and disadvantages of MPLS based VPN and IPsec VPN architectures in terms of the performance attributes


described above. As shown in Fig. 3, each Internet VPN technology supports the performance attributes to varying degrees, and thus the most appropriate choice depends on the specific communication requirements of the electric utility. In Fig. 4, a decision tree for choosing an appropriate Internet VPN technology for electric system automation is illustrated. As shown in Fig. 4, if an electric utility requires a high performance communication network ensuring very strict Quality of Service (QoS) requirements, the next decision point in the tree is the size of the communication network, i.e., the number of communication entities that need to be interconnected. Electric utilities that need to connect a large number of substations to a remote control center should prefer the cost-effective MPLS based Internet VPN technology, since it can reduce communication costs significantly compared to dedicated private leased communication lines. If the number of sites in the network is not large, electric utilities can utilize a hybrid network including an IPsec Internet VPN and layer 2 technologies, such as Frame Relay and ATM, for the automation applications. If there are no QoS communication requirements, the possible options are either the public Internet, when no secure communication is required, or an IPsec Internet VPN, when secure communication is required for automation applications. In practice, the actual selection of Internet VPN technology depends on several factors, such as the cost of the communication architecture, its geographic coverage, the locations of the substations and the remote control center, service level agreements, and the network management type, i.e., customer based or network based management.
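The decision tree of Fig. 4, as described in the text, can be sketched as a short function. The inputs and the returned option strings are paraphrases of the prose, not an API from the paper:

```python
# Sketch of the Fig. 4 decision logic as described in the text.
# Parameter names and returned labels are illustrative paraphrases.
def choose_vpn(strict_qos: bool, many_sites: bool, needs_security: bool) -> str:
    """Pick a candidate technology following the decision tree in the text."""
    if strict_qos:
        if many_sites:
            # Many substations + one control center: MPLS amortizes cost best.
            return "MPLS-based Internet VPN"
        return "IPsec Internet VPN + layer 2 (Frame Relay / ATM)"
    if needs_security:
        return "IPsec Internet VPN"
    return "public Internet"

print(choose_vpn(strict_qos=True, many_sites=True, needs_security=True))
```

As the text notes, this is only the first screen; cost, geographic coverage, SLAs, and the management model then refine the choice.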
As a result, electric utilities should evaluate their unique communication requirements and the capabilities of Internet VPN technologies comprehensively in order to determine the best Internet VPN technology for automation applications.

3. Last mile connectivity for electric utilities

In this section, both advantages and disadvantages of possible communication technologies for last mile connectivity are explained in detail. The communication technologies evaluated for last mile connectivity are: (i) Power line communication, (ii) Satellite communication, (iii) Optical fiber communication, and (iv) Wireless communication.


Fig. 3. Comparison of MPLS based Internet VPN and IPSec Internet VPN for electric system automation applications.

Fig. 4. A basic Internet VPN decision tree for electric system automation.


3.1. Power Line Communication

Power Line Communication (PLC) is the transmission of data and electricity simultaneously over existing power lines, as an alternative to constructing dedicated communication infrastructure. Although PLC has been in operation since the 1950s for low data rate services, such as remote control of power grid devices, it has become more important in recent years due to developments in technology that enable PLC's potential use for high speed communications over medium (15/50 kV) and low (110/220 V) voltage power lines [5]. However, several technical problems and regulatory issues remain unresolved. Moreover, a comprehensive theoretical and practical approach for PLC is still missing, and there are only a few general results on the ultimate performance that can be achieved over the power line channel. As a result, commercially deployable, high speed, long distance PLC still requires further research efforts, despite the fact that PLC might provide an alternative cost-effective solution to the last mile connectivity problem. In the following, we explain both advantages and disadvantages of power line communication technologies for automation applications.
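The achievable rate over such a noisy channel is bounded by the Shannon capacity; a back-of-the-envelope sketch shows how heavily channel noise limits PLC throughput. The bandwidth and SNR figures below are hypothetical, chosen only for illustration, and real PLC channels are frequency-selective and time-varying, so this is at best a crude upper bound:

```python
import math

# Shannon capacity C = B * log2(1 + SNR): a rough upper bound on the data
# rate of a noisy channel. Bandwidth and SNR values are hypothetical.
def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 10 MHz band at 20 dB SNR vs. the same band at 0 dB (heavy noise):
for snr_db in (20.0, 0.0):
    c = shannon_capacity_bps(10e6, snr_db)
    print(f"SNR = {snr_db:5.1f} dB -> capacity bound {c / 1e6:.1f} Mbps")
```

The sketch makes the qualitative point of the disadvantages below concrete: the same band that could in principle carry tens of Mbps at a clean 20 dB SNR is cut to a fraction of that when motors, power supplies, and radio interference drive the SNR down.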

3.1.1. Advantages

• Extensive coverage: PLC can provide extensive coverage, since power lines are already installed almost everywhere. This is advantageous especially for substations in rural areas, where there is usually no communication infrastructure.

• Cost: The communication network can be established quickly and cost-effectively because it utilizes the existing wires to carry the communication signals. Thus, PLC can offer substations new cost-saving methods for remotely monitoring power usage and outages.

3.1.2. Disadvantages

• High noise sources over power lines: The power lines are noisy environments for data communications due to several noise sources, such as electrical motors, power supplies, fluorescent lights and radio signal interference [27]. These noise sources over the power lines can result in high
bit error rates during communication, which severely degrade the performance of PLC.

• Capacity: New technological advances have recently enabled a prototype communication modem which achieves a maximum total capacity of 45 Mbps in PLC [1]. However, since the power line is a shared medium, the average data rate per end user will be lower than the total capacity, depending on coincident utilization, i.e., the number of users on the network at the same time and the applications they are using. Thus, possible technical problems should be comprehensively addressed with various field tests before PLC technology is widely deployed.

• Open circuit problem: Communication over the power lines is lost with devices on the far side of an open circuit [14]. This fact severely restricts the usefulness of PLC for applications involving switches, reclosers and sectionalizers.

• Signal attenuation and distortion: In power lines, the attenuation and distortion of signals are immense, due to reasons such as the physical topology of the power network and load impedance fluctuation over the power lines. In addition, there is significant signal attenuation at specific frequency bands due to wave reflection at the terminal points [12]. Therefore, communication over power lines might be lost due to high signal attenuation and distortion.

• Security: There are some security concerns for PLC arising from the nature of power lines [22]. Power cables are not twisted and use no shielding, which means power lines produce a fair amount of Electro Magnetic Interference (EMI). Such EMI can easily be picked up by radio receivers. Therefore, proper encryption techniques must be used to prevent the interception of critical data by an unauthorized person.

• Lack of regulations for broadband PLC: In addition to the technical challenges, fundamental regulatory issues of PLC should be addressed for substantial progress to be made. The limits of transmitted energy and the frequencies employed for PLC should be determined, in order to both enable broadband PLC and prevent interference with already established radio services, such as mobile communications, broadcasting channels and military communications. In this respect, the Institute of Electrical and Electronics Engineers (IEEE) has started to develop a standard to support broadband communications
over power lines [18]. The standard is targeted for completion in mid-2006.

3.2. Satellite communication

Satellite communication can offer innovative solutions for remote control and monitoring of substations. It provides extensive geographic coverage and thus can be a good alternative communication infrastructure for electric system automation, in order to reach remote substations where other communication infrastructures, such as telephone or cellular networks, might not exist. In practical applications, Very Small Aperture Terminal (VSAT) satellite services are already available that are especially tailored for remote substation monitoring applications [33]. Furthermore, with the latest developments in electric system automation, satellite communication is used not only for remote control and monitoring of substations but also for Global Positioning System (GPS) based time synchronization, which provides microsecond accuracy [38]. In addition, satellites can be used as a backup for the existing substation communication network. In case of congestion or link failures, critical data traffic can be routed through satellites [8]. In the following, we present both advantages and disadvantages of satellite communication technologies.

3.2.1. Advantages

• Global coverage: Satellite communication supports a wide geographical coverage (including remote, rural, urban and inaccessible areas), independent of the actual land distance between any pair of communicating entities. In case no communication infrastructure exists, especially for remote substations, satellite communication provides a cost-effective solution.

• Rapid installation: Satellite communication offers clear advantages with respect to the installation of wired networks. A remote substation can join a satellite communication network by only acquiring the necessary technical equipment, without the need for cabling to get high-speed service [20].
Cabling is neither a cost-effective nor a simple job when the substation is located in a remote place. For economic reasons, some utilities have already installed satellite communication for rural substation monitoring [33].
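These installation advantages come at a latency cost. A back-of-the-envelope propagation-delay estimate makes the scale of the round-trip delays discussed next concrete; it assumes nominal orbit altitudes, straight-line vertical paths, and speed-of-light propagation, ignoring elevation-angle geometry and processing delays, so the figures are lower bounds for illustration only:

```python
# Rough round-trip propagation delay for a request/reply exchanged through
# a satellite: 4 one-way ground-satellite hops (up+down out, up+down back).
# Altitudes are nominal (GEO ~35786 km; a LEO constellation ~780 km).
C = 299_792_458.0  # speed of light, m/s

def round_trip_delay_ms(altitude_km: float) -> float:
    return 4 * (altitude_km * 1000.0) / C * 1000.0

for name, alt_km in (("GEO", 35_786.0), ("LEO", 780.0)):
    print(f"{name}: at least {round_trip_delay_ms(alt_km):6.1f} ms round trip")
```

Nearly half a second of unavoidable propagation delay through GEO, versus roughly ten milliseconds through LEO, is exactly the gap behind the TCP suitability argument in the disadvantages below.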

3.2.2. Disadvantages

• Long delay: The round-trip delay in satellite communication, especially for Geostationary Earth Orbit (GEO) satellites,¹ is substantially higher than that of terrestrial communication links. Transport protocols developed for terrestrial links, such as TCP, are not well suited to satellite communication, since the data rate adjustments TCP requires can take a long time in high-delay networks such as satellite networks [16]. On the other hand, it is possible to reduce the round-trip delay by using satellites in lower orbits. In particular, LEO satellites offer significantly reduced delay, comparable to that of terrestrial networks.

• Satellite channel characteristics: Unlike cabled and terrestrial network communications, satellite channel characteristics vary depending on the weather conditions and the effect of fading, which can heavily degrade the performance of the whole satellite communication system [16]. Therefore, these communication challenges should be taken into account while evaluating communication technologies for electric system automation.

• Cost: Although satellite communication can be a cost-effective solution for remote substations when no other communication infrastructure is available, the cost of operating satellites (the infrastructure cost and monthly usage cost) for all substation communication networks is still higher than that of other communication options. The high initial investment for satellite transceivers is one of the limitations of satellite communication.

3.3. Optical fiber communication

Optical fiber communication systems, which were first introduced in the 1960s, offer significant advantages over traditional copper-based communication systems. In electric system automation, an optical fiber communication system is one of the most technically attractive communication infrastructures, providing extremely high data rates.
In addition, its Electro Magnetic Interference (EMI) and Radio Frequency

¹ Satellites can be classified into Geostationary Earth Orbit (GEO), Middle Earth Orbit (MEO) and Low Earth Orbit (LEO) satellites according to the orbit altitude above the earth's surface [9].


Interference (RFI) immunity characteristics make it an ideal communication medium for the high voltage operating environment in substations [14]. Furthermore, optical fiber communication systems support long distance data communication with fewer repeaters² than traditional wired networks. This leads to reduced infrastructure costs for the long distance communication that substation monitoring and control applications demand. For example, a typical T-1 or coaxial communication system requires repeaters about every 2 km, whereas optical fiber communication systems require repeaters about every 100–1000 km [13]. Although optical fiber networks have several technical advantages compared to other wired networks, the optical fiber itself is still expensive for electric utilities to install. However, the enormous bandwidth capacity of optical fiber makes it possible for substations to share the bandwidth capacity with other end users, which significantly helps to recover the cost of the installation. In this respect, optical fiber communication systems might be cost-effective in the high speed communication network backbone, since optical fibers are already widely deployed in communication network backbones and the cost is spread over a large number of users. As a result, fiber optic networks can offer high performance and highly reliable communication when strict QoS substation communication requirements are taken into account. In the following, we describe both advantages and disadvantages of optical fiber communication for automation applications.

3.3.1. Advantages

• Capacity: The extremely high bandwidth capacity of optical fiber communication can provide high performance communication for automation applications. Current optical fiber transmission systems provide transmission rates up to 10 Gbps using single wavelength transmission, and 40 Gbps to 1600 Gbps using wavelength division multiplexing³ (WDM). In addition, very low bit

² In long distance communications, it is necessary to introduce repeaters periodically in order to compensate for the attenuation and distortion of the communication signals.
³ Wavelength division multiplexing (WDM) is an effective approach to exploit the bandwidth capacity available in optical fiber. In WDM, multiple wavelengths are used to carry several data streams simultaneously over the same fiber.

885

error rates (BER = 1015) in fiber optic communication are observed. Due to high bandwidth capacity and low BER characteristics, optical fiber is used as the physical layer of Gigabit and 10 Gigabit ethernet networks. • Immunity characteristics: Optical fibers do not radiate significant energy and do not pick up interference from external sources [13]. Thus, compared to electrical transmission, optical fibers are more secure from tapping and also immune to EMI/RFI interference and crosstalk. 3.3.2. Disadvantages • Cost: Although fiber optic networks possess several technical advantages, the cost of its installation might be expensive in order to remotely control and monitor substations. However, fiber optic networks might be a cost-effective communication infrastructure for high speed communication network backbones, since optical fibers are already widely deployed in the communication network backbones and the cost is spread over a large number of users. 3.4. Wireless communication Several wireless communication technologies currently exist for electric system automation [14]. When compared to conventional wired communication networks, wireless communication technologies have potential benefits in order to remotely control and monitor substations, e.g., savings in cabling costs and rapid installation of the communication infrastructure. On the other hand, wireless communication is more susceptible to Electro Magnetic Interference (EMI) and often has limitations in bandwidth capacity and maximum distances among communication devices. Furthermore, since radio waves in wireless communication spread in the air, eavesdropping can occur and it might be a threat for communication security. Electric utilities exploring wireless communication options have two choices; (i) utilizing an existing communication infrastructure of a public network, e.g., public cellular networks, (ii) installing a private wireless network. 
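The repeater-spacing figures cited for fiber versus T-1/coaxial systems in Section 3.3 (roughly every 2 km versus every 100–1000 km) translate directly into equipment counts. A minimal sketch of that comparison, using a hypothetical 500 km backhaul link (the link length is an illustrative assumption, not a figure from this survey):

```python
import math

def repeaters_needed(link_km: float, spacing_km: float) -> int:
    """Intermediate repeaters on a point-to-point link:
    one per spacing interval, excluding the two endpoints."""
    return max(0, math.ceil(link_km / spacing_km) - 1)

LINK_KM = 500  # hypothetical long-haul link length

coax = repeaters_needed(LINK_KM, 2)     # T-1/coax: repeater about every 2 km
fiber = repeaters_needed(LINK_KM, 100)  # fiber: repeater about every 100 km

print(coax, fiber)  # 249 vs 4 intermediate repeaters
```

The two-orders-of-magnitude gap in repeater count is the source of the infrastructure savings claimed above.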
Utilizing the existing communication infrastructure of a public network might enable a cost-effective solution due to savings in the initial investment required for the communication infrastructure. On the other hand, private wireless networks enable


electric utilities to have more control over their communication network. However, private wireless networks require a significant installation investment as well as ongoing maintenance costs [14]. Wireless communication technology has already been deployed in electric system automation. Recently, the Short Message Service (SMS) functionality of digital cellular networks has been applied to remotely control and monitor substations [34]. The control channel of the cellular network is also utilized in some alarm-based substation monitoring cases [23]. However, both of these communication technologies suit applications that send only small amounts of data and thus cannot provide the strict Quality of Service (QoS) that real time substation monitoring applications demand. In the following, we describe both advantages and disadvantages of wireless communication technologies.

3.4.1. Advantages
• Cost: Utilizing an existing wireless communication network, e.g., a cellular network, might enable a cost-effective solution due to savings in the initial investment required for the communication infrastructure. In wireless communication, cabling cost is also eliminated.
• Rapid installation: The installation of wireless communication is faster than that of wired networks, and wireless communication provides more flexibility than wired networks. Within radio coverage, communication entities can start to communicate after a short communication infrastructure installation.

3.4.2. Disadvantages
• Limited coverage: Private wireless networks provide limited coverage, e.g., the coverage of IEEE 802.11b is approximately 100 m [21]. On the other hand, utilizing an existing public wireless network, e.g., a cellular network, or WiMAX technology can support much more extensive coverage than wireless local area networks. However, some geographical areas, e.g., remote rural locations, may still not have any wireless communication service.
• Capacity: Wireless communication technologies typically provide lower QoS than wired communication networks. Due to limitations and interference in radio transmission, only a limited bandwidth capacity is supported and high bit error rates (BER = 10⁻²–10⁻⁶) are observed. In addition, since wireless communication uses a shared medium, the average application data rate per end user is lower than the total bandwidth capacity, e.g., the maximum data rate of IEEE 802.11b is 11 Mbps while the average application data rate is approximately 6 Mbps [21]. Therefore, each level in the communication protocol stack should adapt to wireless link characteristics appropriately, taking into account the adaptive strategies at the other layers, in order to optimize network communication performance.
• Security: Wireless communication poses serious security challenges since the communication signals can easily be captured by nearby devices. Therefore, efficient authentication and encryption techniques should be applied in order to provide secure communication.
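The BER figures quoted for fiber (about 10⁻¹⁵) and wireless links (10⁻²–10⁻⁶) differ by many orders of magnitude. A back-of-the-envelope calculation makes the contrast concrete, assuming the nominal data rates quoted in this survey (the wireless BER of 10⁻⁴ is picked from within the quoted range):

```python
def errors_per_hour(bit_rate_bps: float, ber: float) -> float:
    """Expected bit errors per hour = bits sent in an hour x error probability."""
    return bit_rate_bps * 3600 * ber

# Fiber: 10 Gbps at BER ~ 1e-15
fiber_err = errors_per_hour(10e9, 1e-15)   # ~0.036 errors per hour
# Wireless: 11 Mbps (802.11b) at an assumed BER of 1e-4
wifi_err = errors_per_hour(11e6, 1e-4)     # ~4 million errors per hour

print(f"fiber: {fiber_err:.3f}/h, wireless: {wifi_err:.0f}/h")
```

This raw-error gap is why the wireless protocol stack must lean on retransmission and coding, while fiber can carry Gigabit Ethernet essentially error-free.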

Note that with the recent advances in wireless communications and digital electronics, hybrid network architectures have enabled alternative scalable wireless communication systems that can satisfy the strict quality of service (QoS) requirements of automation applications. The details of these recent wireless technologies, i.e., wireless sensor networks, WiMAX and wireless mesh networks, are described in the following sections.

4. Wireless sensor networks for automation

In this section, we explain the opportunities and challenges of wireless sensor networks (WSNs) and present design objectives and requirements of WSNs for electric system automation applications. In general, wireless sensor networks are composed of a large number of low cost, low power and multifunctional sensor nodes that are small in size and communicate untethered over short distances [4]. The ever-increasing capabilities of these tiny sensor nodes enable capturing various physical information, e.g., noise level, temperature, vibration, radiation, etc., as well as mapping such physical characteristics of the environment to quantitative measurements. The collaborative nature of WSNs brings several advantages over traditional sensing, including greater fault tolerance, improved accuracy, larger coverage area and extraction of localized features. In this respect, wireless sensor networks enable low cost and low power wireless


communication for electric system automation applications, especially in urban areas. Furthermore, in the area of electric utility measurement systems, WSNs are used in wireless automatic meter reading (WAMR) systems, which can accurately determine the real-time energy consumption of customers. WAMR systems are important for electric utilities, since they can reduce operational costs and enable remotely controlled flexible management systems based on real-time energy consumption statistics. Therefore, WSNs provide an alternative real-time monitoring system for electric utilities with the potential to improve business performance and the technical reliability of various electric utility operations.

In WSNs, the architecture of the network depends on the purpose of the application. Based on the application requirements, the sensor nodes are scattered in a sensor field as shown in Fig. 5. Each of these scattered sensor nodes has the capability to collect data and route it back to the sink node in a multi-hop manner [3]. In this architecture, the role of the sink node is to monitor the overall network and to communicate with the task manager, e.g., the control center in the power utility, in order to decide on appropriate actions. The sink node can communicate with the task manager via the Internet or satellite.

4.1. Benefits of wireless sensor networks for automation

Wireless Sensor Network (WSN) technology has created new communication paradigms for the real-time and reliable monitoring requirements of the electric


systems. Some of the benefits that can be achieved using WSN technology are highlighted as follows:

• Monitoring in harsh environments: The sensors in WSNs are rugged, reliable, self-configurable and unaffected by extreme ambient conditions, e.g., temperature, pressure, etc. Thus, WSNs can operate even in harsh environments and eliminate the cabling requirement in electric systems.
• Large coverage: WSNs can contain a large number of physically separated sensor nodes that do not require human intervention. Although the coverage of a single sensor node is small, densely distributed sensor nodes can work simultaneously and collaboratively so that the coverage of the whole network is extended. Therefore, the coverage limitations of traditional sensing systems can be addressed efficiently.
• Greater fault tolerance: The dense deployment of sensor nodes leads to high correlation in the sensed data. The correlated data from neighboring sensor nodes in a given deployment area makes WSNs more fault tolerant than conventional sensor systems. Due to data redundancy and the distributed nature of WSNs, adequate monitoring information can be transported to the remote control center even in the case of sensor and route failures.
• Improved accuracy: The collective effort of sensor nodes enables more accurate observation of physical phenomena compared to traditional monitoring systems [17]. In addition, multiple sensor types in WSNs provide the capability of monitoring various physical phenomena in the electric system.

Fig. 5. An illustrated architecture of wireless sensor networks.
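The collection pattern of Fig. 5 — scattered nodes relaying readings hop by hop toward the sink — can be sketched as follows. The topology and node names are illustrative, not taken from the paper:

```python
# Illustrative multi-hop topology: each sensor knows only its next hop
# toward the sink, as in the architecture of Fig. 5.
next_hop = {"s4": "s2", "s3": "s2", "s2": "s1", "s1": "sink"}

def route_to_sink(node: str) -> list[str]:
    """Path a reading travels from a sensor node to the sink."""
    path = [node]
    while path[-1] != "sink":
        path.append(next_hop[path[-1]])
    return path

print(route_to_sink("s4"))  # ['s4', 's2', 's1', 'sink']
```

In a real deployment the `next_hop` table would be built and maintained by the routing protocol; the sink then relays collected data to the task manager over the Internet or a satellite link.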



• Efficient data processing: Instead of sending raw data to the remote control center directly, sensor nodes can locally filter the sensed data according to the application requirements and transmit only the processed data. Thus, only necessary information is transported to the remote control center, and communication overhead can be significantly reduced.
• Self configuration and organization: The sensor nodes in WSNs can be rapidly deployed and dynamically reconfigured because of their self-configuration capability. The ad hoc architecture of WSNs also overcomes the difficulties raised by the predetermined infrastructure requirements of traditional communication networks. More specifically, new sensor nodes can be added to replace failed sensor nodes in the deployment field, and existing nodes can be removed from the system without affecting the general objective of the electric utility's monitoring system.
• Lower cost: WSNs are expected to be less expensive than conventional monitoring systems because of their small size and lower price, as well as the ease of their deployment.

4.2. Wireless sensor network applications for automation

WSN technology can enhance the performance of electric utility operations by enabling wireless automatic meter reading and real-time, reliable monitoring systems for electric utilities. In the following, WSN applications for electric system automation are described in detail.

4.2.1. Wireless automatic meter reading (WAMR)

Currently, manual electricity meter reading is the most common method used by electric utilities. These systems require visual inspection of the utility meters and do not allow flexible management systems for the electric utilities.
In addition, network connections between traditional meters and data collection points are basically non-existent; thus, it is impossible to implement a remotely controlled flexible management system based on energy consumption statistics using traditional measurement systems. With the recent advances in Micro Electro-Mechanical Systems (MEMS) technology, wireless communications and digital electronics, the development of low cost smart sensor networks that enable wireless automatic meter reading (WAMR) systems has become feasible. As deregulation and competition in the electric utility marketplace increase, so does the importance of WAMR systems.

Wireless collection of electric utility meter data is a very cost-effective way of gathering energy consumption data for the billing system, and it adds value in terms of new services such as remote deactivation of a customer's service, real-time price signals and control of customers' applications. The present demand for more data in order to make cost-effective decisions and to provide improved customer service has played a major role in the move towards WAMR systems. WAMR systems offer several advantages to electric utilities, including reduced operational costs by eliminating the need for human readers, and real-time pricing models based on customers' real-time energy consumption. The real-time pricing capability of WAMR systems can also benefit customers. For example, using the real-time pricing model, the electric utility can reward customers who shift their demand to off-peak times. Therefore, the electric utility can work with customers to shift loads and manage prices efficiently by utilizing WAMR systems instead of once-a-month on-site traditional meter reading. However, the real-time pricing model requires reliable two-way communication between the electric utility and the customer's metering equipment. WSN technology addresses this requirement efficiently by providing low cost and low power wireless communication. In Fig. 6, a wireless automatic meter reading system using sensor network technology is illustrated. As shown in Fig. 6, the sensed data from the meter is collected by the utility control center through multi-hop wireless communication.
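The off-peak incentive described above can be illustrated with a small time-of-use billing sketch. The tariff values and peak window are invented for illustration; real utilities publish their own schedules:

```python
# Hypothetical two-rate tariff ($/kWh) and peak window (5pm-9pm).
PEAK_RATE, OFF_PEAK_RATE = 0.20, 0.08
PEAK_HOURS = range(17, 21)

def daily_bill(hourly_kwh: dict[int, float]) -> float:
    """Bill one day of metered consumption under the two-rate tariff."""
    return sum(kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
               for hour, kwh in hourly_kwh.items())

# Shifting a 3 kWh load (e.g., laundry) from 6pm to 10pm:
before = daily_bill({18: 3.0})
after = daily_bill({22: 3.0})
print(before, after)  # roughly 0.60 vs 0.24: shifting off-peak cuts the cost
```

The point of the sketch is that this kind of pricing only works if the meter reports *when* energy was used, which is exactly the two-way, real-time capability WAMR adds over monthly manual reads.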
This monitoring system can also provide flexibility to the electric utility, so that utility personnel or a mobile utility controller can monitor the system locally when required, e.g., in alarm situations. In summary, wireless automatic meter reading systems can provide the following functionalities for electric systems:

• Automatic meter reading functionalities: WSNs enable real-time automatic measurement of customers' energy consumption. The


Fig. 6. An illustration of WAMR system using wireless sensor network technology.

automatic meter measurements can also be classified as individual meter measurements, cluster meter measurements and global meter measurements. Here, the objective is to provide flexible management policies with different real-time monitoring choices for electric utilities.
• Telemetry functionalities: Electric utility control centers can obtain real-time data from smart sensor nodes and control elements located at selected points of the distribution network, e.g., the status of switches [25]. Thus, distributed sensing and automation enhance electric utility services by reducing failure and restoration times.
• Dynamic configuration functionality: In electric system automation applications, the reliability of the measurements should be ensured even in the case of route failures in the network [31]. Thus, it is extremely important to dynamically adjust the configuration of the network, e.g., through dynamic routing, in order to meet the reliability requirements of the applications. In this respect, the self-configuration capability of WSNs enables dynamic reconfiguration of the network.
• Status monitoring functionality: Monitoring the status of the metering devices, which are embedded with smart sensors, is another functionality of WAMR systems. This functionality can be very helpful for determining sensor node failures in the network accurately and in a timely manner. In addition, the status monitoring functionality can be utilized in case of tampering with metering devices. For example, if someone tries to vandalize a metering device, the system can notify the police automatically [24]. This reduces the considerable costs of sending service crews out to repair vandalized metering devices.

As advances in WAMR technologies continue, these systems will become less expensive and more reliable. Most utility and billing companies have recognized that with the invention of low-cost, low power radio sensors, wireless RF communication is, by far, the most cost-efficient way to collect utility meter data.

4.2.2. Electric system monitoring

Equipment failures, lightning strikes, accidents and natural catastrophes all cause power disturbances and outages, and often result in long service interruptions. Thus, electric systems should be properly controlled and monitored in order to take


the necessary precautions in a timely manner [36]. In this respect, wireless sensor networks (WSNs) can provide a cost-effective, reliable monitoring system for electric utilities [29]. An efficient monitoring system constructed with smart sensor nodes can reduce the time needed to detect faults and to restore electric supply service in distribution networks. In addition, electricity regulators monitor the performance of electricity distribution network operators using a range of indices relating to customer service. Distribution network operators have targets and incur penalties based on the length of service interruptions, i.e., both outage frequency and duration [30]. Continuity of electricity service is also crucial in today's competitive electric utility marketplace from the perspective of customer satisfaction.

In order to evaluate the performance of the electric system, several Quality of Service (QoS) indices can be obtained using WSN technology. For example, the average duration of service interruptions and the average repair time can be computed. Typically, for densely deployed urban areas, these performance indices are correlated with the time for remote or manual switching of supply circuits. In this context, smart sensor nodes deployed in the electric utility can provide rapid identification of service interruptions and timely restoration of electric utility services. Therefore, WSNs can help electric utilities meet regulatory targets for these performance indices.

4.3. Wireless sensor network design considerations

When wireless sensor network technology for electric system automation applications is considered, there are two key design elements which are critical to developing a cost-effective wireless sensor network that supports both the existing functionalities and the new operational requirements of future electric systems. These key elements are described in the following.

4.3.1. Network topology and architecture requirements

The topology of a sensor network has significant implications for several network aspects, including network lifetime, routing algorithms and the communication range of the sensor nodes. The network architecture requirements contain the physical and

logical organization of the network as well as the density of the sensor nodes. In general, the objective of sensor networks is to efficiently cover the deployment area. The logical and hierarchical organization of the network also impacts energy consumption and the selection of communication protocols. In addition, based on topology requirements, sensor networks can have a distributed organization or a clustered organization, where selected nodes handle data forwarding. The network topology and architecture requirements for electric utilities can be determined by answering the following questions:

• What type of network topology best fits the application? (Is it one-to-one, one-to-many, many-to-one or many-to-many?)
• How will the monitoring network work? (Is it master–slave, point-to-point, point-to-multipoint or peer-to-peer?)
• What are the worst case ambient conditions in the coverage area?
• How many substations should be controlled and monitored, including both current and future requirements of the electric system?
• Are there any known potential interference problems due to physical obstructions, or RF interference from power lines or large induction motors?

4.3.2. Application requirements

The information that is to be relayed through the sensor network for electric utilities should be classified and quantified [2]. These requirements can be obtained through a comprehensive analysis of the electric system automation applications. Based on the application requirements, the properties of individual sensor nodes can also be identified, which impact network modelling and communication protocol choices. The following questions can help electric utilities to determine these requirements:

• What are the QoS requirements of the application? (Does it require real-time monitoring or delay tolerant monitoring?)
• Does the system continuously poll for information (periodic monitoring), or is data generated by exception (event-based monitoring)?
• What is the type of the sensor data, i.e., video, voice or data?
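The question lists in Sections 4.3.1 and 4.3.2 lend themselves to a simple structured checklist that a utility could fill in per deployment. The field names below are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class WsnRequirements:
    """Illustrative checklist distilled from the questions above."""
    topology: str        # "one-to-one", "many-to-one", ...
    operation: str       # "master-slave", "peer-to-peer", ...
    substations: int     # current + planned sites to monitor
    interference: bool   # known RF interference or obstructions?
    realtime: bool       # real-time vs delay-tolerant monitoring
    reporting: str       # "periodic" or "event-based"
    data_type: str       # "data", "voice" or "video"

req = WsnRequirements(
    topology="many-to-one", operation="peer-to-peer", substations=12,
    interference=True, realtime=True, reporting="event-based", data_type="data",
)
print(req)
```

Recording the answers in one structure makes it easier to compare candidate network designs against the same requirements.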


As a result, electric utilities should determine the network topology, architecture and application requirements comprehensively in order to establish the best-fit wireless sensor network for their applications. Full consideration of the different sensor network options, and how they will fit the electric utility application, is critical for a successful implementation.

4.4. Design challenges of wireless sensor networks

Although WSNs bring significant advantages over traditional communication networks, their properties also impose unique communication challenges. These challenges can be described as follows:

• Limited resources: The design and implementation of WSNs are constrained by three types of resources: (i) energy, (ii) memory and (iii) processing. Constrained by their limited physical size, sensor nodes have a limited battery energy supply [15]. For this reason, communication protocols for WSNs are mainly tailored to provide high energy efficiency. It is also important to note that in electric systems, the batteries of the sensors can be charged by appropriate energy supplies. In addition, the collaborative effort of sensor nodes can mitigate the sensor nodes' limited memory and processing capabilities.
• Dynamic topologies and environment: The topology and connectivity of the network may vary due to route and sensor node failures. Furthermore, the environment that sensor nodes monitor can change dramatically, which may cause a portion of the sensor nodes to malfunction or render the information they gather obsolete. Thus, communication protocols developed for WSNs should accurately capture the dynamics of the network [35].
• Quality of service concerns: The quality of service (QoS) provided by WSNs refers to the accuracy between the data reported to the control center and what is actually occurring in the environment. In addition, since sensor data are typically time sensitive, e.g., alarm notifications for electric utilities, it is important that the control center receive the data in a timely manner. Data with long latency due to processing or communication may be outdated and lead to wrong decisions in the monitoring system. Therefore, the


developed communication protocols for WSNs should address both real-time and reliable communication simultaneously.

5. WiMAX and wireless mesh networks for automation

Recall that the proposed hybrid network architecture consists of various types of networks, including Internet VPN, wireless sensor networks, WiMAX and wireless mesh networks. In the previous sections, we described the details of Internet VPN technologies (see Section 2) and wireless sensor networks (see Section 4) for electric utilities. In this section, we focus on wireless mesh networks and WiMAX technology for electric system automation applications.

In Fig. 7, an illustration of the hybrid network architecture utilizing WiMAX technology and wireless mesh networks is shown. In this hybrid architecture, a set of electric utility subscribers is clustered into wireless mesh domains, where each domain has a smaller dimension compared to the global network. Hence, each wireless mesh domain can be easily managed by centralized communication entities, called local control centers. Furthermore, in this architecture, each wireless mesh cluster is monitored by a remote control center using WiMAX. Therefore, with the integration of wireless mesh networks and WiMAX, electric utilities can fully exploit the advantages of multiple wireless networks. The main components of the proposed hybrid network architecture are briefly described as follows:

• Wireless mesh domains: In the proposed hybrid network architecture, wireless mesh domains constitute a fully connected wireless network among the electric utility subscribers. Unlike traditional wireless networks, each wireless mesh domain is dynamically self-organized and self-configured. In other words, the nodes in the mesh network automatically establish and maintain network connectivity.
This feature brings many advantages for electric utilities, such as low up-front cost, easy network maintenance, robustness, and reliable service coverage. In addition, with the use of advanced radio technologies, e.g., multiple radio interfaces and smart antennas, network capacity can be increased significantly. Moreover, the gateway and bridge


Fig. 7. An illustration of hybrid network architecture using WiMAX and wireless mesh networks.
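The hierarchy of Fig. 7 — subscribers grouped into wireless mesh domains, each domain reporting to a local control center, and all domains monitored by a remote control center over the WiMAX backbone — can be sketched as follows. The domain grouping and names are illustrative assumptions:

```python
# Illustrative model of Fig. 7's hierarchy: mesh domains with local control
# centers (LCCs), aggregated at a remote control center over WiMAX.
mesh_domains = {
    "lcc-1": ["sub-a", "sub-b", "sub-c"],  # wireless mesh domain 1
    "lcc-2": ["sub-d", "sub-e"],           # wireless mesh domain 2
}

def collect_at_remote_center(readings: dict[str, float]) -> dict[str, dict[str, float]]:
    """Each LCC gathers its own subscribers' readings; the remote control
    center receives one aggregate per domain over the WiMAX backbone."""
    return {lcc: {s: readings[s] for s in subs if s in readings}
            for lcc, subs in mesh_domains.items()}

report = collect_at_remote_center({"sub-a": 1.2, "sub-d": 0.9})
print(report)  # {'lcc-1': {'sub-a': 1.2}, 'lcc-2': {'sub-d': 0.9}}
```

Keeping each domain small is what makes per-domain management tractable, while the WiMAX hop gives the remote center a single long-distance view of every domain.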

functionalities in mesh routers enable the integration of wireless mesh domains with various existing wireless networks, such as wireless sensor networks, wireless fidelity (Wi-Fi) and WiMAX [3]. Consequently, through an integrated wireless mesh network, electric utilities can take advantage of multiple wireless networks.
• WiMAX backbone: The necessary long distance communication (up to 31 miles) between local control centers and a remote control center is provided using worldwide interoperability for microwave access (WiMAX) technology. With the integration of WiMAX technology, the capacity of the network backbone can be increased up to 75 Mbps. In addition, WiMAX offers a standardized communication technology for point-to-multipoint wireless networks, i.e., the IEEE 802.16 standard [37]. This enables interoperability between different vendors' products, which is another important concern for electric utilities. Furthermore, unlike traditional point-to-multipoint networks, WiMAX technology also supports non-line-of-sight communication. Hence, electric systems suffering from environmental obstacles can benefit from WiMAX technology to improve the performance of their communication systems. WiMAX technology, particularly the IEEE 802.16e standard [37], also focuses on low latency handoff management, which is necessary for communication with users moving at vehicular speeds.

5.1. Benefits of hybrid network architecture using WiMAX and wireless mesh networks

With the recent advances in wireless communications and digital electronics, hybrid network architectures have enabled alternative scalable wireless communication systems that can satisfy the strict quality of service (QoS) requirements of electric system automation applications in a cost-effective manner. Some of the benefits of hybrid network architectures are highlighted as follows:

• Increased reliability: In wireless mesh domains, the wireless backbone provides redundant paths between the sender and the receiver of the wireless connection. This eliminates single points of failure and potential bottleneck links within the mesh domains, resulting in significantly increased communications reliability [3]. Network robustness against potential problems, e.g., node failures and path failures due to RF interference or obstacles, can also be ensured by the existence


of multiple possible alternative routes. Therefore, by utilizing WMN technology, the network for electric utilities can operate reliably over an extended period of time, even in the presence of network element failures or network congestion.
• Low installation costs: Recently, the main effort to provide wireless connections to end-users has been through the deployment of 802.11-based Wi-Fi Access Points (APs). To ensure nearly full coverage of a metro-scale area for electric system automation, a large number of access points must be deployed because of the APs' limited transmission range. The drawback of this solution is highly expensive infrastructure costs, since an expensive cabled connection to the wired Internet backbone is necessary for each AP. Installing the required cabling infrastructure significantly increases installation costs and slows down the deployment of the wireless network. As a result, deploying APs for wireless Internet connectivity is costly and unscalable for electric system automation applications. On the other hand, constructing a wireless mesh network decreases infrastructure costs, since the mesh network requires only a few points of connection to the wired network. Hence, WMNs can enable rapid implementation and modification of the network at a reasonable cost, which is extremely important in today's competitive electric utility environment.
• Large coverage area: Currently, the data rates of wireless local area networks (WLANs) have increased, e.g., to 54 Mbps for 802.11a and 802.11g, through spectrally efficient modulation schemes. Although the data rates of WLANs are increasing, for a given transmission power, the coverage and connectivity of WLANs decrease as the end-user moves further from the access point. On the other hand, WiMAX technology enables long distance communication between local control centers and a remote control center without performance degradation. As a result, the WiMAX backbone in the hybrid network can realize the high speed, long distance communication that automation applications demand.
• Automatic network connectivity: In the proposed hybrid network architecture, wireless mesh domains are dynamically self-organized and self-configured. In other words, the nodes in the


mesh network automatically establish and maintain network connectivity, which enables seamless multi-hop interconnection service for the electric utilities. For example, when new nodes are added into the network, these nodes utilize their meshing functionalities to automatically discover all possible routers and determine the optimal paths to the control centers [3]. Furthermore, the existing mesh routers reorganize the network considering the newly available routes and hence, the network can be easily expanded. The self-configuration feature of wireless mesh networks is so crucial for electric system automation applications, since it enables electric utilities to cope with new connectivity requirements driven by customer demands. 5.2. Design challenges of hybrid architecture using WiMAX and wireless mesh networks Hybrid network architectures can provide an economically feasible solution for the wide deployment of high speed wireless communications for electric system automation applications. Some companies already have some products for sale and have started to deploy wireless mesh networks and WiMAX towers for various application scenarios. However, field trials and experiments with existing communication protocols show that the performance of hybrid network architectures is still far below what they are expected to be [3]. Therefore, there is a need for the development of novel communication protocols for hybrid network architectures and thus, many open research issues need to be resolved. Some of these research issues are described as follows: • Harsh monitoring environment: In substations, wireless links exhibit widely varying characteristics over time and space due to obstructions and extremely noisy environment caused by power lines and RF interferences. 
To improve network capacity and limit radio interference, advanced radio technologies, such as multiple-input multiple-output (MIMO) techniques, multiple radio interfaces and smart antennas, should be exploited while developing communication protocols.

• Optimal placement of WiMAX towers: In the proposed hybrid architecture, it is important to design an efficient and low cost network












V.C. Gungor, F.C. Lambert / Computer Networks 50 (2006) 877–897

infrastructure, while meeting the deadlines of the time-critical monitoring data. Thus, the WiMAX towers, equipped with expensive RF hardware, should be optimally placed in the deployment field in order to both reduce infrastructure costs and meet QoS requirements.

• Mobility support: Low latency handover management algorithms are required to support the communication services of mobile utility controllers. This way, mobile utility controllers can also monitor the system locally when necessary, e.g., in case of alarm situations.

• Integration of heterogeneous networks: Existing networking technologies have limited capabilities for integrating different wireless networks. Thus, to increase the performance of hybrid network architectures, the integration capabilities of multiple wireless interfaces and the corresponding gateway/bridge functions of network routers should be improved.

• Scalability: In today's competitive dynamic market environment, electric utilities may need to deploy new substations and provision large service requests rapidly. In this respect, the designed hybrid network architecture should scale well to accommodate new communication requirements driven by customer demands.

• Coordinated resource management: Distributed and collaborative network resource management is required to respond effectively to system changes due to wireless channel characteristics, contention and traffic patterns. This way, system-wide fairness and self-configuration of the network can be realized.

• Security: Denial of service attacks may cause severe damage to the operation of the deployed hybrid network. Efficient encryption and cryptography mechanisms are needed to address these security problems.
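The tower-placement challenge above can be viewed as a set-cover instance: each candidate site covers some subset of substations, and we want few sites covering all of them. The sketch below is ours, not from the paper; the site names and coverage sets are invented, and the classical greedy heuristic stands in for the optimal placement the authors call for.

```python
# Illustrative sketch (not the paper's method): greedy set-cover placement
# of WiMAX towers. Each candidate site covers a set of substations; we pick
# sites until every substation is covered, approximating the minimum-cost
# placement within a logarithmic factor.
def greedy_placement(candidates, substations):
    """candidates: dict mapping site name -> set of substations it covers."""
    uncovered = set(substations)
    chosen = []
    while uncovered:
        # pick the site covering the most still-uncovered substations
        site = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[site] & uncovered:
            break  # remaining substations are unreachable from any site
        chosen.append(site)
        uncovered -= candidates[site]
    return chosen

# Hypothetical deployment: three candidate sites, six substations.
sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
print(greedy_placement(sites, {1, 2, 3, 4, 5, 6}))  # ['A', 'C']
```

A real placement would weight sites by RF hardware cost and check QoS deadlines along each path, but the covering structure of the problem is the same.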

To solve all of these existing problems of hybrid network architectures, the protocol stack from physical to application layers needs to be improved or re-invented. In this regard, a cross-layer design is required to jointly optimize the main networking functionalities and to design communication protocol suites that are adaptive to the dynamic characteristics of the wireless channel. This way, the hybrid network architecture can provide rapid identification of service interruptions and timely restoration of the electric utility services.

6. Conclusion

Electric utilities, especially in urban areas, continuously encounter the challenge of providing reliable power to end-users at competitive prices. Equipment failures, lightning strikes, accidents, and natural catastrophes all cause power disturbances and outages and often result in long service interruptions. In this regard, electric system automation, which is the creation of a highly reliable, self-healing electric system that rapidly responds to real-time events with appropriate actions, aims to maintain uninterrupted power services to the end-users. However, the operational and commercial demands of electric utilities require a high-performance data communication network that supports both existing functionalities and future operational requirements. Therefore, the design of a cost-effective and reliable network architecture is crucial.

When the individual communication capabilities and locations of electric systems are taken into account, it is appropriate to consider the overall communication infrastructure as a hybrid network architecture. This hybrid network architecture consists of various types of networks such as the Internet, wireless sensor networks, WiMAX and wireless mesh networks. In this hybrid architecture, the communication network can be dynamically self-configured. This brings significant advantages for electric utilities, such as low up-front cost, easy network maintenance, robustness, and reliable service coverage. Furthermore, with the integration of different networks, electric utilities can fully exploit the advantages of multiple wireless networks. For example, while low power and short range wireless sensor networks can be utilized in urban areas, WiMAX technology, which enables reliable long distance communication, can be used in rural areas.
As a result, the proposed hybrid network architecture enables a fully connected communication network for electric system automation applications, such as real-time grid and equipment monitoring and wireless automatic meter reading systems. In this paper, the opportunities and challenges of hybrid network architecture are discussed for electric system automation applications. More specifically, Internet based Virtual Private Networks, power line communications, satellite communications and wireless communications (wireless sensor


networks, WiMAX, wireless mesh networks) are described in detail. The motivation of this paper is to provide a better understanding of the hybrid network architecture that can support heterogeneous electric system automation application requirements. Consequently, our aim is to present a structured framework for electric utilities that plan to utilize new communication technologies for automation and hence, to make the decision-making process more effective and direct. Based on our comprehensive research, we make the following recommendations for the electric utilities:

• Internet-ready IEDs: Recent advances in digital electronics and communication technology have enabled the development of Internet-ready Intelligent Electronic Devices (IEDs). International standards are being developed (IEC 61850) to promote rapid configuration and integration into the utility automation system [19]. Integrating these IEDs into electric systems can offer various benefits, e.g., remote access to IED/relay configuration ports, diagnostic event information, and video for security or equipment status assessment. To ensure these benefits are fully exploited, appropriate digital simulators are needed to test and evaluate the performance of multi-vendor IEDs and make more informed decisions.

• Novel communications protocols: Although the hybrid network architecture offers many opportunities for electric utilities, field trials and experiments with existing communication protocols show that its performance is still far below expectations. Therefore, there is a need for the development of novel communication protocols for hybrid network architectures, and many open research issues, such as coordinated network management, security, mobility support, and integration of heterogeneous networks, need to be resolved.

• Cost vs.
benefit analysis: While meeting the communication requirements of automation applications, the hybrid network architecture should enable rapid implementation and possible modifications of the electric utility network. In this regard, the cost of the network should also be considered in order to keep it feasible within the budget constraints of the electric utilities. Hence, a detailed cost vs. benefit analysis is


required to evaluate the performance of the hybrid network architecture.

• Wireless communications technologies: Wireless communications technologies (WiMAX, wireless sensor networks, wireless mesh networks) should be developed for deployment in electric system automation applications (see Sections 4 and 5). WiMAX is expected to become commercial in 2007–2008 and brings several advantages, such as mobility support and a large coverage area. Wireless sensor networks and wireless mesh networks are under development and offer electric utilities low installation costs, increased reliability and self-configuration.

• Power line communications technologies: Power line communications (PLC) technologies should be developed for deployment in electric system automation applications. PLC has become important in recent years due to developments in technology that enable its potential use for medium and high speed communications over medium (15/35 kV) and low (120/240 V) voltage power lines. However, several technical problems and regulatory issues remain unresolved (see Section 3.1). Moreover, a comprehensive theoretical and practical approach for PLC is still missing, and there are only a few general results on the ultimate performance that can be achieved over the power line channel. As a result, commercially deployable, high speed, long distance PLC still requires further research efforts. International standards are also needed for building power system applications and customer services on top of PLC technologies.
Acknowledgements

The authors would like to thank Tom Weaver, James Bales, Doug Fitchett, Eric Rehberg, Ray Hayes (AEP); Brian Deaver (Baltimore Gas & Electric); Dan Landerman (Cooper Power Systems); Brad Black, Shawn Ervin, Jeff Daugherty (Duke Power); Jerry Bernstein (Entergy); Wayne Zessin, Mark Browning (Exelon); Pat Patterson, Martin Gordon (NRECA); Mark Gray, Bill Robey (PEPCO); David White (South Carolina Electric & Gas); Brian Dockstader (Southern California Edison); Larry Smith, Bob Cheney, Mac Fry, Bob Reynolds (Southern Company); Joe Rostron (Southern States); Frank Daniel (TXU) for their valuable comments that improved the quality of this



paper. This work was supported by NEETRAC under Project #04-157.

References

[1] Y. Abe et al., Development of high speed power line communication modem, SEI Technical Review 58 (June) (2004) 28–33.
[2] J. Adam et al., EHT control systems and wireless communications: the wave of the future, in: IEEE Industry Applications Society Petroleum and Chemical Industry Conference, September 2001, pp. 169–178.
[3] I.F. Akyildiz, X. Wang, W. Wang, Wireless mesh networks: a survey, Computer Networks Journal (March) (2005).
[4] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, Wireless sensor networks: a survey, Computer Networks 38 (4) (2002) 393–422.
[5] I.H. Cavdar, A solution to remote detection of illegal electricity usage via power line communications, IEEE Transactions on Power Delivery 19 (October) (2004) 1663–1667.
[6] F. Cleveland, R. Ehlers, Guidelines for implementing substation automation using UCA-SA (Utility Communications Architecture-Substation Automation), Electric Power Research Institute Technical Report 1002071, 2003.
[7] R. Cohen, On the establishment of an access VPN in broadband access networks, IEEE Communications Magazine 41 (February) (2003) 156–163.
[8] E. Ekici, I.F. Akyildiz, M.D. Bender, A multicast routing algorithm for LEO satellite IP networks, IEEE/ACM Transactions on Networking 10 (2) (2002) 183–192.
[9] B.R. Elbert, Satellite Communications Applications Handbook, Artech House Publishers, 2004.
[10] G.N. Ericsson, Communication requirements—basis for investment in a utility wide-area network, IEEE Transactions on Power Delivery 19 (January) (2004) 92–95.
[11] G.N. Ericsson, Classification of power systems communications needs and requirements: experiences from case studies at Swedish National Grid, IEEE Transactions on Power Delivery 17 (April) (2002) 345–347.
[12] S. Galli, A. Scaglione, K. Dosterl, Broadband is power: Internet access through the power line network, IEEE Communications Magazine 41 (May) (2003) 82–83.
[13] A.L. Garcia, I. Widjaja, Communication Networks: Fundamental Concepts and Key Architectures, McGraw-Hill, 2004.
[14] F. Goodman et al., Technical and system requirements for advanced distribution automation, Electric Power Research Institute Technical Report 1010915, June 2004.
[15] V.C. Gungor, A forecasting-based monitoring and tomography framework for wireless sensor networks, in: Proc. of IEEE ICC, Istanbul, Turkey, June 2006.
[16] Y. Hu, V.O.K. Li, Satellite-based Internet: a tutorial, IEEE Communications Magazine 39 (March) (2001) 154–162.
[17] P.H. Ibarguengoytia et al., Real time intelligent sensor validation, IEEE Transactions on Power Systems 16 (4) (2001) 770–775.
[18] IEEE P1675: Standard for Broadband over Power Line Hardware.
[19] IEC 61850: Standard for Substation Automation Systems.
[20] A. Jamalipour et al., Guest editorial broadband IP networks via satellites Part I, IEEE Journal on Selected Areas in Communications 22 (2) (2004) 213–217.
[21] J. Jun, P. Peddabachagari, M. Sichitiu, Theoretical maximum throughput of IEEE 802.11 and its applications, in: IEEE International Symposium on Network Computing and Applications, April 2003, pp. 249–256.
[22] W. Liu, H. Widmer, P. Raffin, Broadband PLC access systems and field deployment in European power line networks, IEEE Communications Magazine 41 (May) (2003) 114–118.
[23] J. Lowrey, Automation systems work best when they communicate with each other, Rural Electric (April) (2003) 38–41.
[24] A. Minosi et al., Intelligent, low-power and low-cost measurement system for energy consumption, in: Proc. of IEEE VECIMS, July 2003, pp. 125–130.
[25] F.J. Molina et al., Automated meter reading and SCADA application for wireless sensor network, in: Proc. of ADHOC-NOW, Canada, 2003, pp. 223–234.
[26] J.C. de Oliveira, C. Scoglio, I.F. Akyildiz, G. Uhl, New preemption policies for DiffServ-aware traffic engineering to minimize rerouting in MPLS networks, IEEE/ACM Transactions on Networking 12 (August) (2004) 733–745.
[27] N. Pavlidou, A.J.H. Vinck, J. Yazdani, B. Honary, Power line communications: state of the art and future trends, IEEE Communications Magazine 41 (April) (2003) 34–40.
[28] D. Pompili, C. Scoglio, V.C. Gungor, Virtual-flow multi-path algorithms for MPLS, in: Proc. of IEEE ICC, Istanbul, Turkey, June 2006.
[29] J. Rabaey et al., Smart energy distribution and consumption: information technology as an enabling force, Technical Report, UC Berkeley, 2001.
[30] J. Roman et al., Regulation of distribution network business, IEEE Transactions on Power Delivery 14 (2) (1999) 662–669.
[31] M. Stephenson et al., Exploiting emerging tools in short range wireless technologies, in: International Conference on 3G Mobile Communication Technologies, June 2003, pp. 348–353.
[32] N.K. Tan, Building VPNs: with IPSec and MPLS, McGraw-Hill Networking, 2003.
[33] A. Tisot, Rio Grande electric monitors remote energy assets via satellite, Utility Automation & Engineering T&D Magazine (July) (2004).
[34] T. Tommila, O. Venta, K. Koskinen, Next generation industrial automation—needs and opportunities, Automation Technology Review (2001).
[35] M.C. Vuran, V.C. Gungor, O.B. Akan, On the interdependence of congestion and contention in wireless sensor networks, in: Proc. of ICST SenMetrics, San Diego, CA, July 2005.
[36] Z. Xie et al., An information architecture for future power systems and its reliability analysis, IEEE Transactions on Power Systems 17 (3) (2002) 857–863.
[38] G.E. Ziegler, Protection and substation automation, Electra 206 (February) (2003) 14–23.

Vehbi C. Gungor received his B.Sc. and M.Sc. degrees in Electrical and Electronics Engineering from Middle East Technical University, Ankara, Turkey, in 2001 and 2003, respectively. He is currently a Research Assistant in the Broadband and Wireless Networking Laboratory and is pursuing his Ph.D. degree at the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA. His current research interests include wireless sensor networks, wireless mesh networks, and WiMAX.


Frank C. Lambert received his B.E.E. and M.S.E.E. degrees from the Georgia Institute of Technology in 1973 and 1976, respectively. He is currently the Electrical Systems Program Manager at the National Electric Energy Testing, Research and Applications Center, Atlanta, GA. His current research interests include power delivery equipment, automation and systems; power quality; communications for electric utility automation; and grid-connected hybrid vehicles.

Computer Networks 50 (2006) 898–920 www.elsevier.com/locate/comnet

IP address autoconfiguration in ad hoc networks: Design, implementation and measurements

M. Fazio, M. Villari *, A. Puliafito

Università di Messina, Dipartimento di Matematica, Contrada Papardo—Salita Sperone, 31, 98166 Messina, Italy

Received 26 July 2004; received in revised form 21 January 2005; accepted 6 April 2005
Available online 18 August 2005

Responsible Editor: V.R. Syrotiuk

Abstract

Ad hoc networks make it possible to create very dynamic communication systems that are independent of any fixed infrastructure. One of the most important issues in the management of an ad hoc network is the configuration of the system according to the way users move. Since a centralized control structure does not exist, we need to determine how IP addresses are assigned to the nodes in the network. This task is complicated by the fact that nodes can connect or disconnect unexpectedly and the topology of the network changes over time. Furthermore, a network can split into two or more parts, and different networks may even merge. In this work, we address these issues through our new protocol, named AIPAC, for the configuration and management of IP addresses in large and highly mobile ad hoc networks. The performance of this solution is evaluated through the NS2 simulator, which allowed us to check the correctness of the protocol and to estimate the control traffic generated under different operating conditions.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Ad hoc network; IP address configuration; Performance analysis; NS2

1. Introduction

* Corresponding author. Tel.: +39 090 393 229; fax: +39 090 393 502. E-mail addresses: [email protected] (M. Fazio), [email protected] (M. Villari), apulia@ingegneria.unime.it (A. Puliafito).

An ad hoc network is a communication system for dynamic environments, in which users can interact, but they are free to move in the surrounding space. These users have portable devices equipped with wireless interfaces for accessing the resources of the network. Through the WLAN

1389-1286/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2005.04.017

M. Fazio et al. / Computer Networks 50 (2006) 898–920

interfaces it is possible to create a multi-hop wireless architecture called a mobile ad hoc network (MANET) [4], without prior agreement or fixed infrastructure. These networks can be used in different scenarios:

• military, i.e., soldiers can be equipped with devices in enemy environments, so that they can communicate with each other [3];
• personal area networks, i.e., printers, PDAs, mobile phones, digital cameras [9];
• indoor and outdoor applications, i.e., meetings, symposiums, demos, taxis, cars, sport stadiums;
• emergency applications, i.e., emergency rescue operations, police, earthquakes [1].

The portable devices that form the nodes of an ad hoc network generally have limited resources. Moreover, the wireless channels provide limited bandwidth and unreliable communications. A node cannot take part in a unicast communication unless it has its own IP address [20,6]. In the Internet, IP addressing is bound to the concepts of class and subnet, so the address reflects the location of the node in the network. A similar approach, called geographic addressing, can be used in wireless networks, as explained in [7,10]. In these solutions, the address is derived from the node's latitude and longitude. The geographic addressing scheme requires particular equipment on the nodes of the network, such as GPS or other location systems. GPS cannot be used in many ad hoc scenarios because it does not work everywhere and because of the cost and size of GPS receivers. Furthermore, the low precision of the location estimate makes this approach not very suitable for networks with high density and mobility. Mobile nodes change the topology of the network dynamically, so the IP addresses have no routing purpose but are simply used as unique identifiers (IDs) in the network. A manual configuration of the IP address would require a mechanism for the distribution of the IDs.
But this assumption is against the basic idea of a spontaneous formation of an ad hoc network. The dynamic configuration mechanism used in the networks with a fixed

899

infrastructure is DHCP (dynamic host configuration protocol) [5]. This technique is not feasible in ad hoc networks because:

• the nodes working as DHCP servers do not always remain active or connected while the network exists, since a fixed infrastructure cannot be relied upon;
• the bandwidth of the wireless communication links is limited, so the configuration of nodes (especially in large networks) should take place through distributed approaches;
• since the energy resources of the devices are limited, we should not overload specific nodes with address management.

Therefore, in an ad hoc network the IP addresses need to be configured in a dynamic and distributed fashion. Another important issue in the address configuration of nodes comes from the possible overlapping of different networks due to node mobility. In this situation, nodes with the same IP address can come into contact, and errors may occur in the communication among users. In such a case, a mechanism is needed to manage the merging of different networks, in order to maintain the unique identity of the nodes. In this context, we propose an innovative protocol, called AIPAC, to configure the nodes of wide and very dynamic ad hoc networks. The main goal is to watch over the limited resources of the devices in the ad hoc network. To achieve this purpose, AIPAC does not guarantee that no duplicate address is present, but rather that packets are correctly routed in the network. We designed a system that manages address duplication reactively: it acts only when a node wishes to communicate with another one that has a duplicate address. In [11], we presented the overall operation of the protocol and provided measurements of the initial configuration procedures. Later, in [12], we assessed how the protocol manages the merging and partitioning of the network.
In this work we present our design choices and the implementation details of AIPAC, and then we provide the



measurements of the overhead introduced by AIPAC, with reference to each control packet that the protocol can generate. AIPAC is implemented in NS2 [8]. This simulator allowed us to check how the protocol works and to assess the signaling traffic generated. In the following sections, we describe the protocol in detail and present the simulation results. The paper is organized as follows: in Section 2, we evaluate recent proposals in the literature on IP address configuration in ad hoc networks. In Section 3, we analyze the algorithm and outline its features. We then show the protocol performance and the overhead it introduces in Section 4. In Section 5, we provide our conclusions and possible future developments.

2. Related works

Several solutions have been proposed to solve the addressing problem in ad hoc networks. One of the major problems during automatic configuration is duplicated address detection (DAD). In [22], a technique known as Passive DAD is presented. It uses several strategies, named PDAD-SN, PDAD-LP and PDAD-NH, to detect IP address conflicts. The adjective "passive" identifies a protocol that introduces limited overhead. However, a strong dependency exists between the DAD mechanism and the adopted routing protocol. The authors apply their solution to two different routing protocols: FSR (fisheye state routing) and OLSR (optimized link state routing). The information about the links is used to manage the problems due to users' mobility. We note that additional control parameters are needed to detect the different circumstances, but these parameters can involve complex rules and overload the nodes. At present, the authors do not provide an adequate analysis of the computational load introduced by the protocol. In [24], each node acts as a prophet in the MANET. This means that it knows in advance which addresses are going to be allocated. At the beginning, a generic node chooses the seed for the whole

MANET, so the sequences of available addresses can be computed locally. Therefore, every mobile node obtains an unused IP address on its own. The uniqueness of the IP address relies on a very long sequence of random numbers generated from a specific seed. This mechanism does not exclude the possibility of generating duplicate addresses. In [21], the author assumes that nodes use unique identifiers (IDs) matched with the IP addresses. Each ID can be the MAC address or a key with a large number of bits selected at random. This way, when different networks overlap, all nodes can be distinguished. The disadvantages of this approach are (1) the selected ID cannot be guaranteed to be unique, and (2) the large use of storage and communication resources: both the additional ID and the IP address have to be present in the packet's header and in the routing table. Furthermore, a conflict cannot be detected if two nodes generate the same key. The advantage of this approach is that the management of network merging is not necessary. Another way to configure an IP address is to assume that each new node is able to select an available address. The available address can be chosen at random, and a query has to be sent through a broadcast message. To make this address effective, a DAD procedure must be run to detect whether the site-local address is already in use [19,16]. These solutions do not provide a way to solve the problem of network merging/partitioning. In [15], a node (Requester) that is going to enter the network asks a configured node (Initiator) to negotiate the address on its behalf. Each node belonging to the network stores all the used addresses, as well as the ones that are going to be assigned to new nodes. This allows a node to know the available addresses at any time. Once the addresses are assigned to all the nodes, any network partitioning must be detected.
This must be done both to release unused IP addresses and to manage the merging of different networks. For this purpose, a single network ID is used, which is selected by the node with the lowest IP address. Partitioning is detected during the assignment procedure of the IP address, because


some nodes do not respond. If this set of nodes also includes the one that originally determined the network ID, the node with the lowest IP address determines the new network ID. The algorithm shows a good design. However, distributing the information to all the nodes of the network implies a high volume of traffic in large networks, as well as some complexity for keeping the information updated across a network whose wireless channels have limited bandwidth and are not reliable. The solution presented in [14] shows a good autoconfiguration mechanism based on the buddy data structure. Each node has a disjoint set of IP addresses that can be assigned to new nodes. As in the previous solution, the system does not need a DAD procedure. However, it uses a proactive approach to update the IP address table in the node synchronization process: each node periodically broadcasts its IP address table. In large and dynamic networks this can cause high overhead during the synchronization procedure. Furthermore, this solution does not seem very robust against the loss of update messages. Let A and B be buddy nodes of each other and let the network be congested, so that one or more update messages from B are lost. Node A then finds that the IP address block corresponding to node B is missing from the table, concludes that node B has left, and merges node B's IP address block with its own.
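The core idea of the buddy scheme discussed above can be illustrated with a minimal sketch. This is our own simplification, not the implementation of [14]: `buddy_split` is a hypothetical helper, and we assume each free-address block is a contiguous range whose size is a power of two, so a configured node can hand half of its block to a newcomer without any DAD.

```python
# Hypothetical sketch of the buddy idea: a configured node splits its free
# address block in half and gives one half to a joining node. Because the
# halves are disjoint, addresses drawn from them can never collide.
def buddy_split(block):
    """block is a (start, size) tuple; size must be an even power of two."""
    start, size = block
    assert size >= 2 and size % 2 == 0, "block too small to split"
    half = size // 2
    # keep the lower half, hand the upper half to the newcomer
    return (start, half), (start + half, half)

mine, newcomer = buddy_split((0, 256))
print(mine, newcomer)  # (0, 128) (128, 128)
```

The weakness the text points out is visible here: a node's view of which blocks exist is kept consistent only by periodic table broadcasts, so a lost broadcast can make a live buddy's block look reclaimable.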

3. The AIPAC implementation

Considering the peculiarities of ad hoc networks, and in order to manage IP addresses effectively in these environments, AIPAC has been designed to:

1. save the resources of the mobile devices, both in terms of energy and in terms of memory used;
2. save the bandwidth of the wireless channels;
3. be very robust against information loss in the wireless channels.

To achieve the first objective, we have avoided having nodes store large amounts of information


about the network topology. Each node is aware of its neighborhood, that is, the set of nodes inside its range of communication. This way, even if the size of the network grows, the amount of information to be stored and processed by each node is limited. In order to save the bandwidth of the wireless channels, AIPAC interacts with the underlying routing protocol and uses the signaling packets of the routing protocol for managing the IP addresses. This choice does not violate the principle of isolation between layers, because both protocols work at the network layer of the protocol stack. The configuration of unique IP addresses in ad hoc networks does not only involve the initialization of the nodes at start-up. Nodes migrate, so different ad hoc networks may overlap. In this case, some nodes with the same address may come into contact, which may result in errors during data routing. For this reason, a good autoconfiguration protocol must manage and resolve the presence of duplicated addresses in the network. In AIPAC, each ad hoc network has its own network identifier (NetID), and nodes are therefore identified by the pair (IP address, NetID). Other solutions in the literature assume that unique network identifiers are assigned without any interaction among different networks, as in [15,23]. We now quantify the validity of this approximation. If the NetID is a 4-byte number, the probability that at least two out of m independent ad hoc networks have the same NetID derives from the well-known "birthday problem" and is given by

$$P = 1 - \prod_{i=1}^{m-1} \frac{n - i}{n}, \qquad (1)$$

where n = 2^32 is the number of possible NetIDs. Fig. 1 shows how this probability evolves as the number of networks m varies. The graph shows that the probability remains low (about 10^-6) even in the case of a very heterogeneous system where up to 100 different NetIDs are present. Based on these remarks, we can assume that the NetIDs of two ad hoc networks that merge are always different: an event that occurs with probability of order 10^-6 even in the worst case can be considered practically impossible. Reactive routing protocols are more suitable for the AIPAC implementation than proactive



Fig. 1. Probability that at least two out of m independent ad hoc networks have the same NetID.
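Eq. (1) is cheap to evaluate directly. The sketch below (ours, not part of the paper) computes the collision probability for 32-bit NetIDs and reproduces the order of magnitude quoted in the text for m = 100 networks.

```python
# Evaluate Eq. (1): the probability that at least two of m independently
# chosen NetIDs collide, with n possible NetIDs (n = 2^32 in AIPAC).
def netid_collision_probability(m, n=2**32):
    p_distinct = 1.0
    for i in range(1, m):
        # each new network must avoid the i NetIDs already in use
        p_distinct *= (n - i) / n
    return 1.0 - p_distinct

# For m = 100 networks the probability is on the order of 1e-6, matching
# the paper's claim that merging networks with equal NetIDs is negligible.
print(netid_collision_probability(100))
```

For small m the result is well approximated by m(m-1)/(2n), which makes the 10^-6 figure easy to sanity-check by hand.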

ones, because they too are conceived to limit the use of device and network resources. If the underlying routing protocol is reactive, AIPAC detects and corrects duplicate IP addresses during the Route_Discovery phase of the routing protocol. This approach does not always guarantee address uniqueness in the whole communication system; however, correct data routing is assured. In this sense, the AIPAC protocol is reactive with respect to the management of duplicated addresses. The problem of configuring and maintaining the IP addresses is separated into four phases in AIPAC:

A. initial configuration of the nodes: a single IP address is given to each new node that wishes to join the network;
B. verification and correction of duplicated addresses;
C. management of network partitioning: if a network splits into two or more parts, each of them will need a different NetID;
D. gradual merging: if the management of duplicated addresses takes place correctly, distinct overlapped networks may coexist. However, in this case the data routing is more complex, because some duplicated addresses may need to be corrected during data transmission. AIPAC implements a mechanism that limits the heterogeneity of the communication system.

3.1. Initial address configuration

Some solutions have been proposed in the literature for the initial IP address

configuration. AIPAC combines the proposal of search for a valid address presented in [19] with the figures of Initiator and Requester used in [15]. This way—as explained in an our previous work [11]—we have obtained the advantages of both solutions and we focus our attention on other issues related to addresses management. A node that wants to join an ad hoc network (Requester) randomly selects an Host_Identifier (HID). The HID denotes the node until the configuration of a valid IP address is completed. Then the Requester looks for an already configured node that can provide it with a valid address. This node must be reachable by the Requester in one hop and is called Initiator. During the procedure of address allocation, the Requester executes the operations shown by the flow diagram in Fig. 2(a). A Requester regularly broadcasts configuration requests, through requester packets until it receives a reply from a neighbor. The reply can come from another Requester (which has also sent a requester packet) or from a configured node of the network, which makes its presence visible through an hello packet. In the first case, a new ad hoc network can be created from the two non-configured nodes. The node with a higher HID selects the network parameters. In particular, it selects the NetID of the new network that is being created, as well as its own IP and the IP for the second node. The node with a lower HID waits for configuration parameters. In the second case, that is if the Requester has received an hello packet from a configured node, it selects the configured node as its Initiator and periodically sends requester packets until the configuration parameters come from the Initiator. If a Requester is close to some configured nodes and other Requesters, it could select a Requester as Initiator, since a requester message is received before an hello message. This implies the generation of a new network consisting only of two nodes. 
The system would thus become very heterogeneous. To avoid this, the protocol allows a Requester to select the best Initiator among its neighbors. Each Requester waits tconf seconds from the first answer it receives, in order to learn all its possible Initiators. The Requester then selects as Initiator the first configured node from which it receives the


Fig. 2. Initial address configuration: (a) Procedure of address request made by a Requester; (b) procedure done by an Initiator to obtain a valid address.

hello packet. In this way, all nodes entering the network are configured with the same NetID, and a new NetID is generated only if an isolated network is created. The Requester uses the tini timer to detect the presence of its Initiator. This timer is rescheduled each time the Requester receives a hello packet from the Initiator. If the timer expires, either the Initiator and the Requester are no longer connected, or many hello packets have been lost on the wireless channel.

The Requester tries to solicit a reply from the Initiator. If no reply arrives, the configuration procedure is restarted by searching for a new Initiator. When an Initiator receives a request for an IP address, it executes the operations shown in Fig. 2(b). Each Initiator can manage several Requesters at the same time. The information about Requesters is stored in the Requester_Table. Each entry in the Requester_Table specifies the HID of the Requester, the IP that is going


to be assigned to it, and how many times the search has been done for the same IP address. The Initiator selects an address IPx at random, then checks whether IPx is already in use in the network. To do this, the Initiator broadcasts a search_IP packet and waits for a reply within the scheduling time of the tsearch timer. The Initiator then decides whether the selected address IPx can be used when one of two events occurs:
• a used_IP packet is received. This means that the address is already in use; a new address has to be selected and the procedure restarted;
• the tsearch timer expires. This means that the time available to the other nodes for sending a used_IP reply has passed. In order to avoid errors due to packet loss on the wireless channel, the request is sent a second time. If the Initiator again receives no reply, the address can be used, and it is given to the Requester through the initialize packet.
In order to implement the autoconfiguration procedure, AIPAC uses the following packets:
• requester: sent by a Requester that wants to configure a valid IP address;
• hello: allows configured nodes to make their presence known to the neighborhood;
• search_IP: sent by an Initiator to check whether the IP selected for the Requester is already used in the network;
• used_IP: a node that receives a search_IP packet carrying its own IP address replies with a used_IP packet to the Initiator;
• initialize: comes from the Initiator and gives the network parameters to the Requester.
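The Initiator-side probe described above can be sketched as follows. This is a simplified sketch, not AIPAC's implementation: the function name, the address pool, and the probe callback (which stands in for broadcasting search_IP and waiting on the tsearch timer) are ours.

```python
import random

def pick_unused_ip(is_in_use, pool_size=2**10, max_retries=2):
    """Initiator-side sketch: pick a random candidate address, probe the
    network for it, and repeat the probe once before concluding that the
    address is free.

    is_in_use(ip) stands in for broadcasting a search_IP packet and
    waiting tsearch seconds: it returns True when a used_IP reply arrives
    (address taken) and False on timeout (no reply).
    """
    while True:
        ip_x = random.randrange(pool_size)      # candidate address IPx
        # Probe twice: a single lost used_IP reply must not lead to a
        # duplicate assignment, so the search_IP request is repeated.
        if not any(is_in_use(ip_x) for _ in range(max_retries)):
            return ip_x                         # delivered via initialize
        # A used_IP reply arrived: select a new address and restart.
```

The double probe trades a little extra signaling for robustness against a single lost used_IP packet, matching the behavior described for the tsearch timer.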

3.2. Management of IP address duplication

The above-mentioned addressing mechanism for the nodes that join an ad hoc network guarantees that all the nodes in the network have different

IP addresses. In a context where different MANETs coexist, some networks may overlap due to the mobility of nodes. Because of this, the uniqueness of the IP address cannot be guaranteed, and errors may occur during data routing. Nevertheless, the nodes can always be distinguished by the pair formed by their IP address and the NetID of the network they belong to. AIPAC provides an innovative mechanism to detect and correct duplicated IP addresses in case of overlap among different networks. The management of duplicated IP addresses in AIPAC has been designed so as to minimize the signaling traffic in the network and to save the limited available bandwidth. For this purpose, we have decided not to check the addresses of nodes that do not need to exchange data, even if they are part of the network. We call this approach reactive, because the procedure for managing duplicated addresses starts only when some data are sent through the network. In our opinion, matching the AIPAC reactive approach (for address management) with a reactive routing protocol is advantageous: proactive routing protocols need information about the whole network topology, so in wide networks they generate too much signaling traffic and store too much information, even when this information is not used for communications. AIPAC has been designed to avoid wasting the resources of the nodes and of the communication channels; nevertheless, it can also work with proactive routing protocols. With a reactive routing protocol, the Route_Discovery process is started whenever a node needs to determine the communication path toward a destination before sending data. AIPAC uses the Route_Discovery packets to check for any possible duplication of the destination address. The reactive routing protocols use the Route_Request and Route_Reply packets to determine the routing path. The source node broadcasts a Route_Request packet.
The nodes that receive this message save the information about source and destination, and rebroadcast it. When the destination node receives the request, it sends


a Route_Reply packet to the source, in order to specify the path to be used for the data transfer. Several paths can exist between source and destination. In many reactive routing protocols, the selection of the best path is done by the destination node: the destination receives the Route_Request packet over several paths and sends the reply through the best one (for instance, according to the number of hops or the transmission delay). However, some protocols delegate the path selection to the source, as in TORA [17]. When this second type of protocol is used, the source node cannot tell whether multiple replies to the Route_Request come from the same node or from different ones. AIPAC filters the transmission and reception of the routing protocol packets, and makes the destination node include the NetID of the network it belongs to as additional information in the Route_Reply packet. The mechanism used to manage duplicated addresses is shown in Fig. 3, which presents the operations performed when the Route_Reply packet reaches the source node. If Route_Reply packets carrying different NetIDs reach the source, the system contains different nodes with the same IP address. While detecting the duplicated addresses, the routing protocol must be prevented from transferring the application data as soon as the first Route_Reply packet arrives: the routing protocol is suspended for an interval equal to the scheduling time of the tduplication timer. AIPAC checks for duplicated addresses only when this timer expires; then the features of the routing protocol are reactivated. If AIPAC detects nodes with a duplicated address, the Change_IP procedure is activated. This procedure forces n − 1 of the n nodes sharing the address to change their IP address, using the Requester–Initiator mechanism described above. AIPAC defines the change_IP packet for the management of duplicated addresses.
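The source-side check performed when the tduplication timer expires can be sketched as follows. The sketch is ours: the tuple format and function name are illustrative, not AIPAC's wire format.

```python
def detect_duplicates(route_replies):
    """Source-side check: Route_Reply packets buffered while the
    tduplication timer runs are grouped by the NetID that each
    destination appended.  Replies carrying more than one distinct
    NetID reveal that several nodes share the requested IP address.

    route_replies: list of (netid, path) tuples collected until the
    tduplication timer expires.
    """
    netids = {netid for netid, _path in route_replies}
    if len(netids) <= 1:
        return []                   # single destination: routing proceeds
    # Keep one node (here: the first reply's NetID) and report the
    # NetIDs of the n - 1 duplicate holders that must reconfigure.
    keep = route_replies[0][0]
    return sorted(netids - {keep})
```

The returned NetIDs identify the duplicate holders to be forced into reconfiguration; in AIPAC this notification is carried by the change_IP packet just defined.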
This packet is sent in broadcast to force the nodes with the duplicated IP address to change their addresses.

3.3. Management of partitioning

AIPAC manages addresses on the assumption that each ad hoc network has a different NetID.


Fig. 3. Mechanism of management of duplicated addresses while receiving the routing packets.

The initial configuration procedure guarantees that the nodes in a network always have different addresses. Due to node mobility, the network topology changes, which may cause the merging of distinct networks or the division of a network into two or more parts. Whenever a network partitioning occurs, groups of independent nodes with the same NetID are created. Referring to Fig. 4, let us assume that two independent partitions


Fig. 4. The problem of the partitioning of an ad hoc network.

P1 and P2 are created in the network NetIDx. If a new node wants to enter P1, the corresponding Initiator could allocate an address belonging to a node of the other partition (e.g., the address IPx): during the address allocation procedure, the node with the address IPx cannot reply to the search_IP request. The result is the presence of two nodes with the same address IPx and the same NetIDx, which cannot be distinguished. This example shows why the merging of different networks must be managed together with the partitioning of a single network. The partition management scheme implemented in AIPAC is shown in Fig. 5. Each node knows its neighborhood. Such information is obtained from the hello packets regularly sent by each node, and is stored in a table called the Neighbor_Table. Each entry in the Neighbor_Table contains the address of the neighbor and the NetID of the network it belongs to. Whenever a node decides to disconnect from the network, it broadcasts a goodbye packet. Nodes that receive a goodbye packet check whether the sending node is in their Neighbor_Table and, if so, delete the entry for that node. The TTL (time to live) of a goodbye packet is set to 3, so that the packet still reaches the neighbors even if the sending node has moved out of their transmission range. If a node A does not receive any hello message from a neighbor B for a long time, B has either abruptly switched off or moved away. Unfortunately, a node of an ad hoc network cannot determine which of these two events has occurred, so our protocol handles both in the same way: node A sets to 1 a specific flag in B's entry of its Neighbor_Table.
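The neighbor bookkeeping just described can be sketched as a small class. The structure is hypothetical (the paper only specifies the stored fields: neighbor address, NetID, and the suspicion flag); the timeout parameter and method names are ours.

```python
import time

class NeighborTable:
    """Sketch of the Neighbor_Table kept by every node.  Entries are
    refreshed by hello packets, removed by goodbye packets, and flagged
    when no hello has been seen for `timeout` seconds, which later
    triggers the check_partition probe."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.entries = {}            # ip -> (netid, last_hello, suspect_flag)

    def on_hello(self, ip, netid, now=None):
        now = time.monotonic() if now is None else now
        self.entries[ip] = (netid, now, 0)          # fresh, not suspect

    def on_goodbye(self, ip):
        self.entries.pop(ip, None)                  # neighbor left gracefully

    def flag_silent(self, now=None):
        """Set the flag for neighbors whose hello packets have stopped;
        returns the addresses that must be probed with check_partition."""
        now = time.monotonic() if now is None else now
        suspects = []
        for ip, (netid, last, flag) in self.entries.items():
            if flag == 0 and now - last > self.timeout:
                self.entries[ip] = (netid, last, 1)  # mark as suspect
                suspects.append(ip)
        return suspects
```

An already-flagged neighbor is not reported twice, mirroring the text: the flag is examined periodically and the partition check is started once per silent neighbor.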

Fig. 5. Mechanism for the management of partitioning in an ad hoc network.

Periodically, each node checks the flag value for all its neighbors; if at least one flag is set to 1, it starts the procedure to detect partitions. If B has simply moved away from A and no partition has occurred in the network, at least one path still connects A with B. The partitioning control is similar to the Route_Discovery procedure of the reactive routing protocols. Node A, which has detected the absence of its neighbor B, sends a check_partition packet to B and waits for the reply through a verify_partition


packet. The reply must be received within a given time interval, which is set by the tpartition timer. If the reply is received, A deletes the entry for node B from its Neighbor_Table. If some nodes have not replied when the timer expires, the Change_NetID procedure is activated. The Change_NetID procedure allows a node that has detected a partitioning to select a new NetID for the partition it belongs to, and to broadcast it to all the other nodes of the same subnetwork through a change_netid packet. Nodes that receive the change_netid packet process it as described in Fig. 6. Since a partitioning can be detected by several nodes of the same subnet, several change_netid packets with different values for the new NetID could be travelling at the same time. The nodes that receive these packets therefore wait for a time interval tdelay before changing their configuration parameters. The tdelay timer is rescheduled whenever a node receives a change_netid packet, and it is large enough (tdelay ≫ tpartition) to allow a node to receive the NetIDs selected by the other nodes in the network. Thus, when the tdelay timer expires, the node has received all the possible change_netid messages and can choose the highest proposed NetID. In this way, even if several nodes try to change the NetID at the same time, the highest NetID is finally used by all of them. However, if a change_netid packet is lost and a node A configures a NetID different from the rest of its partition, the Gradual_Merging procedure (Section 3.4) will lead node A to configure the correct NetID in a short time. Even when they receive the change_netid packet, nodes acting as Initiators cannot change their network parameters, because they would no longer be recognized by their Requesters. For them, AIPAC relies on the mechanism of Gradual_Merging, as described in Section 3.4.
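The receiver-side arbitration among concurrent change_netid proposals can be sketched as follows. The class is a schematic of ours: timer scheduling is abstracted into two callbacks, and the method names are not AIPAC's.

```python
class ChangeNetidHandler:
    """Receiver-side sketch of change_netid processing.  Proposals that
    arrive while the tdelay timer runs are buffered; when the timer
    expires the node switches to the highest proposed NetID, so
    concurrent partition repairs converge on one identifier."""

    def __init__(self):
        self.pending = []            # NetIDs proposed during tdelay

    def on_change_netid(self, proposed):
        # In AIPAC the tdelay timer would also be rescheduled here.
        self.pending.append(proposed)

    def on_tdelay_expired(self, current_netid, is_initiator=False):
        # Initiators keep their parameters so that their Requesters
        # still recognize them; they are realigned later.
        if is_initiator or not self.pending:
            return current_netid
        new_netid = max(self.pending)   # highest proposal wins
        self.pending.clear()
        return new_netid
```

Nodes that skip the change here (active Initiators) are later realigned by the Gradual_Merging mechanism.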
This mechanism allows the nodes that could not change their NetID for the above-mentioned reasons to later adopt the NetID of the surrounding nodes. The traffic generated by the management of partitioning can become considerable in networks with high mobility and node density. This can saturate the communication channels, and consequently it


Fig. 6. Behavior of the nodes of an ad hoc network when a change_netid packet is received.

results in the loss of packets. For this reason, we have introduced some optimizations for the simple mechanism previously described: • Let us assume that two nodes A and B depart from each other, and that A is the first node


aware of the change in the topology. As already said, A sends a check_partition packet to B. When B receives this packet, it sends the verify_partition reply and checks whether the source A of the packet is present in its Neighbor_Table. If A has lost its connectivity with B, B has likely also noticed the absence of A; however, receiving the partitioning-control packet means that no partitioning has occurred. Thus, B likewise deletes A from its Neighbor_Table and takes no further action. In this manner, the traffic for checking partitions is halved when no partitioning has occurred. At the same time, the protocol remains robust against the unreliability of the wireless channels: if the first check_partition packet is lost, the second node will start the same procedure.
• The packets for the partitioning control travel through multi-hop paths. Suppose an intermediate node X is verifying a partitioning toward node B. If X forwards a verify_partition packet sent as a reply by B to another node (e.g., A), X can use this information to conclude that no partitioning has taken place. This makes the protocol more robust in case of packet loss: even if the reply to the partitioning check from B to X is lost, X obtains the required information.
AIPAC uses the following packets to check and manage partitioning:
• hello packets are sent regularly by each node configured with a valid IP address to its neighbors, in order to inform them of its presence. With the information held in these packets, the nodes update their Neighbor_Tables. This type of packet is also used in the initial configuration procedure.
• check_partition packets are used to check whether a partitioning has taken place. A packet of this type expects a verify_partition packet in reply.
• verify_partition packets are the reply to check_partition packets.
If a node does not receive this type of packet, it enables the procedure for changing NetID.

• change_netid: when a partitioning takes place, the NetID in the subnetworks must be changed. This packet specifies the new NetID of the corresponding network.
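The decision taken when the tpartition timer fires can be condensed into a short fragment. The signature is illustrative and ours, not part of AIPAC.

```python
def on_tpartition_expired(pending_probes, replied):
    """Sketch of the tpartition decision: neighbors that answered with
    verify_partition are simply dropped from the Neighbor_Table, while
    any silent neighbor means the network has split and the
    Change_NetID procedure must be started.

    pending_probes: addresses probed with check_partition.
    replied: set of addresses whose verify_partition reply arrived.
    """
    unanswered = [ip for ip in pending_probes if ip not in replied]
    return ("change_netid", unanswered) if unanswered else ("cleanup", [])
```

The first optimization above keeps this path cheap: when B answers and also deletes A on its side, only one of the two nodes ever reaches the change_netid branch.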

3.4. Gradual merging

Let us consider a scenario where several ad hoc networks coexist. Whenever a node has to exchange data, the procedure for detecting and correcting duplicated addresses starts, as explained in Section 3.2. However, this operation loads the network and can considerably delay the transmission of data. The system should therefore converge to a uniform state, that is, a single network in which all nodes have unique addresses. When two networks come in contact, not all nodes can be reconfigured at once, because this would produce a traffic peak on the wireless channels. Furthermore, an immediate reconfiguration of the nodes might be useless if the networks remain overlapped only for a short time and then separate again. AIPAC implements a mechanism called Gradual_Merging that reduces the heterogeneity of the system according to the dynamics of the network topology. This means that, if the networks overlap for a long time, the nodes change their configuration parameters until they share a single NetID. Conversely, if the networks show only weak contact points, the Gradual_Merging process avoids generating reconfiguration traffic in the network. The Gradual_Merging process is shown in Fig. 7. Each node knows its neighborhood, thanks to the information stored in the Neighbor_Table used for the partitioning management. In particular, the nodes know which networks they come in contact with. When a node detects that the number of neighbors belonging to another network is much higher than the number of neighbors of its own network, it may decide to switch from its network to the other one. Let us consider a generic node A belonging to the network NetID1. Let n_mine denote the number of nodes close to A and belonging to the same network as A, while n_other denotes the number of neighboring nodes belonging to another network (e.g.,


NetID2). Every tmerg seconds, A measures the gap between the numbers of neighbors of the two networks, Δn = n_other − n_mine, and the total number of neighbors observed, n_tot = n_other + n_mine. Node A decides to switch from its network to the other one when the following condition holds:

    Δn / n_tot > igm,    (2)

where igm, called the Gradual Merging Index, is the threshold for switching (or not) to a different network. igm denotes the level of heterogeneity that is acceptable in the system: the higher igm is, the fewer nodes switch from one network to another. Conversely, low values of igm favor configuration changes, and the system tends toward a single network.

Fig. 7. Mechanism of Gradual_Merging.

When node A decides to switch from its network NetID1 to another network NetID2, it must obtain an IP address not yet used in NetID2. Before doing so, it must allow its neighbors to delete its entry from their Neighbor_Tables; otherwise these nodes would no longer receive information from A and would detect a non-existent partitioning. For this purpose, A sends a goodbye packet to its neighbors. Then A selects its Initiator among the neighbors belonging to NetID2, returns to the Requester state, and waits for the configuration parameters as described in Section 3.1. The Gradual_Merging operation is suspended when:

• the node is in the Initiator state: an Initiator cannot change its configuration parameters, because the Requesters that depend on it could no longer recognize it;
• a partitioning control has been started: a node in charge of partition management cannot enter the Requester state, since a Requester has limited capabilities (its address is not valid) and cannot send a change_netid packet into the network;
• the node has received a change_netid packet and is waiting for the tdelay timer to expire before changing its configuration: at this stage, the information about the neighboring nodes may change due to the reorganization of the nodes, so the evaluation of the switching threshold could be compromised.

These are the packets used by AIPAC for the Gradual_Merging procedure:

• goodbye, to inform the neighbors to delete from their Neighbor_Tables the node that is switching from one network to another;
• hello, which is used in both the partitioning management and Gradual_Merging procedures to make nodes aware of their neighborhood;


• requester, search_IP, used_IP and initialize, which are also used in the initialization of the node's IP address, as described in Section 3.1. They provide the switching node with a unique IP address in the new network.

With AIPAC, we give an effective solution to the merging and partitioning problem, which is not efficiently solved in [19,16]. We have chosen to identify different MANETs with NetIDs, as in [15], instead of unique node IDs, as proposed in [21,24]. We want AIPAC to be suitable also for wide networks, whereas [24] uses a complex algorithm for the initial configuration of nodes that does not ensure the uniqueness of addresses after a long sequence of computed addresses; AIPAC therefore appears more reliable than [24] in large networks. The solution proposed in [21] couples the IP address with an additional ID (Key) consisting of a large number, which can waste resources when managing data traffic. Compared with [15,14], AIPAC is more suitable for large networks thanks to its reactive approach to address assignment: [15] proposes a solution that suffers from the complexity of keeping the information updated in all the nodes of a network, and likewise the approach proposed in [14] can cause high overhead for the synchronization procedure in large and dynamic networks. Moreover, AIPAC has a much lower complexity than [22], which is an important issue because ad hoc devices generally have constrained computational and power resources.
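As a compact recap of Section 3.4 before moving to the simulations, the switching test of relation (2) can be written directly in code (the function and parameter names are ours):

```python
def should_switch(n_mine, n_other, igm):
    """Gradual_Merging test from relation (2): a node switches to the
    neighboring network when the neighbor-count gap, normalized by the
    total number of neighbors observed, exceeds the Gradual Merging
    Index igm."""
    n_tot = n_mine + n_other
    if n_tot == 0:
        return False                # no neighbors seen this tmerg round
    delta_n = n_other - n_mine
    return delta_n / n_tot > igm
```

For example, a node with 1 same-network neighbor and 9 foreign ones switches for any igm below 0.8, while a balanced neighborhood (Δn = 0) never triggers a switch.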

4. Simulation results

The AIPAC protocol has been implemented in version 2.26 of the NS2 simulator [8] with the Carnegie Mellon University (CMU) support for simulating multi-hop wireless networks [13]. This package provides physical, data link, and medium access control (MAC) layer models. The Distributed Coordination Function (DCF) of IEEE 802.11 for wireless LANs is used as the MAC layer protocol [2]. An unslotted carrier sense multiple access technique with collision avoidance (CSMA/CA) is used to transmit the data packets. The radio model uses characteristics similar to a

commercial radio interface, AT&T's WaveLAN PCMCIA card. In the simulations we have used the AODV routing protocol [18]. To easily verify the correctness of our protocol, we have used the graphical interface of the animation tool Nam, which displays network simulation traces. The trace file contains topology information, e.g., nodes, links, and packet traces; once appropriately generated, it can be animated by Nam. The graphical interface shows the topology of the networks and their evolution over time. The software we have developed generates a trace file in which the nodes that want to join the network and are searching for a valid address are shown in grey, while correctly configured nodes are shown in one of the 10 available colors; nodes belonging to the same ad hoc network share the same color. Fig. 8 shows a screenshot of a simulation with 30 nodes on an area of 1000 × 1000 m. In this figure we can see three different networks (marked in red, yellow and white, and highlighted with dotted circles) at simulation time t = 135 s.

Fig. 8. Screenshot of a simulation run.

The graphical interface allows us to:

1. create the reference scenarios needed to analyze the AIPAC procedures separately;
2. verify the correct operation of the protocol;
3. check the convergence of the Gradual_Merging process, by observing the trend of the system toward compact groups of nodes with the same color (i.e., with the same NetID value) in the different simulations.

With NS2 we have also estimated the number and type of messages exchanged by our protocol, obtaining a quantitative evaluation of the traffic load. In the following subsections we present the scenarios used for the simulations (Section 4.1) and the results obtained (Sections 4.2–4.4).

4.1. Reference scenario

We have performed our simulations using the mobility model included in the NS2 wireless extension, i.e., the CMU random waypoint model. During our tests, sets of 10, 20, 30, 40, 50 and 60 nodes have been randomly distributed over an area of 1000 × 1000 m. We have set the transmission power of the wireless devices to 0.2818 W, the default value provided by the CMU model for AT&T's WaveLAN PCMCIA card; this gives each node a communication range of 250 m. The simulations have been run for 400 s, with several values of the parameter igm (relation (2)). The nodes have been switched on at random times during the interval between 0 and 100 s. During the remaining 300 s of the simulation, half of the nodes have been moved along random directions with speeds distributed in the interval [0, 5] m/s; such movements can cause partitioning and merging of the networks. A set of 2^10 addresses is available for configuring the nodes. The measures we present are averages over 100 simulations for each scenario, with 95% confidence intervals. Our simulations have provided the data needed to analyze the traffic generated by the protocol independently of the application level, that is, the traffic concerning the

operations for the configuration of the nodes, due to:

1. the arrival of new elements in the network,
2. the mechanism of Gradual_Merging,
3. the management of partitioning.

The behavior of each node can be divided into two phases. The first phase concerns the initial configuration of the node, i.e., the operations performed whenever a node is switched on and requests an address. The second phase concerns the ordinary behavior of the node, i.e., how it manages the merging and partitioning of the network. During phase 2, when a node executes the Gradual_Merging procedure to move into a new network, it must start the IP address configuration procedure again, to be sure of obtaining an unused address in the new network. In order to assess the traffic introduced during phase 1 and phase 2 separately, we distinguish the packets used for address configuration: the initial configuration packets of phase 1 are denoted requester1, search_IP1, used_IP1 and initialize1, while the packets of


phase 2, used for the Gradual_Merging operations, are denoted requester2, search_IP2, used_IP2 and initialize2. Table 1 lists the packets used by AIPAC. We point out that the hello packets are included: even though AODV provides a mechanism to manage neighbors through hello packet exchange, not all routing protocols for ad hoc networks use this type of packet. As our goal is to propose a solution potentially suited to every routing protocol, we have analyzed its performance in the worst case, i.e., assuming that the underlying routing protocol does not provide hello packet exchange. Moreover, the change_IP packet is absent from the table, because it is used only when a request for transferring data comes from the application level, whereas here we assess the traffic generated independently of the upper layers.

4.2. Variations of the traffic according to igm

The simulations have been done by varying the parameter igm of the AIPAC protocol. Our first

purpose is to determine the best value of igm, i.e., the one that minimizes the signaling traffic introduced by the protocol. The parameter igm characterizes the Gradual_Merging process (see Section 3.4), so it influences the node configuration packets of phase 2: only nodes that already have a valid address can start the Gradual_Merging procedure. The graph of Fig. 9 shows the initialize1 traffic as a function of the number of nodes, for different values of igm. Note that the curves overlap: the initial configuration of the nodes is not influenced by the variation of the merging threshold. Conversely, Fig. 10 shows that the traffic due to the initialize2 packets decreases when igm increases. This occurs because, with a high value of igm, nodes rarely switch from one network to another; thus, fewer Initiators are involved in phase 2. We have studied how the overlapping networks tend toward a uniform NetID through the analysis

Table 1 AIPAC packets

Initial configuration of nodes
- requester1: Sent by a node that wants to be configured with a valid IP address (Requester)
- search_IP1: Sent by an Initiator to check whether the IP selected for the Requester is already used in the network
- used_IP1: Sent to the Initiator by the node whose own IP address was selected for the Requester
- initialize1: Comes from the Initiator and gives the network's parameters to a Requester node
- hello: Announces to the neighbors the presence of nodes that can act as Initiators

Gradual_Merging
- goodbye: Informs the neighbors to delete from their Neighbor_Tables the node that is switching from one network to another
- hello: Allows updating the Neighbor_Tables used for the partitioning control
- requester2: Sent by a node that wants a valid IP address for the new network
- search_IP2: Sent by an Initiator to check whether the IP selected for the Requester is already used in the network
- used_IP2: Sent to the Initiator by the node whose own IP address was selected for the Requester
- initialize2: Gives the new network's parameters to a Requester node

Management of partitioning
- hello: Sent regularly by nodes configured with a valid IP address to their neighbors, in order to prove their presence. The information in this packet updates the Neighbor_Table of the receiving nodes
- check_partition: Checks whether a partitioning has taken place. A packet of this type waits for a verify_partition packet in reply
- verify_partition: Prevents a node from enabling the partitioning-management procedure by sending a change_netid packet
- change_netid: When a partitioning takes place, the NetID of the subnetworks must be changed. This packet specifies the new NetID of the network
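For quick reference, the packet types of Table 1 can be collected in a small enumeration. This is an illustrative sketch only, not part of AIPAC's specification: the identifiers follow the names in the table, while the class itself is our own convenience:

```python
from enum import Enum

class AipacPacket(Enum):
    # Initial configuration of nodes (phase 1)
    REQUESTER1 = "requester1"          # node asks for a valid IP address
    SEARCH_IP1 = "search_IP1"          # Initiator checks the candidate IP
    USED_IP1 = "used_IP1"              # reply: candidate IP already in use
    INITIALIZE1 = "initialize1"        # Initiator delivers network parameters
    HELLO = "hello"                    # periodic neighbor announcement
    # Gradual_Merging (phase 2)
    GOODBYE = "goodbye"                # node leaves its current network
    REQUESTER2 = "requester2"
    SEARCH_IP2 = "search_IP2"
    USED_IP2 = "used_IP2"
    INITIALIZE2 = "initialize2"
    # Management of partitioning
    CHECK_PARTITION = "check_partition"
    VERIFY_PARTITION = "verify_partition"
    CHANGE_NETID = "change_netid"
```

Note that hello appears in all three procedures of Table 1 but is a single packet type, so it is listed once here.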


M. Fazio et al. / Computer Networks 50 (2006) 898–920

Fig. 9. Number of the initialize1 packets exchanged for different igm values.

Fig. 11. Number of the change_netid packets exchanged for different igm values.

Table 2 Values of igm for which the change_netid traffic is minimal, for different numbers of nodes

Fig. 10. Number of the initialize2 packets exchanged for different igm values.

of the traffic of the change_netid packets. In fact, a limited use of these packets indicates that the system tends toward a stable behavior. The trend of the change_netid traffic as a function of the number of nodes and of the parameter igm is shown in Fig. 11. The important aspect of the graph is not the total quantity of exchanged packets (the order of magnitude is about 10 packets per 400 s of simulation), but the variation in the number of exchanged packets at different merging thresholds. With low values of igm, the instability of the system derives from the tendency of nodes to switch easily from one network to another, which causes repeated reconfigurations of the other nodes and hence of the whole network. With higher values of igm the network remains very disaggregated, and under these conditions a partitioning is very likely to occur. The graph shows that the lowest number of change_netid packets is generated with values of igm close to 0.3, regardless of the

Number of nodes    10     20     30     40     50     60
igm                0.28   0.29   0.33   0.31   0.35   0.34

number of nodes. For this reason, we have run further simulations with values of igm in the range [0.2, 0.4]. Table 2 shows the values of igm at which the change_netid traffic is minimal. We can therefore conclude that, given the number of nodes, selecting the value of igm shown in Table 2 yields a system that tends to become uniform through the Gradual_Merging process while maintaining the maximum level of stability.

4.3. Traffic generated per type of packet

In this section, we assess the contribution of each type of AIPAC packet to the total signaling traffic introduced. The graphs we show refer to igm = 0.3; as shown above, this value assures a good level of stability for the system with any number of nodes. The simulations show that the traffic due to the requester1 and hello packets is much higher than that of the other packets (see Figs. 12 and 13). This does not limit the performance of the protocol, because the requester1 and hello packets



do not spread through the network, but involve only the nodes reachable within one hop. In fact, these are the packets that are regularly sent by the nodes to announce their presence to the neighbors. The requester1 packets are sent by nodes that want to enter the network and ask for a valid IP address for the first time. Once these nodes have received the configuration parameters from their Initiator, they send hello packets to update the Neighbor_Tables. The scale of the graph

Fig. 12. Number of the hello and requester1 packets exchanged.

in Fig. 12 is logarithmic. The number of hello packets exchanged is in the thousands over 400 s. The number of requester1 packets is less than 1000, and it decreases as the number of nodes increases. This behavior is due to the variation in the density of nodes in the simulation area: when the number of nodes increases, the time intervals during which Requesters remain isolated decrease, and so do the configuration times (see Fig. 12). The traffic introduced by the other AIPAC packets is shown in Fig. 13(a)–(d). The histograms show how the different types of packets contribute to the total traffic with reference to the procedures of initial configuration of nodes (A), Gradual_Merging (B), and management of partitioning (C), for sets of 10, 20, 40 and 60 nodes, respectively. The used_IP1 and used_IP2 packets are absent from the histograms. These are the packets used during the search for a valid address when the selected IP address is already present in the network. Given the set of allowed values and the number of nodes considered in

Fig. 13. Histogram of the number of packets exchanged under different networking conditions. Number of packets exchanged for 10 (a), 20 (b), 40 (c) and 60 (d) nodes with reference to: (A) initial configuration, (B) Gradual_Merging, (C) partitioning.


the simulation, the probability of a collision on addresses is nearly zero. In Section 4.4, we will present new simulations that limit the set of values available for IP address allocation, in order to evaluate the used_IP1 and used_IP2 traffic. The number of search_IP packets is double the number of initialize packets, both in phase 1 and in phase 2. The reason is that, to recover possible lost packets, the protocol retransmits the search_IP packet a second time if no collision is detected before the Initiator sends the address to the Requester. The relationship linking the packets for the configuration of the nodes is therefore:

n_search_IP ≈ n_used_IP + 2 · n_initialize.    (3)
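Relation (3) is simple enough to encode directly. The helper below is a hypothetical sketch that merely restates the relation; the example counts are illustrative and are not taken from the simulations:

```python
def expected_search_ip(n_used_ip: int, n_initialize: int) -> int:
    """Encodes relation (3): n_search_IP ~= n_used_IP + 2 * n_initialize."""
    return n_used_ip + 2 * n_initialize

# Example: 50 nodes configured with no address collisions yields roughly
# 100 search_IP packets; each collision reported via used_IP adds to the count.
```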

The most frequently used packet in the simulations with 10 or 20 nodes is search_IP1, which characterizes the initial configuration of the nodes. The number of goodbye packets (sent by nodes that decide to switch to another network) and of check_partition packets (used for the control of partitioning) is nearly the same. Thus, the operations of merging and partitioning produce the same amount of traffic in simulations with few nodes. The weight of the check_partition and verify_partition packets increases considerably in the histograms for 40 and 60 nodes, and even exceeds that of search_IP1 in the simulations with 60 nodes. Conversely, the Gradual_Merging traffic does not contribute significantly to the total. In order to assess how the partitioning traffic varies with the number of nodes, we have studied the system in ordinary conditions (i.e., when all the nodes have been correctly configured). We have considered only the traffic for the maintenance of the addresses, that is, the traffic generated in the time interval between 101 and 400 s of the simulations. The percentage of each type of packet is shown in Fig. 14. The graph shows that the weight of the procedures for the management of network partitioning (check_partition and verify_partition packets) increases considerably as the number of nodes increases. In fact, if the density of nodes is high, each node sees more neighbors, and the number of partitioning checks increases. On the other hand, partitioning is less likely to occur.


Fig. 14. Number of packets exchanged during the second phase of the simulations expressed in percent of the total traffic.

This is confirmed by the decrease in the number of change_netid packets, relative to the total traffic, as the number of nodes increases. The decrease in change_netid packets confirms that the system tends to uniform the NetID of the most frequently connected networks. Note that the numbers of search_IP2 and goodbye packets are the same: due to the Gradual_Merging mechanism, a node that switches to a network with a different NetID first sends a goodbye packet, then returns to the Requester state and designates an Initiator to start the search_IP2 procedure. In order to obtain measurements independent of the type of simulation, Fig. 15 shows the number of packets generated per time unit T as a function of the number of nodes. The traffic is mainly due to the operations of initial configuration of the nodes (search_IP1 and initialize1). However, while the search_IP1, search_IP2 and goodbye packets contribute greatly to the total traffic, growing proportionally with the

Fig. 15. Mean traffic generated from each type of AIPAC packets in a time unit.



Fig. 16. Total number of exchanged packets.

number of nodes, the traffic of the check_partition and verify_partition packets increases even faster. The traffic of the other types of packets remains nearly constant, notwithstanding the increase in the number of nodes. The total mean traffic, expressed in packets/second as a function of the number of nodes, is shown in Fig. 16. We can note that it can be approximated by a curve with the following equation:

Y = 0.12 · X^1.364,    (4)

where Y is the total traffic and X is the number of nodes. This equation allows us to estimate, with good approximation, the traffic generated as the number of nodes varies. The traffic thus grows as O(n^1.364). Since the exponent of X is close to 1, the total traffic does not increase dramatically as the number of nodes increases. We can therefore conclude that AIPAC can also work in very large networks.
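As a quick sanity check of Eq. (4), the fitted curve can be evaluated numerically. The function below simply encodes the empirical fit; it makes no claim beyond the simulated range of 10–60 nodes:

```python
def total_traffic(n_nodes: float) -> float:
    """Empirical fit of Fig. 16: Y = 0.12 * X**1.364 (packets per second)."""
    return 0.12 * n_nodes ** 1.364

# Doubling the network size multiplies the signaling traffic by 2**1.364
# (about 2.57x), well below the 4x that a quadratic law would give.
growth = total_traffic(40) / total_traffic(20)
```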

4.4. Traffic with a reduced set of IP addresses

In this section, we analyze the behavior of the protocol when the supply of available addresses changes. In Section 4.3 we noticed that the selected simulation parameters make the traffic due to used_IP1 and used_IP2 packets negligible: during the search and configuration of a valid address, almost no collision occurs between the addresses selected by the Initiators and those already used in the network. We have therefore performed new simulations, limiting the set of values available for IP address allocation, to check how address collisions affect the traffic for the configuration of nodes (i.e., the traffic of the search_IP, used_IP, and initialize packets). Let a denote the set of simulations considered so far, and b the new simulations, in which the number of available IP addresses is 30% higher than the number of nodes present in the network. For instance, with 30 nodes in the b simulations each Initiator can select addresses from a pool of 39 available values, while with 40 nodes the pool contains 52 values. The contributions of the different packets to the total traffic for the b simulations with 60 nodes are shown in the histogram of Fig. 17. Comparing Figs. 13(d) and 17, we can see that in the b simulations a noticeable number of collisions is generated: the traffic due to used_IP1 and used_IP2 is no longer negligible in comparison with the traffic introduced by the other AIPAC packets. Each time an Initiator detects a collision for the selected address, the procedure must be restarted to select a new IP address and check its availability. This causes an increase in the number of search_IP packets according to relation (3), which is also confirmed by these histograms. In Fig. 18(a), we show the number of search_IP1 packets generated in the two sets of simulations (a and b). As the number of nodes increases, the difference between the traffic of the a and b simulations increases. Since we are interested in the behavior of AIPAC in large networks, Fig. 18(b) shows how this gap in search_IP1 traffic grows with the number of nodes.

Fig. 17. Histogram of the packets exchanged for 60 nodes in the b simulations.
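The b-simulation address pools (30% more addresses than nodes) and the resulting likelihood of a collision can be sketched numerically. The birthday-style estimate below is our own back-of-the-envelope assumption, not a computation from the paper, and it deliberately ignores AIPAC's detect-and-retry mechanism:

```python
import math

def pool_size(n_nodes: int, margin: float = 0.30) -> int:
    """b simulations: the address pool is 30% larger than the node count."""
    return math.ceil(n_nodes * (1 + margin))

def p_any_collision(n_nodes: int, pool: int) -> float:
    """Probability that at least two of n_nodes uniform random draws
    from `pool` values coincide (birthday-problem estimate)."""
    p_distinct = 1.0
    for i in range(n_nodes):
        p_distinct *= (pool - i) / pool
    return 1.0 - p_distinct
```

With 30 nodes this gives a pool of 39 addresses and with 40 nodes a pool of 52, matching the values in the text; the collision probability is then far from negligible, which is consistent with the used_IP traffic visible in Fig. 17.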



Fig. 18. Traffic of search_IP1 packets in simulations a and b. (a) Number of search_IP1 packets exchanged in simulation a and b. (b) Approximating the gap between search_IP1 traffics as a linear function.

The graph shows that the gap in search_IP1 traffic grows linearly with the number of nodes. Similarly, Fig. 19(a) shows the traffic of search_IP2 packets, and Fig. 19(b) the difference in traffic between the two types of simulations as a function of the number of nodes. In this graph, we can see that the increase in search_IP2 traffic can be approximated by the equation:

Y = 0.06 · X^1.765.    (5)
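Like Eq. (4), the fitted gap of Eq. (5) can be evaluated directly; again, this is only the empirical fit from the figure, valid within the simulated node counts:

```python
def search_ip2_gap(n_nodes: float) -> float:
    """Fit of Fig. 19(b): extra search_IP2 traffic in the b simulations,
    Y = 0.06 * X**1.765."""
    return 0.06 * n_nodes ** 1.765
```

Note that the exponent 1.765 is larger than the 1.364 of the overall traffic fit in Eq. (4): collision-induced traffic grows faster with network size, but remains polynomial.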

This means that the signaling traffic introduced does not significantly affect the performance of the communication channels, even in critical conditions where the number of available addresses is comparable to the number of nodes present in the network (for instance, in large networks).

4.5. Changes in reference scenario

We have performed our simulation experiments using the Carnegie Mellon University (CMU)

support for simulating multi-hop wireless networks in the NS2 simulator. This package provides physical, data link, and medium access control (MAC) layer models. The distributed coordination function (DCF) of IEEE 802.11 for wireless LANs is used as the MAC layer protocol. An unslotted carrier sense multiple access technique with collision avoidance (CSMA/CA) is used to transmit the data packets. The radio model has characteristics similar to a commercial radio interface, AT&T's WaveLAN PCMCIA card. We have set the transmission power of the wireless devices to 0.2818 W, the default value provided by the CMU model; this corresponds to a communication range of 250 m. We have considered a 1000 × 1000 m simulation area. Under these conditions, we noted that when the number of nodes exceeds 60, a large number of packets are dropped, mainly because of collisions on the wireless links. For example, with 100 nodes, on average 9.8% of the total packets injected into the network are dropped

Fig. 19. Traffic of search_IP2 packets in simulations a and b. (a) Number of search_IP2 packets exchanged in simulation a and b. (b) Approximating the gap between search_IP2 traffics as a known function.



and 80.3% of these dropped packets are lost due to collisions. With such drop rates we could not give a meaningful analysis of the performance, so we set 60 as the maximum number of nodes in our simulations. To increase the number of nodes, the transmission power should be decreased so as to limit packet collisions. For example, we can set the transmission power to 7.214e−3 W and, consequently, the transmission range to 100 m. We have performed further simulations with this new range and with 100 nodes: on average, only 0.097% of the total packets are dropped, and 36% of these are dropped because of collisions. Fig. 20 shows the new results in terms of traffic introduced by each procedure (initial configuration of nodes (A), Gradual_Merging (B), and management of partitioning (C)). With this new set of simulations we have increased the density of nodes in the network (100 nodes in the 1000 × 1000 m area) but, from the point of view of each node, the density of neighbors is nearly the same, because we have reduced the communication range. As explained above, the traffic for the management of network partitioning is related to node density. The packets of the partition management procedure (check_partition, verify_partition and change_netid) are sent in broadcast with a low TTL (Time To Live), so that whenever a node n starts the partition management procedure, this traffic involves only the nodes in a small area surrounding n. The behavior of our protocol with respect to partition management therefore depends also on the density of neighbors of each node, and not only on the total number of nodes in the network. To compare the new results of Fig. 20 with the old ones, let Nb(N) denote the Neighbor Density, that is, the average number of neighbors per node when the simulation is performed with N nodes.
Then Nb(N) = ρ(N) · π r², where ρ is the node density (N/simulation area = N/(1000 × 1000 m²)) and r is the communication range. Let r0 = 250 m (the old communication range) and r = 100 m (the new one). If the Neighbor Density is to be the same in the old and the new simulations, we have

Fig. 20. Number of packets exchanged for 100 nodes and communication range of 100 m with reference to: (A) initial configuration, (B) Gradual_Merging, (C) partitioning.

Nb = N_x · π r0²/A = 100 · π r²/A ⇒ N_x = 100 · (r/r0)² = 16 ≈ 20. We can therefore say that the Neighbor Density is roughly the same in simulations with 100 nodes and communication range r = 100 m (S(100/100)) and in simulations with 20 nodes and communication range r0 = 250 m (S(20/250)). We can thus compare the partition management traffic in S(100/100) (Fig. 20, block C) and in S(20/250) (Fig. 13(b), block C). Obviously, more nodes in the network imply more partition checks. However, the relation among the check_partition, verify_partition and change_netid packets is very similar for the two types of simulation. This means that, even increasing the number of nodes in the network and modifying the communication range, the protocol works correctly.
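The equivalence argument above can be reproduced in a few lines. The area and ranges are those stated in the text; the function names are our own:

```python
import math

AREA = 1000.0 * 1000.0  # simulation area in m^2

def neighbor_density(n_nodes: int, comm_range_m: float) -> float:
    """Nb(N) = rho * pi * r**2, with rho = N / AREA."""
    return (n_nodes / AREA) * math.pi * comm_range_m ** 2

def equivalent_nodes(n_new: int, r_new: float, r_old: float) -> float:
    """Node count at range r_old giving the same neighbor density as
    n_new nodes at range r_new: N_x = n_new * (r_new / r_old)**2."""
    return n_new * (r_new / r_old) ** 2
```

Here equivalent_nodes(100, 100.0, 250.0) gives 16, which the authors round to roughly 20 nodes, hence the comparison between S(100/100) and S(20/250).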

5. Conclusion and future works

In this paper, we have presented AIPAC, a new protocol for the configuration of IP addresses in ad hoc networks, together with our design choices and implementation details. In particular, we have shown how AIPAC has been designed to be usable in large ad hoc networks. This goal has been achieved through a reactive approach to managing duplicated addresses, as well as through appropriate choices in the use of network resources, in terms of the amount of information stored in the nodes and of the consumption of network bandwidth.


The protocol provides the Gradual_Merging mechanism to manage the heterogeneous systems that are created when different networks come into contact. This procedure allows a system consisting of several overlapping networks to become homogeneous, following the evolution of their topologies. The protocol has been studied through the NS2 simulator. We have analyzed the different procedures of the address management mechanism. Furthermore, we have identified the optimal value for the parameter igm, which characterizes the Gradual_Merging process, with reference to the level of stability in the configuration of network parameters. We are currently implementing AIPAC on proactive routing protocols, such as DSDV. As future developments, we plan to optimize the procedures for network partitioning, which cause most of the traffic in the network, and to introduce new criteria to manage the energy parameters of the nodes.

Acknowledgment

This work is supported by Consiglio Nazionale delle Ricerche (CNR) through the Strategic IS-MANET Project "Middleware Support for Mobile Ad-hoc Networks and their Application" [1].

References

[1] Infrastrutture software per reti ad-hoc orientate ad ambienti difficili, Consiglio Nazionale delle Ricerche (CNR), Strategic IS-MANET Project, 2002–2004. Available from: .
[2] Wireless LAN medium access control and physical layer specifications, IEEE 802.11 Standard (IEEE Computer Society LAN MAN Standards Committee), August 1999.
[3] Enhanced communications and situational awareness demonstrated, The DARPA SUO SAS Program, October 2002. Available from: .
[4] Mobile ad-hoc networks (MANET) charter. Available from: .
[5] R. Droms, Automated configuration of TCP/IP with DHCP, IEEE Internet Computing, July–August 1999.
[6] U. Jonsson et al., MIPMANET—Mobile IP for mobile ad hoc networks, in: Proceedings of the 1st Workshop on Mobile Ad hoc Network and Computing (MobiHOC 2000), Boston, MA, 6–11 August 2000, pp. 75–85.
[7] T. Imielinski, J.C. Navas, GPS-based geographic addressing, routing and resource discovery, Communications of the ACM 42 (4) (1999) 86–92.
[8] K. Fall, K. Varadhan (Eds.), NS notes and documentation, 14 April 2002.
[9] M. Guarnera, M. Villari, A. Zaia, A. Puliafito, MANET: Possible applications with PDA in wireless image environment, in: The 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2002), Lisboa, Portugal, 15–18 September 2002.
[10] D. Heutelbeck, R. Rath, C. Unger, Fault tolerant geographical addressing, in: Innovative Internet Community Systems, Third International Workshop (IICS 2003), Leipzig, Germany, 19–21 June 2003, pp. 144–155.
[11] M. Villari, M. Fazio, A. Puliafito, Autoconfiguration and maintenance of the IP address in ad-hoc mobile networks, in: Australian Telecommunications, Networks and Applications Conference (ATNAC 2003), Melbourne, 8–10 December 2003.
[12] M. Villari, M. Fazio, A. Puliafito, Merging and partitioning in ad hoc networks, in: IEEE Symposium on Computers and Communications (ISCC 2004), Alexandria, Egypt, 29 June–1 July 2004.
[13] D.A. Maltz, The CMU Monarch project's wireless and mobility extensions to NS, Monarch Project, [email protected], Snapshot release 1.1.1, 1999.
[14] M. Mohsin, R. Prakash, IP address assignment in mobile ad hoc networks, in: Proceedings of IEEE MILCOM, September 2002.
[15] S. Nesargi, R. Prakash, MANETconf: Configuration of hosts in a mobile ad hoc network, in: The Conference on Computer Communications (IEEE INFOCOM 2002), New York, 23–27 June 2002.
[16] J.-S. Park, Y.-J. Kim, S.-W. Park, Stateless address autoconfiguration in mobile ad hoc network using site-local address, 2001, Internet DRAFT.
[17] V.D. Park, M.S. Corson, A highly adaptive distributed routing algorithm for mobile wireless networks, in: The Conference on Computer Communications (IEEE INFOCOM 1997), Kobe, Japan, 7–11 April 1997.
[18] C.E. Perkins, E.M. Belding-Royer, S.R. Das, Ad hoc on-demand distance vector (AODV) routing, June 2002, IETF Internet DRAFT.
[19] C.E. Perkins, J.T. Malinen, R. Wakikawa, E.M. Royer, Y. Sun, IP address autoconfiguration for ad hoc networks, July 2002, Internet DRAFT.
[20] Y. Sun, E.M. Belding-Royer, C. Perkins, Internet connectivity for ad hoc mobile networks, International Journal of Wireless Information Networks, special issue on Mobile Ad hoc Networks 43 (6) (2002) 75–99.
[21] N.H. Vaidya, Weak duplicate address detection in mobile ad hoc networks, in: ACM International Symposium on Mobile Ad hoc Networking and Computing, Boston, MA, 9–11 June 2002.
[22] K. Weniger, Passive duplicate address detection in mobile ad hoc networks, in: IEEE Wireless Communications and Networking Conference (WCNC) 2003, New Orleans, USA, 16–20 March 2003.
[23] K. Weniger, M. Zitterbart, IPv6 autoconfiguration in large scale mobile ad-hoc networks, in: European Wireless 2002, Florence, Italy, February 2002.
[24] H. Zhou, L. Ni, M. Mutka, Prophet address allocation for large scale MANETs, in: The Conference on Computer Communications (IEEE INFOCOM 2003), San Francisco, 1–3 April 2003.

Maria Fazio received the degree in Electronic Engineering from the University of Messina (Italy) in 2002. Since then she has been engaged in research on wireless and ad hoc networks. She is currently pursuing her Ph.D. in Advanced Technologies for Information Engineering at the University of Messina. Dr. Fazio's scientific activity has focused on the study of network systems. Her main research interests include wireless and mobile systems, especially ad hoc networks. Her current activities on ad hoc networks include automatic configuration of the IP address, data dissemination, QoS management and middleware platform development, with particular attention to Disaster Recovery (DR) environments. She is currently spending six months as an exchange visitor at the Department of Computer Science of the University of California, Los Angeles.

Massimo Villari received the degree in Electronic Engineering from the University of Messina (Italy) in 1999. In the same year, he worked as a consultant at STMicroelectronics in Catania on software agents, designing an architecture to run agents from mobile telephones for Web services. In 2000 he became a Ph.D. student at the Faculty of Engineering of the University of Messina, in "Advanced Technologies for Information Engineering". In 2001 he was assistant professor for the courses Fondamenti di Informatica and Sistemi di Elaborazione at the University of Messina (Italy). In the same year, he designed an agent-based architecture for the management of Digital Still Cameras, on behalf of STMicroelectronics of Catania. In 2002 he did an internship at the laboratories of Cisco Systems Europe (Sophia Antipolis, France), Technology Centre, working on two main research projects: "Development of new technologies for MP4 video streaming over wired and wireless infrastructures" (MPEG4IP project) and "Network Mobility on IPv6: Mobile Routers" (NEMO project). In 2003 he received the Ph.D. degree in "Advanced Technologies for Information Engineering" from the University of Messina. In 2003/2004 he was professor for the courses Fondamenti di Informatica and Laboratorio di Informatica at the Engineering Faculty of the University of Messina. He is currently professor for the courses Database and Laboratorio di Informatica at the same Faculty.

Antonio Puliafito received the electrical engineering degree in 1988 from the University of Catania and the Ph.D. degree in computer engineering in 1993 from the University of Palermo. He has been engaged in research on parallel and distributed systems at the Institute of Computer Science and Telecommunications of the University of Catania, where he was an assistant professor. He is currently a full professor of computer engineering at the University of Messina. His interests include parallel and distributed systems, networking, wireless and GRID computing. During 1994–1995 he spent 12 months as a visiting professor at the Department of Electrical Engineering of Duke University, North Carolina, USA, where he was involved in research on advanced analytical modelling techniques. He is the coordinator of the Ph.D. course in Advanced Technologies for Information Engineering at the University of Messina and is responsible for the course of study in computer and telecommunication engineering. He was a referee for the European Community for projects of the fourth, fifth and sixth Framework Programmes. He has contributed to the development of the software tools WebSPN, MAP and ArgoPerformance, which are used at both national and international level. Dr. Puliafito is co-author (with R. Sahner and Kishor S. Trivedi) of the book "Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package", published by Kluwer Academic Publishers. He is currently responsible for all the ICT and e-learning activities of the University of Messina.

Computer Networks 50 (2006) 921–937 www.elsevier.com/locate/comnet

An optimal resource utilization scheme with end-to-end congestion control for continuous media stream transmission

Hongli Luo a, Mei-Ling Shyu a,*, Shu-Ching Chen b

a Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL 33124, USA
b Distributed Multimedia Information System Laboratory, School of Computer Science, Florida International University, Miami, FL 33199, USA

Received 5 November 2004; received in revised form 24 February 2005; accepted 29 June 2005
Available online 2 August 2005
Responsible Editor: G. Morabito

Abstract

In this paper, we propose an optimal resource utilization scheme with end-to-end congestion control for continuous media stream transmission. This scheme achieves minimal allocation of bandwidth for each client and maximal utilization of the client buffers. By adjusting the server transmission rate in response to client buffer occupancy, playback requirements at the client, variations in network delays, and the packet loss rate, an acceptable quality-of-service (QoS) level can be maintained at the end systems. Our proposed scheme also achieves end-to-end TCP-friendly congestion control, in which the multimedia flows share the bandwidth fairly with the TCP flows instead of starving them, and the packet loss rate is reduced. Simulations are conducted to compare our approach with others under different network congestion levels and to show how it achieves end-to-end congestion control and inter-protocol fairness. The simulation results show that our proposed scheme generally achieves high network utilization, low losses and end-to-end congestion control.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Adaptive transmission rate; Quality-of-service (QoS); End-to-end congestion control; Network delay; TCP-friendly; Network resource optimization

1. Introduction

* Corresponding author. Tel.: +1 305 284 5566; fax: +1 305 284 4044. E-mail address: [email protected] (M.-L. Shyu).

The Internet has experienced an increasing growth of multimedia applications such as audio

1389-1286/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2005.06.005


H. Luo et al. / Computer Networks 50 (2006) 921–937

and video streaming. To support streaming applications over the Internet, two requirements should be addressed: quality of service (QoS) and congestion control, from an end-to-end point of view. In order to satisfy the QoS requirement, the transmission of real-time video has bandwidth, delay and loss requirements [20]. For example, there is a minimal bandwidth requirement to provide acceptable presentation quality for real-time video. However, the available bandwidth of the Internet may change over time, and currently the Internet does not provide bandwidth reservation to meet such a requirement. Real-time video also has a stringent delay requirement, since video packets arriving later than the playback schedule become useless and even harmful to the playback quality. Moreover, the packet loss ratio should be kept below a threshold value to provide acceptable visual quality. Excessive delays and losses of video packets are usually caused by network congestion and can degrade the presentation quality of the video. Thus, congestion control is required in continuous media stream transmission to reduce packet loss and delay. An efficient transmission scheme should optimally utilize the available network resources, such as bandwidth and buffers (at the client or in the intermediate routers), and include a congestion control facility to satisfy the requirements on bandwidth, delay and packet loss. In this paper, an optimal resource utilization scheme with end-to-end TCP-friendly congestion control is proposed. The central component of the proposed scheme is a rate control model that dynamically changes the server transmission rates for multiple clients, considering the relationships among the server transmission rates, client buffer occupancies, playback rates, network delays and packet loss ratios [15–18]. The proposed scheme obtains an optimal utilization of network resources, maximally utilizing the client buffers and minimally allocating network bandwidth to the clients. Moreover, during network congestion, the adaptive rate is adjusted in such a way that the multimedia flows can share the bandwidth fairly with the TCP flows (instead of
The proposed scheme can obtain an optimal utilization of network resources such as to maximally utilize the client buffers and minimally allocated network bandwidth to the clients. Moreover, during the network congestion, the adaptive rate is adjusted in a way that the multimedia flows can share the bandwidth fairly with the TCP flows (instead of

starving the TCP flows) and the packet loss rate can be reduced. The simulation results show that our approach better utilizes the bandwidth and client buffers, supports more concurrent clients under the constraint of the limited available bandwidth, and achieves end-to-end TCP-friendly congestion control. The paper is organized as follows. The related work is discussed in the next section. In Section 3, the rate control model, the derivation of the adaptive transmission rate and optimal bandwidth allocation, and congestion controlled adaptive transmission rates of the proposed scheme are presented. Simulation results for the various scenarios are given in Section 4. Conclusions are presented in Section 5.

H. Luo et al. / Computer Networks 50 (2006) 921–937

2. Related work

Extensive research has been done on resource allocation schemes for multimedia transmission. Adaptive allocation of network resources is more desirable for QoS provision. Several works have concentrated on adaptive rate control for Internet video transmission using statistical methods. In [11,10], the authors introduced an algorithm for evaluating the traffic parameters that statistically characterize a video stream when a given QoS is required. Bandwidth can be dynamically renegotiated according to changes in the bit stream level statistics [8]. The approach in [21] used a neural network method and concentrated on video content and traffic statistics analysis; its resource request method depends on the prediction of future traffic patterns using the content and traffic information of short video segments. In [4], artificial intelligence techniques such as fuzzy-logic-based and artificial-neural-network-based techniques were used in traffic control mechanisms, but the computation overhead is too high.

On the other hand, congestion control must be implemented for real-time stream transmission. The Internet is a shared environment where TCP flows are the dominant traffic, and the current stability of the Internet depends greatly on end-to-end congestion control. Since the routers typically do not actively provide congestion control [1], end-to-end congestion control is recommended for Internet multimedia transmission [5]. Real-time multimedia flows should also respond to network congestion and share the bandwidth fairly with non-real-time flows (e.g., TCP flows). That is, when multimedia flows compete for bandwidth with traditional TCP flows, they should share the bandwidth fairly, which is called being TCP-friendly. It has been shown in [3] that the Additive Increase and Multiplicative Decrease (AIMD) algorithm efficiently converges to a fair state. Recently proposed congestion control mechanisms are TCP-like window-based schemes or equation-based schemes [7]. Several rate adaptation approaches for congestion control have been proposed. TCP-friendly protocols employing rate-based control to compete fairly with other TCP flows for bandwidth were proposed in [6]; they try to stabilize the throughput and reduce the jitter for multimedia streams, and the bandwidth is estimated in an equation-based way. In [19], a TCP-friendly adaptation scheme based on the TCP-throughput model was proposed; in general, this scheme achieves high network utilization but may result in large oscillation and low throughput. In [14], an end-to-end TCP-friendly rate adaptation protocol employing an AIMD algorithm was presented. Some improvements to AIMD were proposed in [10,22]: a generalized form of AIMD was introduced in which the increase value and the decrease ratio are parameterized. In [10], AIMD with fast convergence was proposed, which converges faster to fairness and achieves better efficiency. A binomial congestion control mechanism suitable for multimedia transmission was proposed in [2]. It is a non-linear congestion control mechanism that provides a smoother adjustment of the transmission rate.
It trades off aggressiveness in probing for bandwidth against conservativeness in responding to congestion. However, all of these approaches aim mainly at congestion control and inter-protocol fairness, without taking into account the optimal utilization of network resources, e.g., bandwidth and client buffer.
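For reference, the AIMD rule that these mechanisms build on can be sketched as a single parameterized update step, in the spirit of the generalized AIMD discussed above. The function name and default values are illustrative, not taken from any one cited paper:

```python
# Generic AIMD step: additive increase by alpha when no congestion is
# observed, multiplicative decrease by beta on congestion. alpha = 1 and
# beta = 1/2 correspond to classic TCP; other (alpha, beta) pairs give the
# parameterized variants discussed above.

def aimd_step(rate, congested, alpha=1.0, beta=0.5, min_rate=1.0):
    if congested:
        return max(rate * beta, min_rate)   # multiplicative decrease
    return rate + alpha                     # additive increase
```

A gentler decrease ratio (beta closer to 1) yields the smoother rate changes preferred by media streams, at the cost of slower reaction to congestion.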


3. The proposed scheme

Assume there is an end-to-end transmission from the server to a client. The server regularly receives feedback information from the clients and adaptively adjusts its transmission rates. The packets are used for playback after they arrive at the client buffer. When the transmission rate at the server and the playback rate at the client differ, the client buffer is used to accommodate the mismatch between them. By buffering some packets and slightly delaying the playback schedule, the client decreases the playback jitter caused by variations of network bandwidth and end-to-end delay. Hence, it is meaningful and critical to maximize the utilization of the client buffer to provide better QoS for playback at the client.

The server sends the packets with sequence numbers, and the client acknowledges the received packets at regular time intervals, providing end-to-end feedback. Each acknowledgement (ACK) contains the percentage of arriving packets, which is used to indicate the network delay, the packet loss information and the client buffer occupancy. Our protocol design is similar to RTP/RTCP, but we do not actually apply the RTP/RTCP protocol here. What differs from other RTP-like feedback rate adaptive approaches is that we introduce a packet arrival percentage value as the feedback information, which is used by the server to determine the optimal transmission rate for each client. No explicit congestion signal from the network is needed. Our proposed scheme performs the following three steps:

1. Feedback analysis. The feedback information of all the clients is analyzed, including the packet loss rate, playback requirement, packet arrival percentage, and buffer occupancy.

2. Congestion state analysis. The congestion state of the network is determined from the packet loss rate perceived by the clients. Then our scheme decides whether to increase or decrease the weight value in response to the congestion status.


3. Bandwidth adjustment. The bandwidth of the multimedia application is calculated according to the buffer occupancy, playback requirement and weight value determined in Step 2. The allocated bandwidth should be adjusted under the limit of the available network bandwidth.
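The three steps can be sketched as a per-interval server loop. The feedback fields and the simple rate rule below are hypothetical placeholders (the actual optimal rate is derived later in this section); only the step structure follows the scheme described above:

```python
# Sketch of one iteration of the three-step scheme: feedback analysis ->
# congestion state analysis -> bandwidth adjustment. All names and the
# placeholder rate rule are illustrative assumptions, not the paper's API.

def placeholder_rate(fb, wr):
    # Stand-in for the model-based optimal rate: track the playback rate and
    # close part of the gap to the buffer target, damped by the weight wr.
    gap = fb["buffer_target"] - fb["buffer_occupancy"]
    return max(fb["playback_rate"] + gap / (1.0 + wr), 0.0)

def control_step(feedbacks, bandwidth_limit, wr, plr_thresh,
                 wr_lower_bound=1.0, wr_upper_bound=64.0):
    # Step 1: feedback analysis.
    loss_rates = [fb["packet_loss_rate"] for fb in feedbacks]

    # Step 2: congestion state analysis -- tune the weight value.
    if max(loss_rates, default=0.0) >= plr_thresh:
        wr = min(wr * 2, wr_upper_bound)
    else:
        wr = max(wr - 1, wr_lower_bound)

    # Step 3: bandwidth adjustment, capped by the available bandwidth.
    rates = [placeholder_rate(fb, wr) for fb in feedbacks]
    total = sum(rates)
    if total > bandwidth_limit:
        rates = [r * bandwidth_limit / total for r in rates]
    return rates, wr
```

The proportional capping in step 3 mirrors the fair reallocation used when the sum of per-client rates exceeds the available bandwidth.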

3.1. End-to-end rate control model

To describe the relationships among the transmission rate, client buffer occupancy, playback rate and network delay, an end-to-end rate control model is depicted in Fig. 1. Normally, the end-to-end delay in the Internet is time varying and not easy to model precisely. Our rate control model captures the delay from the server to the client in terms of the percentage of packets sent in one time interval that arrive at the client in a later time interval. This information is provided as part of the regular feedback to the server. The variables used in this paper are defined as follows:

• k: time interval;
• Q_k: buffer occupancy at the beginning of time interval k;
• R_k: the transmission rate at time interval k, in terms of the number of packets transmitted from the server per time interval;
• P_k: the arrival rate at time interval k, in terms of the number of packets arriving at the client buffer per time interval;

• L_k: the playback rate at time interval k, in terms of the number of packets used for playback per time interval; and
• Q_r: the allocated buffer size for each client, which is determined by the server when the client and the server set up their connection.

Consider the relationships among Q_k, R_k, P_k, and L_k in Fig. 1. Let Q_{k+1} denote the buffer occupancy at time interval k + 1. We have the following equation:

Q_{k+1} = Q_k + P_k − L_k.    (1)

However, because of the changeable network delays, P_k is not equal to R_k at a given time interval. Assume that the packets arriving at the client buffer at time interval k comprise packets transmitted from the server at time intervals k − d, k − d − 1, ..., k − d − i + 1, ..., and k − d − m + 1. Let b_{i,k} denote the percentage of the packets transmitted at R_{k−d−i+1} that arrive at the client buffer at time interval k, to capture the scenario of changing network delays. Therefore, P_k can be represented as a function of R_{k−d}, R_{k−d−1}, ..., R_{k−d−i+1}, ..., and R_{k−d−m+1}, as shown in the following equation:

P_k = b_{1,k} R_{k−d} + b_{2,k} R_{k−d−1} + ... + b_{i,k} R_{k−d−i+1} + ... + b_{m,k} R_{k−d−m+1},    (2)

where the subscript k − d denotes the closest (most recent) time interval whose transmitted packets can arrive at the buffer at time interval k, and k − d − m + 1 denotes the farthest such time interval.
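Eqs. (1) and (2) can be exercised with a small simulation. The delay profile and the numbers below are illustrative assumptions, not taken from the paper's experiments:

```python
# Buffer evolution under Eqs. (1) and (2): Q_{k+1} = Q_k + P_k - L_k, where
# P_k redistributes past transmission rates through the delay profile b_{i,k}.
# The coefficients and rates below are illustrative only.

def simulate_buffer(R, L, b, d, q0=0.0):
    """R[k]: transmission rate; L[k]: playback rate; b[i]: fraction of the
    packets sent at interval k-d-i that arrive at interval k (Eq. (2))."""
    Q = [q0]
    for k in range(len(R)):
        # Eq. (2): arrivals are a weighted sum of earlier transmission rates.
        P_k = sum(b[i] * R[k - d - i]
                  for i in range(len(b)) if k - d - i >= 0)
        # Eq. (1): the buffer gains the arrivals and loses the playback.
        Q.append(Q[-1] + P_k - L[k])
    return Q

# With sum(b) == 1 (no loss) and a constant rate matched to the playback
# rate, occupancy drains while the delay pipeline fills, then settles.
b = [0.5, 0.25, 0.125, 0.125]
Q = simulate_buffer(R=[8.0] * 40, L=[8.0] * 40, b=b, d=3, q0=40.0)
```

The initial occupancy q0 must cover the packets "in flight" during the first d + m − 1 intervals, which is exactly the buffering-before-playback role described above.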

Fig. 1. The proposed end-to-end rate control model.
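The client can estimate the b_{i,k} values by grouping arriving packets according to the transmission timestamps they carry. A sketch, with names of our own choosing and an illustrative example of 8 packets sent per interval:

```python
# Estimating the b_{i,k} feedback at the client: group the packets that
# arrived during interval k by the interval in which they were sent, and
# divide by the number of packets sent in that interval. Names are ours.
from collections import Counter

def estimate_b(arrival_send_stamps, sent_counts, k):
    """arrival_send_stamps: send-interval stamp of each packet that arrived
    during interval k; sent_counts[j]: number of packets sent in interval j.
    Returns (d, [b_1, b_2, ...]), where b_i covers send interval k-d-i+1."""
    counts = Counter(arrival_send_stamps)
    newest, oldest = max(counts), min(counts)
    d = k - newest                      # delay to the newest arrivals
    b = [counts.get(j, 0) / sent_counts[j]
         for j in range(newest, oldest - 1, -1)]
    return d, b

# Packets arriving at k = 20 were sent at intervals 13, 12, 11 and 10:
stamps = [13] * 4 + [12] * 2 + [11] + [10]
d, b = estimate_b(stamps, {13: 8, 12: 8, 11: 8, 10: 8}, k=20)
# d == 7 and b == [0.5, 0.25, 0.125, 0.125]
```

Intermediate send intervals with no arrivals yield b_i = 0, so the same bookkeeping also exposes packet loss.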


As can be seen from Eq. (2), our rate control model can effectively capture the effects of changing network delays. First, the larger the value of d, the larger the network delay and the more congested the network is. Second, the larger the value of m, the larger the network delay is, since it takes longer for all of the packets transmitted in a previous time interval to arrive at the client buffer. Third, the value of b_{i,k} reflects the changing network delays: the larger the value of b_{i,k}, the larger the percentage of packets transmitted at time interval k − d − i + 1 that arrive, which indicates the network is less congested. We can use the total number of b_{i,k}'s considered and the position of the first non-zero b_{i,k} value to indicate the network delay during packet transmission. Different combinations can be used to simulate different network delay situations. For example, when considering 10 b_{i,k}'s in Eq. (2), if at time interval k the arriving packets are represented as

P_k = (1/2) R_{k−7} + (1/4) R_{k−8} + (1/8) R_{k−9} + (1/8) R_{k−10},

it denotes that the packets arriving at time interval k consist of 1/2 of the packets transmitted from the server at time interval k − 7, 1/4 of the packets transmitted at time interval k − 8, 1/8 of the packets transmitted at time interval k − 9, and 1/8 of the packets transmitted at time interval k − 10. That is,

b_{1,k} = 1/2, b_{2,k} = 1/4, b_{3,k} = 1/8, b_{4,k} = 1/8, d = 7 and m = 10.

Since every packet transmitted from the server carries a timestamp indicating the time it was transmitted, the client can count the number of packets that arrive in a certain interval according to the timestamp, and then calculate the corresponding b_{i,k} value. The packet loss information is also reflected in the value of b_{i,k}: the larger the value, the smaller the packet loss ratio. When the value of b_{i,k} is used as feedback, the server can decide the optimal transmission rate while also considering the packet loss ratio.

In order to fully utilize the network resources such as the network bandwidth and client buffers, we need to maximize the utilization of client buffers and minimize the bandwidth allocation. To keep the transmission rate small, and at the same time keep the difference between the buffer occupancy and the allocated buffer size small, the quadratic performance index we try to minimize is set as

J_k = (w_p Q_{k+d′} − w_q Q_r)² + (w_r R_k)²,    (3)

where w_p, w_q and w_r are the weighting coefficients, and d′ is referred to as the transmission control delay. Different weighting coefficients can be selected to specify a wide range of adaptive rate controllers that result in the desirable closed-loop behaviors. The objective is to select a transmission rate sequence R_k that minimizes J_k for a given buffer size Q_r.

3.2. Adaptive transmission rate for a single client

Rewriting Eq. (1) with a shift of one time interval, we have

Q_k = Q_{k−1} + P_{k−1} − L_{k−1}.    (4)

Combining Eqs. (2) and (4) in the time domain [12], we obtain

(1 − z^{−1})Q_k = b_{1,k−1} R_k z^{−d−1} + b_{2,k−1} R_k z^{−d−2} + ... + b_{i,k−1} R_k z^{−d−i} + ... − L_k z^{−1},    (5)

where z^{−1} is the delay operator (i.e., z^{−1}Q_k = Q_{k−1}). In addition, let d′ = d + 1, and let A(z^{−1}) = 1 − z^{−1} and B(z^{−1}) = b_{1,k} + b_{2,k} z^{−1} + ... + b_{m,k} z^{−(m−1)} be two polynomials. We then have

A(z^{−1})Q_k = z^{−d′} B(z^{−1}) R_k − L_k.    (6)

Here we assume b_{1,k} ≠ 0, and, as defined earlier, d′ is referred to as the transmission control delay. Dividing 1 by A(z^{−1}) gives the quotient F(z^{−1}) and the remainder z^{−d′}G(z^{−1}):

1 = A(z^{−1}) F(z^{−1}) + z^{−d′} G(z^{−1}),    (7)

where

F(z^{−1}) = 1 + f_1 z^{−1} + ... + f_{d′−1} z^{−(d′−1)},
G(z^{−1}) = g_0 + g_1 z^{−1} + ... + g_{n−1} z^{−(n−1)}.
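The division in Eq. (7) is ordinary polynomial long division in z^{−1}. For this model's A(z^{−1}) = 1 − z^{−1} it has a simple closed form (all f_i = 1 and G = 1), which the following sketch computes for a general A:

```python
# Long division of 1 by A(z^{-1}) up to quotient degree d'-1, per Eq. (7):
# 1 = A(z^{-1}) F(z^{-1}) + z^{-d'} G(z^{-1}).
# Coefficient lists are ordered by increasing power of z^{-1}.

def divide_one_by_A(A, d_prime):
    rem = [1.0] + [0.0] * (len(A) - 1 + d_prime)   # dividend: the constant 1
    F = []
    for i in range(d_prime):          # peel off one power of z^{-1} per step
        q = rem[i] / A[0]
        F.append(q)
        for j, a in enumerate(A):     # subtract q * z^{-i} * A(z^{-1})
            rem[i + j] -= q * a
    G = rem[d_prime:]                 # remainder carries the z^{-d'} factor
    while G and G[-1] == 0.0:
        G.pop()
    return F, G

# For A(z^{-1}) = 1 - z^{-1} and d' = 4:
F, G = divide_one_by_A([1.0, -1.0], 4)
# F == [1.0, 1.0, 1.0, 1.0] and G == [1.0], i.e.
# 1 = (1 - z^{-1})(1 + z^{-1} + z^{-2} + z^{-3}) + z^{-4}.
```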


From Eq. (6), we get

Q_{k+d′} = (1/A(z^{−1})) (B(z^{−1}) R_k − L_{k+d′−1})
         = B(z^{−1}) F(z^{−1}) R_k − F(z^{−1}) L_{k+d′−1} + z^{−d′} G(z^{−1}) · (1/A(z^{−1})) (B(z^{−1}) R_k − L_{k+d′−1}),

or

Q_{k+d′} = B(z^{−1}) F(z^{−1}) R_k − F(z^{−1}) L_{k+d′−1} + G(z^{−1}) Q_k.    (8)

As can be seen from Eq. (8), the number of packets in the buffer at time interval k + d′ (denoted Q_{k+d′}) can be represented as a combination of the transmission rates and buffer occupancies at time interval k and earlier. It is also related to the playback rates at time interval k and later. Hence this is a predictive formulation that can be used to predict the buffer occupancy at time interval k + d′. With this transformation, an optimal transmission rate can be obtained. When packets are transmitted at this optimal rate at time interval k, we can expect how many packets will arrive at the client at time intervals k + 1, k + 2, etc. For an individual packet, we can schedule its transmission time to ensure that it arrives at the client before its playback, since the delay it will experience is considered in this model. This rate is said to be optimal because, with this transmission rate, the performance index in Eq. (3) is minimized; that is, the transmission rate is minimal and the buffer occupancy is maximal in the sense of the sum of values over all time intervals. The objective function can thus be defined as follows:

J_k = (w_p B(z^{−1}) F(z^{−1}) R_k − w_p F(z^{−1}) L_{k+d′−1} + w_p G(z^{−1}) Q_k − w_q Q_r)² + (w_r R_k)².    (9)

Differentiating J_k with respect to R_k to obtain the optimal transmission rate R_k:

∂J_k/∂R_k = 2 w_p b_{1,k} (w_p B(z^{−1}) F(z^{−1}) R_k − w_p F(z^{−1}) L_{k+d′−1} + w_p G(z^{−1}) Q_k − w_q Q_r) + 2 w_r² R_k = 0.    (10)

Solving for the optimal transmission rate at time interval k yields

(w_p² B(z^{−1}) F(z^{−1}) + (1/b_{1,k}) w_r²) R_k = −w_p² G(z^{−1}) Q_k + w_p w_q Q_r + w_p² F(z^{−1}) L_{k+d′−1}.    (11)

This equation can be expanded as follows:

[w_p² (b_{1,k} + b_{2,k} z^{−1} + ... + b_{m,k} z^{−(m−1)}) (1 + f_1 z^{−1} + ... + f_{d′−1} z^{−(d′−1)}) + (1/b_{1,k}) w_r²] R_k
    = −w_p² G(z^{−1}) Q_k + w_p w_q Q_r + w_p² F(z^{−1}) L_{k+d′−1},

[w_p² (b_{1,k} + (b_{2,k} + b_{1,k} f_1) z^{−1} + ... + b_{m,k} f_{d′−1} z^{−(m+d′−2)}) + (1/b_{1,k}) w_r²] R_k
    = −w_p² G(z^{−1}) Q_k + w_p w_q Q_r + w_p² F(z^{−1}) L_{k+d′−1},

(w_p² b_{1,k} + (1/b_{1,k}) w_r²) R_k = −w_p² (b_{2,k} + b_{1,k} f_1) R_{k−1} − ... − w_p² b_{m,k} f_{d′−1} R_{k−(m+d′−2)} − w_p² G(z^{−1}) Q_k + w_p w_q Q_r + w_p² F(z^{−1}) L_{k+d′−1}.
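Given the coefficient lists, this final recursion can be evaluated directly. A sketch, where the function names and any numeric values used with it are illustrative assumptions:

```python
# Evaluating the recursion for the optimal rate R_k: b comes from client
# feedback, F and G from Eq. (7), and wp, wq, wr are the weighting
# coefficients. Names and values are illustrative, not from the paper.

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def optimal_rate(b, F, G, wp, wq, wr, Qr, R_hist, Q_hist, L_future):
    """R_hist[j] = R_{k-1-j}; Q_hist[j] = Q_{k-j}; L_future[j] = L_{k+d'-1-j}.
    Implements (wp^2 b_1 + wr^2 / b_1) R_k = -wp^2 [BF-tail] * R-history
    - wp^2 G * Q-history + wp wq Qr + wp^2 F * L-future."""
    c = convolve(b, F)                       # coefficients of B(z^-1)F(z^-1)
    rhs = wp * wq * Qr
    rhs -= wp ** 2 * sum(g * Q_hist[j] for j, g in enumerate(G))
    rhs += wp ** 2 * sum(f * L_future[j] for j, f in enumerate(F))
    rhs -= wp ** 2 * sum(c[i] * R_hist[i - 1] for i in range(1, len(c)))
    return rhs / (wp ** 2 * c[0] + wr ** 2 / b[0])
```

Note that c[0] = b_{1,k} since f_0 = 1, so the denominator matches the left-hand side of the recursion.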

Therefore, the equation is a recursive equation for R_k represented in terms of R_{k−1}, R_{k−2}, ..., Q_k, Q_{k−1}, ..., and Q_r, all of which are known variables. In other words, the optimal transmission rate R_k depends on the buffer occupancy, the allocated buffer size, and the previous transmission rates. The computational load introduced by the approach is low, since the calculation of the optimal rate is a recursive function involving only additions and multiplications of the buffer occupancy, allocated buffer size, playback rates, and transmission rates of the previous time intervals.

To evaluate the validity of the proposed model, we also need to examine the value of Q_k. Here we use the Q_k in the feedback from the client. Since the feedback arrives at the server after a certain delay, the value of Q_k is actually the buffer occupancy some time ago. Hence, Q_k may no longer be a valid value for the proposed model. To address this issue, we propose the following equation to estimate


the real value of Q_k. Let Q_{t_est} be the estimate of the current buffer occupancy at the client, t_d be the delay the feedback experiences from the client to the server, P_t be the arrival rate of packets at the client during t_d, and L_t be the playback rate during t_d:

Q_{t_est} = Q_k + (P_t − L_t) × t_d.

If t_d is quite small, then (P_t − L_t) × t_d ≪ Q_k and the correction term can be ignored, so Q_k can be used as Q_{t_est}. This is a quite reasonable assumption if t_d is on the order of milliseconds. If the feedback experiences a large end-to-end delay, we need to examine the value of the expression (P_t − L_t) × t_d. Normally, P_t and L_t are on the order of MBps or even KBps, so if (P_t − L_t) × t_d ≪ Q_k, it can still be ignored. If (P_t − L_t) × t_d is not negligible, we need to estimate the current buffer occupancy.

In the calculation of the transmission rates, if any packets are lost, the playback rate at time interval k, L_k, will not be the value known by the server from the video content. The real playback rate, which indicates the real buffer occupancy, should be used. In this case, in order to precisely reflect the dynamic changes of buffer occupancy, the real playback rate L_k can be obtained from the clients. There are three approaches to get L_k: (1) L_k can be indicated by the b_{i,k} terms in the feedback information; (2) L_k can be calculated from the packet loss ratios in the feedback information; and (3) to make it more explicit, L_k can be sent directly back to the server in the feedback information.

3.3. Adaptive transmission rates and optimal bandwidth allocation for multiple clients

Normally the server needs to allocate its available bandwidth to provide multimedia services to a large population of clients. If the server allocates each client a fixed bandwidth for the whole connection, much network bandwidth is wasted, since each client has its own traffic requirement. In order to serve the maximal number of clients with their QoS requirements while achieving high utilization of the bandwidth resource, the bandwidth allocated to each

client needs to be minimized given the total network resource. Assume there are n active clients requesting services from the server. According to Eq. (3), let J_{j,k} be the cost function of the jth client, e_{j,k} be the difference between the allocated buffer size and the buffer occupancy of the jth client at time interval k, R_{j,k} be the transmission rate of the jth client at time interval k, Q_{j,k+d′} be the buffer occupancy of the jth client at time interval k + d′, and Q_{j,r} be the allocated buffer size of the jth client. The optimization performance index for a single client becomes

J_{j,k} = e_{j,k}² + w_r² R_{j,k}²,    (12)

where e_{j,k} = w_p Q_{j,k+d′} − w_q Q_{j,r}. For each client, the approach described in Section 3.2 can be used to obtain the optimal transmission rate. When there are multiple clients, the optimization function for resource allocation at the server should be

J = Σ_{j=1}^{n} J_{j,k} = Σ_{j=1}^{n} (e_{j,k}² + w_r² R_{j,k}²).    (13)
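When the sum of the individually optimal rates exceeds the server's available bandwidth, the scheme scales each client's allocation in proportion to its own request. A sketch with hypothetical numbers:

```python
# Proportional bandwidth reallocation: if the sum of the per-client optimal
# rates exceeds the available bandwidth, each client receives a share
# proportional to its own request; otherwise everyone keeps the optimal rate.

def reallocate(requested_rates, available_bandwidth):
    total = sum(requested_rates)
    if total <= available_bandwidth:
        return list(requested_rates)
    scale = available_bandwidth / total
    return [r * scale for r in requested_rates]

# Hypothetical example: three clients request 10 units total, 5 available.
rates = reallocate([2.0, 3.0, 5.0], 5.0)
# rates == [1.0, 1.5, 2.5]
```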

Since the n clients are independent, if the J_{j,k} value of the jth client in Eq. (12) is minimal, the sum of the J_{j,k} functions (i.e., the value of J) is also minimal. Therefore, the optimal transmission rate for each client and the optimization of the performance index in Eq. (13) can both be obtained at the same time. However, since the transmission rates are optimized individually, it is possible that the sum of all the requested bandwidth is greater than the network bandwidth. When such a situation occurs, we need to reallocate the bandwidth to each client connection. To provide fairness among all clients, our proposed scheme reallocates the bandwidth to each client proportionally to its actual requirement.

3.4. Congestion controlled adaptive transmission rates

In the Internet environment, all sources are expected to react to congestion by adapting their transmission rates, which avoids network congestion collapse and at the same time keeps the network utilization high. In other words,


real-time flows prefer smoother changes in the transmission rates so that a more stable presentation quality can be provided. For real-time flows, there exist variants of AIMD-like congestion control mechanisms. For example, [9] reduced the transmission rate to 7/8 of the previous value instead of using a decrease-by-half reduction in response to congestion. Our approach achieves end-to-end congestion control in a way similar to AIMD but differs in the following two aspects: (1) the transmission rate is reduced to a rate larger than half during congestion, and (2) the transmission rate is not increased linearly when the congestion status is alleviated. It can be seen from the simulation results in Section 4 that our approach has smoother rate changes than the decrease-by-half reduction approaches.

Real-time streams should also perform TCP-friendly congestion control. Our approach is TCP-friendly, which means that the transmission rate decreases when network congestion is detected. In our approach, congestion control is implemented by adjusting the weight w_r of the transmission rate in the optimization index in Eq. (3). The w_r value is adjusted according to the packet loss rates perceived by the clients, thus achieving a TCP-friendly state. When a relatively high packet loss rate is detected, w_r is doubled. Hence, the transmission rate can be decreased quickly to reduce the network traffic. However, if the new w_r value is greater than an upper bound (say, wr_bound), then w_r is set to wr_bound. On the other hand, when the packet loss rate has decreased, w_r is decreased by one in each adjustment period. However, if w_r is less than a lower bound (say, 1), then w_r is set to this lower bound. In this way, the transmission rate can be increased slowly to alleviate the congestion status of the whole network. A packet loss rate threshold value can be defined to determine whether w_r needs to be doubled or decreased by one.
The advantage is that, under different congestion statuses, an optimal utilization of the network bandwidth and the client buffers can still be achieved for all the clients connecting to the server. Our approach also achieves a smoother transmission rate, since we do not directly reduce the rate to half during congestion. Instead, the weight value wr for the

transmission rate is doubled. This results in a rate that is normally a little larger than half of the previous rate after the adjustment. This smoother rate adjustment results in a more stable presentation quality at the client.

Our proposed rate control algorithm consists of two parts: optimal rate calculation and congestion control. The parameters needed to calculate R_k are the b_{i,k} values, the current buffer occupancy, R_{k−1}, R_{k−2}, etc., and the playback rates for the current and next several intervals, L_k, L_{k+1}, etc. The playback rates L_{k+1}, L_{k+2}, etc. and the previous transmission rates R_{k−1}, R_{k−2}, etc. are known at the server. The other parameters required to calculate R_k can be obtained from the feedback information generated by the clients. The server uses the congestion control algorithm to tune the value of the parameter Wr. Let plr be the packet loss ratio, plr_thresh be the packet loss ratio threshold that triggers congestion control, and Wr_upper_bound and Wr_lower_bound be the upper and lower bounds of Wr. The pseudocode for the proposed rate control algorithm at the server side is given below.

Congestion control:
    If (plr >= plr_thresh) {
        Wr = Wr * 2
        If (Wr > Wr_upper_bound) Wr = Wr_upper_bound
    } else {
        Wr = Wr - 1
        If (Wr < Wr_lower_bound) Wr = Wr_lower_bound
    }

Optimal transmission rate:
    Calculate the current buffer occupancy if needed;
    Calculate the optimal rate using the function in Eq. (11).
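The congestion control step above translates directly into a runnable form; the bound values chosen here are illustrative:

```python
# Runnable version of the server-side congestion control step: double Wr on
# high loss (which slows the sender), otherwise relax it by one per period.
# The default bounds are illustrative values, not the paper's.

def adjust_wr(wr, plr, plr_thresh, wr_lower_bound=1.0, wr_upper_bound=64.0):
    if plr >= plr_thresh:
        wr = min(wr * 2, wr_upper_bound)    # back off quickly on congestion
    else:
        wr = max(wr - 1, wr_lower_bound)    # recover slowly when loss clears
    return wr
```

Repeated congested periods saturate Wr at the upper bound, while loss-free periods decay it linearly back to the lower bound, giving the fast-decrease, slow-increase behavior described above.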

4. Simulation results

4.1. Optimal bandwidth allocation

To study the performance of the proposed approach, we compare our approach with other bandwidth allocation mechanisms such as the


fixed rate approach and the rate by playback requirement approach under single-client and multiple-client scenarios. In fixed rate transmission, the bandwidth allocated to each client is constant. The rate by playback requirement approach, on the other hand, is more flexible: the server allocates bandwidth to the clients according to their playback requirements. For simplicity, the proposed approach, the rate by playback requirement approach, and the fixed rate approach are denoted as approach A, approach B, and approach C, respectively, in this paper.

4.1.1. Simulation setup

The simulation parameters used throughout our simulations are given in Table 1. As shown in Table 1, it is assumed that the buffer allocated to each client is 2 MB (Mbytes), the bandwidth at the server is 5 MBps (Mbytes/second), and the playback rates are generated randomly in [0.1 × 10^5, 0.9 × 10^5] MBps. The simulations were run for 10,000 time intervals with an increment of 1 time interval. Moreover, the simulations were run for three different ranges of playback rates, corresponding to light, medium, and severe traffic loads, to simulate different levels of network congestion. In our approach, we use three different combinations of the b_{i,k}'s in our rate control model to capture the three congestion levels. The playback requirements for different levels of network congestion are given in Table 2.

Table 1
Simulation parameters

Allocated buffer size of each client    2 MB
Bandwidth at server                     5 MBps
Time intervals                          10,000
Time increment                          1
Combination of b_{i,k} values           b_{1,k} = 1/2, b_{2,k} = 1/4, b_{3,k} = 1/8, b_{4,k} = 1/8

Table 2
Different levels of network congestion

Network congestion level        Playback requirement
Less congested network          0.1 × 10^5–0.3 × 10^5 Bps
Medium congested network        0.4 × 10^5–0.6 × 10^5 Bps
Severe congested network        0.7 × 10^5–0.9 × 10^5 Bps

4.1.2. Adaptive transmission rate control

We demonstrate how our proposed rate control model adaptively adjusts the transmission rate according to the playback rates and buffer occupancy for the jth client by comparing it with the rate by playback requirement approach and the fixed rate approach. The playback rate requirement is set between 0.1 × 10^5 Bps and 0.3 × 10^5 Bps (the less congested network scenario in Table 2). Fig. 2 shows how the transmission rate is adjusted according to the playback rate and buffer occupancy over the 10,000 time intervals. It is evident from the figure that there are no overflows or underflows at the buffer.

Fig. 2. Transmission rate changes with buffer occupancies and playback rates during time intervals [1, 10,000].

For better illustration, Fig. 3 gives the transmission rate changes during time intervals [1000, 1020].

Fig. 3. Transmission rate changes with buffer occupancies and playback rates during time intervals [1000, 1020].

As can be seen from the figure, the transmission rate is dynamically adjusted according to the buffer occupancy and playback requirement. For example, at the 1007th time interval, the playback rate is low and the buffer occupancy is medium, but the transmission rate is high. This is because there is a peak playback requirement at the 1014th time interval. Since there is a network delay, it takes a while for a transmitted packet to arrive at the user buffer. For example,
1/2 of the packets transmitted at the 1007th time interval will arrive at the buffer at the 1014th time interval. The playback rate at the 1008th time interval is low but the transmission rate at that time interval is high, because the playback rate at the 1015th time interval is also high. The transmission rates at the 1007th and 1008th time intervals should be high enough that the packets arriving at the user buffer at the 1014th and 1015th time intervals can satisfy the playback requirements in the next time interval. This means that the transmission rate should be adjusted beforehand. From the simulation results, it can be seen that our approach has the capability of predicting the packets needed at the buffer, so that the server can transmit enough packets beforehand for future use. On the contrary, at the 1000th time interval, the playback rate is very high and the buffer occupancy is medium, but the calculated optimal transmission rate is low. This is because the playback requirement is quite low at the 1007th time interval and 1/2 of the packets transmitted at the 1000th time interval will arrive at the buffer at the 1007th time interval. Considering the transmission delay, the transmission rate needs to be decreased, so that overflow will not happen at the buffer at the 1007th time interval. Since there are enough packets in the buffer to accommodate the playback requirements, underflow will not happen

at the buffer in the future. The transmission rate can be adjusted dynamically to provide enough packets for the playback requirements of the clients. Hence, the QoS of the playback can be guaranteed.

To further illustrate the efficiency of our approach (approach A), we compare the transmission rates and buffer occupancies with those of the rate by playback requirement approach (approach B) under the same playback requirements (shown in Fig. 4) and with the fixed rate approach (approach C) using the 0.3 × 10^5 Bps rate (shown in Fig. 5). In Fig. 4, when the value of the buffer occupancy is above the horizontal solid line (denoting the allocated maximal buffer size), overflow occurs. For example, as shown in Fig. 4, between the 1020th and 1025th time intervals, there are overflows in approach B, while there is no overflow in approach A. It can also be seen from this figure that our approach generally has better buffer utilization than approach B, and there is no overflow in our approach (unlike approach B). In addition, the changes in buffer occupancy and transmission rates in approach A are less drastic than those in approach B, which indicates that our approach is more robust in adaptive bandwidth allocation. In addition, from Fig. 5, we can observe that the buffer

Fig. 4. Comparison of approach A (solid line) and approach B (dashed line) in transmission rates and buffer occupancy during time intervals [1000, 1100].


occupancy in approach C increases continually, so that overflow occurs and cannot be recovered.

Fig. 5. Comparison of approach A and approach C in buffer occupancy during time intervals [1, 5000].

Table 3
Maximal numbers of clients that can be supported concurrently

              Less congested    Medium congested    Severe congested
Approach A    250               100                 62
Approach B    183               87                  57

4.1.3. Concurrent client presentation support

In our simulations, to compare the performance of our approach with the other approaches, assume that a server simultaneously transmits different video streams on demand to many clients. Three different ranges of playback requirements are used to simulate three levels of network congestion (as defined in Table 2). When the playback requirement is larger, the clients need to request more data from the server; hence, the traffic is larger and the network is more congested. Here, the playback schedule of the movie is generated randomly. In order to compare approach A with approach B under the same conditions, in approach B, when the total bandwidth requirement of the clients is larger than the bandwidth at the server, we adjust the rates and distribute the bandwidth fairly to the clients according to their playback requirements. To compare approach A with approach C, the fixed rate is obtained by equally allocating the bandwidth to all the clients. The maximal number of clients that can be supported concurrently by approaches A and B under the bandwidth constraint
is shown in Table 3. The numbers of clients supported by approach C are not shown in Table 3 since approach C behaves badly and underflows at a very early time interval (shown later). As can be seen from the values in Table 3, even under severe network congestion, our approach can still efficiently adjust the transmission rate for each client to support presentations for more clients under the same bandwidth limit at the server. For illustration purposes, the buffer occupancies of the three approaches under the less congested network scenario are presented in Figs. 6–9. The playback requirement (0.1 × 10^5–0.3 × 10^5 Bps) given in Table 2 is used to simulate the less congested network scenario. In Fig. 6, 183 clients request services from the server. The server allocates the available bandwidth to the clients, and there is no underflow or overflow in the buffers in approach A. Although there are

Fig. 6. Buffer occupancies of three approaches under the less congested network condition with 183 clients.


Fig. 7. Buffer occupancies of three approaches under the less congested network condition with 184 clients.

Fig. 8. Buffer occupancies of three approaches under the less congested network condition with 250 clients.

overflows in approach B and the buffer occupancy tends to decrease, we still consider that the server can support presentations for 183 clients. In Fig. 7, when 184 clients request services from the server, under the bandwidth limitation of the server the buffer occupancy in approach B decreases below zero (i.e., underflow occurs). So the maximal number of clients that the server

Fig. 9. Buffer occupancies of three approaches under the less congested network condition with 251 clients.

can support is 183 in approach B. Moreover, as can be seen from Figs. 6 and 7, the buffer occupancies in approach A are generally higher than those in approach B, which means that our approach utilizes the buffer capacity better than approach B. There is no overflow or underflow in our approach, while underflow occurs in approach B. We increase the number of clients to find the maximal number of clients that can be supported concurrently in approach A. In Fig. 8, when 250 clients request services from the server, the buffer occupancies are decreasing but there is no underflow. Therefore, the allocated bandwidth can satisfy the playback requirements of the 250 clients. However, when the number of clients increases to 251, underflow occurs (as shown in Fig. 9), which means the allocated bandwidth cannot satisfy the playback requirements of the 251 clients. So the maximal number of clients the server can support for presentations is 250 in approach A. The simulation results show that approach A greatly increases the number of clients that can be supported. It is also obvious from the figures that approach C behaves badly: buffer underflow occurs at a very early time interval and is unrecoverable. Under the same bandwidth constraint, if the same number of clients needs to be provided


services by the server, the QoS requirements cannot be satisfied. After comparing the buffer occupancy of each client and the maximal numbers of clients that can be supported concurrently under different levels of network congestion, the simulation results show that our approach is efficient in providing services to larger numbers of clients.

4.2. Congestion controlled optimal rate adjustment

We also use the NS2 simulator [13] to demonstrate how our proposed rate control model can also achieve TCP-friendly congestion control. The objectives are to demonstrate that (1) our approach can adjust the transmission rate in response to network congestion in a way similar to TCP, and (2) the transmission rate is adjusted according to the network delay and buffer occupancy, as well as the playback requirement and packet loss.

Table 4. Simulation parameters

Packet size                     1000 Bytes
ACK size                        40 Bytes
Bottleneck bandwidth            2 MBps
Bottleneck delay                10 ms
Delay of other links            3 ms
Bandwidth of other links        5 MBps
wr_bound                        8
Adjusting period                50 ms
Playback rate                   1 MBps
Size of client buffer           1/8 MB
Packet_loss_rate_threshold      0.95
Simulation length               10 s

Fig. 10. Simulation topology.

4.2.1. Simulation setup

Simulation parameters are summarized in Table 4. Fig. 10 shows the topology of our simulations. The link between Router1 and Router2 is the bottleneck link, and Router1 is the bottleneck point where most of the packet drops occur. The switches implement FIFO scheduling and DropTail queuing. The properties of our scheme are demonstrated when it competes with background TCP traffic. The multimedia packet size is 1000 Bytes, which is suitable for video transmission. To better show how our approach adapts the rate according to the network congestion status, a fixed playback rate is used. Here we use an FTP session as the TCP flow, sending from Server2 to Client2. Adaptive real-time multimedia flows are sent from Server1 to Client1. The TCP flows and adaptive multimedia flows share the bottleneck bandwidth from Router1 to Router2. The TCP packet is 1000 Bytes and the ACK is 40 Bytes. For a fair comparison, the end-to-end delay is the same for the TCP flow and the adaptive multimedia flow. Link delays are also the same except for the bottleneck link.

4.2.2. TCP-friendly congestion control

The transmission rates of the TCP flow and multimedia flow are displayed in Fig. 11. As can be seen from the figure, the transmission rate of

Fig. 11. Transmission rates of TCP flow (dashed line) and multimedia flow (solid line).


the multimedia flow varies within about 30% of the average rate, while the rate of the TCP flow changes more drastically. The changes of the multimedia rates are smoother compared with the TCP flow. As discussed in Section 2.4, the reason is that the adaptive rate is not reduced to half of its previous rate in response to congestion; instead, it is a little larger than half of the previous rate. The multimedia flow also oscillates less frequently than the TCP flow. The oscillation results from the congestion status fed back by the client. For the multimedia flow, the server adjusts the transmission rate when the feedback information arrives for each time interval (i.e., 50 ms). In contrast, TCP changes its transmission rate upon the acknowledgment of every packet. The two flows can share the 2 MBps bottleneck bandwidth fairly. During congestion (indicated by the packet loss perceived by the client), the multimedia flow decreases its transmission rate to reduce the traffic load in the network. When the packet loss ratio drops below the threshold value, the network congestion is alleviated, and the transmission rate increases gradually due to the slow decrease of the weight value in our proposed scheme. Fig. 12 demonstrates the transmission rates of the multimedia flow and Fig. 13 shows the packet loss ratios perceived at the client. The multimedia servers adjust their transmission rates based on the network delay, packet loss, buffer occupancy and

Fig. 12. Transmission rate of the multimedia flow.

Fig. 13. Packet loss ratio of the multimedia flow at the client.

playback requirement at each time interval. When packet loss occurs (e.g., at the 4th and 6.5th seconds), the transmission rate is reduced in response to the packet loss during the congestion period. At the same time, the TCP connection also reduces its transmission window, which leads to a reduction in congestion. The packet loss ratio perceived at the client is therefore reduced, and this reduced congestion state is then fed back to the server. If the packet loss ratio drops below the threshold value, the server calculates a new optimal transmission rate with the weight value decreased by one. In this way, the transmission rate is increased slowly. The simulation demonstrates that our approach can dynamically adjust the transmission rate in response to the network congestion level. This rate adjustment is done in a TCP-friendly way. The server adopts a smaller weight value during less congested periods, which results in a larger transmission rate. During more severely congested periods, the server adopts a larger weight value, which decreases the transmission rate. When the previous transmission rate is relatively low, more packets in the buffer are used for playback, so the next transmission rate will be higher in order to refill the buffer. Hence, a much larger transmission rate normally follows a much lower one. Figs. 14 and 15 give a better view of how the transmission rate and buffer occupancy change during a long period (100 s) to show the stability of our proposed end-to-end congestion control


Fig. 14. Client buffer occupancy at time period [1, 100] seconds.

Fig. 15. Transmission rate at time period [1, 100] seconds.

Fig. 16. Average buffer occupancy of client over time period [1, 100] seconds.

approach. As can be seen from Fig. 14, there is no overflow or underflow at the client buffer, which indicates that our approach can prevent (1) packet losses resulting from overflows and (2) playback jitter resulting from underflows, thus guaranteeing the presentation quality. Another advantage of our approach is that when multiple clients experience different network congestion levels, the server can still allocate the minimal bandwidth to each client connection based on its buffer occupancy, congestion level, and end-to-end delay. The instantaneous transmission rate and client buffer occupancy in Figs. 14 and 15 demonstrate the instantaneous performance of the proposed approach. We also average the buffer occupancy and the transmission rate over every

Fig. 17. Average transmission rate over time period [1, 100] seconds.

second (as shown in Figs. 16 and 17). It can be seen from these two figures that the average buffer occupancy remains at a stable level. Although the instantaneous transmission rate has some oscillation, the average transmission rate over each second changes smoothly within a relatively small range. These demonstrate that the


traffic generated by our approach contributes to network stability.
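The loss-threshold/weight mechanism described in this section can be summarized in a short sketch. The refill formula, the 0.05 loss threshold, and the 0.55 back-off factor below are illustrative assumptions standing in for the paper's optimal-rate computation; only the weight dynamics (increase on loss, decrease by one on recovery, bounded by wr_bound) follow the text:

```python
PERIOD = 0.05            # adjusting period: 50 ms (Table 4)
LOSS_THRESHOLD = 0.05    # illustrative loss threshold (assumption)
W_MIN, W_MAX = 1, 8      # weight range; upper bound mirrors wr_bound = 8

def next_rate(prev_rate, playback_rate, buffer_now, buffer_target, loss_ratio, w):
    """Return (new_rate, new_weight) for the next adjusting period."""
    if loss_ratio > LOSS_THRESHOLD:
        w = min(w + 1, W_MAX)   # congestion perceived: damp the rate more
    else:
        w = max(w - 1, W_MIN)   # loss subsided: recover the rate slowly
    # Send what playback consumes plus a weight-damped share of the buffer deficit.
    deficit = (buffer_target - buffer_now) / PERIOD
    rate = playback_rate + deficit / w
    # Back off more gently than TCP: stay a little above half of the previous rate.
    return max(rate, 0.55 * prev_rate), w
```

A larger weight damps the refill term, lowering the rate during congestion, while the floor keeps the back-off gentler than TCP's halving, matching the smoother rate curve observed in Fig. 11.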

5. Conclusions

When multiple clients request data from a server simultaneously, efficiently allocating the network bandwidth to each client so as to satisfy the QoS requirements of the applications while avoiding network congestion is a challenging task. In this paper, we presented an end-to-end optimal resource utilization scheme that achieves maximum utilization of the client buffer, minimal allocation of the bandwidth, and TCP-friendly congestion control for continuous media stream transmission. The transmission rate for each client is determined adaptively based on the playback requirement, buffer occupancy, changing network delays, and packet loss rate. The optimized transmission rate can be adjusted to satisfy the constraint of the network bandwidth and to avoid overflows and underflows. Comparisons are made with the fixed rate approach and the rate by playback requirement approach under different network congestion levels. Simulation results show that our proposed scheme is efficient in providing QoS for more clients concurrently under the bandwidth limitation. Our proposed scheme can also achieve end-to-end TCP-friendly congestion control to guarantee fairness of bandwidth allocation in best-effort networks such as the Internet.

References

[1] B. Braden, D. Black, J. Crowcroft, B. Davis, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, et al., Recommendation on queue management and congestion avoidance in the Internet, RFC 2309, Internet Engineering Task Force, April 1998.
[2] D. Bansal, H. Balakrishnan, Binomial congestion control algorithm, in: Proceedings of IEEE INFOCOM'01, 2001, pp. 631–640.
[3] D. Chiu, R. Jain, Analysis of the increase and decrease algorithms for congestion avoidance in computer networks, Journal of Computer Networks and ISDN 17 (1) (1989) 1–14.

[4] C. Douligeris, G. Develekos, Neuro-fuzzy control in ATM networks, IEEE Communications Magazine (May) (1997) 154–161.
[5] S. Floyd, K. Fall, Promoting the use of end-to-end congestion control in the Internet, IEEE/ACM Transactions on Networking 7 (4) (1999) 458–472.
[6] S. Floyd, M. Handley, J. Padhye, J. Widmer, Equation-based congestion control for unicast applications, Applications, Technologies, Architectures and Protocols for Computer Communication (October) (2000) 43–56.
[7] S. Jin, L. Guo, I. Matta, A. Bestavros, A spectrum of TCP-friendly window-based congestion algorithms, IEEE/ACM Transactions on Networking 11 (3) (2003) 341–355.
[8] M.R. Izquierdo, D.S. Reeves, A survey of statistical source models for variable bit-rate compressed video, Multimedia Systems 7 (3) (1999) 199–213.
[9] R. Jain, K. Ramakrishnan, D. Chiu, Congestion avoidance in computer networks with a connectionless network layer, Tech. Rep. DEC-TR-506, Digital Equipment Corporation, August 1987.
[10] A. Lahanas, V. Tsaoussidis, Exploiting the efficiency and fairness potential of AIMD-based congestion avoidance and control, Computer Networks 43 (2) (2003) 227–245.
[11] A. Lombardo, G. Schembra, G. Morabito, Traffic specifications for the transmission of stored MPEG video on the Internet, IEEE Transactions on Multimedia 3 (1) (2001) 5–16.
[12] F. Lewis, L. Syrmos, Optimal Control, John Wiley & Sons, Inc., 1995.
[13] NS (network simulator), Available from: , 1995.
[14] R. Rejaie, M. Handley, D. Estrin, RAP: An end-to-end rate-based congestion control mechanism for realtime streams in the Internet, in: Proceedings of IEEE INFOCOM, vol. 3, March 1999, pp. 1337–1345.
[15] M.-L. Shyu, S.-C. Chen, H. Luo, Optimal resource utilization in multimedia transmission, in: IEEE International Conference on Multimedia and Expo (ICME), August 22–25, 2001, Waseda University, Tokyo, Japan, pp. 880–883.
[16] M.-L. Shyu, S.-C. Chen, H. Luo, An adaptive optimal multimedia network transmission control scheme, in: Second IEEE Pacific-Rim Conference on Multimedia 2001 (PCM'2001), Beijing, China, October 24–26, 2001, pp. 1042–1047.
[17] M.-L. Shyu, S.-C. Chen, H. Luo, Self-adjusted network transmission for multimedia data, in: Proceedings of the Third IEEE Conference on Information Technology: Coding and Computing (ITCC-2002), Las Vegas, NV, USA, April 8–10, 2002, pp. 128–133.
[18] M.-L. Shyu, S.-C. Chen, H. Luo, Optimal bandwidth allocation scheme with delay awareness in multimedia transmission, in: IEEE International Conference on Multimedia and Expo (ICME2002), Lausanne, Switzerland, August 26–29, 2002, pp. 537–540.
[19] D. Sisalem, H. Schulzrinne, The direct adjustment algorithm: A TCP-friendly adaptation scheme, in: Quality of

Future Internet Services Workshop, Berlin, Germany, September 25–27, 2000, pp. 68–79.
[20] D. Wu, Y. Hou, Y.-Q. Zhang, Transporting real-time video over the Internet: Challenges and approaches, in: Proceedings of the IEEE, vol. 88, no. 12, December 2000, pp. 1–19.
[21] M. Wu, R.A. Joyce, H. Wong, L. Guan, S. Kung, Dynamic resource allocation via video content and short-term traffic statistics, IEEE Transactions on Multimedia 3 (2) (2001) 186–199.
[22] Y.R. Yang, S.S. Lam, General AIMD congestion control, in: Proceedings of the International Conference on Network Protocols, November 2000, pp. 187–198.

Hongli Luo received her B.S. and M.S. degrees in Electrical Engineering from Hunan University, Hunan, China, in 1993 and 1996, respectively. She is currently pursuing the Ph.D. degree at the Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA. Her research interests include multimedia streaming and networking.

Mei-Ling Shyu received her Ph.D. degree from the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA in 1999, and her three master's degrees, in Computer Science, Electrical Engineering, and Restaurant, Hotel, Institutional, and Tourism Management, from Purdue University. She has been an Assistant Professor at the Department of Electrical and Computer Engineering, University of Miami since January 2000. Her research interests include data mining, multimedia database systems, multimedia networking, and database systems. She has authored and co-authored more than 90 technical papers published in various prestigious journals, refereed conference/symposium/workshop proceedings, and book chapters. She is the founder and program co-chair of the ACM International Workshop on Multimedia Databases.

Shu-Ching Chen received his Ph.D. from the School of Electrical and Computer Engineering at Purdue University, West Lafayette, IN, USA in December 1998. He also received master's degrees in Computer Science, Electrical Engineering, and Civil Engineering from Purdue University, West Lafayette, IN, USA. He has been an Associate Professor in the School of Computer Science (SCS), Florida International University (FIU) since August 2004; before then, he was an Assistant Professor in SCS at FIU from August 1999. His main research interests include distributed multimedia database systems, data mining, and multimedia networking. He has authored and co-authored more than 110 research papers in journals, refereed conference/symposium/workshop proceedings, and book chapters. He was awarded the University Outstanding Faculty Research Award from FIU in 2004, and received the Outstanding Faculty Research Award from SCS at FIU in 2002. He is the General co-chair of the IEEE International Conference on Information Reuse and Integration, the founder and program co-chair of the ACM International Workshop on Multimedia Databases, and program chair of several conferences.

Computer Networks 50 (2006) 938–952 www.elsevier.com/locate/comnet

Querying sensor networks by using dynamic task sets

Erdal Cayirci a, Vedat Coskun b,*, Caghan Cimen c

a GENET Laboratories, Istanbul, Turkey
b ISIK University, Istanbul, Turkey
c Naval Sciences and Engineering Institute, Turkish Naval Academy, 34942 Istanbul, Turkey

Received 27 January 2004; received in revised form 29 October 2004; accepted 10 May 2005
Available online 8 August 2005
Responsible Editor: N.B. Shroff

Abstract

A data querying scheme is introduced for sensor networks in which queries formed for each sensing task are sent to task sets. The sensor field is partitioned into subregions by using quadtree-based addressing, and then a given number of sensors from each subregion are assigned to each task set by using a distributed algorithm. The number of nodes in a task set depends on the task specifications. Hence, the sensed data are retrieved from a sensor network at the level of detail specified by users, and a tradeoff mechanism between data resolution and query cost is provided. Experiments show that the dynamic task sets scheme systematically reduces the number of sensors involved in a query by orders of magnitude at the expense of a slight reduction in the event detection rate.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Data aggregation; Data querying; Data dissemination; Task sets; Node clustering; Task assignment; Data fusion; Quadtree addressing; Sensor networks

* Corresponding author. E-mail addresses: [email protected] (E. Cayirci), [email protected] (V. Coskun), [email protected] (C. Cimen).

1389-1286/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2005.05.031

1. Introduction

One of the most challenging tasks in wireless sensor networks (WSN) [1–3] is to synthesize the information requested by users from the available data measured or sensed by a large number of sensor nodes. Since there is a sheer number of nodes with stringent energy constraints in a WSN, it may not be feasible to fetch every reading of the sensor nodes for central processing [4–7]. Effective data querying and aggregation techniques are needed instead. Data queries in sensor networks can be continuous and periodical, continuous and event driven, or snapshot, i.e., one-time, queries. We can also categorize sensor network queries as aggregated or non-aggregated, and as complex or simple. Finally, queries for replicated data can be


generated. The users should be able to carry out any of these types of queries by using a data-querying scheme that fits the following characteristics:

• Sensor nodes are limited in both memory and computational resources. They cannot buffer a large number of data packets.
• Sensor nodes generally disseminate short data packets to report an ambient condition, e.g., temperature, pressure, humidity, a proximity report, etc.
• The observation areas of sensor nodes often overlap. Therefore, many sensor nodes may report correlated data related to the same event. However, in many cases the replicated data are needed because the sensor network concept is based on the cooperative effort of sensor nodes [1]. For example, nodes may report only proximity; the size and the speed of the detected object can then be derived from the locations of the nodes reporting them and the timings of the reports. The collaboration among the nodes should not be hampered by eliminating the replicated data when resolving a query.
• Since there may be thousands of nodes in a sensor field, associating data packets from numerous sensors with the corresponding events, and correlating the data about the same event reported at different times, may be a very complicated task for a central node.
• Due to the large number of nodes and other constraints such as power limitations, sensor nodes are not globally addressed [1]. Therefore, address-centric protocols are mostly inefficient. Instead, data-centric or location-aware addressing protocols, where intermediate nodes can route data according to its content [8] or the location of the nodes [9], can be used.
• Querying the whole network node by node is impractical, so attribute-based naming and data-centric routing [10] are essential for WSNs.

The distribution of the sensors in the sensor field is often non-homogeneous. Therefore, the need to retrieve data from the sensor field with minimum power dissipation and high reliability motivated us to design a systematic approach. The idea of querying sensor nodes by using task sets arises from this need. We form task sets of sensors to effectively manage the sensor network. The sensor field is spatially partitioned into equally sized subregions. The number of nodes in each subregion varies because of the non-homogeneous distribution of nodes; therefore, the cost of querying the sensor field varies across subregions. To balance this cost, task sets (TS) are formed with a specific number of nodes in each subregion. Hence, a user also has the initiative to trade off between accuracy/reliability and communications cost. The number of nodes in a task set indicates the resolution of the data that can be collected by querying the task set. A higher number of nodes in a task set implies higher accuracy and reliability, i.e., the probability that an event is detected by a task set increases with the number of nodes in the task set. On the other hand, more power is consumed to resolve a query as the number of nodes in a task set increases. To partition a sensor field into smaller subregions, we use quadtree addressing, i.e., the process of recursively dividing the sensor field into four equally sized subareas until the required spatial resolution level is reached. We can address a specific sensor node by the quadrant the sensor node is in. A sensor node responds to a query if it is in the rectangle that represents its quadrant. This scheme helps us design efficient geocasting [11] techniques applicable to sensor networks, and design the algorithm that forms task sets such that it is adaptive to sensor node mobility. Our paper is organized as follows: In Section 2 the related work from the literature is briefly explained. Quadtree addressing, its advantages, and its application to our task sets scheme are elaborated in Section 3. The data querying by task sets (DQTS) scheme is introduced in Section 4.
The performance of the scheme is analytically evaluated in Section 5, and then numerical results from our experiments are provided in Section 6. Section 7 concludes our paper.

2. Related work Queries should be resolved in the most power efficient way in WSNs. This can be achieved by


reducing either the number of nodes involved in resolving a query or the number of messages generated to convey the results. There is considerable research interest in developing efficient data querying schemes for WSNs. The active query forwarding in sensor networks (ACQUIRE) scheme [12] aims to reduce the number of nodes involved in queries. In ACQUIRE, each node that forwards a query tries to resolve it. If the node resolves the query, it does not forward it further but sends the result back. Nodes collaborate with their n-hop neighbors. The parameter n is called the look-ahead parameter. If a node cannot resolve a query after collaborating with its n-hop neighbors, it forwards the query to another neighbor. When the look-ahead parameter n is 1, ACQUIRE performs as flooding in the worst case. Mobility assisted resolution of queries in large scale mobile sensor networks (MARQ) [13] makes use of mobile nodes to collect data from the sensor network. In MARQ, as contacts move around, they interact with other nodes and collect data. Nodes collaborate with their contacts to resolve the queries. Sensor query and tasking language (SQTL) [10] is proposed as an application layer protocol that provides a scripting language. Three techniques are introduced in [10] to resolve a query: sampling operation, self-orchestrated operation, and diffused computation operation. In the sampling operation, some sensor nodes may not need to respond if their neighbors do. Nodes make autonomous decisions on participating in resolving a query based on a given response probability. To prevent having more responses from denser areas, the nodes may be clustered, and the response probability can be computed at each cluster-head node based on the number of replies from the cluster. This operation is called adaptive probability response (APR). In the self-orchestrated operation, the responses of nodes are delayed so that they can be aggregated with the data coming from the other nodes before being forwarded to the next node on the path to the data gathering node. In the diffused computation operation, each sensor node is assumed to have knowledge of its immediate neighbors. Sensor nodes know how to aggregate the sensed data coming from their immediate neighbors based

on the SQTL scripts received during task dissemination. Clustering sensor nodes can facilitate more scalable query resolution mechanisms. Algorithms such as the one in [4] can be used to cluster and re-cluster sensor nodes as needed, and to elect the cluster heads. A clustering scheme can also be applied recursively to obtain a hierarchy of clusters. Clusters can be formed based on proximity or on other parameters such as the power available in the sensor nodes. We use quadtree addressing to spatially cluster sensor nodes, and then form our task sets within each of these groups based on the parameters specified by the users of the WSN.
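The spatial clustering that underlies task-set formation can be sketched as follows. This is a centralized, simplified stand-in for the distributed assignment described later in the paper; the function names, the child-labelling convention inside `quad_address`, and the random draw of m nodes per quadrant are all illustrative assumptions:

```python
import random
from collections import defaultdict

def quad_address(x, y, size, level):
    # Level-`level` quadtree address of point (x, y) in a size-by-size field
    # whose origin is the upper-left corner. The child-labelling convention
    # (first bit: upper/lower half, second bit: left/right half) is an
    # illustrative assumption, not the exact labelling of the paper's figures.
    addr = ""
    x0 = y0 = 0.0
    for _ in range(level):
        size /= 2.0
        lower = y >= y0 + size
        right = x >= x0 + size
        addr += str(int(lower)) + str(int(right))
        if lower:
            y0 += size
        if right:
            x0 += size
    return addr

def form_task_set(nodes, size, level, m, rng=random):
    # Group nodes by their level-`level` quadrant, then draw at most m nodes
    # from every quadrant: a centralized stand-in for the paper's distributed
    # task-set assignment.
    quadrants = defaultdict(list)
    for node_id, (x, y) in nodes.items():
        quadrants[quad_address(x, y, size, level)].append(node_id)
    task_set = []
    for members in quadrants.values():
        task_set += rng.sample(members, min(m, len(members)))
    return task_set
```

Raising m per quadrant buys detection probability at the cost of query energy, which is exactly the resolution/cost tradeoff the scheme exposes to users.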

3. Spatial query patterns by using quadtree

When nodes are location aware, they can be addressed by using quadtree addressing. Location awareness is an acceptable assumption for WSNs because the sensed data need to be associated with location data in many applications. For example, in target tracking and intrusion detection WSNs, the sensed data are almost meaningless without being related to a location. Therefore, location awareness of sensor nodes is a requirement imposed by many applications, and there are a number of practical node localization techniques for WSNs [14,15]. In order to determine the quadtree addresses of the nodes, we take the complete sensor field as the root of a quadtree. At the first step, we divide the sensor field into four equal-sized subareas. At the next step, each subarea is taken as a parent and divided into four subareas if there is more than one sensor node in it. This process is repeated until zero or one sensor node exists in the subarea. The final subareas form the leaf nodes of the quadtree. Every parent in the quadtree has four children, which are addressed by concatenating 00, 01, 10, and 11 to the address of the parent. This addressing style can be used to address the whole field by successively dividing the field into as many subareas as required. Hence each sensor can be addressed by a location-based bit string that represents the quadtree address of


the node. As an example, a sensor network made up of 5 sensors and its quadtree are shown in Figs. 1 and 2. The sensors and their addresses in Fig. 1 can be shown by the following tuples: {(a, 00) (b, 010100) (c, 010111) (d, 1101) (e, 1100)}. Note that the unique addresses of the nodes are made up of varying numbers of bits. For example, the address of Node a is 2 bits long, whereas the address of Node b is 6 bits long. This is more effective than a grid-based addressing scheme [16], where the sensor field is divided into tiny cells so that many of those tiny cells do not contain any nodes, and therefore a rather large part of the addressing space is wasted. Addressing each level requires 2 bits, which are {00, 01, 10, 11} as described above. Hence, the length of a quadtree address increases by 2 bits at each level k of the quadtree. The number of nodes that can be addressed at level k is 4^k. Therefore, we can address 4^(n/2) nodes by using n bits. Quadtree addressing is not a scalable approach for addressing a particular node in sensor networks because it requires that the locations of nodes are

a

0100

b c

0111

0110

00

d

e

10 1111

1110

known to the central system that constructs the quadtree. However, it is a very effective technique for geocasting in sensor networks. For example, ‘‘11’’ addresses both nodes d and e in Fig. 1. Similarly, ‘‘01’’ addresses nodes b and c. Hence, we can geocast in 1/4 of the sensor field by using a 2-bit address. We can also geocast polygonal areas by using multiple quadtree addresses of subareas that have varying sizes. Multicasting can also be utilized to query quadrants of a specific size at a level k by using quadtree addressing. For example, nodes in the upper right quadrants at level 3, i.e., the quadrants marked with ‘‘#’’ in Fig. 3, can be addressed by ‘‘****01’’. Similarly, the multicast address of all nodes in the lower quadrants at level 3, i.e., the quadrants marked with ‘‘+’’ in Fig. 3, is ‘‘****1*’’. Four significant patterns to be used in intruder detection and target tracking applications are given below. Other query patterns can also be designed, and simplified by using Karnaugh maps or similar techniques. The nodes that are addressed by the query patterns are shaded in the corresponding figures. Pattern 1 queries a buffer zone around a sensor field. The depth of the zone is determined by the level k of the quadtree address. This pattern can be very useful for detecting intruders trying to enter or exit a rectangular region. The nodes that are not included in this pattern stay idle until an intruder is detected in the buffer zone, hence a considerable amount of energy is saved. In this query model, the OR (+) function is used to generate

Fig. 1. The quadtree partitioning of a sensor field.

Fig. 2. The quadtree of the example sensor field.

Fig. 3. Multicasting by quadtree addressing.
E. Cayirci et al. / Computer Networks 50 (2006) 938–952

a complex quadtree address mechanism. Note that the sensor field need not be square, so the two side lengths of the rectangle may differ. As an example, we can address the shaded area in Fig. 4 by using the following quadtree address: "0*0* + 1*1* + *1*0 + *0*1".

Fig. 4. The quadrants queried by Pattern 1.

Pattern 2 can be used to design efficient queries for some specific applications. For example, a query can be designed based on the expected size of a target in target detection and tracking applications, as shown in Fig. 5. If we track a target that covers an area of n × n in a sensor field, it suffices to divide the sensor field into h pieces of c × c, i.e., c < n, and then to query the sensors in only h/2 subareas. This scenario is depicted in Fig. 5 by using the quadtree address "***0".

Pattern 3 can be used to detect border crossing attempts in the sensor field. This pattern can be addressed as "(01** + 10**) · (**00 + **11)" for the example given in Fig. 6. Note that we have also used the AND (·) function in this query.

Fig. 6. The quadrants queried by Pattern 3.

Pattern 4, whose nodes are addressed by "11*1 + 01*1" at level k = 2 and depicted in Fig. 7, divides a sensor field diagonally into two separate areas. While Pattern 3 can be used to address horizontal or vertical buffer zones in a rectangular region, Pattern 4 is used for diagonal buffer zones.

Fig. 7. The quadrants queried by Pattern 4.

Many other patterns can also be generated by using quadtree addressing. Binary trees can also be used for spatial patterns, where the region at each level is divided horizontally or vertically into two pieces instead of four. Moreover, a hybrid binary tree and quadtree approach can be used for generating more complex patterns. Quadtree addresses are used to query a pattern. A sensor node that receives a query processes the address field of the query in order to decide whether it is expected to respond to the query or not. Our scheme does not force the sensors to process the whole address string. Sensors process the address string two bits at a time, and continue to process the address as long as the previous bits match their quadratic region. Otherwise, sensors quit processing the address string and ignore the query.
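The address computation and the two-bits-at-a-time pattern matching described above can be sketched as follows. This is a minimal illustration in Python, not the paper's implementation: the exact bit labels assigned to quadrants are fixed by the paper's figures, so the convention used here (first bit for the horizontal half, second bit for the vertical half) is our own assumption, as is the use of "&" in place of the paper's AND (·) operator.

```python
# Illustrative sketch of quadtree addressing and wildcard pattern queries.
# Quadrant bit labels and the '&' AND separator are assumed conventions.

def quadtree_address(x, y, w, h, depth):
    """Return the 2*depth-bit quadtree address of point (x, y) in a
    w-by-h field, two bits per level (fixed depth, for simplicity)."""
    addr = ""
    x0, y0, x1, y1 = 0.0, 0.0, float(w), float(h)
    for _ in range(depth):
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        bx = "1" if x >= mx else "0"   # assumed: first bit = horizontal half
        by = "1" if y >= my else "0"   # assumed: second bit = vertical half
        addr += bx + by
        x0, x1 = (mx, x1) if bx == "1" else (x0, mx)
        y0, y1 = (my, y1) if by == "1" else (y0, my)
    return addr

def matches(addr, pattern):
    """Match one wildcard pattern ('*' matches either bit), processed
    two bits at a time, as the scheme prescribes."""
    for i in range(0, min(len(addr), len(pattern)), 2):
        for a, p in zip(addr[i:i + 2], pattern[i:i + 2]):
            if p not in ("*", a):
                return False  # level mismatch: quit processing, ignore query
    return True

def query(addr, expr):
    """Evaluate an AND-of-ORs pattern expression such as
    '(01** + 10**) & (**00 + **11)' against a node address."""
    for term in expr.split("&"):
        term = term.strip().strip("()").strip()
        if not any(matches(addr, p.strip()) for p in term.split("+")):
            return False
    return True
```

With this convention, for instance, `query("0100", "(01** + 10**) & (**00 + **11)")` is true, mirroring the Pattern 3 query above.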

Fig. 5. The quadrants queried by Pattern 2.

4. Data querying by task sets

The number of nodes in each quadrant varies because of the non-homogeneous distribution of nodes. Therefore, the cost of querying a specific quadrant will differ from the cost of querying another quadrant. To balance this cost we propose to form task sets (TS) with a specific number of nodes in each quadrant. The advantages of using task sets can be listed as follows:

• We can assign sensing tasks fairly among sensors in a quadrant.
• We can trade off between accuracy/reliability and power consumption.
• We can design efficient load balancing techniques, and increase the lifetime of a sensor network.

Sensor nodes can be assigned to task sets by using either central or distributed approaches. In the central approach, a central node assigns each node to the task sets. The central node can be the sink or a remote node. In the distributed approach, sensor nodes self-organize into task sets. In our distributed task set formation scheme, the formation process is initiated by disseminating a form task sets message consisting of the following fields:

• Query pattern: This field indicates the pattern where the task set will be formed. Quadtree based spatial patterns can be used in this field.
• Task Set Identification (TID): This is the identification information for the task set that will be formed. After a task set is formed, queries are addressed to the corresponding task set by using its TID.
• Criteria: The task sets are formed according to the criteria given in this field. In our experiments we have used three criteria: sensor type; power available; and sensor type and power available. This field can be enriched according to the requirements of applications.
• Parameters: Parameters related to the criteria are given in this field. For example, if the criterion is the power available in nodes, "1" indicates the node which has the highest power available in each quadrant, "2" indicates the two nodes with the highest power available, "−1" indicates the node that has the lowest power available, and "0" indicates nodes not in any task set. Similar notations can be developed according to the nature of different applications.

Fig. 8. Form task set procedure.

After receiving a form task sets message, nodes start to run the form task sets procedure given in Fig. 8. They prepare a status data packet which consists of quadtree address, sensor type and power

available fields and broadcast it. The quadtree address of a node is determined according to the depth k of the quadtree in the query pattern field of the related form task sets message. For example, if the query pattern is "*****1", the depth k of the quadtree is 3, and therefore the quadtree addresses in the status reports become 6 bits long. Nodes receive status data packets from the other nodes in the same quadrant and create a status table by using these status data packets. An example status table is given in Table 1. The leftmost field includes the addresses of the corresponding nodes at level 3. Please note that the status table consists of all nodes except the owner of the table. By using the status table and the criteria for forming task sets, each node determines the task set it belongs to. The fields in the status table and status reports can be designed differently based on the requirements of specific applications. As an example, TS1 can be specified as the two nodes that have the highest power available in quadtree address "****1*" in a form task sets command. Similarly, TS2 can be specified as all nodes in quadtree address "****1*" that are not in TS1. After constructing the status table, a node can find out which task set it belongs to based on these specifications. If the power available in the node that owns the status table in Table 1 is higher than or equal to 0.96, it is in TS1 according to the specifications given in the example; otherwise it is in TS2. The self-assignment of nodes to two task sets, namely TS1 and TS2, is depicted in Fig. 9. If Node a is the owner of the status table in Table 1 and has 0.97 units of power available, Node b has 0.98 units of power available, and nodes c, d, and e are the other nodes shown in Table 1, then the task sets in quadrant 001111 will be formed as shown in Fig. 9.

Fig. 9. Assigning sensor nodes to task sets.

After the task set formation process, task sets can be queried by explicitly naming them in the address field of a query. For example, a query can be generated directly for TS1. In some cases, the status tables of nodes in the same quadrant may not consist of the same nodes due to the hidden node problem. This may cause task sets to be formed slightly differently from what a user wants to specify. However, this effect is negligible and can also be corrected by central nodes as it is recognized in the reception of reports from task sets. Sensor nodes may be mobile, may have limited power, and may fail or be destroyed after deployment. Therefore, task set formations may change in time for some reasons and require updating. The trivial way to do this is to repeat the task set formation process with the same parameters. This can be done time based, and can be synchronized by a central node, i.e., the sink or a remote node, sending a task set reformation command at certain time intervals. In an alternative strategy, task set reformation can be initiated by sensor nodes when the power available at a sensor node changes by more than a threshold value or a sensor node changes its quadrant. When a task set reformation command is received, sensor nodes run the same process as the one that they run at the form task sets command.
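The self-assignment decision in the example above (TS1 holds the n highest-power nodes in a quadrant, TS2 the rest) can be sketched as a one-line rule each node applies to its own status table. This is our own minimal sketch, not the paper's code; the function name and the tie-breaking rule are assumptions.

```python
# Illustrative sketch: a node decides which task set it joins from its
# own power level plus the status reports of the OTHER nodes in its
# quadrant (the status table never contains the node itself).

def assign_task_set(my_power, reported_powers, n=2):
    """Return 'TS1' if this node is among the n highest-power nodes in
    its quadrant (ties assumed to favor membership), else 'TS2'."""
    higher = sum(1 for p in reported_powers if p > my_power)
    return "TS1" if higher < n else "TS2"
```

For Node a of the example (0.97 units, other reports 0.95, 0.98, 0.93, 0.96), only one node reports more power, so Node a places itself in TS1, matching Fig. 9.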

5. Performance evaluations

Table 1
An instance of a status table

Quadtree address    Sensor type    Power available
001111              1              0.95
001111              1              0.98
001111              1              0.93
001111              1              0.96

In this section we evaluate our scheme for two performance metrics: the event detection rate ω and the task set gain χ. The event detection rate ω is the probability that an event, i.e., a stimulus in a sensor field, is detected by at least one node. The task set gain χ is the ratio between the number of sensor nodes not involved in resolving a query and the total

number of sensor nodes in the sensor field. For performance evaluation we use a grid coordinate system that has its origin in the leftmost and bottommost corner of a sensor field, as shown in Fig. 10. The width w of the sensor field is the x axis and the height h of it is the y axis of this coordinate system. The side length of a quadrant is denoted by r_x in the x axis and r_y in the y axis. The sensor nodes are randomly deployed in the sensor field according to a given distribution, e.g., Uniform, Exponential, Gaussian, etc.

Fig. 10. The coordinate system for performance evaluation.

5.1. Event detection rate

The event detection rate ω is given by

ω = 1 − (1 − P_φ)^N    (1)

where N is the number of sensor nodes, and P_φ is the probability that an event is detected by a specific node in the sensor field:

P_φ = P( √((x_n − x_e)² + (y_n − y_e)²) < d_n + d_e )    (2)

where (x_n, y_n) is the coordinate of the sensor node, (x_e, y_e) is the coordinate of the event to be detected, d_n is the sensing range of a sensor node and d_e is the effect radius of the event. We find P_φ in two steps. First, we compute the probability density functions (pdf) of X = X_n − X_e and Y = Y_n − Y_e; then we compute the pdf of Z = √(X² + Y²). At the first step, by substituting X_n = X − X_e and Y_n = Y − Y_e, we find

f_X(x) = ∫_{−∞}^{∞} f_{X_n X_e}(x − x_e, x_e) dx_e,
f_Y(y) = ∫_{−∞}^{∞} f_{Y_n Y_e}(y − y_e, y_e) dy_e.    (3)

Since X_n, X_e, Y_n and Y_e are independent random variables,

f_X(x) = ∫_{−∞}^{∞} f_{X_n}(x − x_e) f_{X_e}(x_e) dx_e,
f_Y(y) = ∫_{−∞}^{∞} f_{Y_n}(y − y_e) f_{Y_e}(y_e) dy_e.    (4)

At the second step, to formulate the pdf of Z, an auxiliary random variable T, with T = X, is introduced. This enables us to use the general formula for finding f_{ZT} from two functions of two random variables with n real roots, given below:

f_{ZT}(z, t) = Σ_{i=1}^{n} f_{XY}(x_i, y_i) |J̃_i|.    (5)

The equations

Z − √(X² + Y²) = 0,
T − X = 0    (6)

have two real roots for |t| < z, namely

x_1 = t,  x_2 = t,  y_1 = √(z² − t²),  y_2 = −√(z² − t²).    (7)

At both roots, |J̃| has the same value:

|J̃_1| = |J̃_2| = z / √(z² − t²).    (8)

Since X and Y are independent random variables, a direct application of Eq. (4) yields

f_{ZT}(z, t) = (z / √(z² − t²)) [f_X(x_1) f_Y(y_1) + f_X(x_2) f_Y(y_2)].    (9)

We have f_X(x) and f_Y(y) from Eq. (4), so we can find F_Z(d_n + d_e):

f_Z(z) = ∫_{−∞}^{∞} f_{ZT}(z, t) dt,    (10)

P_φ = F_Z(d_n + d_e) = ∫_0^{d_n + d_e} f_Z(z) dz,    (11)

where P_φ is the probability of detection.
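Eqs. (1) and (2) are easy to cross-check numerically. The sketch below is our own (with arbitrary parameters): it estimates the event detection rate by Monte Carlo, deploying N uniform nodes and one uniform event per trial and counting the trials in which at least one node passes the distance test of Eq. (2).

```python
import math
import random

# Monte Carlo estimate of the event detection rate of Eq. (1):
# a trial succeeds if at least one node satisfies
# sqrt((xn-xe)^2 + (yn-ye)^2) < dn + de, as in Eq. (2).
def detection_rate(N, w, h, dn, de, trials=1500, seed=1):
    rng = random.Random(seed)
    r = dn + de
    hits = 0
    for _ in range(trials):
        xe, ye = rng.uniform(0, w), rng.uniform(0, h)
        hits += any(
            math.hypot(rng.uniform(0, w) - xe, rng.uniform(0, h) - ye) < r
            for _ in range(N)
        )
    return hits / trials
```

With the paper's field size (1024 × 1024), sensing range 80 and event radius 20, the estimate rises from roughly ω ≈ 0.75 at N = 50 toward 1 at N = 350, consistent with the exponent N in Eq. (1).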

Eq. (11), which gives the probability of detection, can be extended for Gaussian distributions, where x_n, x_e, y_n and y_e are distributed according to N(0, σ²_{x_n}), N(0, σ²_{x_e}), N(0, σ²_{y_n}) and N(0, σ²_{y_e}) respectively, and the Gaussian distribution is centered at the center of the sensor field. If we substitute these functions in Eq. (9), we get

f_{ZT}(z, t) = { (z / (πσ²√(z² − t²))) e^{−z²/(2σ²)},  z > 0, |t| < z;  0 otherwise },

f_Z(z) = ∫_{−∞}^{∞} f_{ZT}(z, t) dt = (z/σ²) e^{−z²/(2σ²)} ( (2/π) ∫_0^z dt / √(z² − t²) ) u(z).    (12)

Since ∫_0^z dt / √(z² − t²) = π/2, f_Z(z) = (z/σ²) e^{−z²/(2σ²)} u(z) is the Rayleigh density function of Z = √(X² + Y²), where the standard deviation σ is

σ = ( σ_{x_n} − σ_{x_e}    0
      0    σ_{y_n} − σ_{y_e} ),    (13)

P_φ = F_Z(d_n + d_e) = ∫_0^{d_n + d_e} (z/σ²) e^{−z²/(2σ²)} dz.    (14)

Eq. (11) can also be extended for uniform random variables X_n(0, w), X_e(0, w), Y_n(0, h) and Y_e(0, h), where w and h are the width and height of the sensor field respectively. If we solve Eq. (4) for these random variables, we get

f_X(x) = { ∫_0^{x+w} (1/w²) dx_n = (w + x)/w²,  −w ≤ x ≤ 0;
           ∫_x^{w} (1/w²) dx_n = (w − x)/w²,  0 ≤ x ≤ w },
f_Y(y) = { ∫_0^{y+h} (1/h²) dy_n = (h + y)/h²,  −h ≤ y ≤ 0;
           ∫_y^{h} (1/h²) dy_n = (h − y)/h²,  0 ≤ y ≤ h }.    (15)

The same steps from Eqs. (6)–(8) are followed, and then f_X(x_1), f_X(x_2), f_Y(y_1) and f_Y(y_2) are substituted in Eq. (9). Collecting terms yields

f_{ZT}(z, t) = (2z / (hw √(z² − t²))) { 1 + t/w,  −w ≤ x ≤ 0, −h ≤ y ≤ h;
                                        1 − t/w,  0 ≤ x ≤ w, −h ≤ y ≤ h },    (16)

where the conditions z > 0, |t| < z must be satisfied. Then

f_Z(z) = ∫_{−∞}^{∞} f_{ZT}(z, t) dt
       = { (2z/hw) ∫_0^z dt/√(z² − t²) u(z) + (2z/hw²) ∫_0^z t dt/√(z² − t²) u(z),  −w ≤ x ≤ 0, −h ≤ y ≤ h;
           (2z/hw) ∫_0^z dt/√(z² − t²) u(z) − (2z/hw²) ∫_0^z t dt/√(z² − t²) u(z),  0 ≤ x ≤ w, −h ≤ y ≤ h }.    (17)

If we substitute v = z² − t², so that dv = −2t dt, and solve the integrals (the first gives [arcsin(t/z)]_0^z = π/2 and the second gives [−√(z² − t²)]_0^z = z), then since z ≥ 0 and |t| < z we get

f_Z(z) = { (2z/hw)(π/2 + z/w) u(z),  −w ≤ x ≤ 0, −h ≤ y ≤ h;
           (2z/hw)(π/2 − z/w) u(z),  0 ≤ x ≤ w, −h ≤ y ≤ h }.    (18)

The probability P_φ that an event is detected by a specific node becomes

P_φ = F_Z(d_n + d_e) = ∫_0^{d_n + d_e} f_Z(z) dz
    = { π(d_n + d_e)²/(2hw) + 2(d_n + d_e)³/(3hw²),  −w ≤ x ≤ 0, −h ≤ y ≤ h;
        π(d_n + d_e)²/(2hw) − 2(d_n + d_e)³/(3hw²),  0 ≤ x ≤ w, −h ≤ y ≤ h }.    (19)
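For the Gaussian case, the integral in Eq. (14) is the Rayleigh CDF and has the closed form 1 − exp(−(d_n + d_e)²/(2σ²)). The quick numerical check below is our own sketch with arbitrary parameters; it compares the closed form against a midpoint-rule quadrature of the same integrand.

```python
import math

# Eq. (14): P_phi is the Rayleigh CDF evaluated at dn + de.
def p_detect_gaussian(dn, de, sigma):
    d = dn + de
    return 1.0 - math.exp(-d * d / (2.0 * sigma * sigma))

# Midpoint-rule quadrature of the Rayleigh density (z/sigma^2) e^{-z^2/2sigma^2}
# over [0, dn + de], for comparison with the closed form above.
def p_detect_gaussian_numeric(dn, de, sigma, steps=20000):
    d = dn + de
    dz = d / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        total += (z / sigma**2) * math.exp(-z * z / (2.0 * sigma**2)) * dz
    return total
```

With, say, d_n = 80, d_e = 20 and σ = 120, both evaluations agree to well under 10⁻⁶.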

5.2. Task set gain

Since only the subset of sensor nodes in the task set specified in a query is involved in resolving the query, we use the ratio between the number of nodes not in the task set and the total number of nodes to indicate the gain of the DQTS scheme. The task set gain χ, as given in Eq. (20), is obtained at the expense of a decrease in the event detection rate ω given in Eq. (1):

χ = (N − Σ_{i=1}^{4^k} n_i) / N    (20)

where χ is the task set gain, k is the level of the quadtree used for task set formation, and n_i is the number of nodes involved in resolving the query in quadrant i; n_i can be calculated by Eq. (21), where g_i is the total number of nodes in quadrant i and n is the number of nodes specified for each quadrant in a task set:

n_i = { g_i  if g_i < n;  n  otherwise }.    (21)
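Eqs. (20) and (21) translate directly into code. The sketch below (names are ours) computes χ from the per-quadrant node counts g_i and the task set size n:

```python
# Task set gain per Eqs. (20)-(21): chi = (N - sum_i n_i) / N,
# where n_i = min(g_i, n) nodes respond in quadrant i.
def task_set_gain(quadrant_counts, n):
    N = sum(quadrant_counts)
    responding = sum(min(g, n) for g in quadrant_counts)
    return (N - responding) / N
```

For example, with quadrant counts (5, 1, 0, 10) and n = 2, only 5 of the 16 nodes respond, giving χ = 11/16 ≈ 0.69.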

6. Experimental results

In this section, simulation results are presented which verify the mathematical models introduced in Section 5. To evaluate the gains of the DQTS scheme, we analyze the task set gain χ, i.e., the ratio between the number of nodes not involved in resolving a query and the total number of nodes. Since there is a tradeoff between task set gain χ and event detection rate ω, we also examine the event detection rate ω. In the simulations, we randomly deploy varying numbers of nodes over an area 1024 × 1024 in size according to both the Uniform and Gaussian distributions. Then we create events at locations also randomly selected according to the Uniform and Gaussian distributions. An event is anything that can stimulate a sensor, and we assume that events are effective in a circular region, i.e., the event region, whose radius is 20. Since we carry out our tests for varying sensing ranges, we do not vary the event region size as a separate factor. If the event region is within the sensing range of a sensor we assume that the sensor detects the event. To be fair in our results, and to be able to analyze the results more clearly, we do not simulate obstacles and other anomalies that can change the sensing range of sensors. After nodes are deployed and an event is created, we find out if the event is detected by any node for the following cases:

• When we use all nodes.
• When we use all the nodes in Pattern 2 given in Section 3.
• When we use two randomly selected nodes, named TS-1 nodes, in Pattern 2.

We also count the number of nodes involved in resolving the query for these cases. In Figs. 11–19, the results from these experiments are given.

Fig. 11. Task set gain when nodes are deployed according to Uniform distribution.


Fig. 12. Task set gain when nodes are deployed according to Gaussian distribution.

Fig. 13. Event detection rate when all nodes in Pattern 2 are used.

Fig. 14. Event detection rate when TS-1 nodes in Pattern 2 are used.

Fig. 15. Event detection rate when nodes are deployed according to Uniform distribution.

Fig. 16. Event detection rate when nodes are deployed according to Gaussian distribution.

Fig. 17. Event detection rate when nodes are deployed according to Uniform distribution.

Fig. 18. Event detection rate when nodes are deployed according to Gaussian distribution.

In Figs. 11 and 12 the task set gain χ is depicted for varying numbers of nodes for the cases when nodes are deployed according to the Uniform and Gaussian distributions, respectively. When all Pattern 2 nodes are used, the task set gain is approximately 0.5, which indicates that half of the nodes are idle when the Pattern 2 nodes are queried. This can be intuitively justified because Pattern 2 covers half of a sensor field. The task set gain does not change for varying numbers of nodes when "all nodes Pattern 2" are queried because half of the nodes always

react to the queries. The gain is much higher when we use TS-1 nodes. It is as high as 0.9, i.e., 90% of the nodes are idle, for some cases in TS-1 queries. When nodes are deployed according to the Gaussian distribution, the task set gain χ for TS-1 is higher because nodes are concentrated in some regions, and therefore a high number of nodes exists in some quadrants. Since two nodes from the same quadrant respond to a query when TS-1 is queried, many nodes stay idle, especially when nodes are not deployed according to the Uniform distribution.

Fig. 19. Event detection rate when both nodes and events are deployed according to Gaussian distribution.


As shown in Figs. 11 and 12, 90% of the nodes stay idle in some cases when the DQTS scheme is used. This gain is obtained at the expense of a reduced event detection rate ω. The event detection rates for "all nodes Pattern 2" and "TS-1 nodes Pattern 2" queries are shown for varying sensing ranges and numbers of nodes deployed in the sensor field in Figs. 13 and 14. In these experiments, the quadtree is constructed at level k = 4, which makes the side length of a quadrant 64, because the sensor field in our experiments is 1024 × 1024 in size. Therefore, we observe a higher increase in the event detection rate when the sensing range changes from 60 to 70 compared to the other ranges. When the sensing range is more than 60, a sensor covers an area larger than its quadrant. This increases the probability that an event is detected because one in every two quadrants is queried in Pattern 2, as shown in Fig. 5. The relation between the number of nodes and the event detection rate is almost linear.

In Figs. 15 and 16 the event detection rate is shown for varying numbers of nodes. In these experiments, the sensing range is 80. The event detection rate is higher when nodes are deployed according to the Uniform distribution. The difference in event detection rate between "all nodes" queries and "all nodes Pattern 2" or "TS-1 nodes Pattern 2" queries becomes negligible when more than 250 nodes are deployed and the quadtree is constructed at level k = 5. When 250 nodes are deployed according to the Gaussian distribution and the quadtree is constructed at level k = 5, the difference in the event detection rate between "all nodes" and "TS-1 nodes Pattern 2" queries is less than 2%. However, 76% of the nodes used during the "all nodes" query stay idle in the "TS-1 nodes Pattern 2" query for this case. Please note that this performance gain is achieved by a practical query system based on a distributed algorithm that can run on tiny nodes.

In Figs. 17 and 18, the event detection rate is shown for varying sensing ranges for the case where 350 nodes are deployed in the sensor field. We observe here as well that the event detection rate is higher when nodes are deployed according to the Uniform distribution. The reason for this is intuitively clear: when nodes are deployed according to the Uniform distribution, the whole sensor field is covered with the same node density. When the quadtree level is k = 5, the increase in the event detection rate is higher when the sensing range is changed from 30 to 40 compared to the other sensing ranges. For quadtree level k = 4, a similar increase is observed when the sensing range changes from 60 to 70. This is related to the size of the quadrants. After these points the difference between "all nodes" and pattern queries becomes smaller.

The results from the experiments where we deploy nodes according to the Gaussian distribution, and create events at locations randomly selected also according to the Gaussian distribution, are shown in Fig. 19. In these experiments we observe that all of the events are detected when the sensing range is higher than 70, even when "TS-1 nodes Pattern 2" queries are carried out. This indicates that when a sufficient number of nodes is deployed and an appropriate quadtree level is selected, large performance gains, i.e., decreases in power consumption, can be achieved without any reduction in the event detection rate. For example, if we deploy 350 nodes with sensing range 80 and construct the quadtree at level k = 5, the event detection rate is 100% for "TS-1 nodes Pattern 2" queries. On the other hand, 80% of the nodes are not involved in resolving these queries. The equations given in Section 5 can be used to find the correct number of nodes and sensing range for a given sensor field.
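The experimental setup can be imitated with a toy simulation. The sketch below is our own simplified re-creation, not the paper's simulator: the alternating-column striping used to approximate the Pattern 2 quadrants is an assumption, since the exact shaded quadrants are fixed by Fig. 5.

```python
import math
import random

# Toy version of the experiment: per trial, deploy N uniform nodes and
# one uniform event, then record whether the event is detected (a) by
# any node and (b) by any node lying in the assumed Pattern 2 half
# (every other column of cells of side `cell`).
def run_trial(rng, N, w, h, dn, de, cell):
    xe, ye = rng.uniform(0, w), rng.uniform(0, h)
    r = dn + de
    hit_all = hit_p2 = False
    for _ in range(N):
        x, y = rng.uniform(0, w), rng.uniform(0, h)
        if math.hypot(x - xe, y - ye) < r:
            hit_all = True
            if int(x // cell) % 2 == 0:  # assumed Pattern 2 membership
                hit_p2 = True
    return hit_all, hit_p2
```

Averaging over trials with the paper's parameters (350 nodes, 1024 × 1024 field, sensing range 80, event radius 20, cell side 64) reproduces the qualitative result: the Pattern 2 detection rate is only slightly below the all-nodes rate while half the nodes stay idle.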

7. Conclusions

We introduce the DQTS scheme, in which sensor nodes are organized into task sets. For task set formation, the sensor field is partitioned into equally sized subregions by using quadtree addressing, and then each task set is assigned a specific number of nodes in every subregion. A distributed algorithm simple enough to run on tiny sensor nodes is used to determine which node belongs to which task set. The number of nodes in each subregion varies because of the non-homogeneous distribution of nodes. Hence, the cost of querying the sensor field varies across subregions. The DQTS scheme balances this cost by forming task sets (TS) with a specific number of nodes in each subregion. It also provides a trade-off mechanism between accuracy/reliability and communications cost. The number of nodes in a task set indicates the resolution of the data that can be collected by querying the task set. A higher number of nodes in a task set implies higher accuracy and reliability of the system. On the other hand, more power is consumed for resolving a query as the number of nodes in a task set increases. Once task sets are formed, they are queried directly. Since a task set does not include all sensor nodes in the sensor field, only a subset of nodes is involved in resolving a query, and therefore the power required to resolve a query is reduced. Our experiments show that the power consumption for queries can be reduced by orders of magnitude by using the DQTS scheme, at the expense of a decrease in sensing accuracy and resolution. Our studies also show that the decrease in sensing accuracy and resolution becomes negligible as sensor node density and range increase.

References

[1] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, Wireless Sensor Networks: A Survey, Computer Networks (Elsevier) Journal (March) (2002) 393–422.
[2] D. Estrin, R. Govindan, J. Heidemann, S. Kumar, Next Century Challenges: Scalable Coordination in Sensor Networks, Mobicom'99, Seattle, WA, USA, 1999.
[3] J. Mirkovic, G.V. Venkataramani, S. Lou, L. Zhang, A Self-Organizing Approach to Data Forwarding in Wireless Sensor Networks, ICC 2001, June 2001.
[4] M.T. Jones, S. Mehrotra, J.H. Park, Tasking Distributed Sensor Networks, International Journal of High Performance Computing Applications 16 (August) (2002) 243–259.
[5] M. Stemm, R. Katz, Measuring and Reducing Energy Consumption of Network Interfaces in Hand-Held Devices, Transactions on Communications, Special Issue on Mobile Computing 8 (1997) 1125–1131.
[6] S. Slijepcevic, M. Potkonjak, Power Efficient Organization of Wireless Sensor Networks, Proceedings of the IEEE International Conference on Communications, vol. 2, pp. 472–476, Helsinki, June 2001.
[7] E. Shih, S. Cho, N. Ickes, R. Min, A. Sinha, A. Wang, A. Chandrakasan, Physical Layer Driven Protocol and Algorithm Design for Energy-Efficient Wireless Sensor Networks, Proc. of ACM MobiCom'01, pp. 272–286, Rome, Italy, July 2001.
[8] B. Krishnamachari, D. Estrin, S. Wicker, Modelling Data-Centric Routing in Wireless Sensor Networks, IEEE INFOCOM 2002.
[9] E. Cayirci, Addressing in Wireless Sensor Networks, COST-NSF Workshop on Exchanges and Trends in Networking (Networking'03), Crete, 2003.
[10] C.-C. Shen, C. Srisathapornphat, C. Jaikaeo, Sensor Information Networking Architecture and Application, IEEE Personal Communications Magazine (August) (2001) 52–59.
[11] Y.B. Ko, N.H. Vaidya, Geocasting in Mobile Ad Hoc Networks: Location-Based Multicast Algorithms, Second IEEE Workshop on Mobile Computing Systems and Applications, Louisiana, 1999.
[12] N. Sadagopan, B. Krishnamachari, A. Helmy, The ACQUIRE Mechanism for Efficient Querying in Sensor Networks, Elsevier Ad Hoc Networks (to appear).
[13] A. Helmy, Mobility-Assisted Resolution of Queries in Large-Scale Mobile Sensor Networks, Computer Networks (Elsevier), vol. 43, Issue 4, pp. 437–458.
[14] A. Nasipuri, K. Li, A Directionality Based Location Discovery Scheme for Wireless Sensor Networks, Proc. of the First ACM International Workshop on Wireless Sensor Networks and Applications, Atlanta, pp. 105–111, September 2002.
[15] A. Savvides, H. Park, M.B. Srivastava, The Bits and Flops of the N-hop Multilateration Primitive for Node Localization Problems, Proc. of the First ACM International Workshop on Wireless Sensor Networks and Applications, Atlanta, pp. 112–121, September 2002.
[16] J. Li, J. Jannotti, D.S.J. De Couto, D.R. Karger, R. Morris, A Scalable Location Service for Geographic Ad Hoc Routing, Proc. 6th Annual ACM/IEEE Int'l Conf. on Mobile Computing and Networking, pp. 120–130, Boston, 2000.

Erdal Cayirci graduated from the Turkish Army Academy in 1986, and from the Royal Military Academy Sandhurst in 1989. He received his M.S. degree from Middle East Technical University, and his Ph.D. degree from Bogazici University, both in Computer Engineering, in 1995 and 2000, respectively. He was a visiting researcher and a visiting lecturer with the School of Electrical and Computer Engineering at the Georgia Institute of Technology in 2001. He is Chief of the CAX Training and Support Branch in the NATO Joint Warfare Center. His research interests include military constructive simulation, tactical communications, sensor networks, and mobile communications. He was an editor for IEEE Transactions on Mobile Computing, Ad Hoc Networks (Elsevier Science), ACM/Kluwer Wireless Networks, and ASP Sensor Letters. He received the 2002 IEEE Communications Society Best Tutorial Paper Award for his paper titled "A Survey on Sensor Networks" published in the IEEE Communications Magazine in August 2002, and the "Fikri Gayret" Award from the Turkish Chief of General Staff in 2003.


Vedat Coskun graduated from the Turkish Naval Academy (1984), received his M.Sc. degree in Computer Science from the Naval Postgraduate School, California (1990), and his Ph.D. degree in Computer Security from Yildiz Technical University, Istanbul (1998). He managed the software development group in the Naval Wargaming Center between 1997 and 2001. He is currently a staff member of the Information Technology Department at ISIK University. His current research areas include software engineering, sensor networks, computer security and cryptography.

Caghan Cimen received his B.S. degree in Electronic Engineering from Turkish Naval Academy in 1996 and received his M.S. degree in Computer Science from Naval Science and Engineering Institute in 2003. His research interests include sensor networks, sensor fusion and wireless communications.

Computer Networks 50 (2006) 953–965 www.elsevier.com/locate/comnet

An optimization model for Web content adaptation

Rong-Hong Jan a,*, Ching-Peng Lin a, Maw-Sheng Chern b

a Department of Computer and Information Science, National Chiao Tung University, 1001 Ta Hsueh Road, Hsinchu 30050, Taiwan, ROC
b Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30043, Taiwan, ROC

Received 18 December 2003; received in revised form 7 June 2005; accepted 16 June 2005. Available online 8 August 2005.
Responsible Editor: R. Boutaba

Abstract

This paper considers Web content adaptation with a bandwidth constraint for server-based adaptive Web systems. The problem can be stated as follows: given a Web page P consisting of n component items $d_1, d_2, \ldots, d_n$, each component item $d_i$ having $J_i$ versions $d_{i1}, d_{i2}, \ldots, d_{iJ_i}$, select one version of each component item $d_i$ to compose the Web page such that the fidelity function is maximized subject to the bandwidth constraint. We formulate this problem as a linear multi-choice knapsack problem (LMCKP). This paper transforms the LMCKP into a knapsack problem (KP) and then presents a dynamic programming method to solve the KP. A numerical example illustrates this method and shows its effectiveness.

© 2005 Elsevier B.V. All rights reserved.

Keywords: Content transcoding; Knapsack problem; Dynamic programming

1. Introduction

* Corresponding author. Tel.: +886 3 573 1637; fax: +886 3 572 1490. E-mail address: [email protected] (R.-H. Jan).
This research was supported in part by the Communications Software Technology Project of the Institute for Information Industry and in part by the National Science Council, Taiwan, ROC, under grants NSC 93-2219-E-009-002 and NSC 93-2752-E-009-005-PAE.

Over the past decade, Internet use has exploded with people gaining rich information from the World Wide Web (WWW). With traditional wired-line Internet, users can only access the Internet in fixed places. Recently, however, due to the technology explosion in wireless communication and portable communication devices, e.g., cellular phones, personal digital assistants, and pagers, it

1389-1286/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2005.06.006


R.-H. Jan et al. / Computer Networks 50 (2006) 953–965

has become possible for people to connect to the Internet and remain on-line while roaming. However, these portable communication devices are very different from the typical personal computers (PC). They vary widely in their screen size, resolution, color depth, computing power, and memory. From notebook PCs to cellular phones, the diversity of these devices makes it difficult and expensive to offer contents separately for each type of device. Many generic WWW servers lack the ability to adapt to the greatly varying bandwidths or to the heterogeneity of client devices. Therefore, the technologies that adapt the Web content to diverse portable communication devices will become very important in the future. Many content adaptation technologies have been proposed for the WWW [1–10]. These adaptation methods can be divided into three categories: client-based, proxy-based and server-based adaptations. In client-based adaptations [7], the client transforms the original Web pages to the proper presentation according to its capability. However, this method does not work well for mobile devices because mobile devices have lower computing power. In proxy-based adaptations [4,8,9], the proxy intercepts the requested Web pages, performs the adaptation, and then sends the transformed content to the client. But this method requires huge calculations when transforming

multi-media data. In contrast, server-based adaptations [1,5,10] offer key advantages. Specifically, the server constructs Web pages in accordance with the users' device capabilities and network bandwidths. Storing multiple versions of Web pages on Web servers in advance not only accelerates response time but also reduces network traffic. In this paper, we consider a server-based adaptive Web system as shown in Fig. 1. Clients can access the Internet via local area networks (LAN), wireless LAN, dial-up, or GPRS networks. The Web server contains a set of multi-media Web pages. A multi-media Web page is composed of a number of component items. The clients browse Web pages by sending http requests with capability and preference information [11–13] to the Web server. The Web server parses the requests to learn the capabilities of the clients and probes the network to determine the bandwidth of the connection. Based on the clients' capabilities and the bandwidth of the connection, the Web server generates an optimal version of the requested Web page and returns it to the clients. This paper studies how to generate an optimal version of a Web page with a bandwidth constraint for the server-based adaptive Web system. Formally, the problem, denoted as the Web content selection problem, can be stated as follows: Given a Web page P consisting of n component items

Fig. 1. An adaptive Web system architecture.


$d_1, d_2, \ldots, d_n$, each component item $d_i$ having $J_i$ versions $d_{i1}, d_{i2}, \ldots, d_{iJ_i}$, select one version of each component item $d_i$ to compose the Web page such that the fidelity function is maximized subject to the bandwidth constraint. We formulate the Web content selection problem as a linear multi-choice knapsack problem (LMCKP) [14]. This paper transforms the LMCKP into a 0/1 knapsack problem (KP) [15,16]. The 0/1 KP is a well-known problem in combinatorial optimization with a large range of applications: capital budgeting, cargo loading, cutting stock, and so on. It can be solved by dynamic programming [17,18], branch and bound [19–21], and greedy methods. This paper presents a dynamic programming method for solving the 0/1 KP because dynamic programming can easily be extended to solve the parametric LMCKP with different resources. This avoids having to solve the problem anew and slashes the computations needed.

The remainder of this paper is organized as follows. In Section 2, we formulate the Web content selection problem as an optimization problem. Section 3 discusses the solution method, and experimental results are given in Section 4.

2. Statement of the problem Consider an adaptive Web server having three major modules: content analysis and transcoding, capability and preference information (CPI) filter, and content selection. The architecture of the adaptive server is based on [1]. Fig. 2 illustrates the content adapting process in the adaptive server. In the content analysis and transcoding module, the Web contents are analyzed and transformed into different versions. They are then organized into a content pyramid. The content is prepared in XML, which is converted to HTML prior to delivery. If the server receives an http


Fig. 2. Server-based content adaptation system architecture.


Fig. 3. An example of multi-media Web page.

request from a client, the CPI filter module processes the capabilities of the request and forwards the results to the content selection module. The content selection module selects a set of feasible versions from the content pyramid and calls on the bandwidth probing engine [22,23] to find the bottleneck bandwidth between the client and the server. With the client's capabilities and the bandwidth information, the content selection module determines an appropriate version for each component item. Based on the selected versions, the rendering module tailors a style sheet represented in the XML style-sheet language (XSL), generates the adaptive content, and replies to the client. Note that a multi-media Web page is composed of a number of component items. For example, the document shown in Fig. 3 consists of five component items: four image component items and one text component item. Usually, an image component item can be described at multiple resolutions, called versions. The versions can be transcoded from the raw data at different resolutions, and different versions of a component item have different data sizes. Suppose a multi-media Web page P consists of component items $d_i$, where $P = \{d_1, d_2, \ldots, d_n\}$. A component item $d_i$ can be transcoded into versions $d_{i1}, d_{i2}, \ldots, d_{iJ_i}$ with different resolutions

and modalities. Let $w_{ij}$ be the data size of version $d_{ij}$. For each version $d_{ij}$, we can assign a measure of fidelity, called the value $v_{ij}$, defined as follows:
$$v_{ij} = \frac{\text{perceived value of transcoded version } d_{ij}}{\text{perceived value of original version } d_{i1}},$$
where $0 \le v_{ij} \le 1$. With the values $v_{ij}$ we can then compare component items that are in different versions. The perceived value may either be assigned by the author for each version, or determined by a function of data size. In this paper, we assume $v_{ij} = f(w_{ij})$, which captures the general trend of fidelity in value; $f(w_{ij})$ may be a concave, convex/nonconcave or discrete function of $w_{ij}$. In this paper we define^1
$$f(w_{ij}) = \sqrt{\frac{w_{ij}}{w_{i1}}},$$
where $w_{i1}$ is the data size of item $d_i$ in its original version (see Fig. 4). However, the Web content

^1 This paper does not suggest that there actually exists a simple function for assigning the values $v_{ij}$, since measuring the perceived quality of an image is not easy. Our optimization model allows one to assign arbitrary values to $v_{ij}$ for the Web content adaptation problem by assuming $f(w_{ij})$.
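For intuition, the three fidelity profiles mentioned in this section (square root, linear, and logarithmic) can be compared numerically. The sketch below is illustrative only; the function names are ours, and the sample sizes are the three versions of the first image item from the numerical example in Section 3.3.

```python
import math

def f_sqrt(w, w1):   # f(w) = sqrt(w / w1), the definition used in this paper
    return math.sqrt(w / w1)

def f_linear(w, w1): # alternative: f(w) = w / w1
    return w / w1

def f_log(w, w1):    # alternative: f(w) = ln w / ln w1
    return math.log(w) / math.log(w1)

# Sizes (KB) of the three versions of one image item, original (largest) first.
sizes = [9.0, 4.4, 1.3]
w1 = sizes[0]

values_sqrt = [round(f_sqrt(w, w1), 1) for w in sizes]
print(values_sqrt)  # → [1.0, 0.7, 0.4]
```

The square-root profile penalizes a size reduction less than the linear one does, which matches the intuition that halving an image's byte size does not halve its perceived quality.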


Fig. 4. An example of versions for an image item.

creator can define his own $f(w_{ij})$, say $f(w_{ij}) = w_{ij}/w_{i1}$ or $f(w_{ij}) = \ln w_{ij} / \ln w_{i1}$. Thus, the Web server can be designed to select the best versions of content items from the Web document sets to meet the client resources while delivering the largest total value of fidelity. Usually, clients do not have the patience to wait a long time for a Web page. One may expect to receive a

Web page in a reasonable waiting time $T_{total}$, say 15 s. The next problem for the Web server is to determine the maximum data size $W$ for transmission so as to fall within the expected waiting time. Fig. 5 illustrates the browsing procedure. The total waiting time for the user is
$$T_{total} = T_{prop} + T_{probe} + T_{proc} + T_{trans} + T_{prop},$$


Fig. 5. Event timing for browsing an adaptive Web page.


where
$T_{total}$ = total time to wait for each Web page;
$T_{prop}$ = propagation time = $T_1 - T_0$ = $T_5 - T_4$;
$T_{probe}$ = time to probe the bandwidth = $T_3 - T_1$;
$T_{proc}$ = time to process the Web content selection = $T_4 - T_3$;
$T_{trans}$ = time to transmit the Web content = $T_6 - T_5$.

Here we assume for simplicity that $T_{prop}$, $T_{probe}$, and $T_{proc}$ are constants. Then the data size is $W = b \cdot t = b \cdot (T_{total} - 2T_{prop} - T_{probe} - T_{proc})$, where $b$ is the bottleneck bandwidth and $t = T_{trans}$. For example, if $T_{total} = 15$, $2T_{prop} + T_{probe} + T_{proc} = 4$, and $b = 10$ Kbps, then the Web server will send a Web page with size not greater than $W = (15 - 4) \times 10 = 110$ Kb. Therefore, the Web content adaptation problem can be mathematically stated as follows.

Problem LMCKP:
$$\text{Maximize} \quad \sum_{i=1}^{n} \sum_{j=1}^{J_i} v_{ij} x_{ij} \qquad (1)$$
$$\text{Subject to} \quad \sum_{i=1}^{n} \sum_{j=1}^{J_i} w_{ij} x_{ij} \le W, \qquad (2)$$
$$\sum_{j=1}^{J_i} x_{ij} = 1, \quad 1 \le i \le n, \qquad (3)$$
$$x_{ij} = 0 \text{ or } 1 \quad \text{for all } i, j,$$

where $v_{ij}$ and $w_{ij}$ are the measure of fidelity and the data size of version $d_{ij}$, respectively, and $W$ is the maximum payload. $x_{ij}$ is the decision variable: $x_{ij} = 1$ indicates that version $j$ is selected for item $i$; otherwise $x_{ij} = 0$. Constraint (2) ensures that the size of the Web page is not greater than $W = b \cdot t$. Constraint (3) limits our choice for each item to exactly one of its versions. This problem is known as the linear multi-choice knapsack problem (LMCKP); an appropriate content can be determined by solving it.

3. The solution method

The LMCKP is a well-known problem, and many solution methods have been presented for solving it. This section transforms the LMCKP into a 0/1 KP and applies the dynamic programming method to solve the 0/1 KP.

3.1. Transformation of the problem

First, we define a knapsack problem that is equivalent to the LMCKP. For each $i$, let
$$y_{i1} = x_{i1},\quad y_{i2} = x_{i1} + x_{i2},\quad \ldots,\quad y_{iJ_i-1} = x_{i1} + x_{i2} + \cdots + x_{iJ_i-1},\quad y_{iJ_i} = x_{i1} + x_{i2} + \cdots + x_{iJ_i} = 1.$$

Then we can rewrite the objective function (1) as follows:
$$\sum_{i=1}^{n}\sum_{j=1}^{J_i} v_{ij} x_{ij} = \sum_{i=1}^{n} \left[ v_{i1} y_{i1} + v_{i2}(y_{i2} - y_{i1}) + \cdots + v_{iJ_i-1}(y_{iJ_i-1} - y_{iJ_i-2}) + v_{iJ_i}(y_{iJ_i} - y_{iJ_i-1}) \right]$$
$$= \sum_{i=1}^{n} \left[ (v_{i1} - v_{i2}) y_{i1} + (v_{i2} - v_{i3}) y_{i2} + \cdots + (v_{iJ_i-1} - v_{iJ_i}) y_{iJ_i-1} + v_{iJ_i} y_{iJ_i} \right]$$
$$= \sum_{i=1}^{n} \sum_{j=1}^{J_i-1} (v_{ij} - v_{ij+1}) y_{ij} + \sum_{i=1}^{n} v_{iJ_i}.$$

Similarly, constraint (2) can be rewritten as
$$\sum_{i=1}^{n} \sum_{j=1}^{J_i-1} (w_{ij} - w_{ij+1}) y_{ij} + \sum_{i=1}^{n} w_{iJ_i} \le W.$$

Note that $\sum_{i=1}^{n} v_{iJ_i}$ and $\sum_{i=1}^{n} w_{iJ_i}$ are constants. Let $e_{ij} = v_{ij} - v_{ij+1}$, $d_{ij} = w_{ij} - w_{ij+1}$ and $W' = W - \sum_{i=1}^{n} w_{iJ_i}$. Then the problem LMCKP (Eqs. (1)–(3)) is equivalent to the following KP:
$$\text{Maximize} \quad \sum_{i=1}^{n} \sum_{j=1}^{J_i-1} e_{ij} y_{ij} \qquad (4)$$
$$\text{Subject to} \quad \sum_{i=1}^{n} \sum_{j=1}^{J_i-1} d_{ij} y_{ij} \le W', \qquad (5)$$
$$y_{i1} \le \cdots \le y_{iJ_i}, \quad 1 \le i \le n, \qquad (6)$$
$$y_{ij} = 0 \text{ or } 1, \quad 1 \le i \le n,\ 1 \le j \le J_i - 1.$$


Note that the above problem can be rewritten as a precedence-constraint 0/1 KP [24,25] as follows.

Problem KP:
$$\text{Maximize} \quad \sum_{i=1}^{m} p_i z_i \qquad (7)$$
$$\text{Subject to} \quad \sum_{i=1}^{m} d_i z_i \le M, \qquad (8)$$
$$z_h \le z_k, \quad (h, k) \in A_i,\ 1 \le i \le m,$$
$$z_i = 0 \text{ or } 1, \quad 1 \le i \le m,$$

where $m = \sum_{i=1}^{n} \sum_{j=1}^{J_i-1} 1$, $M = W'$, and $A_i$ is the precedence constraint described in (6).

3.2. Dynamic programming method

The precedence-constraint 0/1 knapsack problem can be solved by the dynamic programming method, as for the ordinary 0/1 knapsack problem, with a slight modification. We may make a decision on $z_1$ first, then on $z_2$, then on $z_3$, etc. The solution to the 0/1 KP can be viewed as the result of a sequence of decisions: an optimal sequence of $z_1, z_2, \ldots, z_k$ maximizes the objective function and satisfies the constraints. Moreover, we can apply dynamic programming to solve the parametric precedence-constraint 0/1 KP with right-hand side $M \in [a, b]$. Let KP(j, S) denote the problem
$$\text{Maximize} \quad \sum_{i=1}^{j} p_i z_i$$
$$\text{Subject to} \quad \sum_{i=1}^{j} d_i z_i \le S,$$
$$z_h \le z_k, \quad (h, k) \in A_i,\ 1 \le i \le j,$$
$$z_i = 0 \text{ or } 1, \quad 1 \le i \le j,$$

where $1 \le j \le m$ and $0 \le S \le M$. Note that KP(j, S) is a sub-problem of Problem KP with variables $z_1, z_2, \ldots, z_j$ and right-hand side $S$; Problem KP itself is KP(m, M). Let $f_k(s)$ be the value of an optimal solution to KP(k, s). From the principle of optimality it follows that
$$f_k(s) = \max\{ f_{k-1}(s),\ f_{k-1}(s - d_k) + p_k \}, \quad \text{subject to the precedence constraints}. \qquad (9)$$

Clearly, $f_m(M)$ is the value of an optimal solution to KP(m, M). It can be computed by beginning with $f_0(s) = 0$ for all $s > 0$ and $f_0(s) = -\infty$ for $s < 0$; then $f_1, f_2, \ldots, f_m$ can be successively computed using Eq. (9). Notice that $f_k(s)$ is an ascending step function; i.e., there are a finite number of points $s_1 < s_2 < \cdots < s_t$ such that $f_k(s_1) \le f_k(s_2) \le \cdots \le f_k(s_t)$. For the parametric precedence-constraint 0/1 KP, we solve the problem KP(m, M) for each $M \in [a, b]$ at the last stage.

3.3. A numerical example

Consider an example of a Web page with three image items (i.e., $P = \{d_1, d_2, d_3\}$), each item having three versions, and right-hand side $W \in [6.7, 30.8]$. The data sizes $w_{ij}$ and the values $v_{ij}$, $i, j = 1, 2, 3$, are
$$[w_{ij}] = \begin{bmatrix} 9.0 & 4.4 & 1.3 \\ 10.8 & 4.6 & 1.0 \\ 11.0 & 5.4 & 1.3 \end{bmatrix}, \qquad [v_{ij}] = \left[\sqrt{\frac{w_{ij}}{w_{i1}}}\right] = \begin{bmatrix} 1.0 & 0.7 & 0.4 \\ 1.0 & 0.7 & 0.3 \\ 1.0 & 0.7 & 0.3 \end{bmatrix}.$$

The content selection problem can be formulated as follows:
$$\text{Maximize} \quad v_0 = \sum_{i=1}^{3} \sum_{j=1}^{3} v_{ij} x_{ij}$$
$$\text{Subject to} \quad \sum_{i=1}^{3} \sum_{j=1}^{3} w_{ij} x_{ij} \le W, \quad W \in [6.7, 30.8],$$
$$\sum_{j=1}^{3} x_{ij} = 1, \quad 1 \le i \le 3,$$
$$x_{ij} = 0 \text{ or } 1, \quad 1 \le i \le 3,\ 1 \le j \le 3.$$

3.3.1. Transformation of the problem For i = 1, 2, 3, let yi1 = xi1, yi2 = xi1 + xi2, and yi3 = xi1 + xi2 + xi3 = 1. Then, the problem can be transformed as follows:

$$\text{Maximize} \quad 0.3y_{11} + 0.3y_{12} + 0.3y_{21} + 0.4y_{22} + 0.3y_{31} + 0.4y_{32} + 1$$
$$\text{Subject to} \quad 4.6y_{11} + 3.1y_{12} + 6.2y_{21} + 3.6y_{22} + 5.6y_{31} + 4.1y_{32} + 3.6 \le W, \quad W \in [6.7, 30.8],$$
$$y_{11} \le y_{12},\quad y_{21} \le y_{22},\quad y_{31} \le y_{32},$$
$$y_{11}, y_{12}, y_{21}, y_{22}, y_{31}, y_{32} = 0 \text{ or } 1.$$
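The coefficients of this transformed problem follow mechanically from the matrices $[w_{ij}]$ and $[v_{ij}]$ via $e_{ij} = v_{ij} - v_{ij+1}$ and $d_{ij} = w_{ij} - w_{ij+1}$, plus the absorbed constants. A short sketch (variable names are ours) recomputes them:

```python
# Example data from Section 3.3: rows are items, columns are versions.
w = [[9.0, 4.4, 1.3], [10.8, 4.6, 1.0], [11.0, 5.4, 1.3]]
v = [[1.0, 0.7, 0.4], [1.0, 0.7, 0.3], [1.0, 0.7, 0.3]]  # rounded sqrt(w_ij / w_i1)

# e_ij = v_ij - v_ij+1 and d_ij = w_ij - w_ij+1 for j = 1 .. J_i - 1.
e = [round(vi[j] - vi[j + 1], 1) for vi in v for j in range(len(vi) - 1)]
d = [round(wi[j] - wi[j + 1], 1) for wi in w for j in range(len(wi) - 1)]

# Constants absorbed into the objective and into the right-hand side W' = W - const_w.
const_v = round(sum(vi[-1] for vi in v), 1)
const_w = round(sum(wi[-1] for wi in w), 1)

print(e, d, const_v, const_w)
# → [0.3, 0.3, 0.3, 0.4, 0.3, 0.4] [4.6, 3.1, 6.2, 3.6, 5.6, 4.1] 1.0 3.6
```

These are exactly the coefficients, the "+1" objective constant, and the "+3.6" left-hand-side constant appearing in the transformed problem above.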

Let $z_1 = y_{11}$, $z_2 = y_{12}$, $z_3 = y_{21}$, $z_4 = y_{22}$, $z_5 = y_{31}$, $z_6 = y_{32}$. Clearly, the above problem is equivalent to the following problem, KP(6, M), $M \in [3.1, 27.2]$:
$$\text{Maximize} \quad 0.3z_1 + 0.3z_2 + 0.3z_3 + 0.4z_4 + 0.3z_5 + 0.4z_6$$
$$\text{Subject to} \quad 4.6z_1 + 3.1z_2 + 6.2z_3 + 3.6z_4 + 5.6z_5 + 4.1z_6 \le M, \quad M \in [3.1, 27.2],$$
$$z_1 \le z_2,\quad z_3 \le z_4,\quad z_5 \le z_6,$$
$$z_i = 0 \text{ or } 1, \quad i = 1, \ldots, 6.$$

3.3.2. Dynamic programming method

Let $f_k(s)$ be the value of an optimal solution to KP(k, s), where $s \in [0, 27.2]$ and $k = 1, \ldots, 6$. Clearly, $f_6(M)$ is the value of an optimal solution to KP(6, M). Applying Eq. (9), $f_6(M)$, $M \in [a, b]$, can be solved by starting with $f_0(s) = 0$ for all $s > 0$; then $f_1(s), f_2(s), \ldots, f_6(s)$, $s \in [0, 27.2]$, can be successively found. For example,
$$f_4(s) = \max_{z_4 = 0, 1} \begin{cases} 0.4 + f_3(s - 3.6), & z_4 = 1,\ s \ge 3.6, \\ f_2(s), & z_4 = 0, \end{cases}$$
where the case $z_4 = 0$ uses $f_2$ because the precedence constraint $z_3 \le z_4$ then forces $z_3 = 0$. Fig. 6 graphically shows $f_1(s), f_2(s), \ldots, f_5(s)$, $s \in [0, 27.2]$, and $f_6(s)$, $s \in [3.1, 27.2]$. The optimal values and the optimal solutions for $M \in [3.1, 27.2]$ are summarized in Table 1. For example, the optimal value of KP(6, 27.2) is $f_6(27.2) = 2.0$, attained by $(z_1, z_2, z_3, z_4, z_5, z_6) = (y_{11}, y_{12}, y_{21}, y_{22}, y_{31}, y_{32}) = (1, 1, 1, 1, 1, 1)$. Thus, the

Fig. 6. Functions of f1(s), f2(s), f3(s), f4(s), f5(s), and f6(s).
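Because the example KP has only six binary variables, the optimal values reported in Fig. 6 and Table 1 can be cross-checked by exhaustive enumeration. The sketch below is a verification aid, not the authors' dynamic program; note that KP(6, 16.4) has more than one maximizer, so only its optimal value is unique.

```python
from itertools import product

profits = [0.3, 0.3, 0.3, 0.4, 0.3, 0.4]
weights = [4.6, 3.1, 6.2, 3.6, 5.6, 4.1]
precedence = [(0, 1), (2, 3), (4, 5)]  # z1<=z2, z3<=z4, z5<=z6 (0-based indices)

def best(M):
    """Optimal value of KP(6, M) and one maximizing assignment."""
    best_val, best_z = 0.0, (0,) * 6
    for z in product((0, 1), repeat=6):
        if any(z[h] > z[k] for h, k in precedence):
            continue                      # precedence violated
        if sum(wi * zi for wi, zi in zip(weights, z)) > M + 1e-9:
            continue                      # capacity exceeded
        val = sum(pi * zi for pi, zi in zip(profits, z))
        if val > best_val + 1e-9:
            best_val, best_z = val, z
    return round(best_val, 1), best_z

print(best(27.2))   # optimal value 2.0, all six variables set
print(best(16.4))   # optimal value 1.4, as in the M in [15.4, 21.0) row
```

The run for M = 16.4 confirms that the tabulated solution (1, 1, 0, 1, 0, 1) achieves the optimal value 1.4, although an enumerator may report a different maximizer of the same value.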

Table 1. Summary of the optimal solutions

| Right-hand side | $f_6(M)$ | $(y_{11}, y_{12}, y_{21}, y_{22}, y_{31}, y_{32})$ | $v_0$ | $(x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}, x_{31}, x_{32}, x_{33})$ |
| M ∈ [3.1, 3.6)   | 0.3 | (0, 1, 0, 0, 0, 0) | 1.3 | (0, 1, 0, 0, 0, 1, 0, 0, 1) |
| M ∈ [3.6, 6.7)   | 0.4 | (0, 0, 0, 1, 0, 0) | 1.4 | (0, 0, 1, 0, 1, 0, 0, 0, 1) |
| M ∈ [6.7, 7.7)   | 0.7 | (0, 1, 0, 1, 0, 0) | 1.7 | (0, 1, 0, 0, 1, 0, 0, 0, 1) |
| M ∈ [7.7, 10.8)  | 0.8 | (0, 0, 0, 1, 0, 1) | 1.8 | (0, 0, 1, 0, 1, 0, 0, 1, 0) |
| M ∈ [10.8, 15.4) | 1.1 | (0, 1, 0, 1, 0, 1) | 2.1 | (0, 1, 0, 0, 1, 0, 0, 1, 0) |
| M ∈ [15.4, 21.0) | 1.4 | (1, 1, 0, 1, 0, 1) | 2.4 | (1, 0, 0, 0, 1, 0, 0, 1, 0) |
| M ∈ [21.0, 27.2) | 1.7 | (1, 1, 0, 1, 1, 1) | 2.7 | (1, 0, 0, 0, 1, 0, 1, 0, 0) |
| M = 27.2         | 2.0 | (1, 1, 1, 1, 1, 1) | 3.0 | (1, 0, 0, 1, 0, 0, 1, 0, 0) |

optimal solution for the content selection problem is $(x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}, x_{31}, x_{32}, x_{33}) = (1, 0, 0, 1, 0, 0, 1, 0, 0)$ and the optimal value is 3.0; that is, version 1 is selected for each item.

If another request for this page arrives and the adaptive server finds $b = 16$ Kbps and $t = 10$ s for this connection, then the total data size $W$ that the adaptive server may return to the client is
$$W = \frac{16 \times 10}{8} = 20 \text{ KB}.$$
Note that the adaptive server does not need to solve the problem KP(6, 20 − 3.6) anew: the optimal solution for KP(6, 16.4) can be found in Table 1. Since $M = 16.4$, we look down the $M \in [15.4, 21.0)$ row and find that the optimal solution is $(y_{11}, y_{12}, y_{21}, y_{22}, y_{31}, y_{32}) = (1, 1, 0, 1, 0, 1)$, so the optimal solution for the content selection problem is $(x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}, x_{31}, x_{32}, x_{33}) = (1, 0, 0, 0, 1, 0, 0, 1, 0)$. That is, the returned Web page is composed of version 1 for item 1 and version 2 for items 2 and 3.
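With the parametric solutions precomputed, the server's run-time work reduces to one multiplication and one table lookup. A minimal sketch, with function and variable names of our choosing, the y-vectors transcribed from Table 1, and a 16 Kbps bottleneck with a 10 s transmission budget assumed:

```python
def payload_budget(b_kbps, t_seconds):
    """Maximum page size W in KB for bottleneck bandwidth b and transmit budget t."""
    return b_kbps * t_seconds / 8.0

# Table 1, keyed by the right-hand-side interval [lo, hi) for M = W - 3.6.
table1 = [
    ((3.1, 3.6),   (0, 1, 0, 0, 0, 0)),
    ((3.6, 6.7),   (0, 0, 0, 1, 0, 0)),
    ((6.7, 7.7),   (0, 1, 0, 1, 0, 0)),
    ((7.7, 10.8),  (0, 0, 0, 1, 0, 1)),
    ((10.8, 15.4), (0, 1, 0, 1, 0, 1)),
    ((15.4, 21.0), (1, 1, 0, 1, 0, 1)),
    ((21.0, 27.2), (1, 1, 0, 1, 1, 1)),
]

def lookup(M):
    for (lo, hi), y in table1:
        if lo <= M < hi:
            return y
    return (1, 1, 1, 1, 1, 1)  # last row of Table 1: every original version fits

W = payload_budget(16, 10)   # 16 Kbps for 10 s
y = lookup(W - 3.6)          # M = 16.4 falls in [15.4, 21.0)
print(W, y)
```

This is the design pay-off of the parametric formulation: per-request cost is a table scan rather than a fresh knapsack solve.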

4. Experimental results

In order to test our optimization model for Web content adaptation, we built three Web servers: two adaptive and one non-adaptive. Both adaptive Web servers consisted of three major

modules (content analysis and transcoding, CPI filter, and content selection) as shown in Fig. 2. The difference between them is in the content selection module. One, denoted as Server 1, found the optimal solutions of the parametric LMCKP by the dynamic programming method in advance, after the Web page was created; when a request arrives, it just looks up the optimal solutions table. The other, denoted as Server 2, selects contents by using the greedy algorithm [14] to solve the LMCKP whenever a request arrives. The non-adaptive server is denoted as Server 3. A Linux operating system and an Apache server were selected as the development platform for the three servers; Apache is a well-known, open source Web server that performs well. The machines for the three servers are desktop computers with AMD K7-850 processors and 256 MB memory. The test Web page, a sub-page of the University's Web pages, consists of seven component items with a total data size of 140 KB: six image component items and one text item. Each image item has six versions. Servers 1, 2 and 3 were tested by two clients. The clients' browser was modified from Internet Explorer (IE) so that users can specify the expected time, the CPI data, and the URLs of the CPI profiles (see Fig. 7). Client 1 was a notebook PC with an Intel P3-650 and 256 MB memory, using 56 Kbps PPP dialup (campus dialup service) to connect to the campus Internet. The expected waiting time was set to 15 s when browsing Servers 1 and 2. For the measurement, the system clocks of the three servers and two clients were synchronized using the Network Time Protocol (NTP). Client 1 browsed the Web page 10 times. We use the t distribution with 9 degrees of


Fig. 7. An example of CC/PP browser.

freedom and a 95% confidence interval to estimate the delays. The average delays for Client 1 are shown in Table 2a. Table 2b shows the percentage of each measured delay out of the total delay. The other client, Client 2, was a Compaq Pocket PC using an IEEE 802.11b wireless LAN to connect to the campus Internet. The results are summarized in Tables 3a and 3b. Our experiments use the campus

Internet as a network testbed. There are many factors, such as irrelevant traffic in the network, buffer sizes, etc., that may influence (or pollute) the results; Tables 2 and 3 are only intended to give the reader a realistic feel for the response time and how the system works. From the theoretical point of view, the time complexity for Server 1 to pick an optimal solution from the optimal

Table 2a. Results for notebook PC with 56K dialup

|          | 2·Tprop (ms) | Tprobe (ms)  | Tproc (ms) | Ttrans (ms) | Ttotal (ms) | W (KB) |
| Server 1 | 179.5 ± 2.0  | 6000.1 ± 2.0 | …          | …           | …           | …      |
| Server 2 | 173.5 ± 2.8  | 6123.3 ± 1.8 | …          | …           | …           | …      |
| Server 3 | 180.2 ± 2.0  | –            | …          | …           | …           | …      |

for (i, j) 2 A 0 , k 2 K and t 2 I. We force xkijt ¼ 0; 8 t 62 I k . Since the goal is to minimize the connection rejection probability, we can equivalently maximize the number of connections accepted by the network. The problem can thus be formulated as follows: X Maximize bk  xkSS k S k BðkÞ; ð1Þ k2K

s.t.

X

d k  xkijt 6 C ij

X

xkijt

k2K

ð2Þ

M(t)

X

3

ðj;lÞ2A0

2 1 1

2

3

4 5

6

7

Time Slots

Fig. 5. Number M(t) of active connections in each time slot.

xkjlt



8 1 > < ¼ 0 > : 1

8ði; jÞ 2 A; t 2 I;

ði;jÞ2A0

if j 2 SN if j 2 N if j 2 TN

8k 2 K; j 2 N 0 ; t 2 I ð3Þ


A. Capone et al. / Computer Networks 50 (2006) 966–981

$$x^k_{ijt} = x^k_{ij\,B(k)} \quad \forall k \in K,\ (i, j) \in A',\ t \in I_k, \qquad (4)$$
$$x^k_{ijt} \in \{0, 1\} \quad \forall k \in K,\ (i, j) \in A',\ t \in I_k. \qquad (5)$$

The objective function (1) is the weighted sum of the connections accepted in the network, where $b_k$ represents the benefit associated with connection k. Different settings of $b_k$ are possible, and they reflect different behaviors of the model, as discussed later. Constraints (2) ensure that, at each time slot, the total flow due to all the connections that use arc (i, j) does not exceed the arc capacity $C_{ij}$, for all $(i, j) \in A$. Constraints (3) represent the flow balance equations expressed for each node belonging to the extended graph G', in each time slot $t \in I$. Note that these constraints define a path for each connection between its source and destination nodes. Constraints (4) impose that the accepted connections cannot be aborted or rerouted during their entire lifetime. Finally, requiring that the decision variables in (5) are binary implies that each connection is routed on a single path.

The online QoS routing algorithms we consider in this paper do not reject a new connection $(S_k, T_k, d_k)$ if at least one path with residual available bandwidth greater than or equal to the requested bandwidth $d_k$ exists. To account for this feature, the objective function (1) must be set properly. To this purpose it is sufficient to set
$$b_k = 2^{N_c - k}, \qquad (6)$$
having numbered the $N_c$ connections from 1 to $N_c$ according to their arrival times. With this setting of $b_k$, the benefit of accepting connection k is always greater than the benefit of accepting, instead, all the connections from k + 1 to $N_c$, since
$$2^{N_c - k} > \sum_{i=k+1}^{N_c} 2^{N_c - i}. \qquad (7)$$
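Inequality (7) holds because the right-hand side is a geometric sum equal to $2^{N_c - k} - 1$. A one-loop check, with $N_c$ chosen arbitrarily:

```python
# Check b_k = 2**(Nc - k): accepting connection k outweighs accepting
# all later connections k+1 .. Nc combined (inequality (7)).
Nc = 12
for k in range(1, Nc + 1):
    tail = sum(2 ** (Nc - i) for i in range(k + 1, Nc + 1))
    assert 2 ** (Nc - k) > tail   # tail equals 2**(Nc - k) - 1
print("inequality (7) holds for Nc =", Nc)
```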

This choice of the weights bk allows the mathematical formulation to model very closely the behavior of real online routing algorithms. To verify the accuracy of the model we have considered a simple

scenario where a single link connects a source/destination pair. We have obtained the performance in the case of a channel capacity equal to 20 bandwidth units, assuming the requested bandwidth $d_k$ to be uniformly distributed between 1 and 3 units and the lifetime $s_k$ to be exponentially distributed with mean 15 s. In this simple case all the routing algorithms provide the same performance, since only one path exists between source and destination. The rejection probability shown in Fig. 3(a) has been computed using the multi-class Erlang formula. The bound provided by the IR model completely overlaps the online routing performance. Note that different choices of $b_k$ provide different IR model performance. For instance, selecting $b_k = 1$ for all k we obtain the performance shown in Fig. 3(b). The large reduction in rejection probability is expected, since the optimization of the objective function will result in rejecting connections with high bandwidth requirements and long lifetimes in favor of smaller and shorter ones. The difference between the bound and the real performance increases as the network load increases.

The above ILP formulation is quite general, and by simply modifying the constraints (2)–(5) it allows different problems to be solved. In the following, a few alternative modeling formulations are outlined. Removing constraints (4), the network can change the path of a connection at each time slot. If the constraints (5) are relaxed, the problem formulation is no longer an integer program and its solution requires less computing time and memory; however, in this formulation each connection can be split over multiple paths. Such splitting, which requires packet reordering, is not always tolerated by end users' applications that may use transport protocols like TCP. Finally, it is possible to consider link capacities that change slot by slot. To include this feature, which makes it possible to take into account link failures or variations in the available capacity along a path, it is sufficient to substitute the constant parameter $C_{ij}$ with a time-varying one, $C_{ijt}$. The above model has been implemented using the AMPL language [19] and solved using the


CPLEX solver [18]. This problem formulation, however, involves a large number of decision variables, $mN_c(2N_c - 1) \approx 2mN_c^2$, since a variable is associated with each arc (m), each connection ($N_c$) and each time interval ($2N_c - 1$). For practical-size networks and for a large number of offered connections, the memory occupation requested by CPLEX can be too large. Fortunately, the number of variables to be considered can be reduced without losing optimality in the solution by observing that not all the $2N_c - 1$ time slots must be considered in the optimization process, but only those that correspond to local maxima of M(t). In fact, the connection requests in non-local-maxima time slots are a subset of those active in the relevant local maxima. This implies that the corresponding requirements in such time slots are dominated by those corresponding to local maxima and therefore have no impact on the optimum solution. The reduction in the number of decision variables observed in the several examples considered has been remarkable. However, in the worst case half of the time slots correspond to local maxima, and the number of variables to be considered is bounded by $mN_c^2$.

4.2. Min-cut model

In this section, we propose a second mathematical model that allows us to determine a lower bound on the connection rejection probability. The solution of this model has computing times and memory occupation considerably lower than the previous one. However, in some scenarios the bound obtained can be quite a bit lower than the value provided by IR. Let us consider a directed graph G = (N, A) defined by a set of nodes, N, and a set of arcs, A, each arc characterized by a capacity $C_{ij}$. A set of source/destination pairs $K = \{1, \ldots, N_s\}$, indicated by $S_i$ and $T_i$, respectively, $i \in K$, is also assigned. Each source $S_i$ generates a flow $\alpha f_i$ towards destination $T_i$ that can be split over multiple paths. The problem is to find the maximum $\alpha$, indicated by $\alpha^*$, such that for all $i \in K$ the flow quantities $\alpha^* f_i$ can be routed to their destinations.


The solution to this problem, obtained via linear programming techniques, provides the maximum multi-commodity flow F_max = Σ_{i∈K} α* f_i. Note that F_max represents a lower bound to the capacity of the minimum multi-commodity cut of the network, as discussed in [21,22]. Once F_max has been obtained, the connection rejection probability for the given network scenario is obtained by using the multi-class Erlang formula with F_max servers [20], which is briefly reviewed in the following.

Let us consider N different traffic classes offered to a network system with C servers. The connections belonging to class i request d_i bandwidth units. The connection arrival process is a Poisson process with average rate λ_i, while the connection duration is distributed according to a generic distribution f_{H_i}(h_i). Let Λ = Σ_{i=1}^N λ_i be the total load offered to the network. An appropriate state description of this system is n = (n_1, ..., n_N), where n_i, i = 1, ..., N, is the number of connections belonging to class i that occupy the servers. The set Ω of all the possible states is expressed as Ω = {n | X ≤ C}, where X, the total occupation of all the servers, is given by X = Σ_{i=1}^N n_i d_i. If we indicate with A_i = λ_i E[H_i] the traffic offered to the network by each class, the steady-state probability of each state is simply given by the multi-class Erlang formula:

p(n) = (1/G) Π_{i=1}^N A_i^{n_i} / n_i!,   (8)

where G is the normalization constant that ensures that the p(n) sum to 1; it therefore has the following expression:

G = Σ_{n∈Ω} Π_{i=1}^N A_i^{n_i} / n_i!.   (9)

Using the steady-state probabilities calculated with Eq. (8), we can derive the loss probability of the generic class i, P_i, as follows:

P_i = Σ_{n∈B_i} p(n),   (10)

where B_i is the set of the blocking states for class i, defined as B_i = {n | C − d_i < X ≤ C}.

A. Capone et al. / Computer Networks 50 (2006) 966–981

The overall connection rejection probability, p_rej, is then given by:

p_rej = (Σ_{i=1}^N A_i P_i) / (Σ_{i=1}^N A_i).   (11)

If we substitute C with the maximum multi-commodity flow value F_max in all the above expressions, we can compute the connection rejection probability using Eq. (11). In network topologies with high link capacities, F_max can assume high values, and the enumeration of all the allowed states becomes computationally infeasible, since the cardinality of the state space is of the order of F_max^N [27]. In these network scenarios, Eqs. (8)–(11) are computationally too complex, so we apply the algorithm described in [27,28], which computes the blocking probabilities recursively, exploiting the peculiar properties of the normalization constant G. For network topologies with very high link capacities we implemented the inversion algorithm proposed in [29] to compute the blocking probabilities for each class.
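The recursion of [27,28] is in the Kaufman-Roberts style: instead of enumerating the state space, it recurses on the total occupation. A minimal sketch (our own illustration, not the authors' implementation), with C playing the role of F_max:

```python
# Kaufman-Roberts style recursion: A[i] = lambda_i * E[H_i] is the load
# offered by class i and d[i] its bandwidth demand; the unnormalized
# occupancy distribution q(x) is built in O(C * N) time.

def kaufman_roberts(C, A, d):
    """Per-class blocking probabilities P_i for a link with C servers."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for x in range(1, C + 1):
        q[x] = sum(A[i] * d[i] * q[x - d[i]]
                   for i in range(len(A)) if d[i] <= x) / x
    G = sum(q)  # normalization constant
    # class i is blocked whenever the occupation exceeds C - d_i
    return [sum(q[C - d[i] + 1:]) / G for i in range(len(A))]

# Sanity check: a single class with d = 1 reduces to Erlang-B.
print(round(kaufman_roberts(2, [1.0], [1])[0], 3))  # Erlang-B(2, 1) = 0.2
```

The recursion avoids the F_max^N state enumeration entirely, which is what makes the Min-Cut bound cheap to evaluate.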

5. Numerical results

In this section we compare the performance of the Virtual Flow Deviation algorithm, the Min-Hop Algorithm (MHA) and MIRA, measured as the percentage of rejected calls versus the average total load offered to the network, with the bounds provided by the mathematical models presented in the previous section. Different network scenarios are considered in order to cover a wide range of possible environments.

The first scenario we consider is illustrated in Fig. 6. In this network the links are unidirectional with capacity equal to 120 bandwidth units; in the following, capacities and flows are all given in bandwidth units. The network traffic, offered through the source nodes S1, S2 and S3, is unbalanced, since sources S2 and S3 generate a traffic four times larger than S1. Each connection requires a bandwidth uniformly distributed between 1 and 3. The lifetime of the connections is assumed to be exponentially distributed with an average of 15 s.
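The offered traffic of this scenario (Poisson connection arrivals, bandwidth uniformly distributed between 1 and 3 units, exponential lifetimes with mean 15 s) can be sampled as in the sketch below; the arrival rate and horizon are illustrative values of ours, and reading "between 1 and 3" as a continuous uniform is our assumption:

```python
# Illustrative generator for the scenario's offered traffic.
import random

def generate_connections(rate, horizon, mean_lifetime=15.0, rng=random):
    """Return (arrival_time, bandwidth, duration) tuples up to `horizon`."""
    conns = []
    t = 0.0
    while True:
        t += rng.expovariate(rate)       # Poisson arrivals
        if t > horizon:
            return conns
        bw = rng.uniform(1.0, 3.0)       # requested bandwidth units
        dur = rng.expovariate(1.0 / mean_lifetime)  # exponential lifetime
        conns.append((t, bw, dur))

conns = generate_connections(rate=50.0, horizon=10.0, rng=random.Random(42))
```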

Fig. 6. Network topology with unbalanced offered load: the source/destination pairs S2–T2 and S3–T3 offer to the network a traffic load which is four times higher than that offered by the pair S1–T1.

In this simple topology connections S1–T1 and S3–T3 have only one path, while connections S2–T2 have two different paths. The rejection probability versus the offered load for MIRA, MHA, VFD and for the IR and Min-Cut models is shown in Fig. 7.

Fig. 7. Connection rejection probability versus the average total load offered to the network of Fig. 6.

The poor performance of MIRA is due to the fact that it does not consider any information about the load distribution in the network. In this particular topology, due to the critical links (1, 2), (2, 3) and (8, 9), S2–T2 connections are routed on the path (5–8–9–6), which contains the minimum number of critical links. MHA, which selects for connections S2–T2 the path with the minimum number of hops, routes the traffic as MIRA does, and their performances overlap.

Better performance is achieved by VFD. Since its behavior depends on the number of virtual connections N′_v used in the routing phase, we have considered three cases: N′_v = 0, N′_v = 0.5·N_v, and N′_v = N_v = ⌊N_max − N_A⌋. In the first case, even though no information on network traffic statistics is taken into account, the VFD algorithm achieves much better performance than the previous schemes, owing to the better traffic balance provided by the Flow Deviation algorithm; only when the offered load reaches very high values does the improvement reduce. The third case corresponds to the VFD version described in Section 3.2, which takes most advantage of traffic information: the best performance has been measured in this case, and the gain over the existing algorithms is provided even at high loads. An intermediate value of N′_v (case 2) provides, as expected, intermediate performance. As for the two mathematical models, we observe that the curve of the approximate Min-Cut model overlaps that of the IR model. Note that VFD performs very close to the theoretical bounds in this scenario.

In the same network scenario we have verified that the VFD algorithm practically reaches the

Fig. 8. Connection rejection probability versus the average total load offered to the network of Fig. 6: (a) with link capacity equal to 24 and bandwidth requests always equal to 1; (b) with link capacity equal to 60 and bandwidth requests always equal to 1; (c) with link capacity equal to 60 and bandwidth requests uniformly distributed between 1 and 3.


Fig. 9. Network topology with a large number of critical links.


bound provided by the IR model when the exact future connection requests are known. To investigate the impact of the connection lifetime distribution, we have considered a Pareto distribution with the same average as the previous exponential distribution and several shape parameters (α = 1.9, 1.95, 2.1, 3). The performance observed in all cases is within 1% of that shown in Fig. 7. To test the sensitivity of the performance to the network capacity, we have considered different parameters for the network in Fig. 6. The results, shown in Fig. 8, are very similar to those of Fig. 7. It is worthwhile to observe that in all the different scenarios considered the approximate model provides results very close to IR. This validates the approximate model, which can be easily evaluated even in more complex networks.

The second network considered is shown in Fig. 9, where an equal traffic is offered at S1 and S2. All links are bidirectional, with the same capacity equal to 120. Also in this scenario VFD outperforms MIRA and MHA, as shown in Fig. 10. MHA is the worst, due to its poor choice of paths, which are used until saturation before switching to other paths with less utilized links. MIRA shows a performance that worsens as the offered load increases. In fact, due to the critical links identified, (0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4) for connections S2–T2 and (1, 0), (1, 2), (1, 4), (0, 3), (2, 3), (4, 3) for connections S1–T1, the path (1–2–3) is the only one available for connections S1–T1 and the path (0–2–4) the only one for connections S2–T2.

Fig. 10. Connection rejection probability versus the average total load offered to the network of Fig. 9.


Fig. 11. Network topology with a large number of nodes, links, and source/destination pairs.

Due to computational complexity, only the performance of the Min-Cut model is shown in Fig. 10. A more realistic scenario, first proposed in [14,15], is shown in Fig. 11. The links marked by heavy solid lines have a capacity of 480 while the others have a capacity equal to 120, in order to replicate the ratio between OC-48 and OC-12 links. The performance for the case of balanced offered traffic, considered in [14], is shown in Fig. 12.

VFD and MIRA achieve almost the same performance and are much better than MHA. VFD presents a slight advantage at low load, since it starts rejecting connections at an offered load 10% higher than MIRA: we have measured that a rejection probability of 10^−4 is reached at an offered load of 420 connections/s by MIRA, as opposed to 450 connections/s for VFD. Also in this case the IR model is computationally too demanding. Therefore, we applied the Min-Cut model with the inversion algorithm proposed in [29], as the maximum multi-commodity flow is equal to 1200 bandwidth units.

Fig. 12. Connection rejection probability versus the average total load offered to the network of Fig. 11.

If we consider on the same topology an unbalanced load where, for instance, traffic S1–T1 is four times the traffic of the other sources, the improvement in performance obtained by VFD is remarkable. The results shown in Fig. 13 confirm that unbalanced situations are more demanding on network resources and that the rejection probability for the same given offered load is much higher. In these more critical network operating conditions VFD still closely approaches the lower bound provided by the Min-Cut model.

Fig. 13. Connection rejection probability versus the average total load offered to the network of Fig. 11, where the traffic between S1–T1 is four times higher than the traffic produced by the other pairs.

6. Conclusions

We have discussed and analyzed the performance of online QoS routing algorithms for bandwidth-guaranteed connections in MPLS and label-switched networks. To provide a theoretical bound on the performance achievable by dynamic online QoS routing algorithms, we have proposed two novel mathematical models. The first is an Integer Linear Programming model that extends the well-known maximum multi-commodity flow problem to include connection arrival times and durations, while the second, which has a much lower complexity, is based on the application of the multi-class Erlang formula to a link with capacity equal to the residual capacity of the minimum network multi-commodity cut. We have shown that the Virtual Flow Deviation scheme reduces the blocking probability with respect to previously proposed routing schemes and approaches the lower bounds provided by the mathematical models in the considered network scenarios.

References

[1] Hui-Lan Lu, I. Faynberg, An architectural framework for support of quality of service in packet networks, IEEE Communications Magazine 41 (6) (2003) 98–105.
[2] E. Rosen, A. Viswanathan, R. Callon, Multiprotocol Label Switching Architecture, in: IETF RFC 3031, January 2001.
[3] L. Berger (Ed.), Generalized Multi-Protocol Label Switching (GMPLS) Signaling Functional Description, in: IETF RFC 3471, January 2003.


[4] A. Banerjee, L. Drake, L. Lang, B. Turner, D. Awduche, L. Berger, K. Kompella, Y. Rekhter, Generalized multiprotocol label switching: an overview of signaling enhancements and recovery techniques, IEEE Communications Magazine 39 (7) (2001) 144–151.
[5] S. Chen, K. Nahrstedt, An overview of quality-of-service routing for the next generation high-speed networks: problems and solutions, IEEE Network 12 (6) (1998) 64–79.
[6] J.L. Marzo, E. Calle, C. Scoglio, T. Anjali, QoS online routing and MPLS multi-level protection: a survey, IEEE Communications Magazine 41 (10) (2003) 126–132.
[7] Bin Wang, Xu Su, C.L.P. Chen, A new bandwidth guaranteed routing algorithm for MPLS traffic engineering, in: IEEE International Conference on Communications, ICC 2002, vol. 2, 2002, pp. 1001–1005.
[8] Z. Wang, J. Crowcroft, QoS routing for supporting resource reservation, IEEE Journal on Selected Areas in Communications (September) (1996).
[9] A. Orda, Routing with end to end QoS guarantees in broadband networks, in: IEEE INFOCOM '98, March 1998.
[10] B. Awerbuch et al., Throughput competitive on-line routing, in: 34th Annual Symposium on Foundations of Computer Science, Palo Alto, CA, November 1993.
[11] S. Suri, M. Waldvogel, D. Bauer, P.R. Warkhede, Profile-based routing and traffic engineering, Computer Communications 26 (4) (2003) 351–365.
[12] D.O. Awduche, L. Berger, D. Gan, T. Li, V. Srinivasan, G. Swallow, RSVP-TE: Extensions to RSVP for LSP tunnels, in: IETF RFC 3209, December 2001.
[13] R. Guerin, D. Williams, A. Orda, QoS routing mechanisms and OSPF extensions, in: Proceedings of Globecom, 1997.
[14] Murali S. Kodialam, T.V. Lakshman, Minimum interference routing with applications to MPLS traffic engineering, in: Proceedings of INFOCOM, 2000, pp. 884–893.
[15] Koushik Kar, Murali Kodialam, T.V. Lakshman, Minimum interference routing of bandwidth guaranteed tunnels with MPLS traffic engineering applications, IEEE Journal on Selected Areas in Communications 18 (12) (2000).
[16] A. Capone, L. Fratta, F. Martignon, Dynamic routing of bandwidth guaranteed connections in MPLS networks, International Journal on Wireless and Optical Communications 1 (1) (2003) 75–86.
[17] R.K. Ahuja, T.L. Magnanti, J.B. Orlin, Network Flows, Prentice-Hall, 1993.
[18] ILOG CPLEX. Available at: .
[19] AMPL: A Modeling Language for Mathematical Programming. Available at: .
[20] J.F.P. Labourdette, G.W. Hart, Blocking probabilities in multitraffic loss systems: insensitivity, asymptotic behavior, and approximations, IEEE Transactions on Communications 40 (1992) 1355–1366.
[21] Tom Leighton, Satish Rao, Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms, Journal of the ACM 46 (6) (1999).
[22] Y. Aumann, Y. Rabani, An O(log k) approximate min-cut max-flow theorem and approximation algorithm, SIAM Journal on Computing 27 (1) (1998) 291–301.
[23] R. Guerin, H. Ahmadi, M. Naghshineh, Equivalent capacity and its application to bandwidth allocation in high speed networks, IEEE Journal on Selected Areas in Communications (September) (1991) 968–981.
[24] J.A. Schormans, J. Pitts, K. Williams, L. Cuthbert, Equivalent capacity for on/off sources in ATM, Electronics Letters 30 (21) (1994) 1740–1741.
[25] L. Fratta, M. Gerla, L. Kleinrock, The flow deviation method: an approach to store-and-forward network design, Networks 3 (1973) 97–133.
[26] D. Bertsekas, R. Gallager, Data Networks, Prentice-Hall, 1987.
[27] J.S. Kaufman, Blocking in a shared resource environment, IEEE Transactions on Communications 29 (10) (1981) 1474–1481.
[28] A.A. Nilsson, M. Perry, A. Gersht, V.B. Iversen, On multi-rate Erlang-B computations, in: 16th International Teletraffic Congress, ITC '16, Elsevier Science, 1999, pp. 1051–1060.
[29] G.L. Choudhury, K.K. Leung, W. Whitt, An inversion algorithm to compute blocking probabilities in loss networks with state-dependent rates, IEEE/ACM Transactions on Networking 3 (5) (1995) 585–601.

Antonio Capone received the Laurea degree (MS degree equivalent) and the PhD degree in telecommunication engineering from the Politecnico di Milano in July 1994 and June 1998, respectively. In 2000, he was a visiting scientist at the University of California, Los Angeles. He is now an associate professor in the Department of Electronics and Information at the Politecnico di Milano. His current research activities include packet access in wireless cellular network, routing and MAC for multihop wireless networks, congestion control and QoS issues of IP networks, network planning and optimization. He is a member of the IEEE and the IEEE Communications and Vehicular Technology Societies.

Luigi Fratta received the doctorate degree in electronics engineering from the Politecnico di Milano, Italy, in 1966. From 1967 to 1970, he worked at the Laboratory of Electrical Communications, Politecnico di Milano. As a research assistant in the Computer Science Department, University of California, Los Angeles (UCLA), he participated in data network design under the ARPA project from 1970 to 1971. From November 1975 to September 1976, he was at the

Computer Science Department of IBM Thomas J. Watson Research Center, Yorktown Heights, New York, working on modeling, analysis and optimization techniques for teleprocessing systems. In 1979, he was a visiting associate professor in the Department of Computer Science at the University of Hawaii. In the summer of 1981, he was at the Computer Science Department, IBM Research Center, San Jose, California, working on local area networks. During the summers of 1983, 1989, and 1992, he was with the Research in Distributed Processing Group, Department of Computer Science, UCLA, working on fiber optic local area networks. During the summer of 1986, he was with Bell Communication Research working on metropolitan area networks. In 1994, he was a visiting scientist at NEC Network Research Laboratory, Japan. Since 1980, he has been a full professor in the Dipartimento di Elettronica e Informazione at the Politecnico di Milano. His current research interests include computer communication networks, packet switching networks, multiple access systems, modeling and performance evaluation of communication systems, local area networks, wireless cellular systems, and integrated services over IP networks. Dr. Fratta is a fellow of IEEE.

Fabio Martignon received the Laurea and the PhD degree in telecommunication engineering from the Politecnico di Milano in October 2001 and May 2005, respectively. He is now an assistant professor in the Department of Management and Information Technology at the University of Bergamo. His current research activities include routing for multihop wireless networks, congestion control and QoS routing over IP networks.

Computer Networks 50 (2006) 982–1002 www.elsevier.com/locate/comnet

Pricing differentiated services: A game-theoretic approach

Eitan Altman (a), Dhiman Barman (b), Rachid El Azouzi (c), David Ros (d,*), Bruno Tuffin (e)

(a) INRIA, B.P. 93, 2004 Route des Lucioles, 06902 Sophia-Antipolis Cedex, France
(b) Department of Computer Science, Boston University, 111 Cummington Street, Boston, MA 02215, USA
(c) Université d'Avignon et des Pays de Vaucluse (IUP), LIA—CERI, 339 Chemin des Meinajariès, BP 1228, 84911 Avignon Cedex 9, France
(d) GET/ENST Bretagne, Rue de la Châtaigneraie, CS 17607, 35576 Cesson Sévigné Cedex, France
(e) IRISA/INRIA Rennes, Campus de Beaulieu, 35042 Rennes Cedex, France

Received 16 November 2004; received in revised form 16 May 2005; accepted 26 June 2005. Available online 8 August 2005.
Responsible Editor: J.C. de Oliveira

Abstract

The goal of this paper is to study pricing of differentiated services and its impact on the choice of service priority at equilibrium. We consider both TCP connections as well as noncontrolled (real-time) connections. The performance measures (such as throughput and loss rates) are determined according to the operational parameters of a RED (Random Early Discard) buffer management scheme. The latter is assumed to be able to give differentiated services to the applications according to their choice of service class. We consider a service differentiation for both TCP and real-time traffic in which the quality of service (QoS) of connections is not guaranteed, but by choosing a better (more expensive) service class, the QoS parameters of a session can improve (as long as the service classes of the other sessions are fixed). The choice of a service class of an application will depend both on the utility as well as on the cost it has to pay. We first study the performance of the system as a function of the connections' parameters and their choice of service classes. We then study the decision problem of how to choose the service classes. We model the problem as a noncooperative game. We establish conditions for an equilibrium to exist and to be uniquely defined. We further provide conditions for convergence to equilibrium from nonequilibria initial states. We finally study the pricing problem of how to choose prices so that the resulting equilibrium would maximize the network benefit.
© 2005 Elsevier B.V. All rights reserved.

A shorter version of this paper appeared in the Proceedings of the Third IFIP-TC6 Networking Conference [1].
* Corresponding author. Tel.: +33 2 99 12 70 46.
E-mail addresses: [email protected] (E. Altman), [email protected] (D. Barman), [email protected] (R.E. Azouzi), [email protected] (D. Ros), Bruno.Tuffi[email protected] (B. Tuffin).

1389-1286/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2005.06.008

E. Altman et al. / Computer Networks 50 (2006) 982–1002


Keywords: TCP; RED/AQM; Nash equilibrium; Pricing; Service differentiation

1. Introduction

We study in this paper the performance of competing connections that share a bottleneck link. Both TCP connections with controlled rate and CBR (Constant Bit Rate) connections are considered. A RED active queue management (AQM) algorithm is used for the early dropping of packets. We allow for service differentiation between the connections through the rejection probability (as a function of the average queue size), which may depend on the connection (or on the connection class). More specifically, we consider a buffer management scheme that uses a single averaged queue length to determine the rejection probabilities (similar to the way it is done in the RIO-C (coupled RIO) buffer management [2]); for any given averaged queue size, packets belonging to connections with higher priority have a smaller probability of being rejected than those belonging to lower-priority classes. To obtain this differentiation in loss probabilities, we assume that the loss curve of RED is scaled by a factor that represents the priority level of the application. We obtain various performance measures of interest, such as the throughput, the average queue size and the average drop probability.

We then address the question of the choice of priorities. Given utilities that depend on the performance measures on one hand and on the cost for a given priority on the other hand, the sessions in the system are faced with a noncooperative game in which the choice of priority of each session has an impact on the quality of service of other sessions. For the case of CBR traffic, we establish conditions for an equilibrium to exist. We further provide conditions for convergence to equilibrium from nonequilibria initial states.
The game formulation of the problem arises naturally, since a classical optimization approach, in which a common objective function is maximized, is not realistic in IP networks; indeed, it is quite rare that users of a network collaborate with each other (or even "know" each other).

Finally we study numerically the pricing problem of how the network should choose prices so that the resulting equilibrium would maximize its benefit. We briefly mention some recent work in that area. Ref. [3] has considered a related problem where the traffic generated by each session was modeled as a Poisson process, and the service time was exponentially distributed. The decision variables were the input rates and the performance measure was the goodput (output rates). The paper restricted itself to symmetric users and symmetric equilibria and the pricing issue was not considered. In this framework, with a common RED buffer, it was shown that an equilibrium does not exist. An equilibrium was obtained and characterized for an alternative buffer management that was proposed, called VLRED. We note that in contrast to [3], since we also include in the utility of CBR traffic a penalty for losses (which is supported by studies of voice quality in packet-based telephony [4]), we do obtain an equilibrium when using RED. For other related papers, see for instance [5] (in which a priority game is considered for competing connections sharing a drop-tail buffer), [6] as well as the survey [7]. In [8], the authors present mechanisms (e.g., AIMD of TCP) to control end-user transmission rate into differentiated services Internet through potential functions and corresponding convergence to a Nash equilibrium. The approach of our pricing problem is related to the Stackelberg methodology for hierarchical optimization: for a fixed pricing strategy one seeks the equilibrium among the users (the optimization level corresponding to the ‘‘follower’’), and then the network (considered as the ‘‘leader’’) optimizes the pricing strategy. This type of methodology has been used in other contexts of networking in [9,10]. The structure of this paper is as follows. 
In Section 2 we describe the model of RED, then in Section 3 we compute the throughputs and the loss probabilities of TCP and of CBR connections for given priorities chosen by the connections. In Section 4 we introduce the model for competition


between connections at given prices. In Section 5 we focus on the game in the case of only CBR connections or only TCP connections and provide properties of the equilibrium: existence, uniqueness and convergence. Remark that isolating elastic (i.e., TCP) flows from real-time (i.e., UDP/CBR) flows, that is, mapping TCP and UDP flows to two different service classes, is a fairly common way of protecting TCP traffic from UDP flows in a differentiated-services architecture. Note that, inside each service class, we consider that flows have different parameters (like, say, different round-trip times). In Section 6 we provide an algorithm for computing the Nash equilibrium in the symmetric case. Optimal pricing is then discussed in Section 7. We present numerical examples in Section 8 to validate the model.

2. The model

The main goal of the Random Early Discard (RED) algorithm is to provide congestion avoidance (that is, an operating region of low delay and high throughput) by trying to control the average queue length at a router [11]. A RED-enabled router estimates the average queue length q by means of an exponentially-weighted moving average; this estimate is updated with every incoming packet as q ← (1 − w_q)q + w_q Q, where Q denotes here the instantaneous queue length "seen" by the packet, and w_q ∈ [0, 1] is the averaging weight (the lower the value of w_q, the longer the "memory" of the estimator). Here we assume that the time-averaging parameters of RED are such that the average queue size, and hence the drop probabilities p_i, have negligible oscillations. We are aware of the fact that for some RED parameters this may not be the case, and that the interaction between RED and TCP can lead to instabilities if the parameters are not chosen correctly.

This average queue value is then compared to two thresholds q_min and q_max, with q_min < q_max, in order to decide whether or not the incoming packet should be dropped. The drop probability is 0 if q ≤ q_min, 1 if q ≥ q_max, and p_max(x − q_min)/(q_max − q_min) if q = x with q_min < x < q_max; the latter is the congestion avoidance mode of operation. p_max is the value the drop probability tends to as the average queue tends to q_max (from the left). This is illustrated in Fig. 1.

Fig. 1. Drop probability in RED as a function of q.

In a best-effort network, the value of p_max is the same for all flows sharing the buffer, whereas in a network implementing service differentiation packets may "experience" different values of p_max, according to the service class they belong to; as we will see below, it is the latter case we are focusing on. The purpose of the early discarding of packets (i.e., dropping a packet before the actual physical queue is full) is to signal the sources that implement congestion-control mechanisms, like TCP sources, to reduce their sending rates, in order to prevent heavy congestion. The random nature of drops aims at avoiding synchronization of flows having similar round-trip times [11], i.e., all sources increasing and decreasing their congestion windows in unison, leading to strong oscillations of queue lengths and lower throughput.

We consider a set N containing N TCP flows (or aggregates of flows) and a set I containing I real-time CBR flows that can be differentiated by RED; they all share a common buffer, yet RED handles them differently.1 We assume that they all share common values of q_min and q_max, but each flow i may have a different value p(i) of p_max, leading to a differentiated treatment. In other words, the slope t_i of the linear part of the curve in Fig. 1 depends on the flow i:

1 RED punishes aggressive flows more by dropping more packets from those flows.


t_i = p(i) / (q_max − q_min).

Denote t = (t_i, i ∈ I ∪ N). We identify t_i as the priority class of a connection. The service rate of the bottleneck router is given by μ.

2.1. Practical considerations

Let us add a few remarks concerning practical issues of this proposal, like scalability and implementation complexity. First, in a DiffServ-like architecture [12], users may select a specific QoS treatment on a packet-per-packet basis, and that treatment corresponds precisely to a RED-like AQM policy that may drop packets with a probability that depends on a tag carried by the packet (this is how the Assured Forwarding per-hop behavior [13] operates); the tag may well be set by the user to signal how the packet should be treated by the core routers. So, in the context of our proposal, from a practical (i.e., implementation) viewpoint, a user choosing her own p(i) in the router requires just a straightforward setting of the QoS tag she puts on her packets. On the other hand, letting a user choose the thresholds q_min and q_max does not seem realistic: the feasible values of the thresholds depend on link speeds and on the actual, "physical" capacity of router queues, which may vary from one link/router to another [11].

The fact that each source may choose a different value for the slope could cause problems in scaling our approach to large networks or to a large number of flows. This scaling problem can, however, be solved by using the following distributed approach: the RED queue could restrict itself to putting on each packet the value of q at a given time. The decision of whether the packet should be dropped or not, depending on the slope that corresponds to the source of the packet, can then be delegated to the edge router that corresponds to that connection. (In a differentiated-services environment, there are edge routers that behave as policers, i.e., they can mark or drop packets that do not comply with the user's type.)
The edge routers are directly connected to the corresponding sources so it is much easier to take the dropping decisions there. Note that our


analysis does not depend on how exactly congestion signals are conveyed to a given source so using the above approach does not change our results.
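The per-flow RED rule described above (an EWMA of the queue length plus a linear drop curve scaled per flow) can be sketched as follows; the parameter values are illustrative, not taken from the paper:

```python
# Sketch of the per-flow RED rule; all parameter values here are
# illustrative assumptions, not values from the paper.

def update_avg(q_avg, Q, w_q):
    """EWMA update of the average queue length on a packet arrival."""
    return (1.0 - w_q) * q_avg + w_q * Q

def drop_prob(q_avg, q_min, q_max, p_i_max):
    """Drop probability for a flow whose p_max value is p_i_max."""
    if q_avg <= q_min:
        return 0.0
    if q_avg >= q_max:
        return 1.0
    # linear region: slope t_i = p_i_max / (q_max - q_min)
    return p_i_max * (q_avg - q_min) / (q_max - q_min)

q = update_avg(10.0, 30.0, 0.1)   # -> 12.0
print(drop_prob(q, q_min=5, q_max=15, p_i_max=0.1))
```

A higher-priority flow simply carries a smaller p_i_max (hence a smaller slope t_i), which is all the service differentiation requires.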

3. Computing the throughputs

We use the well-known relation for the TCP rate:

λ_i = (1/R_i) √(α/p_i),  i ∈ N,   (1)

where R_i and p_i are TCP flow i's round-trip time and drop probability, respectively. α is typically taken as 3/2 (when the delayed-ACKs option is disabled) or 3/4 (when it is enabled). We shall assume throughout the paper that the queueing delay is negligible with respect to R_i for the TCP connections. In contrast, the rates λ_i, for i ∈ I, of real-time flows are not controlled and are assumed to be fixed.

If N = ∅ we assume throughout the paper that Σ_{j∈I} λ_j > μ (unless otherwise specified), since otherwise the RED buffer is not a bottleneck. Similarly, if I = ∅ we assume that TCP senders are not limited by the receiver window. In the model above, we assume that the number of flows is constant over time. This corresponds to a scenario of long-lived flows in which, for instance, TCP connections are used for the transfer of large files in storage networks or in backups of disks (so that we may assume that the square-root throughput formula (1) holds) and UDP flows are associated with the streaming of long CBR-encoded multimedia flows. Furthermore, we assume that short-lived TCP flows, even if more numerous than long-lived flows, do not affect the performance of long-lived TCP flows. (This assumption is compatible with the natural scaling that is expected to occur as the Internet grows; see [14].)

In general, since the bottleneck queue is seen as a fluid queue, we can write

Σ_{j∈I∪N} λ_j(1 − p_j) = μ.

E. Altman et al. / Computer Networks 50 (2006) 982–1002

If we operate in the linear part of the RED curve then this leads to the system of equations:

Σ_{j∈I∪N} λ_j (1 − p_j) = μ,
p_i = t_i (q − q_min),  ∀i ∈ I ∪ N,

with (N + I + 1) unknowns: q (the average queue length) and p_i, i ∈ I ∪ N, where λ_i, i ∈ N, is given by (1). Substituting (1) and

p_i = t_i (q − q_min)  ∀i  (2)

into the first equation of the above set, we obtain a single equation for q:

Σ_{j∈N} (1/R_j) √( a / (t_j (q − q_min)) ) (1 − t_j (q − q_min)) + Σ_{j∈I} λ_j (1 − t_j (q − q_min)) = μ.  (3)

If we write x = √(q − q_min), then (3) can be written as a cubic equation in x:

Z(x) = z₃x³ + z₂x² + z₁x + z₀ = 0,  (4)

where

z₃ = Σ_{j∈I} λ_j t_j,
z₂ = Σ_{j∈N} (1/R_j) √(a t_j),
z₁ = μ − Σ_{j∈I} λ_j,
z₀ = − Σ_{j∈N} (1/R_j) √(a/t_j).

Note that this equation also has a unique positive solution if there are only TCP or only real-time connections; in either case it reduces to a quadratic equation.

Proposition 1. Fix the values of t_j, j ∈ I ∪ N. The cubic Eq. (4) has a unique real positive solution. Assume that the solution lies in the linear region of RED. Then the average queue size is given as q_min + x², where x is the unique positive solution of (4), and the loss probability for session i is given by p_i = t_i (q − q_min).

Proof. Assume first that I and N are both nonempty. Since the coefficients of the cubic equation are real, it has either a single real solution and two conjugate complex solutions, or three real solutions [15]. Consider first the case in which all solutions are real. Since the product of the solutions is positive (it equals −z₀/z₃), there are either one or three positive solutions. The latter is excluded since the sum of the solutions, −z₂/z₃, is negative. Next consider the case of a single real solution. Since the two other solutions are conjugate, their product is positive. Then, since the product of all three solutions is positive (it equals −z₀/z₃), the real solution is positive. □

Note that, in the case of only real-time connections (N = ∅) operating in the linear region, we have

q = q_min + ( Σ_{j∈I} λ_j − μ ) / ( Σ_{j∈I} λ_j t_j )  (5)

and

p_i = t_i ( Σ_{j∈I} λ_j − μ ) / ( Σ_{j∈I} λ_j t_j ).  (6)

(Recall that, throughout the paper,P when considering this case we shall assume that j2I kj > l.) In the case of only TCP connections ðI ¼ ;Þ operating in the linear region, we have sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffi!2   P P tj 1 ffiffiffi l þ l2 þ 4a j2N p j2N R Rj

q ¼ qmin þ

tj

j

 pffiffiffi2 P tj 4a j2N Rj

ð7Þ

and sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffi!2   P P tj 1 2 lþ l þ4a j2N pffiffiffi j2N R Rj

pi ¼ti

4a



P

tj

pffiffiffi2

j

.

tj

j2N Rj

ð8Þ
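The operating point of Proposition 1 can be computed numerically by solving the cubic (4). The following is our own illustrative sketch (the function name and the use of `numpy.roots` are not from the paper):

```python
import numpy as np

def red_operating_point(lam, t_rt, rtt, t_tcp, mu, q_min, a=1.5):
    """Average queue q and loss probabilities for fixed slopes, found by
    solving Z(x) = z3*x^3 + z2*x^2 + z1*x + z0 = 0, Eq. (4), where
    x = sqrt(q - q_min).
    lam, t_rt:  rates and slopes of the real-time flows (set I);
    rtt, t_tcp: round-trip times and slopes of the TCP flows (set N)."""
    lam, t_rt = np.asarray(lam, float), np.asarray(t_rt, float)
    rtt, t_tcp = np.asarray(rtt, float), np.asarray(t_tcp, float)
    z3 = np.sum(lam * t_rt)
    z2 = np.sum(np.sqrt(a * t_tcp) / rtt)
    z1 = mu - np.sum(lam)
    z0 = -np.sum(np.sqrt(a / t_tcp) / rtt)
    roots = np.roots([z3, z2, z1, z0])   # leading zeros are trimmed by numpy
    # Proposition 1: there is a unique real positive root
    x = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1e-12)
    q = q_min + x ** 2
    return q, t_rt * (q - q_min), t_tcp * (q - q_min)
```

For example, with 20 real-time flows of rate 2 and slope 2, μ = 30 and q_min = 10 (the flavor of the parameters used later in Section 8.1), the closed form (5) gives q = 10.125 and a common loss of 0.25, which the solver reproduces.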

4. Utility, pricing and equilibrium

We denote by t the strategy vector of all flows, whose jth entry is t_j. By (t_i, [t]_{−i}) we denote the strategy profile in which flow i uses t_i and every other flow j ≠ i uses t_j from the vector [t]_{−i}. We associate to flow i a utility U_i. The utility is a function of the QoS parameters and of the price paid by flow i, and is determined by the actions of all flows. More precisely, U_i(t_i, [t]_{−i}) is given by

a_i λ_i (1 − p_i(t_i, [t]_{−i})) − b_i p_i(t_i, [t]_{−i}) − d(t_i),  a_i > 0, b_i ≥ 0,

where the first term stands for the utility of the goodput, the second term stands for the disutility of the loss rate, and the last term corresponds to the price d(t_i) to be paid by flow i to the network.² In particular, we find it natural to assume that a TCP flow i has b_i = 0 (lost packets are retransmitted anyhow, and their impact is already taken into account in the throughput). Moreover, since λ_i for TCP already includes the loss term p_i(t_i, [t]_{−i}), the utility function of TCP is assumed to be

U_i(t_i, [t]_{−i}) = a_i λ_i (1 − p_i(t_i, [t]_{−i})) − d(t_i).

We assume that the strategies or actions available to session i are given by a compact set of the form

t_i ∈ S_i, where S_i = [t_i^min, t_i^max], i ∈ I ∪ N.

Here we assume that t_i^min > 0 for all i ∈ I ∪ N. Each flow in the network strives to find the strategy that maximizes its own objective function. Nevertheless, its objective function depends not only upon its own choice but also upon the choices of the other flows. In this situation, the widely accepted solution concept is that of a Nash equilibrium.

Definition 1. A Nash equilibrium of the game is a strategy profile t* = (t*₁, t*₂, . . . , t*_M), where M = I + N, from which no flow has any incentive to deviate. More precisely, the strategy profile t* is a Nash equilibrium if, for any i,

t*_i ∈ arg max_{t_i ∈ S_i} U_i(t_i, [t*]_{−i}).

² Linear utilities are commonly used for their tractability (see e.g. [16]), but they also have some mathematical justification: a utility given as the sum of (weighted) performance measures can be interpreted as the Lagrange relaxation of constraints imposed on the average delays, average loss probabilities, etc.

Thus t*_i is the best strategy that flow i can use if the other flows choose the strategies [t*]_{−i}.

Note that the network income is given by Σ_{i∈I∪N} d(t_i). Since the p_i(t_i, [t]_{−i}) are functions of t_i and [t]_{−i}, d can include pricing per volume of traffic successfully transmitted. In particular, we allow d to depend on the uncontrolled arrival rates of the real-time sessions (but since these are constants, we do not make them appear as an argument of the function d). We shall sometimes find it more convenient to represent the control action of connection i as T_i = 1/t_i instead of t_i. Clearly, properties such as existence or uniqueness of an equilibrium in terms of t_i directly imply the corresponding properties with respect to T_i.

5. Equilibrium for only real-time sessions or only TCP connections

We assume throughout that t_i^max ≤ 1/(q_max − q_min) for all connections. This bound ensures that t_i^max (q_max − q_min) ≤ 1; from (2) we then see that p_i ≤ 1, with equality obtained only for t_i = 1/(q_max − q_min).³ In our analysis, we are interested mainly in the linear region. For only real-time sessions or only TCP connections, we state the assumptions, describe the conditions for operation in the linear region, and show the existence of a Nash equilibrium.

Theorem 1. A sufficient condition for the system to operate in the linear region is that for all i:

1. For only real-time connections:

λ̄ > μ and t_i^min > (λ̄ − μ) / ( λ̄ (q_max − q_min) ).  (9)

³ Note that if this assumption does not hold, then for some value q′ < q_max we would already have p_i = 1 for some i, so one could redefine q_max to be q′. An important feature of our model is that the queue length beyond which p_j = 1 is the same for all j.

2. For only TCP connections:

t_i^min > ( ( −μ + √( μ² + 4a (Σ_{j∈N} 1/R_j)² ) ) / ( 4 √(aΔq) Σ_{j∈N} 1/R_j ) )²,  (10)

where λ̄ = Σ_{j∈I} λ_j and Δq := q_max − q_min.

Proof. Condition (9) (respectively (10)) ensures that the value of q obtained in the linear region (see (5) and (7), respectively) is not larger than q_max. Indeed, for real-time connections, (9) implies that

Σ_{j∈I} λ_j t_j > (λ̄ − μ) / (q_max − q_min),

which together with (5) implies q < q_max. Finally, the fact that the queue size is not below the lower end of the linear region (i.e., p_i > 0 for all i) is a direct consequence of λ̄ > μ. The case of only TCP connections is proved in Appendix A.1. □

The following result establishes the existence of a Nash equilibrium for only real-time sessions or only TCP connections.

Theorem 2. Consider either the case of only real-time sessions or of only TCP connections. Assume that the system operates in the linear regime and that the functions d are convex in T_i := 1/t_i. Then a Nash equilibrium exists.

Proof. See Appendix A.2. □

5.1. Supermodular games

Let us now introduce the notion of a supermodular game, which will be used in Theorems 3–5 below. Supermodular games have the following appealing monotonicity property: for any user i and any fixed policy [t]_{−i} of the users other than i, the best response of user i is monotone in [t]_{−i}. This implies the following properties of supermodular games.

• Several dynamic update schemes (for example, a round-robin one) converge to a Nash equilibrium. For example, if we start with all players using their smallest available action and a round-robin update scheme is used (where at each time period another player changes its action to a best response against the actions used by the other players), then the sequence of actions is monotone nondecreasing and hence converges to a limit. (More details are given below for the so-called "Greedy Algorithm" that we shall introduce.)
• This limit turns out to be an equilibrium. Hence the monotonicity property of the best-response sequence implies the existence of an equilibrium.
• Using the same procedure when starting with the largest strategy of each user gives a monotone nonincreasing sequence whose limit is again a (possibly different) Nash equilibrium.

For more details see [17].

Definition 2. The game (S₁, . . . , S_M, U₁, . . . , U_M) is supermodular if for all i:

• S_i is a sublattice,⁴
• U_i is upper semi-continuous in t_i and [t]_{−i},
• U_i has nondecreasing differences in (t_i, [t]_{−i}), i.e., for all t_i ≥ t′_i and [t]_{−i} ≥ [t′]_{−i},

U_i(t_i, [t]_{−i}) − U_i(t′_i, [t]_{−i}) ≥ U_i(t_i, [t′]_{−i}) − U_i(t′_i, [t′]_{−i}),

where S_i = [t_i^min, t_i^max], S = S₁ × S₂ × ··· × S_M and M = I + N.

⁴ S_i is a sublattice of R^M if t ∈ S_i and t′ ∈ S_i imply that t ∨ t′ ∈ S_i and t ∧ t′ ∈ S_i, where t ∨ t′ = (max(t₁, t′₁), . . . , max(t_M, t′_M)) and t ∧ t′ = (min(t₁, t′₁), . . . , min(t_M, t′_M)).

By nondecreasing differences in (t_i, [t]_{−i}) we mean that the incremental gain from choosing a greater t_i is larger when [t]_{−i} is larger. For example, if the utility of user i has nondecreasing differences in the vector t, user i increases her utility by increasing her slope in response to an increase in the slope of another user j. If U_i is twice differentiable, then supermodularity is equivalent to

∂²U_i / (∂t_i ∂t_j) ≥ 0  (11)

for all t in S. Applying Topkis' Theorem [17] in this context shows immediately that each flow's best-response function is increasing in the actions of the other flows. A useful property of supermodular games is that monotonicity can be used to prove the existence of equilibria and the convergence of greedy algorithms. A greedy algorithm is a simple, so-called tâtonnement or round-robin scheme of best responses that converges to the equilibrium. Let us now introduce the following asynchronous dynamic greedy algorithm (GA).

Greedy Algorithm. Assume a given initial choice t⁰ for all flows. At strictly increasing times s_k, k = 1, 2, 3, . . ., flows update their actions; the actions t_i^k at time s_k > 0 are obtained as follows. A single flow i at time s_{k+1} updates its t_i^{k+1} so as to optimize U_i(·, [t^k]_{−i}), where [t^k]_{−i} is the vector of actions of the other flows j ≠ i. We assume that each flow updates its actions infinitely often. In particular, for the case of only real-time sessions, we update t_i^{k+1} as follows:

t_i^{k+1} = arg max_{t_i ∈ [t_i^min, t_i^max]} a_i λ_i (1 − p_i) − b_i p_i − d(t_i),  (12)

where p_i in (12) is given by (6). For the TCP-only case, we update t_i^{k+1} as follows:

t_i^{k+1} = arg max_{t_i ∈ [t_i^min, t_i^max]} (a_i/R_i) √(a/p_i) (1 − p_i) − d(t_i),  (13)

where p_i in (13) is given by (8). We assume that the duration of a stage is long enough that the user can gather sufficient information to estimate p_i.

Remark 1. For the case of real-time sessions, we may obtain a closed-form solution for t_i^{k+1} with a specific cost function d(t_i) such as d/t_i, which leads to the following update. Let

δ_i^k = ( Σ_{j≠i} λ_j t_j^k ) / ( √( (a_i λ_i + b_i)(Σ_{j∈I} λ_j − μ)(Σ_{j≠i} λ_j t_j^k) / d ) − λ_i ),

where δ_i^k is such that ∂U_i/∂t_i |_{t_i = δ_i^k} = 0 and U_i corresponds to the utility function of real-time session i. Then t_i^{k+1} is given by the projection of δ_i^k onto [t_i^min, t_i^max]: t_i^{k+1} = t_i^max if δ_i^k < 0 (the utility is then increasing over the whole interval); t_i^min if 0 ≤ δ_i^k < t_i^min; t_i^max if δ_i^k > t_i^max; and δ_i^k otherwise.
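The greedy algorithm for real-time sessions can be sketched as follows. This is our own illustrative code: we use the pricing d(t_i) = d/exp(t_i) of the numerical section rather than the d/t_i of Remark 1, and, instead of a closed form, simply maximize the utility on a discretized action set; the function names and grid resolution are our own choices.

```python
import numpy as np

def best_response(i, t, lam, mu, a, b, d, grid):
    """Best response of real-time flow i against the current profile t,
    maximizing  a*lam_i*(1 - p_i) - b*p_i - d*exp(-t_i)  over a grid,
    with p_i taken from Eq. (6) in the linear RED region."""
    others = np.dot(lam, t) - lam[i] * t[i]
    p = grid * (lam.sum() - mu) / (lam[i] * grid + others)
    utility = a * lam[i] * (1.0 - p) - b * p - d * np.exp(-grid)
    return grid[np.argmax(utility)]

def greedy_algorithm(lam, mu, a, b, d, t_min, t_max, t0, rounds=40):
    """Round-robin best-response updates started from the profile t0."""
    t = np.full(lam.size, float(t0))
    grid = np.linspace(t_min, t_max, 4001)
    for _ in range(rounds):
        for i in range(lam.size):
            t[i] = best_response(i, t, lam, mu, a, b, d, grid)
    return t
```

With the symmetric parameters of Section 8.1 (λ_i = 2, I = 20, μ = 30, a = 1, b = 3.5) and d = 20, the iteration started from t_min and from t_max converges to the same point, t* ≈ 4.15, consistent with Fig. 3.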

Theorem 3. For the case of only real-time connections, assume that ∀j, λ_min ≤ λ_j ≤ λ_max, and

(I − 1) λ_min t_min ≥ λ_max t_max,

where t_min = min_{i∈I} {t_i^min} and t_max = max_{i∈I} {t_i^max}. Then there exist a smallest equilibrium t̲ and a largest equilibrium t̄, and the GA dynamic algorithm converges to t̲ (respectively t̄) provided it starts with t_j^min for all j (respectively t_j^max for all j).

Proof. Both statements follow by showing that the game is supermodular, see [17,18]. A sufficient condition is that

∂²U_i / (∂t_i ∂t_j) = −(a_i λ_i + b_i) ∂²p_i / (∂t_i ∂t_j) ≥ 0.

We have

∂p_i/∂t_i = ( Σ_{j∈I} λ_j − μ ) ( 1/(Σ_{j∈I} λ_j t_j) − t_i λ_i / (Σ_{j∈I} λ_j t_j)² ),

leading to

∂²p_i / (∂t_i ∂t_k) = λ_k ( Σ_{j∈I} λ_j − μ ) ( −Σ_{j∈I} λ_j t_j + 2 t_i λ_i ) / ( Σ_{j∈I} λ_j t_j )³.

The latter is nonpositive if and only if Σ_{j≠i} λ_j t_j ≥ λ_i t_i. A sufficient condition for this is (I − 1) λ_min t_min ≥ λ_max t_max. Thus the game is supermodular, and the result follows from the standard theory of supermodular games [17,18]. □

Theorem 4. For the case of only real-time connections, assume that ∀j, λ_min ≤ λ_j ≤ λ_max, and 2 t³_max λ²_min > t³_min λ²_max. Under the supermodularity condition, the Nash equilibrium is unique.

Proof. See Appendix A.3. □

Theorem 5. For the case of only TCP connections, assume that ∀j, t_min ≤ t_j ≤ t_max and

(3 + p_i) (∂p_i/∂t_i)(∂p_i/∂t_j) ≥ 2 p_i (p_i + 1) ∂²p_i/(∂t_i ∂t_j)  ∀i, j, i ≠ j.  (14)

Then the game is supermodular.

Proof. See Appendix A.4. □

Remark 2. It would also be interesting to consider a price per unit of received volume, i.e., of the form d(t_i) λ_i (1 − p_i). However, checking the supermodularity of the utility function then gives a condition depending on d′(t_i), d(t_i) and the t_j that does not seem tractable. On the other hand, we can consider a price per unit of sent volume, i.e., of the form d(t_i) λ_i (since λ_i is fixed); the conditions of Theorems 2 and 3 then still provide a Nash equilibrium.

Note that in the model presented above, users choose at each stage an action that maximizes their utility function, which depends on the actions of all other flows. This dependence appears through the loss probability p_i. In our case, a user can determine her utility function without the hypothesis of full knowledge: since users only need aggregate information about the other flows (like the total rate Σ_{j∈I} λ_j in the CBR-only scenario), there are in principle no scalability issues, i.e., no need to exchange or store per-flow information at the routers. The issue of how such aggregate values are signaled to the sources is outside the scope of this paper.

Remark 3. As already mentioned, in supermodular games one can reach an equilibrium dynamically in stages, such that during each stage users choose an action that maximizes their utility function, given the actions of all other flows. This dependence appears through the loss probability p_i. In our case, a user can determine her utility function without full knowledge of the actions of the other players. Indeed, for real-time connections, each source can obtain sufficient information for determining its actions by using RTP/RTCP: each source can obtain the receiver reports (RRs), which include reception-quality statistics such as the number of packets received, the fraction lost, and the cumulative number of packets lost. Hence source i can obtain the loss probability p_i at each stage. For TCP connections, the ACK packets suffice to acquire the loss probabilities p_i.

In the case of only real-time connections, the loss probability at stage k is given by

p_i(t_i, [t^{k−1}]_{−i}) = t_i ( Σ_{j∈I} λ_j − μ ) / ( λ_i t_i + Σ_{j≠i} λ_j t_j^{k−1} ),  (15)

where t_i is the action of user i and t_j^{k−1} is the optimal action of user j at stage k − 1. At stage k − 1 we have

p_i^{k−1} = p_i(t_i^{k−1}, [t^{k−1}]_{−i}) = t_i^{k−1} ( Σ_{j∈I} λ_j − μ ) / ( Σ_{j∈I} λ_j t_j^{k−1} ),

where t_i^{k−1} is the optimal action of user i at stage k − 1. Note that the loss probability p_i^{k−1} is estimated through the RTP/RTCP protocol at the end of stage k − 1. Hence at stage k, when the action of user i is t_i, the loss probability (15) of user i becomes:

p_i(t_i, [t^{k−1}]_{−i}) = t_i ( Σ_{j∈I} λ_j − μ ) / ( λ_i t_i + Σ_{j≠i} λ_j t_j^{k−1} )
= (t_i / t_i^{k−1}) ( λ_i (t_i − t_i^{k−1}) / ( t_i^{k−1} (Σ_{j∈I} λ_j − μ) ) + 1/p_i^{k−1} )^{−1}
= p̂(t_i, t_i^{k−1}, p_i^{k−1}).

Thus, the utility function becomes:

U_i(t_i, [t]_{−i}) = a_i λ_i (1 − p̂(t_i, t_i^{k−1}, p_i^{k−1})) − b_i p̂(t_i, t_i^{k−1}, p_i^{k−1}) − d(t_i) = Û_i(t_i, t_i^{k−1}, p_i^{k−1}).
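The algebraic identity behind p̂ can be verified directly. In this sketch (our own code, not the paper's), `p_true` evaluates (15) from the full action profile, while `p_hat` uses only the flow's own previous action and its measured loss:

```python
import numpy as np

def p_true(i, ti_new, t, lam, mu):
    """Eq. (15): loss of flow i when it plays ti_new and the others keep t."""
    others = np.dot(lam, t) - lam[i] * t[i]
    return ti_new * (lam.sum() - mu) / (lam[i] * ti_new + others)

def p_hat(ti_new, ti_old, p_old, lam_i, lam_total, mu):
    """Same quantity expressed from (ti_old, p_old) only, as in Remark 3."""
    return (ti_new / ti_old) / (
        lam_i * (ti_new - ti_old) / (ti_old * (lam_total - mu)) + 1.0 / p_old)

# Check the identity on an arbitrary stage-(k-1) profile
lam = np.full(20, 2.0)
t = np.linspace(1.0, 15.0, 20)
p_old = p_true(0, t[0], t, lam, 30.0)   # flow 0's measured loss
for ti_new in (1.0, 4.0, 12.0):
    assert abs(p_hat(ti_new, t[0], p_old, lam[0], lam.sum(), 30.0)
               - p_true(0, ti_new, t, lam, 30.0)) < 1e-12
```

Thus a real-time source can evaluate its stage-k utility for any candidate t_i from purely local measurements, which is exactly what makes the staged scheme implementable.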

In the above formulation, the sources need to know the total rate through the bottleneck router in order to execute the iteration. This can be


achieved using bottleneck capacity estimation tools (e.g., pathrate, pathchar) and available bandwidth estimation tools (e.g., pathload); see [19,20].

In the case of only TCP connections, the loss probability at stage k is given by

p_i(t_i, [t^{k−1}]_{−i}) = t_i ( −μ + √( μ² + 4a ( Σ_{j≠i} 1/(R_j √(t_j^{k−1})) + 1/(R_i √t_i) ) ( Σ_{j≠i} √(t_j^{k−1})/R_j + √t_i/R_i ) ) )² / ( 4a ( Σ_{j≠i} √(t_j^{k−1})/R_j + √t_i/R_i )² ),  (16)

where t_i is the action of TCP flow i and t_j^{k−1} is the optimal action of TCP flow j at stage k − 1. Note that at stage k − 1 we have

p_i^{k−1} = p_i(t_i^{k−1}, [t^{k−1}]_{−i}) = t_i^{k−1} ( −μ + √( μ² + 4a ( Σ_j 1/(R_j √(t_j^{k−1})) ) ( Σ_j √(t_j^{k−1})/R_j ) ) )² / ( 4a ( Σ_j √(t_j^{k−1})/R_j )² ).

From this definition of the loss probability at stage k, it is difficult to express p_i as a function of p_i^{k−1}, t_i and t_i^{k−1}, as in the real-time connections case: the source would need to estimate the values of Σ_j 1/(R_j √(t_j^{k−1})) and Σ_j √(t_j^{k−1})/R_j. For this reason, we define another algorithm which allows us to obtain the Nash equilibrium without estimating these quantities. In this algorithm we consider that the loss probability at stage k is approximated by

p̄_i = p̄_i(t_i, t_i^{k−1}, p_i^{k−1}) = (t_i / t_i^{k−1}) p_i^{k−1}.

Thus, the utility function becomes

U_i(t_i, [t]_{−i}) = a_i λ_i (1 − p̄_i(t_i, t_i^{k−1}, p_i^{k−1})) − d(t_i) = Û_i(t_i, t_i^{k−1}, p_i^{k−1}).

From the definition of the loss probability at stage k (see (16)), we can see that if this algorithm converges, it converges to a Nash equilibrium. We postpone the mathematical analysis of the convergence of this algorithm to future work. Nonetheless, in our numerical simulations the iterative algorithm has so far always converged to a Nash equilibrium.

6. Symmetric users

In this section, we assume that all flows have the same utility function (for all i, a_i = a, λ_i = λ and b_i = b for real-time sessions, and a_i = a and R_i = R for TCP connections) and the same strategy intervals (t_i^min = t_min and t_i^max = t_max).

6.1. Algorithm for symmetric Nash equilibrium

For a symmetric Nash equilibrium, we are interested in finding a symmetric equilibrium strategy t* = (t*, t*, . . . , t*) such that for any flow i and any strategy t_i for that flow (real-time session or TCP connection), U(t*) ≥ U(t_i, [t*]_{−i}). Next we show how to obtain such an equilibrium strategy. We first note that, due to symmetry, to see whether t* is an equilibrium it suffices to check the equilibrium condition for a single flow. We shall thus assume that there are L + 1 flows altogether, that the first L flows use the strategy t_o = (t_o, . . . , t_o), and that flow L + 1 uses t_{L+1}. Define the set

Q_{L+1}(t_o) = arg max_{t_{L+1} ∈ [t_min, t_max]} U(t_{L+1}, [t_o]_{−(L+1)}),

where t_o denotes (with some abuse of notation) the strategy where all flows use t_o, and where the maximization is taken with respect to t_{L+1}. Then t* is a symmetric equilibrium if t* ∈ Q_{L+1}(t*).


Theorem 6. Consider real-time connections only, operating in the linear region. Assume that the functions d are convex in T_i := 1/t_i. The symmetric equilibrium t* satisfies

(aλ + b)(λ̄ − μ)(I − 1)λ / (T* λ̄²) = ∂d̂(T)/∂T |_{T = T*},

where T* = 1/t*, λ̄ = Iλ and d̂(T) = d(1/T).

Proof. Recall that λ̄ = Iλ. For real-time connections we have

U = aλ − (aλ + b)(λ̄ − μ) / ( λ + T_i Σ_{j≠i} λ/T_j ) − d̂(T_i),

which gives, when considering the derivative,

∂U/∂T_i = (aλ + b)(λ̄ − μ)( Σ_{j≠i} λ/T_j ) / ( λ + T_i Σ_{j≠i} λ/T_j )² − ∂d̂(T_i)/∂T_i.

Equating ∂U/∂T_i = 0 at the symmetric point T_i = T* for all i (so that Σ_{j≠i} λ/T_j = (I − 1)λ/T*), we obtain the stated condition. □
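As a consistency check (our own sketch, not from the paper), specialize the first-order condition to the pricing d(t) = d/exp(t) used in the numerical section, i.e. d̂(T) = d e^{−1/T} with d̂′(T) = d e^{−1/T}/T². Substituting t = 1/T reduces the condition to d t e^{−t} = (aλ + b)(λ̄ − μ)(I − 1)λ/λ̄². With the parameters of Section 8.1 this recovers the equilibrium value reported there:

```python
import math

# Symmetric real-time parameters from Section 8.1, with d = 20
a, b, lam, I, mu, d = 1.0, 3.5, 2.0, 20, 30.0, 20.0
lam_bar = I * lam
K = (a * lam + b) * (lam_bar - mu) * (I - 1) * lam / lam_bar ** 2

# Solve d * t * exp(-t) = K by bisection on [1, 15]; the left-hand side
# is decreasing there, so the root is unique.
lo, hi = 1.0, 15.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if d * mid * math.exp(-mid) > K:
        lo = mid
    else:
        hi = mid
t_star = 0.5 * (lo + hi)   # approximately 4.1522, matching Fig. 3
```

The resulting t* ≈ 4.1522 agrees with the value 4.152208 reported for the GA sample paths in Section 8.1.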

7. Optimal pricing

The goal here is to determine a pricing strategy that maximizes the network's benefit. Typically, pricing is motivated by two different objectives: (1) it generates revenue for the system, and (2) it encourages the players to use the system resources more efficiently. Our focus here is on pricing strategies for revenue maximization, i.e., on how a service provider should price resources to maximize revenue. The corresponding maximization problem is

max_d Σ_{i=1}^{I} d(t*_i),

where t* is a Nash equilibrium, which can be computed when considering special classes of functions d(·) depending on a real parameter that we shall also (with some abuse of notation) call d. We then obtain a system of equations that can be solved numerically (to get the t* satisfying the Nash equilibrium conditions), and a numerical optimization over the parameter d can be carried out. We use d(t) = d/exp(t) in our numerical examples. We also considered other families of pricing functions, such as d/t, d/t², and so on, and we observed monotone behaviour of the revenue c as a function of d.

Nevertheless, an assumption of this optimization problem is that the network knows the number of flows and the parameters a_i, b_i and R_i ∀i. A more likely situation is when the network only knows the distribution of the number of players I (now a random variable) and the distributions of the parameters a_i, b_i and R_i (assumed independent, and independent between flows, for convenience). A numerical investigation of the optimal parameters can be carried out in that case as well.

8. Numerical examples

In the following simulations, we obtain a unique Nash equilibrium for only real-time sessions or only TCP connections. Moreover, the GA algorithm converges, as the conditions of supermodularity are satisfied. As the numerical results show, the conditions for supermodularity (Theorems 3 and 5) and for uniqueness of the Nash equilibrium (Theorems 2 and 4) are sufficient but not necessary. The pricing function that we use for player i throughout this section is d/exp(t_i). We shall investigate how the choice of the constant d affects the revenue of the network.⁵

8.1. Symmetric real-time flows

In the following numerical evaluations, we show the variation of different metrics as functions of d. Figs. 2 and 3 correspond to a unique symmetric Nash equilibrium case in which all the real-time flows have λ_i = 2 Mbps, with t_min = 1, t_max = 15, I = 20, q_min = 10, q_max = 40, μ = 30 Mbps. Here we set the values of the parameters to ensure that the

⁵ We note that it is desirable to have a "nontrivial" parameterized pricing function that leads to an optimal revenue for some parameter. We also tested other pricing functions that turned out to be "trivial" in the sense that the benefit was always monotone in the parameter; an example of such a function is exp(bt_i), where the network optimizes with respect to b.


Fig. 2. Symmetric real-time flows. (a) Queue size vs. d, (b) t* vs. d, (c) utility vs. d and (d) network income vs. d.


Fig. 3. Symmetric real-time flows: convergence to Nash equilibrium. (a) t0 = tmin and (b) t0 = tmax.

system operates in the linear region, i.e., t_min > (1/Δq)(1 − μ/Σ_{j∈I} λ_j) = 0.0083. Moreover, the values above also ensure the uniqueness of the equilibrium (Theorem 4). The bound on t_max is needed only to limit the loss probability to 1. The value of d which maximizes the network revenue is d = 3.33. All the flows attain a loss rate of 0.25. Note that for the symmetric real-time flows case, p_i = (Σ_{j∈I} λ_j − μ)/Σ_{j∈I} λ_j at the


Nash equilibrium is a constant. The average queue size, given by q_min + p_i/t_i, is shown in Fig. 2. We observe that the value of t* at which the maximum network income is achieved is close to t_min, while the system operates in the linear region of RED throughout. We plot in Fig. 3 sample paths of a connection that uses the GA algorithm for symmetric users (Section 6); the evolution is the same for all connections. The figure illustrates convergence to the same Nash equilibrium whether t⁰ starts from t_min or t_max; we plot it for d = 20. In Fig. 3(a) the value of t* is 4.152208, and in Fig. 3(b) it is 4.152208 as well.

8.2. Nonsymmetric real-time flows

In the next experiment, instead of the symmetric case, the rates λ_i are drawn uniformly


Fig. 4. Nonsymmetric real-time flows. (a) Queue size vs. d, (b) t* vs. d, (c) utility vs. d, (d) network income vs. d, (e) loss probability vs. d and (f) average loss probability vs. d.


from [1, 10] Mbps, with t_min = 1, t_max = 15, q_max = 40, q_min = 10, I = 20, μ = 30 Mbps. Fig. 4 shows how the different metrics vary with d at the unique Nash equilibrium. To ensure that the flows operate in the linear region, we need t_min > (1/Δq)(1 − μ/Σ_{j∈I} λ_j). We observe that d = 15.66 maximizes the network revenue. Fig. 4(b) shows that the values of t* for flows with higher rates increase more slowly than those for flows with lower rates, i.e., higher-rate flows experience lower loss rates. Fig. 4(c) shows that flows with different rates gain similarly in their utility functions. We plot the individual and average loss rates in Fig. 4(e) and (f). These experiments confirm the uniqueness of the Nash equilibrium, although the sample paths of the different connections depend on the connection rates. The condition for uniqueness is 2 t³_max λ²_min > t³_min λ²_max, which in our case reads 2 × 15³ × 1² = 6750 > 100 = 1³ × 10².

8.3. Symmetric TCP connections

For symmetric TCP connections, we considered R_i = R = 20 ms for all connections, with t_min = 2, t_max = 20, μ = 30 Mbps, N = 20, a = 0.1. Fig. 5(a)–(d) show the effect of increasing d on the queue size, the equilibrium strategy, the utility and the network income. Fig. 6(a) and (b) show the convergence to the Nash equilibrium for symmetric TCP connections starting from t_min and t_max, respectively. The maximum network revenue is found at d = 0.6704. In this symmetric case, the loss probability is given by

p = (R²/(3N²)) { μ² + 3N²/R² − μ √( μ² + 6N²/R² ) } = 0.0017.

To ensure that the symmetric TCP flows operate in the linear region, we satisfy the condition on t_min:


Fig. 5. Symmetric TCP Flows. (a) Queue size vs. d, (b) t* vs. d, (c) utility vs. d and (d) network income vs. d.



Fig. 6. Symmetric TCP flows: convergence to Nash equilibrium. (a) t0 = tmin and (b) t0 = tmax.

t_min > ( ( −μ + √( μ² + 4a (Σ_{j∈N} 1/R_j)² ) ) / ( 4 √(aΔq) Σ_{j∈N} 1/R_j ) )² = 4.6271 × 10⁻⁵.
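The symmetric loss value of Section 8.3 can be cross-checked against the fluid balance of Section 3. In this sketch (our own code, in the paper's units), we take a = 3/2 in Eq. (1), i.e. delayed ACKs disabled, which is the value implicit in the closed-form expression for p:

```python
import math

R, N, mu = 20.0, 20, 30.0   # RTT, number of TCP flows, service rate
a = 1.5                     # TCP constant of Eq. (1), delayed ACKs disabled

# Closed-form symmetric loss probability of Section 8.3
p = (R ** 2 / (3 * N ** 2)) * (mu ** 2 + 3 * N ** 2 / R ** 2
                               - mu * math.sqrt(mu ** 2 + 6 * N ** 2 / R ** 2))

# Square-root formula (1) for each flow, and the fluid balance check:
rate = (1.0 / R) * math.sqrt(a / p)
assert abs(N * rate * (1 - p) - mu) < 1e-9   # total goodput equals mu
```

The computed p ≈ 0.0017 agrees with the value quoted in Section 8.3, and the per-flow rate satisfies the balance N λ (1 − p) = μ exactly.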

We plot sample paths of a connection that illustrate convergence to the Nash equilibrium when t_0 starts from t_min or from t_max, for d = 0.1. In both Fig. 6(a) and Fig. 6(b), the value of t* is 2.724076.

8.4. Nonsymmetric TCP connections

We present a nonsymmetric case in Fig. 7, in which the R_i are drawn uniformly from [1, 20] ms, with t_min = 2, t_max = 20, µ = 30 Mbps, N = 20, a = 0.2, q_max = 40, q_min = 10. The value of d at which the network revenue is highest is 0.8948. We ensure that the nonsymmetric connections operate in the linear region by setting

$$t_{\min} > \left(\frac{\mu + \sqrt{\mu^2 + 4\bigl(\sum_{j\in N}\frac{1}{R_j}\bigr)^2}}{4\sqrt{a\Delta q}\,\sum_{j\in N}\frac{1}{R_j}}\right)^2 = 0.5476.$$
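The convergence to the Nash equilibrium illustrated in Fig. 6 is driven by best-response iteration. The following sketch runs a best-response dynamic on a toy symmetric two-player game with a made-up concave utility U(t, s) = √t − d·t·(t + s); the utility, parameter values, and function names are assumptions for illustration only and are not the paper's model. Starting the iteration from either end of [t_min, t_max], the iterates settle at the same point, mirroring the behavior in Fig. 6(a) and (b).

```python
# Illustrative best-response dynamics on a toy symmetric two-player game.
# U(t, s) = sqrt(t) - d*t*(t + s) is a stand-in utility (assumption), chosen only
# to be concave in t with a negative congestion externality; it is NOT the
# utility function of the paper.
import math

T_MIN, T_MAX, D = 2.0, 15.0, 0.01

def best_response(s, steps=2000):
    """Maximize U(t, s) over a fine grid on [T_MIN, T_MAX]."""
    best_t, best_u = T_MIN, -float('inf')
    for k in range(steps + 1):
        t = T_MIN + (T_MAX - T_MIN) * k / steps
        u = math.sqrt(t) - D * t * (t + s)
        if u > best_u:
            best_t, best_u = t, u
    return best_t

def iterate(t0, rounds=40):
    """Symmetric best-response iteration t_{k+1} = BR(t_k)."""
    t = t0
    for _ in range(rounds):
        t = best_response(t)
    return t

t_from_min = iterate(T_MIN)   # start from the lower end, as in Fig. 6(a)
t_from_max = iterate(T_MAX)   # start from the upper end, as in Fig. 6(b)
```

Because the best-response map is a contraction near the fixed point of this toy game, both trajectories converge to the same interior value.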

8.5. Real-time flows and TCP connections

In this experiment (see Fig. 8), we combine both real-time and TCP connections. We have I = 15, N = 15, µ = 13 Mbps, RTT = 10 ms, t^real_min = 5, t^real_max = 11, t^TCP_min = 5, t^TCP_max = 11, λ = 1 Mbps, q_min = 10, q_max = 40, a = 100 for both real-time and TCP connections, and b = 4. We found two Nash equilibria for each value of d; we therefore plot two curves in each panel, one per equilibrium. The highest network revenue is achieved at d = 353.15, t^real = 11, t^TCP = 5, and at d = 254.35, t^real = 5, t^TCP = 11. In the simulations we observe q < q_max, and since there is at least one TCP flow i with throughput λ_i > 0, that flow has loss probability p_i > 0 and the average queue length satisfies q > q_min; we conclude that the system operates in the linear region. Our objective in this set of experiments is to show that a Nash equilibrium exists with both real-time and TCP connections. The loss experienced by real-time flows at the first NE is 0.3676 and that by TCP flows is 0.1826; the corresponding values at the second NE are 0.2349 and 0.4461.

(Fig. 7 panels computed with µ = 30, RTT ~ U(1, 20), t_min = 2, t_max = 20, a = 0.2; the labeled connections have RTT1 = 16.9, RTT2 = 1.37, RTT4 = 8.21 and RTT6 = 10.6.)

Fig. 7. Nonsymmetric TCP. (a) Queue size vs. d, (b) t* vs. d, (c) utility vs. d, (d) network income vs. d, (e) loss probability vs. d and (f) average loss vs. d.

9. Conclusions and future work

In this paper we have studied a fluid model of the RED buffer management algorithm with different drop probabilities applied to both UDP and TCP traffic. We first computed the performance measures for fixed drop policies. We then investigated how the drop policies are determined: we modeled the decision process as a noncooperative game and obtained its equilibria. We showed the existence of the equilibria under various conditions, and provided ways of computing them (establishing also convergence properties of best-response dynamics). The goal of the network provider is to use a pricing function that optimizes its revenue. Determining an optimal function seems a difficult problem (left for future research), and we restricted ourselves to specific classes of functions in which only one parameter varies. We finally addressed the problem of optimizing the revenue of the network provider.

Fig. 8. Both real-time and TCP flows. (a) Queue size vs. d, (b) t* vs. d, (c) utility vs. d, (d) network income vs. d and (e) average loss probabilities vs. d.

Concerning future work, we are working on deriving sufficient and necessary conditions for operating in the linear region when there are both real-time and TCP connections; these seem to be more involved than the conditions we have already obtained. We will further study the impact of buffer management schemes on the performance and on the revenues of the network; in particular, other versions of RED will be considered (such as the gentle-RED variant). We will also examine how well the fluid model approximates the packet-level model on which it is based. We intend to consider other utility functions in the future, in particular ones that include delay and/or jitter terms. We plan to compare the performance of the Nash equilibrium with that of the team problem in which overall network efficiency is maximized. We shall then consider other pricing functions that would increase the efficiency of the Nash equilibrium.


Acknowledgements

The work of E. Altman and R. El Azouzi was supported in part by research contract 001B001 with France Telecom R&D. The work of D. Barman was performed during an internship at INRIA, financed by INRIA's PrixNet ARC collaboration project. All authors were supported in part by the EuroNGI Network of Excellence.

Appendix A

A.1. Proof of Part 2 of Theorem 1

For only TCP connections, we have

$$\sqrt{q - q_{\min}} \le \sqrt{q_{\max} - q_{\min}} = \sqrt{\Delta q}.$$

From (7), we get the following sufficient and necessary condition for $q \le q_{\max}$:

$$\frac{\sqrt{\mu^2 + 4a\left(\sum_j \frac{1}{R_j\sqrt{t_j}}\right)\left(\sum_j \frac{\sqrt{t_j}}{R_j}\right)} - \mu}{2\sqrt{a}\,\sum_j \frac{\sqrt{t_j}}{R_j}} \le \sqrt{\Delta q}$$

or, equivalently,

$$\sqrt{\mu^2 + 4a\left(\sum_j \frac{1}{R_j\sqrt{t_j}}\right)\left(\sum_j \frac{\sqrt{t_j}}{R_j}\right)} \le \mu + 2\sqrt{a}\left(\sum_j \frac{\sqrt{t_j}}{R_j}\right)\sqrt{\Delta q},$$

which is equivalent to

$$\mu^2 + 4a\left(\sum_j \frac{1}{R_j\sqrt{t_j}}\right)\left(\sum_j \frac{\sqrt{t_j}}{R_j}\right) \le \mu^2 + 4\mu\sqrt{a\Delta q}\left(\sum_j \frac{\sqrt{t_j}}{R_j}\right) + 4a\Delta q\left(\sum_j \frac{\sqrt{t_j}}{R_j}\right)^2$$

or

$$a\sum_j \frac{1}{R_j\sqrt{t_j}} \le \mu\sqrt{a\Delta q} + a\Delta q\sum_j \frac{\sqrt{t_j}}{R_j}.$$

A sufficient condition for the latter is

$$a\sum_{j\in N}\frac{1}{R_j\sqrt{t_{\min}}} \le \mu\sqrt{a\Delta q} + a\Delta q\sum_{j\in N}\frac{\sqrt{t_{\min}}}{R_j}. \qquad (17)$$

Solving the quadratic equation (17) for $t_{\min}$, we see that this is implied by (10). Finally, the fact that we are not below the lower extreme of the linear region (i.e., $p_i > 0$ for all $i$) is a direct consequence of the fact that zero loss probability would imply infinite throughput (see (1)), which is impossible since the link capacity $\mu$ is finite.

A.2. Proof of Theorem 2

We first show that the utility function is concave in the case of only real-time sessions. Replacing $t_i$ by $1/T_i$ in Eq. (6), we obtain

$$p_i = \frac{\sum_{j\in I}\lambda_j - \mu}{\lambda_i + T_i\sum_{j\ne i}\lambda_j/T_j}, \qquad (16)$$
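The algebra in the chain of inequalities above can be checked symbolically. The following sketch treats A = Σ_j 1/(R_j√t_j) and B = Σ_j √t_j/R_j as opaque positive symbols (an illustrative verification only, not a new result):

```python
# Symbolic check of the squaring and dividing steps in the q <= q_max derivation.
import sympy as sp

# A and B stand for the two sums over flows; Dq stands for Delta q.
a, Dq, mu, A, B = sp.symbols('a Dq mu A B', positive=True)

# Squaring the right-hand side mu + 2*sqrt(a*Dq)*B gives the middle inequality.
rhs_sq = sp.expand((mu + 2*sp.sqrt(a*Dq)*B)**2)
step1 = sp.simplify(rhs_sq - (mu**2 + 4*mu*sp.sqrt(a*Dq)*B + 4*a*Dq*B**2))

# Subtracting mu^2 from both sides and dividing by 4B yields the final
# condition a*A <= mu*sqrt(a*Dq) + a*Dq*B.
step2 = sp.simplify((rhs_sq - mu**2)/(4*B) - (mu*sp.sqrt(a*Dq) + a*Dq*B))
```

Both `step1` and `step2` simplify to zero, confirming that the successive forms of the condition are term-by-term equivalent.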

which is convex in $T_i$. The convexity of $p_i$ in $T_i$ follows from the fact that $\sum_{j\in I}\lambda_j - \mu > 0$. Hence the $U_i$ are concave in $T_i$ and continuous in $T_j$. The existence then follows from [21]. For TCP connections, we have

$$\frac{\partial^2 U_i}{\partial T_i^2} = a_i\left[\frac{\partial^2\lambda_i}{\partial T_i^2}(1 - p_i) - 2\frac{\partial\lambda_i}{\partial T_i}\frac{\partial p_i}{\partial T_i} - \lambda_i\frac{\partial^2 p_i}{\partial T_i^2}\right] - \frac{\partial^2\hat d(T_i)}{\partial T_i^2}, \qquad (18)$$

where $\hat d(T_i) = d(1/t_i)$. On the other hand, (1) implies

$$\frac{\partial p_i}{\partial T_i} = -\frac{2a}{R_i^2\lambda_i^3}\,\frac{\partial\lambda_i}{\partial T_i}$$

and

$$\frac{\partial^2 p_i}{\partial T_i^2} = -\frac{2a}{R_i^2\lambda_i^4}\left[\lambda_i\frac{\partial^2\lambda_i}{\partial T_i^2} - 3\left(\frac{\partial\lambda_i}{\partial T_i}\right)^2\right]. \qquad (19)$$

Then (18) becomes

$$\frac{\partial^2 U_i}{\partial T_i^2} = a_i\left[\frac{\partial^2\lambda_i}{\partial T_i^2}\left(1 + \frac{a}{R_i^2\lambda_i^2}\right) - \frac{2a}{R_i^2\lambda_i^3}\left(\frac{\partial\lambda_i}{\partial T_i}\right)^2\right] - \frac{\partial^2\hat d(T_i)}{\partial T_i^2}. \qquad (20)$$
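The step from (18) and (19) to (20) can be verified symbolically, using $p_i = a/(R_i^2\lambda_i^2)$ from (1). A sketch (variable names are mine):

```python
# Verify that substituting p_i = a/(R_i^2 * lam_i^2) into (18) yields (20).
import sympy as sp

T = sp.symbols('T_i', positive=True)
a, R, ai = sp.symbols('a R_i a_i', positive=True)
lam = sp.Function('lam')(T)        # lambda_i(T_i), the TCP throughput
p = a / (R**2 * lam**2)            # p_i from Eq. (1): lam_i = (1/R_i)*sqrt(a/p_i)
dhat = sp.Function('dhat')(T)      # the delay-cost term d-hat(T_i)

# Right-hand side of Eq. (18)
U2_18 = ai*(sp.diff(lam, T, 2)*(1 - p)
            - 2*sp.diff(lam, T)*sp.diff(p, T)
            - lam*sp.diff(p, T, 2)) - sp.diff(dhat, T, 2)

# Right-hand side of Eq. (20)
U2_20 = ai*(sp.diff(lam, T, 2)*(1 + a/(R**2*lam**2))
            - (2*a/(R**2*lam**3))*sp.diff(lam, T)**2) - sp.diff(dhat, T, 2)

diff20 = sp.simplify(U2_18 - U2_20)
```

`diff20` simplifies to zero, so (20) is exactly (18) after eliminating the derivatives of $p_i$.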


Since the function $\hat d$ is convex in $T_i$, it follows from (20) that it is sufficient to show that the second derivative of $\lambda_i$ with respect to $T_i$ is nonpositive. We have

$$\lambda_i = \frac{1}{R_i}\sqrt{\frac{a}{p_i}} = \frac{2a\sum_{j\in N}\frac{\sqrt{t_j}}{R_j}}{R_i\sqrt{t_i}\left(\mu + \sqrt{\mu^2 + 4a\bigl(\sum_{j\in N}\frac{1}{R_j\sqrt{t_j}}\bigr)\bigl(\sum_{j\in N}\frac{\sqrt{t_j}}{R_j}\bigr)}\right)}.$$

Let $C_1 = \sum_{j\ne i}\frac{\sqrt{t_j}}{R_j}$ and $C_2 = \sum_{j\ne i}\frac{1}{R_j\sqrt{t_j}}$. Substituting $t_i = 1/T_i$, we get

$$\lambda_i = \frac{2a\sqrt{T_i}\left(\frac{1}{R_i\sqrt{T_i}} + C_1\right)}{R_i\left(\mu + \sqrt{\mu^2 + 4a\bigl(\frac{\sqrt{T_i}}{R_i} + C_2\bigr)\bigl(\frac{1}{R_i\sqrt{T_i}} + C_1\bigr)}\right)}$$

and, after rationalizing the denominator,

$$\lambda_i = \frac{1}{2R_i}\left[-\frac{\mu\sqrt{T_i}}{\frac{\sqrt{T_i}}{R_i} + C_2} + \frac{\sqrt{T_i}\sqrt{\mu^2 + 4a\bigl(\frac{\sqrt{T_i}}{R_i} + C_2\bigr)\bigl(\frac{1}{R_i\sqrt{T_i}} + C_1\bigr)}}{\frac{\sqrt{T_i}}{R_i} + C_2}\right] = \frac{1}{2R_i}\left[F_1(T_i) + F_2(T_i)\right].$$

Now we must prove that the second derivatives of the functions $F_1$ and $F_2$ are nonpositive for all $C_1 \ge 0$ and $C_2 \ge 0$. We begin by taking the second derivative of $F_1$; after some simplification, we obtain

$$\frac{\partial^2 F_1(T_i)}{\partial T_i^2} = \frac{1}{4}\,\frac{\mu R_i^2 C_2\bigl(3\sqrt{T_i} + C_2 R_i\bigr)}{T_i^{3/2}\bigl(\sqrt{T_i} + C_2 R_i\bigr)^3},$$

which is positive. For the second function $F_2$, since $F_2$ is positive, it suffices to show that the second derivative of $[F_2(T_i)]^2$ is nonpositive. We have

$$\frac{\partial^2 [F_2(T_i)]^2}{\partial T_i^2} = -\frac{R_i}{2T_i^{3/2}\bigl(\sqrt{T_i} + C_2 R_i\bigr)^4}\Bigl(6aT_iC_2 + 8a\sqrt{T_i}\,C_2R_i + 2aT_i^2C_1 + 8aT_i^{3/2}C_2R_iC_1 + 6aT_iR_i^2C_1C_2^2 + 2aR_i^2C_2^3 + 3\mu^2 T_i R_i^2 C_2\Bigr),$$

which is nonpositive.

A.3. Proof of Theorem 4

Under the supermodularity condition, to show the uniqueness of the Nash equilibrium it suffices to show that [22]

$$-\frac{\partial^2 U_i}{(\partial T_i)^2} \ge \sum_{j\ne i}\frac{\partial^2 U_i}{\partial T_i\,\partial T_j} \qquad (21)$$

or, equivalently,

$$\frac{\partial^2 p_i}{(\partial T_i)^2} + \sum_{j\ne i}\frac{\partial^2 p_i}{\partial T_i\,\partial T_j} \ge 0. \qquad (22)$$

For the case of only real-time sessions, $p_i = \dfrac{\lambda - \mu}{\lambda_i + T_i\sum_{k\ne i}\lambda_k/T_k}$, where $\lambda = \sum_k \lambda_k$. We have

$$\frac{\partial p_i}{\partial T_i} = -\frac{(\lambda-\mu)\sum_{k\ne i}\frac{\lambda_k}{T_k}}{\left(\lambda_i + T_i\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)^2}, \qquad \frac{\partial^2 p_i}{\partial T_i^2} = \frac{2(\lambda-\mu)\left(\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)^2}{\left(\lambda_i + T_i\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)^3},$$

$$\frac{\partial^2 p_i}{\partial T_i\,\partial T_j} = (\lambda-\mu)\,\frac{\lambda_j}{T_j^2}\,\frac{\lambda_i - T_i\sum_{k\ne i}\frac{\lambda_k}{T_k}}{\left(\lambda_i + T_i\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)^3}.$$

Therefore, in order to get uniqueness we need

$$\frac{\partial^2 p_i}{\partial T_i^2} + \sum_{j\ne i}\frac{\partial^2 p_i}{\partial T_i\,\partial T_j} = \frac{\lambda-\mu}{\left(\lambda_i + T_i\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)^3}\left[\lambda_i\sum_{j\ne i}\frac{\lambda_j}{T_j^2} - T_i\left(\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)\left(\sum_{k\ne i}\frac{\lambda_k}{T_k^2}\right) + 2\left(\sum_{k\ne i}\frac{\lambda_k}{T_k}\right)^2\right] \ge 0.$$


This leads to the sufficient condition

$$(I-1)\left(\frac{\lambda_{\min}^2}{T_{\max}^2} - T_{\max}\frac{\lambda_{\max}^2}{T_{\min}^3}\right) + \frac{2\lambda_{\min}^2}{T_{\min}^2} \ge 0,$$

which holds in particular when

$$\frac{\lambda_{\min}^2}{T_{\max}^2} > T_{\max}\frac{\lambda_{\max}^2}{T_{\min}^3}, \quad\text{i.e., when}\quad T_{\min}^3\lambda_{\min}^2 > T_{\max}^3\lambda_{\max}^2.$$

A.4. Proof of Theorem 5

For supermodularity of the TCP connections, we consider the sufficient condition $\frac{\partial^2 U_i}{\partial t_i\,\partial t_j} \ge 0$. It follows that

$$U_i = \frac{a_i}{R_i}\sqrt{\frac{a}{p_i}}\,(1 - p_i) - d(t_i) = \frac{a_i\sqrt{a}}{R_i}\left(p_i^{-1/2} - p_i^{1/2}\right) - d(t_i).$$

Then, for $j \ne i$,

$$\frac{\partial U_i}{\partial t_j} = \frac{a_i\sqrt{a}}{R_i}\left(-\frac{p_i^{-3/2}}{2} - \frac{p_i^{-1/2}}{2}\right)\frac{\partial p_i}{\partial t_j},$$

$$\frac{\partial^2 U_i}{\partial t_i\,\partial t_j} = \frac{a_i\sqrt{a}}{R_i}\left[\left(\frac{3p_i^{-5/2}}{4} + \frac{p_i^{-3/2}}{4}\right)\frac{\partial p_i}{\partial t_i}\frac{\partial p_i}{\partial t_j} - \left(\frac{p_i^{-3/2}}{2} + \frac{p_i^{-1/2}}{2}\right)\frac{\partial^2 p_i}{\partial t_i\,\partial t_j}\right].$$

Thus a sufficient condition for supermodularity ($\frac{\partial^2 U_i}{\partial t_i\,\partial t_j} \ge 0$ for all $i, j$, $j\ne i$) is

$$(3 + p_i)\,\frac{\partial p_i}{\partial t_i}\frac{\partial p_i}{\partial t_j} \ge 2p_i(p_i + 1)\,\frac{\partial^2 p_i}{\partial t_i\,\partial t_j}, \qquad \forall i, j,\ j\ne i.$$
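The reduction of the cross-derivative condition to the final inequality can be checked symbolically. In the sketch below, X, Y and Z stand for ∂p_i/∂t_i, ∂p_i/∂t_j and ∂²p_i/(∂t_i∂t_j):

```python
# Verify that the bracketed factor of d^2 U_i/(dt_i dt_j), multiplied by the
# positive quantity 4*p^(5/2), equals (3+p)XY - 2p(1+p)Z.
import sympy as sp

p, X, Y, Z = sp.symbols('p X Y Z', positive=True)

# Bracketed factor from the proof of Theorem 5 (the positive prefactor
# a_i*sqrt(a)/R_i is dropped, since it does not affect the sign).
cross = (sp.Rational(3, 4)*p**sp.Rational(-5, 2)
         + sp.Rational(1, 4)*p**sp.Rational(-3, 2))*X*Y \
        - (sp.Rational(1, 2)*p**sp.Rational(-3, 2)
           + sp.Rational(1, 2)*p**sp.Rational(-1, 2))*Z

diff_sm = sp.simplify(sp.expand(4*p**sp.Rational(5, 2)*cross)
                      - sp.expand((3 + p)*X*Y - 2*p*(1 + p)*Z))
```

`diff_sm` simplifies to zero, so `cross >= 0` holds if and only if the stated condition (3 + p_i)XY ≥ 2p_i(p_i + 1)Z holds.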

References

[1] E. Altman, D. Barman, R. El Azouzi, D. Ros, B. Tuffin, Pricing differentiated services: a game-theoretic approach, in: Proceedings of NETWORKING 2004, Lecture Notes in Computer Science, vol. 3042, Springer, Athens, Greece, 2004, pp. 430–441.
[2] P. Pieda, J. Ethridge, M. Baines, F. Shallwani, A Network Simulator Differentiated Services Implementation, Technical report, Available from: , July 2000.
[3] D. Dutta, A. Goel, J. Heidemann, Oblivious AQM and Nash equilibria, in: IEEE INFOCOM, 2003.
[4] J. Janssen, D. De Vleeschauwer, M. Büchli, G.H. Petit, Assessing voice quality in packet-based telephony, IEEE Internet Computing (2002) 48–56.
[5] M. Mandjes, Pricing strategies under heterogeneous service requirements, Computer Networks 42 (2003) 231–249.
[6] T. Alpcan, T. Basar, A game-theoretic framework for congestion control in general topology networks, in: 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, 2002, pp. 10–13.
[7] E. Altman, T. Boulogne, R. El-Azouzi, T. Jiménez, L. Wynter, A survey on networking games in telecommunications, Computers and Operations Research 33 (2) (2006) 286–311.
[8] Y. Jin, G. Kesidis, Nash equilibria of a generic networking game with applications to circuit-switched networks, in: IEEE INFOCOM, 2003.
[9] T. Basar, R. Srikant, A Stackelberg network game with a large number of followers, Journal of Optimization Theory and Applications 115 (3) (2002) 479–490.
[10] Y.A. Korilis, A.A. Lazar, A. Orda, Achieving network optima using Stackelberg routing strategies, IEEE/ACM Transactions on Networking 5 (1) (1997) 161–173.
[11] S. Floyd, V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Transactions on Networking 1 (4) (1993) 397–413.
[12] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, An architecture for differentiated services, Internet Standards Track RFC 2475, IETF, December 1998.
[13] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, Assured forwarding PHB group, Internet Standards Track RFC 2597, IETF, June 1999.
[14] S. Shakkottai, R. Srikant, How good are deterministic fluid models of Internet congestion control, in: Proceedings of IEEE INFOCOM, New York, 2002.
[15] Numerical Recipes in C: The Art of Scientific Computing, second ed., Section 5.6, Technical report, Available from: .
[16] B. Tuffin, Charging the Internet without bandwidth reservation: an overview and bibliography of mathematical approaches, Journal of Information Science and Engineering 19 (5) (2003) 765–786.
[17] D. Topkis, Equilibrium points in nonzero-sum n-person submodular games, SIAM Journal of Control and Optimization 17 (1979) 773–787.
[18] D.D. Yao, S-modular games with queueing applications, Queueing Systems 21 (1995) 449–475.
[19] C. Dovrolis, P. Ramanathan, D. Moore, Packet-dispersion techniques and a capacity-estimation methodology, IEEE/ACM Transactions on Networking 12 (6) (2004) 963–977.


[20] A.B. Downey, Using pathchar to estimate Internet link characteristics, in: Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, Cambridge, MA, United States, 1999.
[21] J.B. Rosen, Existence and uniqueness of equilibrium points for concave n-person games, Econometrica 33 (1965) 153–163.
[22] F. Bernstein, A. Federgruen, A general equilibrium model for decentralized supply chains with price- and service-competition, Technical report, Available from: .

E. Altman received the B.Sc. degree in Electrical Engineering (1984), the B.A. degree in Physics (1984) and the Ph.D. degree in Electrical Engineering (1990), all from the Technion – Israel Institute of Technology, Haifa. In 1990 he also received a B.Mus. degree in music composition from Tel-Aviv University. Since 1990, he has been with INRIA (National Research Institute in Informatics and Control) in Sophia-Antipolis, France. His current research interests include performance evaluation and control of telecommunication networks, in particular congestion control, wireless communications and networking games. He is on the editorial boards of several scientific journals: Stochastic Models, JEDC, COMNET, SIAM SICON and WINET. He has been the (co-)chairman of the program committees of several international conferences and workshops (on game theory, networking games and mobile networks). More information can be found at http://www.inria.fr/mistral/personnel/Eitan.Altman/me.html.

Dhiman Barman received his B.Tech degree in Computer Science and Engineering from the Indian Institute of Technology, Bombay, India in 1996, and an M.A. in Computer Science from Boston University in 2003. He is a Ph.D. student in Computer Science at Boston University. His research interests include protocol design and performance evaluation, game theory and Internet measurement.

Rachid El Azouzi received the Ph.D. degree in Applied Mathematics from the Mohammed V University, Rabat, Morocco (2000). He joined INRIA (National Research Institute in Informatics and Control) Sophia Antipolis for post-doctoral and Research Engineer positions. Since 2003, he has been a researcher at the University of Avignon, France. His research interests are mobile networks, performance evaluation, TCP protocol, error control in wireless networks, resource allocation, networking games and pricing.

David Ros received his B.Sc. (with honors) and M.Sc. degrees, both in Electronics Engineering, from the Simón Bolívar University, Caracas, Venezuela in 1987 and 1994, respectively, and his Ph.D. in Computer Science from the Institut National des Sciences Appliquées (INSA), Rennes, France in 2000. He is currently working as an Associate Professor (Maître de Conférences) at GET/ENST Bretagne, Rennes, in the Networking and Multimedia Department. His active research interests include transport protocols, network modelling and simulation, as well as pricing and quality of service issues in IP networks.

Bruno Tuffin (IRISA/INRIA) received his Ph.D. degree in applied mathematics from the University of Rennes 1 in 1997. Since then, he has been with INRIA-Rennes, France. His research interests include developing Monte Carlo and quasi-Monte Carlo simulation techniques for the performance evaluation of computer and telecommunication systems, and more recently developing Internet active measurement techniques and new pricing schemes. On this last topic, he is the coordinator of INRIA's cooperative research action PRIXNET (see http://www.irisa.fr/armor/Armor-Ext/RA/prixnet/ARC.htm).

Computer Networks 50 (2006) 1003–1021 www.elsevier.com/locate/comnet

A capacity acquisition protocol for channel reservation in CDMA networks Xudong Wang

*

Kiyon, Inc., R&D Division, 4225 Executive Square, Suite 290, La Jolla, CA 92037, United States Received 20 February 2005; accepted 16 May 2005 Available online 8 August 2005 Responsible Editor: E. Ekici

Abstract

In this paper, a capacity acquisition protocol is proposed for channel reservation in CDMA networks. Under this protocol, a cell is virtually divided into three regions (i.e., inner region, forced handoff region, and active handoff region). A new call in the active handoff region works in soft handoff mode upon its admission, while a new call in the inner and forced handoff regions works in single mode. However, in the forced handoff region, calls working in single mode can be forced into soft handoff mode when extra capacity is needed by soft handoff calls. As a result, no explicit channel reservation is required before a call enters soft handoff. By adjusting the size of the forced handoff region, the capacity acquisition can adapt to the traffic load and guarantee a desired call dropping probability. To evaluate the capacity acquisition protocol, an analytical model is derived and validated through computer simulations. Numerical results illustrate that the capacity acquisition protocol significantly reduces the call dropping probability.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Capacity acquisition; Soft handoff; CDMA; Channel reservation

* Tel.: +1 425 442 5039; fax: +1 858 453 3647. E-mail address: [email protected]

1389-1286/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comnet.2005.05.032

1. Introduction

Dropping an on-going call is more disturbing than blocking a new call. To resolve this problem, channel reservation for handoff calls can be used in a call admission control (CAC) algorithm to prioritize handoff calls over new arrival calls. However, channel reservation in CDMA networks is nontrivial because of the special features of soft handoff. It is well known that soft handoff in CDMA networks reduces interference and increases the interference-sensitive capacity [1]. This feature must be taken into account by the channel reservation scheme in a CAC algorithm. In addition, two other important features identified in this paper need to be considered. One is that soft handoff and the

need of channel reservation occur at different times. The other is that a call working in single mode releases a certain amount of capacity when it is forced into soft handoff mode. Based on these features, a novel channel reservation scheme, called the capacity acquisition protocol, is proposed in this paper for soft handoff calls in CDMA networks. The protocol focuses on uplink operation of a CDMA network.

In the capacity acquisition protocol, three regions, i.e., the inner region, the forced handoff region, and the active handoff region, are introduced for each cell. When an admitted call moves into, or a new call arrives at, the active handoff region, it works in soft handoff mode. In the forced handoff region, an admitted new call works in single mode and thus communicates with only one base station. Since a call in soft handoff mode consumes less capacity than it does in single mode, some capacity is implicitly reserved by a newly admitted call in the forced handoff region. When more capacity is needed by soft handoff calls, it can be acquired from newly admitted calls in the forced handoff region by forcing them into soft handoff mode. Thus, before a call enters soft handoff, no explicit channel reservation is required. The size of the forced handoff region can be adjusted according to the traffic load in the network; as long as the traffic load is lower than an upper bound, a target call dropping probability is guaranteed by the load-adaptive protocol.

To date, many algorithms have been proposed for call admission in CDMA networks [2–7]. Neither the signal-to-interference ratio (SIR)-based CAC algorithm in [2] nor the interference level-based CAC algorithm in [3] considers the interference reduction by soft handoff. The interference reduction brought by soft handoff is not considered in [4] either, although a call in the soft handoff region can access two base stations. Compared to the schemes in [2–4], the CAC analytical model proposed in [5] achieves better performance, because it takes into account the capacity increase factor introduced by soft handoff. According to this model, the larger the soft handoff region in a cell, the smaller the call blocking probability of the CDMA network. No differentiation is

performed between soft handoff and new arrival calls in [5]. Algorithms reserving fixed channels [4] for soft handoff calls waste resources. In [6], a "look around" CAC algorithm is proposed to reduce dropped calls; soft guard channels are exclusively used for handoff calls, which also results in low resource utilization. In [7], an adaptive channel reservation scheme is proposed for soft handoff calls in a CDMA network. When a user with an on-going call moves into a reservation region, it starts a channel reservation procedure, so channels are reserved individually for each handoff call. Thus, each handoff call does not hold a fixed reservation of capacity, and utilization is improved. However, this method still wastes a large amount of capacity, because the capacity reserved for a soft handoff call is held unused during the period from the approval of the reservation request to the initiation of the soft handoff call. In addition, when the traffic load (i.e., the new call arrival rate) is high, the resource utilization of the adaptive reservation scheme is not guaranteed to be more efficient than that of a fixed reservation scheme, because many users need to have reserved channels; in other words, this scheme is not actually adaptive to the traffic load.

The rest of this paper is organized as follows. The capacity acquisition protocol is proposed in Section 2 and analyzed in Section 3. The analytical model is validated by simulations in Section 4, where analytical results are used to compare the new scheme with other channel reservation schemes for CDMA networks. In Section 5, a load-adaptive capacity acquisition protocol and its performance are presented. Practical issues of the capacity acquisition protocol are discussed in Section 6. The paper is concluded in Section 7.

2. Capacity acquisition based on soft handoff

The capacity acquisition protocol is motivated by the features of soft handoff.

2.1. Features of soft handoff

As shown in Fig. 1, a mobile terminal in soft handoff is able to communicate with base stations

A and B.

Fig. 1. A soft handoff example.

For a user at distance r from a base station, the signal attenuation is assumed to be $f(r) = r^l\,10^{\zeta/10}$ [1], where $l$ and $\zeta$ capture the path loss and the shadowing, respectively. For base station $i$, $\zeta = an + bn_i$ [1], where $n$ is the part common to all base stations, and $n_i$ pertains solely to base station $i$. Moreover, $n$ and $n_i$ are independent Gaussian random variables with zero mean and standard deviation $\sigma$. Based on this assumption, the interference reduction by soft handoff is analyzed as follows. Suppose the received signal power at base station A, denoted by $P^H_A$, is required to be $P$. When the mobile terminal communicates with base station A only, its average transmitting power is $P\,E(f(r_1))$, and the average interference power level at base station B, denoted by $P^H_B$, is $P\,E(f(r_1)/f(r_0))$, where $E(\cdot)$ is the expectation operator. When soft handoff is used, the mobile terminal communicates with either base station A or base station B, depending on the attenuations $f(r_0)$ and $f(r_1)$. If $f(r_0) > f(r_1)$, the mobile terminal must communicate with base station A; otherwise, it communicates with base station B. Thus, the average power levels at base stations A and B, represented by $P^{SH}_A$ and $P^{SH}_B$, are

$$P^{SH}_A = P\,\Pr(f(r_0) > f(r_1)) + P\,E\bigl(f(r_0)/f(r_1) \mid f(r_0) \le f(r_1)\bigr), \qquad (1)$$

and

$$P^{SH}_B = P\,\Pr(f(r_1) > f(r_0)) + P\,E\bigl(f(r_1)/f(r_0) \mid f(r_1) \le f(r_0)\bigr), \qquad (2)$$

respectively. Considering that $\zeta$ is Gaussian, $P^H_B$, $P^{SH}_A$, and $P^{SH}_B$ can be expressed as

$$P^H_B = P\,y^{-l}\,e^{(b\beta\sigma)^2/2}, \qquad (3)$$

$$P^{SH}_A = P\,y^{l}\,e^{(b\beta\sigma)^2/2}\,\mathrm{erfc}\!\left(\frac{b\beta\sigma}{\sqrt 2} + \frac{l\ln(y)}{\sqrt 2\,b\beta\sigma}\right) + P\,\mathrm{erfc}\!\left(-\frac{l\ln(y)}{\sqrt 2\,b\beta\sigma}\right), \qquad (4)$$

$$P^{SH}_B = P\,y^{-l}\,e^{(b\beta\sigma)^2/2}\,\mathrm{erfc}\!\left(\frac{b\beta\sigma}{\sqrt 2} - \frac{l\ln(y)}{\sqrt 2\,b\beta\sigma}\right) + P\,\mathrm{erfc}\!\left(\frac{l\ln(y)}{\sqrt 2\,b\beta\sigma}\right), \qquad (5)$$

respectively, where $\beta = \ln(10)/10$, $y = r_0/r_1$, and $\mathrm{erfc}(x) = \int_x^\infty e^{-t^2}/\sqrt{\pi}\,\mathrm{d}t$. Thus, the reduced power levels of the mobile terminal under consideration (i.e., the reduced interference to other mobile terminals) at base stations A and B, denoted by $\Delta P_A$ and $\Delta P_B$, respectively, can be derived according to $\Delta P_A = P^H_A - P^{SH}_A$ and $\Delta P_B = P^H_B - P^{SH}_B$. Suppose $l = 4$, $b = 1/\sqrt 2$, and $\sigma = 8$; then the power levels (normalized by $P$) versus the distance ratio $r_1/r_0$ are as shown in Figs. 2 and 3 for base stations A and B, respectively.

Fig. 2. Normalized power levels with and without soft handoff at base station A.

Fig. 3. Normalized power levels with and without soft handoff at base station B.

It is obvious that the power levels required at the base stations are significantly reduced when soft handoff is used. Suppose a mobile terminal works in single mode and communicates with base station A; then the closer it is to base station B, the more significant the interference reduction. However, two further distinguishing features of soft handoff can be observed in Figs. 2 and 3:

• Feature 1: soft handoff and the need for channel reservation occur at different times. Assume that the mobile terminal is admitted where r_1/r_0 is small, i.e., the mobile terminal is closer to base station A in Fig. 1. When the mobile terminal moves towards base station B, its interference to mobile terminals in the cell covered by base station B increases (see the curve without soft handoff in Fig. 3). When the mobile terminal enters soft handoff, the interference is greatly reduced. Thus, before or when the on-going call enters soft handoff mode, there is no need to reserve extra channels, because completion of soft handoff is guaranteed. However, when the mobile terminal in soft handoff mode continues to move towards base station B, its average power level at base station B steadily increases and may cause call dropping and blocking in the cell covered by base station B. To reduce the call dropping as well as the call blocking probability, a certain amount of capacity needs to be acquired to compensate for the increased interference.


• Feature 2: a single mode call releases a certain amount of capacity when it is forced into soft handoff. For a call that is admitted and works in single mode, if it switches into soft handoff mode, it reduces the interference to all base stations that surround it. In other words, its consumed capacity at all base stations is decreased. The released capacity can be used to compensate the increased interference caused by mobile soft handoff calls. Thus, an admitted call working in single mode implicitly reserves capacity for future acquisition needed by mobile soft handoff calls.
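The power levels in Figs. 2 and 3 can be evaluated numerically from Eqs. (3)–(5) as reconstructed above. The following sketch uses the paper's example parameters (l = 4, b = 1/√2, σ = 8); the function names and the exact erfc convention (the paper's erfc is half the standard one) are my assumptions:

```python
# Numerical sketch of the normalized soft handoff power levels, Eqs. (3)-(5).
import math

BETA = math.log(10) / 10.0   # beta = ln(10)/10

def erfc_paper(x):
    """The paper's erfc(x) = integral_x^inf e^(-t^2)/sqrt(pi) dt = 0.5*erfc_std(x)."""
    return 0.5 * math.erfc(x)

def power_levels(r1_over_r0, l=4.0, b=1.0/math.sqrt(2.0), sigma=8.0):
    """Return (P_H_B, P_SH_A, P_SH_B), normalized by P.

    Without soft handoff the level at base station A is simply 1.0 (the
    target P). Parameter defaults follow the paper's example.
    """
    y = 1.0 / r1_over_r0                      # y = r0/r1
    s = b * BETA * sigma
    g = math.exp(s * s / 2.0)                 # shadowing factor e^{(b*beta*sigma)^2/2}
    u = l * math.log(y) / (math.sqrt(2.0) * s)
    p_h_b = y**(-l) * g                                                     # Eq. (3)
    p_sh_a = y**l * g * erfc_paper(s / math.sqrt(2.0) + u) + erfc_paper(-u)  # Eq. (4)
    p_sh_b = y**(-l) * g * erfc_paper(s / math.sqrt(2.0) - u) + erfc_paper(u)  # Eq. (5)
    return p_h_b, p_sh_a, p_sh_b
```

At r_1/r_0 = 1 the two soft handoff levels coincide by symmetry, and over the plotted range the level at base station B with soft handoff stays below the level without it, matching the qualitative shape of Figs. 2 and 3.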

2.2. The novel channel reservation scheme—the capacity acquisition protocol An implicit channel reservation scheme is proposed for soft handoff calls in CDMA networks by utilizing the special features of soft handoff itself. In this scheme a cell is divided into three regions, i.e., inner region, forced handoff region, and active handoff region. In the cellular model shown in Fig. 4, the region starting from the cell boundary up to distance d1 is the active handoff region, while the region from distance d1 to d2 is the forced handoff region. The remaining region of a cell is the inner region. Since the soft handoff region is the overlapping area of neighboring cells, the actual shape of a cell is not a square but similar to a hexagon. Since it is difficult to find the exact location of a mobile terminal in a cell, signal strength is used to determine the region that a mobile terminal is located in. A mobile terminal in the cell covered by base station A monitors the pilot signal from base station A, if the signal strength is larger than a threshold Sfr, the mobile terminal is in the inner region; else if the signal strength is larger than another threshold Saf (Saf < Sfr), the mobile terminal is in forced handoff region. Finally, if the signal strength is less than Saf, the mobile terminal is in the active handoff region. In each region, a mobile terminal keeps a set (called active set) of base stations that it can communicate with. In the inner region, only one base


station exists in the active set, while in the soft handoff regions (either the forced handoff region or the active handoff region) at least two base stations must be in the active set so that the mobile terminal can enter soft handoff mode. Based on the investigated features of soft handoff and the introduced regions of a cell, capacity acquisition works as follows:

• Reserve capacity implicitly. An admitted call works in a different mode depending on where it is located. When a mobile terminal is in the active handoff region, it needs to work in soft handoff mode. In the inner region, a mobile terminal can only communicate with one base station, so it works in single mode. For a new call originated in the forced handoff region of a base station, e.g., base station A, its admission is performed by assuming that it can only communicate with base station A. From Feature 2, the admitted single-mode call carries extra capacity that can be released on the demand of soft handoff calls that need more capacity. Thus, implicit channel reservation is fulfilled by allowing newly arrived calls in the forced handoff region to work in single mode.

• Detect outage. When the mobile terminal of a soft handoff call moves towards base station B, the received power level at base station B increases. Thus, the SINRs of the received signals of all other mobile terminals decrease. Given a threshold SINRthr, base station B checks the SINR of the received signal of each mobile terminal, denoted by SINRMT, and compares it with the target SINRo. If SINRMT < SINRo + SINRthr, outage is detected. The threshold ensures that outage is detected before it actually occurs. If the soft handoff call under consideration keeps moving towards base station B, outage may take place in the uplink of multiple mobile terminals. In order to avoid dropping multiple calls, it is better to drop the soft handoff call under consideration; in this way, the call dropping probability can be reduced.
However, to further reduce the call dropping probability of soft handoff calls, a certain amount of the system capacity implicitly carried by newly admitted


X. Wang / Computer Networks 50 (2006) 1003–1021

Fig. 4. The cell structure.

calls in the forced handoff region must be used to accommodate the increased received power level of a soft handoff call at base station B, as explained next.

• Acquire implicitly reserved capacity. Suppose a soft handoff call communicates with base stations A and B and moves towards base station B, as shown in Fig. 4. When it requires extra


capacity in the cell of base station B, some single-mode calls in the forced handoff region of the cell of base station A need to be forced into soft handoff mode. The call dropping probability can be greatly reduced by forcing such single-mode calls into soft handoff. Since soft handoff reduces interference, as pointed out by Feature 1, an admitted new call in the forced handoff region can always successfully switch into soft handoff mode. However, the following procedure is followed so that the necessary capacity can be acquired within a short time. First, base station A finds the mobile terminal in the forced handoff region, e.g., mobile terminal MTa, that is farthest from it in the sense of pilot signal strength. Second, mobile terminal MTa is forced into soft handoff mode. Maximum capacity is released by MTa: the farther a single-mode call is from base station A (i.e., the larger r1/r0 is), the larger the interference reduction at base station B. After MTa is forced into soft handoff, if SINRMT > SINRo + SINRthr is satisfied, capacity acquisition is completed; otherwise, the next single-mode call farthest from base station A is forced into soft handoff. This process repeats until SINRMT > SINRo + SINRthr is satisfied for all the received signals at base station B.
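The acquisition procedure above can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the `Call`/`BaseStationB` data model and the linear `relief_per_call` interference model are simplifying assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Call:
    pilot_strength: float    # pilot power received from base station A
    mode: str = "single"     # "single" or "soft"
    region: str = "forced"

@dataclass
class BaseStationB:
    # Toy model: forcing one single-mode call into soft handoff lifts the
    # worst-case uplink SINR at B by a fixed amount (Feature 1).
    sinr: float              # current worst-case uplink SINR at B
    relief_per_call: float   # SINR gained per call forced into soft handoff

def acquire_capacity(calls, bs_b, sinr_target, sinr_thr):
    """Force single-mode calls in A's forced handoff region into soft
    handoff, weakest pilot (farthest from A) first, until the monitored
    SINR at B clears the outage threshold SINRo + SINRthr."""
    candidates = sorted(
        (c for c in calls if c.mode == "single" and c.region == "forced"),
        key=lambda c: c.pilot_strength)       # farthest from A first
    for call in candidates:
        if bs_b.sinr >= sinr_target + sinr_thr:
            break                             # enough capacity acquired
        call.mode = "soft"                    # force into soft handoff
        bs_b.sinr += bs_b.relief_per_call     # interference at B decreases
    return bs_b.sinr >= sinr_target + sinr_thr
```

Ordering by the weakest pilot mirrors the farthest-first rule above: the call that releases the most capacity at B is forced first, so the loop usually terminates after very few forced handoffs.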

3. Analysis of the capacity acquisition protocol

3.1. Assumptions

In a CDMA network, due to factors such as irregular cell boundaries, traffic characteristics, and the movement of mobile terminals [5], it is complicated to find an accurate model of soft handoff. To simplify the analysis, the following assumptions are made:

• The residual times of a call in all regions are generally distributed [8].
• New call arrivals form a Poisson process with rate λn.
• New calls are uniformly distributed in the cells.


• The call holding time Tc is exponentially distributed with mean 1/μc.
• The average residual time of a call in a region is proportional to the shortest distance from the center to the boundary of the region.
• Mobile terminals move in different directions with equal probability.
• A mobile terminal communicates with two base stations when it is in soft handoff mode.

Based on these assumptions, an analytical model is proposed next.

3.2. The analytical model

Consider a CDMA network before the capacity acquisition protocol is applied: new arrival and soft handoff calls are served in the same queue. Since a call in soft handoff mode generates less interference than it does in single mode, a soft handoff call consumes a smaller amount of capacity than a single-mode call. Thus, there are two ways to model the system when the capacity acquisition protocol is not applied. The first option is to treat soft handoff calls differently from new arrival calls in the queueing process, in which case the system has a fixed capacity; however, this complicates the queueing analysis. The other option is to treat soft handoff and new arrival calls equally in the sense of capacity consumption; in this case the server capacity varies with the average arrival rate of soft handoff calls in the system. The latter method has been widely adopted in the related work [1,5,7], and is also adopted in our analytical model. Thus, the CDMA network without capacity acquisition can be modeled as a queueing system in which the server capacity varies with the number of soft handoff calls, but new arrival and soft handoff calls are processed in the same way.

When the capacity acquisition protocol is used to prioritize soft handoff calls over new arrival calls, a soft handoff call blocked from the previous queueing system has another chance to be served in the CDMA network by applying capacity acquisition. Thus, soft handoff calls blocked in the previous queue are served in another queue.
The capacity of the second queue depends on how much capacity can be released by the newly admitted calls in the forced handoff region.

Fig. 5. The analytical model.

Thus, the overall CDMA network can be analyzed using two sequenced queueing processes, as shown in Fig. 5. C1 is the uplink capacity of a cell when new arrival calls in the forced handoff region do not work in soft handoff mode (i.e., before the capacity acquisition protocol is applied), while C2 is the reserved capacity carried by single-mode calls in the forced handoff region. λh is the generating rate of handoff calls, and 1/μch is the average channel holding time. Moreover, P_B1 and P_B2 are the call blocking probabilities of the first and second queueing processes, respectively. According to this model, the call blocking probability PB equals the call blocking probability of the first queueing process, i.e., P_B = P_B1, while the call dropping probability PD of the system is P_D = P_B1 · P_B2. In order to find the solutions of PB and PD, the quantities C1, C2, and λh need to be derived.
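The two sequenced queueing processes can be sketched numerically: treating each stage as an M/M/m/m loss system, as the later analysis in Section 3.5 does, P_B and P_D follow from two Erlang-B computations. The rates and capacities in the usage below are illustrative values, not the paper's experimental settings.

```python
from math import factorial

def erlang_b(offered_load, m):
    """Blocking probability of an M/M/m/m loss system with
    offered load a = lambda/mu and m channels (Erlang-B formula)."""
    m = int(m)
    terms = [offered_load ** i / factorial(i) for i in range(m + 1)]
    return terms[-1] / sum(terms)

def blocking_and_dropping(lam_n, lam_h, mu_ch, c1, c2):
    """Two sequenced loss systems as in Fig. 5: new and handoff calls
    share capacity C1; handoff calls blocked there retry on the
    implicitly reserved capacity C2, so P_D = P_B1 * P_B2."""
    p_b1 = erlang_b((lam_n + lam_h) / mu_ch, c1)  # first queue, m = C1
    p_b2 = erlang_b(lam_h * p_b1 / mu_ch, c2)     # second queue, m = C2
    return p_b1, p_b1 * p_b2                      # (P_B, P_D)
```

With, e.g., 1/μch = 80 s, C1 = 45 and C2 = 12 channels, λn = 0.5 and λh = 0.2 calls/s, the dropping probability comes out well below the blocking probability, which is the qualitative effect the protocol aims for.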

3.3. Derivations of C1 and C2

In order to determine the capacities C1 and C2, two auxiliary parameters C3 and C4 are introduced. As shown in Fig. 6, C3 denotes the uplink capacity when a cell has only one soft handoff region, of size equal to that of the active handoff region (i.e., the region represented by d1 in Figs. 4 and 6), while C4 denotes the uplink capacity when the single soft handoff region equals the combined size of the forced handoff region and the active handoff region (i.e., the region represented by d2 in Figs. 4 and 6). Given a cellular CDMA network, C3 and C4 can be determined [1,5], and will be presented in Section 4.1.

In a CDMA network, the uplink capacity of a cell is inversely proportional to 1 + f [1], where f is the interference factor, defined as the total interference from other-cell users normalized by the average number of users per cell. Consider the zeroth cell of a cellular network. We define R1 as the inner region of all cells except the zeroth cell, R2 as the active handoff region of all cells, and R3 as the forced handoff region of all cells. According to the definition of the interference factor, if C_hard denotes the uplink capacity of a cell in a CDMA network without soft handoff, then C1 of the zeroth cell is

$$C_1 = C_{hard}\,\frac{1 + f_{hard}}{1 + f_{R_1} + f_{R_2} + f_{R_3}}, \quad (6)$$

where f_R1, f_R2, and f_R3 are the interference factors due to the mobile terminals in regions R1, R2, and R3, respectively. For the scenario corresponding to C3, all calls in region R3 are in single mode, so the interference factor due to calls in this region is different from f_R3 and is denoted f'_R3. In the scenario of C4, all mobile terminals in region R3 are in soft handoff mode, so the interference factor corresponding to this region is denoted f''_R3. Thus, C3 and C4 can be derived as

$$C_3 = C_{hard}\,\frac{1 + f_{hard}}{1 + f_{R_1} + f_{R_2} + f'_{R_3}} \quad (7)$$

and

$$C_4 = C_{hard}\,\frac{1 + f_{hard}}{1 + f_{R_1} + f_{R_2} + f''_{R_3}}, \quad (8)$$

respectively. From (6) and (7), C1 and C3 are related as follows:

$$C_{hard}(1 + f_{hard})\left(\frac{1}{C_1} - \frac{1}{C_3}\right) = f_{R_3} - f'_{R_3}. \quad (9)$$

Similarly, the relationship between C1 and C4 can be derived from (6) and (8) as

$$C_{hard}(1 + f_{hard})\left(\frac{1}{C_1} - \frac{1}{C_4}\right) = f_{R_3} - f''_{R_3}. \quad (10)$$

Consider now the scenario corresponding to C1, in which some calls work in single mode while others are in soft handoff mode. The densities of single mode



Fig. 6. Different handoff regions corresponding to C3 and C4.

and soft handoff calls are λn/(λn + λh) and λh/(λn + λh), respectively, where λn is the new call arrival rate and λh is the generating rate of soft handoff calls. From [1], the interference due to calls in a region is proportional to the density of calls in that region. In addition, if all calls in R3 were in single mode, their generated interference at the base station of the zeroth cell would be f'_R3 N, where N is the average number of calls in a cell. Thus, in the scenario corresponding to C1, the interference experienced at the base station of the zeroth cell due to single-mode calls in R3 must be f'_R3 N λn/(λn + λh). Similarly, the interference due to soft handoff calls in R3 is f''_R3 N λh/(λn + λh). Thus, the interference due to both types of calls in R3 is

$$I_{R_3} = f'_{R_3} N \frac{\lambda_n}{\lambda_n + \lambda_h} + f''_{R_3} N \frac{\lambda_h}{\lambda_n + \lambda_h}.$$

According to the definition of the interference factor, f_R3 = I_R3/N, so

$$f_{R_3} = f'_{R_3}\frac{\lambda_n}{\lambda_n + \lambda_h} + f''_{R_3}\frac{\lambda_h}{\lambda_n + \lambda_h}. \quad (11)$$

Thus,

$$f_{R_3} - f''_{R_3} = \left(f'_{R_3} - f''_{R_3}\right)\frac{\lambda_n}{\lambda_n + \lambda_h}. \quad (12)$$

Combining (9), (10), and (12), and with some algebra, C1 can be derived from

$$\frac{1}{C_1} = \frac{1}{C_4}\,\frac{\lambda_h}{\lambda_n + \lambda_h} + \frac{1}{C_3}\,\frac{\lambda_n}{\lambda_n + \lambda_h}. \quad (13)$$

Since C4 > C3, it is easy to show that C1 satisfies C3 < C1 < C4. The difference between C4 and C1 is the reserved capacity carried by the new arrival calls working in single mode in the forced handoff region, so C2 is given by

$$C_2 = C_4 - C_1. \quad (14)$$
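Eqs. (13) and (14) are easy to evaluate directly. The sketch below assumes the Table 2 capacity factors for a 30-channel cell (C3 = 30 × 1.42 and C4 = 30 × 1.89, corresponding to d1 = 0.1 and d2 = 0.3); the traffic split λn, λh is illustrative.

```python
def effective_capacities(c3, c4, lam_n, lam_h):
    """Eq. (13): 1/C1 is the traffic-weighted mean of 1/C3 and 1/C4;
    Eq. (14): C2 = C4 - C1 is the implicitly reserved capacity."""
    total = lam_n + lam_h
    inv_c1 = (lam_h / total) / c4 + (lam_n / total) / c3
    c1 = 1.0 / inv_c1
    return c1, c4 - c1

# Table 2 factors for a 30-channel cell with d1 = 0.1, d2 = 0.3:
c3, c4 = 30 * 1.42, 30 * 1.89          # 42.6 and 56.7 channels
c1, c2 = effective_capacities(c3, c4, lam_n=0.5, lam_h=0.2)
```

C1 always falls strictly between C3 and C4, approaching C3 when nearly all traffic consists of new calls and C4 when handoff traffic dominates.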

3.4. Derivation of λh

A new arrival call switches into soft handoff mode under the following situations:

• A new call initiated in the inner region switches into soft handoff mode if it enters the active handoff region, or if it is forced into soft handoff after entering the forced handoff region, before it is terminated.
• A new call initiated in the forced handoff region switches into soft handoff mode if it enters the active handoff region before it is terminated. Such a new call may also be forced into soft handoff mode when soft handoff calls need extra capacity to compensate for increased interference, so that the call dropping probability is reduced.
• A new call initiated in the active handoff region immediately enters soft handoff mode upon being admitted.

If N denotes the number of handoffs generated by a new arrival call, then the probability that it causes one handoff, denoted by Pr(N = 1), is given by

$$\Pr(N = 1) = P_I(1 - P_B)P_{I_{sh}} x_h + P_F(1 - P_B)P_{F_{sh}} x_h + P_A(1 - P_B)x_h, \quad (15)$$



where PI, PF, and PA are the probabilities that the new call arrives in the inner region, forced handoff region, and active handoff region, respectively. Thus, in Eq. (15), the first, second, and third terms on the right side represent the probabilities that a handoff call is generated by a call in the inner region, forced handoff region, and active handoff region, respectively. P_Ish is the probability that a new call in the inner region requests handoff before it is terminated, while P_Fsh is the same probability for a new call in the forced handoff region. x_h denotes the probability that no more handoffs are requested by a handoff call.

Suppose P_Is is the probability that a single-mode call in the inner region enters the forced handoff region before it is terminated, P_Fs is the probability that a single-mode call in the forced handoff region enters the active handoff region before it is terminated, P_Fi is the probability that a single-mode call in the forced handoff region enters the active handoff region under the condition that it has left the inner region, and y_Fs denotes the probability that a single-mode call in the forced handoff region is forced into soft handoff mode.

If a new call initiated in the inner region requests handoff, it must first enter the forced handoff region before it is terminated; the probability of this action is P_Is. When this call is in the forced handoff region, handoff occurs when it continues to move and enters the active handoff region, or when it is forced into soft handoff. The probability of the former scenario is P_Fi, while that of the latter is (1 − P_Fi)y_Fs. Thus, P_Ish equals

$$P_{I_{sh}} = P_{I_s}\left(P_{F_i} + (1 - P_{F_i})y_{F_s}\right). \quad (16)$$

Similarly, P_Fsh can be derived as

$$P_{F_{sh}} = P_{F_s} + (1 - P_{F_s})y_{F_s}. \quad (17)$$

No more handoffs occur for a call under the following situations:

• The soft handoff call is dropped.
• If the soft handoff call is not dropped, it does not leave the soft handoff region (including both the forced handoff region and the active handoff region) before it is terminated.
• The soft handoff call leaves the handoff region and stays in the inner region until it is terminated.

Thus, x_h is expressed as

$$x_h = P_D + (1 - P_D)\left[1 - P_{H_h} + P_{H_h} P_a (1 - P_{I_h})\right], \quad (18)$$

where P_Hh is the probability that a handoff call leaves the handoff region before it is terminated, P_a denotes the conditional probability that a mobile terminal moves from the handoff region to the inner region under the condition that the mobile terminal leaves the handoff region [5], and P_Ih is the probability that the handoff call moves into the inner region and then leaves that region before the call is terminated.

When N = 2, the probability that an admitted call causes two handoffs, denoted by Pr(N = 2), is

$$\Pr(N = 2) = P_I(1 - P_B)P_{I_{sh}}(1 - P_D)y_h x_h + P_F(1 - P_B)P_{F_{sh}}(1 - P_D)y_h x_h + P_A(1 - P_B)(1 - P_D)y_{A_i} x_h, \quad (19)$$

where y_Ai is the probability that the new call in the active handoff region requests another handoff, and y_h is the probability that a handoff call makes another handoff. For a handoff call to generate another handoff, it must not be dropped and must request another handoff; the probability of this event is (1 − P_D)y_h, and by the definition of x_h, y_h(1 − P_D) = 1 − x_h, so

$$y_h = \frac{1 - x_h}{1 - P_D}. \quad (20)$$

Similarly, when N = 3, Pr(N = 3) is

$$\Pr(N = 3) = P_I(1 - P_B)P_{I_{sh}}\left[(1 - P_D)y_h\right]^2 x_h + P_F(1 - P_B)P_{F_{sh}}\left[(1 - P_D)y_h\right]^2 x_h + P_A(1 - P_B)(1 - P_D)y_{A_i}(1 - P_D)y_h x_h. \quad (21)$$

In general, Pr(N = n) is given by

$$\Pr(N = n) = P_I(1 - P_B)P_{I_{sh}}\left[(1 - P_D)y_h\right]^{n-1} x_h + P_F(1 - P_B)P_{F_{sh}}\left[(1 - P_D)y_h\right]^{n-1} x_h + P_A(1 - P_B)(1 - P_D)y_{A_i}\left[(1 - P_D)y_h\right]^{n-2} x_h. \quad (22)$$
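The geometric structure of Pr(N = n) is what makes a closed-form handoff rate possible in (23): the series Σ n·Pr(N = n) is a pair of arithmetic-geometric sums. A numeric cross-check of this structure, with illustrative probability values (and y_h tied to x_h and P_D as in (20)):

```python
def pr_n(n, PI, PF, PA, PB, PD, P_Ish, P_Fsh, y_Ai, y_h, x_h):
    """Pr(N = n): probability that an admitted call generates exactly n
    handoffs, following the pattern of Eqs. (15), (19), and (21)."""
    r = (1 - PD) * y_h                  # one more successful handoff
    base = (PI * P_Ish + PF * P_Fsh) * (1 - PB) * r ** (n - 1) * x_h
    if n == 1:
        return base + PA * (1 - PB) * x_h
    return base + PA * (1 - PB) * (1 - PD) * y_Ai * r ** (n - 2) * x_h

# Illustrative values; y_Ai = y_h by the memoryless assumption of Sec. 4.1.
PI, PF, PA = 0.49, 0.32, 0.19
PB, PD, x_h = 0.05, 0.02, 0.6
P_Ish, P_Fsh = 0.3, 0.5
y_h = (1 - x_h) / (1 - PD)              # Eq. (20)
y_Ai = y_h
r = (1 - PD) * y_h                      # equals 1 - x_h

# Truncated direct series versus the closed-form sum used in (23):
direct = sum(n * pr_n(n, PI, PF, PA, PB, PD, P_Ish, P_Fsh, y_Ai, y_h, x_h)
             for n in range(1, 2000))
closed = ((PI * P_Ish + PF * P_Fsh) * (1 - PB) * x_h / (1 - r) ** 2
          + PA * (1 - PB) * x_h * (1 + (y_Ai / y_h) * (1 / (1 - r) ** 2 - 1)))
```

The check uses the identities Σ n r^(n−1) = 1/(1 − r)² and Σ_{n≥2} n r^(n−2) = (1/(1 − r)² − 1)/r with r = (1 − P_D)y_h < 1.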


Thus, given the new call arrival rate λn, the average generating rate of handoff calls λh must be

$$\lambda_h = \lambda_n \sum_{n=1}^{\infty} n \Pr(N = n) = \lambda_n\left\{\frac{\left(P_I P_{I_{sh}} + P_F P_{F_{sh}}\right)(1 - P_B)x_h}{\left(1 - (1 - P_D)y_h\right)^2} + P_A(1 - P_B)x_h\left[1 + \frac{y_{A_i}}{y_h}\left(\frac{1}{\left(1 - (1 - P_D)y_h\right)^2} - 1\right)\right]\right\}. \quad (23)$$

3.5. Iterative solution of PB and PD

Both queueing processes in Fig. 5 are M/M/m/m queueing systems. In the first one, m = C1, so the call blocking probability P_B1 is

$$P_{B_1} = \frac{(\lambda_n + \lambda_h)^{C_1}}{C_1!\,\mu_{ch}^{C_1}}\, P_{b_0}, \quad (24)$$

where 1/μch is the average channel holding time and P_b0 is given by

$$P_{b_0} = \left(\sum_{i=0}^{C_1} \frac{(\lambda_n + \lambda_h)^i}{i!\,\mu_{ch}^i}\right)^{-1}. \quad (25)$$

Blocked handoff calls access the reserved capacity C2, so in the second queueing process m = C2. Thus, the call blocking probability P_B2 is

$$P_{B_2} = \frac{(\lambda_h P_{B_1})^{C_2}}{C_2!\,\mu_{ch}^{C_2}}\, P_{d_0}, \quad (26)$$

where P_d0 is given by

$$P_{d_0} = \left(\sum_{i=0}^{C_2} \frac{(\lambda_h P_{B_1})^i}{i!\,\mu_{ch}^i}\right)^{-1}. \quad (27)$$

Based on P_B1 and P_B2, the call blocking probability PB and the call dropping probability PD are equal to P_B1 and P_B1 · P_B2, respectively. Since PB, PD, C1, C2, and λh are correlated, no closed-form solution is available; thus, PB and PD need to be calculated iteratively. A proof of the existence and uniqueness of a solution for this iterative algorithm is out of the scope of this paper. However, given the constraints 0 < PB < 1 and 0 < PD < 1, the iterative computation in our experiments has never failed to find a unique point for PB and PD.

4. Performance of the capacity acquisition protocol

In this section the analytical model is first verified through simulations. Based on the validated analytical model, the capacity acquisition protocol is then compared with other channel reservation schemes.

4.1. System parameters of the analytical model

Before iteratively calculating the values of PB and PD, parameters such as C3 and C4 in (13); PI, PF, and PA in (15); P_Is, P_Fs, P_Fi, and y_Fs in (16) and (17); y_Ai in (19); P_Hh, P_Ih, and P_a in (18); and μch in (24) need to be determined. These parameters have been defined in the previous section, but for clarity they are summarized in Table 1.

The generally distributed residual time in a region is assumed to be exponentially distributed. Although it has been shown that the exponential distribution does not provide a good approximation to the residual time in a PCS network [10], it is still appropriate for cellular networks with a large cell size. Moreover, to have a fair comparison with other schemes such as [7], the same assumption (i.e., exponentially distributed residual time) as in [7] is used.

• PI, PF, and PA. Since new arrival calls are assumed to be uniformly distributed in a cell, the probability of a new call arriving in a particular region is proportional to the size of the region. If k1 = d1/a and k2 = d2/a, then PI = (1 − k2)², PA = k1(2 − k1), and PF = 1 − PI − PA.
• rh and rn. These two parameters can be determined from the area ratios, i.e., rh = PF/(PF + PA) and rn = PF/(PI + PF + PA).
• C3 and C4. These are the system capacities corresponding to two different sizes of the handoff region, which can be determined by considering the characteristics of soft handoff [1,5].



Table 1
Notations used in performance analysis

PI: the probability that a new call arrives in the inner region
PF: the probability that a new call arrives in the forced handoff region
PA: the probability that a new call arrives in the active handoff region
C3: the system capacity of a cell when the soft handoff region is just the active handoff region
C4: the system capacity of a cell when the soft handoff region includes both the forced handoff region and the active handoff region
P_Is: the probability that a call in the inner region enters the forced handoff region before it is terminated
P_Fs: the probability that a call in the forced handoff region enters the active handoff region before it is terminated
P_Fi: the probability that a call in the forced handoff region enters the active handoff region under the condition that it has left the inner region
P_Ih: the probability that a handoff call moves into the inner region and leaves that region before it is terminated
P_Hh: the probability that a handoff call leaves the handoff region before it is terminated
P_a: the conditional probability that a mobile terminal moves from the handoff region to the inner region under the condition that the mobile terminal leaves the current handoff region
y_Fs: the probability that a new call in the forced handoff region requests a handoff
y_Ai: the probability that a new call in the active handoff region requests another handoff
1/μch: the average channel holding time
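With a normalized square cell (a = 1), the region probabilities PI, PF, and PA defined in Section 4.1 reduce to simple geometry. A small sketch using the experimental values d1 = 0.1 and d2 = 0.3:

```python
def region_probabilities(d1, d2, a=1.0):
    """P_I, P_F, P_A for uniformly distributed new calls (Section 4.1):
    P_I = (1 - k2)^2, P_A = k1 * (2 - k1), P_F = 1 - P_I - P_A,
    with k1 = d1/a and k2 = d2/a."""
    k1, k2 = d1 / a, d2 / a
    p_i = (1 - k2) ** 2          # inner region
    p_a = k1 * (2 - k1)          # active handoff region
    return p_i, 1 - p_i - p_a, p_a

# Experimental values on a normalized cell (a = 1):
p_i, p_f, p_a = region_probabilities(0.1, 0.3)   # approx. (0.49, 0.32, 0.19)
```

The three probabilities always sum to one, since the regions partition the cell.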

Given the shadowing parameters σ = 8 and a = b = 1/√2, the system capacity increasing factors corresponding to different sizes of the soft handoff region are shown in Table 2, where the size of the handoff region is represented by the distance d2 from the inner region boundary to the cell boundary. From Table 2, if the capacity without soft handoff is 30, then C4 will be 30 × 1.89 = 56.7 when d2 = 0.3.

• P_Is, P_Fs, P_Fi, P_Ih, and P_Hh. Since the residual time is exponentially distributed, by the memoryless property of the exponential distribution, P_Is = P_Ih. Also, the average residual time in a region is proportional to the shortest distance from the center to the boundary of the region, so P_Is = μI/(μI + μc), P_Fs = μIF/(μIF + μc), and P_Fi = μF/(μF + μc) [5], where 1/μI = (1/μcell)(1 − k2), 1/μIF = (1/μcell)(k2 − k1), 1/μF = (1/μcell)(1 − k1), k1 = d1/a, and k2 = d2/a. Similarly, P_Hh = μH/(μH + μc), where 1/μH = Or(1/μO − 1/μI) and 1/μO = (1/μcell)(1 + k2). Or is a parameter dependent on the shape and size of the overlap region and the mobility model [5].
• P_a. It is assumed that a mobile terminal moves in different directions with the same probability. Thus, as derived in [5], P_a = (a − d2)/(a + (√2 − 1)d2).
• y_Fs and y_Ai. From the memoryless property of the exponential process, y_Fs = y_Ai = y_h.
• μch. From [5], 1/μch = 1/(μc + μO).

In the analytical model, a normalized cell is assumed, i.e., a = 1. It is also assumed that 1/μc = 1/μcell = 100 s, and Or = 1. The uplink capacity of a cell without soft handoff is assumed to be 30 channels. d1, d2, and λn are the input parameters of the experiments. Experiments are carried out as follows. First, the analytical model is verified through computer simulation. Then, based on the analytical results, the capacity acquisition protocol is compared with other schemes, assuming d1 = 0.1 and d2 = 0.3.

4.2. Comparison with simulations

In order to verify the analytical model, a computer simulation is developed according to the

Table 2
Capacity increasing factors

Size of soft handoff region (d):   0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
Capacity increasing factor:        1.42   1.74   1.89   1.95   1.97   1.97   1.97   1.97   1.97


operation procedure described in Section 2.2. The assumptions of the cell, traffic, and mobility models used in the simulation are the same as those used in the analysis. The layout of cells is the same as in Fig. 4. The radius of a cell is equal to 1000 m, and the distances d1 and d2 are equal to 100 and 300 m, respectively. Nine cells are simulated. Since the number of cells is limited, a call may move out of the area of the simulated cells; thus, wrapped topologies are simulated to eliminate the boundary effect [11]. In each cell, new calls are generated according to a Poisson process, and the call duration is exponentially distributed with an average of 100 s. An admitted call in a cell moves in different directions with equal probability. The moving speed of a call is uniformly distributed in [0, 10] meters per second. Channel attenuation and shadowing experienced by a call are simulated according to the model described in Section 2.1, with μ = 4.0 and σ = 8.0.

When checking whether a new call can be accepted or whether a handoff call needs to be dropped, a 'look around' technique is used, i.e., the SINR of the calls in all cells must be verified. If the SINR of some calls cannot be satisfied when a new call is generated, the new call is blocked. For a handoff call, if the SINR is not satisfied, a certain number of single-mode calls in the forced handoff region are switched into soft handoff mode until the SINR of all calls is satisfied; otherwise, the handoff call under consideration is dropped. In the simulation, the processing gain G is assumed to be 16.

The target SINR used in the simulation is determined as follows. A call experiences interference from calls in its own cell as well as from calls in other cells. Suppose the received power of a call is P and the number of calls simultaneously supported in a cell without soft handoff is N; then the interference from the calls of its own cell is (N − 1)P. As in the analytical model, N = 30. Suppose f_hard is the other-cell interference normalized by the number of calls per cell. By using the same method as in [5], and considering the cell layout in Fig. 4 and the channel model in Section 2.1, the value of f_hard can be calculated; it is equal to 2.55. Thus, the SINR of a call at the base station is GP/((N − 1)P + f_hard N P) = 0.152. The threshold


SINRthr described in Section 2.2 is assumed to be 0.052 in the simulation.

For both the simulation and the theoretical analysis, the call blocking and call dropping probabilities with respect to different traffic loads are illustrated in Fig. 7. The match between the simulation and analytical results shows that the analytical model captures the characteristics of the new channel reservation scheme for soft handoff in CDMA networks; it is a valid model for analyzing the performance of the new channel reservation scheme.

4.3. Comparison with existing schemes

4.3.1. Comparison with the scheme without channel reservation

In this experiment, our proposed scheme is compared with a scheme that does not use channel reservation [5]. In that scheme, although no channel reservation is used, a large soft handoff region is adopted in order to decrease both the call blocking probability and the call dropping probability. To have a fair comparison, the size of the handoff region in that scheme, represented by the distance d, is assumed to be 0.3, i.e., the handoff region has the same size as the combined forced and active handoff regions in our new scheme. Thus, this experiment illustrates the performance difference between the schemes with and without using the calls in the forced handoff region to carry reserved capacity for soft handoff calls.

The results are shown in Fig. 8. Since the scheme without channel reservation does not differentiate new arrival calls from handoff calls, its call dropping probability and call blocking probability have the same value. In our proposed scheme, however, the call dropping probability is greatly reduced compared to the call blocking probability. For example, the call dropping probability is lower than the call blocking probability by more than 40% even when the traffic load is higher than 0.6 calls/s/cell. The reason is that the single-mode calls in the forced handoff region carry capacity that can be used by soft handoff calls to compensate for the increased interference. Since the reduced interference brought by

Fig. 7. Simulation versus analytical results. (Plot omitted: call blocking and dropping probabilities, log scale, versus new call arrival rate, 0.1–0.9 calls/s/cell, with analysis and simulation curves for both blocking and dropping.)

Fig. 8. Comparisons with the scheme without channel reservation. (Plot omitted: call blocking and dropping probabilities versus new call arrival rate, 0.1–0.9 calls/s/cell, for the new scheme and the no-reservation scheme.)

forced handoff calls also affects the admission of the new arrival calls, the call blocking probability is only slightly higher than that of the scheme without using channel reservation.



4.3.2. Comparison with the fixed reservation scheme

In order to differentiate new arrival calls from handoff calls, a simple method is to reserve a fixed number of channels for handoff calls [4]. The results of our scheme and the fixed channel reservation scheme are compared in Fig. 9, where five channels are reserved for soft handoff calls in the fixed channel reservation scheme. As shown in Fig. 9, in both schemes the call dropping probability is much lower than the call blocking probability. When the traffic load is higher than 0.7 calls/s/cell, although the call blocking probability of the fixed channel reservation scheme is only slightly higher than that of the new channel reservation scheme, its call dropping probability is much higher than that of the new scheme. The reason is that fixed reservation causes low utilization of the reserved channels. For this reason, if the call dropping probability in the fixed reservation scheme needs to be decreased, the call blocking probability must increase. This illustrates the advantage of the new channel reservation scheme.

4.3.3. Comparison with the adaptive reservation scheme

In this experiment, our scheme is compared with the adaptive channel reservation scheme proposed in [7]. In that scheme, the soft handoff regions determined by the thresholds TADD and TDROP, and the reservation region, are represented by the distances dADD, dDROP, and dRSR, respectively. In order to have a fair comparison, the size of the handoff region determined by the threshold TADD in the adaptive channel reservation scheme is set equal to the size of the active handoff region, and the size of the reservation region is set equal to the size of the forced handoff region. Thus, dADD = 0.1 and dRSR = 0.2. As in [7], dDROP is assumed to be 0.3.

The call blocking and dropping probabilities of the capacity acquisition protocol and the adaptive reservation scheme are shown in Fig. 10. The call dropping probability of the adaptive channel reservation scheme is improved compared to the fixed channel reservation scheme. However, it is still much higher than that of the new scheme at all traffic loads. When the traffic load is between 0.5 and

Fig. 9. Comparisons with the scheme with fixed channel reservation. (Plot omitted: call blocking and dropping probabilities versus new call arrival rate for the new scheme and the fixed reservation scheme.)

Fig. 10. Comparisons with the adaptive channel reservation scheme. (Plot omitted: call blocking and dropping probabilities versus new call arrival rate for the new scheme and the adaptive reservation scheme.)

0.9 calls/s/cell, the call dropping probability is more than 10% higher than that of the new scheme. In this load range, the call blocking probability is almost the same as that of the new scheme. However, when the traffic load is less than 0.5 calls/s/cell, the call blocking probability of the adaptive channel reservation scheme becomes much higher than that of our new scheme.

5. A load-adaptive capacity acquisition protocol and its performance

Given a fixed forced handoff region in the capacity acquisition protocol, if the region is small, the call dropping probability becomes very large when the new call arrival rate is high. On the other hand, if the region is large, many calls unnecessarily operate in soft handoff mode when the traffic load is low, which wastes system capacity because soft handoff calls reduce the available channels on the forward link of a base station [9]. To keep the call dropping probability as low as possible while maximizing the available channels on the forward link, a load-adaptive capacity acquisition protocol is needed. In this scheme, the size of the forced handoff region d2 is adjusted adaptively according to the traffic load in the network. When the new call arrival rate increases so that the call dropping probability exceeds a desired value CD, d2 is adjusted to enlarge the forced handoff region. In this way, more capacity can be borrowed from single-mode calls by the soft handoff calls that need extra capacity. Given the desired call dropping probability CD and the call arrival rate λn, the optimal value of d2 can be calculated iteratively using the analytical model in Section 3; a more practical solution is under investigation. Based on the analytical model, experiments in two scenarios are carried out to investigate the load-adaptive capacity acquisition protocol: in one scenario the desired call dropping probability is less than 0.01, and in the other it is less than 0.05. The resulting call blocking and dropping probabilities are shown in Figs. 11 and 12, respectively. Fig. 13 illustrates the size of the forced handoff region (in terms of the distance d2) versus the traffic load.
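The iterative calculation of d2 can be sketched as a simple search: grow the forced handoff region until the predicted dropping probability meets the target. The function `toy_model` below is a stand-in for the analytical model of Section 3; it is only assumed to be monotone (dropping falls as the region grows and rises with load), and every numeric value is hypothetical:

```python
# Sketch of the load-adaptive idea: find the smallest forced handoff
# region size d2 for which a dropping-probability model meets the
# target CD at a given new call arrival rate lam_n.
def smallest_region(drop_prob, lam_n, target, step=0.05, d2_max=0.99):
    d2 = step
    while d2 <= d2_max:
        if drop_prob(d2, lam_n) <= target:
            return d2          # smallest region size meeting the target
        d2 += step
    return None                # target infeasible at this traffic load

# Toy monotone model: dropping decreases in d2, increases with load.
toy_model = lambda d2, lam: 0.2 * lam * (1.0 - d2)

d2_low = smallest_region(toy_model, lam_n=0.3, target=0.01)
d2_high = smallest_region(toy_model, lam_n=0.6, target=0.01)
# The higher load requires a larger forced handoff region.
```

This mirrors the behavior in Fig. 13, where the required region size grows with the traffic load and eventually saturates, making the target infeasible beyond an upper bound.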

Fig. 11. Call blocking probability when using dynamic forced handoff region. (Log-scale plot, 10^-3 to 10^-1, versus the new call arrival rate, 0.1 to 0.7 calls/s/cell, for desired call dropping probabilities below 0.01 and below 0.05.)

Fig. 12. Call dropping probability when using dynamic forced handoff region. (Log-scale plot, 10^-3 to 10^-1, versus the new call arrival rate, 0.1 to 0.7 calls/s/cell, for desired call dropping probabilities below 0.01 and below 0.05.)

At the same traffic load, the scenario requiring a lower desired call dropping probability needs a larger forced handoff region, as shown in Fig. 13. Because of the larger forced handoff region, the call blocking probability of this scenario is also smaller, as illustrated in

Fig. 13. The dynamic size of the forced handoff region versus traffic load. (Plot of d2, 0.1 to 0.9, versus the new call arrival rate, 0.1 to 0.7 calls/s/cell, for desired call dropping probabilities below 0.01 and below 0.05.)

Fig. 11. However, the difference in call blocking probability between the two scenarios is small, because most of the capacity gained from forced handoff calls is used by soft handoff calls rather than by newly arriving calls. For the same desired call dropping probability, the size of the forced handoff region must increase as the traffic load increases. However, since the size of the forced handoff region is limited, i.e., d2 < 1, the desired call dropping probability cannot be guaranteed once the traffic load approaches an upper bound. When a lower call dropping probability is desired, the upper bound is lower. For example, as shown in Figs. 12 and 13, the bound is 0.67 calls/s/cell when the call dropping probability must be less than 0.05, but only 0.55 calls/s/cell when it must be less than 0.01. As illustrated in Table 2, the uplink capacity of a cell does not increase when the size of the handoff region, in terms of the distance d, exceeds 0.5. Nevertheless, as shown in Figs. 12 and 13, when the traffic load is so high that d2 exceeds 0.5, increasing the size of the forced handoff region is still effective in guaranteeing a desired dropping probability. The reason is that the number of calls that can be forced into soft handoff keeps increasing even when d2 > 0.5, which increases the reserved capacity and improves the probability of successful capacity acquisition for soft handoff calls.
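The geometric intuition for this last point can be sketched under a simplifying assumption not made explicit here: a unit-radius circular cell with uniformly distributed users, where the forced handoff region is the boundary annulus of radial width d2. The fraction of users inside that annulus keeps growing with d2 well past 0.5:

```python
# Fraction of uniformly distributed users lying in a boundary annulus
# of radial width d2 inside a unit-radius circular cell (assumption):
# area ratio = 1^2 - (1 - d2)^2.
def boundary_fraction(d2):
    inner = 1.0 - d2
    return 1.0 - inner * inner

fracs = [boundary_fraction(d) for d in (0.3, 0.5, 0.7, 0.9)]
# Fractions 0.51, 0.75, 0.91, 0.99: still strictly increasing past
# d2 = 0.5, consistent with more calls being forceable into soft
# handoff even when the uplink capacity gain has saturated.
```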

6. Practicality of the capacity acquisition protocol

There are a few practical issues concerning the implementation of the capacity acquisition protocol.

• SINR measurement. SINR is used in the capacity acquisition protocol to determine whether enough capacity has been acquired. SINR measurement is usually available as one of the building blocks for power control in CDMA networks. The accuracy of the SINR measurement depends on the performance of the estimation algorithm, and the tradeoff between accuracy and measurement period must be considered [12].

• Operation in the downlink. In the downlink, the base station controller (BSC) could also dynamically select the base station with the lowest power to communicate with the mobile terminal in soft handoff mode. However, this may cause the ‘‘ping–pong’’ effect [13], making such a mechanism infeasible. Consequently, how to apply the capacity acquisition protocol to the downlink needs further investigation. Despite this difficulty in the downlink, the capacity acquisition protocol remains effective in enhancing uplink performance.
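The accuracy-versus-period tradeoff in SINR measurement can be sketched with a simple exponentially weighted moving average, a common smoothing choice but not the specific estimator of [12]; the smoothing factors and noise level below are hypothetical:

```python
import random

# EWMA smoothing of noisy per-slot SINR samples. A small alpha averages
# over a long effective window (accurate but slow to track changes); a
# large alpha reacts quickly but passes more measurement noise through.
def ewma_sinr(samples, alpha):
    est = samples[0]
    for s in samples[1:]:
        est = (1 - alpha) * est + alpha * s
    return est

random.seed(1)
true_sinr_db = 7.0
# 500 slots of the true SINR corrupted by 2 dB Gaussian noise.
samples = [true_sinr_db + random.gauss(0, 2.0) for _ in range(500)]
slow = ewma_sinr(samples, alpha=0.01)   # long window: low variance
fast = ewma_sinr(samples, alpha=0.5)    # short window: noisier estimate
```

For a capacity acquisition decision, the long-window estimate is more trustworthy but responds slowly when the channel changes, which is precisely the tradeoff the protocol implementation must balance.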

7. Conclusions

An implicit channel reservation scheme was proposed for soft handoff calls in CDMA networks. The effectiveness of this scheme rests on the features of soft handoff. An analytical model was derived for the new channel reservation scheme and validated through computer simulations. The proposed capacity acquisition protocol was shown to outperform existing channel reservation schemes for CDMA networks, and it was extended into a traffic-load-adaptive channel reservation scheme. Designing a similar protocol for the downlink is a subject of future research, as is extending the capacity acquisition protocol to a CDMA network supporting both voice and data calls.

References

[1] A.J. Viterbi et al., Soft handoff extends CDMA cell coverage and increases reverse link capacity, IEEE J. Select. Areas Commun. 12 (8) (1994) 1281–1288.
[2] Z. Liu, M.E. Zarki, SIR-based call admission control for DS-CDMA cellular systems, IEEE J. Select. Areas Commun. 12 (4) (1994) 638–644.
[3] Y. Ishikawa, N. Umeda, Capacity design and performance of call admission control in cellular CDMA systems, IEEE J. Select. Areas Commun. 15 (8) (1997) 1627–1635.
[4] S.L. Su, J.Y. Chen, J.H. Huang, Performance analysis of soft handoff in CDMA cellular networks, IEEE J. Select. Areas Commun. 14 (9) (1996) 1762–1769.
[5] D.K. Kim, D.K. Sung, Characterization of soft handoff in CDMA systems, IEEE Trans. Vehic. Technol. 48 (4) (1999) 1195–1202.
[6] Y. Ma, J.J. Han, K.S. Trivedi, Call admission control for reducing dropped calls in code division multiple access (CDMA) cellular systems, in: Proc. IEEE INFOCOM 2000, March 2000, pp. 1481–1490.
[7] J.W. Chang, D.K. Sung, Adaptive channel reservation scheme for soft handoff in DS-CDMA cellular systems, IEEE Trans. Vehic. Technol. 50 (2) (2001) 341–353.
[8] Y.-B. Lin, S. Mohan, A. Noerpel, Queueing priority channel assignment strategies for PCS handoff and initial access, IEEE Trans. Vehic. Technol. 43 (3) (1994) 704–712.
[9] C.C. Lee, R. Steele, Effect of soft and softer handoffs on CDMA system capacity, IEEE Trans. Vehic. Technol. 47 (3) (1998) 830–841.
[10] Y. Fang, I. Chlamtac, Teletraffic analysis and mobility modeling of PCS networks, IEEE Trans. Commun. 47 (7) (1999) 1062–1072.
[11] Y.-B. Lin, V.W. Mak, Eliminating the boundary effect of a large scale personal communication service network simulation, ACM Trans. Modeling Comput. Simul. 4 (2) (1994) 165–190.
[12] H.-J. Su, E. Geraniotis, Adaptive closed-loop power control with quantized feedback and loop filtering, IEEE Trans. Wireless Commun. 1 (1) (2002) 76–86.
[13] A.J. Viterbi, CDMA: Principles of Spread Spectrum Communication, Prentice Hall PTR, 1995.

Xudong Wang received his B.E. and Ph.D. degrees from Shanghai Jiao Tong University, Shanghai, China, in 1992 and 1997, respectively. From 1998 to 2003, he was with the Broadband and Wireless Networking (BWN) Lab at the Georgia Institute of Technology, from which he also received a Ph.D. degree in 2003. Currently, he is a senior researcher with Kiyon, Inc., where he leads a research and development team working on MAC, routing, and transport protocols for wireless mesh networks. His research interests also include software radios, cross-layer design, and communication protocols for cellular, mobile ad hoc, sensor, and ultra-wideband networks. He is a guest editor for the special issue on wireless mesh networks of IEEE Wireless Communications Magazine. He has been a technical committee member of many international conferences and a technical reviewer for numerous international journals and conferences. He has two patents pending in wireless mesh networks. He is a member of IEEE, ACM, and ACM SIGMOBILE.
