Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2345
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Enrico Gregori Marco Conti Andrew T. Campbell Guy Omidyar Moshe Zukerman (Eds.)
NETWORKING 2002 Networking Technologies, Services, and Protocols; Performance of Computer and Communication Networks; Mobile and Wireless Communications Second International IFIP-TC6 Networking Conference Pisa, Italy, May 19-24, 2002 Proceedings
Volume Editors

Enrico Gregori, Marco Conti
Consiglio Nazionale delle Ricerche, Istituto di Informatica e Telematica
Via G. Moruzzi, 1, 56124 Pisa, Italy
E-mail: {enrico.gregori,marco.conti}@cnuce.cnr.it

Andrew T. Campbell
Columbia University, Department of Electrical Engineering
1312 Seeley W. Mudd Bldg., New York, NY 10027-6699, USA
E-mail: [email protected]

Guy Omidyar
National University of Singapore, Center for Wireless Communications
Singapore Science Park II, TeleTech Park
20 Science Park Road, 02-34/37, Singapore 117674
E-mail: [email protected]

Moshe Zukerman
The University of Melbourne, EEE Department
Grattan St., Victoria 3010, Australia
E-mail: [email protected]

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Networking 2002 : networking technologies, services, and protocols ; performance of computer and communication networks ; mobile and wireless communication ; proceedings / Second International IFIP TC6 Networking Conference, Pisa, Italy, May 19-24, 2002. Enrico Gregori ... (ed.). Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2345) ISBN 3-540-43709-6

CR Subject Classification (1998): C.2, C.4, D.2, H.4.3, J.2, J.1, K.6, K.4
ISSN 0302-9743
ISBN 3-540-43709-6 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de

© 2002 IFIP International Federation for Information Processing, Hofstrasse 3, A-2361 Laxenburg, Austria
Printed in Germany

Typesetting: Camera-ready by author, data conversion by PTP-Berlin, Stefan Sossna
Printed on acid-free paper
SPIN 10869977 06/3142 543210
Preface
This book constitutes the refereed proceedings of the Second IFIP-TC6 Networking Conference, Networking 2002. Networking 2002 was sponsored by the IFIP Working Groups 6.2, 6.3, and 6.8. For this reason the conference was structured into three tracks: i) Networking Technologies, Services, and Protocols, ii) Performance of Computer and Communication Networks, and iii) Mobile and Wireless Communications.

This year the conference received 314 submissions coming from 42 countries on all five continents: Africa (4), Asia (84), America (63), Europe (158), and Oceania (5). This represents a 50% increase in submissions over the first conference, indicating that Networking is becoming a reference conference for researchers in the networking community worldwide. With so many papers to choose from, the job of the Technical Program Committee, to provide a conference program of the highest technical excellence, was both challenging and time-consuming. From the 314 submissions, we finally selected 82 full papers for presentation during the conference technical sessions.

To give young researchers and researchers from emerging countries the opportunity to present their work and to receive useful feedback from participants, we decided to include two poster sessions during the technical program. Thirty-one short papers were selected for presentation during the poster sessions.

The conference technical program was split over three days and included, in addition to the 82 refereed contributions, 5 invited papers from top-level researchers in the networking community. The technical program also included a panel session and three invited talks from worldwide leaders: Imrich Chlamtac, "Managing Optical Networks in the Optical Domain"; Randy Katz, "The Post-PC Era: It's All About Service"; and Gerald Maguire, "Personal Computing and Communication". The panel session, organized by Andrew T. Campbell (Columbia University), was entitled "Post 9-11 Networking Challenge" and was devoted to the discussion of how to cope with the vulnerabilities of communications systems revealed by the World Trade Center attack on September 11.

This conference would not have been possible without the enthusiastic and hard work of a number of colleagues. First of all, I would like to thank the three track chairs – Andrew T. Campbell, Guy Omidyar, and Moshe Zukerman – for their valuable contribution in setting up the very high quality conference program. A special thanks to the TPC members, and all the referees, for their invaluable
help in reviewing the papers for Networking 2002. Finally, I would like to thank all the authors who submitted their papers to this conference for their interest and time.
March 2002
Marco Conti
Message from the General Chairs
Networking 2002 was organized by the Italian National Research Council (CNR) and Telecom Italia and was sponsored by the IFIP working groups WG 6.2 (Network and Internetwork Architectures), WG 6.3 (Performance of Communication Systems), and WG 6.8 (Wireless Communications). The program of the conference spanned five days and included the main conference (three days), two tutorial days, and one day of thematic workshops.

The organization of such a complex event required a major effort and we wish to express our sincere appreciation to all the executive committee members for their excellent work. We would like to express our special appreciation to the main conference technical program chair Marco Conti and to the special track chairs: Andrew T. Campbell, Moshe Zukerman, and Guy Omidyar. The overall high quality of the conference technical sessions is the result of a complex evaluation process that they handled in an excellent way.

Special thanks go to Giuseppe Anastasi and Stefano Basagni for the organization of an original and interesting tutorial program. The conference considered tutorials an important cultural event, and encouraged, in several ways, the participation of young researchers in these tutorials. We decided to have a single, modest fee to provide access to all. The tutorial program included nine half-day tutorials organized in three parallel sessions.

Networking 2002 also decided to stimulate thematic events covering hot research topics in the networking field. Three thematic workshops were held: Web Engineering, Peer-to-Peer Computing, and IP over WDM. Hence our third word of thanks goes to the chairs of the thematic workshops: Fabio Panzieri and Ludmilla Cherkasova (Workshop on Web Engineering), Gianpaolo Cugola and Gian Pietro Picco (Workshop on Peer-to-Peer Computing), and Giancarlo Prati and Piero Castoldi (Workshop on IP over WDM).

We are also indebted to our supporters. First of all, CNR not only allowed Enrico Gregori and Marco Conti to dedicate considerable time to the organization of this event, but also financially supported the event through the sponsorship of the CNUCE and IIT institutes. A special thanks to Telecom Italia for joining us in the organization of this event. We are also indebted to our corporate sponsors (Cassa di Risparmio di Pisa, Compaq, Microsoft, and Softech) whose help removed much of the financial uncertainty involved in the organization of such an event, and who also provided interesting suggestions for the program.
Our last word of gratitude goes to the Web manager Alessandro Urpi and the Web designer Patrizia Andronico. Alessandro created a very fancy and efficient system for the handling of electronic submissions. This system greatly facilitated the paper reviewing process, as well as the preparation of the proceedings. Patrizia was responsible for designing the Networking 2002 Web site that played an important role in the success of the event.
March 2002
Enrico Gregori Ioannis Stavrakakis
Organizers
Sponsoring Institutions
Organization
Conference Executive Committee

General Chair:
Enrico Gregori, National Research Council, Italy

General Vice-Chair:
Ioannis Stavrakakis, University of Athens, Greece

Technical Program Chair:
Marco Conti, National Research Council, Italy

Special Track Chair for Networking Technologies, Services, and Protocols:
Andrew T. Campbell, Columbia University, USA

Special Track Chair for Performance of Computer and Communication Networks:
Moshe Zukerman, University of Melbourne, Australia

Special Track Chair for Mobile and Wireless Communications:
Guy Omidyar, National University of Singapore

Tutorial Program Co-chairs:
Giuseppe Anastasi, University of Pisa, Italy
Stefano Basagni, Northeastern University, USA

Workshop Chairs:
Workshop 1 — Web Engineering
Fabio Panzieri, Università di Bologna, Italy
Ludmilla Cherkasova, Hewlett Packard Labs, USA
Workshop 2 — Peer to Peer Computing
Gian Pietro Picco, Politecnico di Milano, Italy
Gianpaolo Cugola, Politecnico di Milano, Italy
Workshop 3 — IP over WDM
Giancarlo Prati, Scuola Superiore S. Anna, Italy
Piero Castoldi, Scuola Superiore S. Anna, Italy

Invited Speaker Chair:
Fabrizio Davide, PhD, Telecom Italia S.p.A., Italy
Organization Chair:
Stefano Giordano, University of Pisa, Italy

Publicity Chairs:
Silvia Giordano, Federal Inst. of Technology Lausanne (EPFL), Switzerland
Laura Feeney, SICS, Sweden

Steering Committee Chair:
Harry Perros, North Carolina State University, USA

Steering Committee Members:
Augusto Casaca, IST/INESC, Portugal
S. K. Das, The University of Texas at Arlington, USA
Erol Gelenbe, University of Central Florida, USA
Harry Perros, NCSU, USA (Chair)
Guy Pujolle, University of Paris 6, France
Harry Rudin, Switzerland
Jan Slavik, TESTCOM, Czech Republic
Hideaki Takagi, University of Tsukuba, Japan
Samir Thome, ENST, France
Adam Wolisz, TU–Berlin, Germany

Electronic Submission:
Alessandro Urpi, University of Pisa, Italy

Web Designer:
Patrizia Andronico, IAT–CNR, Italy

Local Organizing Committee:
Renzo Beltrame, CNUCE–CNR, Italy
Raffaele Bruno, CNUCE–CNR, Italy
Willy Lapenna, CNUCE–CNR, Italy
Gaia Maselli, CNUCE–CNR, Italy
Renata Bandelloni, CNUCE–CNR, Italy
Technical Program Committee

Special Track for Networking Technologies, Services, and Protocols
Ian Akyildiz, Georgia Institute of Technology, USA
Andrea Basso, AT&T Labs Research, USA
Edoardo Biagioni, University of Hawaii at Manoa, USA
Giuseppe Bianchi, University of Palermo, Italy
Andrea Bianco, Politecnico di Torino, Italy
Claude Castelluccia, INRIA, France
Piero Castoldi, Scuola Superiore Sant'Anna, Italy
Piergiorgio Cremonese, Netikos, Italy
Jon Crowcroft, Cambridge University, UK
Christophe Diot, Sprint, USA
Serge Fdida, Université Pierre et Marie Curie, France
Tiziana Ferrari, INFN-CNAF, Italy
Luigi Fratta, Politecnico di Milano, Italy
Maurice Gagnaire, École Nationale Supérieure des Télécommunications, France
Dieter Gantenbein, IBM Research Laboratory - Zurich, Switzerland
Per Gunningberg, Uppsala University, Sweden
Salim Hariri, The University of Arizona, USA
David Hutchison, Lancaster University, UK
Bijan Jabbari, George Mason University, USA
Mohan Kumar, The University of Texas at Arlington, USA
Alfio Lombardo, University of Catania, Italy
Nicholas F. Maxemchuk, Columbia University, USA
Derek McAuley, Marconi Labs, Cambridge, UK
Refik Molva, Institut Eurécom, France
Guido H. Petit, Alcatel, Belgium
Chiara Petrioli, University "La Sapienza" Rome, Italy
Luigi Rizzo, University of Pisa, Italy
Roberto Sabella, Ericsson, Italy
Michael I. Smirnov, FHI FOKUS, Germany
Andras Valko, Ericsson, Sweden
Giorgio Ventre, Università di Napoli Federico II, Italy
Lars Wolf, University of Karlsruhe, Germany
Stefano Zatti, ESA/ESRIN, Italy

Special Track for Performance of Computer and Communication Networks
Ron Addie, University of Southern Queensland, Australia
Marco Ajmone Marsan, Politecnico di Torino, Italy
Eitan Altman, INRIA, France
Lachlan Andrew, The University of Melbourne, Australia
Andrea Baiocchi, University "La Sapienza" Rome, Italy
Chris Blondia, University of Antwerp, Belgium
Herwig Bruneel, University of Ghent, Belgium
Werner Bux, IBM Research Laboratory - Zurich, Switzerland
Mariacarla Calzarossa, University of Pavia, Italy
Olga Casals, Universitat Politecnica de Catalunya, Spain
Nelson Fonseca, State University of Campinas, Brazil
Peter Harrison, Imperial College, UK
Farouk Kamoun, Tunisia
Peter Key, Microsoft Research Ltd, Cambridge, UK
Ulf Korner, Lund University, Sweden
Demetres Kouvatsos, University of Bradford, UK
Debasis Mitra, AT&T Bell Laboratories, USA
Sandor Molnar, Budapest University of Technology and Economics, Hungary
Tim Neame, Telstra Research Laboratories, Australia
Ilkka Norros, VTT, Finland
Ramon Puigjaner, Universitat de les Illes Balears, Spain
Jim Roberts, France Telecom, France
Yutaka Takahashi, Kyoto University, Japan
Don Towsley, University of Massachusetts, USA
Phuoc Tran-Gia, University of Würzburg, Germany
Jorma Virtamo, Helsinki University of Technology, Finland
Maria C. Yuang, National Chiao Tung University, Taiwan
Bartek Wydrowski, The University of Melbourne, Australia

Special Track for Mobile and Wireless Communications:
Victor Bahl, Microsoft Research, USA
Roberto Battiti, University of Trento, Italy
Luciano Bononi, University of Bologna, Italy
Azzedine Boukerche, University of North Texas, USA
Franco Davoli, University of Genova, Italy
Khaled Elsayed, Cairo University, Egypt
Anthony Ephremides, University of Maryland, USA
Kari-Pekka Estola, Nokia Research Center, Finland
Laura M. Feeney, SICS, Sweden
Gabor Fodor, Ericsson, Sweden
Jerome Galtier, INRIA, France
Mario Gerla, University of California at Los Angeles, USA
Silvia Giordano, ICA-DSC-EPFL, Switzerland
Zygmunt Haas, Cornell University, USA
Pascal Lorenz, Université de Haute Alsace, France
Thomas Luckenback, FhG Fokus, Germany
Gerald Maguire, Royal Institute of Technology, Sweden
Stephan Olariu, Old Dominion University, USA
George Polyzos, Athens University of Economics and Business, Greece
Jiang Shengming, National University of Singapore, Singapore
Violet R. Syrotiuk, University of Texas at Dallas, USA
Ivan Stojmenovic, University of Ottawa, Canada
Terry Todd, McMaster University, Canada
Nitin Vaidya, Texas A&M University, USA
Roberto Verdone, CSITE - CNR, Italy
Jeff Wieselthier, Naval Research Laboratory, USA
Referees Samuli Aalto Ron Addie
Ian Akyildiz Khalid Al-Begain
Guido Albertengo Eitan Altman
Lachlan Andrew Csaba Antal Panagiotis Antoniadis Irfan Awan Andrea Baiocchi Dennis Baker Mario Baldi Mark Banfield Chadi Barakat Jose Barcelo Novella Bartolini Stefano Basagni Andrea Basso Roberto Battiti Daniel Bauer Sergio Beker Sebastien Bertrand Supratik Bhattacharyya Edoardo Biagioni Giuseppe Bianchi Andrea Bianco Michael Biggar Jozsef Biro Mats Bjorkman Chris Blondia Bernd Bochow Rene Boel Luciano Bononi Tamas Borsos Alessandro Bosco Azzedine Boukerche Onno Boxma Rafik Braham Hartmut Brandt Alberto Bricca Mauro Brunato Raffaele Bruno Sonja Buchegger Laurent Bussard Werner Bux Mariacarla Calzarossa Andrew Campbell Roberto Canonico Antonio Capobianco Georg Carle
Olga Casals Ramon Casellas Claudio Casetti Maurizio Casoni Claude Castelluccia Piero Castoldi Nedo Celandroni Llorenc Cerda Walter Cerroni Carla Chiasserini Phil Chimento Kwan-wu Chin Chen-nee Chuah Tibor Cinkler Touati Corinne Luis Costa Piergiorgio Cremonese Jon Crowcroft Filippo Cugini John Cushnie Jeremy De Clercq John Daigle Olivier Dalle Davide Dardari Maurizio Darienzo Bruce Davie Franco Davoli Stijn De Vuyst Andrea De Vendictis Christophe Deleuze Francesco Delfino Luca Dell’Uomo Jing Deng Ada Diaconescu Gianluca Dini Christophe Diot Constantinos Dovrolis Anca Dracinschi-sailer Adam Dunkels Martin Dunmore Larry Dunn Amre El-hoiydi Didier Erasme Christopher Edwards Wolfgang Effelsberg
Viktoria Elek Khaled Elsayed Anthony Ephremides Vincenzo Eramo
Alberto Escudero-Pascual
Marcello Esposito Kari-Pekka Estola Nader Fahmy Romano Fantacci Laura Feeney Meiping Feng Tiziana Ferrari Afonso Ferreira Joe Finney Paul Fitzpatrick Gabor Fodor Chuan Foh Nelson Fonseca Luigi Fratta Laurent Frelechoux Rod Fretwell Hiroki Furuya Philippe Godlewski Dominique Grad Maurice Gagnaire Giulio Galante Jerome Galtier Dieter Gantenbein Jorge Garcia Rosario Garroppo Michael Gau Yu Ge Mario Gerla Vittorio Ghini Marco Ghizzi Paolo Giaccone Giovanni Giambene Chris Giblin Silvia Giordano Alessandra Giovanardi Gaby Goldacker Marcel Graf Enrico Gregori Fredrik Gunnarsson Per Gunningberg
Gary Hanson Robert Haas Stephen Hanly Uli Harder Salim Hariri Jarmo Harju Richard Harris Peter Harrison Dajiang He Jens Huenerberg David Hutchison Esa Hyytiä Christian Hertnagl Gianluca Iannaccone Lengliz Ilhem Sandor Imre Hazer Inalteki Veronique Inghelbrecht Paola Iovanna Nando Iscra Milosh Ivanovich Bijan Jabbari Laura Jackson Yuming Jiang Mai Jin Josue Kuri Ahmed Kamal Farouk Kamoun Holger Karl Gunnar Karlsson Jouni Karvo Hiroyuki Kawano Mitchell Ken Csaba Keszei Peter Key Dr Khairy Kalevi Kilkki Andreas Kind Ulf Korner Demetres Kouvatsos Ferenc Kubinszky Mohan Kumar Pirkko Kuusela Stefan Kehler Chia Lee
Koen Laevens Willy Lapenna John Larson Pasi Lassila Gwendal Le Grand Jean-Yves Le Boudec Chris Lechie Sunj-Ju Lee Oscar Lepe Yuhong Li Ben Liang Xinhua Ling Cati Llado Francesco Lo Presti Alfio Lombardo Rui Lopes Pascal Lorenz Flaminia Luccio Stefano Lucetti Thomas Luckenback Andrey Lyakhov Joseph Macker Gerald Maguire Szabolcs Malomsoky Dave Maltz Roberto Mameli Eleonora Manconi Vincenzo Mancuso Petteri Mannersalo Mario Marchese Chiani Marco Dan Marinescu Cristina Martello Fabio Martignon Piergiulio Maryni Laurent Mathy Nicholas Maxemchuk Derek Mcauley Octavio Medina John Mellor Michael Menth Michela Meo Bernard Metzler Pietro Michiardi Gyorgy Miklos
Enrico Milani Jens Milbrandt Debasis Mitra Gergely Molnar Sandor Molnar Refik Molva Tim Moors Giacomo Morabito Sayandev Mukherjee Rami Mukhtar Maurizio Munafo Pars Mutaf Gaurav Navlakha Tim Neame Giovanni Neglia Marcel Neuts Gam Nguyen Saverio Niccolini Ilkka Norros Antonio Nucci Eeva Nyberg Stephan Olariu Ertan Ozturk Dimitri Papadimitriou Fabrice Poppe
Panagiotis Papadimitratos
Dina Papagianaki Davide Parisi Laurence Park Gianni Pasolini Andrea Passarella Tao Peng Antonio Pescapè Fabien Petitcolas Alexandru Petrescu Chiara Petrioli Dimitrios Pezaros Tom Pfeifer George Polyzos Francesco Potortì Fabio Pugini Ramon Puigjaner Guy Pujolle Rudesindo Queija Nicholas Race
Andras Racz Carla Raffaelli Jianqiang Rao Christoph Reichert Franklin Reynolds Jose Rezende Fabio Ricciato Ad Ridder Herve Rivano Romeo Rizzi Luigi Rizzo Jim Roberts Vincent Roca Marco Roccetti Hermann Rohling Simon Romano Miklos Ronai Sean Rooney Yves Roudier George Rouskas Alain Roy Romit Roychoudhury Giuseppe Ruggeri Jussi Ruutu Winston Seah Mike Sexton Roberto Sabella Stefano Salsano Elio Salvadori Prince Samar Volker Sander Takashi Sasaki Durga Satapathy Paolo Scotton Nabil Seddigh Ahmed Sehrouchni Faisal Shad N. Shankaranarayanan
Charles Shen Jiang Shengming Kasahara Shoji Steven Simpson Dorgham Sisalem Tara Small Michael Smirnov Paul Smith Sergios Soursos Kathleen Spaey Dirk Staehle George Stamoulis Burkhard Stiller Ivan Stojmenovic Moon Sue Violet Syrotiuk Csanad Szabo Istvan Szabo Robert Szabo Wayne Szeto Marco Tacca Nina Taft Yutaka Takahashi Christina Tavoularis Ben Teitelbaum David Thornley Neame Tim Ilenia Tinnirello Carsten Tittel Terry Todd Petia Todorova Samir Tohme Don Towsley Velio Tralli Phuoc Tran-gia Linh Truong Jaidi Tuah
Zoltan Turanyi Kurt Tutschku Alessandro Urpi Masafumi Usuda Peter Vetter Mickey Vucic Francesco Vacirca Nitin Vaidya Luca Valcarenghi Benny Van Houdt Vasos Vassiliou Giorgio Ventre Roberto Verdone Andras Veres Rolland Vida Attila Vidacs Jorma Virtamo Thiemo Voigt Hai Le Vu Krzysztof Wajda Joris Walraevens Eric Wang Andreas Wespi Jeff Wieselthier Lars Wolf Mike Woodward Bartek Wydrowski Yang Xue George Xylomenos Miki Yamamoto Jackson Yin Maria Yuang Gergely Zaruba Stefano Zatti Artur Ziviani Moshe Zukerman
Table of Contents
Multicasting I Channel Islands in a Reflective Ocean: Large Scale Event Distribution in Heterogeneous Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Jon Crowcroft
A Reliable Multicast Protocol with Delay Guarantees . . . . . . . . . . . . . . . . . . . 10 Nicholas F. Maxemchuk Optimizing QoS-Based Multicast Routing in Wireless Networks: A Multi-objective Genetic Algorithmic Approach . . . . . . . . . . . . . . . . . . . . . . 28 Abhishek Roy, Sajal K. Das
Differentiated Services I An Experimental Study of Probing-Based Admission Control for DiffServ Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Susana Sargento, Roger Salgado, Miguel Carmo, Victor Marques, Rui Valadas, Edward Knightly High Performance DiffServ Mechanism for Routers and Switches: Packet Arrival Rate Based Queue Management for Class Based Scheduling 62 Bartek Wydrowski, Moshe Zukerman Session-Aware Popularity Resource Allocation for Assured Differentiated Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 Paulo Mendes, Henning Schulzrinne, Edmundo Monteiro
Network Performance I Most Probable Path Techniques for Gaussian Queueing Systems . . . . . . . . . 86 Ilkka Norros On the Queue Tail Asymptotics for General Multifractal Traffic . . . . . . . . . . 105 Sándor Molnár, Trang Dinh Dang, István Maricza Some Models for Contention Resolution in Cable Networks . . . . . . . . . . . . . . 117 Onno Boxma, Dee Denteneer, Jacques Resing
Self-Organizing Networks: Services and Protocols Adaptive Creation of Network Applications in the Jack-in-the-Net Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Tomoko Itao, Tetsuya Nakamura, Masato Matsuo, Tatsuya Suda, Tomonori Aoyama Anchored Path Discovery in Terminode Routing . . . . . . . . . . . . . . . . . . . . . . . 141 Ljubica Blažević, Silvia Giordano, Jean-Yves Le Boudec Distributed Transmission Scheduling Using Code-Division Channelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 Lichun Bao, J.J. Garcia-Luna-Aceves
Call Admission Control Towards Efficient Decision Rules for Admission Control Based on the Many Sources Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 Gergely Seres, Árpád Szlávik, János Zátonyi, József Bíró QoS with an Edge-Based Call Admission Control in IP Networks . . . . . . . . 178 Daniel R. Jeske, Behrokh Samadi, Kazem Sohraby, Yung-Terng Wang, Qinqing Zhang Admission Control and Capacity Management for Advance Reservations with Uncertain Service Duration . . . . . . . . . . . . . . . . . . . . . . . . . 190 Yeali S. Sun, Yung-Cheng Tu, Meng Chang Chen
Voice/Video Performance Modeling Performance Evaluation of the Deadline Credit Scheduling Algorithm for Soft-Real-Time Applications in Distributed Video-on-Demand Systems 202 Adamantia Alexandraki, Michael Paterakis The Impact of Replacement Granularity on Video Caching . . . . . . . . . . . . . . 214 Elias Balafoutis, Antonis Panagakis, Nikolaos Laoutaris, Ioannis Stavrakakis Utility Analysis of Simple FEC Schemes for VoIP . . . . . . . . . . . . . . . . . . . . . . 226 Parijat Dube, Eitan Altman
Web Access A Power Saving Architecture for Web Access from Mobile Computers . . . . 240 Giuseppe Anastasi, Marco Conti, Enrico Gregori, Andrea Passarella A Resource/Connection Management Scheme for HTTP Proxy Servers . . . 252 Takuya Okamoto, Tatsuhiko Terai, Go Hasegawa, Masayuki Murata
Measurement-Based Modeling of Internet Round-Trip Time Dynamics Using System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 Hiroyuki Ohsaki, Mitsushige Morita, Masayuki Murata
Optical Networks Optimal Link Capacity Dimensioning in Proportionally Fair Networks . . . . 277 Michał Pióro, Gábor Malicskó, Gábor Fodor Load Balancing in WDM Networks through Adaptive Routing Table Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289 Mauro Brunato, Roberto Battiti, Elio Salvadori Models for the Logical Topology Design Problem . . . . . . . . . . . . . . . . . . . . . . . 301 Nicolas Puech, Josué Kuri, Maurice Gagnaire Dynamic Shaping for Self-Similar Traffic Using Network Calculus . . . . . . . . 314 Halima Elbiaze, Tijani Chahed, Tülin Atmaca, Gérard Hébuterne
Network and Traffic Modeling Is Admission-Controlled Traffic Self-Similar? . . . . . . . . . . . . . . . . . . . . . . . . . . 327 Giuseppe Bianchi, Vincenzo Mancuso, Giovanni Neglia Analysis of CMPP Approach in Modeling Broadband Traffic . . . . . . . . . . . . 340 R.G. Garroppo, S. Giordano, S. Lucetti, M. Pagano A Mathematical Model for IP over ATM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 Irena Atov, Richard J. Harris Analysis and Comparison of Internet Topology Generators . . . . . . . . . . . . . . 364 Damien Magoni, Jean-Jacques Pansiot
Ad Hoc Networks Energy Efficient Design of Wireless Ad Hoc Networks . . . . . . . . . . . . . . . . . . 376 Carla-Fabiana Chiasserini, Imrich Chlamtac, Paolo Monti, Antonio Nucci Performance of Multipoint Relaying in Ad Hoc Mobile Routing Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387 Philippe Jacquet, Anis Laouiti, Pascale Minet, Laurent Viennot An Adaptive Location-Aware MAC Protocol for Multichannel Multihop Ad-Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 Zi-Tsan Chou, Ching-Chi Hsu, Ferng-Ching Lin Capacity Assignment in Bluetooth Scatternets – Analysis and Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 Gil Zussman, Adrian Segall
Resource Allocation I Optimization-Based Congestion Control for Multicast Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423 Jonathan K. Shapiro, Don Towsley, Jim Kurose Severe Congestion Handling with Resource Management in Diffserv on Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443 András Császár, Attila Takács, Róbert Szabó, Vlora Rexhepi, Georgios Karagiannis Resource Allocation with Persistent and Transient Flows . . . . . . . . . . . . . . . . 455 Supratim Deb, Ayalvadi Ganesh, Peter Key
LAN and PAN A Novel and Simple MAC Protocol for High Speed Passive Optical LANs . 467 Chuan Heng Foh, Moshe Zukerman The Bluetooth Technology: State of the Art and Networking Aspects . . . . . 479 Dajana Cassioli, Andrea Detti, Pierpaolo Loreti, Franco Mazzenga, Francesco Vatalaro Time and Frequency Synchronization for Hiperlan/2 . . . . . . . . . . . . . . . . . . . . 491 Anna Berno, Nicola Laurenti
Performance of Wireless Networks Performance Analysis of a Forwarding Scheme for Handoff in HAWAII . . . . 503 Chris Blondia, Olga Casals, Llorenç Cerdà, Gert Willems Evaluating the Performance of a Network Management Application Based on Mobile Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515 Marcelo G. Rubinstein, Otto Carlos Muniz Bandeira Duarte, Guy Pujolle Performance Evaluation on WAP and Internet Protocol over 3G Wireless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527 Hidetoshi Ueno, Norihiro Ishikawa, Hideharu Suzuki, Hiromitsu Sumino, Osamu Takahashi
Multimedia Performance Evaluation of H.263–Based Video Transmission in an Experimental Ad–Hoc Wireless LAN System . . . . . . . . . . . . . . . . . . . . . . . 539 Matías Freytes Differentiated Services Based Priority Dropping and Its Application to Layered Video Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551 Markus Fidler
Optimal Feedback for Quality Source-Adaptive Schemes in Multicast Multi-layered Video Environments . . . . . . . . . . . . . . . . . . . . . . . . . . 563 Paulo André da Silva Gonçalves, José Ferreira de Rezende, Otto Carlos Muniz Bandeira Duarte, Guy Pujolle A Fibre Channel Dimensioning for a Multimedia System with Deterministic QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575 Laurent George, Dana Marinca, Pascale Minet
Transmission Control Protocol (TCP) On the Resource Efficiency of Explicit Congestion Notification . . . . . . . . . . . 588 Kostas Pentikousis, Hussein Badr Sender-Side TCP Modifications: An Analytical Study . . . . . . . . . . . . . . . . . . . 600 R. Lo Cigno, G. Procissi, Mario Gerla Modeling a Mixed TCP Vegas and TCP Reno Scenario . . . . . . . . . . . . . . . . . 612 Andrea De Vendictis, Andrea Baiocchi Performance Sensitivity and Fairness of ECN-Aware ‘Modified TCP’ . . . . . 624 Archan Misra, Teunis J. Ott
Future Wireless Networks I Call Admission Control for 3G CDMA Networks with Differentiated QoS . 636 Qian Huang, Hui Min Chen, King Tim Ko, Sammy Chan, King Sun Chan Performance Evaluation of Channel Switching Scheme for Packet Data Transmission in Radio Network Controller . . . . . . . . . . . . . . . . . . . . . . . 648 Yoshiaki Ohta, Kenji Kawahara, Takeshi Ikenaga, Yuji Oie An Optimal Reservation-Pool Approach for Guaranteeing the Call-Level QoS in Next-Generation Wireless Networks . . . . . . . . . . . . . . . . . . 660 Fei Hu, Neeraj K. Sharma A New Adaptive Channel Reservation Scheme for Handoff Calls in Wireless Cellular Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672 Zhong Xu, Zhenqiang Ye, Srikanth V. Krishnamurthy, Satish K. Tripathi, Mart Molle
Internet Protocol (IP) Connection of Extruded Subnets: A Solution Based on RSIP . . . . . . . . . . . . 685 Cédric de Launois, Aurélien Bonnet, Marc Lobelle Adjusted Probabilistic Packet Marking for IP Traceback . . . . . . . . . . . . . . . . 697 Tao Peng, Christopher Leckie, Kotagiri Ramamohanarao
Tuning Delay Differentiation in IP Networks Using Priority Queueing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709 Pedro Sousa, Paulo Carvalho, Vasco Freitas QoS-Conditionalized Handoff for Mobile IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . 721 Xiaoming Fu, Holger Karl, Cornelia Kappler
Queueing Models On Loss Probabilities in Presence of Redundant Packets with Random Drop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731 Parijat Dube, Omar Ait-Hellal, Eitan Altman Performance Analysis of a GI-G-1 Preemptive Resume Priority Buffer . . . . 745 Joris Walraevens, Bart Steyaert, Herwig Bruneel Analysis of the Discrete-Time G(G)/Geom/c Queueing Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757 Sabine Wittevrongel, Herwig Bruneel, Bart Vinck On a Theory of Interacting Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769 Alexander Stepanenko, Costas C. Constantinou, Theodoros N. Arvanitis, Kevin Baughan
Satellite Networks Analysis of a MAC Protocol for a Time-Code Air Interface in LEO Mobile Satellite Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778 Romano Fantacci, Giovanni Giambene Performance Analysis of LEO Satellite Networks . . . . . . . . . . . . . . . . . . . . . . . 790 A. Halim Zaim, Harry G. Perros, George N. Rouskas Gateway Architecture for DVB-RCS Satellite Networks . . . . . . . . . . . . . . . . . 802 Antonio Pietrabissa, Cristiana Santececca Connection Admission Control CAC and Differentiated Resources Allocation RA in a Low Earth Orbit LEO Satellite Constellation . . . . . . . . . 814 Rima Abi Fadel, Samir Tohmé
Resource Allocation II Dimensioning Bandwidth for Elastic Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . 826 Zhong Fan Fair Adaptive Bandwidth Allocation: A Rate Control Based Active Queue Management Discipline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838 Abhinav Kamra, Huzur Saran, Sandeep Sen, Rajeev Shorey
Distributed Scheduling via Pricing in a Communication Network . . . . . . . . . 850 Tiina Heikkinen
Performance of Optical Networks A Simulation Study of Access Protocols for Optical Burst-Switched Ring Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863 Lisong Xu, Harry G. Perros, George N. Rouskas Capacity Efficiency of Distributed Path Restoration Mechanisms in Optical Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875 Bart Rousseau, Fabrice Poppe Helios: A Broadcast Optical Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887 Ilia Baldine, Laura E. Jackson, George N. Rouskas
Future Wireless Networks II Service and Network Management Interworking in Future Wireless Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899 V. Tountopoulos, V. Stavroulaki, P. Demestichas, N. Mitrou, M. Theologou Scheduling Differentiated Traffic in Multicarrier Unlicensed Systems . . . . . . 911 Giannis F. Marias, Lazaros Merakos A Simple Model for Calculating SIP Signalling Flows in 3GPP IP Multimedia Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924 Alexander A. Kist, Richard J. Harris
Multiprotocol Label Switching (MPLS) Dynamic Online Routing Algorithm for MPLS Traffic Engineering . . . . . . . 936 W. Szeto, R. Boutaba, Y. Iraqi Optimal Capacity Provisioning for Label Switched Paths in MPLS Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947 C. Bruni, C. Scoglio, S. Vergari A New Class of Online Minimum-Interference Routing Algorithms . . . . . . . 959 Ilias Iliadis, Daniel Bauer Performance Analysis of Dynamic Lightpath Configuration for WDM Asymmetric Ring Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972 Takuji Tachibana, Shoji Kasahara
Networks Performance II A Queueing Model for a Wireless GSM/GPRS Cell with Multiple Service Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984 D.D. Kouvatsos, K. Al-Begain, I. Awan Integrated Multi-purposed Testbed to Characterize the Performance of Internet Access over Hybrid Fiber Coaxial Access Networks . . . . . . . . . . . 996 Hung Nguyen Chan, Belen Carro Martinez, Rafa Mompo Gomez, Judith Redoli Granados 802.11 LANs: Saturation Throughput in the Presence of Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1008 Vladimir Vishnevsky, Andrey Lyakhov Efficient Simulation of Blocking Probabilities for Multi-layer Multicast Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1020 Jouni Karvo
Multicasting II Aggregated Multicast – A Comparative Study . . . . . . . . . . . . . . . . . . . . . . . .1032 Jun-Hong Cui, Jinkyu Kim, Dario Maggiorini, Khaled Boussetta, Mario Gerla New Center Location Algorithms for Shared Multicast Trees . . . . . . . . . . . .1045 Young-Chul Shim, Shin-Kyu Kang A Multicast FCFS Output Queued Switch without Speedup . . . . . . . . . . . . .1057 Maurizio A. Bonuccelli, Alessandro Urpi Fault-Tolerant Support for Reliable Multicast in Mobile Wireless Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1069 Giuseppe Anastasi, Alberto Bartoli, Flaminia L. Luccio
Posters Session JumpStart: A Just-in-Time Signaling Architecture for WDM Burst-Switched Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1081 Ilia Baldine, Harry G. Perros, George N. Rouskas, Dan Stevenson Device Discovery in Bluetooth Networks: A Scatternet Perspective . . . . . . .1087 Stefano Basagni, Raffaele Bruno, Chiara Petrioli QoS Evaluation of Real-Time Applications over a Multi-domain DiffServ Experimental Test-Bed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1093 G. Carrozzo, V. Chionsini, S. Giordano, S. Niccolini A New Policy Based Management of Mobile IP Users . . . . . . . . . . . . . . . . . . .1099 Hakima Chaouchi, Guy Pujolle
A Framework for Policy-Based Management of QoS Aware IP Networks . .1105 P. Cremonese, M. Esposito, S. Giordano, M. Mondini, S.P. Romano, G. Ventre SIP-H323: A Solution for Interworking Saving Existing Architecture . . . . . .1111 G. De Marco, S. Loreto, G. Sorrentino, L. Veltri High Router Flexibility and Performance by Combining Dedicated Lookup Hardware (IFT), off the Shelf Switches and Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1117 Christian Duret, Francis Rischette, Joël Lattmann, Valéry Laspreses, Pim Van Heuven, Steven Van den Berghe, Piet Demeester Group Security Policy Management for IP Multicast and Group Security . .1123 Thomas Hardjono, Hugh Harney Issues in Internet Radio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1129 Yasushi Ichikawa, Kensuke Arakawa, Keisuke Wano, Yuko Murayama I/O Bus Usage Control in PC-Based Software Routers . . . . . . . . . . . . . . . . . .1135 Oscar-Iván Lepe-Aldama, Jorge García-Vidal Multiple Access in Ad-Hoc Wireless LANs with Noncooperative Stations . .1141 Jerzy Konorski Next Generation Networks and Services in Slovenia . . . . . . . . . . . . . . . . . . . .1147 Andrej Kos, Janez Bešter, Peter Homan Minimizing the Routing Delay in Ad Hoc Networks through Route-Cache TTL Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1153 Ben Liang, Zygmunt J. Haas Long-Range Dependence of Internet Traffic Aggregates . . . . . . . . . . . . . . . . .1159 Solange Lima, Magda Silva, Paulo Carvalho, Alexandre Santos, Vasco Freitas Improved Initial Synchronisation in the Presence of Frequency Offset in UMTS FDD Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1165 Valentina Lomi, Gianfranco L. Pierobon, Daniele Tonetto, Lorenzo Vangelista Scalable Adaptive Hierarchical Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1172 Laurent Mathy, Roberto Canonico, Steven Simpson, David Hutchison How to Achieve Fair Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1178 Eeva Nyberg, Samuli Aalto
Measurement-Based Admission Control for Dynamic Multicast Groups in Diff-Serv Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1184 Elena Pagani, Gian Paolo Rossi A Framework to Service and Network Resource Management in Composite Radio Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1190 L.-M. Papadopoulou, V. Stavroulaki, P. Demestichas, M. Theologou JESA Service Discovery Protocol (Efficient Service Discovery in Ad-Hoc Networks) . . . . . . . . . . . . . . . . . . . . .1196 Stephan Preuß Performance Simulations of a QoS Aware Caching Method . . . . . . . . . . . . . .1202 Pertti Raatikainen, Mika Wikström, Timo Hämäläinen Call Admission Control for Multimedia Cellular Networks Using Neuro-dynamic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1208 Sidi-Mohammed Senouci, André-Luc Beylot, Guy Pujolle Aspects of AMnet Signaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1214 Anke Speer, Marcus Schöller, Thomas Fuhrmann, Martina Zitterbart Virtual Home Environment for Multimedia Services in 3rd Generation Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1221 Orazio Tomarchio, Andrea Calvagna, Giuseppe Di Modica On Providing End-To-End QoS Introducing a Set of Network Services in Large-Scale IP Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1227 E. Tsolakou, E. Nikolouzou, S. Venieris SaTPEP: A TCP Performance Enhancing Proxy for Satellite Links . . . . . . .1233 Dimitris Velenis, Dimitris Kalogeras, Basil Maglaris An Overlay for Ubiquitous Streaming over Internet . . . . . . . . . . . . . . . . . . . . .1239 Chai Kiat Yeo, Bu Sung Lee, Meng Hwa Er A Measurement-Based Dynamic Guard Channel Scheme for Handover Prioritization in Cellular Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1245 Roland Zander, Johan M. Karlsson Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1253
Channel Islands in a Reflective Ocean: Large Scale Event Distribution in Heterogeneous Networks

Jon Crowcroft

University of Cambridge Computer Laboratory
William Gates Building, J J Thomson Avenue
Cambridge CB3 0FD
[email protected]
Abstract. This is a discussion paper about the possible future use of network and transport level multicast services to support extremely large scale event distribution. To date, event notification services [40] have been limited in their scope due to limitations of the infrastructure. At the same time, Internet network and transport layer multicast services have seen limited deployment due to lack of user demand (with the exception more recently of streaming services, e.g. on Sprint's US core network, and in the Internet II). Recent research in active and reflective middleware suggests a way to resolve these two problems at one go. Event-driven and messaging infrastructures are emerging as the most flexible and feasible solution for enabling rapid and dynamic integration of legacy and monolithic software applications into distributed systems. Event infrastructures also support deployment and evolution of traditionally difficult-to-build active systems such as large-scale collaborative environments and mobility-aware architectures. Event notification is concerned with propagation of state changes in objects in the form of events. A crucial aspect of events is that they occur asynchronously. Event consumers have no control over when events are triggered. On the other hand, event suppliers do not generally know what entities might be interested in the events they provide. These two aspects clearly define event notification as a model of asynchronous and de-coupled communication, where entities communicate in order to exchange information, but do not directly control each other. The IETF is just finishing specifying a family of reliable multicast transport protocols, for most of which there are pilot implementations. Key amongst these for the purposes of this research is the exposure to end systems of router filter functionality in a programmable way, known as Generic Router Assist. This is an inherent part of the Pragmatic General Multicast service, implemented by Reuters, Tibco and Cisco in their products, although it has not been widely known or used outside of the TIBNET products until very recently.
The goal of this paper is to describe a reflective middleware system that integrates the network, transport and distributed middleware services into a seamless whole. The outcome of this research will be to integrate this 'low-level' technology into an event middleware system, as a toolkit, as well as an evaluation of this approach for massive scale event notification, suitable for telemetry, novel mobile network services, and other as yet unforeseen applications.
1 Background and Introduction
The last decade has seen great leaps in the maturity of distributed systems middleware, and in one particular area in support of a wide variety of novel applications: event notification systems. Current work on event notification middleware [39][40][41] has concentrated on providing the infrastructure necessary to enable content-based addressing of event notifications. These solutions promote a publish-subscribe-match model by which event sources publish the metadata of the events they generate, event consumers register for their events of interest by passing event filter specifications, and the underlying event notification middleware undertakes the event filtering and routing process. Solutions usually differ on whether they undertake the filtering process at the source or at an intermediary mediator or channel in which the event filtering takes place. The trade-off lies in whether to increase the computational load of sources and decrease the network bandwidth consumption, or to minimise the extra computational load on the sources and outsource the event filtering and routing task to a mediator component (hopefully located close to the source). None of these solutions leverages the potential benefits of multicasting events to consumers that require the same type of events and apply very similar filters. They usually require an individual unicast communication per event transmitted.

At the same time, the underlying network has become very widespread. New services such as IP multicast are finally seeing widespread deployment, especially in core networks and in intranets.

The combination of these two technologies, event services and multicast, originates historically with Tibco [20], a subsidiary of Reuters. However, their approach is somewhat limited as it takes a strict layered approach. At the highest level, there is a publish/subscribe system, which in TIBNET uses Subject Based Addressing and Content Based Addressing. Receivers subscribe to subjects. The subject is used to hash to a multicast group. Receivers subscribe to a subject but can express interest by declaring filters on content. The TIBNET system is then hybrid: in the wide area, IP multicast is used to distribute all content on a given subject topic to a set of site proxy servers. The site proxy servers then act on behalf of subscribers at a site, filter appropriate content out of each subject stream, and deliver the remainder to each subscriber.
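To make the publish-subscribe-match model sketched above concrete, the following is a minimal illustration, in Python, of a mediator that filters and routes events; the class and the predicate representation are our own assumptions for exposition, not the API of any of the systems cited.

# Illustrative sketch of the publish-subscribe-match model.
# An event is a set of attribute/value pairs; filters are predicates.

class Mediator:
    """A mediator/channel that filters and routes events to consumers."""

    def __init__(self):
        self.subscriptions = []  # list of (filter predicate, consumer callback)

    def subscribe(self, predicate, consumer):
        self.subscriptions.append((predicate, consumer))

    def publish(self, event):
        # Match the event against every registered filter; deliver on match.
        for predicate, consumer in self.subscriptions:
            if predicate(event):
                consumer(event)

# Usage: a consumer registers interest in trades of one stock over a price.
mediator = Mediator()
mediator.subscribe(
    lambda e: e.get("subject") == "equity.trade" and e.get("price", 0) > 100,
    lambda e: print("notified:", e),
)
mediator.publish({"subject": "equity.trade", "symbol": "ABC", "price": 105})

With one such mediator per site, the filtering cost moves off the sources, at the price of unicast delivery from the mediator to each matching subscriber; this is exactly the trade-off discussed above.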
Between the notification layer and the IP layer there is a transport layer, called Pragmatic General Multicast. To provide semi-reliable, in-order delivery, the subject messages are mapped onto PGM [10] messages, which are then multicast in IP packets. PGM provides a novel retransmission facility which takes advantage of router-level "NAK aggregation" (which itself prevents message implosion towards the event source) to provide filtering [15][16] of retransmissions, so that only the receivers missing a given message sequence number receive it. The PGM protocol is essentially a lightweight signaling protocol which allows receivers to install and remove filters on parts of the message stream. The mechanism is implemented in Cisco and other routers that run IP multicast. The end system part of the protocol is available in all common operating systems.

Almost all other event notification systems have taken the view that IP multicast was rarely deployed (ironically, a view fuelled partly by a report by Sprint [21], when in fact the entire Sprint IP service supports multicast and they have at least 3500 commercial customers streaming content), and that the overheads in the group management protocols were too high for the rate of change of interest/subscription typical in many applications' usage patterns. Instead, they have typically taken an alternative approach of building a server-level overlay for event message distribution. Recent years have seen many such overlay attempts [22][23][24][25][26][27][28][29][30]. These have met with varying degrees of success. One of the main problems of application layer service location and routing is that the placement of servers does not often match the underlying true topology of the physical network, and the system is therefore unable to gain an accurate match between a distribution tree and the actual link throughput or latencies. Nor is the system able to estimate accurately the actual available capacity or delay. Even massive scale deployments such as Akamai [31], for example, do not do very well. Secondly, the delays through application level systems are massively higher than those through routers and switches (which are, after all, designed for packet forwarding rather than for server or client computation or storage resource sharing). The message is that overlays and measurement are both hard to optimise, and inefficient.

We see a number of advantages in continuing forward from where Tibco left off in integrating efficient network delivery through multicast with an event notification service, including:

Scale. We obviate the need to deploy special proxy servers to aid the distribution.
Throughput. We will therefore be able to distribute many more events per second.
Latency. Event distribution latency will approximate the packet-level distribution delay, and will avoid the problems of high latency and jitter incurred when forwarding through application level processes on intermediaries.

There are two ideas we will draw from in moving forward. Firstly, we will exploit advances in the network support for multicast, such as the Generic Router Assist service in the PGM router element in IP multicast. Secondly, we will carry
out research in ways to distribute an open interface to the multicast tree computation that IP routers implement. The way we propose doing this is through reflection. Reflection is becoming commonplace in middleware [32][33][34], but to our knowledge has not been applied between application level systems and network level entities. The intent here is to offer a common API to both the multicast service and the filtering service, so that the event notification module implementor need not be aware which layer is implementing a function. We would envisage an extremely simple API, viz:

Create(Subject)
Subscribe/Join(Subject)
Publish/Send(Subject, Content)
Receive(Subject, Content Filter Expression)

The router level will create both a real distribution tree for subjects and a sub-tree for each filter or merged filter set. This will be done with regard to the location (and density) of receivers. It is possible that we can use a multicast tunnel or multicast address translation service, such as the one described in [11], to provide further levels of aggregation within the network. This will require the routers to perform approximate tree matching algorithms.
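Purely as an illustration of the shape such a seamless API might take, here is a minimal sketch in which a single in-process backend stands in for the router-level service (IP multicast plus GRA); every class and method name below is our own assumption.

class LoopbackBackend:
    """Stand-in for the router-level service (IP multicast + GRA filters)."""

    def __init__(self):
        self.groups = {}  # subject -> list of (content filter, callback)

    def create(self, subject):
        self.groups.setdefault(subject, [])

    def join(self, subject, content_filter, callback):
        # In a real deployment this would trigger a multicast group join
        # and a PGM/GRA filter installation on the distribution tree.
        self.groups.setdefault(subject, []).append((content_filter, callback))

    def send(self, subject, content):
        for content_filter, callback in self.groups.get(subject, []):
            if content_filter is None or content_filter(content):
                callback(subject, content)

class EventAPI:
    """The common API: callers cannot tell which layer does the work."""

    def __init__(self, backend):
        self.backend = backend

    def create(self, subject):
        self.backend.create(subject)

    def subscribe(self, subject, content_filter=None, callback=print):
        self.backend.join(subject, content_filter, callback)

    def publish(self, subject, content):
        self.backend.send(subject, content)

api = EventAPI(LoopbackBackend())
api.create("telemetry.temperature")
api.subscribe("telemetry.temperature",
              content_filter=lambda c: c["celsius"] > 30,
              callback=lambda s, c: print("alert on", s, c))
api.publish("telemetry.temperature", {"sensor": "a1", "celsius": 35})

The point of such a shim is that EventAPI stays fixed while the backend is chosen reflectively, per subject, from whatever mixture of overlay, GRA, and native multicast support the network offers.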
1.1 Solution, and Proposed Experiment
The approach we will take in the work is one of "build and learn". We will build a piece of reflective middleware that is a shim between an existing event notification service and the reflective routing and filter service. This will involve extending the PGM signaling protocol that installs and activates (via IP router alerts) the filters. We will also investigate efficient hashes for subject-to-group and content-to-sequence-number mapping. Subsequently, we aim to evaluate our approach by applying it to a large-scale event-driven (sentient) application, such as novel context-aware applications for the emerging UMTS mobile telephony standard [37] or large-scale location tracking applications [38]. For example, there is the possibility of developing a location tracking system (for people, vehicles, and baggage) for large new airport terminals.
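As one illustration of what such hashes might look like (the concrete mapping below is our assumption, not part of PGM or TIBNET), a subject name can be hashed into an administratively scoped IPv4 multicast group, and a content key into a small number of bands that sequence-number filters could select on:

import hashlib
import ipaddress

def subject_to_group(subject):
    # Map a subject deterministically into the 239.0.0.0/8 scoped range.
    digest = hashlib.sha1(subject.encode()).digest()
    offset = int.from_bytes(digest[:3], "big")  # 24 bits of group space
    return ipaddress.IPv4Address("239.0.0.0") + offset

def content_to_band(key, bands=16):
    # Map a content key to one of a few bands a router filter could match.
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:2], "big") % bands

print(subject_to_group("equity.trade"))  # e.g. some group in 239/8
print(content_to_band("symbol=ABC"))     # band index in [0, 16)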
2 Overlays and Reflection
As we can see, what we are designing is effectively a two-tier system, which entails multicast trees and, within these, filters. To these, we believe we have to add a third layer, which is illustrated in Fig. 1. The purpose of the overlay is to accommodate a variety of qualitative heterogeneity, where the lower two layers of multicast and filtering target the area of quantitative performance differences.
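To illustrate this division of labour, the sketch below shows how a forwarding decision might respect qualitative constraints (policy, capability) at the overlay while delegating the quantitative work to the lower tiers; the region descriptor and its fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    multicast_capable: bool  # can the lower two tiers serve this region?
    policy_allows: bool      # e.g. firewall or publication policy

def forward(event, region):
    if not region.policy_allows:
        return "blocked by overlay policy"   # qualitative, overlay tier
    if region.multicast_capable:
        return "native IP multicast + GRA"   # quantitative, lower tiers
    return "application-level relay"         # overlay bridges the island

print(forward({"subject": "s"}, Region("campus", True, True)))
print(forward({"subject": "s"}, Region("partner-net", False, True)))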
Firstly, initial event systems are built without any notion of a multicast filter-capable transport. Thus we must have an overlay of event distribution servers. These can, where the lower services are available, be programmed to take advantage of them amongst themselves, thus providing a seamless mechanism to deploy the new service transparently to publisher and subscriber systems. However, we also believe that there are inherent structural reasons why such an application layer overlay is needed. These include:

Policies. Different regions of the network will have different policies about which events may be published and which not.
Security. There may be firewall or other security mechanisms which impede the distribution via lower level protocols.
Evolution. We would like to accommodate evolution (in the same way that inter-domain routing protocols such as BGP allow intra-domain routing to evolve).
Interworking. We would like to accommodate multiple event distribution middlewares.
Others. There are other such "impedance mismatches" which we may encounter as the system scales up.

A novel aspect of our approach is that the overlay system does not, itself, construct a distribution tree. Instead, a set of virtual members is added to the lower level distribution system, which then uses its normal multicast routing algorithms to construct a distribution tree amongst a set of event notification servers separated in islands of multicast-capable networks. These servers then use an open interface to query the routers as to the computed tree, and then use this as their own distribution tree; in this way the overlay can take advantage of detailed metric information that the router layer has access to (such as delay, throughput and current load on links), instead of measuring a poor shadow of that data, which would lead to inaccurate and out-of-date parameters with which to build the overlay (a sketch of this idea appears after the list below). In some senses, what we are doing here is like multicast traffic engineering!

We believe that our system provides a number of engineering performance enhancements over previous event notification architectures. Future work will evaluate these, which include:

1. System performance: improvement in scalability, including reduction in join/leave and publish/subscribe latency, increase in event throughput, etc.
2. Network impact: impact on router load from filter cost, group join, leave, and multicast packet forwarding.
3. Expressiveness and seamlessness of the API: try it with a variety of event notification systems! Export via public CVS and see what the open source community does?
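The sketch below illustrates the virtual-member idea; the router interface shown is entirely hypothetical (no such open interface exists today), and the returned tree is faked for the purposes of illustration.

class RouterLayer:
    """Pretend reflective interface onto multicast routing state."""

    def __init__(self):
        self.members = set()

    def add_virtual_member(self, server):
        self.members.add(server)

    def computed_tree(self):
        # A real router layer would return the branch points and metrics
        # (delay, throughput, load) of the tree it actually built; here
        # we fake a star rooted at an arbitrary member.
        members = sorted(self.members)
        return {members[0]: members[1:]}

routers = RouterLayer()
for server in ["en-server-pisa", "en-server-cambridge", "en-server-ny"]:
    routers.add_virtual_member(server)

overlay_tree = routers.computed_tree()  # the overlay adopts the router tree
print(overlay_tree)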
3 Discussion
For now, it is an idea, but we can envisage a world in which pervasive computing devices generate 10,000,000,000 events per second. We can foresee a time when
there are thousands of millions of event subscribers all over the planet, with publishers having popularities as low as no or only a single subscriber, or as high as the entire world. One of the goals of this system is to explore the way that the multicast treees evolve and the filtering system evolves. Another goal is to see how multicast routign can be “laid open” as a service to be used to build distribution trees for other layers. Fianlyl, we belieev that the three levels we have may not be enough, and that as the system grows larger still, other services may emerge. It is frequently the case that in the long term, business migrates into the infrastructure. (c.f. voice, IP, etc). We expect many overlay services to do this. We believe that this process by will accelerate due to use of state of the art network, middleware and software engineering approaches. However, this process will not stop - there is an endless stream of new services being introduced “at the top”, and makign their way down to the bottom, to emerge as part of the critical information infrastructure. The architecture is illustrated in figure 1. In this we can see that a publisher creates a sequence of events, which carry attributes with given values. A consumer subscribes to a publisher, and may express content based filters to the publisher. In our system, these filter expressions can be distributed up-stream from the consumer towards the publisher. As they pass through Application-
Fig. 1. Channel Islands System Architecture (producers and consumers connected through GRA nodes and application-level event notification distributors over normal routers; the legend tabulates which event types each GRA blocks or splits)
level event notification distributors, they can be evaluated and compared, and possibly combined with other subscription filters. Notifications of interest are passed up-stream all the way to the publisher, or to the application-level event notification distributor nearest the publisher, which can then compute a set of fixed tags for data; it can also, by consulting with the IP and GRA routers, through the reflective multicast routing service, compute a set of IP multicast groups over which to distribute the data, which will create the most efficient trade-off between source and network load, and receiver load, as well as tag and filter evaluation, as the events are carried downstream from the publisher, over the IP multicast, GRA, and application-level event notification nodes. Devising and evaluating the detailed performance of the algorithms to carry out these tasks form the core of the requirements for future work. Acknowledgements. The author gratefully acknowledges discussions with his colleagues, particularly Jean Bacon and George Coulouris.
A Reliable Multicast Protocol with Delay Guarantees
Nicholas F. Maxemchuk
Columbia University, Dept. of Elec. Eng., New York, N.Y. [email protected]
Abstract. The reliable multicast protocol guarantees that all receivers place the source messages in the same order. We have changed this protocol from an event driven protocol to a timed protocol in order to also guarantee that all of the receivers have the message by a deadline. In this work we present two modifications to the timed protocol that provide shorter deadlines. In the examples that we consider, the tighter deadlines approach the nominal network delay.
1 Introduction
The Internet uses very simple protocols in the core of the network and relegates many functions to the end user. This strategy makes it possible to introduce new services by changing the programs at the users that require the services, rather than changing the entire network. The Internet provides best effort delivery. It does not guarantee the message delay or that the message will be delivered at all. In order for the end user to guarantee that messages are delivered within a certain interval, the end user must have a concept of time and take action within the interval. In conventional ARQ protocols the source uses a timer to periodically retransmit a message until it receives a response from the receiver. Alternatively, if the source transmits at known times, a receiver that has a clock and knows the source schedule can take action when messages aren't received. Periodic updates have been used in point to point transport protocols [1]. Recently, time has been added to the reliable broadcast protocol [2], RBP, to guarantee that all of the receivers have a message in a specified interval [3]. In the modified protocol messages are acknowledged according to a schedule and the receivers use absolute time to recover missing acknowledgements and source messages. Receivers that receive the acknowledgements and source messages do not have to send any further messages. RBP was invented in 1984. This protocol used as few as one control message for each broadcast message, independent of the number of receivers, to guarantee that all of the receivers correctly received a broadcast message. In addition to guaranteeing that all of the receivers correctly receive every broadcast message, it guarantees that every receiver places the broadcast messages in the same sequence. RBP was originally used to build a distributed database on an Ethernet [4]. In the early 90's, this protocol was adapted to operate on a multicast network over the Internet and was renamed the reliable multicast protocol [5], RMP.
RMP is event driven. The receivers do not take any action until a message is received. The protocol guarantees that all of the receivers "eventually" receive a message, rather than guaranteeing when they receive the message. If there are N receivers, the protocol guarantees that all of the receivers have a message after N − 1 additional messages have been acknowledged. RMP is described in section 2. In 1999 RMP was applied to an international, distributed stock market [3]. By adding knowledge of absolute time to the protocol, and making the protocol time driven, rather than event driven, the earlier characteristics of RMP are maintained while also guaranteeing that every receiver receives every broadcast message within a specified time. The timed version of RMP, T-RMP, is described in section 3. T-RMP periodically sends a control message that simultaneously acknowledges all of the unacknowledged source messages. All of the receivers know when a control message is scheduled to be transmitted and begin the recovery process soon after the scheduled transmission time, rather than waiting for a message. Once the control message is received, the receivers request any missing source messages that it acknowledged. When the period between control messages equals the average interarrival time of source messages, one source message is acknowledged by each control message, on the average, and the message efficiency of T-RMP and RMP is the same. When the period between control messages is greater than the average interarrival time of source messages, more than one source message is acknowledged by each control message, and the efficiency of T-RMP is higher than RMP. However, as the period between control messages decreases, the message efficiency of T-RMP also decreases. The version of T-RMP that is used in the stock market application is relatively easy to understand because the period between control messages is large enough for all of the receivers that have missed the control message, or any of the source messages that it acknowledged, to recover those messages before the next control message is transmitted. We can guarantee that the control message period is large enough for a receiver to recover a missing message because the ARQ protocol is not open ended. After a fixed number of attempts, the requesting site assumes that the site with the message has failed and enters a reformation process. Therefore, at the end of each control message period either all of the operable receivers have all of the acknowledged messages, or the system has entered a reformation process to identify failed sites. The reformation process is a lengthy process. In order to prevent the protocol from performing a reformation when the network experiences slightly longer than normal delays, the message recovery time is much greater than the average message delay in the Internet.
The control message period is the guaranteed delivery delay for an acknowledged message. The delay guarantee that is provided by the original version of T-RMP is adequate for the stock market application, but reducing the delay will make the protocol applicable to a larger class of applications, such as remote classrooms where students ask questions. One way to reduce the control message interval is to reduce the time between retry attempts to recover a lost message. As we make the retry intervals smaller we can take advantage of the small delays that usually occur in the network. However, the smaller retry intervals result in more frequent retries when a message has not been lost but is only delayed by the network. As the retry interval goes to zero, the time to recover a message can track the distribution of delays in the network, but the number of retries, and hence the number of overhead messages, becomes large. This effect occurs for all ARQ protocols that are used on the Internet, or any other network with variable delays. The effect is not unique to T-RMP and is not investigated in this paper. In sections 4 and 5 we consider two ways to reduce the guaranteed delivery time that are unique to T-RMP. The original version of T-RMP uses separate retry counters to recover the control message and the source messages that it acknowledged. In section 4 we combine the counts and show that we can significantly reduce the control message period without increasing the probability of erroneously entering the reformation process. In the original version of T-RMP the control message interval, the time until a message is recovered by all of the receivers, and the time to enter the reformation process, are all the same. In section 5 we consider using different time intervals for each of these events. The operation of the protocol is more complicated. We show that the protocol operates as a D/G/1 queue and show, by an approximate analysis of the queue, that using different intervals for the three events can significantly reduce the delay guarantees.
2 The Reliable Multicast Protocol
RMP has three characteristics that distinguish it from earlier protocols:
1. Every receiver places the messages from the sources in the same sequence.
2. Every receiver eventually knows that every other receiver has the data.
3. When there aren't any losses, there is only one control message per source message, independent of the number of receivers. (In reference 6 there is an analysis of the number of messages that are transmitted when there are losses.)
The RMP protocol has two parts. The first part operates on multicast messages during normal operation. It guarantees delivery and ordering of the messages from the sources. The second part is a reformation protocol that reorganizes the broadcast group and guarantees the consistency of message sequences at the receivers after failures and recoveries. The complete protocol is described in reference 2. In this presentation we are concerned with the first part of the protocol.
There are n sources and m receivers that participate in the protocol, as shown in figure 1. The sources and receivers may be the same or different. A single receiver, called the token site, acknowledges a source message and assigns the message a sequence number. All of the receivers place the messages in the order indicated by the sequence number. We guarantee that every receiver has all of the messages by sequentially passing the token to each receiver. A receiver does not accept the token until it acquires all of the preceding acknowledgments and the messages that they acknowledged. Therefore, when the receiver with the token sends an explicit acknowledgment for a source message, it implicitly acknowledges that it has received all of the source messages that have been acknowledged prior to this message.
Fig. 1. The Reliable Broadcast Protocol
The sources use a positive acknowledgment protocol. A message from source s contains the label (s, M_s) to signify that it is the M_s-th message from source s. Source s transmits message M_s at regular intervals until it receives an acknowledgment or decides that the token site is not operating. If a source decides that the token site is not operating, it initiates a reformation.
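The source side of this positive acknowledgment protocol is simple enough to sketch. The Python fragment below is a minimal illustration, not RMP's actual implementation; the retry interval, retry limit, and the send/acked callbacks are assumptions made for the example.

import time

# Illustrative sketch of the source side of RMP's positive ack protocol.

def initiate_reformation():
    print("token site not responding: initiating reformation")

def transmit_until_acked(send, acked, label, retry_interval=0.05, max_tries=10):
    """Retransmit message (s, M_s) at regular intervals until the token site
    acknowledges it, or decide the token site failed and start reformation."""
    for _ in range(max_tries):
        send(label)                    # multicast (s, M_s)
        time.sleep(retry_interval)
        if acked(label):
            return True                # acknowledged and sequenced
    initiate_reformation()
    return False

# Demo with stubs: the acknowledgment arrives after the second transmission.
attempts = []
transmit_until_acked(send=attempts.append,
                     acked=lambda label: len(attempts) >= 2,
                     label=("s", 7))
print(len(attempts))                   # 2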
The receivers take turns acknowledging messages from sources by passing a token. A single control message, acknowledgment t from receiver r, serves three separate functions:
1. it acknowledges (s, M_s) and assigns it sequence number t,
2. it is an acknowledgment to receiver (r − 1) mod m that the token was successfully transferred to r, and,
3. it transfers the token to receiver (r + 1) mod m.
The token transfer uses a positive acknowledgment protocol. Token site r periodically sends acknowledgment t until it receives acknowledgment t + 1 or greater, or it receives a separate token acknowledgment. If the acknowledgment isn't received in a specified number of attempts, receiver r decides that receiver r + 1 is inoperable and initiates a reformation. When r sends acknowledgment t it stops acknowledging source messages, even though receiver (r + 1) mod m may not have received, or may not be able to accept, the token. This guarantees that at most one receiver can acknowledge source messages. When a receiver accepts the token it also assumes responsibility for servicing retransmission requests. Receiver (r + 1) mod m does not accept the token until it has all of the acknowledgments and source messages that were acknowledged up to and including t. Receiver r does not stop servicing retransmission requests until it receives the acknowledgment for passing the token. This guarantees that there is always at least one site, that has all of the source and control messages, that is responding to retransmission requests. Receivers place the messages in the sequence assigned by the acknowledgments. Each receiver, r, tracks t_r, the next acknowledgment that it expects. If an acknowledgment number greater than t_r is received, acknowledgment t_r is missing. If acknowledgment t_r is received and the source message that is acknowledged is not in the receiver's queue of unacknowledged messages, then the source message is missing. The receivers use a negative acknowledgment strategy. No control messages are sent unless a missing message is detected. When a receiver detects a missing message it recovers the message using a positive acknowledgment protocol. The receiver periodically requests the message until it receives the message or decides that the retransmit server is inoperable and initiates a reformation. As the token is passed, the token site can infer information about the other receivers. When receiver r transmits acknowledgment t, receiver r and any receiver that receives the acknowledgment knows that
— receiver r has all of the acknowledged messages up to and including the t-th message,
— receiver (r − 1) mod m has all of the acknowledged messages up to and including the (t − 1)-th message,
— . . ., and
— receiver (r − m + 1) mod m has all of the acknowledged messages up to and including the message acknowledged by t − m + 1.
Since (r − m) mod m = r, receiver r knows that all of the receivers have all of the source messages up to and including the message acknowledged by t − m + 1. By a similar argument, all of the receivers know that all of the other receivers have all of the messages up to and including the (t − m + 2)-th message. Figure 2 is an extended finite state machine, E-FSM, representation of the actions that a receiver takes when an acknowledgment is processed. The states indicate tests that are performed or situations where the receiver waits for an external stimulus, such as a message or a time out. The transitions between states are labeled with the event that caused the transition, followed by a "*"ed list of actions that occur during the transition.
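The inference enumerated at the start of this passage is easy to mechanise. The following small Python sketch (the function name and return layout are our own) computes, from a single acknowledgment, the prefix of the sequence each receiver is known to hold:

# What Ack(t) from receiver r lets everyone infer, per the list above:
# receiver (r - i) mod m holds every acknowledged message up to t - i.

def known_prefixes(t, r, m):
    """Return {receiver: highest ack it is known to hold} after Ack(t) from r."""
    return {(r - i) % m: t - i for i in range(m)}

prefixes = known_prefixes(t=100, r=2, m=5)
print(prefixes)                 # receiver 2 -> 100, receiver 1 -> 99, ...
print(min(prefixes.values()))   # 96 = t - m + 1: the prefix all receivers share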
3 The Timed Reliable Multicast Protocol
T-RMP uses the same token passing mechanisms and retransmission strategies as RMP, as shown in figure 1. The difference is that T-RMP is time driven rather than event driven. Acknowledgments are transmitted by the token site at scheduled times separated by a fixed period. In addition, T-RMP is a bulk acknowledgment protocol. An acknowledgment message contains a list of all of the source messages that the token site has received, but which have not been acknowledged by the previous token sites. The t-th token passing message acknowledges a sequence of k source messages, where k is variable. The messages are assigned sequence numbers s + 1 to s + k, where s is the last sequence number assigned in the (t − 1)-th acknowledgment.
In T-RMP we assume that the receivers have synchronized clocks. Synchronization may be performed on the multicast network using other protocols [7, 8, 9] or may be performed on a parallel network, such as a satellite network, with deterministic delays. The clock synchronization technique is not part of T-RMP and is not considered in this presentation. The primary advantage of the timed protocol is that a receiver detects a missing token based upon the time that it was scheduled to be transmitted, rather than later events that occur at undetermined times in the future. Negative acknowledgments have much more significance in the scheduled protocol than in the event driven protocol. In the event driven protocol, RMP, we cannot assume that a receiver that has not sent
Fig. 2. An E-FSM representation of acknowledgment processing at a receiver in the RMP protocol
a negative acknowledgment has received a source message. The receiver may also have missed the positive acknowledgment for that source message and any subsequent acknowledgments that would indicate that it missed the first acknowledgment. We cannot be certain that the receiver has a source message until that receiver sends an implicit acknowledgment by sending a positive acknowledgment for a subsequent message. In the scheduled protocol, T-RMP, a receiver is aware that it has missed an acknowledgment one network delay time after the acknowledgment is scheduled. Message recovery uses a positive acknowledgment protocol that retransmits unanswered requests at fixed intervals and declares a failure and places the system in reformation after a fixed number of unanswered requests. Therefore, after a fixed time following a message's acknowledgment, either all of the operable receivers have the message or the system is in reformation.
Figure 3 is the E-FSM representation of how acknowledgments are processed in T-RMP. We can use this state diagram to prove that all of the operable receivers have received a source message, or have placed the system in reformation, within time (nmax + 1/2)T_R of when it was scheduled to be acknowledged. If the token site that was scheduled to send the acknowledgment has failed, the system is placed in the reformation phase by the receivers. The sources don't have to detect a failed token site.
Fig. 3. An extended finite state machine representation of acknowledgment processing at a receiver in the timed RMP protocol
A source message is scheduled to be acknowledged at time t_e. If the acknowledgment is received before t_e + T_R/2, the receiver moves to state 4, with nr = 0. Otherwise, at t_e + T_R/2 the receiver moves to state 2 with nr = 0, requests the missing acknowledgment, increments nr to 1, and moves to state 3. If the missing acknowledgment is received within T_R seconds, the receiver moves to state 4, otherwise it returns to state 2. The receiver circulates around the loop between states 2 and 3 at most nmax times. Either the receiver enters state 4 before
t_e + (nmax + 1/2)T_R, or it enters state 7 and initiates a reformation at time t_e + (nmax + 1/2)T_R. If a receiver enters state 4 at time t_4 such that t_e + (k_4 − 1/2)T_R ≤ t_4 < t_e + (k_4 + 1/2)T_R, then nr = k_4. If the receiver has the acknowledged source message, then the receiver moves to state 8, otherwise it moves to state 5. If k_4 = nmax, the receiver moves immediately to state 7, otherwise it follows the 5->6->5 recovery loop up to nmax − k_4 times. The receiver enters state 7 at time t_4 + (nmax − k_4)T_R < t_e + (nmax + 1/2)T_R, if it does not enter state 8 prior to this time. Therefore, by t_e + (nmax + 1/2)T_R all operable receivers either have the message, or have started a reformation process. If the token passing period is T_P ≥ (nmax + 1/2)T_R, the next token site has recovered all of the messages, and is ready to acknowledge messages before the next acknowledgment is scheduled to be transmitted. The structure of the state machine for T-RMP is similar to the state machine for RMP in figure 2. Two obvious differences are that:
1. T-RMP moves from state 1 to state 2 when the local clock exceeds the scheduled acknowledgment time plus a reasonable network delay, while RMP makes the same transition when it receives a token with a larger sequence number than expected, and,
2. T-RMP checks for, and may have to recover, a set of source messages for each acknowledgment, while RMP only checks for a single source message.
There are two other things that should be noted in the T-RMP state machine:
1. the time out that activates the transition from state 1 to state 2 is half the time out that activates the transitions between states 2 and 3 or 5 and 6, and,
2. the sum of retries to recover a missing acknowledgment and a missing message is limited, rather than separately limiting the number of retries to recover each.
The sum of the timer delays in T-RMP determines how frequently we can transfer the token. The smaller the timers, the more frequently we can transfer the token. The more frequently we are able to transfer the token, the smaller the time until we are certain that all of the receivers have a message. In addition, smaller token transfer times result in a smaller waiting time until source messages are acknowledged. Therefore, we would like to make the total timer delays as small as possible.
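The (nmax + 1/2)T_R bound can be checked numerically straight from the state machine description. The Python fragment below is illustrative only; it assumes the merged retry count noted in item 2 above, so any split of the budget between acknowledgment retries (states 2-3) and message retries (states 5-6) finishes by the same deadline.

# Numeric check: whatever mix of ack retries and message retries occurs,
# the receiver finishes, or enters reformation, by t_e + (nmax + 1/2)*T_R.

def completion_time(t_e, T_R, nmax, ack_retries, msg_retries):
    assert 0 <= ack_retries and 0 <= msg_retries
    assert ack_retries + msg_retries <= nmax   # merged retry count
    t = t_e + T_R / 2                          # first chance to notice a loss
    t += ack_retries * T_R                     # loop between states 2 and 3
    t += msg_retries * T_R                     # loop between states 5 and 6
    return t

t_e, T_R, nmax = 0.0, 1.0, 4
worst = max(completion_time(t_e, T_R, nmax, a, nmax - a) for a in range(nmax + 1))
assert worst <= t_e + (nmax + 0.5) * T_R
print(worst)   # 4.5 = t_e + (nmax + 1/2) * T_R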
4 Merged Retry Count
We merge the count of retry requests to recover lost acknowledgments and lost messages because it reduces the maximum time that we allow to recover messages, without increasing the probability of erroneously entering the reformation phase. As an example, consider a system with independent message losses, P_L:
— The probability that a receiver does not receive an acknowledgment is P_A = P_L;
— The probability that the receiver misses at least one of k source messages that are covered by an acknowledgment is P_S = 1 − (1 − P_L)^k;
— And, the probability that the request for a retransmission from a receiver, or the retransmitted acknowledgment message, or retransmitted source messages, is lost is P_R = 1 − (1 − P_L)^2.
In a system that allows n_1 attempts to recover a missing acknowledgment and a separate n_1 attempts to recover any missing source messages, the probability of initiating a reformation process because a sequence of messages has been lost, rather than because a component has failed, is

P_{R,1}(n_1) = (P_A + P_S) P_R^{n_1} − P_A P_S P_R^{2 n_1}.
In a system that allows a total of n_2 attempts to recover both the missing acknowledgments and the missing source messages, the probability of initiating the same erroneous reformation is

P_{R,2}(n_2) = (P_A + P_S) P_R^{n_2} + P_A P_S (n_2 P_R^{n_2 − 1} (1 − P_R) − P_R^{n_2}).

When P_L is small, P_{R,1}(n_1) > P_{R,2}(n_1 + 1). In other words, if we make the sum of the retries one greater than the number of separate retries to recover the acknowledgment and source messages, we are less likely to initiate an erroneous reformation process. A system that allows 3 separate tries to recover acknowledgments and source messages must allow 6 recovery intervals before passing the token. A system that monitors the sum of the retries can provide better performance while only allowing 4 recovery intervals before passing the token. Of course we can make the above model more accurate by
— allowing different loss probabilities for different length messages, a short acknowledgment message versus up to k source messages,
— considering time correlation of the losses, and
— taking into account other receivers that may miss the same messages.
Our objective, however, is to demonstrate the advantage of summing the retry attempts, rather than to recommend a specific number of attempts for a particular network condition. In a real network the loss and delay change continuously. We
recommend increasing n_2 by one, and slowing down the token passing, when the receivers initiate unnecessary reformations, and decreasing n_2, and speeding up the token passing, when there is a long time between erroneous reformations. How long is long depends upon how badly we want to avoid erroneous reformations.
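The comparison can be reproduced numerically. The following Python fragment uses the formulas as reconstructed above, with illustrative values of P_L, k and n_1:

# Compare separate retry budgets with a merged budget one larger.

def p_reform_separate(n1, P_L, k):
    P_A = P_L
    P_S = 1 - (1 - P_L) ** k
    P_R = 1 - (1 - P_L) ** 2
    return (P_A + P_S) * P_R ** n1 - P_A * P_S * P_R ** (2 * n1)

def p_reform_merged(n2, P_L, k):
    P_A = P_L
    P_S = 1 - (1 - P_L) ** k
    P_R = 1 - (1 - P_L) ** 2
    return (P_A + P_S) * P_R ** n2 + P_A * P_S * (
        n2 * P_R ** (n2 - 1) * (1 - P_R) - P_R ** n2)

P_L, k, n1 = 0.01, 5, 3
print(p_reform_separate(n1, P_L, k))    # ~4.7e-07, needing 6 recovery intervals
print(p_reform_merged(n1 + 1, P_L, k))  # ~2.4e-08, with only 4 intervals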
5 Separating Events
At each time t_e acknowledgment Ack(e) is scheduled to be transmitted. ∆T = t_{e+1} − t_e is the token passing period. Let Ack(e) and the source messages that it acknowledges comprise the message set Msg(e). At t_e + ∆C all of the receivers that have recovered the source messages in Msg(e) commit those messages. We assume that at t_e + ∆C most, if not all, of the receivers have these messages. At t_e + ∆R any receiver that has not recovered Msg(e) initiates a reformation. In our initial description of T-RMP, ∆T = ∆C = ∆R = ∆init. This simplified the description and understanding of the protocol because the operation of the protocol is the same at every receiver and token site during every token passing interval. At each t_e, if the system is not being reformed, all of the receivers, including the token site, have all of the Msg(i) for all i < e. At t_e the token site transmits Ack(e). At t_e + T_R/2 all of the receivers that do not receive Ack(e) try to recover it. At t_e + ∆R (≤ t_{e+1}) any receiver that has not recovered Msg(e) initiates a reformation process. Therefore, if the system is not being reformed, the operation at t_{e+1} is the same as the operation at t_e. In addition, t_{e+1} is the commit time for the messages acknowledged at t_e, since we can guarantee that all of the receivers have those messages. When a source message is received at the token site it may wait up to ∆T before the token is transmitted, and then must wait an additional ∆C before the receivers commit the acknowledged message. We would like to make ∆max = ∆C + ∆T as small as reasonable, in order to provide stronger quality of service guarantees. In the initial system ∆max,init = 2∆init. In this section we set ∆T < ∆init. However, in order to keep the probability that a receiver has a message the same as in the initial system, we must make ∆C > ∆init. We show that ∆max = ∆T + ∆C < ∆max,init, for a certain range of ∆T. We further reduce ∆max by making ∆C < ∆R. We justify this reduction by noting that false alarms, that cause unnecessary reformations, are generally more costly than the late arrival of a message. When we make ∆T < ∆R, Msg(e) may be recovered after Ack(e + 1) is scheduled to be transmitted, since t_{e+1} < t_e + ∆R. Recovering Msg(e) after t_{e+1} does not have to affect the operation of a receiver that is not also the token site. The receiver can start recovering the missing components of Msg(e + 1) at the scheduled time whether or not it has completed the recovery of any Msg(i), i < e + 1. A receiver may have several recovery processes in progress simultaneously, or, since all of the requests for missing messages are directed to the current token site, the receiver may combine all of the requests into a single message.
However, when the site that is scheduled to transmit Ack(e + 1) fails to recover Msg(e) before t_{e+1}, all of the receivers are affected. By the conventions of the protocol, the token site does not transmit an acknowledgment until it recovers the earlier messages and can service all retransmission requests. All of the other receivers may start transmitting their retransmit requests at t_{e+1} + T_R/2, but the recovery cannot start in earnest until after the token site completes its recovery and transmits the acknowledgment. In our model, the number of retries needed to recover messages, and the distribution of the recovery time, is independent of when the recovery starts. Therefore, if the recovery starts later than t_{e+1} + T_R/2, it will end later. If we make ∆T < ∆init, we must make ∆R > ∆init in order to keep the probability of reformation when there isn't a failure the same. The operation of the token sites can be mapped onto the operation of a D/G/1 queue, where the period of the arrival process is ∆T and the service process is the distribution of times to recover Msg(e). In order to perform this mapping, site s_e, that transmits Ack(e), arrives in the queue at time t_e, the scheduled time to transmit the acknowledgment. If site s_{e−1}, that transmits Ack(e − 1), has successfully transmitted Msg(e − 1) to s_e (that is to say, s_e has successfully recovered Msg(e − 1)) before t_e, then the queue is empty, and s_e immediately begins to service Msg(e). The service time of Msg(e) is the time needed to successfully transmit Ack(e) from site s_e to site s_{e+1}, which is responsible for transmitting Ack(e + 1), and for s_{e+1} to recover any missing source messages in Msg(e). If Msg(e − 1) is not transferred to s_e by t_e, s_e must wait for the transfer to be complete before beginning to service Msg(e). Site s_e receiving the token at t_e + ε and beginning the next token transfer is equivalent to s_e arriving at the queue at t_e, and waiting until the previous service is completed at t_e + ε to begin its own service. Note that s_{e+1} begins trying to recover Msg(e) at t_e + T_R/2, and combines any other missing messages with this request. This makes the service time independent of the past history of site s_{e+1}. Whenever s_e transmits the acknowledgment, s_{e+1} is ready to start recovery, without waiting for an earlier recovery to be complete.
The queue builds up because of the token passing process, but the waiting time distribution for the queue is the waiting time component for the delay at any receiver. None of the receivers can start recovering Msg(e) until s_e has the token. Therefore they all have the same waiting time. The delay between the time that a source message is scheduled to be acknowledged and the time that a receiver has that message is the convolution of the waiting time distribution with the service time distribution. The service time distribution is the time needed for the receiver to acquire a message set Msg(e), when the token sites have not failed. When the delay at a receiver reaches ∆R, the receiver starts a reformation process, even though there has not been a failure. The waiting time is zero after a reformation. Since the probability of a false reformation is intentionally small, we approximate this probability as the probability of exceeding ∆R in an infinite queue. The probability that a receiver has not acquired a source message when it is scheduled to be committed is the probability that the delay exceeds ∆C.
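The waiting-time behaviour of this D/G/1 mapping can be explored with a short Monte Carlo sketch. In the Python fragment below, the exponential service law is an illustrative stand-in for the recovery-time distribution derived below, and all parameter values, including the tail threshold, are arbitrary:

import random

# Monte Carlo sketch of the D/G/1 mapping: token sites "arrive" every
# Delta_T seconds; the service time is the (random) time to hand Msg(e)
# to the next site.

def waiting_times(delta_T, mean_service, n=100_000, seed=1):
    random.seed(seed)
    waits, finish_prev = [], 0.0
    for e in range(n):
        arrival = e * delta_T                    # t_e, the scheduled ack time
        wait = max(0.0, finish_prev - arrival)   # wait for Msg(e-1) transfer
        waits.append(wait)
        finish_prev = arrival + wait + random.expovariate(1 / mean_service)
    return waits

w = waiting_times(delta_T=1.0, mean_service=0.8)
print(sum(w) / len(w))                      # mean waiting time
print(sum(1 for x in w if x > 3) / len(w))  # tail, akin to P(delay > Delta_R)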
Following the model in the previous section, the service time is

s = d_{n,1}(1 − x_A) + (T_R/2 + j_1 T_R + d_{n,2} + d_{n,3}) x_A + (j_2 T_R + d_{n,4} + d_{n,5}) x_S

where: the d_{n,i} are delays through the network that depend on the source, the current token site and the network congestion; x_A = 1 with probability P_L and 0 otherwise; x_S = 1 with probability 1 − (1 − P_L)^r and 0 otherwise; r is the number of arrivals from independent sources during ∆T and is distributed as

p(r) = (λ_A ∆T)^r e^{−λ_A ∆T} / r!

and the j_i are the numbers of unsuccessful retransmission attempts before acquiring a missing message, distributed as

p(j) = (1 − P_R) P_R^j, for j = 0, 1, 2, . . .,

where P_R = 1 − (1 − P_L)^2. When the delays and retries are uncorrelated, the average service time is

S̄ = d̄_N (1 + P_L + 2(1 − e^{−λ_A ∆T P_L})) + T_R (P_R/(1 − P_R)) (P_L + 1 − e^{−λ_A ∆T P_L}) + (T_R/2) P_L,

where d̄_N = E(d_{N,j}).

For BB control, P is typically a function of the backlog, P(t) = f(b(t)), where f(b) > 0 for b > 0 and f(0) = 0. For RB control, P is typically driven by an integration process, which sums the excess demand, such as P(t+1) = P(t) + ∆P·(x(t) − u·c(t)), where x(t) and c(t) are the arrival and service rates at time t respectively, ∆P is the gain of the control, which affects the stability and convergence, and u controls the target utilisation. Although BB congestion control is simpler to implement, it has some inherent limitations not present in RB control. The backlog (queuing) process, b(t+1) = [b(t) + x(t) − c(t)]^+, cannot observe long term arrival rates x(t) < c(t): if x(t) < c(t) for a sufficient period of time, then b(t) reaches zero. Once b(t) = 0, if x(t) continues to be less than c(t) and b(t) remains zero, we can say nothing about how close x(t) is to c(t) by observing the state of b(t). Therefore, by observing b(t) the sources cannot be provided with feedback about the level of x(t) to control their transmission rate, as b(t) stays at zero and provides no information about x(t). Given that a positive feedback signal P is required to control the source at some steady rate x(t) where x(t) < c(t), and P(t) = f(b(t)), the backlog must be positive, b(t) > 0, for P to be positive. This shows how BB control presupposes the existence of backlog: backlog is necessary for the control process itself. Backlog is undesirable because it creates packet latency and delay jitter. Furthermore, delay in the congestion control system loop pushes the network towards instability, increasing the likelihood of buffer overflow and under-utilisation. Of course, some
backlog is necessary to achieve a desired utilisation of a link with a non-deterministic arrival process; however, this is at worst equal to, but typically far less than, the backlog created by BB control such as drop-tail or RED [8]. Unlike the BB schemes, the RB control mechanism can observe x(t) directly. In a steady state situation, where the input process is stationary, the amount of backlog kept can therefore be only the minimum required to achieve the desired utilisation. It is not the intention of this paper to give a thorough performance comparison of different congestion control strategies, only to indicate some of the reasons why it is desirable to have a RB control strategy. For more background, the reader is referred to [5] [8]. Now that we have presented the need for (1) class based scheduling algorithms and (2) RB control, the algorithm which combines the two is presented in Section 2 and its performance evaluation is presented in Section 3.
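To make the contrast concrete, here is a toy discrete-time sketch of the two feedback laws in Python. The source model (probe up by a constant, back off in proportion to marking) and all constants are illustrative assumptions, not the paper's model:

# Toy comparison of rate-based (RB) and backlog-based (BB) feedback.

def simulate(rate_based, steps=4000, c=100.0, u=0.95, dP=1e-4, alpha=1e-3):
    x, b, P = 50.0, 0.0, 0.0
    xs, bs = [], []
    for _ in range(steps):
        b = max(0.0, b + x - c)                           # backlog process
        if rate_based:
            P = min(1.0, max(0.0, P + dP * (x - u * c)))  # P driven by rates
        else:
            P = min(1.0, alpha * b)                       # P = f(b(t)): needs b > 0
        x = max(1.0, x + 1.0 - x * P)                     # sources probe up, back off
        xs.append(x)
        bs.append(b)
    h = steps // 2                                        # average the second half
    return sum(xs[h:]) / h, sum(bs[h:]) / h

print("RB:", simulate(rate_based=True))    # mean rate near u*c, backlog near zero
print("BB:", simulate(rate_based=False))   # a standing backlog sustains P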
2 Algorithm Background
RB AQM operates in symbiosis with a scheduler. Our proposed design of RB AQM applies to a work conserving WFQ-like scheduler. A work conserving scheduler is never idle if there are any packets in any queue. A WFQ-like scheduler, such as RR and many variants of WFQ, allocates a portion of service time to each queue during an interval of operation. The scheduler is interfaced to by the enqueue and dequeue functions, which accept and provide the next packet for queuing or transmission respectively.
Fig. 1. RB AQM architecture in a class based scheduler
As shown in Fig. 1, each queue in the scheduler is managed by a separate instance of an AQM algorithm. The AQM algorithm decides which packets to drop or ECN mark. Packet marking/dropping gives the source algorithm a feedback signal which controls its transmission rate and avoids queue overflow or excessive backlog. Traditionally, this would be performed by BB control, such as a drop-tail or RED queue. RB control directly replaces these algorithms. In general, RB AQM is any process which determines the packet marking/dropping rate, P(t), from at least the packet arrival rate x(t) and capacity c(t). Typically, the process for P(t) is an integrator of excess demand [10], P(t+1) = P(t) + ∆P·(x(t) − u·c(t)); however, other functions are possible, motivated by better convergence or stability (e.g., REM, GREEN).
P_i(t + 1) = AQM(c_i(t), x_i(t), P_i(t), . . .), 1 ≥ P_i(t) ≥ 0. (1)
The distinctive issue faced by RB AQM in a class-based scheduler is that the capacity available to each class i, denoted c_i, and the packet arrival rate for that class, denoted x_i, need to be known. In a work conserving scheduler, such as WFQ, where unused capacity in one class is redistributed to other classes, the capacity available to each class is time-varying and depends on, and affects, the traffic in other classes. This paper enables RB AQM by presenting a technique for calculating and controlling c_i, the capacity allocated to each class. Class dimensioning, or controlling the number of users per class, is beyond the scope of this paper. A basic algorithm is introduced in Subsection 2.1 which results in a functional work conserving RB system, where each class is guaranteed its minimum share, M_i. However, the capacity above the minimum is not distributed with any notion of fairness. Instead, the classes with the most aggressive traffic win the slack capacity. In Subsection 2.2, we present a notion of proportional fairness, and a mechanism to enforce it.
2.1 Basic Algorithm
2.1.1 Capacity Estimation
Consider a stream of packets scheduled by a work-conserving WFQ scheduler, of N classes. Let B be the vector representing the sizes (bits) of the H packets that have been served most recently. The elements of vector B are in reverse order of their service completion times. In other words, B_0 is the size of the most recently served packet, B_1 is the size of the previous packet, and so on. Finally, B_H is the size of the oldest packet in B. Similarly, we define the vector C, of H elements, such that C_j is the class (C_j ∈ {1, 2, 3, . . . N}) of the packet represented by B_j, j = 1, 2, 3, . . . H. Let S(t) be the physical capacity of the link at time t. When S(t) is time varying, such as with Ethernet, DSL, or radio, it can be estimated from the last packet's transmission time. The scheduling algorithm, such as WFQ, may guarantee minimum rates to each class. Let W be a vector whose element W_i corresponds to the share of capacity that each class i is guaranteed. For a WFQ scheduler, W_i corresponds to the service quantum for class i. In a work conserving scheduler, the actual capacity available to a class depends on the traffic in other classes as well as on the minimum rate allocation W. Without a priori knowledge of the traffic, the future capacity available to a class can only be estimated from the previous capacity. Let the identity function I(j,i) be:
I(j, i) = 1 if C_j = i, and 0 otherwise. (2.1)
The estimated class capacity, S_i(t), is calculated from the portion of server time allocated to class i by the scheduling mechanism in the past H packets:
S_i(t) = (Σ_{j=0}^{H} B_j(t) · I(j, i) / Σ_{j=0}^{H} B_j) · S(t), i ≤ N. (2.2)
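A direct sliding-window implementation of (2.1)-(2.2) might look like the following Python sketch; the window length H and the packet sizes used in the demo are illustrative:

from collections import deque

# Share out S(t) according to the bits each class placed among the
# last H served packets, per equation (2.2).

class CapacityEstimator:
    def __init__(self, H=64):
        self.window = deque(maxlen=H)    # (size_bits, class_id) of last H pkts

    def packet_served(self, size_bits, class_id):
        self.window.append((size_bits, class_id))

    def S_i(self, class_id, S_t):
        total = sum(size for size, _ in self.window)
        if total == 0:
            return 0.0
        mine = sum(size for size, c in self.window if c == class_id)  # I(j,i)
        return (mine / total) * S_t      # equation (2.2)

est = CapacityEstimator(H=4)
for size, cls in [(8000, 1), (8000, 2), (8000, 1), (8000, 1)]:
    est.packet_served(size, cls)
print(est.S_i(1, S_t=1e6))   # 750000.0: class 1 got 3 of the last 4 packets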
Note that reduced-complexity techniques such as exponential averaging could be employed to compute (2.2).
2.1.2 Capacity Allocation
The minimum service rate guaranteed by the WFQ scheduling mechanism, M_i, is given by:
M_i(t) = (W_i / Σ_{j=1}^{N} W_j) · S(t). (3)
The capacity allocated to each class is therefore also bounded by the minimum rate enforced by the WFQ scheduling policy. The capacity allocated to class i, denoted c_i(t), is:
c_i(t) = Max(M_i(t), S_i(t)). (4)
Notice that c_i(t) is the capacity allocated to class i, not the capacity actually consumed by class i. The capacity not consumed by the class to which it is allocated may be used by other classes. If, for example, no class i packets arrive, S_i(t) will be 0, and c_i(t) = M_i(t). Although in this case no capacity is consumed by class i, if a burst of class i packets were to arrive, M_i(t) capacity is guaranteed. Note that (4) is evaluated at each update of the AQM process (1), which, at the maximum rate, is at every enqueue event.
2.2 Extended Fair Share Algorithm
The algorithm in 2.1 is extended here to enforce a notion of proportional fairness. The fair allocation enforcement applies only to bottlenecked classes, where x_i(t) ≥ c_i(t). Classes which are not bottlenecked at the link, x_i(t) < c_i(t), need no enforcement of fairness, since their rate is below their fair capacity and their bandwidth demand is satisfied. We define a fair allocation of capacity to a bottlenecked class i, F_i(t), as:
F_i(t) = (W_i / Σ_{j ∈ bottlenecked classes} W_j) · (S(t) − Σ_{j ∈ non-bottlenecked classes} x_j). (5)
In the extended algorithm, the capacity of non-bottlenecked classes is given by (4), and for bottlenecked classes, the capacity is given by (5). Notice that the sum of c_i(t) for non-bottlenecked classes, given by (4), and F_i(t) for bottlenecked classes, given by (5), may be more than S(t). However, the non-bottlenecked classes do not utilise their allocated capacity c_i(t), and the aggregate arrival rate is controlled below the capacity S(t).
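Putting (3), (4) and (5) together, a per-update allocation routine might look like the following Python sketch; the inputs and the bottleneck test are illustrative:

# Minimum shares from the WFQ weights, work-conserving estimates for
# non-bottlenecked classes, and proportional fair shares for the rest.

def allocate(S_t, W, S_est, x):
    """W: weights, S_est: estimates from (2.2), x: arrival rates, per class."""
    n = len(W)
    M = [W[i] / sum(W) * S_t for i in range(n)]              # (3)
    c = [max(M[i], S_est[i]) for i in range(n)]              # (4)
    bottlenecked = [i for i in range(n) if x[i] >= c[i]]
    slack = S_t - sum(x[i] for i in range(n) if i not in bottlenecked)
    W_b = sum(W[i] for i in bottlenecked)
    for i in bottlenecked:
        c[i] = W[i] / W_b * slack                            # (5)
    return c

# Class 0 is greedy (bottlenecked); classes 1 and 2 are below their share.
print(allocate(S_t=1e6, W=[4, 2, 2], S_est=[6e5, 2e5, 2e5], x=[9e5, 1e5, 1e5]))
# -> [800000.0, 250000.0, 250000.0]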
3 Implementation and Transient Performance Evaluation
3.1 Implementation
For class i, RB control was implemented in a WFQ scheduler with a variation of GREEN as the AQM algorithm, as follows:
P_i(t) = P_i(t) + ∆P_i(t) · U(x_i(t) − u_i · c_i(t)), (6.1)

where

U(x) = +1 if x ≥ 0, and −1 if x < 0, (6.2)

and

∆P_i(t) = max(abs(x_i(t) − u_i · c_i(t)), k), (6.3)
where u_i controls the target utilisation, and hence also the level of queuing, and k is a constant which limits the minimum adjustment to P_i(t), to improve convergence. The values of P_i(t), x_i(t) and c_i(t) are updated with every class i packet arrival. The pseudocode for the WFQ scheduling algorithm used is:

pkt wfq.dequeue() {
  while (TRUE) {
    for I = 1 to N {
      if (class[I].nextpkt.size <= S[I]) {
        S[I] = S[I] - class[I].nextpkt.size
        return class[I].dequeue()
      }
    }
    for I = 1 to N {
      if (S[I] < MaxS)
        S[I] = S[I] + W[I]
    }
  }
}

wfq.enqueue(pkt packet) {
  class[packet.class].enqueue(packet)
}

Fig. 2. Low jitter WFQ scheduler
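For illustration, the per-class update of (6.1)-(6.3) can be written as the Python sketch below. Since the paper does not give the units or gain used to map excess demand in bits per second onto a probability, the scaling constant here is an assumption, as are the u_i and k values:

# Per-class GREEN-variant AQM update, per (6.1)-(6.3).

def green_update(P_i, x_i, c_i, u_i=0.93, k=1000.0):
    err = x_i - u_i * c_i                # excess demand for class i
    U = 1.0 if err >= 0 else -1.0        # (6.2)
    dP = max(abs(err), k)                # (6.3): bounded minimum adjustment
    P_i += 1e-9 * dP * U                 # (6.1), with an assumed gain of 1e-9
    return min(1.0, max(0.0, P_i))       # keep the probability in [0, 1]

P = 0.0
for _ in range(5):                       # arrivals exceed the target: P rises
    P = green_update(P, x_i=1.2e6, c_i=1.0e6)
print(P)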
This particular WFQ variant minimizes the jitter of the higher priority classes (lower class numbers). The wfq.dequeue function is invoked when the link is ready to transmit the next packet and the wfq.enqueue function is invoked when a packet is received for transmission onto the link. A packet queued in a higher priority class will always be
served next, so long as the class's work quantum, W, has not been exceeded. Note that the function class[I].nextpkt.size returns the size [bits] of the next packet in class I, or infinity if there are no packets left in the class. The constant MaxS controls the maximum burst size allowed to be transmitted in a class that has been previously idle.
3.2 Performance Evaluation
The system was simulated using Network Simulator 2 [9]. Three simulated scenarios are presented in this paper. All scenarios used the same network topology, as depicted in Fig. 3. For Scenarios 1 and 2, the Diffserv managed link X has a 1 Mbps capacity, and it is 2 Mbps in Scenario 3. Multiple TCP or UDP sessions are aggregated to form the traffic of each of the four classes presented to the link. All data packets are 1000 bytes. We will now describe each simulation scenario and the results.
Fig. 3. Overview of Simulation Topology (Class 1-4 sources feed a WFQ scheduler with 4-class RB AQM, which serves a destination node over the X Mbps managed link)
Scenario 1A and 1B: TCP Traffic
The traffic of this scenario consists only of TCP sources. Scenario 1A uses RB and WFQ with the fairness enhancement (5). Scenario 1B uses WRED and WFQ. The flow rates of traffic in each class and the total number of packets backlogged for all classes were measured. The parameters for this scenario are listed in Table 1.

Table 1. Simulation Parameters for Scenario 1
Class  ui Utilisation  Wi    Sources          Start (sec)  Stop (sec)
1      0.93            8001  8 TCP 40ms RTT   0            100
2      0.93            4001  8 TCP 40ms RTT   20           140
3      0.93            2001  16 TCP 40ms RTT  40           180
4      0.93            1001  16 TCP 40ms RTT  60           220
The WRED implementation uses a weighted average of the backlog, denoted B_w(t), to determine the packet marking/dropping probability. The marking probability is related linearly to B_w(t) by P(t) = αB_w(t), where α is the reciprocal of the maximum queue size q. In Scenario 1B, q equals 10.
Fig. 4 confirms that a fair allocation of capacity is achieved with RB and WFQ, as the magnitude of the flow rate from each class is proportional to its minimum rate W when the traffic from different classes is switched on and off. Figures 5 and 7 show the backlog of the RB and BB (WRED) systems, with the thick black line being the average backlog measured over 300 packets. The figures illustrate the poorer queuing performance of WRED and WFQ compared to RB and WFQ congestion control. In the interval 50s to 100s, when all classes are active, note how the backlog increases with increasing traffic load. This illustrates the previous analysis: with BB control, where P(t) = f(b(t)), backlog is necessitated by the control system. With increased traffic load, the feedback signal P(t) must also increase to control the sources, and since P(t) is coupled with backlog, the backlog must also increase. Compare this with RB congestion control in Fig. 5, where the backlog varies about 0 regardless of the traffic.
Fig. 5. Scenario 1A: RB Aggregate Backlog (backlog in 1000-byte packets versus time in seconds)
Fig. 7. Scenario 1B: WRED Aggregate Backlog (backlog in 1000-byte packets versus time in seconds)
Scenario 2: TCP and UDP Traffic
This traffic scenario consists of both UDP and TCP sources. Classes 1 and 4 are UDP constant bit rate sources transmitting at 0.8 Mbps and 0.05 Mbps respectively. UDP sources ignore congestion notification. Classes 2 and 3 are comprised of TCP sources. For the complete parameters refer to Table 2. Fig. 8 shows that RB control allocates bandwidth fairly, despite the presence of unfriendly, non-congestion-controlled UDP sources. Notice that at 50 sec, when Class 2 traffic is switched on, the UDP traffic in Class 1 is throttled down to its fair share by an increased packet dropping rate. At this point Class 1 becomes a bottlenecked class.
In this way, the TCP sources can attain their fair share despite the aggressive UDP source.
Fig. 9. Scenario 3: WRED and RB Delay Performance (Mbps transferred with delay less than 50 ms versus number of TCP sessions, Class 1 traffic; curves WRED5, WRED10, WRED20, RB80 and RB85)
Scenario 3: Real-Time Traffic
In this scenario, it is demonstrated how a RB Diffserv architecture outperforms BB control for real-time traffic. Two classes are used to simulate the interaction of data traffic and real-time traffic. Class 2 contains TCP/FTP data traffic, and is insensitive to delay. Class 1 is the real-time traffic, with a hard maximum queuing delay requirement of 50 ms. The traffic in Class 1, the real-time traffic, consists of saturated TCP transfers, with the number of sessions increasing linearly from 1 to 450. A number of trials were simulated, using WRED with queue size value q set to 5 (WRED5), 10 (WRED10) and 20 (WRED20) packets, and using RB control with parameter u1 set to 0.8 (RB80) and 0.85 (RB85). The Diffserv link capacity is 2 Mbps, with 1 Mbps assigned to Class 1 and 1 Mbps assigned to Class 2.

Table 2. Simulation Parameters for Scenario 2
Class  ui Utilisation  Wi    Sources                    Start (sec)  Stop (sec)
1      0.93            8001  1 UDP 20ms RTT 0.8 Mbps    0            150
2      0.93            4001  16 TCP 20ms RTT            50           150
3      0.93            2001  16 TCP 20ms RTT            100          150
4      0.93            1001  1 UDP 20ms RTT 0.05 Mbps   0            150
Table 3. Simulation Parameters for Scenario 3
Class  ui Utilisation  Wi    Sources               Start (sec)  Stop (sec)
1      0.8, 0.85       2001  50-450 TCP 40ms RTT   0            450
2      0.95            2001  8 TCP 40ms RTT        0            450
TCP is used to approximate a real-time adaptive multi-rate source [11] [12] [13]. Audio and video protocols are typically based on UDP, RTP and RTCP. Recent real-time multimedia protocols respond to loss by adjusting their rate, and are thus in principle similar to TCP [11] [13]. Although their transient behaviour, and amount of
response to loss, is different than TCP's, any real-time protocol that seeks to take advantage of available capacity on a best effort network must in principle be congestion controlled. Unless the real-time source increases its rate when there is available capacity, and decreases it when capacity decreases, the quality of transmission is suboptimal. Many existing CODECs are designed for varying channel conditions, such as a best effort network. For instance, the G.723.1 audio speech codec adjusts its output rate, and adapts to the available bandwidth. Similarly, MPEG-4 includes extensive support for multi-layered, multi-rate video. RTP communicates the amount of packets lost, which allows the sender to adapt its rate to the channel. At a bottleneck link, adaptive multimedia sources are like saturated sources, such as an FTP transfer, as the source always has more video or audio information that it could possibly send to improve quality. In the simulation we measure the amount of packets, in Mbps, which are delivered with less than 50 ms queuing delay in the Diffserv queue. Packets served late, >50 ms, no longer contain useful information to a real-time application and do not contribute to the Mbps. Since real-time sources do not retransmit packets, the TCP packet retransmissions are considered as new packets in the simulation. The results, in Fig. 9, show how, for a variety of settings and traffic loads, RB control effectively delivers more useful data. As discussed previously, the problem with BB schemes such as WRED is that the backlog must be positive for the source rate to be controlled. In this trial, the maximum queue size for WRED was reduced from 20 to 10 and then to 5. Reducing the maximum queue size gave diminishing returns since the utilisation was significantly lowered. On the other hand, increasing the queue size resulted in a higher average backlog, which delayed more traffic beyond the 50 ms requirement. Also, as evident in Fig. 9, unlike RB control, the optimal setting of parameters for WRED varied widely with the traffic load. RB control was able to deliver more data within the delay specification, since it was able to control the arrival rate to some specified fraction below the service capacity, leaving spare capacity for the bursts in the traffic.
3.3 UDP: Throw Away – No Delay
The results presented so far focused on the possible disruptive effect of UDP traffic on TCP traffic, and on the interaction between TCP traffic in different classes. An important issue is the performance of non-congestion controlled UDP traffic. UDP is typically used for real-time services with an upper bound delay requirement. If such traffic receives enough capacity, both BB and RB schemes function identically. However, when the amount of non-congestion controlled UDP traffic exceeds the capacity, BB schemes such as WRED will increase backlog and delay, whereas RB control will prevent excessive delay by increasing the dropping rate. This means that, instead of being excessively delayed, packets are discarded. Therefore, in a congestion situation, the portion of packets which are transmitted still meet the delay requirements. The portion which are discarded would likely not have been able to be served within the delay requirement anyway. With WRED, in a congestion situation, the delay performance of all packets suffers.
4 Conclusion

We have presented a technique for applying rate-based active queue management to a class-based scheduling algorithm. The method presented is scalable and low in computational complexity. It forms a solid architecture for DiffServ implementation in routers and switches and has been shown to outperform the current WRED with WFQ architecture. Furthermore, this work will enable the wide body of research into rate-based congestion control schemes to be applied to improving the performance of DiffServ.
References

1. Cisco Systems Document, "Class-Based Weighted Fair Queueing".
2. Cisco Systems Document, "Low Latency Queueing", http://www.cisco.com/warp/public/732/Tech/qos/techdoc/diffserv.shtml
3. S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance", IEEE/ACM Transactions on Networking, 1(4):397-413, August 1993.
4. Wu-chang Feng, Dilip D. Kandlur, Debanjan Saha, Kang G. Shin, "Blue: A new class of active queue management". Department of EECS, University of Michigan.
5. S. H. Low and D. E. Lapsley, "Optimization Flow Control, I: Basic Algorithm and Convergence", IEEE/ACM Transactions on Networking, vol. 7, no. 6, pp. 861-875, Dec. 1999.
6. Internet Engineering Task Force (IETF), "Recommendations on Queue Management and Congestion Avoidance in the Internet", RFC 2309.
7. F. P. Kelly, A. K. Maulloo and D. K. H. Tan (Statistical Laboratory, University of Cambridge), "Rate control in communication networks: shadow prices, proportional fairness and stability", Journal of the Operational Research Society, vol. 49, pp. 237-252, 1998.
8. B. Wydrowski and M. Zukerman, "GREEN: An Active Queue Management Algorithm", 2001, (submitted for publication, available: http://www.ee.mu.oz.au/pgrad/bpw).
9. The Network Simulator - ns-2 homepage: http://www.isi.edu/nsnam/ns/
10. F. Paganini, J. C. Doyle and S. H. Low, "Scalable Laws for Stable Network Congestion Control", submitted to CDC01, March 2, 2001.
11. J. Padhye, J. Kurose, D. Towsley, and R. Koodli, "A model based TCP-friendly rate control protocol", in Proc. International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV), Basking Ridge, NJ, June 1999.
12. I. Busse, B. Deffner, and H. Schulzrinne, "Dynamic QoS control of multimedia applications based on RTP", Computer Communications, Jan. 1996.
13. R. Rejaie, D. Estrin, and M. Handley, "Quality Adaptation for Congestion Controlled Video Playback over the Internet", Proc. of ACM SIGCOMM '99, Cambridge, Sept. 1999.
14. Anupama Sundaresan, Gowri Dhandapani, "Diffspec - A Differentiated Services tool", The University of Kansas, Lawrence, KS 66045-2228, December 19, 1999. http://qos.ittc.ukans.edu/DiffSpec/diffspec.html
Session-Aware Popularity Resource Allocation for Assured Differentiated Services

Paulo Mendes¹,², Henning Schulzrinne¹, and Edmundo Monteiro²

¹ Department of Computer Science, Columbia University, New York, NY 10027, USA, {mendes,schulzrinne}@cs.columbia.edu
² CISUC, Department of Informatics Engineering, University of Coimbra, 3030 Coimbra, Portugal, {pmendes,edmundo}@dei.uc.pt
Abstract. Differentiated Service networks (DS) are fair in the sense that different types of traffic can be associated with different network services, and so with different quality levels. However, fairness among flows sharing the same service may not be provided. Our goal is to study fairness between multirate multimedia sessions for an assured DS service, in a multicast network environment. To achieve this goal, we present a fairness mechanism called Session-Aware Popularity Resource Allocation (SAPRA), which allocates resources to multirate sessions based upon their number of receivers. Simulation results in a multirate and multi-receiver scenario show that SAPRA maximizes the utilization of bandwidth and maximizes the number of receivers with high-quality reception. Keywords: fairness, multimedia sessions, multicast, differentiated networks, multirate sources.
1 Introduction
Almost all multimedia applications in the Internet use unirate sources, generating flows with rates that don't change over time. For example, the SureStream technology from RealNetworks allows streams to be broadcast at multiple rates by creating unirate copies of each stream. This approach leads to bandwidth waste in heterogeneous environments, such as the Internet, because sources broadcast copies of the same stream in order to satisfy receivers with different quality requirements. This can be solved by replacing unirate sources with multirate ones. Multirate sources [8,19] divide streams into cumulative layers. Each layer has a different rate and importance, and the stream rate is equal to the sum of all its layers' rates. This approach avoids waste of bandwidth, since sources broadcast only one stream to all receivers, sending each stream's layer to a different multicast group. Receivers join as many multicast groups as their connection speed
This work is supported by POSI - Programa Operacional Sociedade de Informação of the Portuguese Fundação para a Ciência e Tecnologia and European Union FEDER.
allows them [14], starting with the most important layer. We use the designation session for the group of all layers belonging to the same stream. Due to their real-time characteristics, multimedia sessions need quality guarantees from the network. These guarantees can be provided by the DS model [2], which allows network providers to aggregate traffic in different services at the boundaries of their network. Each service is based upon a per-hop behavior (PHB), which characterizes the allocation of resources needed to give an observable forwarding behavior (loss, delay, jitter) to the aggregate traffic. One important question about Assured Forwarding (AF) [5] services concerns their capability to be fair. AF services provide intra-session fairness, between receivers in the same session, since each session's layer can be mapped to a different drop precedence, considering its importance. However, how to achieve inter-session fairness in AF services, allowing receivers from all sessions to get their required quality level without wasting resources, is still a challenging research topic. The goal of our work is to contribute to the study of inter-session fairness between sessions in AF services, keeping the intra-session fairness property. To achieve this goal, we propose the enhancement of AF services with a Session-Aware Popularity Resource Allocation fairness mechanism (SAPRA), which provides inter-session fairness by assigning more service bandwidth to sessions with a higher number of receivers. SAPRA is a session-based mechanism and not merely multicast-based, since hiding session information from DS routers results in intra-session unfairness, higher quality oscillations and lower quality for all receivers. SAPRA also includes a resource utilization maximization function, because fairness policies based only upon the number of receivers could still lead to waste of resources. This can occur when the bandwidth assigned to a session is higher than the rate really used by that session, as might happen with mobile phone or personal digital assistant (PDA) sessions, since they have low rate requirements and normally a high number of receivers. SAPRA also detects and punishes high-rate sessions in times of congestion, as an incentive for sessions to adapt to the network capacity. We present ns (http://www.isi.edu/nsnam/ns/) simulations that evaluate SAPRA behavior in a multirate multi-receiver environment using a simple dropper, which we call SAPRAD, and using RIO, the dropper normally used in AF. The remainder of the paper is organized as follows. In section 2, we present a brief description of some fairness definitions and some multirate source implementations. Section 3 describes SAPRA functionality and section 4 presents simulation results. Finally, section 5 presents some conclusions and future work.
2 Related Work
There are several experimental multirate codecs, such as the Scalable Arithmetic Video Codec from the University of Berkeley (experimental software at http://www-video.eecs.berkeley.edu), developed by D. Taubman [19],
or the Scalable Video Conferencing project from the Framkom Research Corporation (project page: http://mbc.framkom.se/projects/scale/) [8]. To fairly distribute AF resources between multirate traffic generated by these codecs, the max-min fairness definition [1] could be used, since its formal definition is a well-accepted criterion for fairness and its multicast definition [20] was extended to include multirate sessions [17]. However, Rubenstein et al. [17] show that max-min fairness cannot be provided in the presence of a discrete set of rates, as is the case with multirate sources. The maximal fairness definition presented by Sankar et al. [18] exists in the presence of a discrete set of rates, but it doesn't consider the number of receivers in each session. Therefore, maximal fairness can't maximize resource utilization and at the same time maximize the number of receivers with a good quality level. Legout et al. present a proposal [11] to distribute bandwidth between sessions considering their number of receivers. However, this proposal assumes that every router in the path between the session's sender and its receivers keeps information about the session's layers and the receivers receiving those layers. This proposal also doesn't maximize the utilization of resources and doesn't punish high-rate flows. Li et al. present [12] another proposal to improve inter-session fairness based upon the max-min fairness definition. Besides the max-min limitation with discrete multirate sessions, this proposal considers only one shared link and doesn't consider the number of receivers and layers' importance of a session.
3 SAPRA Fairness Mechanism
In this section, we introduce the Session-Aware Popularity Resource Allocation fairness mechanism (SAPRA), which is implemented only in DS-edge routers. We assume that each possible multicast branch point is located only in DS-edge routers and that several multimedia applications can share the same host. We call each application a source and each host a sender. Since sources are multirate, they generate multimedia sessions with several layers, each layer identified by a Source-Specific Multicast (SSM) channel [6] - sender IP address and destination multicast group. Each receiver can join more than one session at the same time, even if those sessions belong to the same sender. To join a session, receivers start by joining the SSM channel of the most important layer. They can try to increase their reception quality by joining more layers, always starting from the most important one. They can also get information about sessions using, for example, the Session Announcement Protocol (SAP) [4]. The number of receivers in each session corresponds to the number of receivers of the most important layer. Implementing a fairness mechanism in DS-edge routers that only have information about multicast groups and not about sessions results in intra-session unfairness, higher quality oscillations and lower quality for all receivers. Fig. 1 shows the difference between a scenario where routers have information only about multicast groups and a scenario where routers have knowledge about sessions.
We assume that the session-based scenario has two sessions (S1 and S2) sharing a link with 1 Mb/s. Each session has 500 receivers, which means that it has 0.5 Mb/s of bandwidth allocated. Session S1 has three layers (l0, l1 and l2) joined by 500, 400 and 300 receivers respectively, and session S2 has two layers (l0 and l1) joined by 500 and 400 receivers respectively. In the multicast-based scenario all layers are considered as independent multicast groups (flows f1 to f3 are the layers of S1 and flows f4 and f5 are the layers of S2), which means that the total number of receivers sharing the link is 2100. Therefore flows f1 and f4 have an allocated bandwidth of 0.24 Mb/s each, f2 and f5 of 0.19 Mb/s each and f3 of 0.14 Mb/s. Considering for example session S1, Fig. 1 shows that the 100 receivers of S1 that only join l0 have the same reception rate (0.1 Mb/s) and zero loss in both scenarios, since the rate is lower than the fair rate. However, the 100 receivers that join l0 and l1 have a reception rate of 0.29 Mb/s and 5% loss in the multicast-based scenario, and a rate of 0.3 Mb/s and zero loss in the session-based scenario. The situation becomes worse for the 300 receivers that join all three layers, since they have a reception rate of 0.43 Mb/s and 58% losses in the multicast-based scenario and a rate of 0.5 Mb/s and 16% losses in the session-based scenario. This shows that receivers have a lower rate and a higher loss percentage in a multicast-based scenario than in a session-based one. The multicast-based scenario is also not intra-session fair, because AF drop precedences don't respect layers' importance. It also presents higher quality oscillation, since receivers detect losses not only in the least important layer, but also in intermediary ones.

Fig. 1. SAPRA scenarios (bar charts of the rates, in Mb/s, allocated to flows f1-f5 in the multicast-based scenario and to sessions S1 and S2 in the session-based scenario, annotated with per-layer receiver counts)
We propose two methods to implement SAPRA as a session-based mechanism in DS-edge routers. In the first method, each sender allocates consecutive multicast addresses to all layers inside a session and keeps one address gap between sessions. With SSM this method doesn't create any address allocation problem, since each source is responsible for resolving address collisions between all the channels (232/8 addresses) it creates. In this scenario each sender manages 2^24 addresses in IPv4 and 2^32 per scope in IPv6. With this method, DS-edge routers identify as belonging to the same session all layers that receivers join with consecutive SSM channels. The second proposed method is to change the way IGMPv3 [7] is used. The auxiliary data field of IGMPv3 reports can be used to include the multicast address of the most important layer - which identifies the session - in reports about other layers. In this way, DS-edge routers know explicitly which session each layer belongs to. Routers that don't implement SAPRA ignore the
auxiliary data field, as is done in the current IGMPv3 implementations. In both proposed methods, routers learn the relationship between layers in a session from the order in which receivers join the sessions' layers. Receivers are motivated to join layers from the most important to the least important one, because less important layers are useless without the most important ones in the reconstruction of the session's multimedia stream. We assume that TCP and UDP traffic use different AF services. However, unicast and multicast flows can share the same AF service; in this case SAPRA treats unicast flows as sessions with one layer and one receiver only. All layers of the same session use the same AF service, being however marked with different drop precedences. SAPRA only uses two drop precedences, IN and OUT, from the three allowed by AF services. We also assume that sources mark all their traffic as IN. SAPRA has two components, an agent and a marker. Each DS-edge router has only one SAPRA agent and one marker for each downstream link. SAPRA agents exchange control information periodically with their neighbours. This information includes an update message, sent to upstream neighbours, with the number of receivers and fair rate of each session that presents changes in those values since the last time an update message was sent. This reduction of the update message size, and the fact that agents don't need global network knowledge, increases SAPRA scalability. The information in update messages is used by agents to compute sessions' fair rates. Control information also includes a sync message sent to downstream neighbours. This message, which contains the lowest fair rate that each session has in the path from the source, can be used by quality-adaptive mechanisms in the receivers. A brief description of the protocol used to exchange update and sync messages is presented in [16] and its performance study will be presented in a future paper. Next, we describe the SAPRA agent and marker.
3.1 SAPRA Agent
When a SAPRA agent receives an update message, it updates the local information about the sessions in the message and computes their new fair rates. In DS-edge routers that have local receivers, agents gather the number of receivers from IGMPv3 "State-Changes" reports. Agents have to reserve local resources to store the received and computed information. For each upstream interface, agents reserve four bytes for each session and four bytes for each layer. For the local interface and for each downstream interface, agents reserve twelve bytes for each session and eight bytes for each layer. As an example, consider 1000 sessions, each one with three layers, going through a DS-edge router with three downstream interfaces. Consider also that each session is present in each downstream interface and that the router doesn't have local receivers. In this situation the router reserves 124 KB (1000 x (4 + 3 x 4) bytes upstream plus 3 x 1000 x (12 + 3 x 8) bytes downstream = 124,000 bytes). To compute the fair rate F_ui of a session S_u in a link i, agents use Eq. 1, which defines F_ui as the ratio between the session's number of receivers, n_ui, and the
total number of receivers in that link, considering the AF service capacity C_i (how the AF capacity in a DS-edge router is configured is a DS model implementation concern). In Eq. 1, m_i is the number of sessions that share link i.

F_{ui} = \left( \frac{n_{ui}}{\sum_{x=1}^{m_i} n_{xi}} \right) C_i \qquad (1)
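As an illustration of Eq. 1, a minimal sketch follows; the input values are illustrative, not taken from the paper's simulations.

# Eq. 1: a session's fair rate on link i is its receiver count divided by
# the total receiver count on the link, times the AF capacity C_i.

def fair_rates(receivers, capacity):
    """receivers: dict session -> number of receivers n_ui; capacity: C_i."""
    total = sum(receivers.values())
    return {s: (n / total) * capacity for s, n in receivers.items()}

print(fair_rates({"S1": 500, "S2": 500}, capacity=1.0))
# {'S1': 0.5, 'S2': 0.5} -- matches the session-based example of Fig. 1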
All computed fair rates are adjusted considering downstream fair rates. This adjustment is required to maximize the utilization of resources, because sessions whose fair rate is higher than their downstream fair rate waste resources, since packets are dropped downstream. Therefore, if a session has a computed fair rate, F_ui, higher than its downstream fair rate, F_uj (j is a link downstream of i), F_ui becomes equal to F_uj and the rate difference, F_ui − F_uj, is added to the available shared bandwidth in the link, w_i. The available shared bandwidth allows a fair rate increase for sessions that have a fair rate lower than their downstream fair rate. If the difference F_uj − F_ui is lower than w_i, then F_ui becomes equal to F_uj and w_i is reduced by that difference. However, if F_uj − F_ui is higher than w_i, then F_ui is only increased by w_i, and w_i becomes zero. The available shared bandwidth is used by all sessions, starting with those with the highest number of receivers. This maximizes the utilization of resources, increasing the number of receivers with good quality. The agents' functionality can be described by a fairness definition, which can be stated as: Consider that F_ui and F_uj are the fair rates of a session S_u in a link i and in a link j downstream of i, respectively. A fair rate allocation vector V_i^1(F_{1i}^1, ..., F_{m_i i}^1) in a link i is said to be SAPRA-fairer if, for any alternative feasible fair rate allocation vector V_i^2(F_{1i}^2, ..., F_{m_i i}^2) (a feasible vector means that the sum of all fair rates is equal to or lower than the AF capacity):

∀u ∈ [1, m_i]: F_{ui}^2 > F_{ui}^1 ∧ F_{ui}^2 ≤ F_{uj}^2 ⇒ ∃v ∈ [1, m_i]: F_{vi}^2 < F_{vi}^1 ∧ F_{vi}^1 ≤ F_{vj}^1 \qquad (2)
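The downstream adjustment procedure described above can be sketched as follows; the data structures are illustrative and this is not the authors' implementation.

# Release excess fair rate into the shared bandwidth w_i, then let sessions
# whose downstream fair rate is higher claim from w_i, most popular first.

def adjust_fair_rates(fair, downstream, receivers):
    adjusted = dict(fair)
    w = 0.0  # available shared bandwidth w_i on link i
    for s in adjusted:  # release bandwidth that would be dropped downstream
        if adjusted[s] > downstream[s]:
            w += adjusted[s] - downstream[s]
            adjusted[s] = downstream[s]
    # claim shared bandwidth, sessions with the most receivers first
    for s in sorted(adjusted, key=lambda s: receivers[s], reverse=True):
        gap = downstream[s] - adjusted[s]
        if gap > 0 and w > 0:
            grant = min(gap, w)
            adjusted[s] += grant
            w -= grant
    return adjusted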
After being adjusted, the sessions' fair rates are passed by the agent to each SAPRA marker present on the downstream links.
3.2 SAPRA Marker
Fig. 2. SAPRA marker (the classifier feeds the EF, best-effort and AF services; within AF, fair rates from the SAPRA agent drive a meter, the marker, a congestion test with high-rate session identification and a filter, followed by the dropper, a FIFO queue and the scheduler)

Fig. 2 shows the SAPRA marker - the shadowed component - which replaces the usual marker in AF services. This enhances the AF service with the capability to fairly distribute resources between sessions, based upon the fair rates computed by the SAPRA agent and the layers' average rates. The marker needs to know the arrival rate of each layer. The easiest way to achieve this would be to obtain that information directly from the sources. However, sources could indicate a lower rate than they actually have, trying to get a higher percentage of IN packets. Therefore a meter, included in the DS model
as shown in Fig. 2, is used to estimate average rates, keeping the fairness mechanism independent of the sources. With the information from the agent and the meter, the marker marks layer traffic as IN or OUT. Only packets that arrive already marked IN will be remarked, since OUT packets are not compliant with upstream fair rates. All incoming IN packets are marked IN or OUT as follows. Considering that l_0 is the most important layer and l_n the least important one of a session S_u, Eq. 3 and Eq. 4 give the probabilities that a layer l_k of that session has to be marked IN, P_{uki}^{in}, and OUT, P_{uki}^{out}, in a link i. With this marking strategy there is also a differentiation between sessions that have traffic marked OUT, since sessions with higher rates will have more packets marked OUT.

P_{uki}^{in} = \begin{cases} 1, & L_{uki} \le F_{ui} \\ \frac{M_{uki} - L_{u(k-1)i}}{r_{uki}^{in}}, & L_{uki} > F_{ui} \end{cases} \qquad (3)

P_{uki}^{out} = \begin{cases} 0, & L_{uki} \le F_{ui} \\ \frac{L_{uki} - M_{uki}}{r_{uki}^{in}}, & L_{uki} > F_{ui} \end{cases} \qquad (4)
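A minimal sketch of Eqs. 3 and 4 follows, using the quantities L_uki and M_uki defined in the next paragraph; the input rates are illustrative.

# Per-layer marking probabilities: a layer whose cumulative IN rate stays
# within the fair rate is fully IN; otherwise it is split around the
# boundary M_uki = max(F_ui, L_u(k-1)i).

def marking_probabilities(fair_rate, in_rates):
    """in_rates: IN rate r_uki of each layer, l_0 (most important) first.
    Returns a list of (P_in, P_out) per layer."""
    probs, cumulative = [], 0.0           # cumulative = L_u(k-1)i
    for r in in_rates:
        l_uki = cumulative + r            # L_uki
        m_uki = max(fair_rate, cumulative)
        if l_uki <= fair_rate:
            probs.append((1.0, 0.0))      # first branch of Eq. 3 / Eq. 4
        else:
            probs.append(((m_uki - cumulative) / r, (l_uki - m_uki) / r))
        cumulative = l_uki
    return probs

print(marking_probabilities(0.5, [0.1, 0.2, 0.4]))
# l_0, l_1 fully IN; l_2 straddles the fair rate: (0.5, 0.5)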
In Eq. 3 and Eq. 4, session S_u has the following values in link i: fair rate F_{ui}; rate of IN packets of its layer l_k, r_{uki}^{in}; sum of all IN rates from layer l_0 to layer l_k, L_{uki} = \sum_{x=0}^{k} r_{uxi}^{in}; maximum value between the session's fair rate and the sum of all layers' rates from l_0 to l_{k-1}, M_{uki} = \max(F_{ui}, L_{u(k-1)i}). When the meter detects that the link is congested, the marker filters all layers from sessions with a rate higher than their fair rate plus their share of the available bandwidth, before sending the marked packets to the DS dropper. We define these sessions as high-rate sessions. The strategy to identify and punish high-rate sessions is based upon the Random Early Detection with Preferential Dropping mechanism (RED-PD) [13]. However, contrary to RED-PD, SAPRA uses fixed-length intervals in congested periods to identify high-rate sessions and doesn't need to maintain a list of all layers that suffer drops in each interval. This simplifies the mechanism, avoiding the estimation of the recent average packet drop rate used by RED-PD to compute its variable interval length. In each identification interval, SAPRA starts by verifying which sessions have a total (IN and OUT packets) rate, r_{ui}, higher than their fair rate. Session S_u's total rate in a link i is given by r_{ui} = \sum_{x=0}^{n-1} r_{uxi}, considering that each layer l_k from l_0 to l_{n-1} has rate r_{uki}. A session with a rate lower than its fair rate isn't using all its share of the link bandwidth, so the unused bandwidth becomes available for other sessions' OUT packets. Fig. 3 shows that SAPRA distributes this available bandwidth in equal shares between all sessions with a rate higher than their fair rate and identifies which of these sessions are high-rate sessions. To punish each high-rate session S_u in a link i, SAPRA computes its dropping probability in each identification interval t, D_{ui}(t), using Eq. 5, where z_i is the available bandwidth in link i.
zi Fui + m 1 i )) ∗ (1 − 100 rui
(5)
Eq. 5 shows that in each interval the dropping probability of high-rate sessions is increased by two values: a drop factor, σ_d, and a value proportional to the excess rate the session is using. This excess rate corresponds to the difference between the session rate and the sum of the session fair rate and its share of the available bandwidth, as shown in Fig. 3.

Fig. 3. Punishment mechanism (sessions S1 and S2, with rate below their fair rate, release available bandwidth; sessions S3 and S4, with rate above their fair rate, receive equal shares of it, and a session exceeding its fair rate plus that share is a high-rate session whose excess packets acquire a dropping probability)

The dropping probability of each session is used to compute dropping probabilities for its layers, the least important layer being the first to suffer an increase in dropping, because losses induce a higher quality degradation if they happen in more important layers [9]. Since hierarchical codecs are tolerant to losses in less important layers, SAPRA computes layers' dropping probabilities with a linear quality degradation. However, SAPRA can be configured to be more aggressive, dropping all layers that have a dropping probability higher than a predefined limit Θ_d. If in an identification interval a session is no longer identified, its dropping probability is halved until it reaches a minimum value, after which the session stops being filtered. Packets that aren't dropped by the filter are sent to the DS dropper, as happens with all packets from non-identified sessions. In the DS model the dropper is managed with RIO (RED with in/out) [3]. However, RIO introduces some complexity, since it needs to compute the total average queue size and the average queue size of IN packets, it has different dropping schemes (random, front and tail), and its four thresholds can introduce oscillations. Therefore, we show that SAPRA behaves similarly with RIO and with a simpler dropper, which we named SAPRAD (SAPRA Dropper). SAPRAD manages a FIFO queue, preferentially dropping OUT packets: when the queue is full, an OUT packet is randomly discarded; only if the queue doesn't hold any OUT packets is an IN packet randomly discarded. This guarantees that layers with higher rates are more severely punished.
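A minimal sketch of the SAPRAD dropper just described follows; the packet representation is illustrative.

# FIFO queue that, when full, randomly discards an OUT packet, and only
# discards an IN packet if no OUT packet is queued.

import random

class SapraDropper:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.queue = []  # list of (packet_id, mark) with mark 'IN' or 'OUT'

    def enqueue(self, packet_id, mark):
        if len(self.queue) >= self.capacity:
            out_positions = [i for i, (_, m) in enumerate(self.queue) if m == "OUT"]
            if out_positions:
                self.queue.pop(random.choice(out_positions))     # random OUT drop
            else:
                self.queue.pop(random.randrange(len(self.queue)))  # random IN drop
        self.queue.append((packet_id, mark))

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None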
4 SAPRA Simulations
In this section we present simulations that aim to analyse the ability of SAPRA to distribute AF bandwidth between multirate sessions with different numbers of receivers, considering the number of receivers and the relationship between layers inside each session. We use a scenario - Fig. 4 - with three DS-edge routers and two congested links. The upstream link is configured with 10 Mb/s and the downstream link with 5 Mb/s of bandwidth. The queue on each link has a size of 64 packets - the default value in Cisco IOS 12.2 - and each packet has a size of 1000
bytes. We analyse SAPRA's behavior in the presence of two types of droppers: RIO, the dropper normally used in AF, and SAPRAD. In these simulations we use 11 sessions, S1 to S11, each one with three layers, l0, l1 and l2, l0 being the most important. Each layer is identified by an SSM channel and each session has a different number of receivers, from one in S1 to eleven in S11, increasing by one receiver per session. Although SAPRA can deal with any number of layers, we consider sessions with three layers in the present simulations, since this partitioning provides a good quality/bandwidth trade-off and additional layers only provide marginal improvements [9].

Fig. 4. Simulation scenario (sources for sessions S1, with 1 receiver, through S11, with 11 receivers, each sending layers l0-l2 through three DS-edge routers across a 10 Mb/s upstream link and a 5 Mb/s downstream link)

We performed sixty-second simulations with sources that have increasing rates, from S1 to S11, in multiples of 25 Kb/s from session to session, starting with 25 Kb/s for S1. The session rate is the rate of its most important layer, l0, and each layer lk has a rate equal to twice the rate of lk−1. The dropping probability of all sessions is computed using Eq. 5, with σd equal to 0.5%, and the dropper used is SAPRAD. Fig. 5 shows that sessions' fair rates are proportional to the sessions' numbers of receivers, since SAPRA distributes resources considering the number of receivers in each session. It also shows that, in the upstream link, sessions use fair rates lower than the computed ones. This happens because SAPRA adjusts the upstream link's computed fair rates, since they are higher than the downstream link's ones. Another conclusion is that SAPRA respects the layers' relationship, since packet dropping always starts with l2, which can be clearly seen in Fig. 5 (right), where the rate of l2 is reduced to a minimum value.

Fig. 5. Sessions' fair rates in upstream (left) and downstream (right) links (bar charts of computed fair rate, used fair rate and per-layer bandwidth, in Mb/s, for sessions 1 to 11)
In Fig. 5, sessions with a higher rate suffer higher drop rates. For example, in the upstream link, S11's incoming rate is 1925 Kb/s and it has a loss rate of 31.35%, while S10 has an incoming rate of 1213 Kb/s and 30.72% loss. Fig. 5 also shows that in the upstream link all sessions are identified as high-rate sessions, since their incoming rates are higher than their fair rates and therefore the available bandwidth is zero.
The filter aggressiveness can be configured by changing σd and the dropping probability limit Θd. To better show the punishment effect, we used an equal number of receivers per session, i.e., all sessions have the same fair rate. Fig. 6 shows results for the upstream link using a σd value of 5%: in the left figure, Θd isn't defined, so layers suffer a linear dropping increase; in the right one, Θd is equal to 50%, beyond which layers are completely dropped.
Fig. 6. Punishment mechanism with σd value of 5%
Fig. 6 shows that all layers in S1 and S2 have a null dropping probability. The same happens for l0 in every session. As for l2 - Fig. 6 (right) - it has a dropping probability of 22% in S3 and 61% in S4, growing to 100% from S7 to S11, which means that it is completely dropped from S4 to S11. Nevertheless, l2 doesn't have a null rate in these sessions. This is due to the time gap between the beginning of the simulation and the moment agents receive the first update message, during which agents don't have any information about sessions and are unable to differentiate them. Consequently all layers have the same dropping probability, making it possible for receivers to get a certain number of packets from all layers. SAPRA has similar results for the downstream link. These results can be found in [15].

To compare SAPRAD behavior against RIO, we again used a sixty-second simulation, but with sessions of equal rate and one receiver only. SAPRA uses a value of 0.5% for σd and Θd isn't defined. RIO's minimum and maximum thresholds have the following values: IN min of 60 packets, IN max of 64 packets, IN drop of 0.5%, OUT min of 32 packets, OUT max of 48 packets and OUT drop of 50%. With these values the dropping of OUT packets is higher than that of IN packets for the 64-packet queue, which approximates the behavior of the SAPRAD dropper.

Fig. 7. SAPRAD and RIO (used fair rate per session for SAPRA with SAPRAD and for RIO with random, tail and front dropping)
Fig. 7 shows, for the upstream link, that SAPRAD and RIO behavior is similar, with the session rate being closer to its fair rate in the link when SAPRAD is used. These results show that an Internet Service Provider (ISP) that uses RIO can still use SAPRA to distribute AF resources between multimedia sessions. However, implementing SAPRA with SAPRAD instead of RIO reduces the mechanism's complexity. To find the best RIO configuration, we ran several simulations varying the values of the IN and OUT thresholds. These simulations show that SAPRA behavior with RIO varies strongly between different RIO configurations. Detailed results can be found in [15].
5 Conclusion and Future Work
This paper describes and evaluates SAPRA, whose components are installed only in DS-edge routers, computing sessions' fair rates based upon the SAPRA fairness definition. SAPRA enhances DS functionality by fairly distributing bandwidth and by punishing high-rate sessions. SAPRA distributes bandwidth between sessions considering their number of receivers, which increases receivers' motivation to use multicast, since they will experience higher quality than unicast receivers. SAPRA also increases providers' motivation to use multicast: ISPs can serve more clients using fewer resources, and multimedia providers can deploy new services that scale to large numbers of receivers. However, SAPRA doesn't attempt to be the optimal fairness mechanism, because social and economic issues can influence fairness as much as technical ones. But being based upon sessions' numbers of receivers and a maximal resource utilization function, SAPRA can be the base of a hierarchical fairness mechanism for multirate multicast sessions. To evaluate SAPRA behavior, we presented simulations with two congested links that showed its performance with a simple dropper, SAPRAD, and also with RIO. Simulations showed that SAPRA maximizes the utilization of bandwidth and the number of receivers with high-quality reception. As future work we'll simulate SAPRA in more complex scenarios in order to analyze the SAPRA protocol's oscillations with variations in the number of receivers. We'll also create a receiver-driven adaptive mechanism that will use SAPRA network support, mainly the fair rates collected in sync messages, trying to solve some of the problems presented by other adaptive mechanisms such as RLM [14] and RLC [21]. Legout et al. [10] show that RLM presents inter-session unfairness and has low convergence time and low link utilization, while RLC is unfair to TCP for large packets and its bandwidth inference mechanism is very sensitive to queue size. Also, both mechanisms can induce losses in all layers when a join experiment occurs. This can be avoided if the adaptive mechanism is based upon SAPRA, since it guarantees intra-session fairness.
References

1. D. Bertsekas and R. Gallager. "Data Networks". Prentice-Hall, 1987.
2. S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. "An architecture for differentiated service". Request for Comments 2475, Internet Engineering Task Force, December 1998.
3. D. Clark and W. Fang. "Explicit allocation of best-effort packet delivery service". Journal of IEEE/ACM Transactions on Networking, 6(4):362-373, August 1998.
4. M. Handley, C. Perkins, and E. Whelan. "Session announcement protocol". Request for Comments 2974, Internet Engineering Task Force, October 2000.
5. J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski. "Assured forwarding PHB group". Request for Comments 2597, Internet Engineering Task Force, June 1999.
6. H. Holbrook and B. Cain. "Source-specific multicast for IP". Internet draft, Internet Engineering Task Force, March 2001.
7. H. Holbrook and B. Cain. "Using IGMPv3 for source-specific multicast". Internet draft, Internet Engineering Task Force, March 2001.
8. M. Johanson. "Scalable video conferencing using subband transform coding". In Proc. of ICSPAT'99, Orlando, FL, USA, November 1999.
9. J. Kimura, F. Tobagi, J. Pulido, and P. Emstad. "Perceived quality and bandwidth characterization of layered MPEG-2 video encoding". In Proc. of the SPIE International Symposium on Voice, Video and Data Communications, Boston, MA, USA, September 1999.
10. A. Legout and E. W. Biersack. "Pathological behaviors for RLM and RLC". In Proc. of NOSSDAV'00, Chapel Hill, NC, USA, June 2000.
11. A. Legout, J. Nonnenmacher, and E. W. Biersack. "Bandwidth allocation policies for unicast and multicast flows". In Proc. of IEEE INFOCOM'99, New York, NY, USA, March 1999.
12. X. Li, S. Paul, and M. Ammar. "Multi-session rate control for layered video multicast". In Proc. of Multimedia Computing and Networking, San Jose, CA, USA, January 1999.
13. R. Mahajan and S. Floyd. "Controlling high-bandwidth flows at the congested router". TR-01-001, ICSI, April 2001.
14. S. McCanne, V. Jacobson, and M. Vetterli. "Receiver-driven layered multicast". In Proc. of ACM SIGCOMM'96, pages 117-130, Palo Alto, CA, USA, August 1996.
15. P. Mendes. "Session-aware popularity resource allocation". http://www.cs.columbia.edu/~mendes/sapra.html.
16. P. Mendes, H. Schulzrinne, and E. Monteiro. "Multi-layer utilization maximal fairness for multi-rate multimedia sessions". CUCS-007-01, Columbia University, July 2001.
17. D. Rubenstein, J. Kurose, and D. Towsley. "The impact of multicast layering on network fairness". In Proc. of ACM SIGCOMM'99, Cambridge, MA, USA, September 1999.
18. S. Sankar and L. Tassiulas. "Fair allocation of discrete bandwidth layers in multicast networks". In Proc. of IEEE INFOCOM'00, Tel Aviv, Israel, March 2000.
19. D. Taubman and A. Zakhor. "Multirate 3-D subband coding of video". Journal of IEEE Transactions on Image Processing, 3(5):572-588, September 1994.
20. H. Tzeng and K. Siu. "On max-min fair congestion control for multicast ABR service in ATM". IEEE Journal on Selected Areas in Communications, 15:542-556, April 1997.
21. L. Vicisano, L. Rizzo, and J. Crowcroft. "TCP-like congestion control for layered multicast data transfer". In Proc. of IEEE INFOCOM'98, San Francisco, CA, USA, March/April 1998.
Most Probable Path Techniques for Gaussian Queueing Systems

Ilkka Norros

VTT Information Technology, P.O. Box 1202, 02044 VTT, Finland
Abstract. This paper is a review of an approach to queueing systems where the cumulative input is modelled by a general Gaussian process with stationary increments. The examples include priority and Generalized Processor Sharing systems, and a system where service capacity is allocated according to predicted future demand. The basic technical idea is to identify the most probable path in the threshold exceedance event, or a heuristic approximation of it, and then use probability estimates based on this path. The method is particularly useful for long-range dependent traffic and complicated traffic mixes, which are difficult to handle with traditional queueing theory.
1 Introduction
This paper is a review of an approach to queueing systems with Gaussian input. The motivation to study such systems is twofold. On one hand, complicated dependence structures are easiest to study first in a Gaussian framework, where dependence is reduced to correlation. This is also the historical origin of this work: it started with queues with fractional Brownian motion (fBm) as input [19], which is the simplest process that has the self-similarity property, first observed in the famous Bellcore measurements [11]. On the other hand, it could be expected that, thanks to the Central Limit Theorem, traffic in high capacity systems would be rather well modelled with Gaussian processes [1]. Empirical studies indicate, however, that a good fit to the Gaussian distribution may require very high traffic aggregation levels. The Gaussian approach can be useful in making rough performance estimates for Differentiated Services in the Internet, because one works there with large traffic aggregates. Our interest in most probable paths started by applying the generalized Schilder's theorem to the fBm queue [20]. The approach was extended to ordinary queues with general Gaussian input in [2,3], and further to priority queues in [14]. In [13] and [15], we applied similar machinery to Generalized Processor Sharing (GPS) schedulers and presented a somewhat improved version of the priority case. Most of this research was done within the COST Actions 257 and 279. A summary on Gaussian traffic modelling, linked to the technical documents, can be found in the hypertext Final Report of the action [26].
Section 3 presents the main ideas of our approach. A central role is played by the most probable paths along which queue size thresholds are exceeded. The rest is devoted to two cases where the most probable paths take particularly interesting shapes. Section 4 shows how the most probable path can experience a kind of "phase transition" between short and long busy periods. Section 5 studies a simple model of dynamic capacity allocation. This is a new type of application, presented here for the first time.
2 Definition of Gaussian Queueing Systems

2.1 Gaussian Models of Traffic
Our basic traffic model is a continuous Gaussian process A = (A_t)_{t∈ℝ} with stationary increments. For s < t, A_t − A_s represents the amount of traffic in the time interval (s, t], and we set A_0 ≡ 0. A process is called Gaussian if all its finite-dimensional distributions are multivariate Gaussian. The property of stationary increments means that for any t_0 ∈ ℝ, the processes A and (A_{t+t_0} − A_{t_0})_{t∈ℝ} have the same finite-dimensional distributions. We denote A(s, t) = A_t − A_s, and use similar notation for other processes as well. The use of Gaussian models for big traffic aggregates can be justified by the Central Limit Theorem. However, even the question about the Gaussian character of some traffic cannot be raised without specifying the relevant timescale, say δ. There should be a large number of individual sources contributing to the traffic in every time interval of size δ. Moreover, if the marginal distribution of the contribution of an individual source in those intervals has very high variability, the application of the CLT may still be problematic. In our study on Internet users over ISDN [10], it was found that a few Mbit/s of such traffic had a good fit with the Gaussian distribution when the time resolution δ was coarser than 100 ms. Note that this traffic was exceptionally well-behaved, because the users were restricted to the ISDN access speed. An unpleasant special feature of Gaussian models is that there is always a positive probability of negative input. Such input does not correspond to anything real, and its existence destroys some classical arguments of queueing theory. In a Gaussian framework, the non-problematic definitions of queueing theory must be replaced by analogously defined functionals of a Gaussian process. Moreover, we cannot hope to obtain rigorous general results on the distributions of these functionals other than inequalities and limit theorems. At our present state of the art, we must often be satisfied with heuristic approximations. Despite these reservations, Gaussian models are tempting because of their many nice features:

• a Gaussian process with stationary increments is completely characterized by its mean m = E{A_1} and cumulative variance function v(t) = Var(A_t); indeed, we can write A_t = mt + Z_t,
88
I. Norros
where Z is a centered (mean zero) process, and the covariance function of A (and Z) can be written as 1 (v(s) + v(t) − v(s − t)); 2 a superposition of independent Gaussian traffic streams is Gaussian; multiclass traffic consisting of Gaussian traffic classes, such that their joint distribution is Gaussian also, can be studied within the same framework; unlike most other traffic models, At has an explicitly known (Gaussian) distribution for any t; the quantities m and v(t) can be rather well estimated from measurement data; long-range dependence does not provide any extra difficulty. Cov (As , At ) = Cov (Zs , Zt ) =
• • • • •
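These properties also make Gaussian traffic models easy to handle numerically. As a minimal illustration (our sketch, not code from the paper; fBm with $H = 0.75$ is an assumed example), the covariance of $Z$ follows directly from $v$:

```python
import numpy as np

def covariance(v, s, t):
    """Covariance of the centered part Z from the cumulative variance
    function v: Cov(Z_s, Z_t) = (v(s) + v(t) - v(s - t)) / 2."""
    return 0.5 * (v(s) + v(t) - v(s - t))

# Example: fractional Brownian motion has v(t) = sigma2 * |t|^(2H).
H, sigma2 = 0.75, 1.0
v = lambda t: sigma2 * np.abs(t) ** (2 * H)

grid = np.linspace(0.1, 10.0, 50)
Gamma = np.array([[covariance(v, s, t) for t in grid] for s in grid])
# Gamma is positive semi-definite; a path of Z on the grid can be sampled via
# np.random.multivariate_normal(np.zeros(len(grid)), Gamma).
```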
In the multiclass case, let the input traffic consist of $k$ classes, and denote the cumulative arrival process of class $j \in \{1, \ldots, k\}$ by $(A^{\{j\}}_t)_{t \in \mathbb{R}}$. We also denote $A^{\{j\}}(s,t) = A^{\{j\}}_t - A^{\{j\}}_s$. For the superposition of a set of traffic classes $J \subseteq \{1, \ldots, k\}$ we write
$$A^J_t = \sum_{j \in J} A^{\{j\}}_t$$
and use similar superscript notation also for other quantities defined later. We assume that the processes $A^{\{j\}}$ are independent, continuous Gaussian processes with stationary increments and denote
$$A^{\{j\}}_t = m_j t + Z^{\{j\}}_t, \quad m = \sum_{i=1}^k m_i, \quad \mathrm{Var}\big(Z^{\{j\}}_t\big) = v_j(t), \quad \Gamma_j(s,t) = \mathrm{Cov}\big(Z^{\{j\}}_s, Z^{\{j\}}_t\big), \tag{1}$$
where the $Z^{\{j\}}$'s are centered (zero-mean) processes. To exclude certain degenerate cases, we assume that
$$\exists\, \alpha \in (0,2):\ \lim_{t \to \infty} \frac{v_i(t)}{t^\alpha} = 0, \quad i \in \{1, \ldots, k\}. \tag{2}$$
Finally, let us specify the mathematical framework completely. Define a path space $\Omega_1$ as
$$\Omega_1 = \Big\{ \omega : \omega \text{ is continuous } \mathbb{R} \to \mathbb{R},\ \omega(0) = 0,\ \lim_{t \to \pm\infty} \frac{\omega(t)}{1 + |t|} = 0 \Big\}.$$
(The relation $\lim_{t \to \infty} Z^{\{i\}}_t / t = 0$ a.s. is a consequence of (2) — see [3].) Equipped with the norm
$$\|\omega\|_{\Omega_1} = \sup\Big\{ \frac{|\omega(t)|}{1 + |t|} : t \in \mathbb{R} \Big\},$$
$\Omega_1$ is a separable Banach space. We choose $\Omega = \Omega_1^k$ as our basic probability space by letting $P$ be the unique probability measure on the Borel sets of $\Omega$ such that the random variables $Z^{\{i\}}_t(\omega_1, \ldots, \omega_k) = \omega_i(t)$ form independent Gaussian processes with covariance functions $\Gamma_i(\cdot, \cdot)$.
2.2 Definition of Simple Queues
Consider first the case of a simple queue, i.e. $k = 1$, and let the server have a constant capacity $c$. The storage process (queue length process) is then naturally defined as
$$Q_t = \sup_{s \le t} \big( A(s,t) - c(t-s) \big). \tag{3}$$
The process $Q$ is obviously stationary, and a sufficient stability condition is that $m < c$. Because only the net input process $A_t - ct$ matters in this definition, it can be extended to the case where the service process is stochastic as well. Indeed, assume that the cumulative service capacity process $C_t$ is a Gaussian process with stationary increments such that the difference $A_t - C_t$ is also Gaussian with stationary increments and a negative mean rate. The queue length process is then
$$Q_t = \sup_{s \le t} \big( A(s,t) - C(s,t) \big), \tag{4}$$
and all results for simple Gaussian queues are applicable. One example of this is given in Section 5.
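Definition (3) is also directly usable in simulation: in discrete time it reduces to Lindley's recursion. The following sketch is our illustration only — the fGn generator and all parameter values are assumptions, and exact Cholesky sampling is practical only for short traces (see [21] for faster methods).

```python
import numpy as np

def fgn(n, H, dt, sigma2=1.0):
    """Sample n fractional Gaussian noise increments by Cholesky factorization
    of the increment covariance implied by v(t) = sigma2 * |t|^(2H)."""
    v = lambda u: sigma2 * np.abs(u) ** (2 * H)
    k = np.arange(n)[:, None] - np.arange(n)[None, :]
    K = 0.5 * (v((k + 1) * dt) + v((k - 1) * dt) - 2 * v(k * dt))
    return np.linalg.cholesky(K) @ np.random.standard_normal(n)

def stationary_queue(increments, c, dt):
    """Discrete-time version of (3) via Lindley's recursion
    Q_n = max(Q_{n-1} + X_n - c*dt, 0); run long and read off the end."""
    q = 0.0
    for x in increments:
        q = max(q + x - c * dt, 0.0)
    return q

H, m, c, dt, n = 0.75, 1.0, 1.2, 0.1, 2000
arrivals = m * dt + fgn(n, H, dt)
print(stationary_queue(arrivals, c, dt))
```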
2.3 Definitions of GPS and Priority Queues
The Generalized Processor Sharing (GPS) service discipline [23] (an idealized version of Weighted Fair Queueing) is a theoretical model which isolates flows and provides service differentiation. Let us consider a GPS queueing system for our $k$ traffic classes, such that the guaranteed service rate for each class $i$ is $\mu_i c$, where $c > m = \sum_i m_i$, $\mu_i > 0$ for each $i$, and $\sum_i \mu_i = 1$.

It is not at all obvious how a GPS queue should be defined when negative input is allowed. An elegant definition, which results in positive queue length processes even in our case, was given by Massoulié [16]. Assume that the amount of potential service for each class $i$ in the time interval $(s,t)$ is $\mu_i c\, T(s,t)$, where $T(s,t) = T_t - T_s$ and $T$ is a non-decreasing stochastic process with $T_0 \equiv 0$; $T$ varies according to the number of backlogged classes. The queue of class $i$, $Q^{\{i\}}$, and the total queue $Q$ then satisfy
$$Q^{\{i\}}_t = \sup_{s \le t} \big( A^{\{i\}}(s,t) - \mu_i c\, T(s,t) \big), \qquad Q_t = \sup_{s \le t} \Big( \sum_{i=1}^k A^{\{i\}}(s,t) - c(t-s) \Big). \tag{5}$$
Together with the requirement $Q_t = \sum_{i=1}^k Q^{\{i\}}_t$, the equations (5) uniquely define the $k+1$ processes $Q^{\{1\}}, \ldots, Q^{\{k\}}$ and $Q_t$ [16]. The construction works and yields non-negative queues in the Gaussian case also.

Let us then turn to priority queues. Assume that there are $k$ priority classes, numbered in descending priority. There is no distinction between preemptive and non-preemptive priority, because the model is continuous. Since lower class traffic does not disturb upper class traffic, a simple approach is the following:
define $Q^{\{1\}}, Q^{\{1,2\}}, Q^{\{1,2,3\}}$, etc. as ordinary queues with service rate $c$, and then set
$$Q^{\{2\}} = Q^{\{1,2\}} - Q^{\{1\}}, \quad \ldots, \quad Q^{\{k\}} = Q^{\{1,\ldots,k\}} - Q^{\{1,\ldots,k-1\}}. \tag{6}$$
Using this definition with Gaussian traffic has the undesirable effect that it does not yield non-negative queue lengths for classes other than the first one. This has, however, little significance in the cases where Gaussian modeling is adequate, so we prefer using it. (Massoulié's GPS definition does not work, as such at least, with $\mu_2 = 0$, which would correspond to a two-class priority queue. It is shown in [15] how discrete time Gaussian priority queues can be defined in such a way that the individual queues are non-negative and sum up to the total queue; the continuous time case could probably be obtained as a limit when the discretization step goes to zero.)
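Numerically, definition (6) is as easy as it looks; a small sketch of ours (reusing `stationary_queue` from the earlier snippet, with `inc1`, `inc2`, `c`, `dt` standing for sampled class increments and assumed parameters):

```python
# Class 1 alone and the aggregate are ordinary queues at rate c; the class 2
# queue is their difference, as in (6).
q1 = stationary_queue(inc1, c, dt)
q12 = stationary_queue(inc1 + inc2, c, dt)
q2 = q12 - q1   # with Gaussian input this may come out slightly negative
```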
3 Probability Estimates Based on Most Probable Paths

3.1 The Reproducing Kernel Hilbert Space and Large Deviations of Gaussian Processes
For $i = 1, \ldots, k$, the reproducing kernel Hilbert space (RKHS) $R_i$ of the process $Z^{\{i\}}$ is defined as follows (see, e.g., [4]): start with the functions $\Gamma_i(t,\cdot)$, $t \in \mathbb{R}$, define their inner products as
$$\langle \Gamma_i(s,\cdot), \Gamma_i(t,\cdot) \rangle_{R_i} = \Gamma_i(s,t),$$
extend to a linear space (with pointwise operations), and complete the space with respect to the norm $\|f\|_{R_i} = \sqrt{\langle f, f \rangle_{R_i}}$. It is easy to verify that $R_i$ is a linear subspace of $\Omega_1$, and the topology induced by $\|\cdot\|_{R_i}$ is finer than that induced by $\|\cdot\|_{\Omega_1}$.

The RKHS of the multivariate process $(Z^{\{1\}}_t, \ldots, Z^{\{k\}}_t)$ is, by the independence of the $Z^{\{i\}}$'s, $R = R_1 \times \cdots \times R_k$ with the inner product
$$\langle (f_1,\ldots,f_k), (g_1,\ldots,g_k) \rangle_R = \sum_{i=1}^k \langle f_i, g_i \rangle_{R_i}.$$
The reproducing kernel property, which is a straightforward consequence of the definition of the inner products, tells us that within $R$, functions can be evaluated by taking an inner product with a corresponding vector of covariance functions:
$$\langle (f_1,\ldots,f_k), (\Gamma_1(t_1,\cdot), \ldots, \Gamma_k(t_k,\cdot)) \rangle_R = \sum_{i=1}^k f_i(t_i). \tag{7}$$
The above construction can be further extended to the case that the component processes are dependent, as long as all joint distributions are Gaussian. A large deviation principle for Gaussian measures in Banach space is given by the generalized Schilder’s theorem (Bahadur and Zabell [6], see also [5,9]).
Theorem 1. The function $I : \Omega \to \mathbb{R} \cup \{\infty\}$,
$$I(\omega) = \begin{cases} \frac{1}{2}\|\omega\|_R^2, & \text{if } \omega \in R, \\ \infty, & \text{otherwise,} \end{cases}$$
is a good rate function for the centered Gaussian measure $P$, and $P$ satisfies the following large deviation principle:
$$\text{for } F \text{ closed in } \Omega: \quad \limsup_{n \to \infty} \frac{1}{n} \log P\Big( \frac{Z}{\sqrt{n}} \in F \Big) \le -\inf_{\omega \in F} I(\omega);$$
$$\text{for } G \text{ open in } \Omega: \quad \liminf_{n \to \infty} \frac{1}{n} \log P\Big( \frac{Z}{\sqrt{n}} \in G \Big) \ge -\inf_{\omega \in G} I(\omega).$$

Thus, the essential problem is to find a path $\omega$ that minimizes $I(\omega)$ in a given set $B$, or, equivalently, the norm $\|f\|_R$ in the set $B \cap R$. We call it the most probable path in that set. Intuitively, one can think of $e^{-I(\omega)}$ as something like the probability density of our infinite-dimensional Gaussian measure, so that minimizing $I(\omega)$ corresponds to maximizing likelihood. In most cases, the most probable path is unique, but the examples in Section 4 show that non-unique paths may appear and even have interesting meaning as "phase transitions" of the queueing system.

The approach presented here was originally motivated by the generalized Schilder's theorem [20]. However, our main interest is not in large deviations limits but in estimates that are applicable to whole distributions. It was shown in [2], by examples of ordinary queues, that estimates of the type $P(A) \approx \exp(-\inf_{\omega \in A} I(\omega))$ indeed often give a reasonable approximation of the whole queue length distribution, not only of the tail behavior. On the other hand, note that it is problematic even to formulate large deviations limit theorems with Gaussian traffic, because the Gaussian character is already the result of another kind of limit procedure, the Central Limit Theorem.

3.2 Half-Space Approximations
Consider first the case of a simple queue. What can be said about the marginal distribution of $Q_t$? Writing (cf. [17])
$$\{Q_t > x\} = \Big\{ \sup_{s \le t} \frac{Z_t - Z_s}{x + (c-m)(t-s)} > 1 \Big\},$$
we see that this event is in fact of the form $\sup_s Y^{(x,t)}_s > 1$ for the centered Gaussian process $Y^{(x,t)}_s = (Z_t - Z_s)/(x + (c-m)(t-s))$. Thus, we encounter the very classical problem of estimating the distribution of the maximum of a centered Gaussian process. Consider the obvious lower bound
$$P(Q_t > x) \ge \sup_{s \le t} P\big( Y^{(x,t)}_s > 1 \big) = \bar{\Phi}\Big( \frac{x + (c-m)u^*}{\sqrt{v(u^*)}} \Big), \tag{8}$$
where $\bar{\Phi}$ is the residual (complementary) distribution function of the standard normal distribution and $u^* > 0$ minimizes $(x + (c-m)u)^2 / v(u)$ w.r.t. $u$. The value $u^*$ has the important practical meaning of characterizing the relevant timescale of queues of length $x$. Note the geometry of the set $\{Q_t > x\}$: it is the union over $s$ of the sets $\{A(s,t) - c(t-s) > x\}$, which are half-spaces, and thus the complement of a convex set containing the origin. Let $f^*$ be a most probable path in $\{Q_t > x\}$. The following proposition, which we formulate directly in the multiclass case, gives an explicit expression of $f^*$.
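The bound (8) is straightforward to evaluate; the sketch below is our illustration (grid minimization; the fBm variance function is an assumed example).

```python
import numpy as np
from math import erf, sqrt

def phi_bar(y):
    """Residual distribution function of the standard normal."""
    return 0.5 * (1.0 - erf(y / sqrt(2.0)))

def lower_bound(x, c, m, v, u_grid):
    """Lower bound (8): u* minimizes (x + (c-m)u)^2 / v(u) over u > 0."""
    h = (x + (c - m) * u_grid) ** 2 / v(u_grid)
    u_star = u_grid[np.argmin(h)]
    return phi_bar((x + (c - m) * u_star) / np.sqrt(v(u_star))), u_star

H = 0.75
v = lambda u: np.abs(u) ** (2 * H)     # assumed fBm example
p, u_star = lower_bound(5.0, c=1.2, m=1.0, v=v,
                        u_grid=np.linspace(0.01, 300.0, 30000))
```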
Fig. 1. The half-space $\{-Z_{-t^*} \ge x + (c-m)t^*\}$ is contained in the set $\{Q_0 \ge x\}$. For both sets, the closest point to the origin is $f^*$.
Proposition 1. Most probable path vectors $f^*$ in the set $\{Q^{\{1,\ldots,k\}}_0 \ge x\}$ have the form
$$f^* = -\frac{x + (c-m)t^*}{\sum_{i=1}^k v_i(t^*)} \big( \Gamma_1(-t^*, \cdot), \ldots, \Gamma_k(-t^*, \cdot) \big),$$
where $t^* > 0$ minimizes the expression
$$h(t) = \frac{(x + (c-m)t)^2}{\sum_{i=1}^k v_i(t)}.$$
Proof. Note that
$$\big\{ Q^{\{1,\ldots,k\}}_0 \ge x \big\} = \bigcup_{s \le 0} \big\{ A^{\{1,\ldots,k\}}(s,0) - c(0-s) \ge x \big\} = \bigcup_{s \le 0} \big\{ Z^{\{1,\ldots,k\}}(s,0) \ge x + (c-m)(0-s) \big\}, \tag{9}$$
and, by the reproducing kernel property,
$$f \in \big\{ Z^{\{1,\ldots,k\}}(s,0) \ge x + (c-m)(-s) \big\} \cap R \ \Leftrightarrow\ f \in R,\ -f_1(s) - \cdots - f_k(s) \ge x + (c-m)(-s) \ \Leftrightarrow\ \langle -f, (\Gamma_1(s,\cdot), \ldots, \Gamma_k(s,\cdot)) \rangle_R \ge x + (c-m)(-s).$$
Thus, the problem reduces to minimizing the Hilbert norm when the inner product with a fixed element is given, and the solution is a proper multiple of that element. It remains to minimize the norm of $\big((x + (c-m)t)/\sum_i v_i(t)\big)\, (\Gamma_1(-t,\cdot), \ldots, \Gamma_k(-t,\cdot))$ with respect to $t > 0$.

Let $f^* \in R$ be a most probable path in a closed set $B \subset \Omega$ such that $f^* \ne 0$. We call the set
$$B^* = \mathrm{cl}_\Omega \{ g \in R : \langle g - f^*, f^* \rangle_R \ge 0 \},$$
where $\mathrm{cl}_\Omega G$ denotes the closure of $G$ in the topology of $\Omega$, the half-space approximation of $B$. In particular, it is easy to see that
$$\{Q_0 \ge x\}^* = \{ -Z_{-t^*} \ge x + (c-m)t^* \},$$
and the lower bound (8) is a consequence of the fact that in this case the half-space approximation is contained in the original set. See Figure 1.

It is also worth noting that the most probable path vector in a set $\{A^{\{1,\ldots,k\}}_t \ge y\}$, where $y > mt$, is in fact the conditional expectation
$$E\big[ (Z^{\{1\}}_s, \ldots, Z^{\{k\}}_s) \mid A^{\{1,\ldots,k\}}_t = y \big].$$
This is a consequence of the fact that the conditional distribution of a Gaussian vector w.r.t. a linear condition is Gaussian, and its expectation equals the point where the density is highest.

More accurate estimates take, in some way or other, the geometry of the set $\{Q_0 \ge x\}$ around $f^*$ into account. For different methods, see the books by Adler [4] and Piterbarg [25]. An original geometric reasoning, after transforming the problem into Fourier space, was given in [18]. Identifying most probable paths is interesting in its own right — it is like "seeing what really happens" when the rare event occurs. For ordinary queues, this has mainly heuristic value, but we shall see that identifying these paths has an essential role in choosing a good approximation in the case of GPS and priority queues.
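Such paths are easy to compute from Proposition 1; the single-class sketch below is ours (fBm again being the assumed example).

```python
import numpy as np

def most_probable_path(x, c, m, v, s_grid, t_grid):
    """k = 1 case of Proposition 1: f*(s) = -((x+(c-m)t*)/v(t*)) * Gamma(-t*, s),
    with Gamma(s,t) = (v(s)+v(t)-v(s-t))/2 and t* minimizing h(t)."""
    h = (x + (c - m) * t_grid) ** 2 / v(t_grid)
    t_star = t_grid[np.argmin(h)]
    Gamma = 0.5 * (v(-t_star) + v(s_grid) - v(-t_star - s_grid))
    return -(x + (c - m) * t_star) / v(t_star) * Gamma, t_star

H = 0.75
v = lambda t: np.abs(t) ** (2 * H)
path, t_star = most_probable_path(1.0, 1.2, 1.0, v,
                                  s_grid=np.linspace(-20, 5, 500),
                                  t_grid=np.linspace(0.01, 100, 10000))
# Sanity check: path at s = -t_star equals -(x + (c-m)t*), i.e. the input
# exceeds the service by exactly x over the busy period (-t*, 0].
```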
3.3 General Heuristic Approximations
Within logarithmic accuracy, the lower bound can be replaced by the still simpler approximate expression
$$P(Q_t > x) \approx \exp\Big( -\frac{(x + (c-m)u^*)^2}{2 v(u^*)} \Big), \tag{10}$$
which was called the basic approximation in [3]. Simulations of many cases indicate that the basic approximation may in fact be a general upper bound of $P(Q_t > x)$, but no proof of this is known.

In all empirical studies and simulations one works in discrete time. The discrete time queue is always a little smaller than the corresponding continuous time queue. Indeed, if $A$ is our continuous time model, the cumulative input process in discrete time is simply $(A_n)_{n \in \mathbb{Z}}$, and
$$Q^{\mathrm{discr}}_n = \sup_{m \le n,\, m \in \mathbb{Z}} \big( A(m,n) - c(n-m) \big) \le \sup_{s \le n,\, s \in \mathbb{R}} \big( A(s,n) - c(n-s) \big) = Q^{\mathrm{cont}}_n.$$
It was observed in [3] that one often gets fairly good approximations for a discrete time Gaussian queue $Q^{\mathrm{discr}}$ by multiplying the basic approximation by an appropriate constant $p$ such that
$$p \lim_{x \to 0^+} \exp\Big( -\frac{(x + (c-m)u^*_x)^2}{2 v(u^*_x)} \Big) \approx P\big( Q^{\mathrm{discr}}_t > 0 \big).$$
A good heuristic approximation for the non-emptiness probability of a discrete time queue with time resolution $\delta$ is (see [3])
$$P\big( Q^{\mathrm{discr}}_t > 0 \big) \approx 2 P(A_\delta > c\delta).$$
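Combining the basic approximation with this correction gives a fully computable estimate; the sketch below is our reading of [3] (fBm and all parameter values are assumptions; `phi_bar` as in the earlier snippet).

```python
import numpy as np

def basic_approximation(x, c, m, v, u_grid):
    """Basic approximation (10): exp(-min_u (x+(c-m)u)^2 / (2 v(u)))."""
    h = (x + (c - m) * u_grid) ** 2 / v(u_grid)
    return np.exp(-h.min() / 2.0)

def corrected_discrete_tail(x, c, m, v, u_grid, delta):
    """Discrete-time estimate: scale the basic approximation by p so that its
    x -> 0+ limit matches P(Q^discr > 0) ~ 2 P(A_delta > c*delta)."""
    p0 = 2.0 * phi_bar((c - m) * delta / np.sqrt(v(delta)))
    p = p0 / basic_approximation(1e-9, c, m, v, u_grid)
    return p * basic_approximation(x, c, m, v, u_grid)
```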
3.4 Approximations for GPS and Priority Queues
The structure of our method for getting estimates of queue length distributions in GPS and priority systems is the following. In order to get an approximation for $\{Q^{\{i\}}_0 > x\}$, do:

Step 1. Find the most probable path vector $f^*$ of the event $\{Q^{\{1,\ldots,k\}}_0 > x\}$. The path vector can be immediately written down and plotted using Proposition 1.
Step 2. Check whether $Q^{\{1,\ldots,k\}\setminus\{i\}}_0(f^*) = 0$. If yes, go to Step 3; otherwise go to Step 4.
Step 3. (Empty Buffer Approximation) $f^*$ is the most probable path vector in $\{Q^{\{i\}}_0 > x\}$; use the corresponding half-space approximation. Stop.
Step 4. (Rough Full Link Approximation) Find a certain $f^{\mathrm{RFLA}}$, for which the only positive queue is $Q^{\{i\}}_0$ (or the others are much smaller); use the half-space approximation corresponding to $f^{\mathrm{RFLA}}$.

The Empty Buffer Approximation uses the true most probable path vector, and it can be considered as reliable as the simple queue estimates of Section 3.2. In the Rough Full Link Approximation, the path vector $f^{\mathrm{RFLA}}$ is itself just a heuristic approximation of the true most probable path vector. Both approximations are discussed in more detail below.
The Empty Buffer Approximation. The idea of the Empty Buffer Approximation (EBA), first studied by Berger and Whitt [7,8], is that in a two-class priority queue, the total queue usually consists almost exclusively of lower class traffic, and therefore its distribution is a good approximation to that of the pure lower class queue. Our approach gives a straightforward method to check the applicability of the EBA for any particular combination of Gaussian traffic streams. Our examples indicate that the EBA is a very good principle in most practically interesting priority scenarios with Gaussian traffic. The EBA is also often useful in the study of a GPS system. However, it is never sufficient, because the classes are in a symmetric position in GPS, and the distribution of at most one class can be estimated with the EBA.

Whereas it may require some work to check analytically whether the most probable path vector producing joint queue $x$ satisfies the EBA condition, an approximately similar condition is much easier: in the two-class priority case, just check the "rough EBA condition"
$$-A^{\{1\}}_{-t^*}(f^*) \le c t^*. \tag{11}$$
This also leads to some interesting insight. Consider the priority system with two classes and assume, without restricting generality, that $m_1 = 0$. The condition (11) can be written as
$$\frac{x}{t^*} - m_2 \le \frac{v_2(t^*)}{v_1(t^*)}\, c. \tag{12}$$
In particular, we see that (12) holds if $m_2 \ge x/t^*$ (note, however, that $t^*$ depends on the other quantities). In the special case that $v_1$ is a multiple of $v_2$, say $v_1(t) = a\,v(t)$, $v_2(t) = b\,v(t)$, the condition becomes still simpler. Then $t^*$ is independent of $a$ and $b$, and we obtain the rather surprising result that when $m_2$ exceeds a certain threshold, we are roughly in the EBA irrespective of the variance coefficients $a$ and $b$! For example, if both $Z^{(i)}$'s are fractional Brownian motions with the same self-similarity parameter $H$, then $t^* = Hx/((1-H)(c-m_2))$, which gives the condition $m_2 \ge (1-H)c$. The higher $H$, the lower the threshold for $m_2$ above which a typical large class 2 queue consists of class 2 traffic alone.
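In code, the rough check is a one-liner once $t^*$ is known; our sketch for the two-fBm case (equal $H$, $m_1 = 0$; `var_ratio` stands for $\sigma_2^2/\sigma_1^2$ and is our naming):

```python
def rough_eba_holds(x, c, m2, H, var_ratio=1.0):
    """Rough EBA condition (12): x/t* - m2 <= (v2/v1) * c, with
    t* = H*x / ((1-H)*(c - m2)) for two fBm classes with the same H."""
    t_star = H * x / ((1 - H) * (c - m2))
    return x / t_star - m2 <= var_ratio * c
```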
as discussed in Section 3.4. So assume that Q0 (f ∗ ) > 0. The idea of our approximation in the non-EBA case is that any superfluous queue buildup decreases the likelihood of our path pair. Since we are only re{2} {1} quiring that Q0 (ω) be big, Q0 (ω) must be close to zero with the optimal ω.
Thus, a class 2 queue of size $x$ is most easily made so that the role of class 1 is essentially to fill its quota (in the priority case, to fill the whole link) without making a queue, while class 2 fills its quota and additionally builds a queue of size $x$. To make this condition still simpler, we reduce this behavior to the one-dimensional conditions
$$A^{\{1\}}(-t, 0) = \mu_1 c t, \qquad A^{\{2\}}(-t, 0) = \mu_2 c t + x, \tag{13}$$
write down the most probable path pair fulfilling this, and finally minimize their norm with respect to $t$. We call this procedure the Rough Full Link Approximation (RFLA). It is again an easy Hilbert space exercise, similar to Proposition 1, to determine the most probable paths in the RFLA (see [15]):

Proposition 2. The most probable path pair $f^{\mathrm{RFLA}}$ satisfying (13) is of the form
$$f^{\mathrm{RFLA}}(\cdot) = \big( f^{\mathrm{RFLA}}_1(\cdot), f^{\mathrm{RFLA}}_2(\cdot) \big) = \left( \frac{(\mu_1 c - m_1)t^*}{v_1(t^*)}\, \Gamma_1(t^*, \cdot),\ \frac{-x + (\mu_2 c - m_2)t^*}{v_2(t^*)}\, \Gamma_2(t^*, \cdot) \right),$$
where $t^* < 0$ minimizes, w.r.t. $t$, the expression
$$\frac{(\mu_1 c - m_1)^2 t^2}{v_1(t)} + \frac{(x - (\mu_2 c - m_2) t)^2}{v_2(t)}. \tag{14}$$
In the case that both classes are Brownian motions (the counterpart of Poisson processes), the RFLA gives the true most probable path pair in the non-EBA case. In general, however, the class 1 path in the RFLA does not fill its quota over the whole interval $(-t^*, 0)$; thus part of the class 2 traffic is "wasted", and there is a small class 1 queue at time 0, whereas the class 2 queue remains correspondingly smaller than $x$.

Using the reproducing kernel property and the fact that evaluation at a time point is a continuous linear functional both in $R$ and $\Omega$, we see that the half-space corresponding to $f^{\mathrm{RFLA}}$ can be written as $E = \{ Y \ge \|f^{\mathrm{RFLA}}\|_R^2 \}$, where
$$Y = \frac{(\mu_1 c - m_1)t^*}{v_1(t^*)}\, Z^{\{1\}}_{t^*} + \frac{x - (\mu_2 c - m_2)t^*}{v_2(t^*)}\, Z^{\{2\}}_{t^*}.$$
Thus, our RFLA approximation, which the simulations indeed indicate to be a lower bound, is
$$P\big( Q^{\{2\}}_0 \ge x \big) \approx P(E) = \bar{\Phi}\left( \sqrt{ \frac{(\mu_1 c - m_1)^2 t^{*2}}{v_1(t^*)} + \frac{(x - (\mu_2 c - m_2) t^*)^2}{v_2(t^*)} } \right). \tag{15}$$
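Numerically, (15) amounts to a one-dimensional minimization of (14); a sketch of ours follows (grid search over negative $t$, as in Proposition 2; `phi_bar` as in the earlier snippet).

```python
import numpy as np

def rfla_tail(x, c, m1, m2, mu1, mu2, v1, v2, t_grid):
    """RFLA estimate (15) for P(Q^{2} >= x): minimize (14) over the grid,
    then apply the residual normal distribution function."""
    obj = ((mu1 * c - m1) ** 2 * t_grid ** 2 / v1(t_grid)
           + (x - (mu2 * c - m2) * t_grid) ** 2 / v2(t_grid))
    return phi_bar(np.sqrt(obj.min()))

# Example with two fBm classes (t* < 0 in Proposition 2, so a negative grid):
H = 0.75
v = lambda t: np.abs(t) ** (2 * H)
p = rfla_tail(x=2.0, c=2.0, m1=0.8, m2=0.8, mu1=0.5, mu2=0.5,
              v1=v, v2=v, t_grid=-np.linspace(0.01, 100.0, 10000))
```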
In order to check the accuracy of our estimates, we have compared them to empirical measures calculated from simulations. The simulation traces were generated using an extension of the random midpoint displacement algorithm (RMD$_{mn}$, see [21]). Many examples are included in the papers [3,15], and they show reasonable accuracy of the method. In particular, the "basic approximations" always turn out to be upper bounds and the probabilities of the half-space approximations lower bounds. In the present overview paper, we restrict ourselves to the following example taken from [15].

Example: two fBm traffic classes with the same self-similarity parameter. Let us consider a GPS system with two classes with $v_i(t) = \sigma_i^2 t^{2H}$ for $i = 1, 2$, where the parameter $H$ is any number in $(0,1)$. In this case we can compute the above quantities analytically. First, fix $x > 0$ and consider the total queue. We have (see, e.g., [3])
$$t^* = \frac{Hx}{(1-H)(c-m)}.$$
Second, the rough EBA criterion (cf. (12)) for estimating class 1 reads
$$(\mu_2 c - m_2)\, t^* \ge f^*_2(t^*) = (x + (c-m)t^*)\, \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}.$$
Substituting $t^*$, we obtain the criterion
$$\frac{(\mu_2 c - m_2) H}{c - m} \ge \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}. \tag{16}$$
Note that only the mean and service rates appear on the left and only the variance coefficients on the right. If (16) is satisfied, the "basic approximation" reads
$$P\big( Q^{\{1\}}_0 \ge x \big) \approx P\big( Q^{\{1,2\}}_0 \ge x \big) \approx \exp\left( -\frac{(c-m)^{2H}\, x^{2-2H}}{2\kappa(H)^2 (\sigma_1^2 + \sigma_2^2)} \right),$$
where $\kappa(H) = H^H (1-H)^{1-H}$.

Third, if (16) does not hold, we use the RFLA. The squared $R$-norm of the most probable path in the set $\{ -A^{\{1\}}_{-t} \ge \mu_1 ct + x,\ -A^{\{2\}}_{-t} \ge \mu_2 ct \}$ is
$$\frac{((\mu_1 c - m_1)t + x)^2}{\sigma_1^2 t^{2H}} + \frac{(\mu_2 c - m_2)^2}{\sigma_2^2}\, t^{2-2H}.$$
The minimum is obtained at $t^* = \eta x$, where $\eta$ is the positive root of a quadratic equation:
$$\eta = \frac{b + \sqrt{b^2 + 4aH}}{2a}, \quad \text{where} \quad a = \left( \frac{(\mu_1 c - m_1)^2}{\sigma_1^2} + \frac{(\mu_2 c - m_2)^2}{\sigma_2^2} \right)(1 - H), \quad b = \frac{(\mu_1 c - m_1)(2H - 1)}{\sigma_1^2}.$$
The basic approximation of $P(Q^{\{1\}}_0 \ge x)$ is then
$$P\big( Q^{\{1\}}_0 \ge x \big) \approx \exp\left( -\frac{1}{2} \left( \frac{((\mu_1 c - m_1)\eta + 1)^2}{\sigma_1^2 \eta^{2H}} + \frac{(\mu_2 c - m_2)^2}{\sigma_2^2}\, \eta^{2-2H} \right) x^{2-2H} \right). \tag{17}$$
Simulations indicate that the approximations of this section work quite well — see [14,13,15].
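For convenience, the whole example can be folded into one routine; the following is our transcription of the formulas above into code (parameter names are ours; no claims beyond the stated approximations).

```python
import numpy as np

def gps_class1_tail(x, c, m1, m2, mu1, mu2, s1sq, s2sq, H):
    """Basic approximation for P(Q^{1} >= x), two fBm classes with equal H.
    Applies criterion (16) to choose between the EBA formula and (17)."""
    m = m1 + m2
    kappa = H ** H * (1 - H) ** (1 - H)
    if (mu2 * c - m2) * H / (c - m) >= s2sq / (s1sq + s2sq):   # criterion (16)
        return np.exp(-(c - m) ** (2 * H) * x ** (2 - 2 * H)
                      / (2 * kappa ** 2 * (s1sq + s2sq)))
    a = ((mu1 * c - m1) ** 2 / s1sq + (mu2 * c - m2) ** 2 / s2sq) * (1 - H)
    b = (mu1 * c - m1) * (2 * H - 1) / s1sq
    eta = (b + np.sqrt(b ** 2 + 4 * a * H)) / (2 * a)
    norm_sq = (((mu1 * c - m1) * eta + 1) ** 2 / (s1sq * eta ** (2 * H))
               + (mu2 * c - m2) ** 2 / s2sq * eta ** (2 - 2 * H)) \
              * x ** (2 - 2 * H)
    return np.exp(-norm_sq / 2)
```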
4 "Phase Transitions" of Typical Queues
Even for simple queues, the most probable paths need not be unique. Nice examples of this were found by P. Mannersalo by superposing a periodic source and an fBm source [3,15]. A kind of phase transition was observed: typical small queues were caused by the periodic fluctuation of the periodic traffic, whereas typical long queues were caused by sustained heightened activity of the fBm traffic. (Cf. also [12].)

Another and, most importantly, non-artificial example was encountered by Pazhyannur and Fleming [24]. They studied a queue with input consisting of periodic coded voice traffic, modelled as follows. A source transmits with period $d$ and uniformly distributed phase $U$. The volume in the $i$th period is $X_i$. The $X_i$'s can be strongly dependent. There are $n$ i.i.d. sources. See Figure 2.

Fig. 2. The structure of vocoder traffic in [24]. Each source periodically transmits bursts whose sizes are random but correlated. The $X_i$'s in the picture come from the same source.
Assuming that the number of sources is large enough for Gaussian modelling, our technique can be applied in a straightforward way. We only need to compute $v(t)$ for a single source — using a mathematical computer tool, the rest follows "according to the recipe". Denote the phase of our source by $U$ and choose, for simplicity, $d = 1$. Then
$$A_t = \sum_{i=1}^{\lfloor t \rfloor} X_i + 1_{\{U < t - \lfloor t \rfloor\}}\, X_{\lfloor t \rfloor + 1},$$
from which $v(t)$ can be computed, and we can study $P(Q > x)$.
Look first at the function $h(t) = (x+t)^2/v(t)$, which has to be minimized with respect to $t$. For $x = 0.3$ or smaller, we have $t^* \approx x$, whereas for $x = 0.4$, $t^* \approx 4$. Somewhere between 0.3 and 0.4 is a value $x = x_0$ where the two local minima are equal. As a function of $x$, $t^*$ makes a big jump at $x_0$.

Fig. 4. Plot of the function $(x+t)^2/v(t)$. Left: $x = 0.3$. Right: $x = 0.4$.
Finally, the most probable paths show that there is a very clear difference between typical queues of sizes 0.3 and 0.4. In the former, the queue is caused only by the bursts from different users, which are independent. In the latter, the busy period is longer than the period of the sources, which has the effect that the strong correlations between bursts of each source become dominant, and the distribution tail decreases much more slowly than it did for small $x$'s.

Pazhyannur and Fleming discovered this queue behavior originally using more traditional heavy traffic approximations, but our method added an immediate visual insight which agreed with their interpretation. Moreover, they found that the Gaussian approximations were also quantitatively quite good.

Fig. 5. Most probable queue path. Left: $x = 0.3$. Right: $x = 0.4$.
5 A Simple Model for Bandwidth Allocation by Prediction
Our last example analyses the performance of a queue whose service capacity is dynamically adjusted according to predicted demand, with a fixed prediction delay. The following setup is probably the simplest possible model for that kind of system.

Let $A_t$ again be a Gaussian traffic process with parameters $m$ and $v(t)$. Assume that instead of a fixed service rate, the service capacity is allocated dynamically with a delay $\Delta$, with a relative surplus capacity $\varepsilon$. That is, we define the cumulative service process as
$$C_t = (1+\varepsilon)(A_{t-\Delta} - A_{-\Delta}) \tag{18}$$
(the last term is included in order to have $C_0 = 0$). The queue length process is
$$Q_t = \sup_{s \le t} \big( A(s,t) - C(s,t) \big) \stackrel{D}{=} \sup_{t \ge 0} (U_t - \varepsilon m t),$$
where $U_t = Z_t - (1+\varepsilon)(Z_{t+\Delta} - Z_\Delta)$. A straightforward computation gives
$$\mathrm{Var}(U_t) = \big(1 + (1+\varepsilon)^2\big) v(t) - (1+\varepsilon)\big(v(t-\Delta) + v(t+\Delta)\big) + 2(1+\varepsilon)v(\Delta).$$
In the space $R$ we have
$$f(t) - (1+\varepsilon)\big(f(t+\Delta) - f(\Delta)\big) = \big\langle f,\ \Gamma(t,\cdot) - (1+\varepsilon)(\Gamma(t+\Delta,\cdot) - \Gamma(\Delta,\cdot)) \big\rangle_R.$$
Thus, by our general method, the most probable path of $Z$ creating a queue of size $x$ at time 0 is
$$f^*_x(s) = -\frac{x + \varepsilon m t^*}{\mathrm{Var}(U_{t^*})} \big( \Gamma(-t^*, s) - (1+\varepsilon)(\Gamma(-t^*-\Delta, s) - \Gamma(-\Delta, s)) \big),$$
where $t = t^* > 0$ minimizes
$$\frac{(x + \varepsilon m t)^2}{\mathrm{Var}(U_t)}.$$
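The corresponding half-space lower bound is again a one-dimensional minimization; our sketch (fBm and the figure parameters assumed; `phi_bar` as in the earlier snippets):

```python
import numpy as np

def var_u(t, v, eps, Delta):
    """Var(U_t) for the prediction-based allocation model."""
    return ((1 + (1 + eps) ** 2) * v(t)
            - (1 + eps) * (v(t - Delta) + v(t + Delta))
            + 2 * (1 + eps) * v(Delta))

def dynamic_allocation_bound(x, m, eps, Delta, v, t_grid):
    """Lower-bound estimate for P(Q > x): minimize (x + eps*m*t)^2 / Var(U_t)."""
    ratio = (x + eps * m * t_grid) ** 2 / var_u(t_grid, v, eps, Delta)
    return phi_bar(np.sqrt(ratio.min()))

# Parameters of the figures below: eps = 0.1, Delta = 1, m = 3, H = 0.75.
H = 0.75
v = lambda t: np.abs(t) ** (2 * H)
p = dynamic_allocation_bound(4.0, m=3.0, eps=0.1, Delta=1.0, v=v,
                             t_grid=np.linspace(0.01, 50.0, 5000))
```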
In fact, the delay in such a system is bounded by $\Delta$. The delay of a "fluid molecule" entering the system at time $t$ can be expressed as
$$D_t = \inf\{ \tau : C(t, t+\tau) \ge Q_t \}.$$
Now,
$$Q_t - C(t, t+\tau) = \sup_{s \le t} \big( A(s,t) - (1+\varepsilon)A(s-\Delta, t-\Delta) \big) - (1+\varepsilon)A(t-\Delta, t-\Delta+\tau) = \sup_{s \le t} \big( A(s,t) - (1+\varepsilon)A(s-\Delta, t-\Delta+\tau) \big) \le 0$$
for $\tau \ge \Delta$, assuming that $A_t$ is nondecreasing (which does not hold strictly for a Gaussian traffic model). (I thank P. Mannersalo for this insight.)

As an example, let us look at some paths in the case of fBm input $A_t = mt + \sigma Z_t$, where $Z$ is a normalized fBm with self-similarity parameter $H$. The figures below were made with $\varepsilon = 0.1$, $\Delta = 1$, $m = 3$, $\sigma^2 = 1$, and $H = 0.75$. Figure 6 compares the dynamically varied service with a fixed service rate with the same 10% overallocation. It is no surprise that very big queues arise when such a high load is offered to a fixed capacity server, whereas the queue remains essentially bounded in the former case (remember that the delays are strictly bounded). Figure 7 shows lower bound estimates of the complementary distribution functions. Indeed, the distribution tail of the dynamically served queue decreases very fast (faster than exponentially). Figure 8 shows the most probable paths of the input rate and the queue of size 4. Note how cleverly our system makes its big (by its scale) queues: in order to fool the prediction, the input is first very slow and then, when the control cannot react any more, it suddenly speeds up. The queue path also has a noteworthy feature: after an input peak, the typical queue first decreases quickly, but then shifts to a much slower decrease, whose slope corresponds to the overhead $\varepsilon$.
Fig. 6. Queue length processes of a system with prediction-based dynamic allocation with $\varepsilon = 0.1$, $\Delta = 1$ (left), and a system with fixed service capacity $(1+\varepsilon)m$ (right). The input processes are identical discrete time fBm traces with $m = 3$, $\sigma^2 = 1$, $H = 0.75$.
Fig. 7. Queue length distribution lower bounds (see (8)) for a system with prediction-based dynamic allocation with $\varepsilon = 0.1$, $\Delta = 1$ (squares), and a system with fixed service capacity $(1+\varepsilon)m$ (stars). The input processes are fBm with $m = 3$, $\sigma^2 = 1$, $H = 0.75$.
Fig. 8. The most probable path with queue size $x = 4$ in a system with prediction-based dynamic allocation with $\varepsilon = 0.1$, $\Delta = 1$. The input process is fBm with $m = 3$, $\sigma^2 = 1$, $H = 0.75$. Left: rate. Right: queue length.
6 Conclusion
We have presented a straightforward method for studying various queueing systems with general Gaussian input traffic. These included priority queues, two-class GPS queues, and an example of dynamic server capacity allocation. Using any advanced mathematical tool, it is possible to build expert systems which make the analyses in this paper half or fully automatic once the parameters are given. In particular, the traffic in each class is described simply by its mean rate and cumulative variance function.

The novel theoretical aspect of this work is that we are looking for approximations and bounds in a Gaussian space — not large deviation theorems, which at least "officially" tell only about certain logarithmic limits. Although most of our quantitative estimates are more or less heuristic, we hope that this new point of view on queueing phenomena will prove fruitful in rigorous mathematics also. One of the key challenges may then be understanding the geometry of the threshold exceedance set in the neighborhood of the most probable path.
References
1. R.G. Addie. On weak convergence of long range dependent traffic processes. Journal of Statistical Planning and Inference, 80(1-2):155–171, 1999.
2. R.G. Addie, P. Mannersalo, and I. Norros. Performance formulae for queues with Gaussian input. In P. Key and D. Smith, editors, Teletraffic Engineering in a Competitive World. Proceedings of the International Teletraffic Congress — ITC-16, pages 1169–1178, Edinburgh, UK, 1999. Elsevier.
3. R.G. Addie, P. Mannersalo, and I. Norros. Most probable paths and performance formulae for buffers with Gaussian input traffic. European Transactions on Telecommunications, 2002. To appear.
4. R.J. Adler. An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes, volume 12 of Lecture Notes–Monograph Series. Institute of Mathematical Statistics, 1990.
5. R. Azencott. École d'Été de Probabilités de Saint-Flour VII-1978, chapter Grandes déviations et applications, pages 1–176. Number 774 in Lecture Notes in Mathematics. Springer, Berlin, 1980.
6. R.R. Bahadur and S.L. Zabell. Large deviations of the sample mean in general vector spaces. Annals of Probability, 7(4):587–621, 1979.
7. A.W. Berger and W. Whitt. Effective bandwidths with priorities. IEEE/ACM Transactions on Networking, 6(4), 1998.
8. A.W. Berger and W. Whitt. Extending the effective bandwidth concept to networks with priority classes. IEEE Communications Magazine, August 1998.
9. J.-D. Deuschel and D.W. Stroock. Large Deviations. Academic Press, Boston, 1989.
10. J. Kilpi and I. Norros. Testing the Gaussian character of access network traffic. Technical Report COST279TD(01)03, COST, 2001. Available from http://www.vtt.fi/tte/projects/cost279/.
11. W.E. Leland, M.S. Taqqu, W. Willinger, and D.V. Wilson. On the self-similar nature of Ethernet traffic (extended version). IEEE/ACM Transactions on Networking, 2(1):1–15, February 1994.
12. M.R.H. Mandjes and J.H. Kim. An analysis of the phase transition phenomenon in packet networks. To appear in Adv. or J. of Applied Probability.
13. P. Mannersalo and I. Norros. GPS schedulers and Gaussian traffic. In Proceedings of IEEE Infocom 2002, New York, 2002.
14. P. Mannersalo and I. Norros. Gaussian priority queues. In Proceedings of ITC 17. Elsevier, 2001.
15. P. Mannersalo and I. Norros. A most probable path approach to queueing systems with general Gaussian input. Computer Networks, 2002. To appear.
16. L. Massoulié. Large deviations estimates for polling and weighted fair queueing service systems. Advances in Performance Analysis, 2(2):103–127, 1999.
17. L. Massoulié and A. Simonian. Large buffer asymptotics for the queue with FBM input. Journal of Applied Probability, 36(3):894–906, 1999.
18. O. Narayan. Exact asymptotic queue length distribution for fractional Brownian traffic. Advances in Performance Analysis, 1:39–63, 1998.
19. I. Norros. A storage model with self-similar input. Queueing Systems, 16:387–396, 1994.
20. I. Norros. Busy periods of fractional Brownian storage: a large deviations approach. Advances in Performance Analysis, 2:1–19, 1999.
21. I. Norros, P. Mannersalo, and J.L. Wang. Simulation of fractional Brownian motion with conditionalized random midpoint displacement. Advances in Performance Analysis, 2:77–101, 1999.
22. I. Norros, J.W. Roberts, A. Simonian, and J.T. Virtamo. The superposition of variable bit rate sources in an ATM multiplexer. IEEE Journal on Selected Areas in Communications, 9(3):378–387, April 1991.
23. A.K. Parekh and R.G. Gallager. A generalized processor sharing approach to flow control in integrated services networks: the single node case. IEEE/ACM Transactions on Networking, 1(3):344–357, 1993.
24. R.S. Pazhyannur and P. Fleming. Asymptotic results for voice delay in packet networks. In Proceedings of the IEEE Vehicular Technology Conference Fall, 2001.
25. V.I. Piterbarg. Asymptotic Methods in the Theory of Gaussian Processes and Fields. American Mathematical Society, 1996.
26. P. Tran-Gia and N. Vicari, editors. Impacts of New Services on the Architecture and Performance of Broadband Networks. COST 257 Final Report. compuTEAM, Würzburg, 2000. http://nero.informatik.uni-wuerzburg.de/cost/Final/.
On the Queue Tail Asymptotics for General Multifractal Traffic

Sándor Molnár, Trang Dinh Dang, and István Maricza

High Speed Networks Laboratory, Dept. of Telecommunications and Telematics, Budapest University of Technology and Economics, H–1117, Magyar tudósok körútja 2, Budapest, Hungary
Tel: (361) 463 3889, Fax: (361) 463 3107
{molnar, trang, maricza}@ttt-atm.ttt.bme.hu
Abstract. The tail asymptotics of an infinite capacity single server queue, serviced at a constant rate and driven by a general multifractal input process, are presented. It is shown that in the important subcase of monofractal Fractional Brownian Motion (FBM) input traffic, our result gives the well-known Weibullian tail. Practical engineering applications and a validation of the results based on the analysis of measured network traffic are also presented.
1 Introduction
Teletraffic research papers have reported the high variability and bursty nature of network traffic in several LAN/WAN environments in the last decade. Moreover, it seems that most measured network traffic exhibits properties of scale invariance. This means that within a range of scales no characteristic dominant scale can be identified, and some statistical properties within this range do not change. This remarkable scaling phenomenon called for the fractal modeling of the investigated LAN/WAN traffic [21,20,9,19].

In the fractal modeling framework, long-range dependence (LRD) and self-similarity have been analyzed intensively, and a number of studies focus on how to detect the LRD property accurately and how to estimate the Hurst parameter [3,2]. LRD is revealed by the power law decay of the autocorrelation function at large lags, i.e., $r(k) \sim c|k|^{2H-2}$, $k \to \infty$, $H \in (0.5, 1)$, where $c$ is a constant [3]. The degree of this slow decay is determined by the Hurst parameter ($H$). A large group of traffic models (Fractional Brownian Motion (FBM) models, FARIMA models, Cox's M/G/∞ models, on/off models, etc.) capturing LRD and self-similar properties has also been developed [16]. Among these models the FBM [17] was found to be a popular, parsimonious, and tractable model of traffic aggregation [4,12]. The performance implications of the fractal property are also addressed in a series of studies [8,7].

After a number of new measurements and deeper analysis of network traffic it was discovered that LAN/WAN traffic has a more complex scaling behaviour
which cannot be described by LRD and self-similarity [21,9]. More precisely, it has been found that aggregate network traffic is asymptotically self-similar over time scales of the order of a few hundreds of milliseconds and above, but it exhibits multifractal scaling below this time scale [9]. It has also been pointed out that the transition from multifractal to self-similar scaling occurs around time scales of a typical packet round-trip time in the network [9]. However, some studies showed that multifractal scaling can also be present even at large time scales [15]. Therefore the monofractal traffic models (e.g. FBM) are inadequate to characterize network traffic, and multifractal traffic models with a much more flexible rule for the scaling law seem to be needed, especially in some WAN environments. Multifractal models allow a compact description of a complex scaling behavior, and they can also capture the non-Gaussian character of network traffic. Multifractal models imply the non-redundant scaling behavior of moments of many orders. The physical explanations and engineering implications are also addressed in several papers, e.g. [9].
(1)
for some positive q, where τ (q) is called the scaling function of multifractality and c(q) is independent of t. An easy consequence of this definition is that τ (q) is a concave function [13]. If the scaling function τ (q) is a linear function of q the process is called monofractal. Multifractality is thus defined as a global property of the process moments. The definition is very general and it covers a very large class of processes. Multifractal processes are also called processes with scaling property. From a practical point of view queueing analysis of fractal traffic is a very important issue for network dimensioning and management. Therefore the study of queueing systems with fractal traffic input is a challenge in queueing theory. In the recent years the performance of queues with LRD or self-similar input has been deeply analyzed. A collection of studies has proven that the FBM based models have a tail queue distribution that decays asymptotically like a Weibullian law, i.e., P[Q > b] exp(−δb2−2H ), where δ is a positive constant that depends on the service rate of the queue [17,6]. This important result shows that queues with FBM input (H > 1/2) have a much slower decay than that of the exponential. However, there is a lack of queueing results available in the cases when the input traffic has a more complex scaling behaviour. Especially, queueing systems with multifractal input are an undiscovered field and only a few results have been published in the literature. V´ehel et al. [22] suggested a cascade model for TCP traffic based on the retransmission and congestion avoidance mechanisms with no performance analysis. Riedi et al. [19,18] developed a multiscale queueing analysis in the case of tree-based multiscale input models. Gao et al. simulated queues fed by multiplicative multifractal processes in [10] but provided no analytical results. In contrast to these results we consider general multifractal
On the Queue Tail Asymptotics for General Multifractal Traffic
107
process without any restrictions and derive analytical results for the queue tail asymptotics. Our aim is to contribute to the queueing theory of multifractal queues and also to the traffic engineering implications. In this paper we present a novel analysis of multifractal queues including the tail asymptotics, special cases, and practical applications.
2
Queueing Model
We consider a simple queueing model: a single server queue in continuous time, the serving principle for offered work is defined to be FIFO (First In, First Out), the queue has infinite buffer and constant service rate s. Denote by X(t) the total size of work arriving to the queue from time instant −t in the past up to this moment, time instant 0. The so called workload process W (t) is the total amount of work stored in the buffer in time interval (−t, 0), i.e., W (t) = X(t) − st
(2)
Our interest, however, is the current buffer length of the queue, denoted by Q. This is the queue length in the equilibrium state of the queue when the system has been running for a long time and the initial queue length has no influence. If this state of the system does exist, i.e., stationarity and ergodicity of the workload process hold, and the stability condition for the system is also satisfied, i.e., lim supt E[X(t)]/t < s, then: Q = sup W (t),
(3)
t≥0
where W (0) is assumed to be 0. This equation is also referred to as Lindley’s equation. The input process X(t) is considered as a general multifractal process which is defined by Eq. 1. This definition, presented by Mandelbrot et al. in [13], describes multifractal processes in terms of moments which leads to a more intuitive understanding of multifractality.
3
Approximation for Queue Tail Probabilities
We now state our main proposition: Proposition 1. The probabilities for the queue tail asymptotic of a single queueing model with general multifractal input is accurately approximated by: τ0 (q) b τ0 (q) s(q−τ0 (q)) q log(P[Q > b]) ≈ min log c(q) , b large (4) q>0 bq q−τ (q) 0
where τ0 (q) := τ (q) + 1. The scaling function τ (q) and c(q) are the functions which define the multifractal input process.
108
S. Moln´ ar, T.D. Dang, and I. Maricza
Proof Using Lindley’s equation the tail probabilities of queue length can be rewritten of the form: P[Q > b] = P[supt≥0 W (t) > b]. First let consider the quantity P[W (t) > b]: Replacing W (t) by Eq. 2 we have P[W (t) > b] = P[X(t) − st > b] ≤ P[|X(t)| > b + st] = P[|X(t)|q > (b + st)q ], for arbitrary q > 0 E[X(t)q ] , using Markov’s inequality. ≤ (b + st)q
(5) (6)
Since the input process is multifractal defined by Eq. 1 then: P[W (t) > b] ≤
c(q)tτ0 (q) (b + st)q
⇒ sup P[W (t) > b] ≤ sup t≥0
t≥0
c(q)tτ0 (q) =: sup f (t). (b + st)q t≥0
(7)
The straightforward derivation of f (t) shows that it has a maximal value at bτ0 (q) t = s[q−τ > 0. Therefore 0 (q)]
sup P[W (t) > b] ≤ sup f (t) = c(q) t≥0
t≥0
⇒ log sup P[W (t) > b] t≥0
⇒ log sup P[W (t) > b] t≥0
b τ0 (q) s(q−τ0 (q))
≤ log c(q)
bq q−τ0 (q)
b τ0 (q) s(q−τ0 (q))
≤ min log c(q)
,
q
b τ0 (q) s(q−τ0 (q))
q>0
q
τ0 (q)
bq q−τ0 (q)
τ0 (q)
for arbitrary q > 0
τ0 (q)
bq q−τ0 (q)
q
.
(8)
For a large class of stochastic processes (including FBM) the following limit holds [11]: log(P[Q > b]) = 1. (9) lim b→∞ log(supt≥0 P[W (t) > b]) In addition, log(P[Q > b]) ≥ log(sup P[W (t) > b]), t≥0
(10)
then the right-hand side of Eq. 8 is a upper bound of a lower bound on log(P[Q > b]). The used inequalities in Eq. 10 and Eq. 6 become tight for finite large b.
On the Queue Tail Asymptotics for General Multifractal Traffic
109
Thus our approximation for the queue tail asymptotics is the following: τ0 (q) log(P[Q > b]) ≈ min log c(q) q>0
b τ0 (q) s(q−τ0 (q))
bq q−τ0 (q)
q
,
b large.
For positive multifractal processes, i.e. X(t) > 0, Eq. 5 is an equality. In addition, the approximation in Eq. 10 and the inequality in Eq. 6 turn to be more accurate approximations as b tends to infinity. Thus the presented approximation is supposed to be asymptotically tight. The tightness and accuracy of the approximation is also experimentally investigated in Section V. Considering the formula in Eq. 4 we see that it has an implicit form and just the given form of the functions c(q) and τ (q) can provide the final result. The reason behind this is that the definition for the class of multifractal processes gives no restrictions for the functions c(q) and τ (q) (beyond that τ (q) is concave). Our conjecture is that the analysis of queueing systems with general multifractal input may produce some similar general results. It means that there is no general queueing behaviour for these systems as the Weibullian decay in the case of Gaussian self-similar processes [17]. An actual multifractal model will determine, for example, the queue length probabilities of the system.
4 Applications

4.1 Fractional Brownian Motion
As a simple application we first consider a monofractal Gaussian process, Fractional Brownian Motion (FBM). FBM is self-similar, which is a simple case of monofractality, and it is also Gaussian. The increment process of FBM is called Fractional Gaussian Noise (FGN). Queueing analysis of a single queue with FBM input was first presented by Norros [17], who showed the Weibullian decay of the asymptotic tail behaviour, i.e., $P[X > x] \sim \exp(-\gamma x^\beta)$ with $\beta \le 1$. This result is also justified by Large Deviation techniques in [6]. Applying this input process model to our formula should demonstrate its use and robustness when compared with these available results.

First we prove that any Gaussian process with the scaling property is in the class of monofractal processes. Furthermore, we give the explicit forms of $\tau(q)$ and $c(q)$. Consider the following lemma:

Lemma 1. A Gaussian process with the scaling property is monofractal with parameters
$$\tau(q) = \frac{q}{2}\,[\tau(2) + 1] - 1, \qquad c(q) = [2c(2)]^{q/2}\, \frac{\Gamma\!\left(\frac{q+1}{2}\right)}{\sqrt{\pi}},$$
where $\Gamma(\cdot)$ denotes the Gamma function, $\Gamma(z) = \int_0^{+\infty} x^{z-1} e^{-x}\, dx$, $z > 0$. The proof of this Lemma is provided in [5].
Turning back to our case of FBM with $c(2) = 1$ and $\tau(2) = 2H - 1$, where $H$ is referred to as the Hurst parameter, we have
$$\tau(q) = qH - 1, \qquad c(q) = \frac{2^{q/2}}{\sqrt{\pi}}\, \Gamma\!\left(\frac{q+1}{2}\right).$$
Inserting these two functions into our formula in Eq. 4 we get
$$\log(P[Q > b]) \approx \log \min_{q > 0} \left( \frac{2^{q/2}}{\sqrt{\pi}}\, \Gamma\!\left(\frac{q+1}{2}\right) \frac{\left( \frac{bH}{s(1-H)} \right)^{qH}}{\left( \frac{b}{1-H} \right)^{q}} \right) =: \log(\min_{q > 0} g(q)).$$
The minimum of $g(q)$ over $q > 0$ can easily be determined by taking derivatives. The result is the following:
$$\log(P[Q > b]) \approx \log(\min_{q > 0} g(q)) = \log\left( \frac{1}{\sqrt{\pi}}\, \frac{\Gamma\big(\Psi^{-1}(\log K)\big)}{K^{\Psi^{-1}(\log K) - 1/2}} \right) =: \log(T_{FBM}(H, s, b)), \tag{11}$$
where $K = K(H, s, b) = \frac{1}{2} b^{2(1-H)} s^{2H} (1-H)^{-2(1-H)} H^{-2H}$, $\Psi(\cdot)$ is the digamma function, $\Psi(x) = \frac{d}{dx} \log \Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)}$, and $\Psi^{-1}(\cdot)$ denotes the inverse function of $\Psi(\cdot)$.

Fig. 1. By setting fixed values for $H$ and $s$, the line in the log-log plot of $-\log T_{FBM}(b)$ versus $b$ clearly shows the Weibullian decay of $T_{FBM}(H, s, b)$.

Fig. 2. Our approximation compared to the Large Deviation technique result.

The $T_{FBM}(H, s, b)$ function is quite complex, with the presence of the Gamma function, the digamma function, and its inverse. However, we have quite a good approximation of $T_{FBM}(H, s, b)$:

Proposition 2. The approximation
$$\frac{1}{\sqrt{\pi}}\, \frac{\Gamma\big(\Psi^{-1}(\log x)\big)}{x^{\Psi^{-1}(\log x) - 1/2}} \approx \exp(-x) \tag{12}$$
holds for large $x$, $x > 0$. The proof and the precise sense of this approximation can be found in [5].

Applying this approximation we find that the queue tail for the FBM case satisfies
$$\log(T_{FBM}(H, s, b)) \approx -\frac{1}{2}\, b^{2(1-H)} s^{2H} (1-H)^{-2(1-H)} H^{-2H}, \quad b \text{ large.} \tag{13}$$
Eq. 13 shows the Weibullian decay of this queue, which was first recognized and proven by Norros [17]. Numerical evaluations of the result are presented in Fig. 1 and Fig. 2. In Fig. 1 we fix the values of $H$ and $s$, calculate the queue tail approximation $T_{FBM}(H, s, b)$ versus the queue size $b$, and plot it on a log-log scale. The linearity of the plot also demonstrates the Weibullian decay.

Now we compare our result to the result obtained by Duffield and O'Connell. The asymptotic formula for queue tail probabilities provided by the Large Deviation technique presented in [6] is
$$\lim_{b \to \infty} b^{-2(1-H)} \log P[Q > b] = -\inf_{c > 0} c^{-2(1-H)}\, \frac{(c+s)^2}{2}$$
$$\Leftrightarrow \quad \log P[Q > b] \to -\frac{1}{2}\, b^{2(1-H)} s^{2H} (1-H)^{-2(1-H)} H^{-2H}, \quad \text{as } b \to \infty, \tag{14}$$
where $s$ again denotes the service rate. Therefore we can conclude that our approximation yields the Large Deviation result; see Eq. 13 and Eq. 14. The two results are depicted in Fig. 2, and we can see that the plots almost coincide for all calculated values of the queue size.

Our conclusions can be summarized in two main points: (i) the asymptotic tail approximation for the case of FBM has Weibullian decay; (ii) this result is consistent with the formula presented by Norros [17] and by Duffield et al. using the Large Deviation technique [6]. In the case of $H = 1/2$ (Brownian Motion) the above formula yields $\log P[Q > b] \approx -2sb/\sigma^2$, where $\sigma^2$ denotes the variance of the process, which is in agreement with the queueing formula known from the theory of Gaussian processes [14,6].
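As a numerical cross-check of (11)–(13), $g(q)$ can also be minimized directly on a grid and compared with the closed form; the sketch below is ours (scipy assumed available, and the grid bounds are illustrative).

```python
import numpy as np
from scipy.special import gammaln

def log_tail_fbm(H, s, b, q_grid=np.linspace(0.01, 50, 5000)):
    """log P[Q > b] for FBM input: direct minimization of log g(q)."""
    log_g = (q_grid / 2 * np.log(2) - 0.5 * np.log(np.pi)
             + gammaln((q_grid + 1) / 2)
             + q_grid * H * np.log(b * H / (s * (1 - H)))
             - q_grid * np.log(b / (1 - H)))
    return log_g.min()

def log_tail_fbm_closed(H, s, b):
    """Closed-form Weibullian approximation (13)."""
    return -0.5 * b ** (2 * (1 - H)) * s ** (2 * H) \
           * (1 - H) ** (-2 * (1 - H)) * H ** (-2 * H)
```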
4.2 Practical Solutions
We show here the practical use of the formula. Assume that we are interested in the behaviour of the tail of the steady-state buffer occupancy (queue length) distribution at a specific multiplexer in our network. The first step should be the fine resolution measurements of the input process. We also assume that the input process exhibits multifractal scaling properties. Then the scaling function τ (q) and the function c(q) can be estimated from the collected data for some available parameters q > 0. We emphasize the importance of the function c(q) as the quantity factor of multifractal processes which is sometimes neglected in a number of studies dealing with multiscaling properties of the high-speed network traffic. The scaling function τ (q) defines only the quality of multiscaling and it is not enough for the description of a multifractal model and therefore for the analysis of queueing models with multifractal input processes.
Now we suggest two practical methods for the approximation of the queue tail distribution:

1. Given the service rate $s$ and the two sets $\{c(q)\}$ and $\{\tau(q)\}$, the approximation of $\log(P[Q > b])$ can be computed for each value of $b$ using Eq. 4. This method is very simple, and it is the more useful one from the network planning and capacity dimensioning point of view, since we are typically only interested in some values of the tail probabilities. We mainly focus on the practical use of this method in this study.
2. The input process is fitted to a multifractal model. The two measured sets of $c(q)$ and $\tau(q)$ are fitted by $\tilde{c}(q)$ and $\tilde{\tau}(q)$. Then the analysis of Eq. 4 with these functions can result in a simple closed form of the queue tail probabilities. We use this method when studying the queue tail behaviour of a multifractal model.
5 Queueing Analysis
In this section we validate the practical solution presented above by the queueing analysis of some real traffic traces. We also provide a simple method for the estimation of the multiscaling functions $c(q)$ and $\tau(q)$.

5.1 Simple Method for Multiscaling Functions Estimation
The full description of a multifractal model involves both $c(q)$ and the scaling function $\tau(q)$. We present here a simple method for testing scaling properties and also for estimating these functions. The definition of multifractal processes (Eq. 1) requires stationarity of the increments. Therefore it is easy to verify the following relation for the moments of the increments: $E[|Z^{(t)}|^q] = c(q)\, t^{\tau(q)+1} = c(q)\, t^{\tau_0(q)}$, $q > 0$, where $Z^{(t)}$ denotes the increment process with time sample $t$. Thus $E[|Z^{(mt)}|^q] = c(q)(mt)^{\tau_0(q)}$, $q > 0$, also holds for $m = 1, 2, \ldots$ Choosing $t$ as the time unit, we have
$$\log E[|Z^{(m)}|^q] = \tau_0(q) \log m + \log c(q), \quad q > 0. \tag{15}$$
Based on this property, the method is the following. Given a data series of process increments $Z_1, Z_2, \ldots, Z_n$, define its corresponding aggregated sequence $\{Z^{(m)}\}$ of aggregation level $m$ by
$$Z^{(m)}_k = Z_{(k-1)m+1} + Z_{(k-1)m+2} + \ldots + Z_{km}, \quad k, m = 1, 2, \ldots \tag{16}$$
If the sequence $\{Z_k\}$ has the scaling property, then the plot of the absolute moments $E[|Z^{(m)}|^q]$ versus $m$ on a log-log scale should be a straight line, due to Eq. 15. The slope of the line provides the estimate of $\tau_0(q)$ and the intercept is the value of $\log c(q)$. The method is illustrated in Fig. 3. Note that we do not need to estimate $c(q)$ and $\tau_0(q)$ for all positive values of $q$, which would be an impossible task. In fact, the largest value of $q$ we should consider depends on the finite queue length of interest in the involved queue length probability; see below.
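A few lines of code implement this estimator; our sketch (log-log regression per moment order $q$, with aggregation as in Eq. 16):

```python
import numpy as np

def estimate_scaling(z, q_values, m_values):
    """Estimate tau0(q) (slope) and log c(q) (intercept) via Eq. 15 by
    regressing log E[|Z^(m)|^q] on log m, with Z^(m) aggregated as in Eq. 16."""
    z = np.asarray(z, dtype=float)
    tau0, log_c = [], []
    for q in q_values:
        log_moments = []
        for m in m_values:
            k = len(z) // m
            zm = z[:k * m].reshape(k, m).sum(axis=1)   # aggregation (16)
            log_moments.append(np.log(np.mean(np.abs(zm) ** q)))
        slope, intercept = np.polyfit(np.log(m_values), log_moments, 1)
        tau0.append(slope)
        log_c.append(intercept)
    return np.array(tau0), np.array(log_c)
```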
Fig. 3. A simple method for scaling testing and the estimation of $c(q)$ and the scaling function $\tau(q)$.

Fig. 4. Theoretical queue tail probability at each value of queue size $b$ is the minimum of $\log T^*(s, b)$.
5.2 Analysis Results
Our results were first validated by simulation of multifractal cascades [5]. We have also carried out the analysis of several measured IP packet arrival traffic traces (DEC-PKT-1, DEC-PKT-2, and DEC-PKT-3) obtained from the Internet Traffic Archive [1]. In this paper we present only two typical cases, i.e., monofractal (DEC-PKT-2) and multifractal traffic (DEC-PKT-3). The analysis validates the use of our approximation in a single queue with constant service rate and general multifractal input.

Figure 5(a) shows the plot of the absolute moments of the aggregated versions of the DEC-PKT-3 data set versus the aggregation level, in a log-log plot, for some values of the moment $q$. The linearity of the plots observed in the figure clearly indicates the scaling property of this data set. After applying the estimation method presented in the previous subsection, we get the two sets of estimated $\tau_0(q)$ and $c(q)$, which are drawn in Fig. 5(b) and Fig. 5(c) (we estimate $\log c(q)$ instead of $c(q)$). The plot of the function $\tau_0(q) = \tau(q) + 1$ is a concave curve, which suggests the multifractal property of DEC-PKT-3.

We then make a comparison between our approximation and the queueing simulation of real data traces to validate the use of the formula in practice. The approximation for the queue tail probabilities presented in Proposition 1 can be rewritten in the form
$$\log P[Q > b] \approx \min_{q > 0} \left\{ \log c(q) + \tau_0(q) \log \frac{b\,\tau_0(q)}{s(q - \tau_0(q))} - q \log \frac{b\,q}{q - \tau_0(q)} \right\} =: \min_{q > 0} \{\log T^*(s, b)\} = T(s, b). \tag{17}$$
For the sake of calculation simplicity we choose the service rate such that $s = 1$. The lower curve in Fig. 5(d) shows the simulation result for the DEC-PKT-3 data set. Using Eq. 17, the value of the logarithmic tail probability at each concerned value of queue size $b$ is obtained by the numerical minimization of $\log T^*(s, b)$ with the estimated sets $\{c(q)\}$ and $\{\tau_0(q)\}$. An example is shown in Fig. 4.
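The minimization in Eq. 17 is a simple scan over the estimated $q$-grid; our sketch (taking the outputs of the estimator above as inputs):

```python
import numpy as np

def log_tail(b, s, q_values, tau0, log_c):
    """Evaluate Eq. 17: min over q of log T*(s, b); only q with
    0 < tau0(q) < q contribute (these give t* > 0 in the proof)."""
    best = np.inf
    for q, t0, lc in zip(q_values, tau0, log_c):
        if not 0.0 < t0 < q:
            continue
        val = (lc + t0 * np.log(b * t0 / (s * (q - t0)))
               - q * np.log(b * q / (q - t0)))
        best = min(best, val)
    return best
```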
Fig. 5. Analysis results of the DEC-PKT-3 data set.
In addition, we do not need to plot $\log T^*(s, b)$ at each value of $q$ to find its minimum: a simple program routine can do it for all concerned values of $b$ at once. Our theoretical tail probabilities are on the upper curve in Fig. 5. Comparing with the simulation result seen in the same figure, we found that it has a similar shape and becomes tight as $b$ increases. This validates our result.

We have performed the same analysis with another data set, DEC-PKT-2. The results are summarized in Fig. 6. The DEC-PKT-2 data set, however, has an exactly monofractal structure and can be well modelled by statistical self-similarity with Hurst parameter $H = 0.8$. Our queueing model deals with general multifractal input, so it also covers the case of monofractal processes. Thus it is not surprising that the analysis also provides the correct queueing results in this case.
6 Conclusion
In this paper we studied the queueing performance of a single server, infinite capacity queue with a constant service rate fed by a general multifractal input process. We have provided the following results:
Fig. 6. Analysis results of the DEC-PKT-2 data set.
(i) We derived an asymptotic approximation of the steady-state queue length probabilities.
(ii) We showed that our result gives the well-known Weibullian queue tail in the case of the monofractal Fractional Brownian Motion input process.
(iii) We proved that the class of Gaussian processes with scaling properties is limited to monofractal processes.
(iv) We demonstrated the practical applicability of our approximation and validated the method by queueing analysis of both multifractal and monofractal network traffic cases.

There are several interesting topics for further research. Based on the multifractal process characterization, one of our goals is to build a multifractal traffic model parameterized by the multifractal functions. We also intend to carry out more multifractal analyses of measured LAN/WAN traffic with corresponding performance analysis.
References
1. The Internet Traffic Archive. http://ita.ee.lbl.gov.
2. P. Abry and D. Veitch. Wavelet analysis of long range dependent traffic. IEEE Transactions on Information Theory, 44(1):2–15, January 1998.
3. J. Beran. Statistics for Long-Memory Processes. Chapman & Hall, New York, 1995.
4. D.R. Cox. Statistics: An Appraisal, Proc. 50th Anniversary Conference, chapter Long-Range Dependence: A Review. Iowa State University Press, 1984.
5. T.D. Dang and S. Molnár. Queue asymptotics with general multifractal input. Technical report, Budapest University of Technology and Economics, July 2001.
6. N.G. Duffield and N. O'Connell. Large deviations and overflow probabilities for the general single-server queue, with applications. In Proc. Cambridge Philosophical Society, volume 118, pages 363–374, 1994.
7. A. Erramilli, O. Narayan, A.L. Neidhardt, and I. Saniee. Performance impacts of multi-scaling in wide-area TCP/IP traffic. In Proc. IEEE INFOCOM 2000, volume 1, pages 352–359, Tel Aviv, Israel, 2000.
8. A. Erramilli, O. Narayan, and W. Willinger. Experimental queueing analysis with long-range dependent packet traffic. IEEE/ACM Transactions on Networking, 4(2):209–223, April 1996.
9. A. Feldmann, A.C. Gilbert, and W. Willinger. Data networks as cascades: Investigating the multifractal nature of Internet WAN traffic. ACM Computer Communication Review, 28:42–55, September 1998.
10. J. Gao and I. Rubin. Multifractal modeling of counting processes of long-range dependent network traffic. In Proceedings SCS Advanced Simulation Technologies Conference, San Diego, CA, April 1999.
11. J. Hüsler and V. Piterbarg. Extremes of a certain class of Gaussian processes. Stochastic Processes and their Applications, 83:257–271, 1999.
12. T.G. Kurtz. Stochastic Networks: Theory and Applications, chapter Limit Theorems for Workload Input Models. Oxford University Press, 1996.
13. B.B. Mandelbrot, A. Fisher, and L. Calvet. A Multifractal Model of Asset Returns. Yale University, 1997. Working paper.
14. M.B. Marcus and L.A. Shepp. Sample behaviour of Gaussian processes. In Proceedings of the Sixth Berkeley Symposium, 1972.
15. S. Molnár and T.D. Dang. Scaling analysis of IP traffic components. In ITC Specialist Seminar on IP Traffic Measurement, Modeling and Management, Monterey, CA, USA, September 2000.
16. S. Molnár and A. Vidács. On modeling and shaping self-similar ATM traffic. In 15th International Teletraffic Congress, Washington, DC, USA, June 1997.
17. I. Norros. A storage model with self-similar input. Queueing Systems, 16:387–396, 1994.
18. V.J. Ribeiro, R.H. Riedi, M.S. Crouse, and R.G. Baraniuk. Multiscale queuing analysis of long-range-dependent network traffic. In Proceedings of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
19. R.H. Riedi, M.S. Crouse, V.J. Ribeiro, and R.G. Baraniuk. A multifractal wavelet model with application to network traffic. IEEE Transactions on Information Theory, 45(3):992–1018, April 1999.
20. R.H. Riedi and J. Lévy Véhel. Multifractal properties of TCP traffic: a numerical study. INRIA Research Report 3129, Rice University, February 1997.
21. M.S. Taqqu, V. Teverovsky, and W. Willinger. Is network traffic self-similar or multifractal? Fractals, 5:63–73, 1997.
22. J. Lévy Véhel and B. Sikdar. A multiplicative multifractal model for TCP traffic. In Proc. IEEE ISCC, Hammamet, Tunisia, July 2001.
Some Models for Contention Resolution in Cable Networks

Onno Boxma¹, Dee Denteneer², and Jacques Resing¹

¹ EURANDOM and Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands, {Boxma,Resing}@win.tue.nl
² Philips Research, Digital Signal Processing Group, Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands, [email protected]
Abstract. In this paper we consider some models for contention resolution in cable networks, in case the contention pertains to requests and is carried out by means of contention trees. More specifically, we study a number of variants of the standard machine repair model that differ in the service order at the repair facility. The considered service orders are First Come First Served, Random Order of Service, and Gated Random Order of Service. For these variants, we study the sojourn time at the repair facility. In the case of the free access protocol for contention trees, the first two moments of the access delay in contention are accurately represented by those of the sojourn time at the repair facility under Random Order of Service. In the case of the blocked access protocol, Gated Random Order of Service is shown to be more appropriate.
1 Introduction
Cable networks are currently being upgraded to support bidirectional data transport; see e.g. van Driel et al. [1]. The system is thus extended with an 'upstream' channel to complement the 'downstream' channel that is already present. This upstream channel is time slotted and shared among many stations, so that contention resolution is essential for upstream data transport. An efficient way to carry out the upstream data transport is via a request-grant mechanism, as in Digital Video Broadcasting [2]: stations request data slots in contention with other stations via contention trees. After a successful request, data transfer follows in reserved slots, not in contention with other stations. A tractable model for the access delay due to this request procedure is an essential step toward a better understanding of such a request-grant mechanism, and expressions for the first moments of the distribution of the access delay are particularly relevant. However, the performance analysis of contention trees, see Mathys and Flajolet [3] or Tsybakov [4], has been carried out under the assumption of a Poisson source model. This does not easily lead to properties
of the closed model for a finite number of stations that is appropriate when contention trees are used for reservation. The machine repair model, also known as the computer terminal model or as the time sharing system (e.g. Kleinrock [5], Section 4.11; Bertsekas and Gallager [6], Example 3.22), is one of the key performance models that assumes a finite input population. Therefore, it is a promising model for contention resolution using contention trees. The basic model is as follows. There are N machines working in parallel. After a working period a machine breaks down and joins the repair queue. At the repair facility, a single repairman repairs the machines according to some service discipline. Once repaired, a machine starts working again. In the basic model, the distribution of both the working time and the repair time of machines is assumed to be exponential and the service discipline at the repair facility is assumed to be First Come First Served (FCFS). In this paper, we show that the machine repair model can be an appropriate model for contention resolution in cable networks for the case that so-called Capetanakis-Tsybakov contention trees are used for reservation (see [7,8]). It turns out that the average time spent in contention resolution, obtained via simulations, matches the average sojourn time at the repair facility in the basic machine repair model almost perfectly. However, the basic model fails to accurately predict the variance of the time spent in contention resolution. Closer inspection of contention trees reveals a possible source for this mismatch. Contention trees, to be described in Section 2, deviate from queues with a FCFS discipline in that each station in a given group has the same probability of being served, irrespective of the instant at which it entered the group. This suggests that variants of the basic machine repair model are needed to obtain a more appropriate model for the time spent in contention resolution, and that these variants should have some randomness built into their service discipline. In this paper, we consider two such variants. Firstly, we consider the machine repair model as described above with a random order of service (ROS) discipline. Here, after a repair, the next machine to be repaired is chosen randomly from the machines in the repair queue. We analyse the sojourn time distribution at the repair queue for this model by exploiting a close relationship with the machine repair model considered in Mitra [9], in which the service discipline at the repair facility is processor sharing (PS). We shall see that the variance of the sojourn time under ROS gives an accurate prediction of the access delay of requests in contention, when the so-called free access protocol is used. However, the prediction is not accurate in case of the so-called blocked access protocol. For that protocol, we consider an extension of the machine repair model. In this extension, machines that broke down are first gathered in a waiting room before they are put in random order in the actual repair queue at the instants that this repair queue becomes empty. In the sequel this service discipline will be called gated random order of service (GROS). For the GROS service discipline, just as for the ROS discipline, the average sojourn time at the repair facility is identical to the average sojourn time in case of the FCFS discipline – which, as mentioned above, accurately matches the mean time
spent in contention resolution. Hence, the emphasis of our analysis will be on obtaining an (approximate) expression for the variance of the sojourn time at the repair facility. It is appropriate to comment briefly on the relevance of the variance of the access delay in contention resolution. Firstly, low variability implies low jitter. As such, access variability is a key performance measure in itself. However, the main reason for studying the variance of the access delay is that it is needed in understanding the total average waiting time in cable networks. This follows from the request grant mechanism employed, as explained in the first paragraph of this introduction. Data transfer in cable networks consists of two stages. In the first stage, bandwidth for data transfer is being requested via the contention procedure. Once successfully transmitted, the requests queue up in a second queue. In this queue, the service time distribution is given by the distribution of the number of packets for which transfer is being requested. Now, due to the phenomenon of request merging, which will be described in more detail in Section 2, the number of packets being requested depends on the time spent in contention so that the variance of the service time depends on the variance of the access delay in the contention resolution. Clearly, the variance of the service time is needed to estimate the average waiting time in this second queue. In the present paper we concentrate on the first stage; to analyze the overall delay is a topic for further study. The rest of the paper is organised as follows. In Section 2 we describe the contention resolution process using contention trees in more detail. In Section 3 we review some of the properties of the basic machine repair model. Moreover, we derive expressions for the first two moments of the steady state sojourn time distribution. The machine repair model with ROS service discipline is considered in Section 4. Here, we first relate the model with ROS service discipline to the model with PS. After that, we briefly review the main results from Mitra [9] for the model with PS. In Section 5, we give an approximate derivation of the moments of the sojourn time in the model with GROS service discipline. In Section 6 we present numerical results which show that the models of Section 4 and 5 can be used to approximate the sojourn time for contention resolution in cable networks using contention trees.
2 Access via Contention Trees
Tree algorithms are a popular tool to provide access to a channel that is time slotted and shared among many stations. These algorithms and their many variants are also referred to as stack algorithms or splitting algorithms; we refer to Bertsekas and Gallager [6], Section 4.3, for a survey. In this paper, we will confine attention to the basic ternary tree, illustrated in Figure 1. The basic tree consists of nodes, and each of these nodes comprises three slots of the access channel. A collision occurs if more than one station attempts a transmission in a slot. These collisions are then resolved by recursively splitting the set of colliding stations, plus possible newcomers as explained below, into three disjoint subgroups.
Fig. 1. Basic tree algorithm: slots of the tree with a collision (c) are recursively split until all slots are empty (0) or have a successful transmission (1).
Fig. 2. Same tree as in Figure 1, with a breadth-first ordering of the nodes.
For this, usually, a random mechanism is employed. This splitting continues until all tree slots are either empty or contain a successful transmission. This splitting process can be thought of as a tree, but takes place in time slots of the communication channel devoted to the contention resolution, so that the nodes of the tree must be time ordered. For this, we will use the breadth-first ordering, as illustrated in Figure 2. This basic tree algorithm must be complemented with a 'channel access protocol' that describes the procedure to be followed by stations that have data to transmit and that are not already contending in the tree. We consider two such access protocols: free access and blocked access. In the former protocol, access to the tree is free and any station can transmit a request in the next node of the tree as soon as it has data to transmit. In the latter protocol, the tree is blocked, so that new stations can only transmit requests in the root node of the tree that is started as soon as the current tree has been completed. The stations exhibit the following behaviour:
– A station becomes active in the contention process upon generation of a data packet. In case of free access it will then transmit a request in the next tree node, randomly choosing one of the three slots in this node. In case of blocked access it will wait for the next new tree to be started and transmit its request in one of the slots of the root node of this tree.
– A station stays active until its request has been successfully transmitted.
– While active, the station can update its request (request merging). Hence, packets generated at such an active station do not cause extra requests.
– After successful transmission of the request, the station becomes inactive, to become active again upon the generation of a new data packet.
Note that request merging implies that the number of stations that can be active in contention is bounded. Exactly this property makes results on the performance of contention trees in open models, as investigated in e.g. Mathys and Flajolet [3] or Tsybakov [4], less relevant to contention resolution in cable
Some Models for Contention Resolution in Cable Networks
121
networks. This property also explains the approach in this paper, which approximates the access delay in transmitting a request by means of the sojourn time in a machine repair model.
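To make the mechanics concrete, the following is a minimal Monte Carlo sketch of the blocked-access procedure described above; the function name, its parameters, and the delay bookkeeping are ours, not from the paper. Time is counted in contention slots only, and the data-transfer phase is ignored.

```python
import random

def blocked_access_delays(num_stations, lam, num_trees=1000, q=3, seed=1):
    """Simulate blocked-access q-ary tree contention with request merging:
    a station holds at most one outstanding request, contends in the root
    node of the next tree, and splits randomly on collisions (breadth-first
    node order). Returns per-request access delays in tree slots."""
    rng = random.Random(seed)
    clock = 0.0
    next_packet = [rng.expovariate(lam) for _ in range(num_stations)]
    delays = []
    for _ in range(num_trees):
        # Stations that became active before the root node of this tree.
        contenders = [s for s in range(num_stations) if next_packet[s] <= clock]
        became_active = {s: next_packet[s] for s in contenders}
        pending = [contenders]                 # unresolved groups, FIFO
        while pending:
            group = pending.pop(0)             # one tree node = q slots
            slots = [[] for _ in range(q)]
            for s in group:
                slots[rng.randrange(q)].append(s)
            clock += q
            for slot in slots:
                if len(slot) == 1:             # successful transmission
                    s = slot[0]
                    delays.append(clock - became_active[s])
                    next_packet[s] = clock + rng.expovariate(lam)
                elif len(slot) > 1:            # collision: split again
                    pending.append(slot)
    return delays
```

With N = 100 stations, Λ = Nλ = 2.5 and 1000 trees, the sample mean of these delays should be of the same order as the blocked-access column of Table 1 in Section 6.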
3 Properties of the Basic Machine Repair Model
First we introduce some notation and quote some properties of the basic machine repair model. The total number of machines in the system is denoted by N. The machines work in parallel and break down, independently, after an exponentially distributed working period with parameter λ. Machines that broke down queue up in the repair queue, where they are served FCFS by a single repairman. The repair times of machines are exponentially distributed with parameter µ. With the random variables X and Y we denote the steady state number of machines that are in Q_W (i.e., are working) and that are in Q_R (i.e., are in repair), respectively. Clearly, the number of working machines and the number of machines in repair evolve as Markov processes. Their steady state distributions are (in fact even for generally distributed working periods, cf. [5,10]):

$$\Pr(X = k) = \Pr(Y = N - k) = \frac{\rho^k/k!}{\sum_{i=0}^{N} \rho^i/i!}, \qquad k = 0, \ldots, N, \qquad (1)$$
where ρ := µ/λ. For the mean and variance of X and Y we have

$$E(X) = \rho\,(1 - B_N(\rho)), \qquad E(Y) = N - E(X), \qquad (2)$$
$$\mathrm{var}(X) = \mathrm{var}(Y) = E(X) - \rho B_N(\rho)\,[N - E(X)], \qquad (3)$$

where B_N(ρ) denotes Erlang's loss probability, which is given by

$$B_N(\rho) = \frac{\rho^N/N!}{\sum_{i=0}^{N} \rho^i/i!}. \qquad (4)$$
Indeed, it is well known that the number of operative machines has the same distribution as the number of busy lines in the classical Erlang loss model. We now turn to the moments of the sojourn time of an arbitrary machine at the repair facility. To this end, consider the time epoch at which an arbitrary machine breaks down and jumps to the repair queue. Stochastic quantities related to this instant will be denoted by a subscript 1. Thus X_1 is the number of working machines at this instant, and Y_1 is the number of machines in repair at this instant. From the arrival theorem, see Sevcik and Mitrani [11], it follows that the distributions of X_1 and Y_1 are given by (1), with N replaced by N − 1:

$$\Pr(X_1 = k) = \Pr(Y_1 = N - 1 - k) = \frac{\rho^k/k!}{\sum_{i=0}^{N-1} \rho^i/i!}, \qquad k = 0, \ldots, N - 1. \qquad (5)$$
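The quantities in (1), (4) and (5) are straightforward to evaluate numerically. The following sketch (our own helper, adequate for moderate ρ) computes the steady-state law of Y and Erlang's loss probability:

```python
def repair_queue_pmf(n, rho):
    """Distribution of Y from Eq. (1): Pr(Y = n - k) proportional to rho^k / k!."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * rho / k)      # rho^k / k!, up to a common scale
    total = sum(w)
    return [w[n - j] / total for j in range(n + 1)]  # entry j = Pr(Y = j)

def erlang_loss(n, rho):
    """Erlang's loss probability B_n(rho) of Eq. (4)."""
    return repair_queue_pmf(n, rho)[0]   # Pr(Y = 0), i.e. the k = n term
```

Replacing n by N − 1 gives the distribution (5) seen by an arriving machine.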
The sojourn time of an arbitrary machine at the repair facility equals its own repair time plus the sum of the repair times of the machines already present at the repair facility. Thus, denoting this sojourn time by S, we have that
$$S = \sum_{i=1}^{Y_1 + 1} B_i, \qquad (6)$$
with B_i, i = 1, 2, ..., a sequence of independent, exponentially distributed random variables with parameter µ. Equation (6) enables us to obtain the Laplace-Stieltjes transform (LST) of the sojourn time at the repair facility (see also [10]). Here, however, we are mainly interested in the first two moments of the sojourn time. These can be obtained by consideration of the moments of the random sum, i.e.,

$$E(S_{FCFS}) = \frac{1}{\mu}\,\bigl(N - \rho(1 - B_{N-1}(\rho))\bigr), \qquad (7)$$
$$\mathrm{var}(S_{FCFS}) = \frac{1}{\mu^2}\,\bigl(N - \rho B_{N-1}(\rho)\,[\,N - 1 - \rho(1 - B_{N-1}(\rho))\,]\bigr). \qquad (8)$$
Now, for N large and N ≫ µ/λ, B_N(ρ) goes to zero like ρ^N/N!. Hence, in that case, the following are extremely sharp approximations:

$$E(S_{FCFS}) \approx \frac{N}{\mu} - \frac{1}{\lambda}, \qquad \mathrm{var}(S_{FCFS}) \approx \frac{N}{\mu^2}. \qquad (9)$$
In Sections 4 and 5 we shall study the sojourn time distribution at Q_R under the assumption that the service discipline at that queue is Random Order of Service (ROS) and Gated Random Order of Service (GROS), respectively. The mean sojourn time in Q_R is the same under FCFS, ROS and GROS; this is a direct consequence of Little's formula and the fact that the distribution of the number of customers in Q_R is the same for any work-conserving service discipline that does not pay attention to the actual service requests of customers. We therefore focus in particular on the variance of the sojourn time in Q_R. Formula (9) shows that for the FCFS discipline, asymptotically, this variance is linear in the number of machines and does not depend on λ, the parameter of the distribution of the working times.
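As a numerical sketch of (7)-(9) (helper names are ours), using the standard recurrence for Erlang's loss probability:

```python
def erlang_b(n, rho):
    """Erlang loss probability B_n(rho) via the stable recurrence
    B_k = rho*B_{k-1} / (k + rho*B_{k-1}), with B_0 = 1."""
    b = 1.0
    for k in range(1, n + 1):
        b = rho * b / (k + rho * b)
    return b

def fcfs_sojourn_moments(n, lam, mu):
    """Exact E(S_FCFS) and var(S_FCFS) from Eqs. (7)-(8), together with
    the large-N approximations of Eq. (9)."""
    rho = mu / lam
    b = erlang_b(n - 1, rho)
    mean = (n - rho * (1.0 - b)) / mu
    var = (n - rho * b * (n - 1 - rho * (1.0 - b))) / mu ** 2
    return mean, var, n / mu - 1.0 / lam, n / mu ** 2
```

With µ = log 3 and Λ = Nλ as in Section 6, e.g. N = 100 and Λ = 2.5, the exact mean evaluates to about 51.0, matching the ES column of Table 1.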
4 The Model with ROS Service Discipline
Again we consider the basic machine repair model, but now the service discipline at Q_R is random order of service. For reasons that will soon become clear, we assume that the system contains not N but N + 1 machines. The main goals of this section are: (i) to determine the LST of the waiting time distribution at Q_R, (ii) to relate this distribution to the sojourn time distribution at Q_R in case the service discipline is PS instead of ROS, and (iii) to determine the asymptotic behaviour of the variance of the waiting (and sojourn) time at Q_R under the ROS discipline. Consider a tagged machine, C, at the instant it arrives at Q_R. Let S_ROS (W_ROS) denote the steady state sojourn (waiting) time of C at Q_R. S_ROS is the sum of W_ROS and a service time that is independent of W_ROS, and hence
we can concentrate on W_ROS. We denote by Y_1^(N+1) the number of machines in Q_R, as seen by C upon arrival in Q_R. Introduce

$$\phi_j(\omega) := E[e^{-\omega W_{ROS}} \mid Y_1^{(N+1)} = j + 1], \qquad \mathrm{Re}\,\omega \ge 0, \quad j = 0, \ldots, N - 1.$$
We can write, for Re ω ≥ 0,

$$E[e^{-\omega W_{ROS}} \mid W_{ROS} > 0] = \sum_{j=0}^{N-1} P(Y_1^{(N+1)} = j + 1 \mid Y_1^{(N+1)} > 0)\,\phi_j(\omega). \qquad (10)$$
The following set of N equations for the N unknown functions φ_0(ω), ..., φ_{N−1}(ω) holds:

$$\phi_j(\omega) = \frac{\mu + (N-j-1)\lambda}{\mu + (N-j-1)\lambda + \omega}\left[\frac{(N-j-1)\lambda}{\mu + (N-j-1)\lambda}\,\phi_{j+1}(\omega) + \frac{\mu}{\mu + (N-j-1)\lambda}\left(\frac{1}{j+1} + \frac{j}{j+1}\,\phi_{j-1}(\omega)\right)\right]. \qquad (11)$$
Notice that the pre-factors of φ_{−1}(ω) and φ_N(ω) equal zero. Formula (11) can be understood in the following way. The pre-factor (µ + (N−j−1)λ)/(µ + (N−j−1)λ + ω) is the LST of the time until the first 'event': either an arrival at Q_R or a departure from Q_R. An arrival occurs first with probability (N−j−1)λ/(µ + (N−j−1)λ). In this event, the memoryless property of the exponential working and repair times implies that the tagged machine C sees the system as if it only now arrives at Q_R, meeting j + 2 other machines there. A departure occurs first with probability µ/(µ + (N−j−1)λ). In this event, C is with probability 1/(j+1) the one to leave the waiting room and enter the service position; if it does not leave, it sees Q_R as if it only now arrives, meeting j other machines there.
We can use (11) to obtain numerical values of E(W_ROS | W_ROS > 0) and var(W_ROS | W_ROS > 0); a numerical sketch is given after (14). Formula (11) can also be used to study this mean and variance asymptotically, for N → ∞. In fact, for this purpose we can also use the analysis given by Mitra [9] for a strongly related model: the machine-repair model with processor sharing at Q_R and with N (instead of N + 1) machines. Denote the LST of the sojourn time distribution of a machine meeting j machines at Q_R, in the case of processor sharing, by ψ_j(ω). A careful study of Formula (11) and the explanation following it reveals that exactly the same set of equations holds for ψ_j(ω), if in the PS case there are not N + 1 but N machines in the system. Not only do we have φ_j(ω) = ψ_j(ω), j = 0, ..., N − 1, but it also follows from (5) that P(Y_1^(N+1) = j + 1 | Y_1^(N+1) > 0) = P(Y_1^(N) = j), j = 0, ..., N − 1. The above equalities, combined with (10), imply that W_ROS, conditionally upon it being positive, in the machine-repair system with N + 1 machines, has the same distribution as the sojourn time under processor sharing in the corresponding system with N machines. Adding a superscript (N) for the case of a machine-repair system with N machines, we can write:

$$P(S_{PS}^{(N)} > t) = P(W_{ROS}^{(N+1)} > t \mid W_{ROS}^{(N+1)} > 0). \qquad (12)$$
This equivalence result between ROS and PS may be viewed as a special case of a more general result in [12] (see [13] for another special case). Using (12), it is easily verified that, for the machine repair model with N machines, E S_ROS = E S_PS = E S_FCFS, just as indicated in Section 3. Multiplication by t and integration over t in (12) yields:

$$\mathrm{var}(S_{PS}^{(N)}) = \frac{\mathrm{var}(W_{ROS}^{(N+1)})}{P(W_{ROS}^{(N+1)} > 0)}, \qquad (13)$$

where P(W_ROS^(N+1) > 0) is easily obtained from (1). If N is large and N > µ/λ, then P(W_ROS^(N+1) = 0) is negligibly small. The previous formula hence implies: for N → ∞, var(S_PS^(N)) ∼ var(W_ROS^(N)) – and hence also var(S_PS^(N)) ∼ var(S_ROS^(N)). For an asymptotic analysis of E W_ROS^(N) and var(W_ROS^(N)) we can thus immediately apply corresponding asymptotics of Mitra [9] for the PS-variant. Mitra [9] shows that S_PS is hyper-exponentially distributed. This, in turn, immediately implies that (see Proposition 12 in [9])

$$\mathrm{var}(S_{PS}) \ge (E\,S_{PS})^2. \qquad (14)$$
where P(WROS > 0) is easily obtained from (1). If N is large and N > µ/λ, (N +1) then P(WROS = 0) is negligibly small. The previous formula hence implies: (N ) (N ) (N ) (N ) For N → ∞, var(SP S ) ∼ var(WROS ) – and hence also var(SP S ) ∼ var(SROS ). (N ) (N ) For an asymptotic analysis of EWROS and var(WROS ) we can thus immediately apply corresponding asymptotics of Mitra [9] for the PS-variant. Mitra [9] shows that SP S is hyper-exponentially distributed. This, in turn, immediately implies that (see Proposition 12 in [9]), var(SP S ) ≥ (ESP S )2 .
(14)
Hence var(SP S ) = O(N 2 ) for N → ∞, which sharply contrasts with the O(N ) behavior for FCFS (cf. (9)).
5
The Model with GROS Service Discipline
In this section, we consider the model with GROS service discipline as described in Section 1. Again, we let Y denote the number of machines in the total waiting area (i.e. waiting room plus waiting queue). Obviously the distribution of Y equals the distribution of the number of machines in the repair queue in the standard model described in Section 3, and is given by (1). We will now consider the sojourn time until repair, SGROS , of an arbitrary (tagged) machine for the model with GROS service discipline. Observe that (1)
Y1
SGROS =
i=1
(1)
(2)
(1) Bi
Y1
+
(2)
+1 i=1
(2)
Bi .
(15)
Here, the random variables Bi and Bi are independent, exponentially dis(1) tributed service times with parameter µ. The random variable Y1 is the number of machines in the waiting queue (including the one in repair) at the instant that (2) the tagged machine breaks down. The random variable Y1 + 1 equals the random position allocated to the tagged machine in the waiting queue at the instant it is moved from the waiting room to the waiting queue. This model is not a closed product-form network, so that an exact analysis of the sojourn time is considerably more difficult than the analysis for the models considered above. However, a particularly easy approximation of the first moments can be obtained, if one makes the following two assumptions:
Some Models for Contention Resolution in Cable Networks
125
– The two components of SGROS in (15) are uncorrelated. (1)
(2)
– The random variables Y1 and Y1 are uniformly distributed on 0, 1, · · · , Y1 , where the random variable Y1 is as defined in Section 3. Neither assumption is strictly valid; however, for the case considered in which N λ > µ and N large, they appear to be good approximations. Using (15) and the fact that BN (ρ) → 0 like ρN /N ! for N large and N >> µ/λ, see above (9), it follows that var(SGROS ) ≈
1 (N − µ/λ)2 /6 + (4N − 2µ/λ)/3 . 2 µ
(16)
Thus, for large N , the GROS variance is much larger than the variance in the machine repair model with the FCFS service discipline. However, it is considerably smaller than the variance in the machine repair model with the ROS service discipline.
6
A Comparison
We now turn to a comparison of the access delay due to contention resolution and the sojourn time in the variants of the machine repair model. In this comparison, we will confine ourselves to the first two moments of the various distributions: we consider first moments in Section 6.1 and standard deviations in Section 6.2. The procedures for contention resolution were described in Section 2, and the access delay due to contention resolution is the delay experienced by stations that use contention trees for reservation. More formally, it is defined as the number of tree slots elapsed from the instant a station becomes active until the instant its request is successfully transmitted. As already indicated in Section 2, there are no closed form expressions for the moments of the distribution of the access delay. Hence, these are obtained via simulation. In these simulations, the stations execute the procedure outlined in Section 2. Thus, we use a source model in which each of a finite number, N , of stations generates packets according to a Poisson process with rate λ, independently of the other stations. B , for the ’free’ F and ES The average delays thus obtained are denoted ES and ’blocked’ channel access protocol respectively. Likewise, the estimated standard deviations are denoted by σ F and σ B . The ’hat’ serves as a reminder that the moments are estimated from a simulation. We use 1000 trees in each simulation. The moments of the sojourn time of the various machine repair models have been obtained in Sections 3 to 5. In utilizing the results from these sections, we will use µ = log(3) for the rate of the service time distribution. The motivation behind this value is in Janssen and de Jong ([14], Eq. 26-27). They show that the average number of nodes to complete a tree with n contenders is well approximated by n/ log(3).
126
O. Boxma, D. Denteneer, and J. Resing
F , with blocked tree, Table 1. Average access delay for reservation with free tree, ES B , and expected sojourn time for the machine repair model, ES, for number of ES stations N , and total traffic intensity Λ N = 100 N = 200 N = 1000 F ES B ES ES F ES B ES ES F ES B ES Λ ES 2.5 5.0 10.0 16.5
6.1
43.0 63.0 73.1 77.1
50.1 70.5 80.5 84.5
51.0 71.0 81.0 84.9
86.0 125.9 146.0 154.5
101.3 141.5 161.6 169.5
102.0 142.0 162.0 169.9
429.1 629.9 729.7 824.9
509.4 710.8 811.2 848.5
510.0 710.0 810.0 850.0
First Moments
The average access delays for the tree models and the expected sojourn time for the machine repair model are given in Table 1. There is only one entry in the table corresponding to the expected sojourn time, as it is the same for all variants of the machine repair model considered. In the table, we have varied the number of stations, N , and the total traffic intensity Λ := N λ. The primary purpose of this table is to compare average access delay with expected sojourn time. Whence, the intensities are chosen so that Λ is well above µ, which is the case most relevant to access in cable networks. From the figures we can draw various conclusions. Firstly, and most importantly, we observe that the expected sojourn time in the machine repair model provides an excellent approximation to the average access delay for reservation with contention trees. The agreement with the figures obtained via simulations with blocked access is almost perfect; the agreement with the results for free access is less good. The former result is closely related to a result in Denteneer and Pronk [15] on the average number of contenders in a contention tree. Secondly, we see that free access is a more efficient access protocol than blocked access in that the average access delay with the former is smaller than the average delay with the latter. This result parallels the result for the open model and the Poisson source model, as graphically illustrated in Figure 16 of Mathys and Flajolet [3]. The considered variants of the machine repair model all lead to the same expected sojourn time and are apparently not sufficiently detailed as models to capture the first moment differences between the blocked and the free access protocols. Finally, we observe that all quantities investigated in Table 1 depend approximately linearly on the number of stations (for the cases with N >> µ/λ). 6.2
Standard Deviations
We next turn to a numerical comparison of the standard deviations in the various models. These are given in Table 2, again for different N and Λ.
Some Models for Contention Resolution in Cable Networks
127
Table 2. Standard deviations of the access delay for reservation with free tree, σ F, with blocked tree, σ B , and standard deviations for the basic machine repair model, σ, the ROS machine repair model, σROS , and the GROS machine repair model, σGROS for number of stations N , and total traffic intensity Λ Tree Λ σ σ F B 2.5 5.0 10.0 16.5
46.1 68.0 78.4 83.2
N = 100 Repair σ σROS σGROS
19.5 26.7 30.4 31.5
9.1 9.1 9.1 9.1
50.45 70.18 80.13 84.06
22.8 30.6 34.6 36.2
N = 1000 Tree Repair σ σ σ σROS σGROS F B 429.1 629.9 729.7 786.6
185.1 261.6 299.2 310.6
28.8 28.8 28.8 28.8
509.64 709.39 809.34 848.73
210.4 291.7 332.4 348.4
Several conclusions can be drawn from the table. Firstly, we observe that the standard deviation in either tree model changes with traffic intensity and grows approximately linearly with the number of stations. Neither of these properties is captured by the basic machine repair model; there, the standard deviation of the sojourn time is independent of the traffic intensity and grows only with the square root of the number of stations in the model. Secondly, the standard deviation of the access delay in the blocked tree model corresponds closely to the corresponding figure for the GROS machine repair model. The difference between the two standard deviations is approximately 15%. The results for the GROS model capture both the dependence on the traffic intensity and the dependence on the number of machines that is observed in the tree simulations. Similarly, the standard deviation of the access delay in the free tree model corresponds closely to the corresponding figure for the ROS machine repair model. Looking more closely at the results, we see that the standard deviations obtained for the GROS machine repair model are always larger than those obtained in the blocked tree simulations. We consider this as a fundamental limitation of the machine repair model as an approximation. The batch nature of the contention trees implies that it takes some initial time before the first successful request is transmitted. After this initial period, successful transmissions occur fairly uniformly over the length of the trees. Thus the variability of the waiting period is somewhat reduced as compared to the proposed model in which the successful transmissions occur uniformly over the full length of the tree. Thirdly, the standard deviations with the free access protocol far exceed those with the blocked access protocol. This result has no parallel in the open model. In fact, Figure 17 in Mathys and Flajolet [3] shows that the standard deviation of the delay with free access protocol is below the corresponding value with blocked access for most traffic intensities. However, for large traffic intensities just below the stability bound the order reverses and blocked access then results in smaller standard deviations. Of course, our simulations operate at total traffic intensities that exceed the stability bound for the open system.
Summarizing, our numerical experiments show that the expected sojourn time in the repair stage perfectly matches the average access delay for both variants of the tree procedure. The sojourn time variance in the model with ROS service discipline gives a good approximation of the access delay variance when using free trees. Similarly, the sojourn time variance in the model with GROS service discipline gives a good approximation of the access delay variance when using blocked trees. More numerical results are presented in [16].
Acknowledgement. The authors would like to thank Marko Boon for doing a major part of the numerical calculations.
References
1. Driel, C.-J. van, van Grinsven, P.A.M., Pronk, V., Snijders, W.A.M.: The (r)evolution of access networks for the information super-highway. IEEE Communications Magazine 35 (1997) 2-10
2. Digital Video Broadcasting (DVB); DVB interaction channel for Cable TV distribution systems (CATV), working draft (Version 3), June 28, 2000, based on European Telecommunications Standard 300 800 (March 1998)
3. Mathys, P., Flajolet, Ph.: Q-ary collision resolution algorithms in random-access systems with free or blocked channel access. IEEE Trans. Inf. Theory 31 (1985) 217-243
4. Tsybakov, B.: Survey of USSR contributions to random multiple-access communications. IEEE Trans. Inf. Theory 31 (1985) 143-165
5. Kleinrock, L.: Queueing Systems, Vol. 2. Wiley, New York (1976)
6. Bertsekas, D.P., Gallager, R.G.: Data Networks. Prentice-Hall, Englewood Cliffs, NJ (1992)
7. Capetanakis, J.I.: Tree algorithms for packet broadcast channels. IEEE Trans. Inf. Theory 25 (1979) 505-515
8. Tsybakov, B.S., Mikhailov, V.A.: Random multiple access of packets: Part and try algorithm. Probl. Peredachi Inf. 16 (1980) 65-79
9. Mitra, D.: Waiting time distributions for closed queueing network models of shared-processor systems. In: F.J. Kylstra (ed.), Performance '81, NHPC, Amsterdam (1981) 113-131
10. Kobayashi, H.: Modeling and Analysis. An Introduction to System Performance Evaluation Methodology. Addison-Wesley, Reading, MA (1978)
11. Sevcik, K.C., Mitrani, I.: The distribution of queueing network states at input and output instants. In: M. Arato et al. (eds.), Performance '79, NHPC, Amsterdam (1979) 319-335
12. Borst, S.C., Boxma, O.J., Morrison, J.A., Núñez Queija, R.: The equivalence of processor sharing and service in random order. SPOR-Report 2002-01, Eindhoven University of Technology (2002)
13. Cohen, J.W.: On processor sharing and random order of service (Letter to the editor). J. Appl. Probab. 21 (1984) 937
14. Janssen, A.J.E.M., de Jong, M.J.M.: Analysis of contention tree-algorithms. IEEE Trans. Inf. Theory 46 (2000) 2163-2172
15. Denteneer, D., Pronk, V.: On the number of contenders in a contention tree. Proc. ITC Specialist Seminar, Girona (2001) 105-112
16. Boxma, O.J., Denteneer, D., Resing, J.A.C.: Some models for contention resolution in cable networks. EURANDOM Report 2001-037 (2001)
Adaptive Creation of Network Applications in the Jack-in-the-Net Architecture

Tomoko Itao¹, Tetsuya Nakamura¹, Masato Matsuo¹, Tatsuya Suda², and Tomonori Aoyama³

¹ NTT Network Innovation Laboratories, Nippon Telegraph and Telephone Corporation (NTT), 3-9-11 Midori-cho, Musashino-shi, Tokyo, 180-8585, Japan, {tomoko, tetsuya, matsuo}@ma.onlab.ntt.co.jp
² Information and Computer Science, University of California, Irvine, Irvine, CA 92697-3425, USA, [email protected]
³ Information and Communication Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan, [email protected]
Abstract. The Jack-in-the-Net Architecture (Ja-Net) is a biologically inspired approach to designing adaptive network applications in large-scale networks. In Ja-Net, a network application is dynamically created from a collection of autonomous components called cyber-entities. Cyber-entities first establish relationships with other cyber-entities and collectively provide an application through interacting or collaborating with relationship partners. The strength of a relationship is the measure of the usefulness of the partner and is adjusted based on the level of satisfaction indicated by a user who received an application. As time progresses, cyber-entities self-organize based on strong relationships, and useful applications that users prefer emerge. We implemented the Ja-Net platform software and cyber-entities to verify how popular applications (i.e., applications that users prefer) are created in Ja-Net.
1 Introduction
We envision in the future that the Internet spans the entire globe, interconnecting all humans and all man-made devices and objects. When a network scales to this magnitude, it will be virtually impossible to manage a network through a central, coordinating entity. A network must be autonomous and contain built-in mechanisms to support such key features as scalability, adaptability, simplicity,
A part of Tatsuya Suda’s research presented in this paper was supported by the National Science Foundation through grants ANI-0083074 and ANI-9903427, by DARPA through Grant MDA972-99-1-0007, by AFOSR through Grant MURI F49620-00-1-0330, and by grants from the University of California MICRO Program, and Nippon Telegraph and Telephone Corporation (NTT). Tatsuya Suda also holds the title of NTT Research Professor, and his NTT contact information is same as the co-authors’ contact information.
and survivability. We believe that applying concepts and mechanisms from the biological world provides a unique and promising approach to solving key issues that future networks face. The Jack-in-the-Net Architecture (Ja-Net) [1][2] is a biologically inspired approach to design adaptive network applications in future networks. The biological concept that we apply in Ja-Net is emergent behavior, where desirable structure and characteristics emerge from a group of interacting individual entities. In Ja-Net, a network application is dynamically created from a group of interacting autonomous components called cyber-entities. A cyber-entity is software with simple behaviors such as migration, replication, reproduction, relationship establishment and death, and implements a set of actions related to a service that the cyber-entity provides. An application is provided through interactions of its cyber-entities. In providing applications, cyber-entities first establish relationships with other cyber-entities and then choose cyber-entities to interact with based on relationships. The strength of a relationship indicates the usefulness of the partner and is dynamically adjusted based on the level of satisfaction indicated by a user who received an application. As time progresses, cyber-entities self-organize based on strong relationships, resulting in useful emergent applications that users prefer. In this paper, we describe the design and implementation of mechanisms to create applications adaptively in Ja-Net. The rest of the paper is organized in the following manner. Section 2 describes related work. Section 3 describes the overview of the Ja-Net Architecture and the design of cyber-entities. Section 4 describes experiments on dynamic creation of applications in Ja-Net. Conclusions and future work are discussed in Section 5.
2 Related Work
Currently, some frameworks and architectures exist for dynamically creating applications. One such example is Hive [3], where an application is provided through interaction of distributed agents. In Hive, agents choose agents to interact with by specifying the Java interface object that each agent implements. Thus, interaction in Hive is limited to among the agents that mutually implement the interface object of the partner. Unlike Hive, Ja-Net supports ACL (Agent Communication Language) [4] to maximize flexibility in cyber-entity interactions. Bee-gent [5] is another example of a framework to create applications dynamically. It uses a centralized mediator model; a mediator agent maintains a centralized application scenario (logic) and coordinates agent interactions to reduce complexity in multi-agent collaboration. With this centralized mediator approach, Bee-gent restricts the flexibility and scalability of the agent collaboration. Unlike Bee-gent, in Ja-Net, there is no centralized entity to coordinate cyber-entity services, and thus, it scales in the number of cyber-entities. In addition, Ja-Net goes one step further than these architectures by providing built-in mechanisms to support adaptive creation of network applications that reflect user preferences and usage patterns.
Fig. 1. Ja-Net node structure: cyber-entities run atop the Ja-Net platform software, which runs on a virtual machine (Java) over heterogeneous operating systems and hardware, with wired and wireless interfaces to links A-D.
Current popular mobile agent systems, including IBM's Aglets [6], General Magic's Odyssey [7], ObjectSpace's Voyager [8] and the University of Stuttgart's Mole project [9], adopt the view that a mobile agent is a single unit of computation. They do not employ biological concepts nor take the view that a group of agents may be viewed as a single functioning collective entity.
3 Design of Cyber-Entities

3.1 Overview of the Ja-Net Architecture
Each node in Ja-Net consists of the layers shown in Figure 1. Ja-Net platform software (referred to as the platform software in the rest of the paper) runs using a virtual machine (such as the Java virtual machine) and provides an execution environment and supporting facilities for cyber-entities, such as communication and life-cycle management of cyber-entities. Cyber-entities run atop the platform software. The minimum requirement for a network node to participate in Ja-Net is to run the platform software. A cyber-entity consists of three main parts: attributes, body and behaviors. Attributes carry information regarding the cyber-entity (e.g., cyber-entity ID, service type, keywords, age, etc.). The cyber-entity body implements a service provided by a cyber-entity. Cyber-entity behaviors implement non-service related actions of a cyber-entity such as migration, replication, relationship establishment and death.
3.2 Cyber-Entity Communication
In order to collectively provide an application by a group of cyber-entities, cyber-entities exchange messages during the execution of cyber-entity services. Upon receiving a message, a cyber-entity interprets the message, invokes an appropriate service action, and sends the outcome of the action to another cyber-entity that it interacts with. This, in turn, triggers service invocation at those cyber-entities that receive a message. Cyber-entities may also invoke their services based on an event notification. In Ja-Net, various events may be generated, triggered by changes in the network or in the real world (such changes may be captured by sensors).
In Ja-Net, to maximize the flexibility in application creation, we adopt the Speech Act based FIPA ACL (Agent Communication Language) [4], with extensions specific to Ja-Net, as the communication language of cyber-entities. In the Ja-Net ACL, we define a small number of communicative acts (such as request, agree, refuse, inform, failure, query-if, advertise, recruit, and reward) to facilitate communication between cyber-entities. Advertise, recruit and reward are not among the FIPA ACL communicative acts and are specific to the Ja-Net ACL. They are used during the execution of the relationship establishment behavior (please see Section 3.4). In the Ja-Net ACL, an event notification message is also delivered in ACL using the inform communicative act. Each ACL message exchanged between cyber-entities contains a communicative act and parameters such as :receiver, :sender, :in-reply-to, :ontology, :sequence-id and :content. The :receiver and :sender parameters specify the receiver and the sender of the current message, respectively. :In-reply-to specifies to which message it is replying and is used to manage the message exchange flow between cyber-entities. :Ontology specifies the vocabulary set (dictionary) used to describe the content of the message. :Sequence-id specifies a unique identifier of a message sequence in providing an application. A sequence-id is generated by a cyber-entity at the initial point of an application and piggybacked by each ACL message exchanged during the application. :Content specifies data or information associated with a communicative act in the message. A :content parameter is described with the Extensible Markup Language (XML) [10].
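As an illustration, a Ja-Net ACL request could carry the parameters listed above roughly as follows (a hypothetical message of ours; all identifiers and the XML payload are invented for illustration):

```python
# A hypothetical Ja-Net ACL message with the parameters described above.
ticket_request = {
    "communicative-act": "request",
    ":sender": "ce-0017",        # CE-id of the requesting cyber-entity
    ":receiver": "ce-0042",      # CE-id of the receiving cyber-entity
    ":in-reply-to": None,        # null: first message of this exchange
    ":ontology": "theater-show", # vocabulary used in :content
    ":sequence-id": "app-314",   # shared by all messages of one application
    ":content": "<request><item>ticket</item><count>1</count></request>",
}
```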
3.3 Cyber-Entity Body
A cyber-entity service is implemented as a finite state machine. A cyber-entity may have multiple state models and execute them in parallel. Each state model consists of states and state transition rules. A state implements an atomic service action and the message exchanges associated with the action (to allow inputting data to and outputting data from a given action in a given state). A state transition rule associated with a state specifies the next state to transit to. When an action in a given state completes, the current state moves to the next state based on the state transition rule. In sending the outcome of a service action, a cyber-entity may either respond to a cyber-entity that sent the previous message, or send the message to another cyber-entity (or cyber-entities) by selecting a cyber-entity (or cyber-entities) to interact with using relationships (please see Section 3.4 for the interaction partner selection mechanism). Upon receiving a message from another cyber-entity, a cyber-entity invokes an appropriate state (action) that can handle the message by examining the parameters of the incoming message in the following manner. If the parameter :in-reply-to is set in the incoming message, it is in response to a previously transmitted message. In this case, the cyber-entity compares the data type of the :content in the incoming message with the input data type required by the action (state) where the previous message was transmitted, and invokes the action if it can take the incoming message as its input.
Fig. 2. Function components at a cyber-entity: the cyber-entity body (actions, state models, state model engine, caster, proxy, relationship record) and behaviors (e.g., relationship establishment) in a base class of a cyber-entity, on top of the platform software's communication service; numbered arrows denote (1) register, (2) dispatch, (3) get, (4) invoke, (5) create, (6) act, (7) set, (8) select, (9) tell, and (10) convey/spread.
If the parameter :in-reply-to of the incoming message is null, the incoming message is the first message from the sender cyber-entity. In this case, the receiver cyber-entity examines the current state of a state model that is ready to interact with a new cyber-entity and the initial state of each and every state model that it implements. Among them, a state that can take the incoming message as its input is then invoked. Figure 2 shows the main function components (classes) of a cyber-entity. In our current design, classes in the cyber-entity body except actions, as well as classes in cyber-entity behaviors, are implemented in a base class of a cyber-entity, and all cyber-entities are derived from the base class. Service actions are implemented by cyber-entity designers and registered with a state model (depicted as (1) "register" in Figure 2). The caster receives an ACL message from another cyber-entity via the communication service in the platform software (depicted as (2) "dispatch"), examines the state models (depicted as (3) "get") and invokes an appropriate state (action) (depicted as (4) "invoke" and (6) "act"). The state model engine is a generic class to execute a state model. A proxy represents a remote cyber-entity and provides an API to send a message to the remote cyber-entity (depicted as (9) "tell"). The outgoing message is unicast (or multicast)/broadcast by the platform (depicted as (10) "convey/spread").
3.4 Relationship Management
Relationship Attributes. A relationship may be viewed as a (cyber-entity's) information cache regarding other cyber-entities. Table 1 shows example relationship attributes stored in a relationship record (depicted as "Relationship Record" in Figure 2) at a cyber-entity. CE-id uniquely identifies a relationship partner cyber-entity. Action-name specifies an action of the cyber-entity itself used to interact with a relationship partner cyber-entity. Service-properties stores information regarding the service that a relationship partner cyber-entity provides (such as the service type and keywords of a relationship partner cyber-entity). Access-count may be incremented when a service message is exchanged with a relationship partner cyber-entity.
Table 1. Example attributes of a relationship record at a cyber-entity

Attribute           Meaning
CE-id               A globally unique identifier of a relationship partner cyber-entity.
Action-name         An action of the cyber-entity itself that may be used to interact with a relationship partner cyber-entity.
Service-properties  Information regarding the service that a relationship partner cyber-entity provides.
Access-count        The number of interactions with a relationship partner cyber-entity.
Strength            Indication of the usefulness of a relationship partner cyber-entity.
Strength evaluates the usefulness of a partner cyber-entity and is used to help cyber-entities select useful interaction partners.
Relationship Establishment. Cyber-entities first establish relationships with other cyber-entities to interact with. For instance, a cyber-entity that has just migrated to a new node may broadcast an advertise message specifying information regarding the sender cyber-entity (e.g., service type and/or attributes) to establish relationships with nearby cyber-entities. Upon receiving an advertise message, a cyber-entity creates a new relationship record (depicted as (5) "create" in Figure 2) and stores the sender cyber-entity's CE-id and information obtained from the incoming advertise message in the relationship record. Additional information about a relationship partner cyber-entity obtained through interaction may be stored in the Service-properties of its relationship record (depicted as (7) "set" in Figure 2). Alternatively, a cyber-entity may broadcast a recruit message specifying conditions on a partner (e.g., service type and/or attributes required of a partner cyber-entity). A cyber-entity that receives a recruit message responds with an inform message containing its own information if it satisfies the conditions specified in the recruit message. Through this interaction, the sender and the receiver of the recruit message may mutually establish a relationship with each other.
Partner Selection. In selecting an interaction partner cyber-entity (or cyber-entities), a cyber-entity may specify one or more relationship attributes as keys and retrieve the relationship records that match the specified keys (depicted as (8) "select" in Figure 2). If there are multiple relationship records that match the keys, a cyber-entity narrows these relationship records based on relationship strengths, so that the cyber-entity interacts more often with cyber-entities with stronger relationships. If no (or too few) relationship records match the keys, a cyber-entity attempts to discover new cyber-entities by broadcasting an advertise message or a recruit message to nearby cyber-entities.
Strength Adjustment. In Ja-Net, a user indicates the degree of his/her satisfaction with the received application as a happiness value. When a user receives an application, the user creates a reward message and sets the happiness value in the message content. The reward is back-propagated along the message exchange sequence from the cyber-entity at the end point of the application to the cyber-entity at the initial point of the application. In order to remember a back-propagation
path, each cyber-entity records the previous and the next cyber-entities in the message sequence along with the corresponding sequence-id (which is obtained from the ACL :sequence-id parameter). Upon receiving a happiness value in the reward message, each cyber-entity modifies the strength of relationships regarding cyber-entities that it interacted with in providing the application. If a user likes the application, a positive happiness value is returned and the strength value is increased. If a user dislikes the application, a negative happiness value is returned and the strength value is decreased. If the user is neutral or no happiness value is returned, there is no change in the strength value. Therefore, cyber-entities that collectively provide a popular application (i.e., an application that a number of users like) will receive a positive happiness value more often and strengthen the relationships among themselves, while relationships among cyber-entities that provide a not-so-popular application are weakened.
Group Formation. In order to allow users to explicitly request an application, cyber-entities collectively providing an application form a group when the relationship strengths among themselves exceed a predetermined threshold value. Once a group is formed, a unique group ID, as well as a human-readable application name, is assigned to each group member cyber-entity. Thus, users can request a group service either by a unique group ID or by a human-readable application (group) name.
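A minimal sketch of the relationship-management mechanisms just described: relationship records with the attributes of Table 1, strength adjustment by rewards, and strength-biased partner selection. The class layout and the learning rate are our own illustrative choices, not Ja-Net's actual classes:

```python
import random

class RelationshipRecord:
    """One relationship record, with the attributes of Table 1."""
    def __init__(self, ce_id, action_name, service_properties=None):
        self.ce_id = ce_id
        self.action_name = action_name
        self.service_properties = service_properties or {}
        self.access_count = 0
        self.strength = 0.0

def apply_reward(record, happiness, rate=0.1):
    """Strength adjustment: increase on positive happiness, decrease on
    negative, no change when neutral. `rate` is a hypothetical parameter."""
    if happiness:
        record.strength += rate * happiness

def select_partner(records, rng=random, **keys):
    """Partner selection: filter records on attribute keys, then pick one
    with probability increasing in strength, so stronger relationships are
    used more often. Returns None when no record matches (the cyber-entity
    would then broadcast an advertise or recruit message)."""
    matches = [r for r in records
               if all(r.service_properties.get(k) == v for k, v in keys.items())]
    if not matches:
        return None
    weights = [max(r.strength, 0.0) + 1e-9 for r in matches]
    return rng.choices(matches, weights=weights, k=1)[0]
```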
4 Experiments on Dynamic Application Creation
In order to verify dynamic creation of applications in Ja-Net, we implemented multiple cyber-entities, as well as the platform software, based on the design described in Section 3, and performed basic experiments. In our experiments, various realistic scenarios were simulated by running multiple cyber-entities and multiple instances of the platform software on computers. Our implementation and experiments are explained below.
4.1 Application Implementation
In the applications that we implemented, we consider popular public spots such as New York City's Times Square, a theater in the nearby Broadway theater district, and a cafe on New York City's Fifth Avenue. A number of people (users) visit these locations, stay there for a while (doing, for instance, window shopping, watching a show, or having some coffee at a cafe), and leave. Assume that these users carry a mobile phone or a PDA that is capable of running cyber-entities and communicating with other mobile phones and PDAs in an ad-hoc manner. Assume also that shops in these areas implement cyber-entities related to their service and run these cyber-entities on a computer in the shop. In addition, some users may implement their own cyber-entities, or may have downloaded and carry cyber-entities in their mobile phones and PDAs from places they visited earlier in the day. Various cyber-entities join and leave each location
Fig. 3. Overview of the Ja-Net experiment system: Host 1 (the Times Square) runs user A's PDA; Host 2 (the theater) runs a PC with TheaterCommercial and TicketSales cyber-entities, a digital screen with a Screen cyber-entity, and user B's PDA with an MPEGplayer cyber-entity; Host 3 (the cafe) runs a PC with an Auctioneer cyber-entity. Arrows (1)-(3) indicate moves of user and non-user cyber-entities between Ja-Net nodes.
according to the movement of users, which triggers actions and interactions of other cyber-entities. Figure 3 shows the overview of the Ja-Net experiment system. Each host represents a different public spot (the Times Square, a theater, and a cafe, respectively), and each Ja-Net node represents a PC, a device or a user's PDA that is supposed to be present at the location that its host computer represents. A user's PDA runs a cyber-entity (user cyber-entity) representing the user. Each node runs one or more cyber-entities as described below. User A's PDA at the Times Square runs a user cyber-entity representing user A. A PC at the theater runs a TheaterCommercial cyber-entity that stores information on a theater show (assume that this information contains a URL of a commercial video clip of the theater show and a button to request a ticket purchase, as well as other show information) and a TicketSales cyber-entity that issues a theater show ticket and generates a certification of ticket purchase. A digital screen in the theater runs a Screen cyber-entity that displays images or video on the digital screen. A PDA of another user B at the theater runs a user cyber-entity representing user B and an MPEGplayer cyber-entity (assume that it was downloaded by user B earlier in the day). A PC in the cafe runs an Auctioneer cyber-entity that purchases commercial products from other cyber-entities and sells them at auction. In order to capture users' behaviors in our experiments, we defined two types of events. NODE ARRIVAL is an event generated by the platform software when a Ja-Net node arrives at a new location. USER BROWSING is an event that is generated by a user cyber-entity when a human user shows interest in the information displayed on his/her PDA. (For instance, this event is generated when a user scrolls the window up and down on his/her PDA.) These events are broadcast to cyber-entities in the same location (i.e., in the same host).
Example Application Sequence. Figure 4 shows a message sequence of an application we implemented (referred to as the ticket sales sequence). In this sequence, a TheaterCommercial cyber-entity displays information of a theater show (depicted as (1) "inform" in Figure 4) on the user's PDA. Suppose that the user is interested in the show and sends a request for ticket purchase to the TheaterCommercial cyber-entity (depicted as (2) "request" in Figure 4).
Fig. 4. An example of an application sequence (ticket sales)
Since the TheaterCommercial cyber-entity only stores information about the show and does not implement a ticket sales service, it forwards the request to a TicketSales cyber-entity that it has a relationship with (depicted as (3) "request" in Figure 4). Upon receiving the forwarded request for a ticket purchase, the TicketSales cyber-entity issues a certificate of ticket purchase and sends the certificate to the User cyber-entity via the TheaterCommercial cyber-entity (depicted as (4) "inform" and (5) "inform" in Figure 4, respectively). When the human user obtains a ticket (i.e., a certificate of ticket purchase) from the User cyber-entity, he/she expresses the level of satisfaction as the happiness value. The User cyber-entity then creates a reward message and sends it to the TheaterCommercial cyber-entity, which, in turn, forwards the reward message to the TicketSales cyber-entity (depicted as (6) "reward" and (7) "reward" in Figure 4). The relationship strength between the TheaterCommercial cyber-entity and the TicketSales cyber-entity is adjusted based on the happiness value.
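The paper does not give the exact update rule for the relationship strength, so the following Python sketch is only an assumed illustration of the adjustment driven by the happiness value of a reward message (the class name, the rate constant, and the clamping are invented):

# Hypothetical sketch of relationship-strength adjustment driven by
# the happiness value carried in a reward message (Figure 4, steps 6-7).
class Relationship:
    def __init__(self, partner_id, strength=0.1):
        self.partner_id = partner_id
        self.strength = strength  # kept in [0, 1]

    def apply_reward(self, happiness, rate=0.2):
        # Positive happiness strengthens the relationship,
        # negative happiness weakens it; clamp to [0, 1].
        self.strength += rate * happiness
        self.strength = max(0.0, min(1.0, self.strength))

# The TheaterCommercial entity rewards its TicketSales partner.
ticket_sales = Relationship("TicketSales")
ticket_sales.apply_reward(happiness=+1.0)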
4.2 Experimental Results
In our experiments, we only implemented the body (i.e., services) and the relationship establishment behavior of cyber-entities. Thus, cyber-entities were manually moved to simulate their migration behavior when necessary. When an experiment starts, cyber-entities initially have no relationships with any other cyber-entities. Each cyber-entity dynamically establishes relationships with cyber-entities in the same location (i.e., in the same host) by broadcasting an advertise message or a recruit message. Once relationships are established, cyber-entities start interacting with relationship partners and collectively provide applications. We performed several experiments to examine dynamic application creation in Ja-Net; they are described below.

Experiment 1. In this experiment, we manually moved a node representing user A's PDA and a user cyber-entity representing user A (on user A's PDA) from the Times Square to the theater (depicted as (1) "move" in Figure 3) to simulate user A's movement, and observed that an application emerged through interactions of cyber-entities (a detailed explanation follows). Upon arriving at the theater, user A's PDA generated a NODE ARRIVAL event and broadcast it to all cyber-entities in the theater. Upon receiving the event, the user cyber-entity on user A's PDA, now one of the cyber-entities in the theater, broadcast an advertise message to cyber-entities in the theater. Then, upon receiving the advertise message, the TheaterCommercial cyber-entity (on the PC) established a relationship with the user cyber-entity (on user A's PDA) and sent the theater show information that it stores to the user cyber-entity, which in turn displayed the theater show information on user A's PDA. At this moment, user A scrolled a window on his/her PDA (in our experiments, we, the human operators conducting the experiment, scrolled a window up and down on user A's PDA). This generated a USER BROWSING event, which was broadcast to cyber-entities in the theater. In this experiment, we assumed that the TheaterCommercial cyber-entity had a relationship with an MPEGplayer cyber-entity on the PDA of another user B. Thus, the TheaterCommercial cyber-entity, upon receiving the USER BROWSING event, sent the theater show information that it stores to the MPEGplayer cyber-entity. The MPEGplayer cyber-entity invoked its service and accessed a commercial video clip of the theater show using a URL included in the theater show information. In this experiment, we also assumed that the MPEGplayer cyber-entity had a relationship with a Screen cyber-entity on the digital screen. Thus, the MPEGplayer cyber-entity sent the outcome of its action to the Screen cyber-entity. Consequently, the Screen cyber-entity displayed the commercial video clip of the show on the digital screen in the theater. Figure 5 shows the application windows displayed by each node. A window on user A's PDA (on the left) displays theater show information and a window of the digital screen (in the center) displays a commercial video clip of the show.

Fig. 5. A screenshot of the application windows

Experiment 2. In this experiment, while theater show information was displayed on user A's PDA, user A clicked a ticket purchase button (in our experiments, we, the human operators conducting the experiment, clicked the button on user A's PDA). The user cyber-entity on user A's PDA generated a request message for a ticket purchase and sent it to the TheaterCommercial cyber-entity (on the PC). Then, we observed the interaction between the TheaterCommercial cyber-entity and the TicketSales cyber-entity (on the PC) shown in Figure 4. User A then
received a certificate of ticket purchase. At this point, we had demonstrated that an application was created upon receiving a request message from a user.

Next, in order to show that different applications emerge in different environments (i.e., environments where different sets of cyber-entities exist), we simulated the migration of the TheaterCommercial cyber-entity to user A's PDA (i.e., the TheaterCommercial cyber-entity was manually moved to user A's PDA, depicted as (2) "move" in Figure 3), and also simulated user A's movement from the theater to the cafe (depicted as (3) "move" in Figure 3). Upon arriving at the cafe, user A's PDA generated a NODE ARRIVAL event and broadcast it to cyber-entities in the cafe (including the TheaterCommercial cyber-entity on user A's PDA). Upon receiving the event, the TheaterCommercial cyber-entity broadcast an advertise message to cyber-entities in the cafe. Upon receiving the advertise message, the Auctioneer cyber-entity (on the PC in the cafe) established a relationship with the TheaterCommercial cyber-entity and invoked its service action to purchase a commercial product (a show ticket in this case). A request for a ticket purchase was then sent from the Auctioneer cyber-entity to the TheaterCommercial cyber-entity, which in turn forwarded the request message to the TicketSales cyber-entity (on the PC in the theater), following the same sequence shown in Figure 4 except that the Auctioneer cyber-entity played the role of "User" in this case. The Auctioneer cyber-entity received a certificate of ticket purchase and provided an auction service to users by selling the ticket. This experiment verified that the same cyber-entity may provide different applications by interacting with different cyber-entities.

Experiment 3. In order to examine the group formation mechanism proposed in this paper, we artificially created a large number of requests on behalf of user A (in the cafe) to purchase a show ticket and sent them to the TheaterCommercial cyber-entity (on user A's PDA in the cafe). We assumed in this experiment that user A is satisfied with a ticket purchased from the TheaterCommercial cyber-entity, and thus user A always returned a positive happiness value. As time progressed, we observed that the relationship strength from the TheaterCommercial cyber-entity to the TicketSales cyber-entity (on the PC in the theater), as well as the relationship strength from the TicketSales cyber-entity to the TheaterCommercial cyber-entity, gradually increased. When both relationship strengths exceeded a predetermined threshold value, a group consisting of the TheaterCommercial cyber-entity and the TicketSales cyber-entity was formed. Once the group was formed, the TheaterCommercial cyber-entity, the initial point of the application, sent an advertise message containing the group ID, and user A was able to invoke the group service by sending a request message containing the group ID to the TheaterCommercial cyber-entity.

Through experiments 1–3, we verified that, through the mechanisms proposed in this paper, Ja-Net dynamically creates applications that reflect user preferences and usage patterns. Several applications emerged in our experiments, and only popular applications (i.e., applications that users prefer) formed a group.
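A minimal sketch of the mutual-threshold test behind this group formation step (the threshold value, the group-ID scheme, and all names are assumptions, not taken from Ja-Net):

# Hypothetical group-formation test: both directed strengths must
# exceed a predetermined threshold before a group ID is issued.
import uuid

THRESHOLD = 0.8  # invented value; the paper only says "predetermined"

def maybe_form_group(strength_ab, strength_ba):
    """Return a new group ID if the mutual strengths qualify, else None."""
    if strength_ab > THRESHOLD and strength_ba > THRESHOLD:
        return str(uuid.uuid4())  # group ID then advertised by the initial entity
    return None

group_id = maybe_form_group(0.85, 0.9)  # -> a fresh group ID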
5 Conclusion and Future Work
The Jack-in-the-Net (Ja-Net) architecture is a biologically inspired approach to designing and implementing adaptive network applications. Ja-Net is inspired by and based on the Bio-Networking Architecture project at the University of California, Irvine [11][12]. This paper described the design of cyber-entities and the key mechanisms used in Ja-Net for cyber-entity interaction and relationship management, and it examined and verified these mechanisms through experiments. As future work, we plan to support interaction protocols between cyber-entities to allow more complex collaboration. We also plan to investigate various algorithms for relationship strength adjustment and partner selection in addition to those described in this paper; these algorithms will be empirically evaluated for their efficiency in the creation and provision of adaptive applications. Experimental studies through the implementation and deployment of large-scale applications will also be conducted.
References
1. T. Suda, T. Itao, T. Nakamura and M. Matsuo, "A Network for Service Evolution and Emergence," Journal of IEICEJ, Invited Paper, Vol. J84-B, No. 3, 2001.
2. T. Itao, T. Nakamura, M. Matsuo, T. Suda, and T. Aoyama, "Service Emergence based on Relationship among Self-Organizing Entities," Proc. of IEEE SAINT 2002 (Best Paper), Jan. 2002.
3. N. Minar, M. Gray, O. Roup, R. Krikorian, and P. Maes, "Hive: Distributed Agents for Networking Things," Proc. of ASA/MA '99, Aug. 1999.
4. Foundation for Intelligent Physical Agents, "FIPA Communicative Act Library Specification, 2000," available at http://www.fipa.org/
5. T. Kawamura, Y. Tahara, T. Hasegawa, A. Ohsuga and S. Honiden, "Bee-gent: Bonding and Encapsulation Enhancement Agent Framework for Development of Distributed Systems," Journal of the IEICEJ, D-I, Vol. J82-D-I, No. 9, 1999.
6. D. B. Lange and M. Oshima, "Programming & Deploying Mobile Agents with Java Aglets," Addison-Wesley, 1998.
7. Odyssey Home Page. http://www.genmagic.com/technology/odyssey.html
8. Voyager Home Page. http://www.objectspace.com/products/voyager/
9. Mole Project Home Page. http://inf.informatik.uni-stuttgart.de/ipvr/vs/projekte/mole.html
10. XML web site. http://www.xml.org
11. The BNA Project Home Page. http://netresearch.ics.uci.edu/bionet
12. M. Wang and T. Suda, "The Bio-Networking Architecture: A Biologically Inspired Approach to the Design of Scalable, Adaptive, and Survivable/Available Network Applications," Proc. of IEEE SAINT 2001, Jan. 2001.
Anchored Path Discovery in Terminode Routing

Ljubica Blažević, Silvia Giordano, and Jean-Yves Le Boudec

Laboratory for Computer Communications and Applications (LCA), Swiss Federal Institute of Technology, Lausanne (EPFL), Switzerland
{ljubica.blazevic, silvia.giordano, jean-yves.leboudec}@epfl.ch
Abstract. Terminode routing, defined for potentially very large mobile ad hoc networks, forwards packets along anchored paths. An anchored path is a list of fixed geographic points, called anchors. Given that geographic points do not move, the advantage over traditional routing paths is that an anchored path is always "valid". In order to forward packets along anchored paths, the source needs to acquire them by means of path discovery methods. We present two such methods. Friend Assisted Path Discovery assumes a common protocol in all nodes and a high degree of collaboration among nodes for providing paths; it is a socially oriented path discovery scheme. Geographic Maps-based Path Discovery needs to have or to build a summarized view of the network topology, but does not require explicit collaboration of nodes for acquiring paths. The two schemes are complementary and can coexist.
1 Introduction

Routing in mobile ad hoc networks (Manets [7]) is a difficult task even when the network size is small, as studied in most of the Manet protocols. When the network size increases, the routing task becomes too hard to address with traditional approaches. We consider a large mobile ad hoc network, referred to as a terminode network. Each node, called a terminode here, has a permanent End-system Unique Identifier (EUI) and a temporary, location-dependent address (LDA). Terminode routing [2], which was proposed for coping with this scenario, is a combination of two routing protocols: Terminode Local Routing (TLR) and Terminode Remote Routing (TRR). TLR is a mechanism that allows destinations in the vicinity of a terminode to be reached and does not use location information for packet forwarding decisions. It uses local routing tables that every terminode proactively maintains for its close terminodes. In contrast, TRR is used to send data to remote destinations and uses geographic information; it is the key element for achieving scalability and reduced dependence on intermediate systems. The TRR default method is Geodesic Packet Forwarding (GPF). GPF is basically a greedy method that forwards the packet closer to the destination location until the destination is reached. GPF does not perform well if the source and the destination are not well connected along the shortest geodesic path. If the source estimates that GPF cannot successfully reach the destination, it uses anchored paths. In contrast with traditional routing algorithms, an anchored path does not consist of a list of nodes to be visited for reaching the destination. An anchored path is a list of fixed geographic points, called anchors. In traditional paths made of lists of nodes, if
nodes move far from where they were at the time when the path was computed, the path cannot be used to reach the destination. Given that geographic points do not move, the advantage of anchored paths is that an anchored path is always "valid". In order to forward packets along an anchored path, TRR uses the method called Anchored Geodesic Packet Forwarding (AGPF), described in [2]. AGPF is a loose source routing method designed to be robust for mobile networks. A source terminode adds to the packet a route vector made of a list of anchors, which is used as loose source routing information. Between anchors, geodesic packet forwarding is employed. When a relaying terminode receives a packet with a route vector, it checks whether it is close to the first anchor in the list. If so, it removes the first anchor and sends the packet towards the next anchor or the final destination using geodesic packet forwarding. If the anchors are correctly set, then the packet arrives at the destination with high probability. Simulation results show that the introduction of anchored paths is beneficial for the packet delivery rate [2].

In order to forward packets along anchored paths, the source needs to acquire them by means of path discovery methods. We presented the basic concepts of two such methods in [2,1]: Friend Assisted Path Discovery (FAPD) and Geographic Maps-based Path Discovery (GMPD). FAPD enables the source to learn the anchored path(s) to the destination using so-called friends, terminodes to which the source already knows how to route packets. We describe how nodes select their lists of friends and how these lists are maintained. GMPD assumes that all nodes in the network have a complete or partial knowledge of the network topology. We assume that nodes are always collaborative, that they do not behave maliciously, and that they perform protocol actions, whenever requested, in the appropriate way. In this paper we describe FAPD and GMPD.
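A rough Python sketch of the AGPF relay rule just described (the proximity radius, coordinate model, and helper names are assumptions; the actual method is specified in [2]):

import math

ANCHOR_RADIUS = 50.0  # hypothetical "close to anchor" threshold in meters

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def agpf_next_target(my_pos, route_vector, dest_pos):
    """Pop anchors we have reached; return the current forwarding target."""
    while route_vector and dist(my_pos, route_vector[0]) < ANCHOR_RADIUS:
        route_vector.pop(0)  # anchor reached: loose source routing step
    # Geodesic (greedy) forwarding then proceeds towards the first remaining
    # anchor, or towards the final destination if no anchors are left.
    return route_vector[0] if route_vector else dest_pos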
2 Friend Assisted Path Discovery

FAPD is the default protocol for obtaining anchored paths. It is based on the concept of small-world graphs (SWG) [9]. SWGs are very large graphs that tend to be sparse, clustered, and have a small diameter. The small-world phenomenon was inaugurated as an area of experimental study in social science through the work of Stanley Milgram in the 1960s. These experiments showed that the acquaintanceship graph connecting the entire human population has a diameter of six or less; this phenomenon allows people to speak of the "six degrees of separation". We view a terminode network as a large graph, with edges representing the "friend relationship". B is a friend of A if (1) A thinks that it has a good path to B and (2) A decides to keep B in its list of friends. A may have a good path to B because A can reach B by applying TLR, or by geodesic packet forwarding, or because A managed to maintain one or several anchored paths to B that work well. The value of a path is given in terms of congestion feedback information such as packet loss and delay. Path evaluation is outside the goals of this paper. By means of the TLR protocol, every terminode has knowledge of a number of close terminodes; this makes the graph highly clustered. In addition, every terminode has a number of remote friends to which it maintains good path(s). We conjecture that this graph has the properties of a SWG. That is, roughly speaking, any two vertices are likely to be connected through a short sequence of intermediate vertices. This means that any
two terminodes are likely to be connected with a small number of intermediate friends. With FAPD, each terminode keeps a list of its friends with the following information: the location of the friend, the path(s) to the friend, and potentially some information about the quality of the path(s). FAPD is composed of two elements: the Friend Assisted Path Discovery Protocol (FAPDP) and Friends Management (FM).
2.1 Friend Assisted Path Discovery Protocol (FAPDP)
FAPDP is a distributed method for finding an anchored path between two terminodes in a terminode network. When a source S wants to discover a path to destination D, it requests assistance from some friend. If this friend is in a position to collaborate, it tries to provide S with some path to D (it may already have one, or it may try to find one, perhaps with the collaboration of its own friends). Figures 1 and 2 present FAPDP in pseudocode at the source and at an intermediate friend.
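The original pseudocode is given in the paper's Figures 1 and 2; as an assumed illustration of the recursive structure described above, a minimal Python sketch follows (the hop limit, friend ordering, and all helper names are hypothetical):

def fapd_discover(node, dest, ttl=6):
    """Ask friends for an anchored path to dest, recursing through
    friends-of-friends up to a small-world-sized hop limit."""
    if ttl == 0:
        return None
    path = node.known_anchored_path(dest)  # hypothetical local lookup
    if path is not None:
        return path
    for friend in node.friends:  # e.g. ordered by distance to dest
        if friend.willing_to_collaborate():
            path = fapd_discover(friend, dest, ttl - 1)
            if path is not None:
                return node.path_to(friend) + path  # concatenate anchor lists
    return None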
if (S has a friend F1 where dist(F1,D) k.prio) /* May unicast. */ j.mode = UT; elseif (∀k ∈ Nj1 , j.prio < k.prio) /* A sink. */ j.mode = S; } /* More findings about i. */ if (i.mode ≡ UT and ∀k ∈ ∪j∈N 1 Nj1 , k = i, i i.prio> k.prio) /* Can broadcast. */ i.mode = BT; elseif (i.mode ≡ Sf and ∃j ∈ Ni1 , j.mode ≡ S and ∀k ∈ Nj1 , k = i, i.prio > k.prio) { /* Can unicast to a sink */ i.mode = ST;
29 30
for (j ∈ Ni1 ) i.out = i.out ∪{j};
31 32 33 34 35
case UT: for (j ∈ Ni1 ) if (∀k ∈ Nj1 , k = i, i.prio > k.prio) i.out = i.out ∪{j};
36 37 38 39 40 41
case ST: for (j ∈ Ni1 ) if (j.mode ≡ S and ∀k ∈ Nj1 , k = i, i.prio > k.prio) i.out = i.out ∪{j};
42 43 44 45 46 47 48 49
}
case S, Sf: if (∃j ∈ Ni1 and ∀k ∈ Ni1 , k = j, j.prio> k.prio) { i.in = j; i.RxCode = j.TxCode; }
50 51 52 53 54
/* Hidden-Terminal Avoidance. */ if (i.mode ∈ { UT, ST } and ∃j ∈ Ni1 , j.mode = UT and ∃k ∈ Nj1 , k.prio > i.prio and k.TxCode ≡ i.TxCode) i.mode = Y;
55 /* i has to listen to j. */ 56 if (∃j ∈ Ni1 , j.mode ≡ UT and ∀k ∈ Ni1 , k = j, j.prio > k.prio) 57 i.mode = Sf; 58 } 59 60 /* Determine dest or src. */ 61 switch (i.mode) { } /* case BT:
/* Ready to communicate. */ if (i.mode ∈ {BT, UT, ST } and i.Q(i.out) = ∅) { /* FIFO */ pkt = Dequeue(i.Q(i.out)); Transmit pkt on code i.TxCode; } else Listen on code i.RxCode; End of HAMA. */
Fig. 1. HAMA Specification
4 Neighbor Protocol
In HAMA, topology information within two hops of a node plays an essential role in channel access operations. In mobile networks, network topologies change frequently, which affects the transmission schedules of the mobile nodes. The ability to detect and announce such changes promptly relies on the neighbor protocol, so as to re-harmonize channel access scheduling.
4.1 Signals
Since HAMA adopts dynamic code assignment for channel access using the identifier and time slot number, it is impossible for a node to detect a new one-hop neighbor that transmits data packets with varying codes. We therefore use an additional time section, called the neighborhood section, for sending out "hello" messages and for mobility management purposes. The neighborhood section lasts for Tnbr time slots following every Thama HAMA time slots. Channel access is still based on the code division scheme, but the transmission code is fixed to a commonly known one selected from Cpn.

Signal frame: type | srcID | #add | #del | nbrIDs
Fig. 2. Signal Frame Format
In addition, a time slot within the neighborhood section is further divided into a number of smaller time segments fit for transmitting short signal packets, whose format is illustrated in Fig. 2. The signal frame transmitted by node srcID is identified by its type field, and the fields #add and #del count the numbers of the following nbrIDs for addition and deletion, respectively, of neighbors from the transmitter's neighbor topology.

Data frame: type | srcID | dstID (header), #add | #del | nbrIDs (option), payload
Fig. 3. Data Frame Format
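As an illustration of the two frame layouts, a signal frame could be serialized as below; the field widths are assumptions for the sketch, since the paper does not specify them:

import struct

# Hypothetical field widths: 1-byte type, 2-byte IDs, 1-byte counters.
SIGNAL_HDR = struct.Struct("!BHBB")  # type, srcID, #add, #del

def pack_signal(frame_type, src_id, added, deleted):
    """Serialize a signal frame: header followed by the nbrID lists."""
    body = SIGNAL_HDR.pack(frame_type, src_id, len(added), len(deleted))
    for nbr in added + deleted:
        body += struct.pack("!H", nbr)
    return body

# Example: node 7 announces one new neighbor (3) and one lost one (9).
frame = pack_signal(1, 7, added=[3], deleted=[9])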
Besides signals, one-hop neighbor updates are also propagated using broadcast packets when a node is activated in BT-mode, so that the update information of a node reaches all its neighbors efficiently. One-hop neighbor updates are piggybacked in the option field of a data frame whenever possible and necessary. Fig. 3
illustrates the data packet format, which includes neighbor update fields similar to those in Fig. 2, in addition to regular fields such as the destination dstID and the payload of the packet.
4.2 Mobility Handling
Signals are used by the neighbor protocol for two purposes. One is for a node to say "hello" to its one-hop neighbors periodically in order to maintain connectivity. The other is to send neighbor updates when a neighbor is added, deleted or needs to be refreshed. When a new link is established, both ends of the link need to notify their one-hop neighbors of the new link and exchange their complete one-hop neighbor information. When a link breaks down, a neighbor-delete update needs to be sent out. An existing neighbor connection also has to be refreshed periodically to the one-hop neighbors for robustness. If a neighbor-delete update is not delivered to some one-hop neighbors, those neighbors age out the obsolete link after a period of time.

However, because of the randomness of signal packet transmissions, it is possible for a signal sent by a node to collide with signals sent by some of its two-hop neighbors. Due to the lack of acknowledgments in signal transmissions, multiple retransmissions of the update information are needed for a node to ensure the delivery of the message to its one-hop neighbors. Signal intervals are also jittered by a small value so that signals transmitted in the neighborhood spread out evenly over the neighborhood section to avoid collisions.

Furthermore, retransmissions of a signal packet can only achieve a certain probability of successful delivery of the message. Even though this probability approaches one as the retransmissions are repeated, the neighbor protocol has to regulate the rhythm of sending signals so that the desired probability of message delivery is achieved with the smallest number of retransmissions in the shortest time, thus incurring the least amount of interference to other neighbors' signal transmissions. The number of signal retransmissions and the interval between retransmissions depend on the number of two-hop neighbors: the more neighbors a node has, the longer the interval chosen for signal retransmissions. Since the success probability of each signal transmission trial can be determined from the interval value, the number of retransmissions can be derived to achieve the desired probability of successful message delivery. Consequently, the latency of message delivery using the retransmission approach is given by the product of the interval and the number of retransmissions.

If we do not depend on the neighbor updates transmitted in the option field of data frames, enough time slots should be allocated to the neighborhood section in the time division scheme to achieve the desired latency of message delivery, which determines the ratio between Thama and Tnbr. Since the time division is fixed during the operation of HAMA, the ratio Thama : Tnbr is computed beforehand in the neighbor protocol to handle networks with moderate density. We do not specify these relations further in this paper.
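The sizing rule implied above can be made concrete: with independent trials of per-trial success probability p, the smallest number of retransmissions n satisfying 1 − (1 − p)^n ≥ P_target follows directly. In the sketch below, the mapping from the two-hop neighbor count and interval length to p is a simplifying assumption, not HAMA's actual model:

import math

def per_trial_success(num_two_hop_neighbors, interval_slots):
    # Simplifying assumption: collision odds grow with the number of
    # two-hop contenders sending within the same interval.
    p = 1.0 - num_two_hop_neighbors / float(interval_slots)
    return min(0.999999, max(1e-6, p))

def retransmissions_needed(p, target=0.99):
    """Smallest n with 1 - (1 - p)**n >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

p = per_trial_success(num_two_hop_neighbors=20, interval_slots=100)  # 0.8
n = retransmissions_needed(p)   # -> 3 retransmissions
latency = n * 100               # slots: interval length times retry count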
5 Performance

5.1 Delay Analysis
When data packets arrive at a node according to a Poisson process with rate λ and are served according to the first-come-first-served (FIFO) strategy, we can analyze the delay properties of HAMA as a steady-state M/G/1 queue with server vacations, where the single server is the node. The server takes a vacation V of one time slot when there is no data packet in the queue; otherwise, it looks for the next available time slot to transmit the first packet waiting in the queue. Because a node accesses the channel in a time slot by comparing the random priorities assigned to itself and its one-hop neighbors, the channel access attempt in each time slot is a Bernoulli trial for each node. Depending on the neighborhood topology, each node has a probability q of winning the channel access contention in each time slot. Therefore, the service time of a data packet in the queuing system is a random variable following a geometric distribution with parameter q. Denoting the service time by X, we have P{X = k} = (1 − q)^(k−1) q for k ≥ 1. The means and second moments of the random variables X and V are:

E[X] = 1/q ,  E[X²] = (2 − q)/q² ;  E[V] = E[V²] = 1 .

So the extended Pollaczek-Khinchine formula for the waiting time in the M/G/1 queuing system with server vacations [2],

W = λE[X²] / (2(1 − λE[X])) + E[V²] / (2E[V]) ,

readily yields the average waiting time of a data packet in the queue as:

W = λ(2 − q) / (2q(q − λ)) + 1/2 .

Adding the average service time to the queuing delay, we get the overall delay in the system:

T = W + E[X] = (2 + q − 2λ) / (2(q − λ)) .   (3)

To keep the queuing system in a steady state without packet overflow problems, it is necessary that λ < q. Since HAMA is capable of both node activation and link activation, the delays of broadcast and unicast traffic should be considered separately, because the contenders of node activation and link activation are different, as are the respective activation probabilities.
5.2 Throughput Analysis
Network throughput is defined as the number of packets going through the network at the same time, including both broadcast and unicast traffic. On account of the collision freedom in HAMA, the shared channel can serve load up to the channel capacity without degradation. That is, the throughput over the common channel is the summation of the arrival rates at all network nodes, as long as the queuing system at each node remains in equilibrium between arrival and departure events. Therefore, the system throughput S is derived as:

S = Σ_{k∈V} min(λk, qk) ,   (4)
where qk is the probability that node k may be activated, and λk is the data packet arrival rate at node k.
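Equation (4) in code form, as a sketch:

def system_throughput(arrival_rates, activation_probs):
    """S = sum over nodes k of min(lambda_k, q_k), per (4)."""
    return sum(min(lam, q) for lam, q in zip(arrival_rates, activation_probs))

# Example: three nodes, the second one saturated (lambda > q).
S = system_throughput([0.05, 0.30, 0.10], [0.20, 0.25, 0.15])  # -> 0.40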
5.3 Simulation Results
The behavior of HAMA was simulated in two scenarios: fully connected networks with different numbers of nodes, and multihop networks with different radio transmission ranges. The delay and throughput attributes of HAMA were gathered in each simulation and compared with those of NAMA [1] and UxDMA [12] in the same simulation scenarios. In the simulations, we do not model the bandwidth of the radio channel with specific numbers, but use more abstract terms, such as packets per time slot for both arrival rate and throughput, which can later be translated into common bandwidth metrics, such as Mbps (megabits per second), given a certain packet size distribution and transmission medium. The following parameters and behaviors are assumed in the simulations:
– The network topologies are static, in order to evaluate the scheduling performance of the algorithms only.
– Signal propagation in the channel follows the free-space model, and the effective range of a radio is determined by its power level. Radiation energy outside the effective transmission range of the radio is considered negligible interference to other communications. All radios have the same transmission range.
– 30 pseudo-noise codes are available for code assignments, i.e., |Cpn| = 30.
– Packets are served in First-In First-Out (FIFO) order. Only one packet can be transmitted in a time slot.
– All nodes have the same broadcast packet arrival rate for all protocols (HAMA, NAMA and UxDMA). In addition, HAMA is loaded with the same amount of unicast traffic as broadcast traffic to manifest the unicast capability of HAMA. The overall load for HAMA is thus twice as much as that of NAMA and UxDMA. The destinations of the unicast packets in HAMA are evenly distributed on all outgoing links.
– The duration of the simulation is 100,000 time slots, long enough to collect the metrics of interest.

In UxDMA, a constraint set is derived for broadcast activations as in NAMA, given by UxDMA-NAMA = {Vtr0, Vtt1}. The notation of each symbol is as in the original paper [12]. Constraint Vtr0 forbids a node from transmitting and receiving at the same time, while Vtt1 eliminates the hidden-terminal problem.
Fig. 4. Average Packet Delays in Fully-Connected Networks (delay T in time slots and throughput in packets/slot versus arrival rate λ, for 5- and 20-node networks; curves: HAMA broadcast, HAMA unicast/overall, NAMA broadcast, UxDMA broadcast)
In the fully connected scenarios, simulations were carried out in two configurations, 5-node and 20-node networks, to manifest the effects of different contention levels. Fig. 4 shows the average delay values in the first row and the average throughput in the second for HAMA, NAMA and UxDMA-NAMA under different loads on each node in the two configurations. The horizontal parts of the throughput plots indicate the total network capacities provided by the different protocols. Since all nodes are within one hop of each other, there can be only one unicast or broadcast in each time slot; the network throughput tops out when the loads sum up to one.

In the multihop scenario, the simulations were conducted in networks generated by randomly placing 100 nodes within an area of 1000×1000 square meters. To simulate an infinite plane with constant node placement density, the opposite sides of the square are seamed together, which visually turns the square area into a torus. The transmission range of each node was set to 100, 200, and 300 meters, respectively, so that the network topology and contention levels in these simulations varied accordingly.
Fig. 5. Average Packet Delays in Multihop Networks (delay T in time slots and throughput in packets/slot versus arrival rate λ, for 100-node networks with transmission ranges of 100, 200, and 300 m; curves: HAMA broadcast, HAMA unicast/overall, NAMA broadcast, UxDMA broadcast)
Fig. 5 shows the delay and throughput performance of HAMA, NAMA and UxDMA-NAMA in multihop networks. UxDMA-NAMA is better than HAMA and NAMA at broadcasting in some of the multihop networks, owing to its global knowledge of the topology. However, HAMA outperforms UxDMA-NAMA in overall network throughput. Overall, HAMA achieves much better performance than the previously proposed protocol NAMA [1], while requiring only a little more processing of the neighbor information. Compared with UxDMA, which uses global topology information, HAMA sustains similar broadcast throughput while providing extra opportunities for sending unicast traffic. The dependence on only two-hop neighbor information is also a big advantage over UxDMA.
6 Conclusion

We have introduced HAMA, a new distributed channel access scheduling protocol that dynamically determines the node activation schedule for both broadcast and unicast traffic. HAMA requires only two-hop neighborhood information, and avoids the complexities of prior collision-free scheduling approaches that demand global topology information. We have also analyzed the per-node delay and per-system throughput of HAMA and NAMA [1], a node activation protocol, and compared the system performance of HAMA with that of NAMA and UxDMA [12] by simulation.
References
1. L. Bao and J.J. Garcia-Luna-Aceves. A New Approach to Channel Access Scheduling for Ad Hoc Networks. In Proc. ACM Seventh Annual International Conference on Mobile Computing and Networking, Rome, Italy, Jul. 16-21, 2001.
2. D. Bertsekas and R. Gallager. Data Networks, 2nd edition. Prentice Hall, Englewood Cliffs, NJ, 1992.
3. A. Ephremides and T.V. Truong. Scheduling broadcasts in multihop radio networks. IEEE Transactions on Communications, 38(4):456–60, Apr. 1990.
4. S. Even, O. Goldreich, S. Moran, and P. Tong. On the NP-completeness of certain network testing problems. Networks, 14(1):1–24, Mar. 1984.
5. C.L. Fullmer and J.J. Garcia-Luna-Aceves. Floor acquisition multiple access (FAMA) for packet-radio networks. In ACM SIGCOMM '95, pages 262–73, Cambridge, MA, USA, Aug. 28-Sep. 1, 1995.
6. J.J. Garcia-Luna-Aceves and J. Raju. Distributed assignment of codes for multihop packet-radio networks. In MILCOM 97 Proceedings, pages 450–4, Monterey, CA, USA, Nov. 2-5, 1997.
7. M. Joa-Ng and I.T. Lu. Spread spectrum medium access protocol with collision avoidance in mobile ad-hoc wireless network. In IEEE INFOCOM '99, pages 776–83, New York, NY, USA, Mar. 21-25, 1999.
8. P. Karn. MACA - a new channel access method for packet radio. In Proceedings ARRL/CRRL Amateur Radio 9th Computer Networking Conference, New York, Apr. 1990.
9. L. Kleinrock and F.A. Tobagi. Packet switching in radio channels. I. Carrier sense multiple-access modes and their throughput-delay characteristics. IEEE Transactions on Communications, COM-23(12):1400–16, Dec. 1975.
10. L. Lamport. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558–65, Jul. 1978.
11. T. Makansi. Transmitter-Oriented Code Assignment for Multihop Radio Networks. IEEE Transactions on Communications, 35(12):1379–82, Dec. 1987.
12. S. Ramanathan. A unified framework and algorithm for channel assignment in wireless networks. Wireless Networks, 5(2):81–94, 1999.
13. R. Ramaswami and K.K. Parhi. Distributed scheduling of broadcasts in a radio network. In IEEE INFOCOM '89, volume 2, pages 497–504, Ottawa, Ont., Canada, Apr. 23-27, 1989. IEEE Comput. Soc. Press.
Towards Efficient Decision Rules for Admission Control Based on the Many Sources Asymptotics

Gergely Seres1, Árpád Szlávik1, János Zátonyi2, and József Bíró2

1 Traffic Lab, Ericsson Research Hungary, P.O. Box 107, H-1300 Budapest, Hungary, [email protected]
2 HSN Lab, DTT, Budapest University of Technology and Economics, P.O. Box 91, H-1521 Budapest, Hungary
Abstract. This paper introduces new admission criteria that enable the use of algorithms based on the many sources asymptotics in real-life applications. This is achieved by a significant reduction in the computational requirements and by moving the computationally intensive tasks away from the timing-sensitive decision instant. It is shown that the traditional overflow-probability type admission control method can be reformulated into a bandwidth-requirement type and a buffer-requirement type method, and that these methods are equivalent when used for admission control. The original and the two proposed methods are compared through the example of fractional Brownian motion traffic.
1 Introduction

Bandwidth requirement estimation is a key function in networks intending to provide quality of service (QoS) to their users. Network devices in QoS-capable networks must be able to control the amount of traffic they handle. This is generally performed by using some form of admission control. There are two commonly used methods for determining whether a new connection can be allowed to enter the system: in the first, an estimate of the buffer overflow probability is computed based on the properties of the new and the already active flows in the system, while the second method computes the bandwidth requirement of the existing traffic flows. When using the first method for admission control decisions, the devices check the computed overflow probability against the target overflow probability. If the second method is used, the bandwidth requirement of the existing flows is increased by the predicted bandwidth usage of the new flow and the result is compared to the capacity of the system.

Often, the second method is preferred over the first, mainly because it results in a quantity, the bandwidth requirement, that is more tractable and more useful than the estimate of the overflow probability. The on-line estimation of the bandwidth requirement of the traffic enables the network operator to track the amount of allocated (and free) capacity in the network. Furthermore, the impact of network management actions (e.g. directing more traffic onto the link) on the resource status of the network can be more easily assessed. The overflow probability, on the other hand, is a less straightforward quantity that depends on the
parameters of the queueing system in a more complex way; thus changes in them imply a less tractable and computationally more complex update procedure. Accordingly, most of the work to date has focused on algorithms that quantify the bandwidth requirement of traffic flows. The most widespread approaches are based on the notion of the effective bandwidth, a comprehensive review of which is given in [5]. A group of algorithms use the Chernoff bound or the Hoeffding bound to derive simplified and directly applicable formulae for the effective bandwidth in the case of bufferless statistical multiplexing [9]. For buffered resources, the theory of large deviations has been shown to be a very capable method for calculating the bandwidth requirement of traffic flows.

There are two asymptotics that can be used for this purpose: the large buffer asymptotics and the many sources asymptotics. The large buffer asymptotics provide a rate function describing the decay rate of the tail of the probability of buffer overflow when the size of the buffer gets very large. The many sources asymptotics also offer a rate function, but under the assumption that the number of traffic flows in the system gets very large, while the traffic mix, the per-source buffer space and the per-source capacity of the system are held constant. Both asymptotics discussed so far provide an overflow-probability type quantity. Using the large buffer asymptotics, it is easy to switch from the overflow probability representation to the bandwidth requirement representation. However, algorithms relying on this asymptotics [6] do not account for the gain arising from the statistical multiplexing of many traffic flows. In recent years, the second asymptotic regime, the many sources asymptotics (and its Bahadur-Rao improvement), has been described and investigated in [3], [2], [1] and [7].

In its native form, the many sources asymptotics provide a rate function that can be used to estimate the probability of overflow. The computation of this rate function involves two optimisations in two variables. Yet, if it is the bandwidth requirement that is of interest, another optimisation has to be performed, which requires the recomputation of the two original optimisations in each step. Despite its complexity, this bandwidth requirement estimate is appealing because it incorporates the statistical properties of the traffic along with its QoS requirements, and it also embraces the statistical multiplexing gain that occurs on the multiplexing link. However, the use of this estimator in real-time applications is not feasible because of its computational complexity.

This paper introduces a new method for computing the bandwidth requirement of traffic flows that is likewise based on the many sources asymptotics. Instead of the three embedded optimisations that previous approaches required, it comprises only two optimisations that directly result in an estimate of the bandwidth requirement. The method is favourable for on-line, measurement-based application, since the admission decision step is simplified and the more involved computations can be done in the background. It is shown that the new and the old methods for obtaining the bandwidth requirement are equivalent.

The rest of this paper is organised as follows. Section 2 presents a brief overview of the many sources asymptotics and describes three admission decision methods based directly on this asymptotics. The proposed, computationally more favourable method for computing the bandwidth requirement is introduced in
Sect. 3, and the equivalence is proven. The operation of the novel method is demonstrated with the example of fractional Brownian motion traffic in Sect. 4. Conclusions are given in Sect. 5. The Appendix shows that similar results can be achieved for computing the buffer requirement of traffic flows.
2 Overflow-Probability Based Admission Criteria

This section presents an overview of the many sources asymptotics. Next, a collection of admission control methods is reviewed, all of which build on the asymptotic property of the overflow probability.

2.1 Many Sources Asymptotics
The asymptotic regime described by the many sources asymptotics can be used to form an estimate of the probability of buffer overflow in the system as follows. Let us consider a buffered communication link with transmission capacity C and buffer size B, which carries N independent flows multiplexed in the system. N is viewed as a scaling factor, i.e. we can identify a per-source transmission capacity c = C/N and a per-source buffer size b = B/N. Further, let the stochastic process X[0,t) denote the total amount of work arriving at the system during the time interval [0,t). Let us assume that X[0,t) has stationary increments. Conclusions on the behaviour of this system can be derived by investigating a queueing system of infinite buffer size that is served by a finite capacity server with service rate C = cN. In order to account for the finite buffer size B = bN of the real system, the probability of buffer overflow in the original system can be deduced from the proportion of time over which the queue length, Q(C,N), is above the finite level B. In this system, where the system parameters (cN, bN) and the workload (X[0,t)) are scaled by the number of sources, an asymptotic equality can be obtained in N for the probability of overflow:

lim_{N→∞} (1/N) log P{Q(cN,N) > bN} = sup_{t>0} inf_{s>0} { st α(s,t)/N − s(b+ct) } = −I .   (1)

Here α(s,t) (the so-called effective bandwidth [5]) is defined as

α(s,t) = (1/(st)) log E[e^{sX[0,t)}]   (2)

and I is called the asymptotic rate function, which depends on the per-source system parameters and on the scaled workload process. This result was proven for discrete time in [2] and for continuous time in [3]. Equation (1) practically means that for N large, the probability of overflow can be approximated as P{Q(C,N) > B} ≈ e^{−NI}, where −NI can be computed from (1) as

−NI = sup_{t>0} inf_{s>0} { st α(s,t) − s(B+Ct) } .   (3)
The approximation above can also be reasoned in a less rigorous, but brief and intuitive manner as follows [7]. The Chernoff bound can be used to approximate the probability that the workload X[0,t) exceeds Ct, the offered service in [0,t), and in addition fills up the buffer space B: P{X[0,t) > B+Ct} ≈ inf_{s>0} exp{st α(s,t) − s(B+Ct)}. The steady-state queue length distribution can be described by Q = sup_{t>0} {X[0,t) − Ct}, provided that the X[0,t) process has stationary increments. This way, the probability of the queue length exceeding the buffer level B is P{Q > B} ≈ P{sup_{t>0} {X[0,t) − Ct} > B} ≈ sup_{t>0} P{X[0,t) > B+Ct} ≈ e^{−NI}.

For the sake of simplifying further discussions, let us define the function J(s,t) = st α(s,t) − s(B+Ct). In (3), the evaluation of sup_{t>0} inf_{s>0} J(s,t) is computationally complex, as a double optimisation has to be performed after the computation or estimation of the effective bandwidth of X[0,t). Since the optimisations are embedded, first the optimal (minimal) s has to be found, which still depends on t. Placing this optimal s into J(s,t), the task is its maximisation with respect to t. For a more formal and concise discussion the following notation is introduced:

s*(t) = arg inf_{s>0} J(s,t),  t* = arg sup_{t>0} J(s*(t),t)  and  s* = s*(t*) .   (4)
Now, the extremising pair of J(s,t) is (s*,t*) and thus −NI = J(s*,t*). The extremising values t* and s* are commonly termed the critical time and space scales, respectively. The intuitive explanation of the critical time scale is that it is the most probable time interval after which overflows occur in the multiplexing system (i.e. the most likely length of the busy period prior to overflow). Although many other busy periods may contribute to the total overflow, large deviations theory takes into account only the most probable one, which is the most dominant in the asymptotic sense. The rationale behind the critical space parameter is that it captures the statistical behaviour of the workload process, that is, the amount of achievable statistical multiplexing gain and the burstiness. Critical space values close to 0 describe a source (or an aggregate) that can benefit from statistical multiplexing, while larger values imply a higher bandwidth requirement. Finally, it is also worth noting that s* and t* always depend on the system parameters C, B and the statistical properties of X[0,t).

In practical applications there is a QoS requirement, which is often specified as a constraint on the probability of buffer overflow (e^{−γ}). In order to admit a source, the following criterion has to be satisfied:

P{Q(C,N) > B} ≈ e^{−NI} ≤ e^{−γ}   or   sup_{t>0} inf_{s>0} J(s,t) ≤ −γ .   (5)

2.2 Equivalent Admission Criteria
The inequalities in (5) define an admission rule that uses the method of the many sources asymptotics in its native form. In this original form, the probability of buffer overflow is estimated using X[0,t), B and C as the input quantities, whilst the target overflow probability is used as the performance criterion.
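As a numerical illustration of the decision rule (5), the double optimisation can be approximated on finite grids of s and t. The grids and the fBm-style effective bandwidth used below are assumptions of the sketch, not part of the method:

import numpy as np

def sup_inf_J(alpha, B, C, s_grid, t_grid):
    """Approximate sup_t inf_s J(s,t) = sup_t inf_s [s*t*alpha(s,t) - s*(B+C*t)]."""
    best = -np.inf
    for t in t_grid:
        J = np.array([s * t * alpha(s, t) - s * (B + C * t) for s in s_grid])
        best = max(best, J.min())  # inner infimum over s, outer supremum over t
    return best

# Example: fBm effective bandwidth alpha(s,t) = m + s*sigma^2*t^(2H-1)/2.
m, sigma, H = 1.0, 0.5, 0.75
alpha = lambda s, t: m + 0.5 * s * sigma**2 * t**(2*H - 1)
s_grid, t_grid = np.linspace(1e-3, 50, 2000), np.linspace(1e-3, 50, 500)
admit = sup_inf_J(alpha, B=5.0, C=1.5, s_grid=s_grid, t_grid=t_grid) <= -9.2  # gamma = 9.2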
It is possible to set up two other criteria that can be used for admission control decisions. As mentioned in the introduction, it is often preferable to express the bandwidth requirement of the traffic and compare this quantity to the server capacity. In order to form an estimate of the bandwidth requirement of the traffic, another optimisation has to be performed. For this, the server capacity has to be treated as a free variable and, given the workload process, the buffer size and the QoS requirement, the smallest server capacity has to be identified for which the system still satisfies the performance criterion put forward in (5). The resulting quantity

C_equ = inf{ C : sup_{t>0} inf_{s>0} J(s,t) ≤ −γ }   (6)
is termed in the rest of the paper the equivalent capacity.1 The admission criterion can then be written as

C_equ ≤ C .   (7)

A similar, but less frequently used criterion can be defined that allows admission decisions to be made based on the available buffer space. In this case, the buffer requirement of the traffic is determined using a similar triple optimisation as in (6), but this time taking X[0,t), C and the QoS requirement as the input quantities and B as the performance constraint:

B_req = inf{ B : sup_{t>0} inf_{s>0} J(s,t) ≤ −γ }   and   B_req ≤ B .   (8)
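A sketch of the triple optimisation behind (6): since J(s,t) decreases in C, the equivalent capacity can be located by bisection on C, re-running the double optimisation of the previous sketch at every step (sup_inf_J, alpha and the grids are carried over from that sketch; the search bracket is an assumption):

def equivalent_capacity(alpha, B, gamma, s_grid, t_grid, c_lo, c_hi, iters=40):
    """Smallest C with sup_t inf_s J(s,t) <= -gamma, by bisection on C.
    J is monotonically decreasing in C, so bisection applies."""
    for _ in range(iters):
        c_mid = 0.5 * (c_lo + c_hi)
        if sup_inf_J(alpha, B, c_mid, s_grid, t_grid) <= -gamma:
            c_hi = c_mid  # criterion met: try a smaller capacity
        else:
            c_lo = c_mid  # criterion violated: more capacity is needed
    return c_hi

C_equ = equivalent_capacity(alpha, B=5.0, gamma=9.2, s_grid=s_grid,
                            t_grid=t_grid, c_lo=m, c_hi=10.0)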
Figure 1 presents a summary of the three methods with respect to their input parameters and the quantity they use as a constraint in the decision criterion. The methods are equivalent in the sense that in a given context they arrive at the same decision. When it comes to numerical evaluation, however, the first (original) method with the double optimisation is significantly less demanding than the others, which involve three embedded optimisations.
Fig. 1. Admission decision methods: the overflow-probability type computes sup inf J from X[0,t), B and C and checks the overflow probability against e^{−γ}; the bandwidth-requirement type computes C_equ from X[0,t), B and γ and checks it against C; the buffer-requirement type computes B_req from X[0,t), C and γ and checks it against B
1 Following the terminology of previous works, the term effective bandwidth is reserved for α(s,t), which is not directly associated with the minimal service rate required to meet the QoS target.
3 The Improved Bandwidth Requirement Estimator
This section introduces an alternative method for computing the equivalent capacity. The advantage of this new method is that its computational complexity is reduced to a double optimisation, resulting in a formula similar to the one used in the rate-function based estimation of the buffer overflow probability. It is shown that the estimation of the equivalent capacity using the proposed method arrives at the same decision as the method in (6) and (7). The proposed method infers a new optimisation function resulting in an alternative set of space and time scales. The equivalence of the respective methods for estimating the buffer requirement can be proven in an identical manner (see the Appendix).
3.1 Alternative Definition of the Equivalent Capacity
Let us introduce K(s,t) as

K(s,t) = α(s,t) + γ/(st) − B/t ,   (9)

which is obtained from the isolation of C from J(s,t) = −γ. Namely, K(s,t) = C holds after the rearrangement.2 By defining a new double optimisation

C̃_equ = sup_{t>0} inf_{s>0} K(s,t) ,   (10)

similarly to (3), the extremisers are attained in the form of

s†(t) = arg inf_{s>0} K(s,t),  t† = arg sup_{t>0} K(s†(t),t)  and  s† = s†(t†) .   (11)
The extremising pair of the double optimisation in (10) is then (s†,t†), and these are the alternative space and time scales, respectively. It can be proven that C̃_equ = C_equ holds. In other words, we need only two optimisations instead of three to arrive at the equivalent capacity C_equ. This is shown in the next subsection using the subsequent theorem.

Theorem 1. The following two strict inequalities are equivalent:

J(s*,t*) < −γ ⟺ K(s†,t†) < C ,   (12)

furthermore the equations

J(s*,t*) = −γ ,   (13)
K(s†,t†) = C   (14)
2 The so-called Bahadur-Rao improvement, as described in [1], introduces a prefactor to the estimate of the overflow probability in (5). For the improvement of the equivalent capacity (and the buffer requirement) this manifests in a modified QoS constraint for (5) of the following form: γ' = γ − ((1/2) log(4πγ)) / (1 + 1/(2γ)).
are equivalent as well, and consequently the two strict inequalities below also imply each other:

J(s*,t*) > −γ ,   (15)
K(s†,t†) > C .   (16)
Proof. First of all, note that any two of the three equivalences, (12), (13)⇔(14) and (15)⇔(16), imply the third one; therefore it is enough to prove the second and the third assertions only. The proof of the statements of Theorem 1 uses three lemmas.

Lemma 1. For all t > 0, (17) ⇔ (18):

J(s*(t),t) < −γ ,   (17)
K(s†(t),t) < C .   (18)
Proof. Suppose (17) holds. The isolation of C from (17) by rearrangement gives K(s*(t),t) < C; since K(s†(t),t) ≤ K(s*(t),t) by the definition of s†(t), (18) follows. Conversely, if (18) holds, the rearrangement gives J(s†(t),t) < −γ, and J(s*(t),t) ≤ J(s†(t),t) by the definition of s*(t), so (17) follows.

Lemma 2. For all t > 0, (19) ⇔ (20):

J(s*(t),t) = −γ ,   (19)
K(s†(t),t) = C .   (20)

Lemma 3. For all t > 0:

J(s*(t),t) > −γ ⟺ K(s†(t),t) > C .   (21)

Proof. Lemma 3 is a straightforward consequence of Lemma 1 and Lemma 2.
Continuing with the proof of Theorem 1, let us suppose that (13) holds. Substituting t* in Lemma 2 implies that K(s†(t*),t*) = C. On the other hand, K(s†,t†) = K(s†(t†),t†) ≥ K(s†(t*),t*) by the definition of t† (note that in general s† cannot be used instead of s†(t*)). Thus K(s†(t†),t†) ≥ K(s†(t*),t*) = C, where the inequality holds by the definition of t† and the equality by Lemma 2 with t = t*. By Lemma 2 and Lemma 3 with t = t†, this inequality implies that J(s*(t†),t†) ≥ −γ. Hence, using (13), −γ = J(s*(t*),t*) ≥ J(s*(t†),t†) ≥ −γ, where the first inequality holds by the definition of t* and the second by the previous step together with Lemma 2 and Lemma 3 with t = t†. It can be concluded that equality holds along the chain of inequalities, and accordingly J(s*(t†),t†) = −γ. Lemma 2 with t = t† then implies that (14) holds.

Vice versa, if (14) holds, then J(s*,t*) = J(s*(t*),t*) ≥ J(s*(t†),t†) = −γ, where the inequality holds by the definition of t* and the equality by (14) and Lemma 2 with t = t†. Thereupon C = K(s†(t†),t†) ≥ K(s†(t*),t*) ≥ C, by the definition of t† and by the previous step together with Lemma 2 and Lemma 3 with t = t*. The consequence is that K(s†(t*),t*) = C holds, and by Lemma 2 with t = t* it turns out that (13) holds as well. Thus the proof of the equivalence (13)⇔(14) is done.

For the proof of the third statement of Theorem 1, first suppose (15) holds. Then K(s†,t†) = K(s†(t†),t†) ≥ K(s†(t*),t*) > C, by the definition of t† and by Lemma 3 with t = t*; consequently (15)⇒(16). Finally, supposing (16) holds, J(s*,t*) = J(s*(t*),t*) ≥ J(s*(t†),t†) > −γ, by the definition of t* and by Lemma 3 with t = t†; subsequently (16)⇒(15) as well. Therefore the equivalence (15)⇔(16) is proven. Equivalence (12) follows from the other two equivalences, hence the proof of Theorem 1 is completed.
Equivalence of the Two Definitions of the Equivalent Capacity
Theorem 1 can now be used to prove that the equivalent capacities defined by (6) and (10) are equal. Corollary 1. The equivalent capacity defined by the double optimisation in (10) equ = Cequ . equals the one defined by the triple optimisation in (6): K(s† , t† ) = C Proof. Observe that J(s, t) is (strictly) monotonously decreasing in the variable C for fixed B and γ. It is easy to see that Cequ = inf{C : supt>0 inf s>0 J(s, t) ≤ str. mon. decr.
−γ} = inf{C : supt>0 inf s>0 J(s, t) = −γ}. The consequence of this is that J((s∗ (Cequ ), t∗ (Cequ )) = −γ (note that the extremisers depend on the variables C and B in this case, but B is fixed here, therefore only the dependence on variable C is indicated). By Theorem 1 it follows that K(s† , t† ) = Cequ as well (here the optimising parameters depend on variables B and γ, but those equ and that corresponds are fixed). However, this is exactly the definition of C to the assertion.
The respective optimiser pairs (s∗ (B, C), t∗ (B, C)) and (s† (B, γ), t† (B, γ)) do not coincide in general, they are not even comparable as such, since they depend on a different set of variables. Nevertheless, on the boundary of the acceptance region (J(s∗ , t∗ ) = −γ ⇔ (Cequ =)K(s† , t† ) = C) the same parameter values are the optimisers of the two problems: Proposition 1. If one of the double optimisations (3) and (10) has a unique extremising pair and J(s∗ , t∗ ) = −γ (or K(s† , t† ) = C), then the two extremiser pairs coincide, t∗ = t† and s∗ = s† .
174
G. Seres et al.
Proof. Assume (s† , t† ) is unique. By supposing any of the two (equivalent) equalities as the second assumption, J(s∗ (t∗ ), t∗ ) = −γ and K(s† (t† ), t† ) = C hold. By Lemma 2 with t = t∗ this means that K(s† (t∗ ), t∗ ) = C holds as well. Since K(s† (t† ), t† ) = C, then t∗ = t† by the uniqueness property. On the other hand, rearr. by rearranging K(s† (t∗ ), t∗ ) = C, J(s† (t∗ ), t∗ ) = −γ = J(s∗ (t∗ ), t∗ ) is obuniq.
t∗ =t†
tained. This proves s∗ = s∗ (t∗ ) = s† (t∗ ) = s† . The proof of the statement is almost the same when assuming the uniqueness of (s∗ , t∗ ).
4
Comparison of the Methods for fBm Traffic
This section presents a comparison of the three admission control methods discussed in Sect. 2.2 using the new formulae developed in Sect. 3.1 and in the Appendix. The traffic case used is fractional Brownian motion (fBm), which involves closed-form formulae due to its Gaussian nature. 4.1
Key Formulae for Fractional Brownian Motion Traffic
The stochastic process {Zt , t ∈ R} is called normalized fractional Brownian motion with self-similarity (Hurst-) parameter H ∈ (0, 1) if it has stationary increments and continuous paths, Z0 = 0, E[Zt ] = 0, V ar[Zt ] = |t|2H and if Zt def
is a Gaussian process. Let us define the process X[0, t) = mt + Zt , for t > 0. It is known as fractional Brownian traffic and can be interpreted as the amount of traffic offered to the multiplexer in the time interval [0, t). This is a so-called selfsimilar model, which has been suggested for the description of Internet traffic aggregates [8], [4]. Using this model the effective bandwidth (2) can be written as α(s, t) = 2 2H−1 2 2 2H m+ sσ t2 and accordingly J(s, t) = st m+ s σ 2t −s(B+Ct). The extremisers def
for J(s, t) and −N I can be found in Table 13 , where κ(H) = H H (1 − H)1−H . The equivalent capacity can be evaluated in two ways, either using the defequ requires the diinition in (6) or the method proposed in this paper (10). C rect evaluation of K(s, t) (9) at the alternative critical space and time scales (s† , t† ) (11), i.e. “only” a double optimisation is necessary. For fBm traffic γ equ are listed in Table 1. K(s, t) = m + 12 σ 2 st2H−1 + st − Bt , its extremisers and C If Cequ is calculated in the conventional way (6), the third optimisation (with respect to C) can be exchanged for solving −N I = −γ for C = Cequ (as seen in equ as expected (using the proof of Corollary 1).4 It can be checked that Cequ = C req (see the Appendix) can the definition of κ(H)). In a similar way, s , t and B req = Breq . be computed (see Table 1) and it also turns out that B 3 4
An identical expression for the approximation of the overflow probability was obtained in [8] with a different approach. This simplification can be done only because −N I is an explicit function of C in the fBm case (C can be isolated from the equation). In most other cases the third optimisation must be done through several double optimisations of J(s, t) for different values of C in order to locate C = Cequ for which −N I = −γ.
Towards Efficient Decision Rules for Admission Control
175
Table 1. Comparison of the three admission control methods for fBm traffic
sopt (t) = arg inf f (s, t) s>0
topt = opt
arg sup f (s t>0
f (s, t) K(s, t) s† (t) =
J(s, t) s∗ (t) =
√
t−2H (B+(C−m)t) σ2 ∗
t =
(t), t)
sopt = sopt (topt ) sup inf f (s, t) t>0 s>0
B H 1−H C−m
s∗ =
1−H · κ(H)2 (C−m)2H B 1−2H σ2
−N I = 2H
2−2H
B − (C−m) 2κ(H)2 σ 2
1 − 2H
2
2γt−H σ †
t =
B √ (1−H) γσ
L(s, t) s (t) =
1
H
s† = 2(1−H)γ B
equ = m + H· C 1 1−H H 2γσ 2 2H 1−H B
√ 2γt−H σ
t =
1 √ H 2γσ 1−H C−m
H 1−H s = C−m · H 1 √ 2γσ 1−H 2γ H 1−H H req = · B C−m √ 1 (1 − H) 2γσ 1−H
Confirming the statements in the previous section, it is apparent from Table 1 that the critical space scales s∗ , s† and s are usually different and depend on different parameter sets. For given B, C, γ, m, H and σ, the corresponding scales match only when equalities (13) and (14) hold. An interesting consequence of this fact is that the solution of t∗ (B, C, m, H) = t† (B, γ, σ, H) for C results in the equivalent capacity Cequ and its solution for γ is N I. Similar statements are valid for the space scales as well. 4.2
A Numerical Example
In this subsection a numerical example is presented to demonstrate the results of the previous subsection. Let us take the fBm model of one of the H Bellcore Ethernet data traces [4]: m1 = 138 135 byte/s, σ1 = 89 668 byte/s , H = 0.81. Assume that N = 100 of such sources are multiplexed into a buffer. Hence, the model parameters of the fBm model for the aggregate traffic workH load become: m = 13.8135 Mbyte/s, σ = 0.89668 Mbyte/s , H = 0.81. The buffer size is chosen to be B = 5.3 Mbyte, the service rate is C = 16 Mbyte/s and let the constraint for the overflow be e−16 ≈ 10−7 (γ = 16). For these system parameters, the extremiser pair is (s∗ , t∗ ) = (1.453, 7.091) and therefore −N I = −20.26. Clearly, −N I < −γ, i.e. the QoS requirement is fulfilled. The alternative critical scales and the equivalent capacity are obtained equ = 15.568 Mbyte/s, thus as (s† , t† ) = (1.147, 8.203) = (s∗ , t∗ ) and Cequ = C Cequ < C holds and there is 0.432 Mbyte/s of free service capacity.
5
Conclusion
This paper has introduced a new method for the computation of the equivalent capacity (and the buffer requirement) of traffic flows that is based on the many
176
G. Seres et al.
sources asymptotics. In contrast to the method directly building on the asymptotic rate function, the new method involves only two embedded optimisations instead of three, thus it significantly reduces the computational complexity of the task. It has been shown that the two methods are equivalent. The presented method of deriving the equivalent capacity leads to an alternative domain of time and space scales. In a given system the optimisation defininig equ (10) yields different optimal pathe equivalent capacity estimate (Cequ =) C rameter values than that defining the estimate of the overflow probability e−N I (3). Consequently, the substitution of the extremisers of J(s, t) into K(s, t) (9) does not lead to a correct estimate of the equivalent capacity. The only exception is the boundary of the admission region, where the two extremising pairs coincide. In terms of applicability, it can be shown that the method of the equivalent capacity computation is more appropriate for real-time operation than those based on the asymptotic rate function, especially if the workload process is measured on-line (measurement-based admission control). Recall the admission methods defined by (5) and (6), (7). In practice these admission rules are performed at the arrival of a new flow. The effective bandwidth estimate has to be adjusted in order to take the new flow into account. For example, let us assume that the new flow is described by its peak rate only. Then α+ (s, t) = α(s, t) + p is a conservative adjustment. With the rate-function based admission method, the double optimisation has to be re-evaluated in order to update the estimate of the overflow probability: −N I + = supt>0 inf s>0 {st α+ (s, t) − s(B + Ct)}. The decision criterion remains the same in this case. Using the equivalent-capacity based admission criterion is more convenient. Here, the estimation of the equivalent capacity of the existing flows can be maintained in the background, i.e. the estimate of Cequ can be recomputed based on periodic measurements. At the arrival of a new flow, the Cequ +p ≤ C criterion has to be checked, which differs from (7) only in a correction term that is the peak rate of the new flow. Hence, the timing-sensitive operation (the admission decision) involves only a simple addition and a comparison, while the timeconsuming double optimisation can be performed in the background, with more relaxed timing requirements. The proposed method thus enables the deployment of the many sources asymptotics in practice not only through the reduction of its complexity, but through shifting the computations away from the critical decision instant.
References [1] C. Courcoubetis, V. A. Siris, and G. D. Stamoulis. Application of the many sources asymptotic and effective bandwidths to traffic engineering. Telecommunication Systems, 12:167–191, 1999. [2] C. Courcoubetis and R. Weber. Buffer over ow asymptotics for a buffer handling many traffic sources. Journal of Applied Probability, 33:886–903, 1996. [3] N. G. Duffield. Economies of scale for long-range dependent traffic in short buffers. Telecommunication Systems, 7:267–280, 1997.
Towards Efficient Decision Rules for Admission Control
177
[4] R. J. Gibbens and Y. C. Teh. Critical time and space scales for statistical multiplexing in multiservice networks. In Proceedings of International Teletraffic Congress (ITC), pages 87–96, Edinburgh, Scottland, 1999. ITC’16. [5] F. P. Kelly. Notes on effective bandwidths. Stochastic Networks: Theory and Applications, 4:141–168, Oxford University Press, 1996. [6] J. T. Lewis, R. Russell, F. Toomey, S. Crosby, I. Leslie, and B. McGurk. Statistical properties of a near-optimal measurement-based CAC algorithm. In Proceedings of IEEE ATM, pages 103–112, Lisbon, Portugal, June 1997. [7] M. Montgomery and G. de Veciana. On the relevance of time scales in performance oriented traffic characterizations. In Proceedings of the Conference on Computer Communications (IEEE INFOCOM), volume 2, pages 513–520, San Francisco, USA, March 1996. [8] I. Norros. A storage model with self-similar input. Queueing Systems, 16(3/4):387– 396, 1994. [9] P. Tran-Gia and N. Vicari, editors. Impacts of New Services on the Architecture and Performance of Broadband Networks - COST257 Final Report. compuTEAM 2000, 2000.
Appendix: The Improved Buffer Requirement Estimator def
Let us introduce L(s, t) = t α(s, t) + γs − Ct, resulting from the isolation of the variable B from J(s, t) = −γ. That is, L(s, t) = B holds after the rearrangement. Similarly to (4) and (11) the critical space and time scales of req def = sup inf L(s, t) B
(22)
t>0 s>0
def
def
def
are s (t) = arg inf s>0 L(s, t), t = arg supt>0 K(s (t), t) and s = s (t ). The extremiser pair of (22) is then (s , t ), like in the previous cases. Analogously to the equivalent capacity, it is also true that the buffer requirereq = Breq holds. Consequently, ment defined by the triple optimisation in (8) B only two optimisations are needed instead of three to determine the buffer requirement Breq , matching the case of the equivalent capacity. The statements of Sect. 3.1 and Sect. 3.2 can now be reformulated. Theorem 2. (12) ⇔ L(s , t ) < B, (13) ⇔ (14) ⇔ L(s , t ) = B and (15) ⇔ (16) ⇔ L(s , t ) > B. Lemma 4. (17) ⇔ (18) ⇔ L(s (t), t) < B, (19) ⇔ (20) ⇔ L(s (t), t) = B, (21) ⇔ L(s (t), t) > B. Corollary 2. The buffer requirement defined by the double optimisation in (22) req = Breq . equals the one defined by the triple optimisation in (8): L(s , t ) = B Proposition 2. If one of the double optimisations (3), (10) and (22) has a unique extremising pair and J(s∗ , t∗ ) = −γ (or K(s† , t† ) = C or L(s , t ) = B), then the three extremiser pairs coincide, t∗ = t† = t and s∗ = s† = s . Proof. The proofs follow the structure of those in Sect. 3.1 and Sect. 3.2.
QoS with an Edge-Based Call Admission Control in IP Networks Daniel R. Jeske, Behrokh Samadi, Kazem Sohraby, Yung-Terng Wang, and Qinqing Zhang Bell Labs, Lucent Technologies Holmdel, New Jersey 07733, USA
Abstract. Central to the viability of providing traditional services over IP networks is the capability to deliver some level of end-to-end Quality of Service (QoS) to the applications and users. IP networks continue to struggle to migrate from a cost effective best effort data service solution to revenue generating solutions for QoS-sensitive applications such as voice and real-time video. For the case of a network of Media Gateways controlled by SoftSwitches, we propose the use of a measurement based call admission control algorithm at the edge of the network as an approach to provide a cost effective QoS solution. The proposed method utilizes statistical prediction techniques based on available performance measurements without complex QoS management of the packet network. Simulation analysis shows that significant gains in QoS can be achieved with such an edge-to-edge measurement based approach. Keywords: QoS, VoIP, CAC, SoftSwitch
1. Introduction One of the notable recent advances in converged networks is the development of the SoftSwitch (SS) technology. With SS technology, control plane functions and interworking between packet and circuit switched network signaling services can be implemented with standard based protocols and used to provide among other things, a locus of resource management of the bearer channels through circuit and packet switched networks [1]. Applications of SS technology include the Voice tandem solution for the Public Switched Telephone Network (PSTN) and VoIP solutions for IP end points. A reference network architecture is shown in Fig.1. For the voice Tandem solution, voice calls originating and terminating within PSTN would be handled by signaling the ingress and the egress SoftSwitch, and then the destination PSTN switch to complete the call setup procedure. While management of voice QoS in the PSTN is well understood, a different matter is providing QoS when a packet network is used as the transport between the gateways, or when IP end points such as SIP or H.323 devices get involved with the call. Central to the viability of providing traditional services over IP networks is the capability to deliver some level of end-to-end QoS to the applications and users. Many solutions have been proposed and implemented for ATM based packet networks and standards. On the other hand, IP based networks continue to struggle E. Gregori et al. (Eds.): NETWORKING 2002, LNCS 2345, pp. 178-189, 2002. © Springer-Verlag Berlin Heidelberg 2002
QoS with an Edge-Based Call Admission Control in IP Networks
179
to migrate from cost effective best effort data service solution to revenue generating QoS solutions for more demanding applications such as voice and real-time video. Bearer
Ingress SS
SS7
PSTN SW
SIP
Egress SS
Signaling/Control
IPDC/H.248 Packet RTP/UDP/IP Network
Edge Router Ingress GW(s)
PSTN SW
Edge Router Egress GW(s) IP End
Fig. 1 . Reference Network Architecture
In this paper, we explore the possibility of utilizing available measurements at the edge of packet networks by proposing a distributed edge-to-edge measurement based call admission control (CAC) to provide a cost effective QoS solution. Controlling variables are provided to allow the service providers to influence the tradeoffs of network resource utilization and risks of compromised QoS. The proposed method utilizes statistical prediction techniques using available performance measurements. In Section 2, we discuss the framework of QoS support in IP networks. In Section 3, we present details of the statistical prediction techniques and their application to a measurement based CAC algorithm. Quantitative analysis of the achieved QoS is reported in Section 4 with comparisons to a "best case," where the packet network resources are closely managed to provide a known capacity (see e.g. [2]), and a “worst case,” where every call is admitted.
2.
Quality of Service Support
For ease of exposition, we use voice service as the application throughout this paper although many of the principles can be extended to include other applications such as multimedia calls. The basic characterization of QoS for the voice application is wellstudied (see e.g., [3]). Performance metrics often used to measure QoS for the bearer or user plane of packet telephony include: end-to-end packet delay, delay jitter and packet loss. Note that there are other performance and QoS metrics such as call setup delay, post-dial delay and ring-back delay which are outside the scope of this paper.
180
D.R. Jeske et al.
The desired end-to-end delay is usually very small for toll quality, the recommended one-way value being 150 ms [4]. Delay jitter is the variation in the delay of consecutive packets. For streaming applications, the delay jitter should be small enough so that the traffic stream can be delivered at a constant rate to the receiver to prevent packet loss. If the delay jitter increases beyond a limit, the packet is regarded as lost. Usually some play-out buffer mechanism at the receiver is provided to absorb the jitter. Buffering adds to the delay and needs to be dimensioned carefully. Packet loss can also occur as a result of buffer overflows in the network. The degree of QoS degradation due to packet loss depends on the application and the coding schemes. We now turn our attention to the issue of primary interest of this paper: providing QoS support over the packet network. The most developed paradigm is ATM QoS, which includes several key principles that are applicable beyond ATM networks (see e.g. [5,6]). Networks that are IP based are evolving toward use of DiffServ/MPLS which provides the ability to manage QoS through traffic engineering of MPLS Label Switch Paths (see e.g., [7]) and Differentiated Service markings at the edge. However, before these technologies are ubiquitous and network management solutions get developed and deployed, the service providers are left with no choice but to over-provision their networks in order to minimize the risk of degrading QoS. An economic alternative, which does not require any network resource availability information, is to utilize the measurements available at the edge of the network in the CAC to make the best prediction of whether or not to admit a new call. The objective is as usual: admit as many calls as possible while supporting adequate QoS of the calls already in progress. An obvious implication of not having proactive control of network resources is that there is a period during which the network is vulnerable to sudden changes in congestion levels. We address this and identify other critical issues associated with using available QoS measures in connection with CAC.
3. Measurement Based CAC In our proposed CAC scenario, both the ingress and egress SS use historical QoS measurements from the originating and terminating Media Gateways (MGW) to make a QoS prediction for each new call. In this paper, we focus on one-way packet loss rate as the primary QoS measure since it is readily available from most MGWs. If either of the one-way predicted QoS measures is unsatisfactory, the new call is rejected, otherwise it is accepted. To implement this approach, sufficiently accurate predictions for one-way packet loss rates are required. In this section, we describe two statistical models for predicting packet loss rates. The first is a simple exponentially weighted moving average (EWMA) model. The second is an auto-regressive (AR) model which is more sophisticated and was evaluated for use so that we could determine if the EWMA model was overly simplistic. After providing some motivation for each model, we fit each model to traces that were collected, and conclude that the extra complications associated with the AR model may not be justified.
QoS with an Edge-Based Call Admission Control in IP Networks
181
3.1 EWMA Model Let the sequence {Z k }k =−∞ denote one-way packet loss rates associated with calls completed over the same path that a new call would utilize if it were admitted into the T
network. Alternatively, without additional complication, {Z k }k =−∞ could represent observed aggregate (over all calls on the path) packet loss rates over consecutive and disjoint windows of a prescribed width. Intuitively, one would expect correlation among the Z k values, and moreover, a degree of non-stationary behavior. A widely used correlation model that accounts for a slowly varying time-dependent mean is a auto-regressive integrated moving average model (ARIMA) [9] which writes: Z k − Z k −1 = ε k − θε k −1 . (3.1) T
Here, the sequence {ε k } is a white noise process with zero mean and variance σ 2 .
θ < 1 is a parameter that dictates the correlation amongst the Z k values. In the case of a slowly varying mean function for {Z k} , the mean of the left-hand-side of (3.1) is zero and the sequence of first-order differences, Dk = Z k − Z k −1 , is a stationary process. It is easy to verify that θ is the lag-1 auto-correlation of the { D k} sequence.
At time T, the best predictor of ZT + k ( k ≥ 1 ) is the conditional [given {Z k }k =−∞ ] T
expected value of ZT + k , say ZˆT + k . It can be shown [9] that ZˆT + k = (1 − θ ) ZT + θ ZˆT , T and ZˆT + k is a weighted average of the {Z k }k =−∞ sequence. Moreover, the weights die
off exponentially. Consequently, ZˆT + k is frequently referred to as an exponential weighted moving average (EWMA) of the {Z k }k =−∞ sequence. Note that under the EWMA model, the k-step ahead predictor is independent of k. This is a convenient result for our CAC application. In particular, if T calls have completed when a new call arrives, the new call will not be the (T+1)st completed call in the sequence, but rather will be the (T+D+1)st completed call in the sequence, where D is a random variable representing the number of completed calls during the holding time of the new call. What we need is ZˆT + D +1 , which by the above property is simply ZˆT +1 . With the EWMA model, we are thus able to avoid the difficulty associated with D not only being unknown, but also being a random variable. For many EWMA applications, the value of θ is chosen based on intuitive feelings of how much the past should be weighted. If Z k is rapidly changing, then θ = 0.2 is a common choice, while if they are not changing very fast θ = 0.8 is a common choice. Since θ is the lag-1 auto-correlation of the { Dk } sequence, it could be estimated periodically from the observed packet loss measurements, possibly even using a sliding-window scheme. Alternatively, θ can be computed adaptively, changing at each prediction epoch (a new call arrival in our case) [10,11]. T
182
D.R. Jeske et al.
3.2 AR Model An alternative to the EWMA model (3.1) is a p-th order auto-regressive model, AR(p). The AR(p) model relates the current value of a process to the proceeding p values and a white noise innovation term. In particular, we have p
∑φ
Zt − µ =
k =1
k
(Zt −k − µ ) + εt
(3.2)
where µ is the mean of the {Z k } process and the {φk }k =1 are the so-called autoregressive coefficients. Unlike the EWMA model, the AR(p) model assumes the {Z k } process is stationary. (Recall that the EWMA model assumes the first-order p
differences of the {Z k } process is stationary.) At time T, the best-unbiased predictor of an observation D+1 steps into the future, say ZˆT + D +1 , is the conditional expected value of ZT + D +1 , given {Z k }k =−∞ . It can be T
shown [9] that ZˆT + D +1 = µ + ∑ j =1 π (j D +1) ( ZT +1− j − µ ) , where coefficient π (j D +1) is p
(2) = φ1π (1) computed in a bootstrap recursive manner according to: π (1) j + φ j +1 , j = φj , π j (1) ( D +1) π (3) = φ1π (2) = ∑ i =1 φiπ (j D +1− i ) + φ j + D . Unlike the EWMA j j + φ2π j + φ j + 2 , … , π j D
model, the prediction equation explicitly depends on the unknown random variable D, as is clearly evident by the general form of ZˆT + D +1 . The best we can do is replace D by its mean value, introducing another source of variability into the prediction error. Use of ZˆT + D +1 requires an estimate of µ and {φk }k =1 . Moreover, these estimates must be updated periodically to reflect the changing conditions of the underlying network. Let Z1 , K , Z n denote the set of observations corresponding to a particular p
update interval. The estimate of µ is simply Z = ∑ t =1 Z t / n and method of moment n
{ }
p estimators of {φk }k =1 , say φˆk
p
k =1
, can be obtained by solving the p x p linear system
of Yule-Walker equations [9] which utilize the first p sample autocorrelations. In order to estimate a sample autocorrelation reliably from a statistical point of view, at least 50 observations are recommended, implying n > 50 + p . On the other hand, the time required to collect n observations is n / λ , where λ is the call arrival rate (arrivals/minute), and we need n / λ < S , where S is the duration (minutes) over which we are willing to assume the network is stationary. Hence, we require (50 + p ) < n < λ S , an interval constraint on the magnitude of n. (For example, if p=10, λ = 10 calls/minute and S = 10 minutes, then we would require 60 < n < 100 .) An additional constraint linking the call arrival rate, the call holding time and the duration of the stationarity interval derives from the following conditions. First, the new call arrival should complete within the interval of stationarity, and second, the p calls used for the packet loss prediction for the new call arrival should arrive and complete within the interval of stationarity (see Figure 2). It follows that p / λ + 2 HT < S , where HT is the average call holding time. (For example, if p=10, λ = 10 calls/minute, and S = 10 minutes, then we must have HT < 4.5 minutes.)
QoS with an Edge-Based Call Admission Control in IP Networks
183
Since AR(p) has a higher dimensional parameter space than EWMA, we could expect more precise predictions from it. However, its application is hard in that D is a random variable and the best we can do is replace it by its expected value, λ × HT , which would seem to offset some of the additional precision. Moreover, it is clear that the AR(p) model is more cumbersome to implement. In particular, the need to frequently update the parameter estimates is a drawback since at each update epoch, the Yule-Walker equations must be solved and then the bootstrap recursive scheme must be used to obtain the necessary prediction coefficients required by ZˆT + D +1 . These observations raise the question as to whether or not the extra precision seemingly available from the AR(p) model is significant enough to justify its added complexity. We examine this question in the next section by fitting each model to two different packet loss traces and comparing their respective prediction capabilities. p ??
Call Holding Time
p-th Previous Call Arrives
minutes Call Holding Time
p-th Previous Call Completes
Current Call Arrives
Current Call Completes
All of these Events Should Occur Within Interval of Stationarity
Fig. 2. Feasibility Condition for AR(p) Model
3.3 Empirical Comparison of Models To compare the relative prediction accuracy of the EWMA and AR(p) models, we applied them to two packet loss traces which were collected from two different IP networks. Below we describe the traces and the results obtained from the fitting analyses. We show that the simpler EWMA models compete very well with the AR(p) model, thereby making a case for their use in real applications. 3.3.1 University of Massachusetts Trace. The first packet loss trace we examined was obtained from the University of Massachusetts at Amherst (UMASS). The trace is one of many traces collected by sending out packet probes (RTP headers) along unicast and multicast end-to-end connections over the public Internet [8]. The packets were sent out every 20ms between UMASS and UCLA. The trace is a binary time series with zeros and ones indicating whether the packet probe arrived successfully or not. There are total 358,000 packets in the trace (a two-hour trace). Post processing on the trace divided it into 179 time intervals. Each time interval represents about 40 seconds and 2000 packets. The packet loss percentage of each interval was used as Zk, in our analysis. Since there are no overlapping intervals, the analysis is window-based in terms of the generation of the time series and the prediction. The estimation algorithms, EWMA or AR(p), apply to either case.
184
D.R. Jeske et al.
Figure 3 shows the analytical results of the packet loss estimation using three th different models. AR(5) is a 5 order AR prediction model with coefficients (0.65, 0.26, 0.34, -0.07, 0.22). SEWMA is a static EWMA with fixed weight θ = 0.75 . AEWMA is an adaptive EWMA with an average weight θ = 0.75 . The circles are the actual packet loss for each interval. The results demonstrate strong predictive power th in this data trace. The 5 order AR model implies that the correlation goes back at least 10,000 packet, or 200 seconds. The AR model predicts slightly better than the two EWMA models. But simpler EWMA model competes well. Figure 4 shows the prediction errors from the AR(5) model are unbiased. Although there is a fair amount of variance in the prediction error, the algorithm predicts well enough to indicate when the packet loss is likely to be “high”. 0.3
Packet Loss
0.25 0.2
Loss AR(5)
0.15
SEWMA AEWMA
0.1 0.05 0 0
30
60
90
120
150
180
Window ID
Fig. 3. Analysis of a UMASS trace using different models
3.3.2 NetMeeting Audio Trace. To further evaluate the prediction models, we conducted another experiment to generate a voice over IP trace. The experiment was set up between two desktop computers, located in New Jersey and California. A presentation was sent through NetMeeting over Lucent’s Intranet. The talk was encoded by PCM (Pulse Coded Modulation) with a sampling interval of 30 ms into a 64 kbps audio stream. A commercial software tool, NetXray, was used to capture the packets transported over two end points. Another software tool, Xdecode, was used to process and analyze the captured packets trace. Xdecode reads Ethernet capture files and decodes them in an ASCII format. We further processed the decoded file using shell scripts to identify and subtract the RTP/UDP packets that carried the audio stream. We obtained the RTP packet’s sequence number and relative delay offset and generated the corresponding binary packet loss trace. The trace we analyzed had 50,000 packets and lasted about 25 minutes. We divided the trace into 100 intervals of 500 packets each. The corresponding time series {Zk}, was thus generated with the packet loss rate of each interval as the dependent variable. Again, the estimation model is window based instead of per call based. Figure 5 shows the analytical results of the predication using three models. AR(2) nd is a 2 order AR predication model with coefficients (1.12, -0.17), SEWMA is a static EWMA with weight θ = 0.24 . AEWMA is an adaptive EWMA with average
QoS with an Edge-Based Call Admission Control in IP Networks
185
weight θ = 0.24 . The actual packet rates, denoted by the circles, show jumps between low packet loss and high packet loss. The results demonstrate strong prediction power in this data set also. This time the correlation goes back at least 1000 packets, or 30 seconds. The AR(2) model performs a little better than the EWMA models, particularly for the very clean transitions from low high loss. The EWMA models compete fairly well.
0.3
Predicted Loss
0.25 0.2 0.15 0.1 0.05 0 0
0.05
0.1
0.15
0.2
0.25
0.3
Actual Loss
Fig. 4. Prediction error from the AR model on a UMASS trace 0.2
Packet Loss
0.16
Loss
0.12
AR(2) SEWMA
0.08
AEWMA
0.04 0 0
20
40
60
80
100
Window ID
Fig. 5. Analysis of a NetMeeting trace using different models
Figure 6 shows the prediction error from the AR(2) model. The variance appears smaller than the UMASS trace. It is very likely that the predication error variance is proportional to the overall congestion level.
186
D.R. Jeske et al.
Prediction Loss
0.16 0.12 0.08 0.04 0 0
0.04
0.08
0.12
0.16
Actual Loss
Fig. 6. Prediction error from the AR model on a NetMeeting trace
3.4 CAC Options We now describe two variations of our proposed CAC approach, which differ in how the historical packet loss observations are calculated. The difference is significant in that the second option requires more information out of the MGWs. In Section 4 we compare the effectiveness of the two alternatives and evaluate whether or not the additional information significantly improves the overall QoS. CAC-1: In this option, packet loss rate is defined as the percentage of packets that are lost due to network conditions such as buffer overflow, and lost or corrupt connections. The process of determining the packet loss rate for the completed calls is through EWMA using a fixed smoothing factor of 0.8. If at the time of the new call arrival, neither of the predicted one-way packet loss rates exceeds a given threshold (say, 1%), the new call is accepted, otherwise it is rejected. For the calls that are accepted, overall QoS is measured as the ratio of the completed calls for which the actual packet loss or jitter delay is less than a given threshold (say 2%). In order to discount the effect of packet loss in the admission process when there are long interarrival times between consecutive calls, the packet loss rate is discounted at a rate of say 50% if the update takes place in longer than 10 seconds. Also, in order to account for calls with very long call holding times (longer than say 100 seconds), the packet loss is determined only over the last 100 seconds of the call. CAC-2: In this option, packet loss rate is defined as the percentage of lost packets due to all of the reasons defined for CAC-1 plus jitter variation. There is clearly an additional cost to acquire jitter measurement. However it is recognized as an important component of overall packet loss at the application layer and many vendors do support it. Note also that if network buffers are large, delay jitter may be the dominant component of packet loss. Our intent with CAC-2 is to ascertain that the use of delay jitter in the calculation of packet loss significantly improves the effectiveness of the CAC algorithm. All other details associated with CAC-2 are identical to CAC1.
QoS with an Edge-Based Call Admission Control in IP Networks
187
4. Performance of CAC Algorithms A simulation model is used to compare CAC-1, CAC-2 with two other scenarios. First, a “no CAC” scenario, in which all calls are admitted into the network. Second, a “Static CAC” scenario, where the maximum number of calls that can be allowed on a path is predetermined from provisioned bandwidth, and the number of calls inprogress is consulted before a new call is admitted. In the simulation model calls arrive to a single server queue, representing the network with no other interfering traffic. Each call generates voice packets every 20 ms from a PCM encoded source at 64 Kbps. Call inter-arrival times are exponentially distributed with a rate of 0.42 calls per second. The service rate of the queue corresponds to a transmission link with a capacity of 5 Mbps. The service time of a packet is therefore 1280/5000 = 0.256 ms. The call holding time is also assumed to be exponentially distributed with a mean of 180 seconds. The queue capacity is varied in the simulations. The three QoS measures are: 1) good call rate, where a good call is defined as those for which the packet loss rate over the entire duration of the call does not exceed a threshold (2% is used in the experiments), 2) call blocking rate, defined as the percentage of new call arrivals that were denied admission and, 3) packet loss, measured as the total number of packets lost due to buffer or delay jitter, and the average packet loss is determined as the percentage of packets lost within the duration of a call. 4.1 Simulation Results Table 1 shows call blocking rate (B), good call rate (Q), and average packet loss (L) for buffer sizes of 200, 400, 600 and infinite (packets). The two entries associated with L represent the packet loss components due to buffer overflow and delay jitter, respectively. Note that since CAC-1 relies only on packet loss due to buffer overflow, the performance for the infinite buffer size case is identical to the “No CAC” case. For the simulations summarized by Table 1, play-out delay was 60 ms. In this scenario the results for Static CAC, independent of the buffer size, are B=6.2%, Q=100% and L=0%. In the case of “No CAC,” all calls are allowed to the system, so obviously B=0. As a result, the quality of service as measured by Q and L are the worst compared to the other options. CAC-1 and CAC-2 both improve Q, particularly when the buffer size is small. Note that in that case, network buffer overflow dominates the overall packet loss rate so the difference between CAC-1 and CAC-2 is minimal. For large network buffers, delay jitter contributes to the overall packet loss, and CAC-2 results in better Q values compared to CAC-1, although the gains are modest for these cases. Of course CAC-2 always has higher B values since its predictions for packet loss will always be larger than those of CAC-1. Table 2 shows the simulation results for a 20 ms play-out delay and for buffer sizes of 200 and infinity. Comparing Table 1 and Table 2 for a buffer size of 200 packets, we note that the smaller play out delay translates into difference in the performance of CAC-1 and CAC-2. In Table 2, delay jitter is a significant component of the overall packet loss rate, and thus the CAC-2 packet loss predictions will have better fidelity. As a result, both B and Q are higher for CAC-2, and L is appreciably lower. Although Q falls off precipitously from Table 1 to Table 2 for both CAC-1 and CAC-
188
D.R. Jeske et al.
2, Table 2 still shows that both options improve upon the “No CAC” case. Similar to the previous case, the results for Static CAC, independent of the buffer size, are B=6.2%, Q=100% and L=0%. Table 1. Performance of CAC options (60 ms play out delay) Scenario No CAC CAC-1 CAC-2
B: Q: L: B: Q: L: B: Q: L:
200 0% 86.6% 2.2%, ~0% 5.9% 95.1% ~0%, ~0% 5.9% 95.1% ~0%, ~0%
B: Q: L: B: Q: L: B: Q: L:
Buffer Size (Packets) 400 600 0% B: 0% 49.9% Q: 49.9% 2.1%, 17.5% L: 2.1%, 19.1% 6.0% B: 5.9% 55.9% Q: 53.8% 0.5%, 10.7% L: 0.5%, 13.2% 11.4% B: 12.1% 61.7% Q: 62.9% 0.2%, 6.9% L: 0.2%, 8%
B: Q: L: B: Q: L: B: Q: L:
Infinity 0% 41.4% 0%, 34.1% 0% 41.4% 0%, 34.1% 12.5% 61.8% 0%, 9.8%
Table 2. Performance of CAC options (20 ms play out delay) Buffer Size (Packets) Scenario No CAC CAC-1 CAC-2
B: L: B: L: B: L:
200 0% Q: 48.7% 2.2%, 19% 5.9% Q: 52% 0.5%, 12.5% 11.8% Q: 59.7% 0.2%, 6.7%
B: L: B: L: B: L:
Infinite 0% Q: 43.6% 1.7%, 31.5% 0% Q: 43.6% 1.7%, 31.5% 13.3% Q: 60.6% 0%, 10.3%
In general, as expected, simulations indicate that both CAC-1 and CAC-2 increase Q compared to the “No CAC” case. What is worth noting however is that the performance of both CAC-1 and CAC-2 are approaching that of the best case “static CAC” when the network buffer and play-out buffers are sized appropriately. We have also noted that the difference between CAC-1 and CAC-2 with respect to Q is modest. However, with respect to L, CAC-2 provides appreciable reductions compared to CAC-1 and significant reductions compared to “No CAC”.
5. Summary In this paper, we presented a distributed edge-to-edge measurement-based approach to QoS support for a VoIP application. The measurements, namely packet loss rates, are available on most media gateways. We compared two statistical prediction methods: AR and EWMA, to predict packet loss from these measurements. It was demonstrated that the less complicated EWMA approach produced similar results under various traffic conditions on a public IP network. Two measurement based CAC methods were analyzed and compared through simulations with encouraging initial results. Both methods use predicted packet loss, with the difference that the packet loss is measured on different interfaces. CAC-1 uses measurements collected on the network layer and thus only represents packet loss due to various network conditions. CAC-2 uses the measurements collected on
QoS with an Edge-Based Call Admission Control in IP Networks
189
the application layer at the time where the packet play out is to take place. We quantified the magnitude of improvements with CAC2. These observations help us determine the details of algorithms and identify the required interfaces to the network elements. Future work includes: (i) extension of the simulation study to include data sources and multiple MGWs, (ii) investigation of the nature of packet loss in typical network environments to better differentiate CAC-1 and CAC-2, (iii) development of a revenue model based on QoS measures to facilitate comparison of alternative CAC scenarios, and (iv) investigation of the use of other measures, such as delay, in CAC algorithms.
References 1.
R.A. Lakshmi, “ The Lucent Technologies Softswitch: Realizing the promise of convergence,” Bell Labs Technical Journal V4, 2, APR-JUN, 1999, p 174-195 2. B. Doshi, E. Hernandez-Valencia, K. Sriram, YT Wang and O.C. Yue, "Protocols, Performance, and Controls for Voice over Wide Area Packet Networks," Bell Labs Technical Journal, Vol 3, No. 4, Oct 98. 3. K. Sriram and Y. T. Wang, "Voice over ATM using AAL2 and Bit Dropping: Performance and Call Admission Control," IEEE Journal of Selected Areas in Communications, Vol.17, No.1, 1999, pp.18-28 4. ITU-T Recommendation G.114, One-Way Transmission Time, 2/96 5. M. Grossglauser and D. Tse, "A Framework for Robust Measurement Based Admission Control,” IEEE/ACM Transactions on Networking V7, 3, JUN, 1999, p293-309 6. Z. Dziong, M. Ji, and Y.T. Wang, "Learning Algorithm for CAC adjustment in ATM networks", ATM/IP workshop, June 1999 7. P. Aukia, M. Kodialam, P. Koppol, T. Lakshman, H. Sarin, and B. Suter. RATES: A server for MPLS traffic engineering,” IEEE Network Magazine, pp. 34--41, March/April 2000 8. M. Yajnik, S. Moon, J. Kurose, and D. Towsley, “Measurement and Modelling of the Temporal Dependence in Packet Loss,” Proceedings of Inforcom99. 9. W.S. Wei, Time Series Analysis, Addison-Wesley Publishing Company, 1990. 10. D.W. Trigg and A.G. Leach, “Exponential smoothing with adaptive response rate,” Operations Research Quarterly, Vol.18, Issue 1, 1967, pp.53-59 11. D.Jeske, W.Matragi and B.Samadi, “Adaptive Play-out Algorithms for Voice Packets in a LAN Environment,” International Conference on Communications (ICC), 2001.
Admission Control and Capacity Management for Advance Reservations with Uncertain Service Duration Yeali S. Sun1 , Yung-Cheng Tu2 , and Meng Chang Chen3 1
Dept. of Information Management National Taiwan University Taipei, Taiwan 2 Dept. of Computer Science National Tsing Hua University Hsinchu, Taiwan 3 Institute of Information Science Academia Sinica Taipei, Taiwan
Abstract. Different from Immediate Request (IR) service in packetswitched networks, admission control for Advance Reservation (AR) service is more complex - the decision points include not only the start time of the new connection, but also the instants that the new connection overlaps with connections already admitted in the system. Traditional approach on advance reservation considers only a fixed scheduled period. When overtime occurs (often quite approaching the end of the originally scheduled service period) depending on network load and resource usage, the service may easily be disrupted due to insufficient resources available. Examples include the broadcasting of sports events and business videoconference calls. In this paper, we study the problem of admission control and resource management for AR service with uncertain service duration. The objective is to maximize user satisfaction in terms of service continuity and guarantee of QoS while minimizing reservation cost and call blocking probability of the AR service. An innovative two-leg admission control and bandwidth management scheme is proposed. Service continuity, user utility and reservation cost functions are proposed here to evaluate user’s satisfaction and the efficiency of resource allocation. Simulation results are presented.
1
Introduction
Many signaling and admission control designs of the quality of service (QoS) support in packet-switched networks such as RSVP [1] focus on requests that must be served immediately, commonly known as Immediate Request (IR) service. In today’s Internet, there is demand for Advance Reservation (AR) service. For example, many important business conference meetings and calls are pre-planned and scheduled. By advance reservation service, users can know whether they can E. Gregori et al. (Eds.): NETWORKING 2002, LNCS 2345, pp. 190–201, 2002. c Springer-Verlag Berlin Heidelberg 2002
Admission Control and Capacity Management for Advance Reservations
191
get full QoS support of their communication needs over the Internet in advance. From the service provider’s perspective, knowing the future needs ahead allows them to better manage the allocation and sharing of network resources between users, and to serve their customers in a more affirmative, predictable way. In order to perform admission control and resource allocation, requests for advance reservation must specify three basic data: service start time, QoS requirement and duration of service. Recently a few works were proposed. They all assume these parameters are given and of fixed value when requests are submitted [2,3,4,5,6,7,8]. In reality, these information may not be known in prior, especially the service duration. Examples include the broadcasting of sports events and business videoconference calls. Typically, there is so-called scheduled duration, e.g., two hours for a broadcast sports event. But often there are overtimes. Traditional approach on advance reservation considers only a fixed scheduled period. When overtime occurs (often quite approaching the end of the originally scheduled service period) depending on network load and resource usage, the service may easily be disrupted due to insufficient resources available. Therefore, it becomes a challenge to the service provider to fulfill the needs of such types of requests assuring both the continuity of service and guarantee of QoS given the uncertain service duration at the time the request was scheduled while in line with its goal of maximum network resource utilization. In this paper, we focus on AR request with longer lifetime such as Internet broadcast events and videoconferences. Here, we propose an innovative twoleg admission control and resource reservation scheme for AR requests with uncertain service duration over the Internet. The idea is to perform bandwidth reservation in multiple stages. Each stage has a fixed duration and specific level of quality of service to assure. Thus, service provider can efficiently manage network resources and allocate bandwidth necessary to guarantee service quality requirements of individual connections in each stage. To further tackle uncertainty and to maximize network resource utilization, an update mechanism is used. A convex user utility function is defined to characterize the level of user satisfaction for those admitted AR connections with the combined bandwidth allocation and service continuity. A reservation cost is also defined to evaluate the efficiency of the overall network resource allocation in advance reservation service. Other works related to advance reservations in the past include extensions to the existing protocols and signaling capabilities, e.g., extension of ST2 protocol [2,9] and RSVP [3]. In [5], the authors proposed a distributed reservation scheme and its possible implementation. In [10], the authors studied AR requests with uncertain duration. It does not address the service continuity problem. In [8], a measurement-based approach is proposed to estimate the bandwidth used for existing connections with fixed duration. In [6,7], they discussed the admission control for connections in progress that are preemptable or interruptible. Specifically, in [6], they studied the issue of resource sharing between AR and IR services. In [7], they gave a general description of the policy and pricing schemes for advance reservations. Most of these works assumed that service durations
192
Y.S. Sun, Y.-C. Tu, and M.C. Chen
are fixed and available at the admission control time. In [6,11], they assumed the service times follow some distribution. An estimate or a safe upper bound of the service duration must be given at the request submission time. In this paper, we focus on the admission control and bandwidth allocation problem for AR requests with uncertain service duration. The organization of this paper is as follows. In Section 2, we present the proposed Two-leg bandwidth reservation scheme. The definitions of service continuity and user utility are given. An update mechanism is also presented. In Section 3, admission control of the proposed scheme is described in detail. In Section 4, reservation cost is presented. In Section 5, simulation results are presented to show the benefits of the proposed scheme. Finally, we give a conclusion in Section 6.
2
The Two-Leg Resource Reservation Scheme
For advance reservation service, there are more than one decision points to check. They include all the time instants the duration of the new connection overlaps with the start time of any connections already admitted in the system. Figure 1 depicts the admission control decision points for AR connections with a fixed duration; the number of decision points is finite. This is, however, not true for the case of uncertain duration.
Fig. 1. There may have more than one decision points {t1 , t2 , t3 } to consider in the admission control of a new advance reservation request.
In this section, we propose a new admission control scheme with two-leg bandwidth reservation to address the problem of uncertain service duration in AR service. To deal with uncertainty, estimation is used here which is based on the observation that the probability many Internet AR applications will last longer than a duration t is very small when t is sufficiently large (e.g., VCD films information from Blockbuster Homepage [12]). First, we assume for AR requests without specifying service duration, the actual lifetimes will follow certain distributions. Thus, requests can be classified into different categories; each has its own characteristic lifetime distribution function. In reality such functions can be obtained through proper data collection, sampling, analysis and characterization from the real world [6].
Admission Control and Capacity Management for Advance Reservations
2.1
193
Age Function
Let ai (t) be the probability density function of the nominal duration of type i AR connections and si is the start time of the connection. We define the age function of connection i, Ai (t), as the probability that connection will end after time si + t, Ai (t) = P r{duration ≥ t} =
2.2
i
∞
ai (x)dx,
(1)
User Utility
We characterize the level of user satisfaction for those admitted AR connections with the combined bandwidth allocation and service continuity. First, a convex function si (Ti ) is defined to describe the level of satisfaction in terms of service continuity for connection i which lifetime is Ti and Di is the nominal duration of the corresponding event, i.e., si (Ti ) =
T
k( Di −1)
e 1
i
if Ti ≤ Di if Ti > Di
(2)
The constant k is used to reflect the weight of such effect. Figure 2 shows the values of (2) under different k’s. The larger the k the more utility gain is stressed on the continuity of service especially towards the end of the event. For example, if a live broadcast of a basketball game was initially scheduled for three hours but due to overtimes, the event is in fact three and half hours. The nominal duration is three and half hours. The lifetime of the connection however depends on whether a service extension request is issued, say two hour and 45 minutes after the event. If accepted, Ti is equal to nominal duration; otherwise it is three hours.
Service Continuity
1 0.8 0.6
k=1
0.4
k=2
0.2 0 0
0.2
0.4
Ti/Di
k=4 0.6
k=8 0.8
1
Fig. 2. The values of service continuity under different k’s.
194
Y.S. Sun, Y.-C. Tu, and M.C. Chen
Now, we define the user utility for connection i as the combination of both service continuity and bandwidth allocation. It is an increasing convex function: Ti dsi (t) ri (t) ui (Ti ) = × dt (3) dt Ri 0 Note that ri (t) is the bandwidth allocation to connection i at time t and Ri is the requested bandwidth. This function contrasts bandwidth allocated vs. requested during its lifetime continuity of service. The user utility is the integral of satisfaction over the nominal service duration. Essentially, the utility value increases when service continues. 2.3
Two-Leg Bandwidth Allocation
Instead of reserving bandwidth indefinitely for connections as in the traditional way, we propose to perform a two-leg admission control and bandwidth reservation for an AR request. The scheme works as follows. Initially, when the first time an AR request is issued Leg-One admission control is performed. In this phase, admission control only considers resource allocation for an initial fixed period of time called the full warranty period during which the requested bandwidth is reserved for its use if admitted. To handle situations where events may last longer than the warranty periods, a second leg - Leg-Two admission control is performed in which an at least minimum amount of bandwidth is reserved at the same time for another fixed period of time called the at least minimum warranty period. Admitted AR connection may issue warranty period extension requests at any time afterwards. Additional admission control will be required. We choose two-leg absolute service warranties than statistical guarantee as in [11]. We believe that this model of advance reservation service is more meaningful because users clearly know the requested QoS is assured during the period. There are several advantages of this model. First, it is easy to implement by service providers. Second, the model is simple enough for the average user to understand so that the users feel comfortable. Known expectations of service assurance reduce risks. Moreover, the administrative cost of tracking usage is low. Full Service Warranty Period In the full service warranty period, a connection is assured with full bandwidth allocation. The choice of a good warranty period is essential to the assurance of service continuity. It indeed depends on the nature of the application, i.e., the age distribution function. If a larger value is chosen, the system must reserve resources for a longer period of time. Although this can increase service provider’s confidence on service quality delivery and minimize the likelihood of service violation, a major concern is that network resources may be underutilized. Adversely, if a smaller value is used the system can achieve better resource utilization and blocking probability performance. The tradeoff is that more frequent service disruption and lower user satisfaction.
Admission Control and Capacity Management for Advance Reservations
195
At Least Minimum Bandwidth Reservation for After-Warranty Period The full service warranty period only represents expected or average duration. The rational behind the design of after-warranty period is to avoid sudden service disruption for connection whose event time is longer than this period, e.g., overtime of sports broadcast events. With resource reservation for the at least minimum warranty period, if a service extension request is rejected, the connection at least has a minimum bandwidth available to continue the service although the quality may degrade. The second leg warranty period is denoted as Di,amw . Let parameters βi,f w and βi,amw be the probability thresholds of the full warranty period Di,f w and at least minimum warranty period Di,amw (see Fig. 3).
Fig. 3. The amount of bandwidth reservation at different warranty periods.
Compared to full bandwidth reservation, the tradeoff is link utilization. In fact, many Internet applications such as real-time audio/video streaming media are capable of adapting themselves to the network state and can tolerate certain degree of performance degradation. Hence, the bandwidth requirement of an AR request in the proposed service model is given in the form of < Ri , Ri,min > where Ri is the bandwidth requirement of the service warranty period and Ri,min is the minimum amount of bandwidth acceptable to the connection. The actual bandwidth reservation in the at least minimum warranty period for connection i would be in the range of < Ri , Ri,min > (see Fig. 3). Ri can be the effective bandwidth [13,14] or the peak rate. 2.4
Revising Uncertainty with New Data
Uncertain resource allocation is complicated because of the form of ”probabilistic” in the duration of which the requested resources are needed as opposed to the fixed duration. Our work focuses on using new data to revise imperfect user-supplied initial knowledge of how long the connection will last. During the course of service, the service provider could periodically poll service user to update his/her knowledge of the connection lifetime or the service user can issue a status update to the service provider notifying whether an extension or early termination of the connection is needed.
196
3
Y.S. Sun, Y.-C. Tu, and M.C. Chen
Admission Control of Full Warranty Period and at Least Minimum Warranty Period
Let the total link capacity designated to the AR service be denoted C_{AR} (C_{AR} < C_{link}, where C_{link} is the link capacity). For each connection i, A_i(t) is defined as:

$$
A_i(t) = \begin{cases}
0, & t < 0 \\
1, & 0 \le t \le D_{i,fw} \quad \text{(full warranty period)} \\
A_i(t), & D_{i,fw} < t \le D_{i,fw} + D_{i,amw} \quad \text{(at-least-minimum warranty period)}
\end{cases}
$$
Consider the admission control of a new AR request. Let ⟨R_new, R_{new,min}⟩ be the bandwidth requirements of the new connection; D_{new,fw} and D_{new,amw} are the full warranty period and the at-least-minimum warranty period of the new connection, respectively.

3.1 Leg-One Admission Control
Let P_w be the set of admission control decision points, i.e., P_w = {t_k : t_k − s_new ≤ D_{new,fw}}, and let W(t_k) be the set of connections that overlap with the new connection at time t_k. The admission decision is based on the following test:

$$
\sum_{j \in W(t_k)} \max\bigl(R_{j,min},\; R_j \times A_j(t_k - s_j)\bigr) + R_{new} \le C_{AR} \qquad (4)
$$
3.2 Leg-Two Admission Control

Let P_amw be the set of admission control decision points, i.e., P_amw = {t_k : D_{new,fw} < t_k − s_new ≤ D_{new,fw} + D_{new,amw}}, and let W(t_k) be the set of connections that overlap with the new connection at time t_k. The admission decision is based on the following test:

$$
\sum_{j \in W(t_k)} \max\bigl(R_{j,min},\; R_j \times A_j(t_k - s_j)\bigr) + \max\bigl(R_{new,min},\; R_{new} \times A_{new}(t_k - s_{new})\bigr) \le C_{AR} \qquad (5)
$$
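To make the two admission tests concrete, the following Python sketch evaluates (4) and (5) over a set of decision points. It is only an illustration under assumptions the paper does not fix: the connection records, the decision-point grid, and in particular the shape of the after-warranty profile A(t) (here assumed to decay linearly toward the minimum share) are all hypothetical.

    # Illustrative sketch of the two-leg admission tests (4) and (5).
    # A connection is a dict with start time s, rates R and R_min, and
    # warranty periods D_fw and D_amw; all structures are assumptions.

    def A(conn, t):
        # Allocation profile A_i(t): 1 during the full warranty period;
        # during the at-least-minimum warranty period the paper leaves the
        # profile open, so a linear decay toward R_min/R is assumed here.
        if t < 0:
            return 0.0
        if t <= conn["D_fw"]:
            return 1.0
        if t <= conn["D_fw"] + conn["D_amw"]:
            frac = (t - conn["D_fw"]) / conn["D_amw"]
            return 1.0 - frac * (1.0 - conn["R_min"] / conn["R"])
        return 0.0

    def reserved(conn, t):
        # Bandwidth held at absolute time t: never below R_min while active.
        return max(conn["R_min"], conn["R"] * A(conn, t - conn["s"]))

    def admit(new, active, C_AR, decision_points):
        # Leg One checks (4) over the full warranty period; Leg Two checks
        # (5) over the at-least-minimum warranty period.
        for t in decision_points:
            rel = t - new["s"]
            if rel < 0 or rel > new["D_fw"] + new["D_amw"]:
                continue
            load = sum(reserved(c, t) for c in active
                       if c["s"] <= t <= c["s"] + c["D_fw"] + c["D_amw"])
            demand = new["R"] if rel <= new["D_fw"] else reserved(new, t)
            if load + demand > C_AR:
                return False
        return True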
4 The Reservation Cost

We distinguish two costs for each admitted advance reservation request i: the reservation cost c_{i,res} and the actual cost c_{i,act}, defined as follows:

$$
c_{i,res} = \int_0^{D_{i,fw}+D_{i,amw}} \max\bigl(R_i \times A_i(t - s_i),\; R_{i,min}\bigr)\,dt \qquad (6)
$$

$$
c_{i,act} = \int_0^{T_i} \max\bigl(R_i \times A_i(t - s_i),\; R_{i,min}\bigr)\,dt \qquad (7)
$$
Equation (6) is the integral of the total bandwidth reserved for connection i; this is the cost paid by the service provider. Equation (7) is the integral of the bandwidth actually used by connection i. The normalized reservation cost of the system over an interval τ is defined as:

$$
c_{sys} = \frac{\sum_{i \in AR(\tau)} c_{i,res}}{\sum_{i \in AR(\tau)} c_{i,act}} \qquad (8)
$$

Its value is no smaller than 1. It will be used to evaluate the performance of the proposed scheme in the next section.
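Costs (6)-(8) reduce to integrals of the reserved-bandwidth profile. The following is a sketch of a numerical evaluation, reusing reserved() from the sketch above; the Riemann-sum step dt and the field T (the actual connection duration) are assumptions.

    def cost(conn, horizon, dt=0.1):
        # Riemann-sum approximation of the integral of
        # max(R * A(t - s), R_min) over [0, horizon].
        t, total = 0.0, 0.0
        while t < horizon:
            total += reserved(conn, conn["s"] + t) * dt
            t += dt
        return total

    def normalized_cost(connections):
        # c_sys of (8): total reserved cost over total actually-used cost.
        res = sum(cost(c, c["D_fw"] + c["D_amw"]) for c in connections)  # (6)
        act = sum(cost(c, c["T"]) for c in connections)                  # (7)
        return res / act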
5 Performance Evaluation

In this section, we study the performance of the proposed two-leg advance bandwidth reservation scheme via simulation. The network configuration is shown in Fig. 4. The simulation period is 30 days (we take the daily average statistics over the 30 days). For all sets of experiments, the requests are assumed to be videoconferences whose nominal service duration follows a Pareto distribution with mean 120 minutes and shape 1.8. All requests have the same age distribution function and bandwidth requirements, with ⟨R, R_min⟩ = ⟨1.5 Mbps, 256 kbps⟩.
Fig. 4. Network Configuration of the simulation
The arrival process of advance reservation calls is assumed to be Poisson. Each call makes a connection reservation with a start time in the next day. In each day, we divide the 24 hours into "peak zones" (9am-12noon and 2-5pm) and "off-peak zones" (the other times of the day). The probabilities of reservations starting in peak and off-peak zones are assumed to be 0.7 and 0.3, respectively. Here, we assume the start time of an AR call must fall on the hour or the half hour (e.g., 9am, 9:30am, etc.). The start time distributions for calls in peak and off-peak zones are both uniform.
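A sketch of this workload model follows; the helper names are illustrative, and the Pareto scale is derived from the stated mean of 120 minutes and shape 1.8.

    import random

    def pareto_duration(mean=120.0, shape=1.8):
        # Pareto with the given mean: mean = scale * shape / (shape - 1).
        scale = mean * (shape - 1.0) / shape
        return scale / (random.random() ** (1.0 / shape))

    def reservation_start():
        # Start times fall on the half-hour grid; 70% of reservations start
        # in the peak zones (9am-12noon, 2-5pm), 30% in the off-peak zones,
        # uniformly within the chosen zone.
        grid = range(0, 24 * 60, 30)  # minutes from midnight
        peak = [t for t in grid if 9*60 <= t < 12*60 or 14*60 <= t < 17*60]
        off_peak = [t for t in grid if t not in peak]
        zone = peak if random.random() < 0.7 else off_peak
        return random.choice(zone)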
5.1 Service Continuity, User Utility, and Reservation Cost
In this set of experiments, we compare the user utility and reservation cost of the proposed Two-Leg bandwidth reservation with those of the traditional one-time reservation approach, referred to as one-leg reservation. D_fw and D_amw are set to 80 minutes (β_fw = 0.5) and 116 minutes (β_amw = 0.75), respectively. Both interval values are also used as the durations of the one-time reservation for the
sake of comparison. In the Two-Leg bandwidth allocation scheme, the service update is issued 60 minutes after the connection starts. The arrival rate of AR calls is 0.06 calls/minute. Table 1 shows the user utility. We can see that for connections whose nominal durations are greater than the full warranty period but less than the at-least-minimum warranty period, service continuity is one under the Two-Leg reservation, with or without update; with update, the user utility is further improved. For connections whose nominal durations are greater than the at-least-minimum warranty period, the Two-Leg reservation scheme outperforms the one-time reservation scheme with D_fw. As expected, the Two-Leg reservation scheme is not as good as one-time reservation with duration D_amw, because in the former the bandwidth allocated after the full warranty period is a function of the bandwidth available at the time the service extension request was issued. Service extension requests often come at short notice, near the end of the event. How to increase the acceptance probability of service extension requests is one of the issues we are currently looking into. In terms of reservation cost, the Two-Leg bandwidth allocation scheme performs very well, close to the one-time reservation with D_fw. This implies that the bandwidth reserved in the at-least-minimum warranty period is efficiently used.

Table 1. Comparisons of service continuity, user utility, and reservation cost of the Two-Leg and traditional one-time bandwidth reservation schemes.

                   | D_i <= D_fw    | D_fw < D_i <= D_amw | D_amw < D_i    |
                   | s(T_i)  u(T_i) | s(T_i)  u(T_i)      | s(T_i)  u(T_i) | c_sys (24-hours)
 1-leg(D_fw)       | 1.00    1.00   | 0.56    0.56        | 0.12    0.12   | 1.11
 2-leg(No update)  | 1.00    1.00   | 1.00    0.64        | 0.35    0.17   | 1.16
 2-leg(Update)     | 1.00    1.00   | 1.00    0.84        | 0.46    0.22   | 1.14
 1-leg(D_amw)      | 1.00    1.00   | 1.00    1.00        | 0.35    0.35   | 1.38
Figure 5 compares the call blocking probability under different arrival rates for the two schemes. As expected, because of the extra bandwidth reservation for the at-least-minimum warranty period, the call blocking probabilities of the Two-Leg reservation scheme, with or without update, are higher than that of the one-time reservation with duration D_fw but lower than that of the one-time reservation with D_amw. Figure 6 shows the reservation cost. One can see that the reservation costs for the Two-Leg reservation scheme, with or without update, in peak zones are close to those in off-peak zones. In contrast, the reservation costs for the one-leg reservations are much higher than those of the two-leg approach. Moreover, in the Two-Leg reservation scheme, the reservation cost with update is lower than that without update. This is because the bandwidth reserved after an update is much more likely to be utilized, thus lowering the reservation cost.
[Plots: call blocking probability (%) and reservation cost vs. arrival rate (calls/min), for 1-leg(D_fw), 2-leg(no update), 2-leg(update), and 1-leg(D_amw), in peak and off-peak zones.]

Fig. 5. Comparisons of call blocking probability.

Fig. 6. Comparisons of reservation cost.
5.2 Service Continuity, User Utility, and Reservation Cost

The parameter β_fw (D_fw) plays an important role in the proposed Two-Leg bandwidth reservation scheme. In this set of experiments, we study the effect of different choices of the full warranty period on user utility and reservation cost. Here, β_amw is fixed at 0.75. In Fig. 7, even with update in the Two-Leg reservation scheme, the improvement is limited. This is again because if the update is issued late in the connection lifetime, the blocking probability is likely to be high. Figure 8 shows the reservation cost under different full warranty periods.
U6(α) = 1 for α > 0; otherwise U6(α) = 0. All utilities are concave, starting at some minimum, and reach U(α) = 1 for α = 1. The least concave is the linear utility function U1, which is thus proportional to the amount of information that is well received. The most concave function is U6. Although it does not represent the utility of any real application, it can be used to obtain an upper bound on the gain achieved by employing FEC. Indeed, we see that U6 is larger than any other
utility, so it follows that in both models described by (1) and (2) it should give the best quality. The utility U4 is zero for α ≤ α0. This is typical for real-time applications with a minimum hard constraint. For example, the constraint may represent the fact that the throughput of existing codecs for voice applications cannot go beneath some bound. We shall plot the (expected) quality function with these six utility functions under different scenarios. For the plots we denote by M the actual buffer size and by ρ the total load. Also note that in all the plots that follow the utility functions are represented as: ∗ – U1, + – U2, · – U3, o – U4, – U5, and − − U6.

1. A single audio flow implementing FEC traversing the bottleneck (no multiplexing): When a single audio flow traverses the bottleneck (i.e., λa = λe = 0), the spacing between a packet and its redundancy at the bottleneck is the same as that generated at the transmission codec. We take λ = 1 and ρ = λ/µ. We plot the quality function with the six utility functions for this scenario for the cases when: (i) the packet sizes (and hence the buffer sizes) remain unchanged after adding redundancy and the quality function is given by (1), in Figs. 1, 2; (ii) the packet sizes change after adding redundancy and the quality function is given by (2), in Figs. 3, 4.
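Of the six utilities, the surviving text pins down only U1 (linear), U4 (zero below a hard threshold α0), and U6 (the step upper bound); the sketch below encodes just these three, with the threshold value and the shape of U4 above the threshold chosen purely for illustration.

    def U1(alpha):
        # Linear utility: proportional to the well-received information.
        return alpha

    def U4(alpha, alpha0=0.3):
        # Hard-constraint utility: zero for alpha <= alpha0 (alpha0 is a
        # placeholder); assumed linear above the threshold, reaching 1 at 1.
        return 0.0 if alpha <= alpha0 else (alpha - alpha0) / (1.0 - alpha0)

    def U6(alpha):
        # Step utility: the upper bound on any utility reaching 1 at alpha=1.
        return 1.0 if alpha > 0 else 0.0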
[Figure: four panels, M = 5 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 1. A single audio flow implementing simple FEC traversing the bottleneck with quality function given by (1).
We observe that – as expected, U6 gives an upper bound for the quality. More generally, for two utility functions Uj and Ui, if Ui ≥ Uj for all α, then the corresponding quality is larger in both cases (1) and (2).
[Figure: four panels, M = 500 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 2. A single audio flow implementing simple FEC traversing the bottleneck with quality function given by (1).
[Figure: four panels, M = 5 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 3. A single audio flow implementing simple FEC traversing the bottleneck with quality function given by (2).
[Figure: four panels, M = 500 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 4. A single audio flow implementing simple FEC traversing the bottleneck with quality function given by (2).
This is confirmed in the figures, taking into account that U6 ≥ U3 ≥ U2 ≥ U1.
– For a large buffer size (M = 500) the quality is almost the same for all ρ ≤ 0.9; the reason is that the loss probabilities are very small and almost all of the contribution to the quality comes from the utility of unlost packets.
– For ρ > 1 we see that we gain by adding FEC (for any buffer size, and for both cases described by (1) and (2)) for utilities U3, U4, U6. For the small buffer size M = 5 we also gain for ρ = 0.9 when using utilities U3, U4, U6 in both cases, but only for very low α, 0.1 or less.
– The linear utility function always decreases with FEC, for any ρ and any M.
– The quality is higher with the constant information model than with the constant packet size model.

2. An audio flow implementing simple FEC sharing the bottleneck with other audio flows implementing the same FEC: This is case I (Sec. 2.1) of our analysis. We take λ = 0.1, λa = 0.9, and ρ = (λ + λa)/µ. We plot the expected quality function with the six utility functions for this scenario for the cases when: (i) the packet sizes (and hence the buffer sizes) remain unchanged after adding redundancy and the quality function is given by (1), in Figs. 5, 6; (ii) the packet sizes change after adding redundancy and the quality function is given by (2), in Figs. 7, 8. Due to multiplexing, Φ takes values in {1, 2, . . .}. For our numerical calculations we restricted ourselves to φ ∈ {1, 2, . . . , 8}, as for φ ≥ 9 the contribution to Q(α) was negligible. Thus, for a buffer size of 5, the spacing can exceed the buffer size.
[Figure: four panels, M = 5 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 5. An audio flow implementing simple FEC sharing the bottleneck with other audio flows implementing the same FEC scheme with quality function given by (1).
[Figure: four panels, M = 500 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 6. An audio flow implementing simple FEC sharing the bottleneck with other audio flows implementing the same FEC scheme with quality function given by (1).
[Figure: four panels, M = 5 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 7. An audio flow implementing simple FEC sharing the bottleneck with other audio flows implementing the same FEC scheme with quality function given by (2).
[Figure: four panels, M = 500 with ρ = 0.1, 0.5, 0.9, 1.5; x-axis: amount of FEC (α); y-axis: expected quality E[Q(α)].]
Fig. 8. An audio flow implementing simple FEC sharing the bottleneck with other audio flows implementing the same FEC scheme with quality function given by (2).
Observe that the plots in Figs. 5 and 6 are almost identical to those in Figs. 1 and 2, respectively. From Figs. 7 and 8 we observe that for large ρ (> 1), the buffer size does not affect the expected quality. Also, from all the plots it can be observed that U5 always gives a lower bound on the expected quality for large α (≥ 0.2).
4 Conclusion

In this paper we studied the (possible) gain obtained with a simple FEC scheme when the losses are due to buffer overflow at the bottleneck. We obtained the loss probabilities in the presence of FEC using the ballot theorem. To this end, we generalized the analysis in [6] (which was for φ < Kα) and computed the loss probability for a fixed φ taking values from 0 to ∞. Using these results we obtained expressions for the expected audio quality for general utility functions, and then used the tools developed for detailed numerical studies with six utility functions under various scenarios of multiplexing at the bottleneck. Our future work is to analyse delay-aware utility functions [3] and to carry out a utility analysis of other, more intelligent and efficient FEC schemes to quantify the (possible) gain.
References

1. A. V. Garcia, S. Fosse-Parisis. FreePhone Audio Tool. High-Speed Networking Group, INRIA, Sophia Antipolis, France.
2. J. C. Bolot. End-to-End Delay and Loss Behavior in the Internet. Proc. ACM Sigcomm '93, pages 289–298, 1993.
3. C. Boutremans, J.-Y. Le Boudec. Adaptive Delay Aware Error Control for Internet Telephony. Proc. IPTEL 2001, 2001.
4. C. Perkins, I. Kouvelas, O. Hodson, V. Hardman. RTP Payload for Redundant Audio Data. RFC 2198, 1997.
5. D. R. Figueiredo, E. de Souza e Silva. Efficient Mechanisms for Recovering Voice Packets in the Internet. Proc. IEEE Globecom '99, 1999.
6. E. Altman, C. Barakat, V. M. Ramos R. Queueing Analysis of Simple FEC Schemes for IP Telephony. Proc. IEEE Infocom 2001, April 2001.
7. E. Altman, C. Barakat, V. M. Ramos R. On the Utility of FEC Mechanisms for Audio Applications. Proc. Second International Workshop on Quality of Future Internet Services (QofIS 2001), Coimbra, Portugal, 24–26 Sept. 2001. See also INRIA Research Report RR-3998, http://www-sop.inria.fr/mistral/personnel/Eitan.Altman/perf.html.
8. E. W. Biersack. Performance Evaluation of FEC in ATM Networks. Proc. ACM Sigcomm '92, pages 248–257, Aug. 1992.
9. I. Kouvelas, O. Hodson, V. Hardman, J. Crowcroft. Redundancy Control in Real-Time Internet Audio Conferencing. Proc. AVSPN '97, Aberdeen, Scotland, Sept. 1997.
10. J. C. Bolot, S. Fosse-Parisis, D. Towsley. Adaptive FEC-Based Error Control for Interactive Audio in the Internet. Proc. IEEE Infocom 1999.
11. L. Kleinrock. Queueing Systems, Vol. I. John Wiley, New York, 1976.
12. N. Shacham, P. McKenney. Packet Recovery in High-Speed Networks Using Coding and Buffer Management. Proc. IEEE Infocom '90, pages 124–131, May 1990.
13. O. Gurewitz, M. Sidi, I. Cidon. The Ballot Theorem Strikes Again: Packet Loss Process Distribution. IEEE Trans. on Information Theory, 46(7):2588–2595, 2000.
14. O. J. Boxma. Sojourn Times in Cyclic Queues: The Influence of the Slowest Server. Computer Performance and Reliability. Elsevier Science (North-Holland), 1988.
15. Mice Project. RAT: Robust Audio Tool. Multimedia Integrated Conferencing for European Researchers, University College London, U.K.
16. S. Shenker. Fundamental Design Issues for the Future Internet. IEEE Journal on Selected Areas in Communications, 13(7), September 1995.
A Power Saving Architecture for Web Access from Mobile Computers

Giuseppe Anastasi¹, Marco Conti², Enrico Gregori², and Andrea Passarella¹

¹ University of Pisa, Dept. of Information Engineering, Via Diotisalvi 2, 56126 Pisa, Italy
{g.anastasi, a.passarella}@iet.unipi.it
² CNR - CNUCE Institute, Via G. Moruzzi 1, 56124 Pisa, Italy
{marco.conti, enrico.gregori}@cnuce.cnr.it
Abstract. This work proposes new power-saving strategies for mobile access to the Web. User mobility is a key factor in the evolution of Web services. Unfortunately, the legacy approach to Web access is very inefficient when applied to mobile users. One of the critical issues is the inefficient usage of energy resources when the legacy TCP/IP architecture is adopted for Web access from mobile devices. In this paper we address this problem by proposing a new architecture, namely PS-Web, which works at the transport layer and exploits some knowledge about the application behavior. PS-Web is transparent with respect to the application and independent of the sub-network technology. We implemented a prototype of PS-Web. Experimental results provided by this prototype have shown that a relevant energy saving (about 70% on average) can be achieved with respect to the legacy TCP/IP approach. Furthermore, the power saving is obtained without a significant degradation in the QoS perceived by the users. Specifically, PS-Web introduces almost negligible additional delays (with respect to the legacy approach) in the downloading of a Web page.
1 Introduction
The Mobile Internet is emerging as one of the most promising fields in the area of computer networking. The Internet explosion of recent years has demonstrated that being able to access information of interest at the very moment it is needed is a valuable opportunity. In this context, the concept of mobility adds a new dimension: information is carried directly to the user at any time and any place. However, integrating mobile computers in the legacy Internet scenario is still a challenging problem for a number of reasons. Internet protocols and applications were designed with the implicit assumption that links are wired and hosts do not change their position in time. Mobile computers have less computation and storage resources than desktop computers. Furthermore, they usually connect through wireless links, which are characterized by lower bandwidth and greater bit error rate than wired links. Finally, mobile computers have a limited energy autonomy since they are battery-fed. Hence, the use of legacy solutions causes a non-optimal usage of the
system that heavily limits the growth of the mobile Internet. In particular, the scarcity of energy resources is a very limiting factor [7, 11, 15]. In principle, energy-related problems could be solved by either increasing the battery capacity or reducing the energy consumption. Projections of progress in battery technology show that only small improvements in battery capacity are expected in the near future [20]. Hence, it is vital to manage energy efficiently. Strategies for energy saving have been investigated at several layers, including physical-layer transmissions, the operating system, the network protocols, and the application level [8,10,14,16,18,19,22,23,24].

In this paper we focus on strategies aimed at reducing the energy consumed by networking. Although this component only accounts for about 10% of the total consumption in current notebooks [13], it increases to approximately 50% in hand-held computers (palmtops, PDAs, etc.) [12]. Hence, it becomes very important to design a power-efficient networking subsystem. Based on experimental measurements, [21] and [13] conclude that the only way to actually reduce the networking component of the energy consumption consists in switching the wireless network interface off during inactivity periods. The works in [12] and [13] show that the legacy TCP/IP network architecture may have a negative impact on the energy consumption and propose to exploit the Indirect-TCP approach [2]. A further improvement could be achieved by exploiting some knowledge about the application behavior. According to this evidence, power management should be controlled at higher layers, potentially even at the application layer [12,13,21].

In this paper we propose new energy-saving strategies implemented in software network protocols. Specifically, we operate at the transport and application layers and use an application-dependent approach, in the sense that the envisaged strategies exploit some characteristics of the application. However, the proposed solutions do not require any modification to the application itself. We focus on Web services, but our design is modular and could easily be adapted to any other network application. The choice of the Web is justified by several reasons. First, it is today the most widely used Internet application and is a serious candidate to become the killer application for the mobile Internet too. Furthermore, Web users are typically sensitive to delays. Hence, achieving a significant reduction in the energy consumption while maintaining an acceptable Quality of Service (QoS) level is a very challenging task.

We defined a new architecture, referred to throughout as PS-Web (Power Saving Web), which allows mobile users to exploit Internet Web services with a QoS similar to the one provided by the legacy network architecture based on the TCP/IP protocol stack, but with a significant reduction in the energy consumption. The PS-Web architecture is based on the Indirect-TCP model [2], i.e., the TCP connection between the browser and the Web server is split into two connections: one between the browser (on the mobile computer) and an Access Point (at the border between the wireless and wired networks), and the other one between the Access Point and the Web server. Unlike the solution proposed in [12], however, a simplified transport protocol is used between the mobile host and the Access Point.
Furthermore, the inactivity timeouts and sleeping times used to switch the network interface off and on are not fixed – as in [21] and [12] – but are adjusted dynamically, based both on information about the past history collected on-line and on statistical models of Web traffic patterns available in the literature. The Access Point works as a Power Saving Proxy Web, i.e., a Proxy Web with power saving support for mobile users. Specifically, it implements a pre-fetching mechanism.
Experimental results obtained on a prototype implementation of the PS-Web architecture based on an IEEE 802.11 WLAN [9] have shown that the PS-Web saves about 70% of the energy with respect to the legacy TCP-based architecture. Furthermore, this is not obtained at the cost of a significant degradation in the Quality of Service (QoS): the additional delay introduced by the PS-Web in transferring a Web page is always lower than 1.5 s. The paper is organized as follows. Section 2 sketches the characteristics of Web traffic. Section 3 is devoted to the definition of the PS-Web architecture. Section 4 reports some experimental results obtained by using the prototype implementation. Finally, Section 5 concludes the paper.
2 Web Traffic Characterization
The power saving strategies implemented in our system are based on the characteristics of the application. Hence, as a preliminary step, it is necessary to understand the traffic profile generated by Web browsing. Many papers in the literature provide mathematical characterizations of Web traffic [1,3,4,5,6]. Fig. 1 shows the typical ON/OFF profile of the network traffic generated by an individual Web user [5]. As is well known, a Web page consists of a main file and zero or more embedded files (e.g., figures). All files composing a Web page are transferred during the Active Time interval, while in the Inactive Time (or Think Time) interval the user reads the content of the downloaded Web page. Within an Active Time, ON Times correspond to actual file transfers, while during Active OFF Times the browser parses a piece of the main file and sends the request for the next embedded file.
[Timeline: ON Times and Active OFF Times within the Active Time, between a User Request and Download Done; Inactive (Think) Time until the next User Request.]
Fig. 1. Typical phases of a page transfer as observed by the user.
Fig. 1 suggests the following hints. During Inactive OFF Times the network interface can be switched off. On the other hand, Active OFF Times are often too short (less than 1 s) to turn the interface off profitably (recall that the interface has a transient when turning on, during which it consumes energy but is not available for data transfer). However, one could manage the transfer of a Web page in such a way that all files in the page are transferred on the wireless link in a single burst. With this approach, the different Active OFF Times are concentrated in a single large OFF Time, which gives more chances to turn the interface off and actually save some energy. It may be worthwhile to point out that the ON/OFF behavior of the Web traffic generated by individual users is related to the self-similarity that is a structural
property of (aggregate) Web traffic [5]. This means that the ON-OFF behavior is independent of the specific access pattern followed by the user and the type of files available in the Web server.
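For intuition, the ON/OFF profile above can be emulated by a toy generator in the spirit of such SURGE-like characterizations: heavy-tailed ON times separated by short Active OFF times within a page, and long Think times between pages. All distributions and parameters below are illustrative placeholders, not the fitted values from [1,3,4,5,6].

    import random

    def pareto(scale, shape):
        return scale / (random.random() ** (1.0 / shape))

    def web_user_timeline(n_pages=3):
        # Yield (phase, duration_in_seconds) pairs for one simulated user.
        for _ in range(n_pages):
            n_files = 1 + int(pareto(1.0, 2.4))          # main + embedded files
            for k in range(n_files):
                yield ("ON", pareto(0.5, 1.2))           # heavy-tailed transfer
                if k < n_files - 1:
                    yield ("ACTIVE_OFF", random.expovariate(2.0))  # parse gap
            yield ("THINK", pareto(5.0, 1.5))            # inactive OFF time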
3 PS-Web Architecture and Protocols
A typical mobile scenario is depicted in Fig. 2. The communication between a mobile host and a machine connected to the Internet (Fixed Host) is made possible by a third entity (Access Point), which provides Internet connectivity to the mobile host through a wireless link. Although very simple and costless, a legacy TCP-based solution is prone to various drawbacks that heavily impact the energy consumption at the mobile host.
1. The TCP congestion control wrongly interprets losses on the wireless link as congestion signals. Hence, the overall throughput is usually low and the wireless network interface at the mobile host remains idle most of the time.
2. Congestion in the wired network limits the throughput on the wireless link as well. The overall effect is the same as in 1.
3. The ON/OFF behavior of Web traffic forces the wireless network interface to be inactive for long time intervals.
[Diagram: Mobile Host – wireless link – Access Point – wired links – Fixed Host.]
Fig. 2. A typical mobile environment
To overcome these problems we exploited a network architecture based on the Indirect-TCP model [2]. The transport connection between the client at the mobile host and the Web server is split into two parts: the first one between the mobile host and the Access Point, and the second one between the Access Point and the Web server. At the Access Point, an agent (the I-TCP daemon) relays data from one connection to the other. A Simplified Transport Protocol (STP in Fig. 3), instead of the legacy TCP protocol, is used to transfer data on the wireless link. The Indirect-TCP model eliminates the problems related to point 1 above. However, bottlenecks in the Internet might still cause a low transfer rate on the wireless link. To overcome this second problem, we use pre-fetching of Web pages at the Access Point. Embedded files – if any – are requested from the remote server even without an explicit request from the user, and are then transferred to the mobile host on request from the mobile host itself. This approach allows embedded files to be transferred on the wireless link at full speed, irrespective of the throughput available on the wired connection. At the mobile host side, pre-fetching is managed by the PSP (Power Saving Protocol) module. At the Access Point side, it is handled by the PS-Daemon (see Fig. 3), which is the I-TCP daemon enriched with pre-fetching and power management mechanisms.
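The connection-splitting idea can be sketched in a few lines of Python: an agent at the Access Point terminates the downward connection and opens its own upward connection, relaying bytes between the two. The blocking, thread-per-direction structure and the addresses are assumptions; the real PS-Daemon additionally prefetches pages and drives power management.

    import socket, threading

    def relay(src, dst):
        # Copy bytes in one direction until EOF, then close the peer.
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        dst.close()

    def split_tcp_agent(listen_port=8080, server_addr=("example.org", 80)):
        # I-TCP-style split: one connection per side, relayed both ways.
        acceptor = socket.create_server(("", listen_port))
        while True:
            downward, _ = acceptor.accept()                 # mobile host side
            upward = socket.create_connection(server_addr)  # Web server side
            threading.Thread(target=relay, args=(downward, upward),
                             daemon=True).start()
            threading.Thread(target=relay, args=(upward, downward),
                             daemon=True).start()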
[Protocol stacks: Mobile Host (browser / HTTP / PSP / STP / IP) – Access Point (PS-Daemon, with PSP/STP on the wireless side and HTTP/TCP on the wired side, over IP) – Fixed Host (server / HTTP / TCP / IP).]
Fig. 3. Overall PS-Web network architecture, with the added components highlighted
Finally, with reference to point 3 above, it can be observed that, by grouping the transfer of the embedded files on the wireless link into a single burst, the Active OFF Times can be compacted into a unique long OFF Time. This significantly reduces the time during which the network interface must be on. At the mobile host, the PSP layer is responsible for identifying the beginning of the Inactive OFF Times and turning the network interface off until a new request from the browser arrives.

3.1 Power Saving Protocols
The PS-Daemon can be seen as made up of two components. The upper-level component interacts with the HTTP modules at the mobile host and fixed server, and implements the same functionalities as a Proxy Web. The lower-level component implements power management by interacting with the PSP module at the mobile host via the Power Saving Protocol. Therefore, the PS-Daemon can be regarded as a Proxy Web with power saving support. Since we are interested in power management, in the following we shall focus on the PSP protocol. Fig. 4 and Fig. 5 show the actions performed at the mobile host and the Access Point, respectively. Upon receiving the main file from the remote server, the PS-Daemon forwards it to the mobile host, together with an estimate of the residual transfer time (see below), i.e., the time needed to fetch the embedded files from the server (lines 2-3 and 18-22). Upon receiving such an estimate, the mobile host turns the network interface off for the corresponding time interval (lines 4-7). Possible requests for embedded files generated by the browser in the meanwhile are blocked by the PSP layer until the network interface is turned on again (lines 8-12). When the time interval has elapsed, the PSP module at the mobile host turns the network interface on and sends the requests for embedded files, if any, to the PS-Daemon (lines 13-15). The PS-Daemon has already fetched these files from the server and can thus send them back to the browser (lines 23-25). When the Web page is completely available at the mobile host, the PSP module turns the network interface off (lines 16-17) until a new request arrives from the user.
1  OnNewPageRequested(httpRequest)
2     resumeInterface()
3     send httpRequest to Access Point
4     receive (mainFile, estimate) from Access Point
5     if (estimate >= MIN_USEFUL_TIME)
6        suspendInterface()
7        setTimer(estimate)

8  OnRequestFromBrowser(httpRequest)
9     if (interface is ON)
10       send httpRequest to Access Point
11    else
12       insert httpRequest into pendingRequests

13 OnTimerExpired()
14    foreach httpRequest in pendingRequests
15       send httpRequest to Access Point

16 OnPageTransferFinished()
17    suspendInterface()
Fig. 4. PSP protocol: actions performed at the mobile host.
18 OnNewPageRequested(httpRequest)
19    send httpRequest to server
20    receive mainFile from server
21    estimate = evaluate_time(mainFile)
22    send (mainFile, estimate) to mobile host

23 OnRequestForEmbedded(httpRequest)
24    file = identifyFile(httpRequest)
25    send file to mobile host

Fig. 5. PSP protocol: actions performed at the Access Point.
The architecture depicted in Fig. 3 is completely transparent to both the application and the HTTP protocol. Like any other Web proxy, the PS-Daemon does not introduce any modification at either the client or the server side of the application. In particular, the PSP module at the mobile host presents a socket-like interface to the application layer. The PS-Web architecture relies upon estimates of the file transfer times. These estimates are performed by the PS-Daemon at the Access Point and communicated to the mobile host (see [17] for details). As clearly appears from the protocol description, the accuracy of the estimates is a key factor in achieving a significant power saving at the mobile host. The above architecture can easily be modified to include optimizations such as the handling of inaccuracies in the estimates supplied by the PS-Daemon (lines 22 and 4), the isolation of the application-dependent functionalities to achieve higher modularity and reusability, and so on. Details on such optimizations can be found in [17].
4 Experimental Results
The objective of our system is twofold. First, it should achieve a good power saving with respect to the legacy approach. At the same time, the reduction in the power consumption should not occur at the cost of an unacceptable degradation in the QoS. To evaluate the performance of our system we considered a Power Saving Index (I_ps) and a Page Delay Index (I_pd). The Power Saving Index is defined as
$$
I_{ps} = \frac{\text{network interface consumption in PS-Web}}{\text{network interface consumption in TCP architecture}} \qquad (1)
$$
I_ps gives an immediate indication of how much energy can be saved by using the PS-Web approach instead of the legacy solution. The Page Delay Index measures the additional delay introduced by the PS-Web to transfer a Web page with respect to the legacy architecture, i.e.,
$$
I_{pd} = (\text{page transfer time in PS-Web}) - (\text{page transfer time in TCP architecture}) \qquad (2)
$$

To assess our architecture we performed an extensive measurement campaign. In our measurements, to simulate the application level, we used SURGE, a Web traffic generator designed by Barford and Crovella [3]. SURGE can simulate an individual user by generating ON/OFF traffic with the same statistical properties as real Web traffic (see Section 2), so it allows us to evaluate our system in realistic conditions. To significantly simulate a Web session, each experiment was stopped after 150 files had been transferred from the Web server to the client¹, when the whole page "in flight" had arrived at the client. In each experiment, two different instances of a Web client request the same set of pages from the same server. One client uses the PS-Web, while the other uses the legacy architecture. We make sure that the path conditions between the client and the server are the same in both cases by executing the two instances in parallel. We performed sets of experiments where each set spanned an entire day and was carried out during Italian business time, to test our architecture when the Internet is heavily loaded. To increase the estimates' reliability, each set of experiments was replicated on several days. To take into account the influence of the Internet client-server route, we chose to perform separate sets of trials along different paths. Specifically, the mobile host was always located in Pisa (Italy), at the CNUCE-CNR Institute, while the server was located either at EPFL in Lausanne (Switzerland) or at the University of Texas at Arlington.
¹ With SURGE, the files requested from the Web server are 93% extracted from the body and 7% from the tail of the file size distribution. Hence, in our experiments, we have about 10 files taken from the tail of the distribution.
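Both indices reduce to elementary operations on measured quantities; the following is a sketch assuming on-times and page transfer times are collected per experiment (the on-time approximation of I_ps is justified in Sec. 4.1 below).

    def power_saving_index(on_time_ps_web, on_time_legacy):
        # I_ps of (1), approximated by network-interface on-times.
        return on_time_ps_web / on_time_legacy

    def page_delay_index(page_time_ps_web, page_time_legacy):
        # I_pd of (2): additional page transfer delay introduced by PS-Web.
        return page_time_ps_web - page_time_legacy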
4.1 Power Saving Analysis
As shown in [13] and [21], while the network interface is in the on state it drains a nearly constant power from the battery source, whether it receives or transmits data or remains idle. The energy consumed by the wireless interface is thus proportional to the time it remains in the on state. Therefore, according to equation (1), I_ps can be closely approximated by measuring, in both the legacy and the PS-Web architectures, the overall time the network interface is in the on state. This index is easier to compute. Fig. 6(a) shows the overall time the network interface remains in the on state in the PS-Web and in the legacy architecture, respectively. The available service rate (i.e., the downloading speed in Kbps), as observed by the Web client in the legacy architecture, is also reported. It appears that the PS-Web is almost independent of the service rate. This is due to the joint effect of using the Indirect-TCP model and the data pre-fetching mechanism: the mobile host turns its network interface on only when the Web page is available at the Access Point. Therefore, the energy consumption does not depend on the state of the connection between the Access Point and the Web server, i.e., on the service rate. The small variability that can be observed in Fig. 6(a) is caused by the transfer time estimator (see Section 3).
[Plots (Italian time on the x-axis): (a) network interface on-time (energy consumption) for PS-Web and the traditional architecture, together with the service rate (Kbps); (b) consumption ratio I_ps, its whole-day average, and the service rate.]
Fig. 6. Energy consumption and I_ps as a function of time (Web server in Texas).
The above results show that the wireless link is protected from Internet congestion and, thus, that the Indirect-TCP model and data pre-fetching work correctly. This conclusion is corroborated by Fig. 6(b), where the Power Saving Index I_ps is reported as a function of the time of day. Fig. 6(b) also shows the I_ps index averaged over the whole day (average ratio in the figure) and the service rate. The power consumption in the PS-Web is always less than 40% of that in the legacy system, and its average value over the whole day is below 30%, with values down to 20%. This means that, by using the PS-Web, we can always save more than 60% of the battery, on average more than 70%, with saving peaks over 80%. The above results refer to the case where the Web server is located in Texas. Analogous experiments have been performed with the Web server in Switzerland, obtaining similar results. Indeed, when the server is in Switzerland the service rate is only slightly higher than the Texan one (175 Kbps vs. 150 Kbps) and the power saving is slightly lower (68% vs. 71%). The similarity can be justified by observing that the two paths share the initial part, which goes from Pisa to London. Furthermore,
during European business time, when the experiments were performed, this part is more congested than the rest of the path, and thus determines the overall service rate in both cases.

4.2 Delay Analysis
Fig. 7(a) shows the additional delay experienced by Web files, averaged over each experiment and over the whole set of experiments in a day, respectively. The service rate is also included for the reader's convenience. Fig. 7(b) shows the same indices but with respect to Web pages instead of Web files (recall that a Web page consists of a main file and possible embedded files).
[Plots (Italian time on the x-axis): (a) average additional file delay and (b) average additional page delay, each with the whole-day average and the service rate.]
Fig. 7. Average additional file and page delay (the Web server is located in Texas).
From Fig. 7(a) it appears that the PS-Web introduces very small additional delays in transferring Web files: the average daily value of the additional delay is less than 0.5 s. Similarly, the additional delay (averaged over a day) for Web pages is on the order of 0.7 s. In general, the additional page delay is only slightly greater than the additional file delay. This is a very important result, since it proves that the pre-fetching mechanism at the Access Point works properly. In fact, the estimator of page transfer times forces the mobile host to keep the network interface off until the Web page is completely available at the Access Point. This means that the first file of a page experiences a certain additional delay, while successive files experience a smaller one. Therefore, the additional delay of a Web page is mainly determined by its first file. Fig. 8 shows the tail of the distribution of the additional delay for files and pages, respectively. The additional delay is never greater than 1 s for individual files and never greater than 1.2 s for pages. The results discussed above were obtained with the Web server located in Texas. As above, experiments with the Web server in Switzerland provided very similar results, which are omitted for the sake of space. Based on these results we can conclude that the power saving achieved by the PS-Web architecture is not paid for with an unacceptable degradation of the QoS perceived by the user.
[Plot: tails of the additional-delay distribution functions for file delay and page delay, vs. additional delay (sec).]

Fig. 8. Tails of the additional delay distributions, for single files and whole pages.
5 Conclusions
In this work we have proposed and experimented with new strategies for reducing the power consumption of Web access in a mobile wireless environment. To overcome the drawbacks caused by the TCP-based legacy architecture, we have designed a new architecture – referred to as PS-Web – that implements novel strategies to reduce the power consumption while accessing Web services. Specifically, we operate at the transport layer and fully exploit information about the application behavior. Thus, ours is an application-dependent approach, but the system has been designed in a modular way and can easily be adapted to other network applications as well. Furthermore, our system requires no modification to the Web application. The PS-Web architecture is based on the Indirect-TCP model and, as such, it isolates the wireless network from the wired network, protecting the former from possible congestion in the latter. Furthermore, it makes extensive use of pre-fetching to improve the performance: Web pages are first stored at the Access Point and then downloaded to the mobile host at the maximum rate allowed by the wireless link. Finally, it allows the mobile host to keep the wireless network interface in the off state during inactivity periods (e.g., user think times). The experimental results obtained on a prototype implementation of the system have shown that the PS-Web is able to save, on average, 70% of the energy with respect to the legacy TCP-based approach, with saving peaks over 80%. More importantly, this energy saving is not obtained at the cost of a degradation in the QoS perceived by the user. In fact, the experimental results have shown that the additional delay in transferring a Web document introduced by the PS-Web with respect to the legacy TCP-based approach is, on average, below 1.5 s. We are currently working on PS-Web in order to refine the estimation of the file transfer time. This is very important, since a better estimation allows minimizing the number of times the mobile host's network interface is unnecessarily turned on. Also, we are comparing the performance of our application-dependent approach with that of application-independent solutions that operate without any preliminary information about the application behavior.
Acknowledgements. The authors wish to express their gratitude to Paul Barford for providing the SURGE traffic generator used in the experiments. Also many thanks to Mohan Kumar and Silvia Giordano for giving the opportunity to use Web servers at the University of Texas at Arlington and EPFL (Lausanne, Switzerland), respectively.
References

1. M. Arlitt, C. Williamson, "Internet Web Servers: Workload Characterization and Performance Implications", IEEE/ACM Transactions on Networking, Vol. 5, No. 5, pp. 631-645, October 1997.
2. A. Bakre, B. R. Badrinath, "Implementation and Performance Evaluation of Indirect TCP", IEEE Transactions on Computers, Vol. 46, No. 3, March 1997.
3. P. Barford and M. Crovella, "Generating Representative Web Workloads for Network and Server Performance Evaluation", Proceedings of ACM SIGMETRICS '98, Madison, WI, pp. 151-160, June 1998.
4. P. Barford, A. Bestavros, A. Bradley and M. Crovella, "Changes in Web Client Access Patterns", World Wide Web Journal, Special Issue on Characterization and Performance Evaluation, 1999.
5. M. Crovella and A. Bestavros, "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes", IEEE/ACM Transactions on Networking, Vol. 5, No. 6, pp. 835-846, December 1997.
6. C. Cunha, A. Bestavros and M. Crovella, "Characteristics of WWW Client-Based Traces", Technical Report TR-95-010, Boston University Department of Computer Science, April 1995.
7. G. H. Forman, J. Zahorjan, "The Challenges of Mobile Computing", Technical Report, University of Washington, March 1994.
8. D. P. Helmbold, D. E. Long, B. Sherrod, "A Dynamic Disk Spin-down Technique for Mobile Computing", Proceedings of the Second Annual ACM International Conference on Mobile Computing and Networking, NY, pp. 130-142, November 1996.
9. IEEE Standard for Wireless LAN - Medium Access Control and Physical Layer Specification, P802.11, November 1997.
10. T. Imielinski, S. Vishwanathan, B. R. Badrinath, "Power Efficient Filtering of Data on Air", Proc. of the EDBT, Cambridge, England, March 1994.
11. T. Imielinski, B. R. Badrinath, "Wireless Computing", Communications of the ACM, Vol. 37, No. 10, October 1994.
12. R. Kravets and P. Krishnan, "Power Management Techniques for Mobile Communication", Proceedings of the Fourth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom '98).
13. G. Anastasi, M. Conti, W. Lapenna, "Power Saving Policies for Wireless Access to TCP/IP Networks", Proceedings of the 8th IFIP Workshop on Performance Modelling and Evaluation of ATM and IP Networks (IFIP ATM&IP 2000), Ilkley (UK), July 17-19, 2000.
14. J. R. Lorch, A. J. Smith, "Scheduling Techniques for Reducing Processor Energy Use in MacOS", ACM/Baltzer Wireless Networks, 1997, pp. 311-324.
15. J. R. Lorch and A. J. Smith, "Software Strategies for Portable Computer Energy Management", IEEE Personal Communications, June 1998, pp. 60-73.
16. M. Othman, S. Hailes, "Power Conservation Strategy for Mobile Computers Using Load Balancing", ACM Mobile Computing and Communication Review, Vol. 2, No. 1, January 1998, pp. 44-50.
17. A. Passarella, "Un'architettura power saving per l'accesso al Web da computer mobile" (A power-saving architecture for Web access from mobile computers), Laurea Thesis, University of Pisa, October 2001 (in Italian).
18. A. Rudenko, P. Reiher, G. J. Popek, G. H. Kuenning, "Saving Portable Computer Battery Power through Remote Process Execution", ACM Mobile Computing and Communication Review, Vol. 2, No. 1, January 1998, pp. 19-26.
19. M. Rulnick and N. Bambos, "Mobile Power Management for Wireless Communication Networks", ACM/Baltzer Wireless Networks, Vol. 3, No. 1, March 1996.
20. S. Sheng, A. Chandrakasan, R. W. Brodersen, "A Portable Multimedia Terminal", IEEE Communications Magazine, December 1992.
21. M. Stemm and R. H. Katz, "Measuring and Reducing Energy Consumption of Network Interfaces in Hand-Held Devices", Proc. 3rd International Workshop on Mobile Multimedia Communication, Princeton, NJ, September 1996.
22. M. Weiser, B. Welch, A. Demers, S. Shenker, "Scheduling for Reducing CPU Energy", USENIX Association, First Symposium on Operating System Design and Implementation, Monterey, CA, November 1994.
23. M. Zorzi and R. R. Rao, "ARQ Error Control on Fading Mobile Radio Channels", accepted for publication in IEEE Trans. Veh. Tech.; also in Proc. IEEE ICUPC '95, pp. 211-215, November 1995.
24. M. Zorzi and R. R. Rao, "Energy Constrained Error Control for Wireless Channels", Proceedings of IEEE GLOBECOM '96, pp. 1411-1416, 1996.
A Resource/Connection Management Scheme for HTTP Proxy Servers

Takuya Okamoto¹, Tatsuhiko Terai¹, Go Hasegawa², and Masayuki Murata²

¹ Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
{tak-okmt, terai}@ics.es.osaka-u.ac.jp
² Cybermedia Center, Osaka University, 1-30 Machikaneyama, Toyonaka, Osaka 560-0043, Japan
{hasegawa, murata}@cmc.osaka-u.ac.jp
Abstract. Although many research efforts have been devoted to network congestion in the face of increasing Internet traffic, there has been little concern about improving the performance of Internet hosts, in spite of the projection that the bottleneck is now shifting from the network to the hosts. We have previously proposed SSBT (Scalable Socket Buffer Tuning), which is intended to improve the performance of Web servers by managing their resources effectively and fairly, and we have validated its effectiveness through simulation and implementation experiments. In the current Internet, however, a significant fraction of Web document transfer requests pass through HTTP proxy servers. Accordingly, in this paper we propose a new resource management scheme for proxy servers to improve their performance and to reduce the Web document transfer time via the proxy servers. Our proposed scheme has the following two components. One is an enhanced E-ATBT, an enhanced version of our previous SSBT for proxy servers that takes account of the different characteristics of TCP connections. The other is a scheme that manages persistent TCP connections at proxy servers to prevent newly arriving TCP connections from being rejected due to a lack of resources. We validate the effectiveness of our proposed scheme through simulation experiments, and confirm that it can manage proxy server resources effectively.
1 Introduction

With the rapid growth in Internet users, many research efforts have been directed at avoiding and dissolving network congestion in the face of increasing network traffic. However, there has been little concern about improving the performance of Internet hosts, in spite of the projection that the performance bottleneck is now shifting from the network to the endhosts. In [1], we proposed SSBT (Scalable Socket Buffer Tuning), which is intended to improve the performance of Web servers by managing their resources effectively and fairly. SSBT has two major components: the E-ATBT (Equation-based Automatic TCP Buffer Tuning) and SMR (Simple Memory-copy Reduction) schemes. In E-ATBT, we maintain an 'expected' throughput value for each active TCP connection, determined by an analytic estimation [2]. It is characterized by the packet loss ratio, RTT (Round
Trip Time), and RTO (Retransmission Time Out), which are easily monitored by a sender host. The send socket buffer is then assigned to each connection based on its expected throughput, with consideration of max-min fairness among connections. The SMR scheme provides a set of socket system calls that reduce the number of memory-copy operations at the sender host in TCP data transfer. The SMR scheme is similar to other schemes [3,4], but it is simpler to implement.

In the current Internet, many requests for Web document transfer go via HTTP proxy servers [5]. Since proxy servers are usually provided by ISPs (Internet Service Providers) for their customers, such proxy servers must accommodate a large number of the customers' HTTP accesses simultaneously. Furthermore, the proxy servers must handle both upward TCP connections (from the proxy server to Web servers) and downward TCP connections (from the client hosts to the proxy server). Therefore, it is likely that the proxy server becomes the bottleneck in Web document transfer, even when both the network bandwidth and the Web server performance are large enough. That is, to reduce the Web document transfer time, a performance enhancement of the proxy servers should be considered next.

In this paper, we first point out several problems in handling TCP connections at the HTTP proxy server. One is the assignment of the socket buffer to TCP connections at the proxy server. When a TCP connection is not assigned a proper size of send/receive socket buffer according to its throughput, the assigned socket buffer may be left unused or may be insufficient, which results in a waste of the socket buffer. Another problem is the management of persistent TCP connections, which tend to waste the resources of a busy proxy server. When a proxy server accommodates many persistent TCP connections without any effective management, its resources remain assigned to those connections whether they are actually 'active' or not. New TCP connections then cannot be established because the server resources run short.

We propose a new resource management scheme for proxy servers to resolve these problems, and thereby to reduce the Web document transfer time via the proxy servers. Our proposed scheme has the following two features. One is an enhanced E-ATBT, an enhanced version of our previous E-ATBT for proxy servers. Unlike Web servers, the proxy server must handle both upward and downward TCP connections and behave as a client host to obtain Web documents and in-line images from Web servers. We therefore enhance E-ATBT to effectively handle the dependency between upward and downward TCP connections and to assign the receive socket buffer size dynamically. The other is a resource management scheme that can prevent newly arriving TCP connections from being rejected due to a lack of resources for establishing them on the proxy server. It involves the management of the persistent TCP connections provided by HTTP/1.1. A persistent connection avoids the overhead of TCP's three-way handshake and thus reduces the HTTP document transfer time. However, when a persistent TCP connection remains unused until it is closed by the timeout mechanism, the resources assigned to it are wasted. The proposed scheme intentionally tries to close persistent connections when the resources of the proxy server run short.
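A sketch of the E-ATBT idea described above: estimate each connection's throughput from (p, RTT, RTO) with a steady-state TCP model, size its send socket buffer as throughput × RTT, and share the available memory max-min fairly. The throughput formula below is the well-known approximation of [2]; the fairness loop and the data layout are illustrative, not the exact SSBT algorithm.

    from math import sqrt

    def expected_throughput(p, rtt, rto, mss=1460):
        # Steady-state TCP throughput approximation (bytes/s) from loss
        # rate p (> 0), round-trip time rtt and timeout rto (seconds).
        denom = (rtt * sqrt(2.0 * p / 3.0)
                 + rto * min(1.0, 3.0 * sqrt(3.0 * p / 8.0))
                   * p * (1.0 + 32.0 * p * p))
        return mss / denom

    def eatbt_buffers(conns, total_buf):
        # Ideal buffer = estimated throughput * RTT; any shortage is
        # resolved by a max-min fair share of the total buffer memory.
        need = {c["id"]: expected_throughput(c["p"], c["rtt"], c["rto"])
                         * c["rtt"]
                for c in conns}
        alloc = {cid: 0.0 for cid in need}
        left = float(total_buf)
        while left > 1e-9 and any(alloc[cid] < need[cid] for cid in need):
            hungry = [cid for cid in need if alloc[cid] < need[cid]]
            share = left / len(hungry)
            for cid in hungry:
                give = min(share, need[cid] - alloc[cid])
                alloc[cid] += give
                left -= give
        return alloc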
[Figure 1 shows client hosts requesting documents from the HTTP proxy server, which either delivers cached documents on a hit or, on a miss, fetches them from the original Web servers across the Internet; upward TCP connections run from the proxy to the Web servers, and downward connections from the clients to the proxy.]
Fig. 1. HTTP Proxy Server
2 Background
2.1 Proxy Server
An HTTP proxy server works as an agent for Web client hosts that request Web documents. When it receives a document transfer request from a Web client host, it obtains the requested document from the original Web server on behalf of the client and delivers it to the client. It also caches the obtained documents; when other client hosts request the same documents, it serves the cached copies, which greatly reduces document transfer time. For example, it is reported in [6] that using Web proxy servers reduces document transfer time by up to 30%. Moreover, when the cache is hit, the document is transferred without any connection establishment to the Web server, so congestion within the network and at the Web servers is also reduced.
The proxy server accommodates a large number of connections, both from Web client hosts and to Web servers, as depicted in Figure 1. This distinguishes it from a Web server. The proxy server behaves as a sender host for a downward TCP connection (between the client host and the proxy server) and as a receiver host for an upward TCP connection (between the proxy server and the Web server). Therefore, if resource management is not configured appropriately at the proxy server, document transfer time increases even when the network is not congested and the load at the Web server is low. That is, careful and effective resource management is a critical issue in improving proxy server performance. In the current Internet, however, most proxy servers, including those in [7,8], lack such considerations.
The HTTP proxy server resources that we focus on in this paper are the mbufs, file descriptors, control blocks, and socket buffer. These are closely related to the performance of the TCP connections that transfer Web documents. Mbufs, file descriptors, and control blocks are resources for TCP connections. The amount of these resources cannot be changed dynamically according to the proxy server's requirements, since it is determined when the system kernel is booted or when the proxy server is activated [9]. Therefore, when even one of these resources runs out, newly arriving TCP connections for Web document
transfer must wait for other connections to be closed and their assigned resources to be released. The socket buffer is used for data transfer between user applications and the sender/receiver TCP. When a user application transmits data over TCP, the data is copied to the send socket buffer and subsequently copied to the mbufs (or mbuf clusters). The size of the assigned socket buffer is a key issue for effective TCP data transfer. Suppose that a server host is sending TCP data to two client hosts: one over a 64 Kbps dial-up link (client A) and the other over a 100 Mbps LAN (client B). If the server host assigns send socket buffers of equal size to both clients, the assigned buffer is likely too large for client A and too small for client B, because of the difference in the capacities (more strictly, the bandwidth-delay products) of their connections. For an effective buffer allocation to both client hosts, a compromise in buffer usage must be found. We proposed the E-ATBT scheme [1], which assigns the send socket buffer to each TCP connection dynamically according to its throughput as estimated from observed network parameters, namely the packet loss ratio, RTT, and RTO. That is, the sender host calculates the average window size of each TCP connection from these three parameters, based on the analysis in [10]. The throughput of the TCP connection is then obtained by accounting for the performance degradation caused by TCP's retransmission timeouts. Finally, we estimate the required send socket buffer size as the product of the estimated throughput and the RTT of the TCP connection. By taking the observed network parameters into account, the resources of the Web server are appropriately allocated to connections in various network environments. E-ATBT is applicable to HTTP proxy servers, since proxy servers also accommodate many TCP connections issued by clients in various environments. However, since proxy servers have a dependency between upward and downward TCP connections, a straightforward application of E-ATBT is insufficient. Furthermore, since the proxy server behaves as a receiver host for upward TCP connections to Web servers, we also have to consider a management scheme for the receive socket buffer, which was not considered in the original E-ATBT.
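To make the buffer-assignment step concrete, the following minimal Python sketch derives a per-connection buffer demand from the monitored parameters and allocates a shared send buffer in a max-min fair manner. The simplified square-root throughput formula is only a stand-in for the full analytic model of [2,10], and the function names, the fixed MSS, and the water-filling loop are our illustration, not the implementation of [1].

```python
import math

MSS = 1460  # assumed maximum segment size in bytes (illustrative)

def expected_throughput(loss_ratio, rtt):
    """Rough TCP throughput estimate; a simplified square-root stand-in
    for the full analytic model of [2,10]."""
    if loss_ratio <= 0:
        return float("inf")              # no observed loss: no analytic limit
    return (MSS / rtt) * math.sqrt(3.0 / (2.0 * loss_ratio))

def assign_send_buffers(conns, total_buffer):
    """Max-min fair assignment of a shared send socket buffer.

    conns: list of dicts with keys 'loss' and 'rtt'.
    Each connection ideally needs throughput * RTT bytes (its estimated
    bandwidth-delay product); the smallest demand is satisfied first.
    """
    demand = [expected_throughput(c["loss"], c["rtt"]) * c["rtt"] for c in conns]
    order = sorted(range(len(conns)), key=lambda i: demand[i])
    alloc = [0.0] * len(conns)
    remaining = float(total_buffer)
    while order:
        share = remaining / len(order)    # equal share of what is left
        i = order.pop(0)
        alloc[i] = min(demand[i], share)  # water-filling: cap at the demand
        remaining -= alloc[i]
    return alloc
```

In the water-filling loop, the smallest unsatisfied demand is granted first, so no connection receives more buffer than it can use, while the leftover space flows to the connections with larger demands.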
2.2 Persistent TCP Connection of HTTP/1.1
In recent years, many Web servers and client hosts (namely, Web browsers) have supported the persistent connection option, one of the important functions of HTTP/1.1 [11]. In the older version of HTTP (HTTP/1.0), the TCP connection between the server and client hosts is closed as soon as the document transfer completes. However, since Web documents contain many in-line images, HTTP/1.0 must establish TCP connections many times to download them. This significantly increases document transfer time, since the average size of Web documents at several measured Web servers is only about 10 KBytes [12,13]. The three-way handshake in each TCP connection establishment makes the situation worse. With the persistent connections of HTTP/1.1, on the other hand, the server preserves the state of the TCP connection, including the congestion window size, RTT, RTO, ssthresh, and so on, when it finishes the document transfer, and re-uses the connection and its state when other documents are transferred over the same HTTP session. The three-way handshake can then be omitted. However, since it keeps the TCP connection
established whether the connection is active (in use for packet transfer) or not, the resources at the server are wasted while the TCP connection is inactive. A significant portion of the resources may therefore be wasted keeping persistent TCP connections at a proxy server that accommodates many TCP connections. One solution to this problem is simply to abandon HTTP/1.1 and use HTTP/1.0, since HTTP/1.0 closes the TCP connection when the document transfer finishes. However, HTTP/1.1 has other elegant mechanisms, such as pipelining and content negotiation [11]. We should therefore develop an effective resource management scheme under HTTP/1.1. Our solution is for the proxy server to aggressively close persistent TCP connections that are needlessly wasting proxy resources as those resources run short.
3 Algorithm and Implementation Issues
In this section, we propose a new resource management scheme suited to HTTP proxy servers, consisting of a new management scheme for the send/receive socket buffer and a handling algorithm for persistent TCP connections.
3.1 New Socket Buffer Management Method
Handling the Relation between Upward and Downward Connections
An HTTP proxy server relays a document transfer request to a Web server on behalf of a Web client host. Thus, there is a close relation between an upward TCP connection (from the proxy server to the Web server) and a downward TCP connection (from the client to the proxy server). That is, the difference between the throughputs of the two connections should be taken into account when socket buffers are assigned to them. For example, when the throughput of a certain downward TCP connection is larger than that of the other concurrent downward TCP connections, E-ATBT assigns that connection a larger socket buffer. However, if the throughput of the corresponding upward TCP connection is low, the send socket buffer assigned to the downward TCP connection is unlikely to be fully utilized. In this case, the unused send socket buffer should be re-assigned to the other concurrent TCP connections having smaller socket buffers, so that their throughputs improve.
There is one problem in realizing the above method. A TCP connection is identified by its control block, tcpcb, in the kernel, but the relation between upward and downward connections is not known to the kernel. Two possible ways to overcome this problem are as follows:
– The proxy server monitors the utilization of the send socket buffers of downward TCP connections, and decreases the assigned buffer size of connections whose send socket buffers are not fully utilized (sketched below).
– When the proxy server sends a document transfer request to the Web server, it attaches information about the relation to the packet header.
The former algorithm requires modifying only the proxy server, whereas the latter requires interaction with the HTTP protocol. At a higher level of abstraction, the two algorithms have the same effect; the latter is more difficult to implement, though it can achieve more precise control.
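A minimal sketch of the monitoring-based alternative follows, assuming each connection reports the buffer size it was assigned and the peak occupancy observed in the last monitoring period; the threshold, shrink factor, and field names are our own choices, not the paper's.

```python
def rebalance_send_buffers(conns, util_threshold=0.8, shrink_factor=0.5):
    """Shrink the send socket buffers of downward connections that do not
    use them fully, and grant the freed space to the fully utilized ones.

    conns: list of dicts with 'assigned' (bytes) and 'peak_used' (bytes
    observed during the last monitoring period).
    """
    freed = 0.0
    busy = []
    for c in conns:
        if c["peak_used"] < util_threshold * c["assigned"]:
            new_size = max(c["peak_used"], shrink_factor * c["assigned"])
            freed += c["assigned"] - new_size
            c["assigned"] = new_size          # under-utilized: shrink
        else:
            busy.append(c)                    # fully utilized: wants more space
    if busy:
        bonus = freed / len(busy)
        for c in busy:
            c["assigned"] += bonus            # redistribute the freed buffer
    return conns
```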
Control of the Receive Socket Buffer
Most past research assumed that a receiver host has a sufficiently large receive socket buffer, on the grounds that the performance bottleneck of data transfer lies not at the endhosts but within the network. Accordingly, many OSs assign a small receive socket buffer to each TCP connection; for example, the default receive socket buffer size in the FreeBSD system is 16 KBytes. This is now far too small [14], as network bandwidth in the current Internet has increased dramatically and server performance keeps growing. To avoid being limited by the receive socket buffer, the receiver host should adjust its receive socket buffer size to the congestion window size of the sender host. This can be done by monitoring the utilization of the receive socket buffer, or by adding information about the window size to the data packet header, as described above. In the simulations in the next section, we assume that the proxy server can obtain complete information about the required receive socket buffer sizes of upward TCP connections and control them accordingly.
3.2 Connection Management
As explained in Subsection 2.2, careful treatment of persistent TCP connections at the proxy server is necessary for effective usage of its resources. We propose a new management scheme for persistent TCP connections at the proxy server that considers the amount of remaining resources. The key idea is as follows. When the load on the proxy server is low and the remaining resources are plentiful, the server tries to keep as many TCP connections as possible. On the other hand, when the resources of the proxy server are about to run short, the server closes persistent TCP connections and frees their resources, so that the released resources can be used for new TCP connections.
To realize this control, the remaining resources of the proxy server must be monitored. The resources for establishing TCP connections at the proxy server include the mbufs, file descriptors, and control blocks. The total amount of these resources cannot be changed dynamically after the kernel is booted, but their total and remaining amounts can be observed within the kernel [9]. We therefore introduce threshold values for the utilization of these resources; if the utilization level of any of them, calculated by the kernel at regular intervals, reaches its threshold, the proxy server starts closing persistent TCP connections and releasing the resources assigned to them.
The proxy server maintains persistent TCP connections as follows (see also Figure 2). When a TCP connection becomes idle, the proxy server records the connection and the current time. For fast lookup of the record, we use a hash table whose key is the combination of the source/destination IP addresses and the port numbers of the TCP connection. We also introduce a list, called the time scheduling list, which keeps the persistent connections ordered by how long they have been persistent. When a new persistent TCP connection is registered in the hash table, it is added at the end of the time scheduling list, so that the proxy server can select the oldest persistent TCP connections to be closed. Each entry in the hash table holds the socket file descriptor of the corresponding TCP connection, which is used later to identify the connection.
[Figure 2 shows the hash table, keyed by (IP address, port number), whose entries hold the socket file descriptor of each idle connection and are chained into the time scheduling list in order of the time at which they became persistent.]
Fig. 2. Management Scheme of Persistent TCP Connections
When the proxy server closes some persistent TCP connections, it selects them from the top of the time scheduling list, so that the oldest persistent connections are closed first. When a persistent TCP connection in the hash table becomes active again before being closed, or when it is closed by the expiration of the persistent timer, the proxy server removes the corresponding entry from the hash table and the time scheduling list. All operations on persistent TCP connections can thus be performed with simple pointer manipulations and hash operations. For further effective resource usage, we also add a mechanism that gradually decreases the amount of resources assigned to a persistent TCP connection once it becomes inactive. The socket buffer is not necessary at all while the TCP connection is idle. We therefore gradually decrease the send/receive socket buffer size of persistent TCP connections, reflecting the fact that the longer a connection remains idle, the more likely it is to be closed eventually.
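The following sketch restates these data structures in Python: an ordered dictionary plays the role of the hash table chained into the time scheduling list (insertion order equals idle order), and the decay routine implements the gradual buffer decrease just described. Class and method names are illustrative, not taken from the paper's kernel implementation.

```python
import time
from collections import OrderedDict

class PersistentConnManager:
    """Sketch of the hash table and time scheduling list described above.
    An OrderedDict keyed by (source IP, source port, destination IP,
    destination port) keeps idle connections in the order they became
    persistent, so the oldest entry is always at the front.
    """

    def __init__(self):
        self.idle = OrderedDict()   # key -> {'sock': fd, 'since': t, 'buf': bytes}

    def mark_idle(self, key, sock_fd, buf_size):
        self.idle[key] = {"sock": sock_fd, "since": time.time(), "buf": buf_size}

    def mark_active(self, key):
        self.idle.pop(key, None)    # connection reused: drop its record

    def reclaim(self, n_needed):
        """Close the n_needed oldest persistent connections when a
        resource utilization threshold is reached."""
        closed = []
        for _ in range(min(n_needed, len(self.idle))):
            _key, entry = self.idle.popitem(last=False)  # front = oldest
            closed.append(entry["sock"])                 # caller closes these
        return closed

    def decay_buffers(self, min_buf=1024):
        """Halve the socket buffer of every idle connection (invoked
        periodically, e.g. every 3 sec as in Section 4)."""
        for entry in self.idle.values():
            entry["buf"] = max(min_buf, entry["buf"] // 2)
```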
4 Simulation Experiments
In this section, we investigate the effectiveness of our proposed scheme through simulation experiments using ns-2 [15]. Figure 3 shows the simulation model. The bandwidths of the links between the client hosts and the HTTP proxy server, and those between the proxy server and the Web servers, are all set to 100 Mbps. To examine the effect of various network conditions, we set the packet loss probability of each link to 0.0001, 0.0005, 0.001, 0.005, or 0.01; that is, one-fifth of the links are assigned each of the above values. The propagation delay of each link between a client host and the proxy server ranges from 10 msec to 100 msec, and that between the proxy server and a Web server from 10 msec to 200 msec; the propagation delay of each link is chosen randomly from these ranges. The number of Web servers is fixed at 50, and the number of client hosts is varied among 50, 100, 200, and 500. We ran a 300-second simulation for each experiment.
In the simulation experiments, each client host selects one of the Web servers at random and generates a document transfer request via the proxy server. The distribution of the requested document size is taken from [12]: a combination of a log-normal distribution for small documents and a Pareto distribution for large
Fig. 3. Simulation Model
ones. Note that since we focus on the resource and connection management of proxy servers, we do not consider detailed caching behavior, including cache replacement algorithms. Instead, we set the hit ratio, Hr, to 0.5. Using Hr, the proxy server decides either to transfer the requested document to the client directly, or to deliver it after downloading it from the Web server. The proxy server has 3200 KBytes of socket buffer, which it assigns as send/receive socket buffers to TCP connections. This means that the original scheme can establish at most 200 TCP connections concurrently, since it assigns a fixed 16 KBytes of send/receive socket buffer to each TCP connection.
In what follows, we compare the performance of the following four schemes: scheme (1), which uses none of the enhanced algorithms in this paper; scheme (2), which uses E2-ATBT; scheme (3), which uses E2-ATBT and the connection management scheme described in Subsection 3.2; and scheme (4), which uses E2-ATBT and the connection management scheme with the algorithm that gradually decreases the socket buffer assigned to persistent TCP connections. Note that for schemes (3) and (4), we do not explicitly model the amount and the threshold value of each resource, as explained in Subsection 3.2. Instead, we introduce Nmax, the maximum number of connections that can be established simultaneously, to model the limitation of the proxy server resources. In schemes (1) and (2), newly arriving requests are rejected when the number of TCP connections at the proxy server reaches Nmax. Schemes (3) and (4), on the other hand, forcibly terminate some of the persistent TCP connections not currently used for document transfer and establish the new TCP connections in their place. For scheme (4), we exclude persistent TCP connections from the calculation process of the E2-ATBT algorithm, and halve their assigned socket buffer every 3 sec, down to a minimum of 1 KByte.
4.1 Evaluation of Proxy Server Performance
We first investigate the performance of the proxy server, defined here as the total size of the documents transferred in both directions by the proxy server during the 300-second simulation. In Figure 4, we plot the performance of the proxy server as a function of the number of client hosts.
[Figure 4 plots the total transfer size (MBytes) against the number of client hosts (50, 100, 200, 500) for schemes (1)-(4).]
Fig. 4. Simulation Result: Proxy Server Performance
Here, we set Nmax to 200. It is clear from this figure that the performance of the original scheme (scheme (1)) decreases for larger numbers of client hosts. This is because, when the number of client hosts exceeds Nmax, the proxy server rejects some document transfer requests even though most of the Nmax established TCP connections are idle and do nothing but waste proxy resources. The results for scheme (2) in Figure 4 show that E2-ATBT improves proxy server performance regardless of the number of client hosts. However, the performance still degrades as the number of client hosts increases. This means that E2-ATBT cannot solve the problem of 'idle' persistent TCP connections, and that a connection management scheme is necessary to overcome it. We can also see that scheme (3) significantly improves proxy server performance, especially when the number of client hosts is large. This is because, when the proxy server cannot accept all connections from the client hosts (the cases with more than 200 client hosts in Figure 4), scheme (3) closes idle TCP connections so that newly arriving TCP connections can be established. As a result, the number of TCP connections actually transferring documents increases substantially. Scheme (4) also improves proxy server performance, especially when the number of client hosts is small, as shown in Figure 4. For larger numbers of client hosts, however, there is little additional improvement. This can be explained as follows. When the number of client hosts is small, most persistent TCP connections at the proxy server are kept established, since the proxy server has enough resources to accommodate 50 client hosts; the socket buffer assigned to the persistent TCP connections can then be effectively re-assigned to other active TCP connections by scheme (4). When the number of client hosts is large, on the other hand, persistent TCP connections are likely to be closed before scheme (4) begins decreasing their assigned socket buffers, so scheme (4) has no persistent connections left to act on.
[Figure 5 shows four log-log panels of response time (sec) versus document size (Bytes) for schemes (1)-(4), with (a) 50, (b) 100, (c) 200, and (d) 500 client hosts.]
Fig. 5. Simulation Result: Response Time
4.2 Evaluation of Response Time
We next evaluate the response time of document transfer, which corresponds to the user-perceived performance. We define the response time as the time from when a client host sends a document transfer request to when it receives the requested document; it includes the waiting time for connection establishment. Figure 5 shows the simulation results, plotting the response time as a function of document size for the four schemes. From this figure, we can clearly observe that the response time improves greatly when our proposed scheme is applied, especially when the number of connections is large (Figure 5 (b)-(d)). When the number of client hosts is 50, however, the proposed scheme does not help improve the response time. This is because the server resources are sufficient to accommodate 50 client hosts, so all TCP connections are established immediately at the proxy server; the response time therefore cannot be improved much further. Note that since E2-ATBT improves the throughput of TCP data transfer to some degree, the proxy server performance is still improved, as shown in the previous subsection.
Although schemes (3) and (4) both improve the response time considerably, there is little difference between the two. This can be explained as follows. Scheme (4) decreases the socket buffer assigned to persistent TCP connections and re-assigns it to other active TCP connections. Although the throughput of the active TCP connections improves, the effect on the response time is very small compared with the effect of introducing scheme (3). Nevertheless, scheme (4) is worth using at the proxy server, since it benefits the proxy server performance, as shown in Figure 4. From all of the above simulation results, we conclude that scheme (4), which includes all the enhanced mechanisms proposed in this paper, is the best choice for improving both the proxy server performance and the response time perceived by client hosts, regardless of the number of client hosts.
5 Conclusion
In this paper, we have proposed a new resource management scheme for HTTP proxy servers. Our proposed scheme has two components. One is an enhanced E-ATBT, which manages the socket buffer by considering the relation between upward and downward TCP connections, one of the distinguishing characteristics of proxy servers; it also manages the receive socket buffer, which was not considered in the original E-ATBT. The other is a scheme for managing TCP connections at the proxy server: it maintains persistent TCP connections and aggressively closes them when resources run short. We have evaluated our scheme through simulation experiments and confirmed that it improves proxy server performance and reduces the document transfer time experienced by client hosts. We are now implementing the proposed scheme in an actual proxy server and plan to evaluate it through experiments on a real network. We also plan to incorporate other kinds of Web server and proxy server resources into our resource management scheme. For example, CPU processing time for executing CGI programs should be considered, as it is one of the bottlenecks of busy Web servers.
Acknowledgements. This work was partly supported by the Research for the Future Program of the Japan Society for the Promotion of Science under the project "Integrated Network Architecture for Advanced Multimedia Application Systems," the Telecommunication Advancement Organization of Japan under the project "Global Experimental Networks for Information Society Project," and the "Research on High-performance WWW Server for the Next-Generation Internet" program of the Telecommunications Advancement Foundation.
References
1. G. Hasegawa, T. Terai, T. Okamoto, and M. Murata, "Scalable socket buffer tuning for high-performance Web servers," in Proceedings of IEEE ICNP 2001, Nov. 2001.
2. G. Hasegawa, T. Matsuo, M. Murata, and H. Miyahara, "Comparisons of packet scheduling algorithms for fair service among connections on the Internet," in Proceedings of IEEE INFOCOM 2000, Mar. 2000.
3. A. Gallatin, J. Chase, and K. Yocum, "Trapeze/IP: TCP/IP at near-gigabit speeds," in Proceedings of the 1999 USENIX Technical Conference, June 1999.
4. P. Druschel and L. Peterson, "Fbufs: A high-bandwidth cross-domain transfer facility," in Proceedings of the Fourteenth ACM Symposium on Operating Systems Principles, pp. 189–202, Dec. 1993.
5. Proxy Survey, available at http://www.delegate.org/survey/proxy.cgi.
6. A. Feldmann, R. Caceres, F. Douglis, G. Glass, and M. Rabinovich, "Performance of Web proxy caching in heterogeneous bandwidth environments," in Proceedings of IEEE INFOCOM '99, pp. 107–116, 1999.
7. Squid Home Page, available at http://www.squid-cache.org/.
8. Apache proxy module mod_proxy, available at http://httpd.apache.org/docs/mod/mod_proxy.html.
9. M. K. McKusick, K. Bostic, M. J. Karels, and J. S. Quarterman, The Design and Implementation of the 4.4 BSD Operating System. Reading, Massachusetts: Addison-Wesley, 1999.
10. J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP throughput: A simple model and its empirical validation," in Proceedings of ACM SIGCOMM '98, pp. 303–314, Aug. 1998.
11. R. Fielding, J. Gettys, J. Mogul, H. Frystyk, and T. Berners-Lee, "Hypertext transfer protocol – HTTP/1.1," Request for Comments (RFC) 2068, Jan. 1997.
12. P. Barford and M. Crovella, "Generating representative Web workloads for network and server performance evaluation," in Proceedings of the 1998 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, pp. 151–160, July 1998.
13. M. Nabe, M. Murata, and H. Miyahara, "Analysis and modeling of World Wide Web traffic for capacity dimensioning of Internet access lines," Performance Evaluation, vol. 34, pp. 249–271, Dec. 1999.
14. M. Allman, "A Web server's view of the transport layer," ACM Computer Communication Review, vol. 30, pp. 10–20, Oct. 2000.
15. The VINT Project, "UCB/LBNL/VINT network simulator – ns (version 2)," available at http://www.isi.edu/nsnam/ns/.
Measurement-Based Modeling of Internet Round-Trip Time Dynamics Using System Identification
Hiroyuki Ohsaki1, Mitsushige Morita2, and Masayuki Murata1
1 Cybermedia Center, Osaka University, 1-30 Machikaneyama, Toyonaka, Osaka, Japan
{oosaki, murata}@cmc.osaka-u.ac.jp
2 Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka, Japan
[email protected]
Abstract. Understanding the end-to-end packet delay dynamics of the Internet is of crucial importance, since it directly affects the QoS (Quality of Services) of various applications and enables us to design efficient congestion control mechanisms. In our previous studies, we measured round-trip times in the Internet and modeled their dynamics with the ARX (Auto-Regressive eXogenous) model using system identification. As input and output data for the ARX model, we used the packet inter-departure time from a source host and the corresponding round-trip time variation measured by the source host. In the current paper, to improve the model accuracy, we instead use the packet transmission rate from the source host and the average round-trip time measured by the source host. Using input and output data measured in working LAN and WAN environments, we model the round-trip time dynamics by determining the coefficients of the ARX model through system identification. Through numerical examples, we show that in a LAN environment the round-trip time dynamics can be accurately modeled by the ARX model, and that in a WAN environment they can be accurately modeled when the bottleneck link is shared by a small number of users.
1 Introduction
In the past decade, the Internet has grown explosively in scale as well as in population since the introduction of the WWW (World Wide Web). In January 1997, only 16 million computers were connected to the Internet, but this number had jumped to more than 56 million by July 1999 [1]. Because of this changing nature, nobody knows the current network topology of the Internet. Such uncertainty makes it very difficult, but also challenging, to analyze and understand the end-to-end packet behavior of the Internet. Understanding the end-to-end packet delay dynamics of the Internet is of crucial importance since (1) it directly affects the QoS (Quality of Services) of various applications, and (2) it enables us to design efficient congestion control mechanisms for both realtime and non-realtime applications. For non-realtime applications, a delay-based approach to congestion control, rather than the loss-based approach used in
TCP (Transmission Control Protocol), has been proposed (e.g., [2,3]). The main advantage of such a delay-based approach is that, if properly designed, packet losses can be prevented by anticipating impending congestion from increasing packet delays. In [4,5], we proposed a novel approach for modeling the end-to-end packet delay dynamics of the Internet using system identification, regarding the network, as seen by a specific source host, as a dynamic SISO (Single-Input Single-Output) system. We modeled the round-trip time dynamics using the ARX (Auto-Regressive eXogenous) model. In those studies, the input to the system was the packet inter-departure time from the source host, and the output was the round-trip time variation between two adjacent packets. Using measured data obtained in wired and wireless LAN environments, we investigated how accurately the ARX model can capture the round-trip time dynamics of the Internet. We found that the ARX model can capture the round-trip time dynamics when the network is moderately congested, but that it fails when the network is not congested or the measured round-trip time is noisy.
This paper is a direct extension of [4,5], with three major changes: (1) a refined definition of the input and output data to improve the model accuracy, (2) experiments in LAN and WAN environments, and (3) the use of two model validation methods, in the time domain and in the frequency domain. The first change refines the definition of the input and the output for the ARX model: the input to the system is changed to the instantaneous packet transmission rate from the source host during a fixed sampling interval, and the output to the instantaneous average round-trip time observed by the source host during a fixed sampling interval. In [4,5], the sampling interval was not fixed, since it depended on the packet sending/receiving process at the source host. In this paper, by contrast, the sampling interval is fixed, so the model accuracy is expected to improve, since system identification conventionally assumes a fixed sampling interval. The objective of the second change is to investigate how the model accuracy is related to the network configuration: we collect input and output data for system identification in LAN and WAN environments, and build a model of the round-trip time dynamics for each. The third change evaluates the model accuracy more rigorously, in the frequency domain as well as in the time domain. In [5], the accuracy of the ARX model was evaluated only in the time domain; that is, we compared the simulated outputs of the ARX model (i.e., round-trip times) with the actual round-trip times. In this paper, we also examine the model accuracy in the frequency domain using spectral analysis. Through numerical examples, we show that in a LAN environment the round-trip time dynamics can be accurately modeled by the ARX model, and that in a WAN environment they can be accurately modeled when the bottleneck link is shared by a small number of users.
This paper is organized as follows. In Section 2, a black-box approach for modeling the round-trip time dynamics of the Internet is explained. In Section 3, we discuss several measurement methods for the round-trip time, in particular for collecting input and output data for system identification.
We also explain the three network environments in which the input and output data, used for model identification and model validation, are collected. Section 4 presents measurement and modeling results and discusses how accurately the ARX model captures the round-trip time dynamics in various network configurations. Section 5 concludes the paper with a few remarks.
[Figure 1 depicts the source host sending ICMP Echo Requests through the network layers to the destination host, which returns ICMP Echo Replies; the packet transmission rate is the input and the round-trip time is the output of the black-box system.]
Fig. 1. Modeling round-trip time dynamics as SISO system.
[Figure 2 shows the ARX model with input u(k) (packet transmission rate), output y(k) (round-trip time), and noise e(t) representing other traffic.]
Fig. 2. ARX model for modeling round-trip time dynamics.
2 Black-Box Modeling Using the ARX Model
As depicted in Fig. 1, the network seen by a specific source host, including the underlying protocol layers (e.g., physical, data-link, and network layers), is considered as a black-box. Our goal in this paper is to model a SISO system describing the round-trip time dynamics: i.e., the relation between the packet sending process from the source host and the resulting round-trip times observed at the source host. The effect of other traffic (i.e., packets coming from other hosts) is modeled as noise. As the input to the system, we use the instantaneous packet transmission rate from the source host: i.e., the packet transmission rate during a fixed sampling interval. As the output from the system, we use the instantaneous average round-trip time measured by the source host: i.e., the average round-trip time during a fixed sampling interval.
In this paper, the ARX model is used, and its coefficients are determined using system identification [6]. Figure 2 illustrates the fundamental concept of using the ARX model to capture the round-trip time dynamics. The input to the ARX model is the packet transmission rate from the source host, and the output is the round-trip time measured by the source host. The effect of other traffic (i.e., packets coming from other hosts) is modeled as the noise of the ARX model. Letting u(k) and y(k) be the input and the output at slot k, respectively, the ARX model is defined as

A(q) y(k) = B(q) u(k − nd) + e(k)
A(q) = 1 + a_1 q^(−1) + ... + a_na q^(−na)
B(q) = b_1 + b_2 q^(−1) + ... + b_nb q^(−nb+1)

where e(k) is an unmeasurable disturbance (i.e., noise) and q^(−1) is the delay operator, i.e., q^(−1) u(k) ≡ u(k − 1). The numbers na and nb are the orders of the polynomials, and nd is the delay from the input to the output. The coefficients of the polynomials, a_n and b_n, are the parameters of the ARX model and are to be identified from the input and output data. Refer to [6] for the details of the ARX model and system identification. For compact notation, ζ and θ are introduced as

ζ = [na, nb, nd]
θ = [a_1, ..., a_na, b_1, ..., b_nb]^T
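Given measured sequences u(k) and y(k), the parameter vector θ can be estimated by ordinary least squares. The sketch below is a minimal stand-in for the prediction-error routines of [6]; the paper does not specify its implementation, and the names here are our own.

```python
import numpy as np

def identify_arx(u, y, na, nb, nd):
    """Least-squares estimate of theta = [a_1..a_na, b_1..b_nb] for the
    ARX model A(q) y(k) = B(q) u(k - nd) + e(k)."""
    start = max(na, nb + nd - 1)
    rows, targets = [], []
    for k in range(start, len(y)):
        past_y = [-y[k - i] for i in range(1, na + 1)]  # regressors for a_i
        past_u = [u[k - nd - j] for j in range(nb)]     # regressors for b_j
        rows.append(past_y + past_u)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta
```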
In [5], we defined the input as the packet inter-departure time from the source host and the output as the round-trip time variation measured by the source host. Although the ARX model with this input and output definition can capture the round-trip time dynamics to some extent, the model accuracy is not good, possibly because of the non-fixed sampling interval: a fixed sampling interval is generally assumed in system identification, whereas in [5] the sampling interval depended on the packet sending/receiving process at the source host. In this paper, we therefore use a fixed sampling interval to improve the model accuracy; the input to the system is the packet transmission rate during a fixed sampling interval, and the output is the average round-trip time during a fixed sampling interval. More specifically, the input u(k) and the output y(k) are defined as follows. Let ts(i) be the time at which the ith packet is injected into the network, and tr(i) the time at which the ith ACK packet is received by the source host. We further introduce l(i) as the size of the ith packet including the IP header, and T as the sampling interval. Then, u(k) and y(k) are defined as

u(k) = ( Σ_{i∈φs(k)} l(i) ) / T
y(k) = ( Σ_{i∈φr(k)} (tr(i) − ts(i)) ) / |φr(k)|

where φs(k) (resp. φr(k)) is the set of packet numbers sent (resp. received) during the kth sampling interval; i.e.,

φs(k) ≡ {n : k T ≤ ts(n) < (k + 1) T}
φr(k) ≡ {n : k T ≤ tr(n) < (k + 1) T}
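A small helper, under the above definitions, that bins per-packet timestamps into the series u(k) and y(k); it also applies the fill rule for empty sampling periods used later in Section 3 (the minimum of all past values). Lost packets are marked with None, and all names are illustrative.

```python
def make_io_series(ts, tr, sizes, T, num_slots):
    """Bin per-packet send times ts[i], receive times tr[i] (None if lost)
    and sizes l(i) into u(k) and y(k), following the definitions above."""
    u = [0.0] * num_slots
    rtt_sum = [0.0] * num_slots
    rtt_cnt = [0] * num_slots
    for s, r, l in zip(ts, tr, sizes):
        ks = int(s // T)
        if ks < num_slots:
            u[ks] += l / T               # transmitted volume per second in slot ks
        if r is not None:
            kr = int(r // T)
            if kr < num_slots:
                rtt_sum[kr] += r - s     # accumulate round-trip times
                rtt_cnt[kr] += 1
    y = [rtt_sum[k] / rtt_cnt[k] if rtt_cnt[k] else None for k in range(num_slots)]
    for k in range(num_slots):           # fill undefined slots with past minimum
        if u[k] == 0.0 and k > 0:
            u[k] = min(u[:k])
        if y[k] is None:
            past = [v for v in y[:k] if v is not None]
            y[k] = min(past) if past else 0.0
    return u, y
```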
3 Data Collection Using ICMP Packet
3.1 Measurement Method
To collect input and output data from a real network, it is necessary to send a series of probe packets into the network and measure their resulting round-trip times. For sending a probe packet, one of the following protocols can be used:
– TCP (Transmission Control Protocol)
– UDP (User Datagram Protocol)
– ICMP (Internet Control Message Protocol)
In what follows, we briefly discuss the advantages and disadvantages of these protocols for sending probe packets to collect input and output data, in particular for system identification.
TCP has a feedback-based congestion control mechanism, which controls the packet sending process of a source host according to the congestion status of the network. Since TCP is an ACK-based protocol, it is easy for the source host to measure the round-trip time of each packet. However, because of the feedback-based mechanism, TCP is not suitable for sending probe packets, for two reasons. First, although the input (i.e., the
packet transmission rate) should contain diverse frequencies for system identification purposes, the packet transmission rate of TCP would contain only limited frequencies. Second, although many system identification techniques assume independence between the input and the output, this assumption cannot be satisfied with TCP, since the packet transmission rate depends on past round-trip times.
UDP, in contrast, has no feedback-based control, so the packet transmission rate of UDP can be controlled freely. However, UDP is a one-way protocol; the destination host must perform some procedure so that the round-trip time of each packet can be measured at the sender side. One possible way is to use the ICMP Destination Unreachable message, as in the traceroute program [7]: when a host receives a UDP packet addressed to an unreachable port, it returns an ICMP Destination Unreachable message to the source host, which can therefore measure the round-trip time as the elapsed time between the UDP packet transmission and the receipt of the corresponding ICMP packet. However, as specified in [8], the generation of ICMP Destination Unreachable messages is limited to a low rate, so this approach is not desirable for collecting input and output data for system identification.
ICMP is a protocol for exchanging control messages such as routing information and node failures [9]. Since ICMP has no feedback-based control, the inter-departure time of ICMP packets can be controlled freely. It is also easy to measure the round-trip time at the source host by using ICMP Echo Request and ICMP Echo Reply messages, as in the ping program. Although some network devices rate-limit ICMP packets because of their malicious use [10], such as in DoS (Denial of Service) attacks, many network devices respond to ICMP Echo Request messages without rate limiting. In this paper, we therefore choose ICMP Echo messages as probe packets. More specifically, the source host sends a series of ICMP Echo Request messages to the destination host, and the destination host returns ICMP Echo Reply messages. We have modified the ping program to dynamically change the packet inter-departure time (originally fixed at one second). The destination host copies the payload of the received ICMP Echo Request message into the returning ICMP Echo Reply message; thus, the ICMP Echo Reply packet contains the timestamp placed by the source host at transmission time, which enables precise measurement of the round-trip time at the source host.
Instead of measuring the ICMP Echo Request/Reply sending/receiving times at the source host, a separate measurement host is used (Fig. 3), to achieve reliable measurement even when the source host sends or receives packets at a very high rate. As shown in Fig. 3, the Ether TAP copies all packets carried on the link and sends the copies to the measurement host; that is, all ICMP Echo Request/Reply packets sent from/to the source host are also delivered to the measurement host.
We use an active measurement approach, collecting data by sending probe packets into the network, because we want to know how accurately the ARX model can represent the round-trip time dynamics of the Internet. In the future, however, we intend to apply a passive measurement approach, which collects data by monitoring packets already being transmitted in the network.
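As an illustration only (the authors' probe is a modified ping program, not reproduced here), the same measurement can be sketched in user space with the scapy packet library, assuming it is installed and the script runs with raw-socket privileges; the destination, gap bounds, and 8-byte timestamp payload are our own choices.

```python
import random
import struct
import time

from scapy.all import ICMP, IP, Raw, sr1  # needs root privileges

def probe(dst, n_packets, min_gap, max_gap):
    """Send ICMP Echo Requests with randomized inter-departure times and
    return (send_time, rtt) samples, the timestamp being echoed back in
    the payload as in the modified ping described above."""
    samples = []
    for seq in range(n_packets):
        payload = struct.pack("!d", time.time())       # 8-byte timestamp
        pkt = IP(dst=dst) / ICMP(seq=seq) / Raw(load=payload)
        reply = sr1(pkt, timeout=2, verbose=0)
        if reply is not None and Raw in reply:
            sent = struct.unpack("!d", bytes(reply[Raw].load)[:8])[0]
            samples.append((sent, time.time() - sent))  # RTT from echoed stamp
        time.sleep(random.uniform(min_gap, max_gap))    # randomized gap
    return samples
```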
3.2 Network Environments
As the number of routers between the source and destination hosts increases, the noise (i.e., the effect of other traffic and measurement errors) contained in the output becomes larger.
[Figure 3 shows an Ether TAP on the 100 Mbit/s link between the source host and the network, copying all ICMP Echo Request/Reply packets to a dedicated measurement host.]
Fig. 3. Measurement host for reliable data measurement.
[Figure 4 shows network N1: source and destination hosts attached via 100 Mbit/s links to switches SW1 and SW2, with an FTP server and client generating background traffic across the SW1-SW2 link.]
Fig. 4. Network N1 (LAN)
Moreover, the dominant part of the round-trip time is the queueing delay at the bottleneck router. It is therefore important to choose the network configurations in which the input and output are collected by taking into account the number of routers and the location of the bottleneck link. In this paper, we measure packet sending/receiving times in three network configurations, covering LAN and WAN environments, and obtain the input u(k) and the output y(k). In the LAN environment, we expect the ARX model to model the round-trip time dynamics accurately, since the network topology is rather simple and the measured data suffer little observation noise. In the WAN environments, by contrast, we expect the model accuracy to degrade, since the network topology is complex. We use two WAN configurations, which differ in the location of the bottleneck link. The following three network configurations (N1, N2, and N3) are used for collecting input and output data.
- Network N1 (LAN): Network N1 is a LAN environment with a simple configuration (Fig. 4). Two switches (SW1 and SW2) lie between the source and destination hosts, and all hosts and switches are connected by 100 Mbps LAN links. The link between SW1 and SW2 carries background traffic as well as the ICMP Echo Request/Reply packets exchanged between the source and destination hosts: a bulk FTP transfer from a server (connected to SW1) to a client (connected to SW2) is performed during data collection.
- Network N2 (WAN with a bottlenecked access link): Network N2 is a WAN environment with a complex configuration, in which the access link is the bottleneck between the source and destination hosts (Fig. 5). The source host is connected to the Internet via a 100 Mbps LAN, and the destination host via a 56 Kbps dial-up PPP link. At the time of measurement, the number of hops between the source and destination hosts was 16, and the average round-trip time was 319.7 ms.
- Network N3 (WAN with a non-bottlenecked access link): Network N3 is a WAN environment in which the access link is not the bottleneck between the source and destination hosts (Fig. 6). The source host is connected to the Internet via a 100 Mbps LAN. We chose www.so-net.ne.jp as the destination host. At the time of measurement, the number of hops between the source and destination hosts was 16, and the average round-trip time was 36.89 ms.
[Figures 5 and 6 show networks N2 and N3: the source host on a 100 Mbit/s LAN reaches the destination host across the Internet, via a 56 Kbit/s dial-up access link in N2 and via the destination's ISP in N3.]
Fig. 5. Network N2 (WAN with the bottlenecked access link)
Fig. 6. Network N3 (WAN with the non-bottlenecked access link)
In the above three network configurations, we measured packet sending/receiving times at the measurement host. The source host sent 20,000 ICMP Echo Request packets, and the timestamp of each ICMP Echo Request/Reply packet was recorded by the measurement host. The data collection was done at midnight on October 18, 2001. As explained in Section 2, the input u(k) and the output y(k) for system identification are calculated from the measured packet sending/receiving times. We empirically choose the sampling interval T in each network configuration; that is, T is chosen so that each sampling period contains about five samples. Since the packet inter-departure time from the source host is changed randomly, there may be sampling periods in which no packet is sent or received. If no packet is sent (or received) during the kth sampling period, the input u(k) (or the output y(k)) is not defined; in such a case, we use the minimum value of all past input (or output) data, i.e.,

u(k) = min_{0≤i<k} u(i)

[Pages are missing from the extracted text here; the document resumes mid-sentence in the paper "Dynamic Shaping for Self-Similar Traffic Using Network Calculus" by H. Elbiaze et al.]

... > Tn, as shown in Figure 5. According to the fundamental bounds of the service curve theory (maximum delay dmax and backlog Bmax) in general, and Network Calculus [1] in particular, for θ = (b − M)/(p − r),
dmax = ((p − Rn)/Rn) θ + M/Rn + Tn  at t = θ, if p > Rn > r    (1)
We now determine the maximum backlog Bmax. For Tn < θ:

Bmax = (p − Rn) θ + M + Rn Tn  at t = θ, if p > Rn > r    (2)
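Bounds (1) and (2) translate directly into code; a small helper, with names of our own choosing:

```python
def tspec_bounds(M, p, r, b, Rn, Tn):
    """Delay bound (1) and backlog bound (2) for a Tspec arrival curve
    (M, p, r, b) served at rate Rn with latency Tn; valid when p > Rn > r
    (and, for the backlog bound, when Tn < theta)."""
    assert p > Rn > r
    theta = (b - M) / (p - r)                  # where the arrival curve bends
    d_max = (p - Rn) / Rn * theta + M / Rn + Tn
    b_max = (p - Rn) * theta + M + Rn * Tn     # requires Tn < theta
    return d_max, b_max
```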
However, in real settings, two cases may arise. First, the maximum network buffer size Bc may be smaller than the above bound Bmax, in which case, if nothing is done, some traffic may be lost. Second, the above delay bound dmax may be unacceptable to a real-time user who is not prepared to accept a delay, at the network level, larger than a delay constraint dc; traffic delayed beyond dc may be useless to the user and hence lost. Our objective is then to act on the traffic in such a way that neither the offered buffer size Bc nor the tolerated delay bound dc is exceeded, while guaranteeing loss-free performance. This is achieved by the use of a shaper, introduced between the source and the network (see Figure 6). The shaper has a buffer of size Bsh, which we try to keep as small as possible, and a shaping rate Rsh at which the traffic is shaped and sent into the network. A larger value of Rsh means the traffic is less affected, which is desirable since the traffic should be minimally altered. An optimal shaper is thus one with minimal buffer size Bsh and maximal shaping rate Rsh.
[Figure 5 plots, in octets over time, the arrival Tspec (peak rate p, mean rate r, burst size b, maximum packet size M) against the network service curve of rate Rn and latency Tn, marking the bounds Bmax and dmax at the angle point θ.]
Fig. 5. Arrival Tspec and service curve
[Figure 6 shows the shaping service curve of rate Rsh and latency Tsh, with buffer Bsh, inserted between the arrival curve and the network service curve of rate Rn, with the buffer constraint Bc and delay constraint dc marked.]
Fig. 6. Shaper between LAN and WAN
4 Shaping
4.1 Regions of Shaping
Adding a shaper to the arriving traffic prior to its entrance to the network is done as follows. A new service curve with parameters (Rsh, Tsh), corresponding to the action of the shaper, is inserted between the arrival curve and the network service curve. The arriving traffic is thus first shaped by the newly introduced shaping service curve, and its output is then sent to the network and served by the network service curve. In what follows, we assume, without loss of generality, that the buffer and server at the network level are fully dedicated to our incoming traffic; any background traffic does not interfere with our traffic and is thus not shown explicitly, i.e., Tn = 0.
Shaping to meet the buffer requirement.
Suppose that the maximum buffer size Bc at the network level is smaller than the maximum backlog bound Bmax caused by the unshaped arriving traffic. The point of introducing a shaper in this case is to ensure that the incoming traffic does not exceed Bc, for loss-free network performance. For θ' = b/(Rsh − r),

Bc = (Rsh − Rn) θ'  at t = θ', if Rsh > Rn > r

Schematically, considering the setting of Figure 6, the idea is to vary the shaping curve through the segment indicating Bc; the shaded region in Figure 7 shows the region of shaping. It is worth noting the extreme in this case: the shaping curve with shaping rate hb < Rn. This corresponds to a shaper buffer size Bsh larger than the backlog Bmax of the network without shaping, while Rsh is far from maximal. Note also that for Rsh beyond hb, the buffer constraint Bc is outperformed uselessly, at the cost of an even heavier shaping. The optimal case is given by the shaping curve with shaping rate Rsh starting from Tsh = 0, whose intersection point with rt + b is (θ', y). This corresponds to maximal Rsh or, equivalently, minimal shaping:
Rsh = (Bc r − Rn b) / (Bc − b)    (3)
[Figures 7 and 8 shade the regions of feasible shaping curves: Figure 7 varies the shaping curve through the segment indicating the buffer constraint Bc, and Figure 8 through the segment indicating the delay constraint dc; the optimal curve of rate Rsh intersects rt + b at (θ', y).]
Fig. 7. Region of shaping to meet buffer requirement Bc
Fig. 8. Region of shaping to meet delay requirement dc
Again, smaller values of Rsh would yield an even smaller backlog than Bc, uselessly, at the cost of heavier shaping. These two cases thus correspond to two feasible shaping parameter settings, depending on the cost of the resources: the first operates at a network buffer occupancy below the target Bc but with heavy shaping, whereas the second is optimal in view of the shaping action, i.e., large Rsh, with the network buffer size constraint Bc met.
Shaping to meet the delay constraint.
In this case, the point of shaping is to reduce the maximum delay experienced in the network region from the original dmax to a new delay constraint dc. Note that introducing a shaper does not add to the end-to-end delay; the latter is merely partitioned between the shaper and the network element. This type of partitioning may be useful in an optical context, where it is better to hold packets on the electronic side rather than on the optical side, where the signal is more prone to distortion and attenuation. For θ' = b/(Rsh − r),

dc = ((Rsh − Rn)/Rn) θ'  at t = θ', if Rsh > Rn > r

This is again achieved by setting an appropriate value of Rsh. Schematically, the idea is to vary the shaping curve through the segment indicating dc, as shown in Figure 8. The extreme in this case is the shaping curve with shaping rate hd < Rn, which corresponds to a shaper buffer size Bsh larger than the backlog Bmax of the network without shaping, while Rsh is far from maximal. Note that for Rsh beyond hd, the delay constraint dc is outperformed uselessly, at the cost of an even heavier shaping. The optimal case is given by the shaping curve with shaping rate Rsh starting from Tsh = 0, whose intersection point with rt + b is (θ', y):
Rsh = Rn (b − r dc) / (b − Rn dc)    (4)
This corresponds to maximal Rsh or, equivalently, minimal shaping. Again, smaller values of Rsh would yield an even smaller delay than dc, uselessly, at the cost of heavier shaping.
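Equations 3 and 4 likewise reduce to one-liners; the helper names below are our own and are reused in the per-interval sketch of Subsection 4.2.

```python
def rsh_for_buffer(Bc, Rn, r, b):
    """Maximal shaping rate meeting the buffer constraint Bc (Equation 3);
    meaningful when Bc < b and the result satisfies Rsh > Rn > r."""
    return (Bc * r - Rn * b) / (Bc - b)

def rsh_for_delay(dc, Rn, r, b):
    """Maximal shaping rate meeting the delay constraint dc (Equation 4);
    meaningful when dc < b / Rn."""
    return Rn * (b - r * dc) / (b - Rn * dc)
```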
4.2 Equation-Based Dynamic Shaping Algorithm
So far, we have considered shaping within each interval Ii of length lN. Our ultimate aim, however, is to shape the global incoming traffic. Parallel to the idea of partitioning the arrival process so as to bound each interval locally, the shaping scheme introduced in the previous section is applied to each interval. Clearly, the shaping rate Rsh depends on the arrival curve parameters throughout the whole process. The task is thus to find the optimal, i.e., maximal, shaping rate Rsh for each interval Ii such that the buffer constraint and/or delay constraint are satisfied. This is achieved by dynamically changing the shaping rate from one interval to the next. The dynamic shaping algorithm, sketched in code below, is then as follows:
1. Set the observation window size equal to lN.
2. Determine the corresponding Tspec in each interval (Ii)i=1,...,N.
3. Apply Equations 3 and 4 to set the shaping parameters such that
i. shaping is minimal, in the sense of minimal buffer size Bsh and maximal shaping rate Rsh;
ii. the requirements are met, i.e., the buffer or delay constraint at the network level;
iii. there is no loss at the shaper, i.e., Bsh is not exceeded.
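A minimal sketch of this per-interval loop, using the helpers from the previous subsection; the Tspec tuples are assumed to be estimated per interval as in Section 5.2, with M folded into b for brevity, and all names are illustrative.

```python
def dynamic_shaping(tspecs, Rn, Bc=None, dc=None):
    """Equation-based dynamic shaping, following steps 1-3 above.

    tspecs: one (p, r, b) tuple per interval Ii, estimated over windows
    of length lN. Returns the shaping rate chosen for each interval.
    """
    rates = []
    for p, r, b in tspecs:
        candidates = [p]                       # Rsh = p means no effective shaping
        if Bc is not None:
            candidates.append(rsh_for_buffer(Bc, Rn, r, b))
        if dc is not None:
            candidates.append(rsh_for_delay(dc, Rn, r, b))
        rates.append(min(candidates))          # tightest constraint wins
    return rates
```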
5 Numerical Results
5.1 Model
We consider the end-to-end system shown in Figure 1. Let the self-similar traffic resulting from the LAN sources have the following characteristics: mean = 100 Mbit/s, variance = 10^8, and Hurst parameter H = 0.7. Let the packets have maximal size M equal to 1540 bytes. At the network level, let Rn = 227 Mbit/s be the rate of the server, with buffer capacity Bn equal to 100 packets. We assume without loss of generality that a fixed portion of the server at the network level, with service rate Rn and buffer space Bn, is entirely dedicated to our incoming traffic; any background traffic is not modeled explicitly. This assumption simplifies the analysis and simulation, as Tn is equal to zero. In a real setting, it amounts to considering a dedicated share of buffer space and service rate.
5.2 Estimation of Arrival Parameters
The first step of our equation-based dynamic shaping algorithm is to estimate the parameters of the incoming traffic in Tspec format, i.e., the peak rate p, mean rate r, and maximum burst size b, for different observation windows of size lN. In each interval (Ii)i=1,...,N of length lN, as stated in Section 2.1, the peak rate p is equal to the reciprocal of the minimum interarrival time Xmin, and the mean rate r is equal to the reciprocal of the average interarrival time Xave. The complexity lies in the estimation of the maximum burst size b, an essential parameter for a well-defined arrival envelope, the performance bounds, and the shaping issues.
By definition, b corresponds to consecutive arrivals with interarrival times tending to zero while the traffic is observed in a mean phase. In the present work, for every interval (Ii)i=1,...,N of length lN, we observe runs of consecutive interarrivals equal to Xmin and store the largest number of corresponding packets in b.
Estimation of the mean rate r and peak rate p.
Figures 9 and 10 show the average rate ri for intervals Ii of lengths 300 ms and 1 s, respectively. We notice that, for a given window size lN, ri varies from one interval to the next while keeping the same behavior as the original traffic, i.e., incorporating correlation. Moreover, the family (ri)i=1,...,N behaves in the same way for different
2.5e+08
"mean_300ms"
2e+08
Mean rate ri (bit/s)
Mean rate ri (bit/s)
2e+08
1.5e+08
1e+08
5e+07
0
"mean_1s"
1.5e+08
1e+08
5e+07
0
500
1000
1500 2000 Intervals Ii
2500
3000
3500
0
0
200
400
600 Intervals Ii
800
1000
Fig. 9. Average rate ri during the In- Fig. 10. Average rate ri during the Interval Ii : lN = 300ms terval Ii : lN = 1s
lengths lN of intervals (Ii )i=1,...,N , i.e., in many time scales. That means the presence of self-similarity property in the sequence (riN )i=1,...,N . Figures 9 and 10 shows two times scales (riN )i=1,...,N behaviors : 300ms and 1s. The same remarks remain valid for p. Estimation of burst size b. For each window size lN , we have observed the interarrival packets during the smallest mean rate ri . The sum of consecutive interarrivals smaller or equal to the interarrival time within the corresponding peak rate pi corresponds to the burst size. The obtained values for b vary from 85 packets for small window size lN to 60 for larger ones. 5.3
Non-constrained Performance
Figure 11 shows the probability density function of the queue at the network level when no shaping is used at the access of the network. For loss-free performance, this figure indicates that a buffer size of 55 packets is needed at the network server. The maximum delay at the network level in this case is equal to 0.00268 sec.
[Fig. 11. Network queue PDF without shaping: Prob(queue size >= q) against q. Fig. 12. Average rate ri vs shaping rate Rsh during Ii (both in bit/s, against the intervals Ii).]

5.4 Buffer-Constrained Performance
In this case, let us assume that the buffer at the network level cannot be large enough to hold 55 packets, i.e., that without shaping there will be some loss. Let the buffer constraint Bc be 10 packets, a typical buffer size in optical switches. To preserve loss-free performance, we need to operate some shaping at the access of the network in order to meet the buffer constraint Bc = 10. This brings us to the second step of our algorithm. Based on the arrival Tspec and the service curve equations, we derive the shaping parameters for every interval (Ii)i=1,...,N of length lN. For intervals of length lN = 100 ms, Figure 12 shows the mean arrival rate ri versus the shaping rate Rsh throughout the duration of the connection (100 seconds). We notice that ri and Rsh are inversely related: for large values of ri, i.e., a high input rate, the value of Rsh is small, i.e., severe shaping is needed to accommodate the buffer constraint and the loss-free performance; and conversely.

The third step of the algorithm is to plug the equation-based shaping parameters back into the simulation model. Figures 13 and 14 show the probability density function of the buffer occupancy at the network level and at the shaper, respectively, for different observation window lengths lN. The shaper size is also derived from the equations, and the largest value over all intervals is used. This conservatism explains the fact that no loss is observed at the shaper. The independence of the shaping queue PDFs from the interval sizes lN can be explained by the fact that the shaping rate Rsh adapts to the incoming traffic in order to meet the loss-free performance: for each lN, Rsh varies inversely with the mean rate ri, keeping the shaping queue behavior more or less the same. These figures and observations suggest that self-similar traffic, variable at different time scales, may exhibit the same type of variability at those very time scales. If this turns out to be true, observing and monitoring the traffic at small time intervals may be sufficient to construct and extrapolate its behaviour over larger time scales.
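The rate search of the second step can be made concrete with the standard deterministic network-calculus backlog bound sup_t [α(t) − Rn·t] for a constant-rate server. Since Equations 3 and 4 are not reproduced in this section, the sketch below uses this generic bound, with the shaped traffic described by α_sh(t) = min(M + pt, b + rt, M + Rsh·t); it is an assumption-laden stand-in, not the paper's exact derivation.

```python
def backlog_bound(p, r, b, M, R_sh, R_n, horizon=1.0, steps=10000):
    """Numerical sup over t of alpha_sh(t) - R_n * t (bits), t in seconds."""
    best = 0.0
    for k in range(1, steps + 1):
        t = horizon * k / steps
        alpha = min(M + p * t, b + r * t, M + R_sh * t)
        best = max(best, alpha - R_n * t)
    return best

def maximal_shaping_rate(p, r, b, M, R_n, B_c_bits):
    """Bisect for the largest R_sh in [r, p] keeping the bound within B_c."""
    lo, hi = r, p
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if backlog_bound(p, r, b, M, mid, R_n) <= B_c_bits:
            lo = mid            # constraint met: try milder shaping
        else:
            hi = mid            # bound exceeded: shape harder
    return lo

# Example with this section's orders of magnitude (bits and bit/s):
M = 1540 * 8
print(maximal_shaping_rate(p=2.5e8, r=1.0e8, b=85 * M, M=M,
                           R_n=227e6, B_c_bits=10 * M))
```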
[Fig. 13. Network queue PDFs for different interval lengths Ii: 65 ms, 100 ms, 200 ms, 300 ms, 1 s. Fig. 14. Shaping buffer PDFs for different interval lengths Ii: 65 ms, 100 ms, 200 ms, 300 ms, 1 s. Both plot Prob(queue size >= q) against q.]
As for the network, we notice that the smallest interval lengths, lN = 65 and 100 ms, yield loss-free performance. This is explained by the fact that at those interval lengths we obtain higher precision in the estimation of the arrival parameters and hence of the shaping parameters. For larger interval lengths, lN = 200, 300, and 1000 ms, some loss, on the order of 2.4 × 10^−7, is observed. This is because, at lower precision, the arrival parameters are under-estimated; put into the equations, they yield high shaping rates or, equivalently, soft shaping, which in turn results in loss at the network level.

5.5 Delay-Constrained Performance
Let us now assume that the tolerated maximum delay dc at the network level cannot be as large as 0.0005 sec, i.e., that without shaping some loss will occur because the delay bound is exceeded. Again, to ensure this performance, we need to operate some shaping at the access of the network in order to meet the delay constraint dc = 0.0005 sec. We then apply the three steps of our equation-based algorithm, as done for the buffer-constrained case.
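For a constant-rate server, the network-calculus delay bound is the backlog bound divided by the service rate, so the delay constraint can be handled by reusing the buffer-constrained search with an equivalent buffer Bc = dc · Rn. A minimal illustration, assuming the helper sketched in Section 5.4:

```python
d_c = 0.0005                  # tolerated maximum delay (s)
R_n = 227e6                   # network service rate (bit/s)
M = 1540 * 8
R_sh = maximal_shaping_rate(p=2.5e8, r=1.0e8, b=85 * M, M=M,
                            R_n=R_n, B_c_bits=d_c * R_n)
```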
[Fig. 15. Network queue PDFs for different interval lengths Ii: 65 ms, 100 ms, 200 ms, 300 ms, 1 s. Fig. 16. Shaping buffer PDFs for different interval lengths Ii: 65 ms, 100 ms, 200 ms, 300 ms, 1 s. Both plot Prob(queue size >= q) against q.]
Figures 15 and 16 show the probability density function of the buffer occupancy at the network level and at the shaper, respectively, for different observation window lengths lN. Table 1 gives the maximum delay values obtained by simulation for window lengths lN of 65 ms, 100 ms, 200 ms, 300 ms, and 1 s. Again, we notice that the smallest interval lengths, lN = 65 and 100 ms, meet the target maximum delay.

Table 1. Maximum delay at the network level for different window lengths lN

window length lN | 65ms | 100ms | 200ms | 300ms | 1s
maximum delay in the network | 0.0004102 s | 0.0004219 s | 0.0005413 s | 0.0005414 s | 0.0005414 s
For larger interval lengths, lN = 200, 300, and 1000 ms, the maximum observed delay exceeds the constraint. This is again explained by the fact that, at lower precision, the arrival parameters are under-estimated.
6 Conclusion
In this paper, we focused on self-similar traffic at the input of an optical node. If this traffic is left as is, it cannot satisfy the buffer and/or delay constraints at the network level, which may be very stringent in the optical case. In order to meet those requirements, shaping is essential. In this work, we proposed an equation-based dynamic shaping scheme with three key steps: 1) estimation and fitting of the interval-wise incoming traffic into arrival curves, 2) solving the service curve equations for the shaping parameters in an adaptive manner, and 3) fitting the latter back into the original model. As for the first step of our algorithm, we notice that the input estimate reproduces the same self-similar, correlated nature of the original traffic. The shaping parameters derived in step 2 are typically conservative, owing to the deterministic nature of the service curve approach. However, when put back into the original model in step 3, they turn out numerically not to be very conservative. This may be explained by the correlated nature of the original self-similar traffic.

Future work will focus on the following issues. First, the conservatism of the deterministic version of the service curve approach seems to be less apparent in the presence of self-similar, LRD traffic, as shown by the small loss at the network level. It may be worthwhile to quantify to what extent self-similar traffic reduces this conservatism. Second, optimal shaping relies on the trade-off between the buffer sizes at the shaper and at the network. We intend to tackle this issue by relaxing the loss-free determinism at the shaper level, where some loss can in fact be tolerated. This can be achieved by more severe shaping, i.e., by decreasing the shaping rate Rsh, and hence reaching the buffer size limit. This limit can actually be violated in a controlled manner in order to tolerate a
loss performance similar to that encountered at subsequent network elements. This feature is desirable and more pragmatic, as it is pointless to operate shaping that is too perfect with respect to the performance of the optical network; after all, what really counts from the user's point of view is the end-to-end performance.
Is Admission-Controlled Traffic Self-Similar?

Giuseppe Bianchi, Vincenzo Mancuso, and Giovanni Neglia

Università di Palermo, Dipartimento di Ingegneria Elettrica
Viale delle Scienze, 90128 Palermo, Italy
[email protected], {vincenzo.mancuso,giovanni.neglia}@tti.unipa.it
Abstract. It is widely recognized that the maximum number of heavy-tailed flows that can be admitted to a network link, while meeting QoS targets, can be much lower than in the case of Markovian flows. In fact, the superposition of heavy-tailed flows shows long range dependence (self-similarity), which has a detrimental impact on network performance. In this paper, we show that long range dependence is significantly reduced when traffic is controlled by a Measurement-Based Admission Control (MBAC) algorithm. Our results suggest that MBAC is a value-added tool for improving performance in the presence of self-similar traffic, rather than a mere approximation of traditional (parameter-based) admission control schemes.
1 Introduction
The experimental evidence that packet network traffic shows self-similarity¹ was first given in [1], where a thorough statistical study of large Ethernet traffic traces was carried out. This paper stimulated the research community to explore the various facets of self-similarity. This phenomenon has been observed in wide area Internet traffic, and many causes that contribute to self-similarity for both TCP and UDP traffic aggregates are now more fully understood [2,3,4,5].

In this paper, we focus our attention on traffic generated by sources non-reactive to network congestion (e.g., real-time multimedia streams). The traffic aggregate offered to a network link results from the superposition of several individual flows. It has been proven [6] that self-similarity, or Long Range Dependence (LRD), arises when individual flows have heavy-tailed² periods of activity/inactivity. This result is valid asymptotically as the number of sources increases. We are interested in the practical implications of self-similarity on the design of Call Admission Control (CAC) schemes.

¹ In this paper we use the terms self-similarity and long range dependence in an interchangeable fashion, because we refer to asymptotic second-order self-similarity (for details see [7]).
² A random variable is said to be "heavy-tailed" when its cumulative distribution function converges to F(t) ∼ 1 − at^−c as t → ∞, with 1 < c < 2 and a a constant.

(This research is supported by the European Community and MIUR in the frame of the Pollens project (ITEA, if00011a).)

In this paper we assume, for convenience, a traffic scenario composed of homogeneous flows. In these conditions, a
traditional (parameter-based) CAC rule simply checks that the number of admitted flows never exceeds a maximum threshold Nt. This threshold is selected so that target Quality of Service (QoS) requirements (e.g., loss ratio, delay percentiles, etc.) are met. In what follows we refer to this CAC scheme as MAXC (Maximum number of Calls). A large amount of work (see [7]) has shown that self-similarity has a detrimental impact on network performance. For the same link capacity and buffer size, the Quality of Service (i.e., loss/delay performance) experienced by LRD traffic turns out to be worse than that experienced by Short Range Dependent (SRD) traffic, e.g., traffic modelled by Markov processes. The straightforward interpretation of these results, in terms of traditional CAC, is that self-similarity is a key factor which reduces the maximum number Nt of flows that can be admitted.

We argue that this interpretation is questionable, as it does not account for recent progress in admission control schemes, and specifically the emergence and increasing popularity of Measurement-Based Admission Control (MBAC) approaches [8,9,10,11]. Unlike traditional CAC methods, which rely on a-priori knowledge of the statistical characterization of the offered traffic, MBAC algorithms base the decision whether to accept or reject an incoming call on run-time measurements of the traffic aggregate process. The aim of this paper is to present results which show that MBAC approaches appear capable of smoothing the self-similarity of the accepted traffic aggregate. In this sense, MBAC approaches are not merely "approximations" of ideal CAC schemes for situations where the statistical traffic source characterization is not fully known. On the contrary, this paper shows that MBAC schemes are an effective and important way to cope with the high variability of LRD traffic, and their adoption leads to significant performance advantages with respect to traditional CAC schemes (see [11] for an initial insight into the performance advantages of MBAC in an LRD traffic scenario).

The rest of the paper is organized as follows. In section 2 we briefly describe the MBAC principles and discuss the important role of MBAC in the presence of self-similar traffic. The specific MBAC algorithm adopted and the methods used to evaluate self-similarity are described in section 3. Numerical results are presented and discussed in section 4. Finally, concluding remarks are given in section 5.
2 Measurement Based Admission Control
It is frequently assumed that the ultimate MBAC goal is to reach the "ideal" performance of a parameter-based CAC scheme. In fact, MBAC schemes are traditionally meant to approximate the operation of a parameter-based CAC. They cannot rely on detailed a-priori knowledge of the statistical traffic characteristics, as this information is not easily supplied in an appropriate and useful form by the network customer. Therefore, their admission control decisions are based on an estimate of the network load obtained via a measurement process that runs on the accepted traffic aggregate.
[Fig. 1. Traditional Admission Control operation. Fig. 2. Measurement-Based Admission Control operation. Both plot the normalized link utilization against simulation time (s), showing the admitted calls, the instantaneous link load, and the smoothed link load.]
However, a closer look at the basic principles underlying MBAC suggests that, in particular traffic conditions, these schemes may outperform traditional parameter-based CAC approaches. An initial insight into the performance benefits of MBAC versus parameter-based algorithms in an LRD traffic scenario is given in [11]. In this paper, we present additional results that confirm the superiority of MBAC, and we explain it by showing that MBAC algorithms are able to reduce the self-similarity of the traffic aggregate generated by the admitted
heavy-tailed sources. In other words, we argue that MBAC schemes are not just "approximations" of parameter-based CAC; they are in principle superior to traditional CAC schemes when self-similarity comes into play.

An intuitive justification can be drawn by looking at the simulation traces presented in figures 1 and 2 (simulation details are described in section 3). Each figure reports two selected 200 s simulation samples, which for convenience have been placed adjacently. The y-axis represents the normalized link utilization. The figures report: i) the number of accommodated calls, normalized with respect to the link capacity; ii) the instantaneous link load, for graphical convenience averaged over a 1 s time window; and iii) the smoothed link load, as measured by the autoregressive filter adopted in the MBAC, whose time constant is of the order of 10 seconds.

Figure 1 reports results for a parameter-based CAC scheme (MAXC). According to this scheme, a new flow is accepted only if the number of already admitted flows is lower than a maximum threshold Nt. In the simulation run, Nt was set to 129, which corresponds to a target link utilization of about 88%, and a very high offered load (650%) was adopted. As a consequence, the number of flows admitted to the link sticks, in practice, to the upper limit. The leftmost 200 simulation seconds in Figure 1 show that, owing to the LRD of the accepted traffic, the load offered by the admitted sources is consistently well above the nominal average load. Traffic bursts even greater than the link capacity are very frequent. On the other hand, as shown by the rightmost 200 seconds, there are long periods of time in which the system remains under-utilized. The criticality of self-similarity lies in the fact that the described situation occurs at time scales, e.g., the one shown in the figure, which dramatically affect the loss/delay performance.

A very different situation occurs for MBAC schemes. Figure 2 reports results for the simple MBAC scheme described in section 3.2. In this case, new calls are blocked as long as the offered-load measurement is higher than 89% (the values 129 in MAXC and 89% in MBAC were selected so that the resulting average throughputs were the same). Here we see from both the leftmost and rightmost plots that the offered-load measurement fluctuates slightly around the threshold. Long-term traffic bursts are dynamically compensated by a significant decrease in the number of admitted calls (leftmost plot). The opposite situation occurs when the admitted calls persistently emit below their nominal average rate: indeed, the rightmost plot shows that in these periods the number of admitted calls significantly increases. This "compensation" capability of MBAC schemes leads us to conclude that MBAC is well-suited to operating in LRD traffic conditions: the quantitative analysis carried out in section 4, in fact, confirms this insight.
3 The Simulation Scenario
To obtain simulation results, we have developed a C++ event-driven simulator. A batch simulation approach was adopted. The simulation time is divided into
101 intervals, each lasting 300 simulated minutes, and results collected in the first "warm-up" time interval are discarded. As in many other admission control works [10,11], the network model consists of a single bottleneck link. The reason is that the basic performance aspects of MBAC are most easily revealed in this simple network configuration rather than in a multi-link scenario. The link capacity was set equal to 2 Mbps, and an infinite buffer size was considered. Thus, QoS is characterized by the delay (average and 99th percentile) experienced by data packets rather than by packet loss as in [11]. The rationale for using delay instead of loss is threefold. Firstly, loss performance depends on the buffer size adopted in the simulation runs, while delay performance does not require a choice of buffer size (we have actually used an infinite buffer). Secondly, the loss performance magnitude may be easily inferred, for a given buffer size, from the distribution of the delay, which can be well summarized via selected delay percentiles. Thirdly, and most importantly, a limited buffer size acts as a smoothing mechanism for traffic bursts. Large packet losses, occurring during severe and persistent traffic bursts (as expected for self-similar traffic), have a beneficial congestion-control effect on the system performance. Conversely, in a very large buffer scenario, the system is forced to keep memory of non-smoothed traffic bursts, and therefore performance is further degraded in the presence of high traffic variability³.

As our performance figures, we evaluated link utilization (throughput) and the delay distribution, summarized, for convenience of presentation, by the average and the 99th delay percentile. The 95% confidence intervals have been evaluated. In all cases, throughput results show a confidence interval always lower than 0.3%. Despite the very long simulation time, higher confidence intervals occur for the 99th delay percentile results: less than 5% for MBAC results, and as much as 25% for MAXC results (an obvious consequence of the self-similarity of the MAXC traffic aggregate).

3.1 Traffic Sources
For simplicity, we have considered a scenario composed of homogeneous flows. Each traffic source is modelled as an ON/OFF source. While in the ON state, a source transmits 1000-bit fixed-size packets at a Peak Constant Rate (PCR) randomly chosen in the small interval 31 to 33 Kbps (to avoid source synchronization effects at the packet level). While in the OFF state, it remains idle. The mean values of the ON and OFF periods were set, respectively,
equal to 1 s and 1.35 s (the Brady model for voice traffic). This results in an average source rate r = 0.4255 · E[PCR] ≈ 13.6 Kbps. ON and OFF periods were drawn from two Pareto distributions with the same shape parameter c = 1.5 (so that they exhibit heavy tails). Simulation experiments were run in a dynamic scenario consisting of randomly arriving flows. Each flow requests service from the network, and the decision whether to admit or reject the flow is taken by the specific simulated admission control algorithm. A rejected flow departs from the network without sending any data and does not retry its service request. The duration of an accepted flow is taken from a lognormal distribution [12] with mean 300 s and standard deviation 676 s (we adopted unit variance for the corresponding normal distribution, as reported in [12]), but the call duration is extended to the end of the last ON or OFF period. Because of this, the real call lifetime exhibits a longer mean (320 s) and infinite variance. If the last burst were cut off, the process variance would become finite. The flow arrival process is Poisson with arrival rate λ calls per second. For convenience, we refer to the normalized offered load ρ = λ·r·Thold/Clink, where r is the mean source rate, Thold the average call duration, and Clink the link capacity.

³ Specifically, this justifies the very different performance results we obtain in high-utilization conditions when compared with the loss-utilization performance frontier presented in [11] for LRD sources. In that paper, unlike our results in figure 4, the performance of MBAC schemes appears to converge to that of traditional CAC schemes (i.e., the MAXC algorithm) as the utilization increases. A theoretical justification for this behavior can be found in [16], where the authors derive a formula for the "correlation horizon" (which scales in linear proportion to the buffer size), beyond which the impact of the correlation in the arrival process on loss performance becomes nil.
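A small sketch of this source model (illustrative, not the authors' simulator): Pareto ON/OFF durations with shape c = 1.5, where the Pareto scale is chosen so that the mean is 1 s in the ON state and 1.35 s in the OFF state.

```python
import random

C = 1.5                                  # Pareto shape parameter

def pareto_duration(mean):
    scale = mean * (C - 1.0) / C         # E[X] = scale * C / (C - 1)
    return scale * random.paretovariate(C)

def on_off_periods(n):
    """Alternating (state, duration) pairs: ON mean 1 s, OFF mean 1.35 s."""
    return [("ON" if k % 2 == 0 else "OFF",
             pareto_duration(1.0 if k % 2 == 0 else 1.35))
            for k in range(n)]
```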
3.2 Measurement-Based Admission Control Algorithm
Rather than using complex MBAC proposals, we have implemented a very basic MBAC approach. The rationale for choosing a very simple MBAC scheme is twofold. Firstly, it has been shown [11] that different MBAC schemes behave very similarly in terms of throughput/loss performance: the length of the averaging periods and the way in which new flows are taken into account appear to be much more important than the specific admission criterion. Secondly, and more importantly, our goal is to show that the introduction of measurement into the admission control decision, rather than the careful design of the MBAC algorithm, is the key to obtaining performance advantages over the MAXC approach. From this perspective, the simpler the MBAC scheme is, the more general the conclusions are.

The specific MBAC implementation is as follows. A discrete time scale is adopted, with sample time T = 100 ms. Let X(k) be the load, in bits/s, entering the link buffer during time slot k, and let B(k) be a running bandwidth estimate, smoothed by a simple first-order autoregressive filter:

B(k) = αB(k − 1) + (1 − α)X(k)

We chose α = 0.99, corresponding to a time constant of about 10 s in the filter memory. Consider now a call requesting admission during slot k + 1. The call is admitted if the estimated bandwidth B(k) is less than a predetermined percentage of the link bandwidth. By tuning this percentage, performance figures can be obtained for various accepted-load conditions.

An additional well-known issue in MBAC algorithm design [9] is that, when a new flow is admitted, the slow responsiveness of the load estimate will not
immediately reflect the presence of the new flow. A solution that prevents this performance-impairing situation is to artificially increase the load estimate to account for the new flow. In our implementation, the bandwidth estimate B(k) is updated by adding the average rate of the flow (i.e., B(k) := B(k) + r).
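The whole scheme fits in a few lines. The sketch below is our own transcription with illustrative names; it combines the autoregressive filter, the threshold test, and the estimate bump applied when a call is admitted.

```python
class SimpleMBAC:
    """Basic MBAC: B(k) = alpha*B(k-1) + (1-alpha)*X(k); admit while the
    smoothed load stays below a fraction of the link capacity."""

    def __init__(self, link_bps=2e6, threshold=0.89, alpha=0.99,
                 flow_rate_bps=13.6e3):
        self.link = link_bps
        self.threshold = threshold       # e.g. 89% as in figure 2
        self.alpha = alpha               # ~10 s memory with T = 100 ms slots
        self.r = flow_rate_bps           # mean per-flow rate
        self.B = 0.0                     # smoothed load estimate (bit/s)

    def on_slot(self, X_k):              # called once per T = 100 ms slot
        self.B = self.alpha * self.B + (1.0 - self.alpha) * X_k

    def admit(self):
        if self.B < self.threshold * self.link:
            self.B += self.r             # reflect the new flow immediately
            return True
        return False
```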
3.3 Statistical Analysis of Self-Similarity
The Hurst parameter H quantifies the self-similarity of the accepted traffic aggregate. For a wide range of stochastic processes, H = 0.5 corresponds to uncorrelated observations, H > 0.5 to LRD processes, and H < 0.5 to SRD processes. In order to evaluate H, we used the three well-known methods described below. All methods take as input a realization X(i) of the discrete-time stochastic process representing the load offered to the link buffer by the accepted traffic aggregate during a 100 ms time window. The methods adopted are:

1. Aggregate Variance [13]. The original series X(i) is divided into blocks of size m and the aggregated series X^{(m)}(k) is calculated as
$$X^{(m)}(k) = \frac{1}{m} \sum_{i=(k-1)m+1}^{km} X(i), \qquad k = 1, 2, \ldots$$
The sample variance of X^{(m)}(k) is an estimator of Var[X^{(m)}]; asymptotically:
$$Var\left[X^{(m)}\right] \sim \frac{Var(X)}{m^{2(1-H)}}$$

2. Rescaled Adjusted Range (R/S) [13]. For a time series X(i), with partial sum $Y(n) = \sum_{i=1}^{n} X(i)$ and sample variance S²(n), the R/S statistic, or rescaled adjusted range, is given by:
$$\frac{R}{S}(n) = \frac{1}{S(n)} \left[ \max_{0 \le p \le n} \left( Y(p) - \frac{p}{n} Y(n) \right) - \min_{0 \le p \le n} \left( Y(p) - \frac{p}{n} Y(n) \right) \right]$$
Asymptotically:
$$E\left[\frac{R}{S}(n)\right] \sim C n^{H}$$

3. Wavelet Estimator [14] (see [15] for a freely distributed Matlab implementation). We recall that the spectrum of an LRD process X(t) exhibits power-law divergence at the origin: $W_X(f) \sim c_f |f|^{1-2H}$. The method recovers the power-law exponent 1 − 2H and the coefficient c_f by exploiting the relation
$$E\left[d_X^2(j, l)\right] = 2^{j(1-2H)} c_f C$$
where d_X(j, l) = ⟨X, ψ_{j,l}⟩ are the coefficients of the discrete wavelet transform of the signal X(t), i.e., its projections on the basis functions ψ_{j,l} constructed from the mother wavelet through scaling and translation (2^j and l are, respectively, the scaling and translation factors).
[Fig. 3. Link utilization vs offered load, for MAXC and MBAC with a 90% utilization target; the 99th delay percentiles (in ms) are annotated along the curves.]
A common problem is to determine over which scales the LRD property holds, or equivalently the alignment region in the logscale diagrams. Using the fit test of the Matlab tool [15], we determined for our traces the range from 2000 s (11th octave) to 250000 s (18th octave); the last two octaves were discarded because they contained too few values. All three methods were applied over this range of scales.
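As an illustration of the first method, the sketch below estimates H from the slope of log Var(X^(m)) versus log m (the slope is 2H − 2); it is a plain least-squares fit of our own, not the tool used by the authors.

```python
import math

def _variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def hurst_aggregate_variance(x, block_sizes):
    """H from the slope of log Var(X^(m)) vs log m (slope = 2H - 2)."""
    pts = []
    for m in block_sizes:
        agg = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        pts.append((math.log(m), math.log(_variance(agg))))
    n = len(pts)
    sx = sum(u for u, _ in pts); sy = sum(v for _, v in pts)
    sxx = sum(u * u for u, _ in pts); sxy = sum(u * v for u, v in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return 1.0 + slope / 2.0
```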
4 Performance Evaluation
A problem arising in the comparison of different CAC schemes is the definition of a throughput/performance operational trade-off. In general, CAC schemes have some tunable parameters that allow the network operator to set a suitable utilization target and the consequent QoS provisioning. For example, in the case of the ideal MAXC algorithm, a higher setting of the threshold value results in increased system throughput at the expense of delay performance. By adjusting these parameters, CAC rules can be made more aggressive or more conservative with regard to the number of flows admitted.

The results presented in figure 3 were obtained by setting the MAXC and MBAC tuning parameters so that a target 90% link utilization is achieved in overload conditions. The figure compares the throughput/delay performance of MBAC and MAXC versus the normalized offered load (the 99th delay percentiles, measured in ms, are reported numerically). Minor differences can be noted in the capability of the considered schemes to achieve the performance target (as expected, MAXC converges faster than MBAC to the utilization target). A much more interesting result is the significantly lower 99th delay percentile of MBAC compared with MAXC.
[Fig. 4. Delay performance vs link utilization: average delay and 99th delay percentile (ms) for MAXC and MBAC.]
Rather than limiting the investigation to a single performance level, it is preferable to compare different CAC schemes over a wide range of link-utilization targets (and, correspondingly, QoS performance levels), obtained by varying the CAC threshold parameters. Unless otherwise specified, all results presented in what follows are obtained in large overload conditions (650% offered load). Rather than varying the offered load, figure 4 compares MBAC and MAXC by plotting their QoS performance versus the link utilization (following [11], the QoS-versus-utilization curve is called the Performance Frontier). Specifically, the figure reports the delay/utilization performance frontiers of MAXC and MBAC. Both the average and the 99th delay percentiles are compared. The figure shows that the performance improvement provided by MBAC is remarkable, especially at large link utilization.

We argue that the performance enhancement of MBAC over MAXC is due to the beneficial effect of MBAC in reducing the self-similarity of the accepted traffic aggregate. To quantify this statement, tables 1 and 2 report the Hurst-parameter estimates obtained with the three methods described in section 3.3, along with the corresponding CAC settings (maximum call number for MAXC; link utilization threshold for MBAC) and the achieved link utilization⁴. We see that the methods provide congruent estimates.

⁴ As said, these results were obtained with an offered load equal to 650%. It may be remarked that the differences between MBAC and MAXC in shaping traffic lessen in lighter load conditions, and vanish for very low offered loads (when neither MBAC nor MAXC enforces call rejections). In this situation, however, traffic self-similarity is irrelevant in terms of performance, as the traffic QoS requirements are met anyway. Additional results, not presented here due to space constraints, show that the MBAC capability to reduce self-similarity becomes evident as soon as the offered load approaches the target utilization threshold, and that the Hurst-parameter values reach those presented in table 2 as soon as the offered load becomes 10-20% greater than this target.
Table 1. Hurst-parameter estimate for MAXC controlled traffic

Thresh (calls) | Thrput % | Hurst Variance | Hurst R/S | Hurst Wavelet
105 | 71.8 | 0.73 | 0.79 | 0.78
110 | 74.7 | 0.77 | 0.78 | 0.76
115 | 78.3 | 0.74 | 0.78 | 0.80
120 | 81.5 | 0.73 | 0.79 | 0.76
125 | 84.5 | 0.71 | 0.79 | 0.75
127 | 86.8 | 0.77 | 0.77 | 0.75
130 | 88.7 | 0.78 | 0.76 | 0.75
132 | 90.3 | 0.68 | 0.73 | 0.75
135 | 91.7 | 0.72 | 0.72 | 0.77
137 | 93.4 | 0.71 | 0.77 | 0.76
140 | 94.7 | 0.78 | 0.80 | 0.74

Table 2. Hurst-parameter estimate for MBAC controlled traffic

Thresh (util%) | Thrput % | Hurst Variance | Hurst R/S | Hurst Wavelet
70 | 69.1 | 0.55 | 0.48 | 0.55
74 | 73.0 | 0.60 | 0.55 | 0.57
78 | 76.9 | 0.58 | 0.54 | 0.58
82 | 80.8 | 0.56 | 0.50 | 0.58
86 | 84.6 | 0.55 | 0.51 | 0.60
88 | 86.6 | 0.52 | 0.49 | 0.53
90 | 88.5 | 0.60 | 0.52 | 0.57
92 | 90.4 | 0.54 | 0.52 | 0.56
94 | 92.4 | 0.51 | 0.46 | 0.56
96 | 94.3 | 0.58 | 0.52 | 0.58
98 | 96.2 | 0.58 | 0.53 | 0.57
Results are impressive: the Hurst parameter decreases from about 0.75 in the case of MAXC to about 0.5 for MBAC. It is interesting to note that 0.75 is the Hurst-parameter value theoretically calculated in [6] for a flow with heavy-tailed periods of activity/inactivity with shape parameter c = 1.5 (the formula is H = (3 − c)/2). In conclusion, table 2 quantitatively supports our thesis that self-similarity is a marginal phenomenon for MBAC-controlled traffic (the achieved Hurst parameter is very close to 0.5, which represents SRD traffic).

To characterize the time behavior of the two MAXC and MBAC traffic-aggregate time series, figure 5 reports a log-log plot of the aggregate variance, computed as described in section 3.3. While the two curves exhibit similar behavior for small values of the aggregation scale, the asymptotic slope of the MAXC plot is very different from the MBAC one. We recall that the asymptotic slope β is related to the Hurst parameter by β = 2H − 2. The lines corresponding to H = 0.50, H = 0.55, H = 0.75, and H = 0.80 are plotted in the figure for reference. Figure 5 thus appears to suggest that the MBAC-controlled traffic is not self-similar (Hurst parameter close to 0.5). Similar considerations can be drawn, with greater evidence, from figure 6, which reports a log-log plot of the estimated squared wavelet coefficients d²_X(j, l) versus the basis-function time scale. The figure shows that, at large time scales, the MBAC-controlled traffic plot tends to lie on a horizontal line (the asymptotic slope γ is related to the Hurst parameter by γ = 1 − 2H, so a horizontal line corresponds to H = 0.50; the H = 0.80 case is also plotted for reference). Finally, figures 5 and 6 show that the MBAC curve departs from the MAXC curve at a time scale of the order of 100 seconds. Although a thorough
[Fig. 5. Aggregated variance plot: log-log variance vs aggregation scale (s) for MAXC and MBAC, with reference slopes for H = 0.5, 0.55, 0.75, and 0.8.]
[Fig. 6. Wavelet coefficients plot: log-log squared wavelet coefficients vs time scale (s) for MAXC and MBAC, with reference slopes for H = 0.5 and 0.8.]
understanding of the emergence of such a specific time scale is outside the scope of the present paper, we suggest that it might have a close relationship with the concept of “critical time scale” outlined in [10].
5 Conclusions
The results presented in this paper suggest that the traffic aggregate resulting from the superposition of measurement-based admission-controlled flows shows very marginal long range dependence. This is not the case for traffic controlled by a traditional parameter-based admission control scheme. We see two important practical implications of our study. Firstly, it supports the thesis that MBAC is not just an approximation of traditional CAC schemes, useful when the statistical pattern of the offered traffic is uncertain. On the contrary, we view MBAC as a value-added traffic engineering tool that allows a significant increase in network performance when the offered traffic shows long range dependence. Secondly, provided that the network is ultimately expected to offer an admission control function, which we recommend should be implemented via MBAC, our results seem to question the practical significance of long range dependence, the widespread usage of self-similar models in traffic engineering, and the consequent network over-dimensioning.

Acknowledgements. The authors want to thank the graduate student Vito Imburgia for his collaboration in the software development.
References
1. W. Leland, M. Taqqu, W. Willinger, D. Wilson, "On the Self-Similar Nature of Ethernet Traffic", IEEE/ACM Trans. on Networking, Vol. 2, No. 1, pp. 1-15, Feb. 1994.
2. J. Beran, R. Sherman, W. Willinger, M.S. Taqqu, "Variable-bit-rate video traffic and long-range dependence", IEEE Transactions on Communications, 1995.
3. K. Park, G.T. Kim, M.E. Crovella, "On the Relationship Between File Sizes, Transport Protocols, and Self-Similar Network Traffic", Proceedings of the International Conference on Network Protocols, pp. 171-180, October 1996.
4. M.E. Crovella, A. Bestavros, "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes", IEEE/ACM Trans. on Networking, 5(6):835-846, Dec. 1997.
5. A. Feldmann, A.C. Gilbert, W. Willinger, "Data networks as cascades: Investigating the multifractal nature of Internet WAN traffic", Computer Communications Review, 28, 1998.
6. W. Willinger, M. Taqqu, R. Sherman, D. Wilson, "Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level", IEEE/ACM Trans. on Networking, Vol. 5, No. 1, pp. 71-86, Feb. 1997.
7. K. Park, W. Willinger (eds.), "Self-Similar Network Traffic and Performance Evaluation", Wiley Inter-Science, 2000.
8. R. Gibbens, F.P. Kelly, "Measurement-Based Connection Admission Control", Proc. of the 15th International Teletraffic Congress, June 1997.
9. S. Jamin, P.B. Danzig, S. Shenker, L. Zhang, "A measurement-based admission control algorithm for integrated services packet networks", IEEE/ACM Trans. on Networking, Vol. 5, No. 1, pp. 56-70, Feb. 1997.
10. M. Grossglauser, D. Tse, "A Framework for Robust Measurement-Based Admission Control", IEEE/ACM Trans. on Networking, Vol. 7, No. 3, July 1999.
11. L. Breslau, S. Jamin, S. Shenker, "Comments on the Performance of Measurement-Based Admission Control Algorithms", Proc. of IEEE Infocom 2000, Tel Aviv, Israel, March 2000.
12. V. Bolotin, "Modeling Call Holding Time Distributions for CCS Network Design and Performance Analysis", IEEE Journal on Selected Areas in Communications, 12(3):433-438, Apr. 1994.
13. J. Beran, "Statistics for long-memory processes", Chapman & Hall, 1994.
14. P. Abry, D. Veitch, "Wavelet Analysis of Long-Range Dependent Traffic", IEEE Transactions on Information Theory, 44(1), pp. 2-15, January 1998.
15. http://www.emulab.ee.mu.oz.au/~darryl/secondorder_code.html
16. M. Grossglauser, J. Bolot, "On the Relevance of Long Range Dependence in Network Traffic", IEEE/ACM Trans. on Networking, 1998.
Analysis of CMPP Approach in Modeling Broadband Traffic

R.G. Garroppo, S. Giordano, S. Lucetti, and M. Pagano

Department of Information Engineering, University of Pisa
Via Diotisalvi 2 - 56126 Pisa - Italy
{r.garroppo, s.giordano, s.lucetti, m.pagano}@iet.unipi.it

Abstract. The CMPP (Circulant Modulated Poisson Process) modeling approach represents an appealing solution, since it provides the integration of traffic measurement and modeling. At the same time, it maintains the Markovian hypothesis that permits analytical transient and steady-state analyses of queueing systems using efficient algorithms. These relevant features of the CMPP approach have led us to analyze in more detail the fitting procedure when it is applied to actual broadband traffic. Investigating the estimation algorithm for the model parameters, we highlight the difficulty CMPP has in capturing the upper tail of the marginal distribution of actual data, which leads to an optimistic evaluation of network performance. As shown in the paper, a simple relation exists between the number of significant eigenvalues obtained by the spectral decomposition and the peak rate that the CMPP structure is able to capture. This relation evidences the difficulty of CMPP in modeling actual traffic, characterized by long-tailed distributions, as well as traffic data satisfying the well-accepted hypothesis of a Gaussian marginal.
1. Introduction
The CMPP approach, which models the arrival process by means of a circulant modulated Poisson process, provides a technique for the integration of traffic measurement and modeling [10], while maintaining the Markovian hypothesis that permits analytical transient and steady-state studies of queueing systems using efficient algorithms [9]. The developed modeling theory has made it possible to study the impact of the power spectrum, bispectrum, trispectrum, and marginal distribution of the input process on queueing behavior and loss rate. These studies have highlighted the key role played on queueing performance by the marginal distribution, especially in the low-frequency region [8]. The technique for the construction of a CMPP that matches the marginal distribution and autocorrelation function of the observed process was presented in [2,9], where the authors showed simulation results with measured traffic data to prove the goodness of this approach. In this paper, further analysis of the CMPP fitting procedure is presented, highlighting a limitation of the mentioned algorithm in the matching accuracy for the marginal distribution of the observed rate process. Moreover, the presented study determines the maximum peak rate captured by the CMPP model once the spectrum has been matched, and emphasizes the necessity of a
CMPP structure containing a large number of effective eigenvalues to adequately capture even the light tail of a Gaussian distribution, usually accepted as realistic for traffic in the core network [1]. The relevance of these considerations is related to the impact of the marginal distribution tail of the input traffic on queueing behaviour. Indeed, as shown in the numerical analysis section, optimistic performance estimates are obtained when the peak rate is not matched. On the other hand, the actual traffic rate has a marginal distribution that in some cases exhibits a tail heavier than Gaussian [3,5]; under such conditions, CMPP models prove inadequate for estimating realistic queueing performance. Lastly, some suggestions for overcoming the exposed limitation are briefly introduced.
2. Background on CMPP
The fitting procedure of a CMPP model mainly consists of three steps [9], which are briefly summarized in this section. In the first step, the autocorrelation function of the observed rate process is estimated and then matched by a sum of exponentials (with complex parameters λk) weighted by real and strictly positive power coefficients ψk. This matching is a non-linear problem and cannot be solved directly. An approximate, but quite accurate, solution is obtained by using the Prony algorithm [6] to express the autocorrelation function in terms of complex exponentials with complex coefficients, and then satisfying the constraints on the ψk's (which must be real and strictly positive) by matching the power spectral density (PSD) using the non-negative least squares (NNLS) method. The second step is to design the transition frequency matrix Q of the underlying modulating continuous-time Markov chain. In order to fit the PSD of the modeled process, the eigenvalues of Q must contain all the λk's obtained in the previous step; the use of a circulant matrix makes it possible to solve this inverse eigenvalue problem. An efficient procedure for solving it is the Index Search Algorithm (ISA), presented in [1]. The last step is the estimation of a vector γ, associated with the Poissonian generation of arrivals in each state of the modulating Markov chain, such that the model matches the cumulative distribution function (CDF), F(x), of the observed rate process. In more detail, the fitting procedure starts from the observation that the autocorrelation function of a CMPP model with N states is expressed by the following:
$$R_{CMPP}(\tau) = \psi_0 + \sum_{l=1}^{N-1} \psi_l \, e^{\lambda_l |\tau|} \qquad (2.1)$$

with positive real ψl's. The Fourier transform of (2.1) can be expressed as:

$$S_{CMPP}(\omega) = 2\pi \psi_0 \delta(\omega) + \sum_{l=1}^{N-1} \psi_l \, b_l(\omega) \qquad (2.2)$$

where $b_l(\omega) = \mathcal{F}[e^{\lambda_l |\tau|}] = \frac{-2\lambda_l}{\omega^2 + \lambda_l^2}$ and $\frac{1}{2\pi}\int_{-\infty}^{\infty} b_l(\omega)\, d\omega = 1$; hence, the ψl's represent the power associated with each λl. The λl's are the eigenvalues of the transition matrix, which must include all the "effective" ones deriving from the exponential decomposition of R(τ), the autocorrelation function of the measured rate process. Using the Prony method, the estimated R(τ) can be written as

$$R(\tau) \cong \sum_{k=0}^{p} \psi_{P,k} \, e^{\lambda_{P,k} |\tau|} \qquad (2.3)$$

The presence of a constant term in R_CMPP(τ) requires λ_{P,0} to be imposed equal to zero, and consequently ψ0 = ψ_{P,0}: this is simply obtained by applying the Prony method to the autocovariance function C(τ) = R(τ) − γ̄², where γ̄ is the mean value of the observed traffic rate, since R(τ → ∞) → γ̄² and, from (2.3), ψ_{P,0} = γ̄². After the NNLS matching, the expression (2.3) remains substantially unchanged and can be rewritten as

$$R(\tau) \cong \psi_{P,0} + \sum_{k=1}^{p} \psi_{P,k} \, e^{\lambda_{P,k} |\tau|} \qquad (2.4)$$

keeping in mind that p, the ψ_{P,k}'s, and the λ_{P,k}'s may not be the same as those of (2.3) (they surely will not be in the case of complex eigenvalues). The order p of the exponential decomposition may be much less than the order N of the model, and thus in the construction of the transition matrix only a few λl's will be imposed equal to the λ_{P,k}'s. Denoting by i the vector of indices (of dimension p) such that λ_{i[k]} = λ_{P,k}, the relation ψ_{i[k]} = ψ_{P,k} consequently holds. On the other hand, in order to obtain R_CMPP(τ) ≅ R(τ), all the other ψl's are imposed equal to zero.

After the transition matrix Q has been determined (note that many solutions are possible for each set of eigenvalues, since the order N of the matrix is higher than the number p of desired eigenvalues), the third step, i.e., the design of the rate vector γ such that F_CMPP(x) ≅ F(x), involves the minimisation of the distance between F_CMPP(x) and F(x), which is carried out using the Nelder-Mead Simplex Search method. Since F_CMPP(x) is a piecewise step function, which jumps by 1/N at each value γi in γ, the task is to determine the optimal vector γ which minimises the quantity

$$\sum_{i=0}^{N-1} \left| \gamma'_i - \gamma_i \right| \qquad (2.5)$$

where γ' is obtained by quantizing F(x) into levels of amplitude 1/N.

Defining $\beta_i = \sqrt{\psi_i}\, e^{j\vartheta_i}$, i = 0, 1, ..., N−1, the vector β = [β0, β1, ..., β_{N−1}] represents the Discrete Fourier Transform of γ, and its inverse can be expanded as

$$\gamma_i = \bar{\gamma} + \sum_{l=1}^{N-1} \sqrt{\psi_l} \, \exp\!\left(-j\left(\frac{2\pi i l}{N} - \vartheta_l\right)\right), \quad i = 0, 1, 2, \ldots, N-1$$

where the expression of βl has been substituted. In order to obtain real γi, β must exhibit the Hermitian property (i.e., $\beta_{N-l} = \beta_l^*$, which corresponds to ψ_{N−l} = ψl and ϑ_{N−l} = −ϑl). Indeed, if β does not satisfy the Hermitian property, its inverse finite Fourier transform γ cannot be real. Under the condition of Hermitianity on β, the above relation takes the form

$$\gamma_i = \bar{\gamma} + \sum_{l=1}^{N-1} \sqrt{\psi_l} \, \cos\!\left(\frac{2\pi i l}{N} - \vartheta_l\right), \quad i = 0, 1, 2, \ldots, N-1 \qquad (2.6)$$

which makes it possible to estimate γ by applying the Nelder-Mead Simplex Search method to (2.5) as a function of ϑ. The Hermitian conditions on the βl's are automatically satisfied for those power coefficients related to conjugate complex pairs of eigenvalues, but cannot hold for real ones, since only one ψl is associated with each of them. To overcome this problem, each real eigenvalue needs to be considered twice. In order to maintain the same correlation structure (or, equivalently, the same PSD), the corresponding power coefficients are assumed equal to half of the original ψl's.
The investigation presented in this work involves the last step of the fitting procedure and evidences a relevant limitation on the tail behavior of the marginal distribution of CMPP models. This limitation may considerably affect the evaluation of queueing performance of actual traffic, leading to an underestimation of network resources needed to guarantee the target QoS expressed in terms of loss probability. The first observation on the fitting procedure is that, putting τ=0 in (2.4), the 2 variance σ of the rate process can be expressed as the sum of the ψ l for l=1, 2, …, N1. On the other hand, the maximum theoretical rate achievable by the model is derived by (2.6) putting all the cosines equal to +1. In this case, a second relation involving ψ l ’s can be simply obtained:
γ MAX − γ =
N −1
∑
l =1
ψl .
(3.1)
The maximum rate deviation from the mean value is then limited by (3.1), under the constraint
N −1
∑l =1 ψ l
= σ 2 . As we stated before, only the p ψ l ’s associated to the
effective eigenvalues are non zero. Among these p power coefficients, some are
344
R.G. Garroppo et al.
related to real λl, hence each of them needs to be split into two terms with halved magnitude. Therefore, the resulting set of couples (λl, ψ l ) after this operation consists of q elements, with p ≤ q 1), as a product of this factor and the source mean rate. The relationship between the cell loss probability and the bucket size has two distinct parts: a cell-scale component which decreases exponentially (linearly on a log scale) with the increase of the bucket size and the burst component where the slope of the curve changes dramatically as we reach certain bucket size. The intercept between these two components define solution for the required bucket size and tolerance τm . However, we do not present any results here as this is still an ongoing research.
4 Incorporating the Model into the ATM Dimensioning
Our model defines the aggregate IP traffic between two OD pairs (e.g., IGR-to-IGR) as a single IP traffic stream¹ and provides its characterization in terms

¹ Note that the model can be extended to incorporate QoS and thus define the aggregate IP traffic as multiple IP traffic streams. For that, the input traffic parameters need to be categorized by QoS as well.
of the three parameters: mean rate, peak rate, and burst period. From these parameters one can directly calculate the equivalent bandwidth required for this traffic stream and thus model the aggregate IP traffic as an "IP call" with a capacity requirement equal to the equivalent bandwidth. The equivalent bandwidth computed for the "IP call" can now be used in two different ways.

The first approach is to model this aggregate IP traffic as calls arriving at some average arrival rate (Poisson distributed) and persisting for an average time period (negative-exponentially distributed); each call requires the nominal equivalent bandwidth for its duration. This approach enables these "IP calls" to be treated in a "unified" fashion along with other ATM calls that can be characterized by their mean arrival rate, service time, and equivalent bandwidth. When dimensioning ATM networks using this approach, a call loss probability is specified and the capacity is determined using the mean arrival and service rates in conjunction with the equivalent bandwidths.

The second approach recognizes the fact that IP traffic coming from a LAN or WAN is likely to be "always on" and that we simply need to allocate capacity based on the equivalent bandwidth computed for our single "IP call" emanating from the LAN or WAN. Adopting the first approach in this case would lead to gross over-dimensioning of the capacity requirements. This means that if such traffic is present, one should carefully separate the "always on" traffic from the traffic that can be characterized by an arrival and service rate.

Finally, we consider another approach that can be taken for modeling the aggregate IP traffic stream to fit the "unified" ATM dimensioning model. We can transform our VBR "IP call", of capacity equal to its equivalent bandwidth, into an equivalent CBR "IP call" with different characteristics. According to the equivalent burst approximation [6], a general traffic stream which is the superposition of a large number of independent traffic streams can be replaced by an equivalent process with simpler characteristics. The equivalent process is chosen to have the same values for the following parameters: (1) the mean cell rate m, (2) the variance of the instantaneous rate σ², and (3) the asymptotic variance of the number of cell arrivals in a long time interval, v. This equivalent traffic process has a Poissonian burst arrival process with rate λ, independent and identically distributed bursts with peak rate R, and burst durations exponentially distributed with mean µ. The fitting relations are given by:

$$\lambda = \frac{2m^2}{v} \qquad R = \frac{\sigma^2}{m} \qquad \mu = \frac{v}{2\sigma^2} \qquad (24)$$
Thus, by fitting the mean, the instantaneous variance, and the asymptotic variance of the aggregate IP traffic into this equivalent process, we obtain a CBR "IP call" of capacity R and traffic intensity λµ. Accordingly, it only remains to derive the asymptotic variance of the aggregate IP traffic. The expression for the asymptotic variance of the number of offered cells in a long time interval follows from the formula for the asymptotic variance of a cumulative regenerative process [7].
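The fitting relations (24) translate directly into code; a minimal transcription (the function name is ours):

```python
def equivalent_burst(m, sigma2, v):
    """Equivalent burst parameters from mean m, rate variance sigma2 and
    asymptotic variance v; note lam * mu * R == m, so the mean is kept."""
    lam = 2.0 * m ** 2 / v      # Poissonian burst arrival rate
    R = sigma2 / m              # burst peak rate
    mu = v / (2.0 * sigma2)     # mean burst duration
    return lam, R, mu
```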
5 Simulation Results
A purpose-built real-time simulator was used to validate the analytical model. The simulator is a very simple implementation of the IP over ATM model; however, it tests the model with more realistic data flows. The simulator tests the effects of transmission errors on the packet arrival rates. To make these effects and their behaviour easy to understand, the system is kept very simple by limiting it to one ATM link. Traffic from individual IP connections is modelled as sessions that arrive according to a Poisson process with a given mean and have negative exponential durations with mean values selected from a given range. The sessions are generated as TCP or UDP according to a specified probability. The packets from UDP sessions are generated deterministically, whereas the generation of packets from TCP sessions is governed by a protocol based on a TCP sliding window protocol, although this has not been implemented in full detail. First, a full window of data is sent (the window is set to a default value) and every subsequent packet is sent after the arrival of an ACK from the receiver. Packets are retransmitted if an ACK has not been received within a fixed time interval. The packets from a session are of a fixed length selected from a given range. The simulator gives results for the mean rate and the peak rate of the carried IP traffic on the ATM link as a function of BER, as well as the distribution of IP packet lengths of the offered traffic, which is needed as input for the analytical model. The results in Table 1 are obtained for input traffic with a mean arrival rate of 40 sessions per second, of which 90% is TCP and 10% UDP. The analytical model for the mean rate gives values within the range (0.39%, 7.25%) of the results obtained from the simulator. The analytical model for the peak rate gives results within the range (1.43%, 9.23%) of the results obtained from the simulator. Discrepancies bigger than 0.55% for the mean rate and bigger than 1.76% for the peak rate were recorded for BER > 10^-6.

Table 1. Mean Rate and Peak Rate as a function of BER

BER    Mean (an) [Mbps]  Mean (sim) [Mbps]  an/sim %  Peak (an) [Mbps]  Peak (sim) [Mbps]  an/sim %
1E-10  1.264             1.259              0.39      1.835             1.809              1.43
1E-9   1.264             1.259              0.39      1.835             1.809              1.43
1E-8   1.264             1.259              0.39      1.835             1.809              1.43
1E-7   1.265             1.260              0.40      1.836             1.810              1.46
5E-7   1.269             1.263              0.47      1.842             1.812              1.52
1E-6   1.276             1.269              0.55      1.849             1.817              1.76
5E-6   1.310             1.287              1.78      1.900             1.846              2.89
1E-5   1.359             1.310              3.74      1.972             1.893              4.15
5E-5   1.907             1.830              4.21      2.684             2.252              6.50
1E-4   3.210             2.993              7.25      4.100             3.740              9.23
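The an/sim columns of Table 1 can be read as relative discrepancies between the analytical and simulated rates. The following small check is ours, using values from the first and last rows of the table.

# Sketch: the an/sim (%) columns of Table 1 as relative discrepancies.
def discrepancy_pct(analytical, simulated):
    return 100.0 * (analytical - simulated) / simulated

# First row (BER = 1E-10) and last row (BER = 1E-4), mean rate in Mbps.
print(round(discrepancy_pct(1.264, 1.259), 2))  # -> 0.4 (table reports 0.39)
print(round(discrepancy_pct(3.210, 2.993), 2))  # -> 7.25, matching the table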
6 Conclusions
In this paper we have described a mathematical model that characterises the aggregate traffic of multiple TCP/IP connections, as it enters the ATM backbone, in a form that is suitable for the dimensioning processes of the ATM network. Specifically, we modelled the aggregate IP traffic that is carried over an ATM link as an "aggregate IP call" characterized by the three principal parameters of the Guérin et al. model [4]: mean rate, peak rate, and mean burst period. This enabled us to calculate the effective bandwidth of the "aggregate IP call" and to use it subsequently for the dimensioning of ATM networks. To validate the modelling, we have developed a simulation model. The simulation results have shown that our mathematical model is very accurate in demonstrating the effects of increasing cell loss probabilities (i.e., bit error rates) on the performance of IP traffic over ATM networks. The success of the method shows that we can translate IP traffic measurement data into equivalent cell-level parameters for direct application in ATM dimensioning procedures. Future work will involve the development of tools to implement the procedures described in this paper for the optimal design of ATM networks.
References
1. Berry, L.T.M., Harris, R.J., Puah, L.K.: Methods of Trunk Dimensioning in a Multiservice Network. In Proceedings of GLOBECOM'98. (1998) 282–287
2. Kaufman, J.S.: Blocking in a Shared Resource Environment. IEEE Transactions on Communications. 29 (1981) 1474–1481
3. Roberts, J.W.: A Service System with Heterogeneous User Requirements - Application to Multi-Service Telecommunications Systems. In Proceedings of Performance of Data Communication Systems and their Applications, G. Pujolle (ed.). (1981) 423–431
4. Guérin, R., Ahmadi, H., Naghshineh, M.: Equivalent Capacity and its Application to Bandwidth Allocation in High-Speed Networks. IEEE Journal on Selected Areas in Communications. 9 (1991) 968–981
5. Hassan, M., Breen, J.: Performance Issues for TCP/IP over ATM. 7th International Network Planning Symposium - Planning Networks and Services for the Information Age. (1996) 575–580
6. Lindberger, K.: Analytical Methods for the Traffical Problems with Statistical Multiplexing in ATM Networks. In Proceedings of the 13th International Teletraffic Congress. (1991) 807–813
7. Smith, W.L.: Renewal Theory and its Ramifications. Journal of the Royal Statistical Society B. 20 (1958) 243–302
8. ATM Forum: ATM Traffic Management Specification Version 4.1. (1999)
9. Butto, M., Cavallero, E., Tonietti, A.: Effectiveness of the "Leaky Bucket" Policing Mechanism in ATM Networks. IEEE Journal on Selected Areas in Communications. 9 (1991) 335–342
10. Bonaventure, O.: Integration of ATM Under TCP/IP to Provide Services with Minimum Guaranteed Bandwidth. PhD thesis, Université de Liège (1998)
Analysis and Comparison of Internet Topology Generators Damien Magoni and Jean-Jacques Pansiot Université Louis Pasteur – LSIIT Pôle API, Boulevard Sébastien Brant, 67400 Illkirch, France {magoni, pansiot}@dpt-info.u-strasbg.fr
Abstract. The modeling of Internet topology is of vital importance to network researchers. Some network protocols, and particularly multicast ones, have performance that depends heavily on the network topology. That is why the topology model used for the simulation of those protocols must be as realistic as possible. In particular, a protocol designed for the Internet should be tested upon Internet-like generated topologies. In this paper we provide a comparative study of three topology generators. The first two are among the latest available topology generators and the third is a generator that we have created. All of them try to generate topologies that model the measured Internet topology. We check their efficiency by comparing the produced topologies with the topology of a recently collected Internet map.
1 Introduction
Today simulation tools are widely used to test network protocols. These tools need network topologies as input data. A network topology is usually modeled by an undirected graph where the network devices are modeled by the nodes of the graph and the communication links are modeled by the edges of the graph. A software tool that creates network topologies is usually called a graph generator or a topology generator. The way it builds network topologies (i.e. the set of creation procedures) is called a topology model. In this paper, we will focus on some of the latest Internet-like topology generators, including one that we have created. We will assess the efficiency of these generators by comparing the graphs that they produce with a graph built from real data and representing a part of the Internet at the router level. The rest of the paper is organized as follows. Section 2 reviews previous work. Section 3 presents the Internet map that we use as a reference, and the generators that we test as well as their settings. Section 4 details the properties studied. Finally, in section 5 we give the results of the analysis of the generated graphs compared to the Internet map analysis.
2 Previous Work
The study of the Internet topology is an area of active research. There is not much information on the topology of the Internet at the router level because it is very hard to obtain. An attempt to map the Internet was carried out by Pansiot et al. [11] by using
source routing. Their data collection was done during the summer of 1995 and the resulting map contained 3888 routers. They also defined terms that we use in our work. More recently, another collection was undertaken by Govindan et al. using a heuristic called hop-limited probes [6]. This heuristic (and many others) has been included in their software called Mercator. Their collection was carried out in 1999 and the resulting map contained 228263 routers. Because it is easier to get exhaustive routing data, the Autonomous System (AS) level topology of the Internet has been further investigated. From 1994 to 1995, a study of the Internet inter-domain topology was carried out by Govindan et al. [5]. In 1999, Faloutsos et al. [4] analyzed the inter-domain routing information provided by the NLANR and the routers' map made by Pansiot et al. They found that the Internet topology obeys power laws at both the AS level and the router level. In [6], Govindan et al. noticed that some of the Faloutsos et al. power laws still hold for their router level Internet instance of 1999. Recently, additional power laws were found by Magoni et al. at the AS level [8], also using data provided by the NLANR. Concerning the topology generators, one of the earliest and most famous models was designed by Waxman in 1988 [12]. This kind of model is usually called a flat topology model. The nodes are randomly placed on a Euclidean plane irrespective of any hierarchical order among them. This model was later replaced by hierarchical topology models such as the Tiers [3] and the Transit-Stub [13] models. These models try to enforce a multilevel hierarchy that can be found in the Internet (e.g. host-router-AS). The discovery of power laws in the Internet by Faloutsos et al. has brought about a new kind of topology model. We call it the power law topology model because, as the name suggests, it makes use of power laws to generate Internet-like graphs. The BRITE [10] generator, as well as our topology generator called network manipulator (nem), belongs to this category and generates router level graphs. Inet2 [7] also belongs to the power law topology model category but it generates AS level graphs. Finally, two models were recently defined to reflect the power laws found in huge network topologies. The first one was defined by Aiello et al. in [1] and the second one, called the Extended Scale-free model, was defined by Albert et al. in [2].
3 Source and Tools
3.1 Internet Map
There are basically two levels in the Internet topology: the router level and the AS level. Although AS level maps are easier to make, we will focus our attention on the router level maps of the Internet. The main reason for this choice is that a router level map provides a higher accuracy for IP layer protocol simulations. A simulation of a protocol designed for the IP environment should use such a map because it displays the IP connection topology. As we said in the previous section, one of the most recent router level Internet maps was constructed by Govindan et al. with their software tool Mercator [6]. Their map is called scan. They also recovered another map built by researchers at the Lucent laboratories. This map is called lucent. Both maps were merged to create one of the most recent and complete Internet maps ever built. This map is called scan+lucent. Govindan et al. have made the anonymized version of this map freely available for download.
It is this map that we have used in our study. We call it an Internet reference map. It is a huge map containing 284772 nodes and 449228 edges. When we show a property value of the scan+lucent map in a figure, we label it "Internet" for simplicity instead of scan+lucent. We are aware that the map is probably not exhaustive.
3.2 Topology Models
The three topology generators studied in this paper are BRITE, Inet2 and nem. All of them belong to the power law topology model (i.e. the latest and most accurate model). The graph sizes we have chosen to study are 500, 1000, 2000, 4000, 8000 and 16000 nodes. This should give a good view of the scaling effect on the properties of the generated graphs. Although researchers may need graphs smaller than 500 nodes (e.g. for resource-consuming simulations), it is difficult for these generators to create very small graphs as they are based on power laws that arise only with big numbers. As Inet2 is, by design, not able to generate graphs of size below 3037, we have only created Inet2 graphs of sizes 4000, 8000 and 16000. We have generated 20 graphs of each chosen size for each topology generator. We explain here the parameter settings that we used to generate the graphs. We have chosen to test BRITE with m = 1, 2 and 3 (the number of links added per new node). As we use incremental growth, we obtain graphs with an average node degree of 2m. The Internet reference map has an average node degree of 3.15, so taking m above 3 would have given graphs with too high an average node degree (i.e. too many edges compared with the number of nodes). Furthermore, we found different results for m = 1, 2 and 3, despite what Medina et al. found in [10]. That is why in the result section we consider three scenarios for the use of BRITE (i.e. for m = 1, 2 and 3) as if we had three different graph generators. Given the results of the authors of BRITE, we use a random node placement because a Pareto node placement gives similar results. Also, we use preferential connectivity and incremental growth both turned on because only these settings generate graphs that obey the outdegree and rank power laws, as shown by Medina et al. in [10]. The generation method (i.e. how graphs are created) of BRITE is fully explained in [10]. Concerning Inet2, a first very important remark is that it generates AS level graphs. To compare it with the other generators, we simply consider Inet2 output to be router level graphs. We note that Jin et al. did the same in [7] when they compared Inet2 with Waxman, Tiers, Transit-Stub and BRITE, which they also consider as AS level generators. We will refer back to this point when it is relevant, and sometimes compare Inet2 graphs with an AS level map of May 2000 analyzed by Magoni et al. in [8]. The generation method of Inet2 is fully explained in [7]. Concerning nem, it is worth noticing that it creates graphs by extracting a subgraph from a real Internet map. Thus we usually call it a topology modeler rather than a topology generator. We use the scan+lucent map as its input real Internet map. Of course it can be argued that if we compare the graphs generated by nem with a reference map that is its input map, the results will automatically match what is desired, but this is not true. The process of extracting a graph of a few hundred or thousand nodes from a map having nearly 300,000 nodes can make this graph have a completely different topology from the originating map. The generation method of nem is fully explained in [9].
4 Properties of Interest
In this section we describe which topological properties we have chosen to study. We use the regular terminology set forth in previous papers by Pansiot et al. [11], Faloutsos et al. [4] and Magoni et al. [8]. From now on, and for the sake of simplicity, we will talk about the property values of the Internet instead of the property values measured in the scan+lucent map (i.e. the map that we use as our reference map). (e.g. we write "the diameter of the Internet is . . . " instead of "the diameter of the scan+lucent reference map is . . . ".)

The node outdegree or degree (as we consider undirected graphs) distribution is one of the fundamental properties of a graph. The degree distribution of the Internet is a skewed distribution. From a graph's degree distribution we infer the average node degree, power law 1 (rank exponent) and power law 2 (outdegree exponent). Both power laws have been found by Faloutsos et al. in [4]. Medina et al. have shown in [10] that Waxman graphs and Transit-Stub graphs do not comply with power laws 1 and 2.

Another important property is the distance distribution. We define the distance between two nodes as being the hop count between the two (i.e. the minimum number of edges to cross to get from one node to the other). The distance is also called the shortest path length (defined by a number of hops). It is worth noticing that the distance distribution of the Internet is not skewed. It seems to be Gaussian. This means that the average distance inferred from the distance distribution is a good indicator to study. The biggest distance from a given node to any other node is called the eccentricity of the node. The eccentricity distribution of the Internet also seems to be Gaussian and thus we study the average eccentricity. Furthermore, we will not study power law 3 (i.e. eigen exponent) found by Faloutsos et al. because Medina et al. have shown in their paper [10] that power law 3 holds for large Waxman graphs, Transit-Stub graphs and nearly all BRITE configurations. As Waxman and Transit-Stub graphs do not model the Internet topology accurately, we think that power law 3 is not a primary indicator.

From the definition given by Magoni et al. in [8], a node belonging to a cycle or lying on a path connecting two cycles is called an in-mesh node. The mesh is the set of all in-mesh nodes of the graph. We examine the mesh size and we give results about the mesh connectivity such as the number of cutpoints and the biggest bicomponent size. The study of the mesh gives information about the amount of reliability vs connection failures and the possibility of load balancing by using alternate paths. Finally, we examine the forest part of the graphs. We look at the number of trees and at the tree size distribution. In the Internet, we have found that power laws 6 and 7 (found by Magoni et al. at the AS level in [8]) can be inferred from this distribution. We examine the generated graphs to see if they also comply with these power laws. The properties concerning the trees are interesting for studying the network reliability and connectivity as each node belonging to a tree is a cutpoint (except the leaf nodes) and thus it can make the graph disconnected if it fails.
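As an illustration of how such properties can be measured in practice, the sketch below (our own, not the authors' analysis tool) computes a graph's degree distribution and the absolute correlation coefficient of a log-log fit, the ACC quantity used in the next section to test compliance with the outdegree power law.

# Sketch: degree distribution and log-log ACC (outdegree power law test).
import math
from collections import Counter

def degree_distribution(edges):
    """edges: iterable of (u, v) pairs of an undirected graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(deg.values())   # degree -> number of nodes with that degree

def loglog_acc(dist):
    """Absolute correlation coefficient of (log degree, log frequency)."""
    xs = [math.log(d) for d in dist]
    ys = [math.log(f) for f in dist.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return abs(cov / (vx * vy))

# A toy graph: a 5-leaf star plus a triangle (a skewed degree distribution).
edges = [(0, i) for i in range(1, 6)] + [(6, 7), (7, 8), (8, 6)]
print(loglog_acc(degree_distribution(edges)))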
5 Results
This section contains the results of the analysis of all the graphs generated by the three topology generators. These results are compared with the results of the analysis that we made on the scan+lucent map. In all the following figures, the value measured in the scan+lucent map for the given property is plotted as a dashed horizontal line. Of course it does not correspond to the network size coordinate axis; it only serves as a reference value that can be easily compared to the values measured in the graphs. Any property value for a generator for a given size is the average of the values measured for each of the 20 generated graphs. For instance, the average node degree of the 500-node graphs generated by BRITE with m = 2 is 3.6. This means that this value is the average of the average node degree of each of the 20 graphs of size 500 generated with BRITE (m = 2). In what follows we will write BRITE x instead of BRITE with m = x.
5.1 Degree Properties
Figure 1 shows the plots of the average node degree. The first striking observation is that it is just below 2 for the BRITE 1 graphs. In fact, we checked that each graph of size n generated with BRITE 1 has exactly n − 1 edges and is connected. This means that these graphs are merely trees. This is confirmed by the authors of BRITE, who state that for a newly considered node they connect it with only one link [10] when m = 1. BRITE 3 graphs have an average degree between 5.5 and 6, which is nearly double the Internet average degree value. However, as the degree distribution is skewed, this may not be of great importance. Finally, nem has exactly the same average degree as the Internet. This is because, as we saw in section 3, it generates the number of edges needed to match the Internet average degree. It is worth noticing that the average node degree of the graphs from all the generators does not depend much on the size of the graph (except for BRITE 3 graphs of 2000 nodes or fewer). Figure 2 shows the plots of the absolute correlation coefficient (ACC) of the average degree distribution with respect to power law 2 (i.e. outdegree exponent). We clearly see that BRITE 2 and BRITE 3 graphs do not comply with power law 2. We also see that Inet2 graph ACCs tend to decrease when the size increases. In particular, the graphs of size 16000 have an average ACC a little under 0.95. Finally, nem and BRITE 1 graphs have very good ACCs that decrease a little when the graph size is small (e.g. 500). We have examined some samples of the degree distributions of BRITE 2 and 3 graphs. Here is an example of the beginning of a degree distribution of a 4000-node BRITE 3 graph:

Degree  Frequency  Frequency in %
1       5          0.125
2       4          0.1
3       1601       40.025
4       766        19.15
5       450        11.25
..      ..         ..
Fig. 1. Average Degree (average node degree vs. network size for BRITE (m=1), BRITE (m=2), BRITE (m=3), Inet2, nem and the Internet reference value)

Fig. 2. Degree Correlation Coefficient (degree distribution ACC vs. network size for the same generators)
It is the few degree 1 and degree 2 nodes that cause these graphs to not comply with power law 2. We verified for BRITE 2 and 3 graphs that the BRITE algorithm generates a degree distribution that starts at m instead of one (when m is greater than 1) and a few outliers of degree less than m cause BRITE graphs to not follow power law 2. We do not show the plots of the absolute correlation coefficient (ACC) of the rank distribution because all generators comply with power law 1 (i.e. rank exponent).
Fig. 3. Average Path Length (average path length vs. network size for BRITE (m=1), BRITE (m=2), BRITE (m=3), Inet2, nem and the Internet reference value)
To conclude the study on the degree distribution of the graphs, we can already see that many factors contribute to the realism of a graph generator. We saw here that BRITE 1 complies with power laws 1 and 2, but its graphs are trees and thus do not match the Internet topology at all (particularly in terms of redundant links). BRITE 2 and 3 do not comply with power law 2 because of the way they shift the degree distribution (i.e. the majority of the nodes should have degree 1 and not degree m). Inet2 and nem generate graphs that bear a closer similarity to the Internet degree properties.
5.2 Distance Properties
Figure 3 shows the plots of the average distance (i.e. path length). Except for BRITE 1 graphs, all graphs have an average distance below the Internet average distance, which is 8.75. The BRITE 1 average distance increases when the graph size increases, presumably because of its tree structure. The average distance of the others does not seem to vary with changes in size. Inet2 graphs have the lowest values (all below 4) but this is surely due to Inet2's design (i.e. an AS level generator). Magoni et al. found that the average distance of the AS level Internet in May 2000 was 3.65, which is very close to the average distance values of Inet2 graphs. As distances are an important factor in simulations (particularly for determining delays), it would be desirable to have average distances as close as possible to the Internet average distance of 8.75. nem graphs with average distance values around 6 and BRITE 1 graphs with values around 11 (although much influenced by the graph size) are the most realistic ones. Figure 4 shows the plots of the average node eccentricity. Here BRITE 2 and 3 graphs have values not influenced by the graph size and below 8, which is far from the Internet eccentricity value of nearly 20. BRITE 1, Inet2 and nem values seem to depend
Fig. 4. Average Node Eccentricity (average node eccentricity vs. network size for BRITE (m=1), BRITE (m=2), BRITE (m=3), Inet2, nem and the Internet reference value)
on the graph size. When the graph size increases, the eccentricity increases. However, only BRITE 1 seems to match the Internet eccentricity. Inet2 has values that are too low (i.e. below 11) and nem values, although reaching 16 for 16000-node graphs, are still 25% under the Internet eccentricity value. Eccentricity gives an idea of the spreading of the graph. If it is low, it means that there are only a few nodes that are far removed from the others, most of them being concentrated in a small area. On the other hand, a high eccentricity (as in the Internet) means that the nodes are spread over a wide area. It is a measure of how far a node is from all the other nodes. In the Internet a node is, on average, at most 20 hops from all the others. At the AS level, the eccentricity of the Internet is 7, which is well under the Inet2 graph values (especially for 16000-node graphs with a fraction of 0.35, equal to the AS level fraction, where the eccentricity reaches 11). To conclude, we can say that distance properties are hard to model with accuracy. Average values such as distance and eccentricity are usually too low in the generated graphs compared to the Internet values. The increase in the values of BRITE 1, Inet2 and nem when the graph size increases is a good sign that taking bigger graphs will give adequate values. However, we think that it is important to try to do better because these two properties are based on Gaussian distributions and thus are good indicators of graph realism. In fact, we are currently creating a filter in our nem generator that allows us to generate graphs with average distances nearly equal to those measured in the Internet (i.e. between 9 and 11). It works when generating graphs having 500 nodes or more. This filter will be described in a future work.
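For illustration, both the average distance and the average eccentricity can be computed with one breadth-first search per node. The sketch below is ours and assumes a small connected graph given as an adjacency dictionary.

# Sketch: average distance and average eccentricity via BFS (illustrative).
from collections import deque

def bfs_dists(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def distance_stats(adj):
    """adj: dict node -> list of neighbours of a connected undirected graph."""
    total, pairs, ecc_sum = 0, 0, 0
    for u in adj:
        d = bfs_dists(adj, u)
        total += sum(d.values())
        pairs += len(d) - 1
        ecc_sum += max(d.values())   # eccentricity of node u
    return total / pairs, ecc_sum / len(adj)

# A 4-node path graph: average distance 5/3, average eccentricity 2.5.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(distance_stats(adj))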
Fig. 5. Mesh Size vs Network Size (in %) (mesh size as a percentage of network size for BRITE (m=2), BRITE (m=3), Inet2, nem and the Internet reference value)
5.3 Mesh and Trees Properties
We have given the definition of the mesh in section 4. As BRITE 1 graphs are trees, they do not have a mesh and thus they will not be studied in this section. Figure 5 shows the plots of the mesh size as a percentage of the graph size. The Internet has a mesh whose size represents 33% of the size of the whole Internet graph. This means that one node out of three in the Internet belongs to the mesh. We can already make a striking observation: BRITE 2 and 3 graphs have an average mesh proportion of nearly 100%! (the lowest value is 96.74%, for 500-node BRITE 2 graphs). This does not coincide with the Internet mesh proportion of 33%. Inet2 has values starting at 41% and increasing with the graph size. nem, on the other hand, has values starting at 37% and decreasing when the graph size increases. Nevertheless, nem graphs seem to have the closest mesh proportion values to the Internet mesh proportion. It is worth noticing that Inet2 mesh proportions are accurate (particularly for 16000-node graphs) when compared with the AS level mesh proportion, which equals 63%. We have also studied the connectivity properties of the graph meshes and the Internet mesh. The number of cutpoints in the mesh of the Internet is equal to 3.7% of the total number of nodes in the mesh. In all the graphs that we studied this ratio was at most 0.39%, which is much less than the Internet value. Cutpoints are important because the failure of a cutpoint router leads the network to be disconnected. Although we saw that cutpoints are rare, it is interesting to see how they partition the mesh. To examine this point, we calculated the sizes of the biggest biconnected components of the meshes. In the Internet, the biggest bicomponent contains 87% of the nodes of the mesh. This means that although the Internet mesh contains 3.7% cutpoints, these nodes only cut off a small part of the mesh (i.e. 13%), as the larger part is biconnected. Concerning the graphs, we saw that except for 500-node nem graphs, whose average biggest bicomponent size
is 95.8% of the mesh, all the other graphs have a ratio above 97.8%. To conclude, we can say that the generated graphs have almost entirely biconnected meshes, which is not really the case for the Internet mesh, which still contains a few cutpoint nodes and bridge nodes (i.e. nodes not belonging to a cycle but on a path linking two cycles). The forest is simply the set of nodes of the graph that do not belong to the mesh. These nodes are located in trees and the union of these trees forms the forest. The trees are connected to the mesh by special nodes called roots. We consider the roots as nodes belonging to the mesh. As we said before, BRITE 1 graphs are trees. Any BRITE 1 graph of size n has exactly n − 1 edges and is connected; hence it is a single tree. This implies that BRITE 1 graphs are always composed of one tree, and thus the assumption that the graphs have forests with multiple trees is false for BRITE 1 graphs. Hence we will not study BRITE 1 graphs in the rest of this section. Figure 6 shows the plots of the number of trees (i.e. the number of roots) vs the number of nodes in the graph. This is an interesting measure as root nodes represent the connection points to the in-tree nodes of the graph. We can see that BRITE 2 and 3 graphs have nearly 0% trees except for small-sized ones. This is a consequence of what we saw in the previous section: nearly all the nodes of BRITE 2 and 3 graphs belong to the mesh part. When we look closer at these graphs we see that they only have a handful of trees, most of them having depth one (i.e. nodes are directly connected to the roots). Hence we have not been able to calculate tree power laws for BRITE 2 and 3 graphs and thus we do not study them in the rest of this section. In the Internet the ratio of trees to the Internet size is 11.7% (more than one in-mesh node out of three is a root node). We can see that Inet2 and nem graphs have roughly similar values. For small sizes, nem graphs have a higher percentage because their mesh is smaller and thus we find more trees. It is worth noticing that Inet2 values, already above nem values, are far from the AS level tree percentage, which equals 7.7%. In the remainder of this section we examine the presence of the tree power laws [8] defined by Magoni et al. in Inet2 and nem graphs. We found that the Internet complies with power laws 6 and 7 (i.e. tree rank exponent and tree size exponent) with a high degree of accuracy. It is worth noticing that Magoni et al. have already found that the AS level of the Internet complies with these tree power laws. We do not show the plots of the tree size ACCs of nem and Inet2 graphs because we found that the values are close to or above 0.95. Thus all these graphs comply with power law 7 (tree size exponent). We also do not show here the plots of the tree rank ACCs of nem and Inet2 graphs because they are all above 0.97. Thus they all closely comply with power law 6 (tree rank exponent). To summarize this section, we can notice that only Inet2 and nem graphs comply with the power laws concerning the trees. Their proportion of trees matches the Internet tree proportion. Because they do not have a sizeable number of trees, the tree size distributions of the BRITE 2 and 3 graphs cannot be used to calculate the ACCs and thus they do not comply with tree power laws 6 and 7. As both the AS level and the router level of the Internet comply with the tree power laws, we think that these laws are valid indicators of a graph topology's similarity to the Internet topology.
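For illustration, the mesh/forest decomposition used above can be computed by iteratively pruning degree-one nodes: the pruned nodes form the forest, and the remaining nodes are exactly those lying on a cycle or on a path connecting cycles. The sketch below is ours, not the authors' tool.

# Sketch: split a graph into mesh and forest by pruning degree-1 nodes.
def mesh_and_forest(adj):
    """adj: dict node -> set of neighbours (undirected). Returns (mesh, forest)."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    leaves = [u for u, vs in adj.items() if len(vs) <= 1]
    forest = set()
    while leaves:
        u = leaves.pop()
        if u in forest or len(adj[u]) > 1:
            continue                 # already pruned, or re-grew a neighbour
        forest.add(u)
        for v in adj[u]:
            adj[v].discard(u)
            if len(adj[v]) == 1:     # v just became a leaf: schedule it
                leaves.append(v)
        adj[u].clear()
    mesh = set(adj) - forest
    return mesh, forest

# A triangle with a 2-node pendant chain: mesh = {0, 1, 2}, forest = {3, 4}.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(mesh_and_forest(g))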
Fig. 6. Number of Trees vs Network Size (in %) (number of trees as a percentage of network size for BRITE (m=2), BRITE (m=3), Inet2, nem and the Internet reference value)
6 Conclusions
Designing Internet-like topology generators is not an easy task. The generated graphs have to comply with many laws and their topological properties must have suitable values that match the Internet ones. Ensuring that important power laws and Gaussian distribution averages are accurately reproduced is already a big step towards making a graph topology similar to the Internet one. In particular, we think that the tree proportion and the tree power laws are interesting indicators for assessing the reliability of an Internet-like generated graph. The aims of our paper can be summarized as follows:
– Compare some of the latest generators that belong to the power law topology model, as a means of evaluating their performance.
– Analyze the generated graphs to examine how well their properties compare with the ones measured in the scan+lucent router level Internet map, as a means of evaluating their accuracy.
– Give, at the same time, results on the Internet topology properties inferred from the scan+lucent map (one of the biggest available Internet maps).
Our topology generator (nem) shows positive results and we hope that it will be helpful to the research community. The complexity of the Internet topology opens up the prospect of finding new properties and creating new generators in the near future.
References
1. William Aiello, Fan Chung, and Linyuan Lu. A random graph model for massive graphs. In Proceedings of ACM STOC'00, pages 171–180, 2000.
2. Réka Albert and Albert-László Barabási. Topology of evolving networks: local events and universality. Physical Review Letters, (85):5234, 2000.
3. Matthew Doar. A better model for generating test networks. In Proceedings of IEEE GLOBECOM'96, November 1996.
4. Michalis Faloutsos, Petros Faloutsos, and Christos Faloutsos. On power-law relationships of the internet topology. In Proceedings of ACM SIGCOMM'99, Cambridge, Massachusetts, USA, September 1999.
5. Ramesh Govindan and Anoop Reddy. An analysis of internet inter-domain topology and route stability. In Proceedings of IEEE INFOCOM'97, Kobe, Japan, April 1997.
6. Ramesh Govindan and Hongsuda Tangmunarunkit. Heuristics for internet map discovery. In Proceedings of IEEE INFOCOM'00, Tel Aviv, Israël, March 2000.
7. Cheng Jin, Qian Chen, and Sugih Jamin. Inet: Internet topology generator. Technical Report CSE-TR-433-00, University of Michigan, 2000.
8. Damien Magoni and Jean-Jacques Pansiot. Analysis of the autonomous system network topology. Computer Communication Review, 31(3):26–37, July 2001.
9. Damien Magoni and Jean-Jacques Pansiot. Internet topology analysis and modeling. In Proceedings of IEEE Computer Communications Workshop, Charlottesville, Virginia, U.S.A., October 2001.
10. Alberto Medina, Ibrahim Matta, and John Byers. On the origin of power laws in internet topologies. ACM Computer Communication Review, 30(2), April 2000.
11. Jean-Jacques Pansiot and Dominique Grad. On routes and multicast trees in the internet. ACM Computer Communication Review, 28(1):41–50, January 1998.
12. Bernard Waxman. Routing of multipoint connections. IEEE Journal on Selected Areas in Communications, 6(9):1617–1622, December 1988.
13. Ellen Zegura, Kenneth Calvert, and Michael Donahoo. A quantitative comparison of graph-based models for internetworks. IEEE/ACM Transactions on Networking, 5(6):770–783, December 1997.
Energy Efficient Design of Wireless Ad Hoc Networks Carla-Fabiana Chiasserini1 , Imrich Chlamtac2 , Paolo Monti1 , and Antonio Nucci1 1
Dipartimento di Elettronica, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy {chiasserini,nucci}@polito.it 2 Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, Dallas, USA [email protected]
Abstract. One of the most critical issues in wireless ad hoc networks is the limited availability of energy within network nodes. The time period from the instant when the network starts functioning to the instant when the first network node runs out of energy, the so-called network life-time, strictly depends on the system energy efficiency. Our objective is to devise techniques to maximize the network life-time in the case of cluster-based systems, which represent a significant sub-set of ad hoc networks. We propose an original approach to maximize the network life-time by determining the optimal cluster sizes and the optimal assignment of nodes to cluster-heads. The presented solution greatly outperforms the standard assignment of nodes to cluster-heads, based on the minimum distance criterion.
1 Introduction
One of the major challenges in the design of ad hoc networks is that energy resources are significantly more limited than in wired networks. Recharging or replacing the nodes' batteries may be inconvenient, or even impossible in disadvantaged working environments. This implies that the time during which all nodes in the ad hoc network are able to transmit, receive and process information is limited; thus, the network life-time becomes one of the most critical performance metrics [1,2]. Here, we define the network life-time as the time spanning from the instant when the network starts functioning to the instant when the first network node runs out of energy. In order to maximize the life-time, the network must be designed to be extremely energy-efficient. Many network configurations are possible, depending on the application. In this paper, we deal with system architectures based on a clustering approach [3,4,5], which represent a significant sub-set of ad hoc networks.
In cluster-based systems, network nodes are partitioned into several groups. In each group, one node is elected to be the cluster-head and acts as a local controller, while the rest of the nodes become ordinary nodes (hereinafter nodes). The cluster size is controlled by varying the cluster-head's transmission power. The cluster-head coordinates transmissions within the cluster, handles inter-cluster traffic and delivers all packets destined to the cluster; it may also exchange data with nodes that act as gateways to the wired network. In cluster-based network architectures, the life-time is strongly related to cluster-head failure. Indeed, power consumption in radio devices is mainly due to the following components: digital circuitry, radio transceiver, and transmission amplifier. Thus, energy consumption increases with the number of transmitted/received/processed packets and with the device's transmission range. Consider a network scenario where all nodes within a cluster are one hop away from the cluster-head, as often occurs in cluster-based systems [5,6,7], and assume that the traffic load is uniformly distributed among the nodes. Since cluster-heads have to handle all traffic generated by and destined to the cluster, they have to transmit, receive and process a significant amount of packets (much larger than for ordinary nodes), which depends on the number of controlled nodes. In addition, while transmitting the collected traffic to other cluster-heads or to gateway nodes, they have to cover distances that are usually much greater than the nodes' transmission range. Cluster-heads therefore experience high energy consumption and exhaust their energy resources more quickly than ordinary nodes do. The life-time of cluster-based networks thus becomes the time period from the instant when the network starts functioning to the instant at which the first cluster-head runs out of energy. In order to maximize the system life-time, it is imperative to find network design solutions that optimize the cluster-heads' energy consumption. The procedure of cluster formation consists of two phases: cluster-head election and assignment of nodes to cluster-heads. Although several algorithms have been proposed in the literature which address the problem of cluster formation [2,3,5,6,7,8,9], little work has been done on the energy-efficient design of cluster-based networks. In [2], an energy-efficient architecture for sensor networks has been proposed, which involves a randomized rotation of the cluster-heads among all the sensors and an assignment of nodes to clusters based on the minimum distance criterion. Cluster-head rotation implies that the network energy resources are more evenly drained and may result in an increased network life-time. On the other hand, cluster-head re-election may require excessive processing and communications overhead, which outweighs its benefit. Thus, having fixed the nodes that act as cluster-heads, it is important to optimize the assignment of nodes to cluster-heads in such a way that the cluster-heads' energy efficiency is maximized. In this paper, we consider a network scenario where cluster-heads are chosen a priori and the network topology is either static, like in sensor networks, or slowly changing. We propose an original solution, called ANDA (Ad hoc Network Design Algorithm), which maximizes the network life-time while providing total coverage of the nodes in the network. ANDA is based on the concept
that cluster-heads can dynamically adjust the size of the clusters through power control, and, hence, the number of controlled nodes per cluster. ANDA takes into account power consumption due to both the transmission amplifier and the transmitting/receiving/processing of data packets, and it levels the energy consumption over the whole network. Energy is evenly drained from the cluster-heads by optimally balancing the cluster traffic loads and regulating the cluster-heads' transmission ranges.
2 The Network Life-Time
We consider a generic ad hoc network architecture based on a clustering approach. The network topology is assumed to be either static, like in sensor networks, or slowly changing. Let SC = {1, . . . , C} be the set of cluster-heads and SN = {1, . . . , N} be the set of ordinary nodes to be assigned to the clusters. Cluster-heads are chosen a priori and are fixed throughout the network life-time, while the coverage area of the clusters is determined by the level of transmission power used by the cluster-heads. There are three major contributions to power consumption in radio devices: i) power consumed by the digital part of the circuitry; ii) power consumption of the transceiver in transmitting and receiving mode; iii) output transmission power. Clearly, the output transmission power depends on the device's transmission range and the total power consumption depends on the number of transmitted and received packets. Under the assumption that the traffic load is uniformly distributed among the network nodes, the time interval that spans from the time instant when the network begins to function until the generic cluster-head i runs out of energy can be written as

Li = Ei / (α ri² + β |ni|),    (1)
where Ei is the initial amount of energy available at cluster-head i, ri is the coverage radius of cluster-head i, ni is the set of nodes under the control of cluster-head i, and α and β are constant weighting factors. In (1), the two terms in the denominator represent the dependency of power consumption on the transmission range and on the cluster-head transmitting/receiving activity, respectively. Notice that, for the sake of simplicity, the relation between the cluster-head power consumption and the number of controlled nodes is assumed to be linear; however, any other type of relation could have been considered as well, with a minor increase in complexity. Considering that the limiting factor of the network life-time is the cluster-heads' functioning time, the life-time can be defined as [1,2]

LS = min{Li : i ∈ SC}.    (2)
Our objective is to maximize LS while guaranteeing the coverage of all nodes in the network.
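As a concrete illustration, equations (1) and (2) translate directly into code. The sketch below is ours; the values of α, β and the cluster-head parameters are purely illustrative.

# Sketch: network life-time from eq. (1) and (2); all inputs illustrative.
def lifetime(E, r, n, alpha, beta):
    """Life-time Li of one cluster-head (eq. 1).

    E -- initial energy; r -- coverage radius; n -- number of covered nodes;
    alpha, beta -- constant weighting factors.
    """
    return E / (alpha * r ** 2 + beta * n)

def network_lifetime(heads, alpha=1.0, beta=1.0):
    """LS = minimum Li over all cluster-heads (eq. 2); heads: list of (E, r, n)."""
    return min(lifetime(E, r, n, alpha, beta) for E, r, n in heads)

# Two cluster-heads: the heavily loaded one limits the network life-time.
print(network_lifetime([(100.0, 2.0, 10), (100.0, 1.0, 30)]))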
3 Energy-Efficient Network Design
In this section, we formally describe the problem of maximizing the network life-time. Two different working scenarios are analyzed: static and dynamic. In the former, the assignment of the nodes to the cluster-heads is made only once and maintained for the whole duration of the system. In the latter, the network configuration can be periodically updated in order to provide a longer network life-time. We then propose an energy-efficient design algorithm, called ANDA (Ad hoc Network Design Algorithm), which maximizes the network life-time by fixing the optimal radius of each cluster and the optimal assignment of the nodes to the clusters. ANDA is optimal in the case of the static scenario and can be extended to the dynamic scenario by using a heuristic rule to determine whether at a given checking time the network needs to be reconfigured.
3.1 Problem Formalization
We assume that the following system parameters are known: number of cluster-heads (C), number of nodes in the network (N), location of all cluster-heads and nodes, and initial value of the energy available at each cluster-head¹. Let dik be the Euclidean distance between cluster-head i and node k (i = 1, . . . , C; k = 1, . . . , N); we have that ri = dij when j is the farthest node controlled by cluster-head i. Next, let us introduce the matrix L = {lij}, whose dimension is equal to |SC| × |SN| and where each entry lij represents the life-time of cluster-head i when its radius is set to ri = dij and it covers the set of nodes nij = {k ∈ SN | dik ≤ dij}. We have

lij = Ei / (α dij² + β |nij|).    (3)

Once the matrix L is computed, the optimal assignment of nodes to cluster-heads is described by the binary variable xij: xij is equal to 1 if cluster-head i covers node j and equal to 0 otherwise. We derive the value of xij (i = 1, . . . , C; j = 1, . . . , N) by solving the following max/min problem:

maximize    LS    (4)
subject to  Σi xij ≥ 1    ∀j ∈ SN
            LS ≤ lij xij + M(1 − xij)    ∀i ∈ SC, j ∈ SN
            xij ∈ {0, 1},  LS ≥ 0    ∀i ∈ SC, j ∈ SN.
The first constraint in the problem requires that each node is covered by at least one cluster-head; the second constraint says that if node j is assigned to cluster-head i, the system cannot hope to live longer than lij. When node j is not assigned to cluster-head i, this constraint is relaxed by taking a sufficiently large M.
¹ Notice that in the case of static nodes, this information needs to be collected only once when the network starts functioning; therefore, we neglect the cost of such an operation.
This model can be easily extended to the dynamic scenario by dividing the time scale into time steps corresponding to the time instants at which the network configuration is recomputed. Time steps are assumed to have unit duration. Then, we replace xij with xsij, where xsij is equal to 1 if and only if cluster-head i covers node j at time step s and 0 otherwise, and Ei, dij, nij, lij with Esi, dsij, nsij, lsij, i.e., with the corresponding values computed at time step s. Note, however, that in this case the model is no longer linear, since the model parameters depend on the time step and, thus, on the former node assignments.
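For illustration, the matrix L of (3) can be computed directly from the node positions; the sketch below is ours (function and variable names are not from the paper) and produces the lij values used by the Covering procedure shown in the pseudocode below.

# Sketch: building matrix L = {lij} of eq. (3); inputs illustrative.
import math

def build_L(heads, nodes, E, alpha=1.0, beta=1.0):
    """heads, nodes: lists of (x, y) positions; E: initial energies of heads.
    L[i][j] = life-time of head i if its radius is set to the distance d(i, j)."""
    L = []
    for i, (hx, hy) in enumerate(heads):
        d = [math.hypot(hx - nx, hy - ny) for nx, ny in nodes]   # dij
        row = []
        for dij in d:
            n_ij = sum(1 for dk in d if dk <= dij)   # nodes within radius dij
            row.append(E[i] / (alpha * dij ** 2 + beta * n_ij))
        L.append(row)
    return L

L = build_L(heads=[(0, 0), (5, 0)], nodes=[(1, 0), (2, 0), (6, 0)], E=[100, 100])
print(L)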
begin Covering
  for (every j ∈ SN)
    set max = 0
    for (every i ∈ SC)
      if (lij ≥ max)
        set max = lij
        set sel = i
      end if
    end for
    Cover node j with cluster-head sel
  end for
end Covering

begin Reconfigure
  for (every i ∈ SC)
    set Ei = initial energy of cluster-head i
    for (every j ∈ SN)
      Compute dij, |nij|, lij
    end for
  end for
  set LS(new) = LS(old) = LS
  set ∆ = 0
  ...

ε > 0, the optimal MPR set size DN of any arbitrary node is smaller than (1 + ε) log N/(−log q) with probability tending to 1 when N tends to infinity.
Notice that DN = O(log N), which compares very favorably to the size of the whole host neighborhood (on average pN) and considerably reduces the topology broadcast.

Theorem 3. The broadcast or flooding via multipoint relays takes on average a number RN of retransmissions smaller than (1 + ε) log N/(−p log q).

Corollary 1. The cost of OLSR control traffic for topology broadcast in the random graph model is O(N (log N)²) compared to O(N³) with the plain link state algorithm.

Remark: Notice that the neighbor sensing in O(N²) is now the dominant source of control traffic overhead.
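The MPR sets behind these bounds are usually computed with a greedy heuristic: repeatedly select the one-hop neighbor that covers the most still-uncovered two-hop neighbors. The sketch below is ours, a simplified illustration rather than the exact heuristic specified for OLSR.

# Sketch: greedy multipoint relay (MPR) selection for one node (illustrative).
def select_mprs(one_hop, two_hop_of):
    """one_hop: set of 1-hop neighbours of the node.
    two_hop_of: dict mapping each 1-hop neighbour to its own neighbour set.
    Returns a subset of 1-hop neighbours covering all strict 2-hop neighbours."""
    two_hop = set().union(*two_hop_of.values()) - one_hop
    uncovered, mprs = set(two_hop), set()
    while uncovered:
        # Pick the neighbour covering the most still-uncovered 2-hop nodes.
        best = max(one_hop - mprs, key=lambda n: len(two_hop_of[n] & uncovered))
        mprs.add(best)
        uncovered -= two_hop_of[best]
    return mprs

# Node with neighbours a, b, c; b alone reaches both 2-hop nodes x and y.
print(select_mprs({'a', 'b', 'c'},
                  {'a': {'x'}, 'b': {'x', 'y'}, 'c': {'y'}}))  # -> {'b'}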
Fig. 3. Average number of retransmissions in multi-point relay flooding with N = 1000 and p variable (retransmission number vs. p; curves: exhaustive flooding and multi-point relay flooding)
5 Analysis of OLSR in the Random Unit Graph
5.1 Results in 1D and 2D Random Unit Graphs
We present results for 1D and 2D random unit graphs. The proofs of the results shown in this section can be found in [14]. A 1D unit graph can be made of N nodes uniformly distributed on a strip of land whose width is smaller than the radio range (set as the unit length). We assume that the length of the land strip is L unit lengths.
Theorem 4. The size DN of the MPR set of a given host is 1 when the host is at less than one radio hop from one end of the strip, and 2 otherwise.

Theorem 5. The MPR flooding of a broadcast message originated by a random node takes RN = L retransmissions of the message when N tends to infinity and L is fixed.

Notice this assumes error-free transmission. In case of error, the retransmission stops at the first MPR which does not correctly receive the message. In order to cope with this problem one may have to add redundancy to the MPR set, which might be too small with regard to this problem. Notice that these figures compare favorably with plain link state, where DN = M = N/L and RN = N. The analysis in 2D is more interesting because it gives less trivial results.

Theorem 6. When L is fixed and N increases, the average size of the MPR set, DN, tends to be smaller than 3π(N/(3L²))^(1/3) = 3π(M/(3π))^(1/3).

Notice that this figure compares favorably with plain link state, where DN = M = N/L². Figure 4 displays simulation results for dimension 2. The heuristic has been applied to the central node of a random 4 × 4 unit graph. The convergence in M^(1/3) is clearly shown. Notice that in this very case the upper bound of DN is greater by at least a factor of 2 than the actual values obtained by simulations. Figure 5 summarizes the results obtained for the quantity DN in the random unit graph model for dimensions 1 and 2. The results for dimension 2 have been simulated.

Theorem 7. The MPR flooding of a broadcast message originated by a random node takes RN = O((N L⁴)^(1/3)) retransmissions of the message when N tends to infinity and L is fixed.

5.2 Comparison with Dominating Set Flooding
In [11] Wu and Li introduced the concept of the dominating set. They introduced two kinds of dominating sets, which we will call the rule 1 dominating set and the rule 2 dominating set. In this section we establish quantitative comparisons between the performance of dominating set flooding and MPR flooding. In particular, we will show that dominating set flooding does not significantly outperform full flooding in random graph models and in random unit graphs of dimension 2 and higher. Rule 1 dominating set flooding does not significantly outperform full flooding in the random unit graph model of dimension 1. MPR flooding outperforms both dominating set floodings in all graph models studied in this paper. Dominating set flooding consists in restricting the retransmission of a broadcast message to a subset of nodes, called the dominating set. Rule 1 and rule 2 are two different rules of dominating set selection. The rules consist in comparing neighbor sets (for example by checking hellos). For a node A we denote by N(A) the neighbor set of node A. In rule 1, a node A does not belong to the dominating set if and only if there exists a neighbor B of A such that
1. B is in the dominating set;
2. the IP address of B is higher than the IP address of A;
3. N(A) ⊂ N(B).
In this case one says that B dominates A in rule 1.

Fig. 4. Bottom: simulated quantity DN/M^(1/3) versus the number of neighbors M for the central position in a 4 × 4 random unit graph; top: upper bound obtained in the theorems.

Fig. 5. Unit graph model, from bottom to top: average number of MPRs for 1D, 2D and the full link state protocol, versus the average number of neighbor nodes M.
In rule 2, a node A does not belong to the dominating set if and only if there exist two neighbors B and C of A such that
1. B and C are in the dominating set;
2. nodes B and C are neighbors;
3. the IP addresses of B and C are both higher than the IP address of A;
4. N(A) ⊂ N(B) ∪ N(C).
In this case one says that (B, C) dominates A in rule 2. We first look at the performance of dominating set flooding in the random graph model (N, p).

Theorem 8. The probability that a node in a random graph (N, p) does not belong to the dominating set is smaller than N(1 − (1 − p)p)^N in rule 1, and smaller than N²(1 − (1 − p)²p)^N in rule 2.

Theorem 9. In the random unit graph model of dimension 1, assuming independence between node location and node IP addresses, the probability that a node does not belong to the dominating set in rule 1 is smaller than 4/M, and the average size of the dominating set in rule 2 is max{0, 2L − 1}.

Remark: The proofs of these theorems can be found in [14]. The density of the dominating set in rule 2 is twice the density of retransmitters in MPR flooding when the network model is the random unit graph of dimension one. In random unit graphs of dimension 2 and higher, the probabilities that a node does not belong to the dominating set in rule 1 or in rule 2 are O(1/M), since it is impossible to cover one unit disk with two unit disks that have different centers.
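For illustration, both rules can be checked directly from neighbor sets. The sketch below is ours; it uses closed neighborhoods N[v] = N(v) ∪ {v} so the containment test is meaningful for mutual neighbors, treats IP addresses as comparable node identifiers, and omits the "in the dominating set" membership conditions for brevity.

# Sketch: Wu-Li rule 1 / rule 2 domination checks (simplified, illustrative).
def closed(neigh, v):
    return neigh[v] | {v}   # closed neighbourhood N[v]

def dominated_rule1(A, neigh, addr):
    """A is dropped if some higher-address neighbour B covers N[A]."""
    return any(addr[B] > addr[A] and closed(neigh, A) <= closed(neigh, B)
               for B in neigh[A])

def dominated_rule2(A, neigh, addr):
    """A is dropped if two neighbouring higher-address nodes B, C jointly
    cover N[A]."""
    cand = [B for B in neigh[A] if addr[B] > addr[A]]
    return any(C in neigh[B] and
               closed(neigh, A) <= closed(neigh, B) | closed(neigh, C)
               for i, B in enumerate(cand) for C in cand[i + 1:])

# Toy: B covers all of A's neighbourhood and has the higher address.
neigh = {'A': {'B', 'X'}, 'B': {'A', 'X', 'Y'}, 'X': {'A', 'B'}, 'Y': {'B'}}
addr = {'A': 1, 'B': 2, 'X': 3, 'Y': 4}
print(dominated_rule1('A', neigh, addr))   # -> True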
6 Conclusion and Further Works
We have presented a performance evaluation of the OLSR mobile ad-hoc routing protocol in the random graph model and in the random unit graph model. The originality of the performance evaluation is that it is completely based on analytical methods (generating functions, asymptotic expansions) and does not rely on simulation software. The random graph model is realistic enough for indoor or short range outdoor networks where link fading mainly comes from random obstacles. The random unit graph model is realistic for long range outdoor networks where link fading mainly comes from distance attenuation. In this case the random graph model can be improved by letting the parameter p depend on the distance x between the nodes. This will be the subject of further work.
References
1. J.M. McQuillan, I. Richer, E.C. Rosen, "The new routing algorithm for the ARPANET," IEEE Trans. Commun. COM-28:711-719.
2. D.B. Johnson, D.A. Maltz, "Dynamic Source Routing in Ad Hoc Wireless Networks," in Mobile Computing, Ch. 5, pp. 153-181, Kluwer Academic Publishers, 1996.
3. C.E. Perkins, E.M. Royer, "Ad Hoc On-Demand Distance Vector Routing," IEEE Workshop on Mobile Computing Systems and Applications, pp. 90-100, 1999.
4. M.S. Corson, V. Park, "Temporally ordered routing algorithm," draft-ietf-manet-tora-spec-02.txt, 1999.
5. P. Jacquet, P. Muhlethaler, A. Qayyum, A. Laouiti, L. Viennot, T. Clausen, MANET draft "draft-ietf-manet-olsr-02.txt," 2000.
6. B. Bellur, R. Ogier, F. Templin, "Topology broadcast based on reverse-path forwarding," draft-ietf-manet-tbrpf-01.txt, 2001.
7. P. Jacquet, P. Minet, P. Muhlethaler, N. Rivierre, "Increasing reliability in cable-free Radio LANs: Low level forwarding in HIPERLAN," in Wireless Personal Communications, Vol. 4, No. 1, pp. 51-63, 1997.
8. L. Viennot, "Complexity results on election of multipoint relays in wireless networks," INRIA RR-3584, 1998. http://www.inria.fr/rrrt/rr-3584.html
9. P. Jacquet, A. Laouiti, "Analysis of mobile ad hoc network routing protocols in random graphs," INRIA RR-3835, 1999. http://www.inria.fr/rrrt/rr-3835.html
10. A. Qayyum, L. Viennot, A. Laouiti, "Multipoint relaying: An efficient technique for flooding in mobile wireless networks," INRIA RR-3898, 2000. http://www.inria.fr/rrrt/rr-3898.html
11. J. Wu, H. Li, "On calculating connected dominating set for efficient routing in ad hoc wireless networks," in Proc. DIAL M, 1999.
12. P. Jacquet, L. Viennot, "Overhead in mobile ad hoc network protocols," INRIA RR-3965, 2000. http://www.inria.fr/rrrt/rr-3965.html
13. A. Qayyum, Analysis and evaluation of channel access schemes and routing protocols for wireless networks, Thèse de l'Université Paris 11, 2000.
14. P. Jacquet, A. Laouiti, P. Minet, L. Viennot, "Performance analysis of OLSR multipoint relay flooding in two ad hoc wireless network models," INRIA Research Report RR-4260, 2001. http://www.inria.fr/rrrt/rr-4260.html
15. A.D. Aron et al., "Analytical comparison of local and end-to-end error recovery in reactive routing protocols for MANET," 3rd ACM MSWiM, 2000.
16. A. Boukerche et al., "Analysis of randomized congestion control with DSDV routing in ad hoc wireless networks," in JPDC, pp. 967-995, Vol. 61, 2001.
An Adaptive Location-Aware MAC Protocol for Multichannel Multihop Ad-Hoc Networks Zi-Tsan Chou1,2 , Ching-Chi Hsu1,3 , and Ferng-Ching Lin1,2 1
Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 106, Taiwan {d5526005, cchsu, fc lin}@csie.ntu.edu.tw 2 Institute for Information Industry, Taipei, 106, Taiwan {ztchou, fclin}@iii.org.tw 3 Kai Nan University, Taoyuan, Taiwan
Abstract. In a multihop MANET (mobile ad-hoc network), reliable broadcast support at the MAC layer will be of great benefit to the routing function, multicasting applications, cluster maintenance, and real-time systems. In this paper, we propose a new hybrid MAC protocol, called the adaptive location-aware broadcast (ALAB) protocol, for link-level broadcast support in multichannel systems. ALAB is scalable and mobility-transparent since it does not require any link state information. Above all, in ALAB, both the deadlock and hidden terminal problems are completely solved. In principle, ALAB tries to combine the advantages of both the allocation- and contention-based protocols and overcomes their individual drawbacks. At high traffic or density, ALAB outperforms pure TDMA because of spatial reuse and dynamic slot management. At low traffic or density, ALAB outperforms pure CSMA/CA because of its embedded stable tree-splitting algorithms. In addition, ALAB provides deterministic access delay bounds from its base TDMA allocation protocol. Simulation results confirm the advantage of our scheme over other MAC protocols, such as IEEE 802.11, ADAPT, and ABROAD, even under the fixed-total-bandwidth model.
1 Introduction
With the revolutionary advances of wireless technology, the applications of the MANET (mobile ad-hoc network) are becoming more and more important, especially in emergency, military, and outdoor business environments, in which a fixed infrastructure or centralized administration is difficult or too expensive to establish instantly. In a MANET, a pair of nodes communicates by sending packets either over a direct wireless link or through a sequence of wireless links including some intermediate nodes. Due to the broadcast nature of the radio transmission medium and the rapidly changing topology of the MANET, every algorithm and protocol developed for it faces great challenges. In this paper, we are especially interested in a medium access control (MAC) protocol for multihop ad-hoc networks with multiple frequency channels.
A MAC protocol addresses how to allocate the multiaccess medium and resolve potential contention/collision among various nodes. MAC protocols proposed so far can be roughly classified into two categories [5]: allocation-based protocols and contention-based protocols. Deterministic allocation-based protocols, such as TDMA and its variants [9], are primarily designed to support bounded-delay topology-independent transmissions by scheduled slot assignments. Nevertheless, these protocols are insensitive to variations in network load or node connectivity. Although dynamic topology-dependent TDMA-based transmission scheduling protocols [13] can adjust themselves to node connectivity, they are not suitable for high-mobility environments due to the heavy load of maintaining up-to-date link state information. As for contention-based protocols, such as CSMA/CA and its variants [14], they are primarily designed to support asynchronous transmissions and bursty traffic. However, CSMA/CA is inherently unstable [7]. For this reason, the CARMA protocol based on the deterministic tree-splitting algorithm [7] was proposed. In CARMA, in order to maintain a consistent channel view for all nodes in a multihop wireless network, a base station must be set up to govern this task. Hence it is not suitable for a large-scale MANET. Most previous works on MAC protocols, including IEEE 802.11, ADAPT [4], CARMA [7], and GRID [14], are designed to support only reliable unicast transmission. As indicated in [5,8,11], support for reliable broadcast at the MAC layer will be of great benefit to the routing function, multicasting applications, cluster maintenance, and real-time systems. Clearly, a single reliable broadcast can be implemented by sending one or more reliable unicast messages. However, this approach is not scalable since the time to complete a broadcast increases with the number of neighbors. Besides, MAC protocols typically do not maintain link state information [5]. Recently, several MAC protocols for broadcast support have been proposed, including ABROAD [5], TPMA [8], RBRP [11], CATA [13], and FPRP [15]. All of them depend on a collision detection capability. In TPMA and RBRP, nodes with bad luck in their elimination phase or reservation request phase may starve. To make matters worse, all of these protocols may lead to deadlocks. A deadlock [11] is said to occur if two conflicting broadcasts are scheduled in the same slot and the senders do not realize this conflict. We also notice that all the above-mentioned protocols focus only on single-channel systems. From the literature [9,14], we know that a multichannel system outperforms a single-channel system in many aspects, including throughput, reliability, bandwidth utilization, network scalability, synchronization implementation, and QoS support. The authors in [5] developed a novel hybrid MAC protocol, called ABROAD, for reliable broadcast in single-channel MANETs. Importantly, it combines the advantages of the allocation- and contention-based protocols and overcomes their individual drawbacks. Thus, ABROAD can dynamically self-adjust its behavior according to the prevailing network conditions [4,5]. Following this hybrid approach, but with a different design strategy, we propose a new multichannel MAC protocol based on the tree-splitting
algorithms for link-level broadcast support in multihop ad-hoc networks. We call the resulting distributed protocol the “Adaptive Location-Aware Broadcast” (ALAB) protocol. Since a MANET operates in a physical area, it is very natural to exploit location information in such an environment [14]. In addition, via GPS (Global Positioning System), every node can get absolute timing and location information; thus synchronization becomes easy [8,11,12]. The advantages of the ALAB protocol are as follows. (i) ALAB supports reliable unicast, multicast, and broadcast transmission services in an integrated manner. That is, unicast and multicast packets are treated as special cases of broadcast packets. (ii) ALAB is scalable and mobility-transparent since it does not require any link state information. Moreover, both the time to broadcast a packet and the number of channels required for the MANET are independent of the network topology. (iii) In ALAB, no hidden or exposed terminal problems exist. Therefore, our design does not need any handshake process such as RTS/CTS or RTS/CR/RA [8]. (iv) Like ABROAD, ALAB provides bounded access delay from its base TDMA allocation protocol. Furthermore, ALAB is stable and adapts well to any traffic and network topology. (v) In ALAB, the deadlock and starvation problems are completely eliminated. (vi) Under severe traffic load and node density conditions, ALAB delivers superior performance to ABROAD, which itself outperforms TDMA, IEEE 802.11, and ADAPT [4,5], even under the fixed-total-bandwidth model.
2 The ALAB Protocol

2.1 Model and Assumptions
A multihop mobile radio network used to pass messages containing data and control information can be modelled as an undirected graph G = (V, E) in which V (|V| = N) is the set of mobile hosts and there is an edge (u, v) ∈ E if and only if u and v can mutually receive each other’s transmissions. In this case, we say that u and v are neighbors. Note that the edge set may vary over time because of nodal mobility. We assign each node v in the network a unique identifier (ID) by a number in ℵ = {0, 1, . . . , N − 1}, where |ℵ| = N. In this paper, all logarithms are assumed to be base 2. Given an integer v ∈ ℵ, let Binary(v) = (v1 v2 . . . vk−1 vk) denote its binary string, where k = ⌈log N⌉. Thus, every integer in ℵ can be represented by a unique binary k-tuple (v1 v2 . . . vk−1 vk), where vi ∈ {0, 1}. In addition, each channel is uniquely assigned a number in C = {0, 1, . . . , ρ − 1}, where 1 ≤ ρ < N. Within a TDMA network, the time axis is divided into units called (transmission) frames, and each frame is composed of time slots. Each slot in turn comprises mini-slots. Nodes in the network are assumed to be synchronized, and the frame length is the same for each node. Each mobile radio host in a multichannel network is equipped with transceivers (a single transmitter and multiple receivers). Depending on the ability of the transceivers, each node can communicate with others either in the full-duplex mode or in the half-duplex mode. In the half-duplex mode, each host cannot transmit and receive at the
Fig. 1. Ten mobile hosts are dispersed randomly over the 2D geographic region. The integer within each grid is the channel number, while the integer pairs are the grid coordinates. The center part of the geographic area shows the relation between r and d.
same time [3]. In the full-duplex mode, each host can transmit only one packet on one channel but receive multiple packets on all channels simultaneously [9]. Throughout this paper, we assume every node works in the full-duplex mode. On the same channel, two types of communication collisions can arise [9]. A primary collision occurs when a node transmitting in a given mini-slot is receiving in the same mini-slot on the same channel. This also implies the converse: a receiving node cannot be transmitting on the same channel at the same time. A secondary collision occurs when a node receives more than one packet in a mini-slot on the same channel. In both cases, all packets are rendered useless. To this end, we assume that if more than one node is transmitting on the same channel such that the packets overlap in time, then a collision occurs on that channel. On the other hand, simultaneous reception of packets on other channels is not affected [3,9]. In this paper, we also assume that a node is capable of determining the current status of a single radio channel [12]. That is, at the end of a mini-slot, each node can obtain feedback from the receiver specifying whether the status of a radio channel is (i) NULL: no transmission on the channel, (ii) SINGLE: exactly one transmission on the channel, or (iii) COLLISION: two or more transmissions on the channel. The basic idea behind the ALAB protocol is very simple; in brief, we imitate the organization of cellular/cluster networks. This approach is widely adopted in many issues for the MANET [6,14]. Each node is assumed to know its own position by virtue of its GPS receiver but not the position of any other node in the network. In our model of the ad-hoc network, nodes are dispersed randomly over a pre-defined geographic region, which is partitioned into two-dimensional logical grids as illustrated in Fig. 1. Each grid is a square of size d × d. Let ri be the transmission range of node i. Determining the optimal values of ri and d is not an easy task. In our design, we restrict √2·d ≤ ri ≤ 2d. Let ℵ∞ = {0, 1, 2, 3, . . .}. Grids are numbered ⟨x, y⟩ following the conventional xy-coordinate system. Every node must know how to map its physical location (x′, y′) ∈
[Fig. 2 depicts the TDMA frame of N slots; each slot consists of a priority reservation phase (one mini-slot, dedicated to the primary and secondary candidate nodes), a collision resolution phase of M mini-slots in which collisions are resolved by the tree-splitting algorithms ((1) randomized, (2) improved randomized, or (3) deterministic), and a data transmission phase. RTB: Request To Broadcast.]

Fig. 2. The ALAB slot and frame structure.
ℜ × ℜ to the corresponding grid coordinate ⟨x, y⟩ ∈ ℵ∞ × ℵ∞. As illustrated in Fig. 1, we assign ⟨x, y⟩ = ⟨⌊x′/d⌋, ⌊y′/d⌋⟩. Besides, each grid is assigned a unique channel. When a node is located in a grid ⟨x, y⟩, it must use the channel assigned to the grid ⟨x, y⟩ for transmission. Given two different grids ⟨x1, y1⟩ and ⟨x2, y2⟩, if max{|x1 − x2|, |y1 − y2|} ≤ 4, then these two grids are called interfering grids. Interfering grids are forbidden to be assigned the same channel, to prevent co-channel interference in the packet transmission phase of ALAB. (We will explain this in the next subsection.) To attain this goal, we can simply apply the distance-4 coloring algorithms [14] to assign a channel to each grid. In the meantime, the frequency reuse should be maximized. Fig. 1 shows a possible channel assignment. Let |G| be the total number of grids over the geographic region. By a simple counting argument, the total number of channels required for the ALAB protocol is min{25, |G|}. The main purposes of these restrictions are as follows. Nodes within the same grid form a single-hop cluster. In other words, all nodes within the same grid can hear the transmissions of the others. By the collision detection ability of the transceivers, all nodes within the same grid are able to maintain a consistent channel view. Due to the channel consistency in every grid, no deadlock or hidden terminal problems will exist.
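To make the mapping and the channel assignment concrete, the following sketch (our own illustration with hypothetical helper names; the modular coloring is one valid distance-4 coloring, not necessarily the one produced by the algorithms of [14]) computes a node's grid coordinate and a channel that respects the interfering-grid rule:

    def grid_of(px, py, d):
        # Map a physical location (px, py) to its grid coordinate <x, y> = <floor(px/d), floor(py/d)>.
        return int(px // d), int(py // d)

    def channel_of(x, y):
        # Two grids get the same channel only if both coordinates differ by a multiple of 5,
        # so interfering grids (max{|x1-x2|, |y1-y2|} <= 4) always get distinct channels.
        # Exactly 25 channels are used, matching the min{25, |G|} bound.
        return 5 * (x % 5) + (y % 5)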
2.2 Protocol Description
The ALAB protocol integrates a tree-splitting collision resolution protocol within each slot of a TDMA allocation protocol. Each node is assigned a transmission schedule (frame) consisting of N slots. The slot and frame structure of the ALAB protocol, which is somewhat like the design of HIPERLAN [1], is shown in Fig. 2. The frame is divided into fixed-sized slots. Each slot is composed of three parts: a priority reservation phase and a collision resolution phase, followed by a packet transmission phase. The first two phases are called the leader election phase. The priority reservation phase occupies the first mini-slot and the collision resolution phase consists of the next M mini-slots. The final mini-slot is the
packet transmission phase. In the priority reservation phase, only the predetermined primary and secondary candidate nodes have the chance to reserve the slot. However, when the first mini-slot remains unused, all active nodes contend for it either by the randomized collision resolution algorithm or by the deterministic one. A node is said to be active if it has packets to send. Through the leader election phase, we guarantee that at most one active node will survive in a grid. The survivor gets the right to broadcast in the packet transmission phase. Recall that the transmission range is limited and simultaneous reception of packets on other channels is not affected. As a result, inter-grid communications via data packets are collision-free. In the following, we focus the protocol description on a single grid, say ⟨x, y⟩. Before we describe the ALAB protocol in detail, we also make the following assumptions. (i) A node located in a grid ⟨x, y⟩ is assumed to continuously monitor the status of the channel assigned to ⟨x, y⟩. (ii) A node wishing to transmit a packet that arrived in the interim waits until the beginning of the next slot. (iii) The channel introduces no errors, so control-packet collisions are the only source of errors. Nodes can perfectly detect such collisions immediately at the end of each mini-slot. (iv) Each mini-slot is designed to accommodate the packet transmission time and the guard time, which corresponds to the maximum differential propagation delay between any pair of nodes. Since the network is assumed to be synchronized, all active nodes enter the priority reservation phase synchronously. 1) Priority Reservation Phase: In slot i of a frame, we let the node with ID = (i − 1) mod N be the primary candidate (PC for short) node and the node with ID = (i − 1 + N/2) mod N be the secondary candidate (SC for short) node. In our design, the priority of the PC node is higher than that of the SC node. At the beginning of the first mini-slot, only the PC and SC nodes are allowed to send RTB (Request-To-Broadcast) control packets, with probability 1. At the end of the first mini-slot, if the status of the channel is COLLISION, then the PC node overwhelmingly wins the slot to broadcast a packet. If the status of the channel is SINGLE, all active nodes except the winner quit the contention in the remaining mini-slots, abandon the corresponding packet transmission mini-slot, and wait for the next slot. Otherwise, all active nodes enter the collision resolution phase. 2.a) Randomized Collision Resolution Phase: At the beginning of the ith mini-slot, where 2 ≤ i ≤ M + 1, all active nodes send RTBs with probability 1. At the end of the ith mini-slot, if the status of the channel is NULL, then the collision resolution period is over. The contending nodes involved in the COLLISION split randomly into two subsets by each flipping a coin. Those who obtain heads (with probability p) send an RTB in the next mini-slot, while those who obtain tails (with probability 1 − p) become inactive and wait for the next slot. This process keeps running until a SINGLE is reported or i equals M + 1, whichever comes first. The above-mentioned algorithm is similar to that of TPMA [8]. Note that this collision resolution process stops immediately once a NULL occurs. One can make a further improvement, however. On condition that a NULL is sensed, all previous contenders are allowed to flip a coin
again, and those who obtain heads can send RTBs in the next mini-slot. Thus the collision resolution phase will never terminate in the NULL state before mini-slot M + 1. It is worth noticing that nodes with bad luck in the randomized collision resolution phase will not starve, because of the underlying TDMA protocol. The advantages of the randomized approach are that it achieves fairness naturally and a winner may arise quickly. A reasonable value of M could be 1 + ⌈log(N/|G|)⌉.
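As an illustration only (a sketch of our own, not part of the protocol specification), the randomized phase within a single grid can be simulated as follows; the ternary channel feedback is folded into the count of simultaneous RTB senders, and the improved flag selects the improved variant that re-flips on NULL:

    import random

    def randomized_resolution(active_ids, M, p=0.5, improved=True):
        # First mini-slot of the phase: every active node sends an RTB with probability 1.
        contenders = list(active_ids)
        senders = list(contenders)
        for _ in range(M):
            if len(senders) == 1:      # SINGLE: the lone sender wins the slot
                return senders[0]
            if len(senders) > 1:       # COLLISION: only those who flip heads keep contending
                contenders = senders
            elif not improved:         # NULL: the plain variant ends the resolution period
                return None
            # improved variant: on NULL, all previous contenders flip a coin again
            senders = [i for i in contenders if random.random() < p]
        return None                    # no winner elected in this slot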
2.b) Deterministic Collision Resolution Phase: Our deterministic collision resolution algorithm is similar to that in [2]. We assume that every node keeps an integer variable temporary ID used for the collision resolution phase. Initially, temporary ID = ID. We let M = 1 + ⌈log N⌉ and (b1 b2 · · · bk) be the binary representation of any given node's temporary ID, where k = ⌈log N⌉. At the beginning of the second mini-slot, all active nodes send RTB packets. If the status of the channel is NULL, then the collision resolution period is over. If a COLLISION occurs, all active nodes with b1 = 0 send RTB packets in the next mini-slot. The general rule for the (i + 2)th mini-slot, 1 ≤ i ≤ M − 1, is that all active nodes with bi = 0 send RTBs; at the end of the mini-slot, if a COLLISION is alarmed, all active nodes with bi = 1 are backlogged and wait for the next slot, whereas if a NULL is detected, all active nodes with bi = 1 remain active in the next mini-slot. This process continues until a SINGLE is recognized. Clearly, at the end of the collision resolution process, only the active node with the lowest-numbered temporary ID will be the winner. To ensure fairness, each node subtracts one (mod N) from its current temporary ID at the end of every slot. The advantage of the deterministic approach is that a winner is guaranteed to be elected if at least one active node exists. However, only partial fairness can be achieved, because of the multihop characteristic of ad-hoc networks. Besides, the value of M required by the deterministic approach may be larger than that required by the randomized approach. In the packet transmission phase, every winner of the leader election phase in every grid starts to transmit. Since simultaneous reception of packets on other channels is not affected, all nodes can receive the data concurrently. Since the control packet length is typically much smaller than the data packet length, it is worthwhile taking multiple mini-slots to compete for the access right. To sum up, our hybrid MAC protocol amounts to a leader election among the active nodes within each grid in every slot.
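The deterministic variant admits an equally short sketch (again our own illustration): the active set is split on successive bits of the temporary IDs, and the lowest-numbered temporary ID always wins:

    def deterministic_resolution(temp_ids, k):
        # temp_ids: temporary IDs of the active nodes in the grid; k = ceil(log2 N) bits per ID.
        active = list(temp_ids)
        for bit in range(k - 1, -1, -1):        # split on b1 (most significant bit) first
            zeros = [t for t in active if not (t >> bit) & 1]
            if len(zeros) == 1:
                return zeros[0]                 # SINGLE: a winner is recognized
            if len(zeros) > 1:                  # COLLISION: nodes with this bit = 1 back off
                active = zeros
            # NULL: no node had this bit = 0, so all remain active
        return min(active) if active else None  # the lowest temporary ID survives

Decrementing every temporary ID (mod N) at the end of each slot, as described above, rotates the winning position among the nodes.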
3 Performance Simulations
Two bandwidth models have been proposed in [14] to evaluate the network throughput performance of multichannel ad-hoc networks. (i) Fixed-channel-bandwidth: Each channel has a fixed bandwidth. The more channels there are, the more bandwidth the network can potentially use. This model is especially suitable for CDMA environments. (ii) Fixed-total-bandwidth: The total bandwidth offered to the network is fixed. With more channels, each channel has less bandwidth. This model is especially suitable for FDMA environments.
Fig. 3. Ld /Lc versus throughput under the fixed-total-bandwidth model. (η = 8 and |G| = 10 × 10.)
Due to space limitations, the mathematical analysis is deferred to our technical report [6]. In this section, we report the simulation results. We use the fixed-increment time advance approach [10] for our discrete-event simulation model to evaluate the performance of ALAB. We have developed a simulator in C++. The ad-hoc network is simulated by placing N nodes randomly and uniformly within a bounded geographic region. The geographic region size (|G| = A/d²) is measured by the number of grids. The transmission range of all simulated nodes is r meters. The control packet length Lc, including the guard time, is 20 bytes, and the data packet length Ld is a multiple of Lc. Network traffic is generated according to a Poisson arrival process with a mean of λ packets per second, uniformly distributed among the nodes. If the fixed-channel-bandwidth model is assumed, each channel’s bandwidth is 1 Mbps. If the fixed-total-bandwidth model is assumed, the total bandwidth is 1 Mbps. We consider the effect of node density on the performance instead of the average degree, where the node density of the grid plane (η = N/|G|) is defined as the average number of nodes per grid. A) Effect of Data Packet Length: In Fig. 3, we show the effect of the ratio Ld/Lc on the throughput performance under the fixed-total-bandwidth model. In this experiment, we fix η and |G| as 8 and 10 × 10, respectively. We can see that when Ld/Lc ≤ 125, the throughput rises markedly with increasing data packet length. This is because each successful leader election process can schedule more data bits to be sent. However, if we further increase the ratio Ld/Lc, the throughput of ALAB saturates at a certain point. As shown in Fig. 3, as both the offered load and Ld/Lc increase, the throughput of ALAB (deterministic collision resolution approach) approaches the network capacity. B) Effect of Arrival Rate and Bandwidth Models: In this experiment, we assume that N = 512, r = 2d, |G| = 8 × 8, η = 8, and Ld/Lc = 50. Figs. 4 and
Fig. 4. Arrival rate versus throughput under the fixed-total-bandwidth model. (N = 512, r = 2d, η = 8, |G| = 8 × 8, and Ld /Lc = 50.)
5 show the throughput versus the offered load under the fixed-total-bandwidth model and under the fixed-channel-bandwidth model, respectively. Notably, even under the fixed-total-bandwidth model, we find a 70% increase in peak performance for ALAB over ABROAD, which itself outperforms TDMA, IEEE 802.11, and ADAPT [4,5]. The reasons are three-fold. (i) In ALAB, via the location-aware channel assignment scheme, the number of potential interfering terminals is significantly reduced from the size of the two-hop neighborhood to the size of the intra-grid neighborhood. (ii) Via the leader election process in ALAB, the probability for a node to reserve a slot is highly boosted. (iii) In such a crowded environment, the erasure effect [8] and deadlocks also degrade the performance of ABROAD. However, it is not entirely fair to compare ABROAD and ALAB because of their different assumptions on the transceivers. In Fig. 5, we see that the ALAB protocol with the deterministic collision resolution approach performs best, since an active node is guaranteed to be elected (if one exists) in a grid in a slot. C) Effect of Node Density: Fig. 6 shows the throughput versus node density and arrival rate under the fixed-channel-bandwidth model. We use N = 256 and Ld/Lc = 75. We see that as the node density decreases and/or the traffic load increases, the throughput increases monotonically and finally saturates at a certain point. In particular, we find that when λ = 15 ∼ 20 and η = 4 ∼ 16, the deterministic collision resolution approach yields about 27.67% ∼ 56.67% improvement in throughput compared with the randomized one. This is reasonable, due to the uncertainty in the leader election phase under the randomized approach. Given fixed A and N, decreasing the node density promotes the throughput; meanwhile, it causes the number of grids to increase. Since we restrict √2·d ≤ r ≤ 2d in our design, a larger number of grids implies a shorter
Fig. 5. Arrival rate versus throughput under the fixed-channel-bandwidth model. (N = 512, η = 8, |G| = 8 × 8, and Ld /Lc = 50.)
Fig. 6. Throughput versus node density and arrival rate under the fixed-channel-bandwidth model. (N = 256 and Ld/Lc = 75.)
transmission range. From the perspective of the routing performance, this will result in more hops from sources to destinations. To sum up, determining the optimal values of r and d is not an easy task. D) Effect of Node ID Distribution: In all the above experiments, we have observed that the ALAB protocol with the deterministic collision resolution approach performs best. However, its collision resolution method highly depends on the distribution of the node IDs. In spite of the multihop characteristic in ad-hoc networks, each contending station should receive an equal share of the
transmission bandwidth. We conduct an experiment to understand this fairness issue. We use N = 16, η = 4, |G| = 4, and Ld/Lc = 75. The four sample nodes intended for our observation are 0000, 0001, 1010, and 1011. Furthermore, we assume that they are located in the same grid. Fig. 7 shows the simulation result under the fixed-channel-bandwidth model. We see that as the offered load increases, the performance spread of the sample nodes increases significantly. That is, the unfairness problem becomes serious when the traffic load is heavy. Therefore, if fairness is critical, the ALAB protocol with the improved randomized collision resolution approach may be a good compromise.
Fig. 7. Node ID versus node throughput under the fixed-channel-bandwidth model. (N = 16, η = 4, |G| = 4, and Ld /Lc = 75.)
4 Conclusions
In this paper, we have proposed a new adaptive location-aware MAC protocol, called ALAB, for link-level broadcast support in multichannel MANETs. By virtue of GPS and the channel assignment scheme, all nodes within the same grid are able to maintain a consistent channel view. Due to the channel consistency in every grid, no deadlock or hidden terminal problems exist. ALAB is scalable and mobility-transparent since it does not require any link state information. Using the ternary channel feedback information, our novel hybrid broadcast scheme achieves high throughput performance. In principle, ALAB combines the advantages of the allocation- and contention-based protocols and overcomes their individual drawbacks. ALAB has deterministic access guarantees from its base TDMA allocation protocol while providing flexible and efficient bandwidth management by reclaiming unused slots through the stable
tree-splitting algorithms. Extensive experiments have been conducted, taking many factors, such as channel bandwidth models, arrival rate, data packet length, node density, and fairness, into consideration. Both the analysis [6] and the simulation results confirm the advantage of our scheme over other MAC protocols, such as IEEE 802.11, ADAPT [4], and ABROAD [5], even under the fixed-total-bandwidth model. All these results make ALAB a promising protocol for enhancing the performance of the MANET.
References
1. G. Anastasi, L. Lenzini, and E. Mingozzi. HIPERLAN/1 MAC protocol: Stability and performance analysis. IEEE Journal on Selected Areas in Communications, Vol. 18, No. 9, Sep. (2000) 1787–1798.
2. D. Bertsekas and R. Gallager. Data Networks, Second Edition, Prentice-Hall, 1992.
3. I. Chlamtac and A. Faragó. An optimal channel access protocol with multiple reception capacity. IEEE Trans. on Computers, Vol. 43, No. 4, (1994) 480–484.
4. I. Chlamtac, A. Faragó, A. D. Myers, V. R. Syrotiuk, and G. Záruba. ADAPT: a dynamically self-adjusting media access control protocol for ad hoc networks. GLOBECOM ’99, Vol. 1A, (1999) 11–15.
5. I. Chlamtac, A. D. Myers, V. R. Syrotiuk, and G. Záruba. An adaptive medium access control (MAC) protocol for reliable broadcast in wireless networks. IEEE International Conference on Communications, Vol. 3, (2000) 1692–1696.
6. Z.-T. Chou, C.-C. Hsu, and F.-C. Lin. An adaptive location-aware MAC protocol for multichannel multihop ad-hoc networks. Technical Report, National Taiwan University, 2001.
7. R. Garcés and J.J. Garcia-Luna-Aceves. Collision avoidance and resolution multiple access with transmission queues. Wireless Networks, Vol. 5, (1999) 95–109.
8. T.-C. Hou and T.-J. Tsai. An access-based clustering protocol for multihop wireless ad hoc networks. IEEE Journal on Selected Areas in Communications, Vol. 19, No. 7, July (2001) 1201–1210.
9. J.-H. Ju and V. O. K. Li. TDMA scheduling design of multihop packet radio networks based on Latin squares. IEEE Journal on Selected Areas in Communications, Vol. 17, No. 8, Aug. (1999) 1345–1352.
10. Averill M. Law and W. David Kelton. Simulation Modeling and Analysis, Third Edition, McGraw-Hill Book Company Inc., 2000.
11. M. K. Marina, G. D. Kondylis, and U. C. Kozat. RBRP: A robust broadcast reservation protocol for mobile ad hoc networks. IEEE International Conference on Communications, Vol. 3, (2001) 878–885.
12. K. Nakano and S. Olariu. Randomized initialization protocols for ad hoc networks. IEEE Trans. Parallel and Distributed Systems, Vol. 11, No. 7, July (2000) 749–759.
13. Z. Tang and J.J. Garcia-Luna-Aceves. A protocol for topology-dependent transmission scheduling in wireless networks. IEEE Wireless Communications and Networking Conference, Vol. 3, (1999) 1333–1337.
14. Y.-C. Tseng, S.-L. Wu, C.-M. Chao, and J.-P. Sheu. Location-aware channel assignment for a multi-channel mobile ad hoc network. ICS2000 Workshop on Computer Networks, Internet, and Multimedia, 2000.
15. C. Zhu and M. Corson. A five-phase reservation protocol (FPRP) for mobile ad hoc networks. Wireless Networks, Vol. 7, (2001) 371–384.
Capacity Assignment in Bluetooth Scatternets – Analysis and Algorithms

Gil Zussman and Adrian Segall

Department of Electrical Engineering, Technion – Israel Institute of Technology, Haifa 32000, Israel
{gilz@tx, segall@ee}.technion.ac.il
http://www.comnet.technion.ac.il/segall
Abstract. Bluetooth enables portable electronic devices to communicate wirelessly via short-range ad-hoc networks. Initially Bluetooth will be used as a replacement for point-to-(multi)point cables. However, in due course, there will be a need for forming multihop ad-hoc networks over Bluetooth, referred to as scatternets. This paper investigates the capacity assignment problem in Bluetooth scatternets. The problem arises primarily from the special characteristics of the network, and its solution requires new protocols. We formulate it as a problem of minimizing a convex function over a polytope contained in the matching polytope. Then, we develop an optimal algorithm that is similar to the well-known flow deviation algorithm and calls for solving a maximum-weight matching problem at each iteration. Finally, a heuristic algorithm with a relatively low complexity is developed.

Keywords: Bluetooth, Scatternet, Capacity assignment, Capacity allocation, Scheduling, Personal Area Networks (PAN)
1 Introduction

Recently, much attention has been given to the research and development of Personal Area Networks (PAN). These networks are comprised of personal devices, such as cellular phones, PDAs and laptops, in close proximity to each other. Bluetooth is an emerging PAN technology which enables portable devices to connect and communicate wirelessly via short-range ad-hoc networks [5],[6],[11]. Since its announcement in 1998, the Bluetooth technology has attracted a vast amount of research. However, the issue of capacity assignment in Bluetooth networks has rarely been investigated. Moreover, most of the research regarding network protocols has been done via simulation. In this paper we formulate an analytical model for the analysis of the capacity assignment problem and propose optimal and heuristic algorithms for its solution. Bluetooth utilizes a short-range radio link. Since the radio link is based on frequency-hop spread spectrum, multiple channels (frequency hopping sequences) can co-exist in the same wide band without interfering with each other. Two or more units sharing the same channel form a piconet, where one unit acts as a master controlling the communication in the piconet and the others act as slaves.
Bluetooth channels use a frequency-hop/time-division-duplex (FH/TDD) scheme. The channel is divided into 625-µsec intervals called slots. The master-to-slave transmission starts in even-numbered slots, while the slave-to-master transmission starts in odd-numbered slots. Masters and slaves are allowed to send 1, 3, or 5 slot packets, which are transmitted in consecutive slots. A slave is allowed to start transmission in a given slot if the master has addressed it in the preceding slot. Information can only be exchanged between a master and a slave, i.e. there is no direct communication between slaves. Although packets can carry synchronous information (voice link) or asynchronous information (data link), in this paper we concentrate on networks in which only data links are used. Multiple piconets in the same area form a scatternet. Since Bluetooth uses packet-based communications over slotted links, it is possible to interconnect different piconets in the same scatternet. Hence, a unit can participate in different piconets, on a time-sharing basis, and even change its role when moving from one piconet to another. We will refer to such a unit as a bridge. For example, a bridge can be a master in one piconet and a slave in another piconet. However, a unit cannot be a master in more than one piconet. Initially Bluetooth piconets will be used as a replacement for point-to-(multi)point cables. However, in due course, there will be a need for multihop ad-hoc networks (scatternets). Due to the special characteristics of such networks, many theoretical and practical questions regarding scatternet performance are raised. Nevertheless, only a few aspects of scatternet performance have been studied. Two issues that have received relatively much attention are research regarding scatternet topology and the development of efficient scatternet formation protocols (e.g. [4],[13]). Much attention has also been given to scheduling algorithms for piconets and scatternets. In the Bluetooth specifications [5], the capacity allocation by the master to each link in its piconet is left open. The master schedules the traffic within a piconet by means of polling and determines how bandwidth capacity is to be distributed among the slaves. Numerous heuristic scheduling algorithms for piconets have been proposed and evaluated via simulation (e.g. [7],[8]). In [11] an overall architecture for handling scheduling in a scatternet has been presented and a family of inter-piconet scheduling algorithms (algorithms for masters and bridges) has been introduced. Inter-piconet scheduling algorithms have also been proposed in [1] and [16]. Although scatternet formation as well as piconet and scatternet scheduling have been studied, the issue of capacity assignment in Bluetooth scatternets has not been investigated. Moreover, Baatz et al. [1], who made an attempt to deal with it, have indicated that it is a complex issue.¹ Capacity assignment in communication networks focuses on finding the best possible set of link capacities that satisfies the traffic requirements while minimizing some performance measure (such as average delay). We envision that in the future, capacity assignment protocols will start operating once the scatternet is formed and will determine link capacities to be dynamically allocated by scheduling protocols. Thus, capacity assignment protocols are the
¹ In [1] the term piconet presence schedule is used to refer to a notion similar to capacity assignment.
missing link between scatternet formation and scatternet scheduling protocols. A correct use of such protocols will improve the utilization of the scatternet bandwidth. We also anticipate that the optimal solution of the capacity assignment problem will improve the evaluation of heuristic scatternet scheduling algorithms. Most models of capacity assignment in communication networks deal mainly with static networks in which a cost is associated with each level of link capacity (see [3] for a review of models). The following discussion shows that there is a need to study the capacity assignment problem in Bluetooth scatternets in a different manner:
- In contrast with a wired, static network, in an ad-hoc network there is no central authority responsible for network optimization and there is no monetary cost associated with each level of link capacity.
- The nature of the network allows frequent changes in the topology and requires frequent changes in the capacities assigned to every link.
- There are constraints imposed by the tight master-slave coupling and by the time-division-duplex (TDD) scheme.
- Unlike other ad-hoc network technologies, in which all nodes within direct communication range of each other share a common channel, in Bluetooth only a subgroup of nodes (a piconet) shares a common channel, and capacity has to be allocated to each link.
A scatternet capacity assignment protocol has to determine the capacities that each master should allocate in its own piconet, such that the network performance is optimized. Currently, our major interest is in algorithms for quasi-static capacity assignment that minimize the average delay in the scatternet. The analysis is based on a static model with stationary flows and unchanging topology. To the best of our knowledge, the work presented in this paper is the first attempt to analytically study the capacity assignment problem in Bluetooth scatternets. In this paper we focus on formulating the problem and developing centralized algorithms. The development of distributed protocols is the subject of further research. In the sequel we formulate the scatternet capacity assignment problem as a minimization of a convex function over a polytope contained in the polytope of the well-known matching problem [14, p. 608] and show that different formulations apply to bipartite and nonbipartite scatternets. The methodology used by Gerla et al. [9],[15] is used in order to develop an optimal scatternet capacity assignment algorithm which is similar to the flow deviation algorithm [3, p. 458]. The main difference between the algorithms is that at each iteration there is a need to solve a maximum-weight matching problem instead of a shortest path problem. Finally, we introduce a heuristic algorithm whose complexity is much lower than that of the optimal algorithm and whose performance is often close to optimal. Due to space constraints, numerical results are not presented in this paper and the proofs are omitted; numerical examples and the proofs can be found in [18]. This paper is organized as follows. In Section 2, we present the model and in Section 3 we formulate the scatternet capacity assignment problem for bipartite and nonbipartite scatternets. An algorithm for obtaining the optimal solution of the problem is presented in Section 4. In Section 5, we develop a heuristic algorithm for bipartite scatternets and in Section 6 we summarize the main results.
2 Model and Preliminaries

Consider the connected undirected scatternet graph G = (N, L). N will denote the collection of nodes {1, 2, …, n}. Each of the nodes could be a master, a slave, or a bridge. The bi-directional link connecting nodes i and j will be denoted by (i, j) and the collection of bi-directional links will be denoted by L. For each node i, denote by Z(i) the collection of its neighbors. We denote by L(U) (U ⊆ N) the collection of links connecting nodes in U. Usually, capacity assignment protocols deal with the allocation of capacity to directional links. However, due to the tight coupling of the uplink and downlink in Bluetooth piconets², we concentrate on the total bi-directional link capacity. Hence, we assume that the average packet delay on a link is a function of the total link flow and the total link capacity. An equivalent assumption is that the uplink and downlink flows are equal (symmetrical flows). Let Fij be the average bi-directional flow on link (i, j) and let Cij be the capacity of link (i, j) (the units of F and C are bits/second). We assume that at every link the average bi-directional flow is positive (Fij > 0 ∀(i, j) ∈ L). We define fij as the ratio between Fij and the maximal possible flow on a Bluetooth link when using a given type of packets³. We also define cij as the ratio between Cij and the maximal possible capacity of a link. It is obvious that 0 < fij ≤ 1 and that 0 ≤ cij ≤ 1. In the sequel, fij will be referred to as the flow on link (i, j) and cij will be referred to as the capacity of link (i, j). Accordingly, c will denote the vector of the link capacities and will be referred to as the capacity vector. The objective of the capacity assignment algorithms described in this paper is to minimize the average delay in the scatternet. We define Dij as the total delay per unit time of all traffic passing through link (i, j), namely:

Definition 1. Dij is the average delay per unit of the traffic multiplied by the amount of traffic per unit time transmitted over link (i, j).

We assume that Dij is a function of the link capacity cij only. We should point out that the optimal algorithm requires no explicit knowledge of the function Dij(cij). We shall need to assume only the following reasonable properties of the function Dij(·).

Definition 2. Dij(·) is defined such that all of the following hold:
1. Dij is a nonnegative continuous decreasing function of cij with continuous first and second derivatives.
2. Dij is convex.
3. lim_{cij → fij} Dij(cij) = ∞
4. Dij′(cij) < 0 for all cij, where Dij′ is the derivative of Dij.
² A slave is allowed to start transmission only after a master addressed it in the preceding slot.
³ For example, currently the maximal flow on a symmetrical link, when using five-slot unprotected data packets (DH5), is 867.8 Kbits/second.
Using Definition 1, we shall now define the total delay in the network.

Definition 3. The total delay in the network per unit time is denoted by DT and is given by: DT = Σ_{(i,j)∈L} Dij(cij)
Since the total traffic in the network is independent of the capacity assignment procedure, we can minimize the average delay in the network by minimizing DT. A capacity vector that achieves the minimal average delay will be denoted by c*. A capacity assignment algorithm has to determine what portion of the slots should be allocated to each master-slave link. On the other hand, a scheduling algorithm has to determine which master-slave links should use any given slot pair. Hence, we define a scheduling algorithm as follows.

Definition 4. A Scheduling Algorithm determines how each slot pair is allocated. It does not allow transmission on two adjacent links in the same slot pair.

The Bluetooth specifications [5] do not require that different masters’ clocks be synchronized. Since the clocks are not synchronized, a guard time is needed in the process of moving a bridge from one piconet to another. Yet, in order to formulate a simple analytical model, we assume that the guard times are negligible. This assumption allows us to consider a scheduling algorithm for the whole scatternet.
3 Formulation of the Problem

Scatternet graphs can be bipartite or nonbipartite [4] (a graph is called bipartite if there is a partition of the nodes into two disjoint sets S and T such that each edge joins a node in S to a node in T [14, p. 50]). Any scatternet graph in which no master is allowed to be a bridge is necessarily bipartite. For example, the scatternet graph described in Fig. 1-A is bipartite. Even if a master is allowed to be a bridge, the scatternet may be bipartite (e.g. Fig. 1-B). Obviously, if a master is allowed to be a bridge, the scatternet graph may be nonbipartite (e.g. Fig. 1-C). In this section, we shall formulate the capacity assignment problem for bipartite and nonbipartite scatternets. We will show that the formulation for nonbipartite scatternets is more complex than the formulation for bipartite scatternets.
Fig. 1. Scatternet graphs – a bipartite scatternet in which no master is also a bridge (A), a bipartite scatternet in which a master is also a bridge (B), and a nonbipartite scatternet (C)
3.1 Bipartite Scatternets

When a bipartite scatternet graph is given, the nodes can be partitioned into two sets S and T such that no two nodes in S or in T are adjacent. Accordingly, the problem of scatternet capacity assignment in bipartite graphs (SCAB) is formulated as follows.

Problem SCAB
Given: Topology of a bipartite graph and flows (fij).
Objective: Find capacities (cij) such that the average packet delay is minimized:
min DT = min_{cij} Σ_{(i,j)∈L} Dij(cij)    (1)

Subject to:
cij > fij   ∀(i,j) ∈ L    (2)
Σ_{j∈Z(i)} cij ≤ 1   ∀i ∈ S    (3)
Σ_{j∈Z(i)} cij ≤ 1   ∀i ∈ T    (4)
The first set of constraints (2) is obvious. Constraints (3) and (4) result from the TDD scheme and reflect the fact that the total capacity of the links connected to a node cannot exceed the maximal capacity of a link. Due to the assumption that the guard times are negligible, in (3) and (4) we neglect the time needed in the process of moving a bridge from one piconet to another. Notice that the polytope defined by (2)-(4) is contained in the bipartite matching polytope [14].

3.2 Nonbipartite Scatternets

We shall now show that a formulation similar to that of Problem SCAB is not valid for nonbipartite scatternets. A simple example of a nonbipartite scatternet, given in [1], is illustrated in Fig. 2-A. Constraint (2) and the constraint:
Σ_{j∈Z(i)} cij ≤ 1   ∀i ∈ N    (5)
are not sufficient for the capacity vector to be feasible in this example. The capacities described in Fig. 2-A satisfy (2) and (5) but are not feasible, because under any scheduling algorithm no two neighboring links can be used simultaneously. If links (1,2) and (1,3) are in use for distinct halves of the available time slots, there are no free slots in which link (2,3) can be used. Thus, if c12 = 0.5 and c13 = 0.5, there is no feasible way to assign any capacity to link (2,3), i.e., there is no scheduling algorithm that can allocate the capacities described in the figure. Baatz et al. [1] suggest that a methodology for finding a feasible (not necessarily efficient) capacity assignment⁴ could be based on minimum coloring of a graph. They do not develop this methodology and indicate that “the example gives an idea of how complex the determination of piconet presence schedules may get”. We propose a formulation of the problem that is based on the formulations of Problem SCAB and the matching problem [14], and that allows obtaining an optimal capacity allocation.

⁴ Baatz et al. [1] refer to a piconet presence schedule instead of a capacity assignment. A piconet presence schedule determines in which parts of its time a node is present in each piconet. It is very similar to link capacity assignment as described in this paper.
Fig. 2. Examples of scatternets with capacity vectors which are not feasible
It is now obvious that the formulation of the capacity assignment problem for nonbipartite scatternets requires constraints in addition to those of Problem SCAB. For example, one could conclude that the capacity of the links composing the cycle described in Fig. 2-A should not exceed 1. Moreover, one could further conclude that the total capacity of the links composing any odd cycle should not exceed (|links| − 1)/2. Namely:
Σ_{(i,j)∈C} cij ≤ (|C| − 1)/2   ∀C ⊆ L, C an odd cycle    (6)
However, in the examples given in Fig. 2-B and Fig. 2-C, although the capacities satisfy (6), they cannot be scheduled in any way. Thus, in the following theorem we define a new set of constraints such that the capacity of the links connecting nodes in any odd set of nodes U does not exceed (|U| − 1)/2.⁵ These constraints and the proof of the theorem are based on the properties of the matching problem [10],[14].

Theorem 1. The capacity vector must satisfy (2), (5), and the following constraints:
Σ_{(i,j)∈L(U)} cij ≤ (|U| − 1)/2   ∀U ⊆ N, |U| odd, |U| ≥ 3    (7)
The proof appears in [18]. The scatternet capacity assignment problem (SCA) can now be formulated as follows (for bipartite graphs it reduces to Problem SCAB).

Problem SCA
Given: Topology and flows (fij).
Objective: Find capacities (cij) such that the average packet delay is minimized: (1)
Subject to: (2), (5) and (7)

The constraints (2), (5) and (7) form a convex set which is included in the matching polytope corresponding to the scatternet graph (for bipartite scatternets these constraints reduce to constraints (2)-(4) of Problem SCAB). This set consists of all the feasible capacity vectors (c). Up to now we have not shown that a
⁵ We note that a similar observation has been recently and independently made by Tassiulas and Sarkar [17], who have considered the problem of max-min fair scheduling in scatternets.
feasible capacity vector has a corresponding scheduling algorithm, namely that it is possible to determine which links are used in each slot pair such that no two adjacent links are active in the same slot pair and the capacity used by each link is as defined by the capacity vector (c). This result is shown by the following proposition. We note that the proof of the proposition and the transformation of a capacity vector into a scheduling algorithm are based on the fact that the vertices of the matching polytope are composed of (0,1) variables, and on an algorithm described in [10].

Proposition 1. If a capacity vector c satisfies (2), (5) and (7), there is a corresponding scheduling algorithm.

The proof appears in [18].
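Since (7) ranges over all odd subsets of nodes, checking feasibility by brute force is exponential; still, for small examples such as those of Fig. 2, the following sketch (our own illustration, not part of the paper) is a handy way to see the constraints at work:

    from itertools import combinations

    def is_feasible(nodes, links, f, c, tol=1e-9):
        # (2): cij > fij on every link
        if any(c[e] <= f[e] for e in links):
            return False
        # (5): the total capacity of the links touching a node is at most 1
        for i in nodes:
            if sum(c[e] for e in links if i in e) > 1 + tol:
                return False
        # (7): the links inside any odd subset U carry at most (|U| - 1)/2
        for size in range(3, len(nodes) + 1, 2):
            for U in combinations(nodes, size):
                inside = sum(c[e] for e in links if e[0] in U and e[1] in U)
                if inside > (size - 1) / 2 + tol:
                    return False
        return True

    # Fig. 2-A: a triangle with f = 0.2 and c = 0.5 on every link satisfies (2) and (5)
    # but violates (7) for U = {1, 2, 3}, since 1.5 > (3 - 1)/2 = 1.
    links = [(1, 2), (1, 3), (2, 3)]
    print(is_feasible([1, 2, 3], links, {e: 0.2 for e in links}, {e: 0.5 for e in links}))  # False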
4 Optimal Algorithm for Problems SCA and SCAB

In this section a centralized scatternet capacity assignment algorithm for finding an optimal solution of Problem SCA, defined in Section 3.2, is introduced.⁶ The algorithm is based on the conditional gradient method, also known as the Frank-Wolfe method [2, p. 215], which was used for the development of the flow deviation algorithm [3, p. 458]. Gerla et al. [9],[15] have used the Frank-Wolfe method in order to develop bandwidth allocation algorithms for ATM networks. Following their approach, we shall now describe the optimality conditions and the algorithm. Since the objective of Problem SCA is to minimize a convex function (DT) over a convex set ((2), (5) and (7)), any local minimum is a global minimum. Thus, necessary and sufficient conditions for the capacity vector c* to be a global minimum are formulated as follows (the following proposition is derived from a well-known theorem [2, p. 194] and, therefore, its proof is omitted).

Proposition 2. The capacity vector c* minimizes the average delay for Problem SCA, if and only if:
- c* satisfies constraints (2), (5) and (7) of Problem SCA.
- There are no feasible directions of descent at c*; i.e. there does not exist c such that⁷:
∇DT(c*)(c − c*) < 0    (8)
Σ_{j∈Z(i)} cij ≤ 1   ∀i ∈ N    (9)
Σ_{(i,j)∈L(U)} cij ≤ (|U| − 1)/2   ∀U ⊆ N, |U| odd, |U| ≥ 3    (10)
Proposition 2 suggests a steepest descent algorithm in which we can find a feasible direction of descent c at any feasible point c_K by solving the problem:

min ∇DT(c_K)·c    (11)

subject to (9), (10) and:

cij ≥ 0   ∀(i,j) ∈ L    (12)

⁶ The algorithm for the solution of Problem SCAB is similar (the changes are outlined below).
⁷ ∇DT(c*) is the gradient of DT with respect to c evaluated at c*.
Since the constraint set (10) may include exponentially many constraints, this problem cannot be easily solved using a linear programming algorithm. Yet, since Dij′(cij) < 0 for all cij (by Definition 2.4), the formulation of the problem conforms to the formulation of the maximum-weight matching problem [14, p. 610], which has a polynomial-time algorithm (O(n³)):
max −∇DT(c_K)·c    (13)

subject to:
Σ_{j∈Z(i)} cij ≤ 1   ∀i ∈ N    (14)
cij ∈ {0,1}   ∀(i,j) ∈ L    (15)
This result and Proposition 2 are the basis for the optimal algorithm, described in Fig. 3. The input to the algorithm is the topology, the flows (fij), a feasible initial solution (c_0), and the tolerance (t). The output is the optimal capacity vector c*.

1. Set K = 0
2. Find the vector c# – the optimal solution of (13)-(15) (i.e. solve a maximum-weight matching problem)
3. Find the value α* that minimizes DT(α·c_K + (1 − α)·c#) (α* may be obtained by any line search method [2, p. 723])
4. Set c_{K+1} = α*·c_K + (1 − α*)·c#
5. If ∇DT(c_K)(c_K − c#) ≤ t then stop
6. Else set K = K + 1 and go to 2
Fig. 3. An algorithm for obtaining the optimal solution to Problem SCA
We emphasize that unlike the flow deviation algorithm, in which at each iteration a feasible direction is found by solving a shortest path problem, in the capacity assignment algorithm there is a need to solve a maximum-weight matching problem at each iteration. In case the algorithm is applied to Problem SCAB, there is a need to solve a bipartite maximum-weight matching problem.
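To make one iteration of Fig. 3 concrete, here is a sketch of our own (not from the paper), written under the Kleinrock delay function of Definition 5 below, although the algorithm itself needs only the gradient of DT. It uses networkx, whose max_weight_matching routine solves the direction-finding subproblem (13)-(15); the grid search is a crude stand-in for a proper line search:

    import networkx as nx

    def frank_wolfe_step(links, f, c):
        # Under D_ij(c) = f/(c - f), the entry of -grad DT for link (i,j) is f/(c - f)^2 > 0,
        # so subproblem (13)-(15) is a maximum-weight matching over these weights.
        H = nx.Graph()
        for (i, j) in links:
            H.add_edge(i, j, weight=f[i, j] / (c[i, j] - f[i, j]) ** 2)
        matched = nx.max_weight_matching(H)                    # step 2: the 0/1 vertex c#
        c_sharp = {e: 0.0 for e in links}
        for (i, j) in matched:
            c_sharp[(i, j) if (i, j) in c_sharp else (j, i)] = 1.0

        def DT(alpha):                                         # total delay along the segment
            total = 0.0
            for e in links:
                ce = alpha * c[e] + (1 - alpha) * c_sharp[e]
                if ce <= f[e]:
                    return float("inf")                        # infeasible point
                total += f[e] / (ce - f[e])
            return total

        a_star = min((k / 1000 for k in range(1001)), key=DT)  # steps 3-4: line search
        return {e: a_star * c[e] + (1 - a_star) * c_sharp[e] for e in links}

Iterating this step until the stopping test of step 5 holds reproduces the overall algorithm; replacing max_weight_matching by a bipartite matching routine gives the Problem SCAB variant.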
5 Heuristic Algorithm for Problem SCAB

When considering bipartite scatternets (Problem SCAB), the initial solution for the optimal algorithm can be obtained using a low-complexity heuristic centralized scatternet capacity assignment algorithm, presented in this section. In our experiments (see [18]), the results of the heuristic algorithm are very close to the optimal results. The algorithm is based on the assumption that the delay function conforms to Kleinrock’s independence approximation [12], described in the following definition.
Definition 5. (Kleinrock’s independence approximation) When neglecting the propagation and processing delay, Dij(cij) is given by:
Dij(cij) = fij / (cij − fij)   if cij > fij;   Dij(cij) = ∞   if cij ≤ fij
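In the same sketch style (our own, for illustration), the delay and its derivative, which drives the node selection rule below, read:

    import math

    def kleinrock_delay(f_ij, c_ij):
        # D_ij(c_ij) = f_ij / (c_ij - f_ij) for c_ij > f_ij, infinite otherwise
        return f_ij / (c_ij - f_ij) if c_ij > f_ij else math.inf

    def kleinrock_delay_deriv(f_ij, c_ij):
        # D'_ij(c_ij) = -f_ij / (c_ij - f_ij)^2 < 0, consistent with Definition 2
        assert c_ij > f_ij
        return -f_ij / (c_ij - f_ij) ** 2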
The algorithm assigns capacity to links connected to bridges and to masters which have at least two slaves. Accordingly, we define N′ as follows:

Definition 6. N′ is the subgroup of N consisting of bridges and masters which have at least two slaves. Namely: N′ = { i : i ∈ N and |Z(i)| > 1 }.

We also define the slack capacity of a node as follows:

Definition 7. The slack capacity of node i is the maximal capacity which can be added to the links connected to the node. It is denoted by si and is given by:
si = 1 − Σ_{j∈Z(i)} cij
Initially all the link capacities are equal to the flows on the links (cij = fij ∀(i,j) ∈ L). The algorithm selects a node from the nodes in N′ and allocates the slack capacity to some of the links connected to it. Then, it selects another node, allocates capacity, and so on. Once a node k is selected, its slack capacity is allocated to those of its links whose capacities have not yet been assigned. The slack capacity is assigned to these links according to the square root assignment [12, p. 20]:
ckj = fkj + sk·√fkj / Σ_{m: m∈Z(k), ckm=fkm} √fkm   ∀j : j ∈ Z(k), ckj = fkj    (16)
There are various ways to define the process of node selection. For example, nodes can be selected according to their slack capacity or their average slack capacity. However, some of the possible selection methodologies require taking special measures in order to ensure that the obtained capacity vector is feasible (satisfies constraints (2)-(4) of Problem SCAB). We propose a simple selection methodology that guarantees a feasible capacity vector. It can be shown that after capacity is assigned to a subgroup of the links connected to a node i (links whose capacities have not been assigned before), the delay derivatives (Dij′(cij)) of all these links will be equal. Accordingly, we define the delay derivative of a node as follows:

Definition 8. The delay derivative of node i is proportional to the absolute values of the delay derivatives of the links connected to node i whose capacities have not yet been assigned. Its value is computed as if node i had been selected as the node whose capacity is to be assigned and the capacities of these links had been assigned according to (16). It is denoted by di and is given by:
$$d_i = \frac{\sum_{m:\, m \in Z(i),\, c_{im} = f_{im}} \sqrt{f_{im}}}{s_i} \qquad (17)$$
Node k, whose link capacities are going to be assigned, is selected from the nodes in N' which are connected to links whose capacities have not yet been allocated. The delay derivatives (d_i's) of all these nodes are computed and the node with the largest derivative is selected. Thus, the capacities of links with a high absolute value of delay derivative, whose delay is more sensitive to the level of capacity, are assigned first.

The algorithm, which is based on the above methodology, is described in Fig. 4. The input is the topology and the flows (f_ij), and the output is a capacity vector c. It can be seen that the complexity of the algorithm is O(n²), which is about the complexity of an iteration in the optimal algorithm. Moreover, the following proposition shows that the capacity vector obtained by the algorithm is always feasible.
1. Set c_ij = f_ij ∀(i,j) ∈ L

2. Set
$$k = \arg\max_{i \in N':\, \exists m \in Z(i) \text{ such that } c_{im} = f_{im}} \; \frac{\sum_{m:\, m \in Z(i),\, c_{im} = f_{im}} \sqrt{f_{im}}}{1 - \sum_{m \in Z(i)} c_{im}}$$

3. Set
$$c_{kj} = f_{kj} + s_k \frac{\sqrt{f_{kj}}}{\sum_{m:\, m \in Z(k),\, c_{km} = f_{km}} \sqrt{f_{km}}} \qquad \forall j : j \in Z(k),\ c_{kj} = f_{kj}$$

4. If there exists (i,j) ∈ L such that c_ij = f_ij then go to 2

5. Else stop

Fig. 4. An algorithm for obtaining a heuristic solution to Problem SCAB
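Reading Fig. 4 as executable steps gives the following sketch (ours, not the authors' code). The neighbour map Z, the symmetric flow map f, and the explicit `assigned` set replacing the c_ij = f_ij test are our own data-layout choices; flows are assumed strictly positive and capacities around a node normalized to sum to at most 1.

```python
import math

def heuristic_scab(nodes, Z, f):
    """Heuristic capacity assignment of Fig. 4 (sketch). Z[i] is the set of
    neighbours of node i; f[(i, j)] is the flow on link (i, j), stored in
    both directions."""
    c = {(i, j): f[(i, j)] for i in nodes for j in Z[i]}      # step 1
    assigned = set()                                          # links already fixed

    def deriv(i):
        # Node delay derivative d_i of (17): sum of sqrt-flows of the node's
        # unassigned links divided by its slack capacity s_i.
        s_i = 1.0 - sum(c[(i, j)] for j in Z[i])
        num = sum(math.sqrt(f[(i, m)]) for m in Z[i] if (i, m) not in assigned)
        return num / s_i if s_i > 0 else math.inf

    while True:
        candidates = [i for i in nodes if len(Z[i]) > 1       # i in N'
                      and any((i, j) not in assigned for j in Z[i])]
        if not candidates:
            break                                             # steps 4-5
        k = max(candidates, key=deriv)                        # step 2
        free = [j for j in Z[k] if (k, j) not in assigned]
        s_k = 1.0 - sum(c[(k, j)] for j in Z[k])
        denom = sum(math.sqrt(f[(k, m)]) for m in free)
        for j in free:                                        # step 3: eq. (16)
            c[(k, j)] = c[(j, k)] = f[(k, j)] + s_k * math.sqrt(f[(k, j)]) / denom
            assigned.update({(k, j), (j, k)})
    return c
```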
Proposition 3. The heuristic algorithm results in an allocation c that satisfies constraints (2)-(4) of Problem SCAB.

The proof appears in [18].
6 Conclusions and Future Study

This paper presents an analytical study of the capacity assignment problem in Bluetooth scatternets. The problem has been formulated for bipartite and non-bipartite scatternets, using the properties of the matching polytope. Then, we have introduced a centralized algorithm for obtaining its optimal solution. A heuristic algorithm for the solution of the problem in bipartite scatternets, which has a relatively low complexity, has also been described. Several numerical examples can be found in [18].

The work presented here is a first step towards an analysis of scatternet performance. Hence, there are still many open problems to deal with. For example, distributed protocols are required for actual Bluetooth scatternets and, therefore, future study will focus on developing optimal and heuristic distributed protocols.
Moreover, in this paper we have made a few assumptions regarding the properties of the delay function. An analytical model for the computation of bounds on the delay is required in order to evaluate these assumptions. In addition, it might enable the development of efficient piconet scheduling algorithms. Finally, we note that a major future research direction is the development of capacity assignment protocols that will be able to deal with various quality of service requirements and to interact with scatternet formation, scheduling, and routing protocols.
References

1. Baatz, S., Frank, M., Kühl, C., Martini, P., Scholz, C.: Adaptive Scatternet Support for Bluetooth using Sniff Mode. Proc. IEEE LCN'01 (Nov. 2001)
2. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, Massachusetts (1999)
3. Bertsekas, D.P., Gallager, R.: Data Networks. Prentice Hall, New Jersey (1992)
4. Bhagwat, P., Rao, S.P.: On the Characterization of Bluetooth Scatternet Topologies. Submitted for publication (Feb. 2002)
5. Bluetooth Special Interest Group: Specification of the Bluetooth System - Version 1.1 (Feb. 2001)
6. Bray, J., Sturman, C.: Bluetooth connect without cables. Prentice Hall (2001)
7. Bruno, R., Conti, M., Gregori, E.: Wireless Access to Internet via Bluetooth: Performance Evaluation of the EDC Scheduling Algorithm. Proc. ACM WMI'01 (July 2001)
8. Das, A., Ghose, A., Razdan, A., Saran, H., Shorey, R.: Enhancing Performance of Asynchronous Data Traffic over the Bluetooth Wireless Ad-hoc Network. Proc. IEEE INFOCOM'01 (Apr. 2001)
9. Gerla, M., Monteiro, J.A.S., Pazos-Rangel, R.A.: Topology Design and Bandwidth Allocation in ATM Nets. IEEE JSAC 7 (Oct. 1989) 1253-1262
10. Hajek, B., Sasaki, G.: Link Scheduling in Polynomial Time. IEEE Trans. on Information Theory 34 (Sep. 1988) 910-917
11. Johansson, P., Kazantzidis, M., Kapoor, R., Gerla, M.: Bluetooth: An Enabler for Personal Area Networking. IEEE Network 15 (Sep./Oct. 2001) 28-37
12. Kleinrock, L.: Communication Nets: Stochastic Message Flow and Delay. McGraw-Hill, New York (1964)
13. Law, C., Mehta, A.M., Siu, K.Y.: Performance of a New Bluetooth Scatternet Formation Protocol. Proc. ACM MOBIHOC'01 (Oct. 2001)
14. Nemhauser, G.L., Wolsey, L.A.: Integer and Combinatorial Optimization. John Wiley and Sons (1988)
15. Pazos-Rangel, R.A., Gerla, M.: Express Pipe Networks. Proc. Global Telecommunications Conf. (1982) B2.3.1-5
16. Racz, A., Miklos, G., Kubinszky, F., Valko, A.: A Pseudo Random Coordinated Scheduling Algorithm for Bluetooth Scatternets. Proc. ACM MOBIHOC'01 (Oct. 2001)
17. Tassiulas, L., Sarkar, S.: Maxmin Fair Scheduling in Wireless Networks. Proc. IEEE INFOCOM'02 (to appear)
18. Zussman, G., Segall, A.: Capacity Assignment in Bluetooth Scatternets - Analysis and Algorithms. CCIT Report 355, Technion - Department of Electrical Engineering (Oct. 2001). Available at http://www.comnet.technion.ac.il/segall/Reports.html
Optimization-Based Congestion Control for Multicast Communications

Jonathan K. Shapiro, Don Towsley, and Jim Kurose

Department of Computer Science, University of Massachusetts at Amherst
Abstract. Widespread deployment of multicast depends on the existence of congestion control protocols that are provably fair to unicast traffic. In this work, we present an optimization-based congestion control mechanism for single-rate multicast communication with provable fairness properties. The optimization-based approach attempts to find an allocation of rates that maximizes the aggregate utility of the network. We show that the utility of multicast sessions must be carefully defined if a widely accepted property of aggregate utility is to hold. Our definition of session utility amounts to maximizing a weighted sum of simple utility functions, with weights determined by the number of receivers. The fairness properties of the optimal rate allocation depend both on the weights and form of utility function used. We present analysis for idealized topologies showing that while our mechanism is not strictly fair to unicast, its unfairness can be controlled by appropriate choices of parameters.
1 Introduction
Widespread deployment of multicast communication in the Internet depends critically on the existence of practical congestion control mechanisms that allow multicast and unicast traffic to share network resources fairly. Most service providers recognize multicast as an essential service to support a range of emerging network applications including audio and video broadcasting, bulk data delivery, and teleconferencing. Nevertheless, these network operators have been reluctant to enable multicast delivery in their networks, often citing concerns about the congestion such traffic may introduce. There is a clear need for multicast congestion control algorithms that are provably fair to unicast traffic if these concerns are to be addressed. In this paper, we present a congestion control mechanism for single-rate multicast traffic based on an economic theory of resource allocation and show that although it is not strictly fair to unicast traffic, its unfairness is bounded and can be controlled. We first formulate the multicast congestion control problem as a utility maximization problem, extending existing work for unicast. A naive, sender-oriented
This work was supported in part by the National Science Foundation (NSF) under grant number ANI-9980552.
generalization of existing formulations for unicast treats single-source multicast sessions no differently from unicast sessions, modeling each by an unweighted utility function and maximizing the sum of session utilities. One problem with this naive approach is that it penalizes individual multicast sessions for using more network resources than unicast sessions without rewarding them for the bandwidth saved on links shared by multiple receivers.

More serious than its unfairness to multicast sessions, the sender-oriented approach turns out to violate a generally accepted property of aggregate utility, namely, that the preference of the aggregate does not change if we simply measure utility on a different scale. This common-sense notion is why, for example, we reject as nonsense the statement that, as a group, residents of New York prefer a temperature of 70 degrees to 60 degrees Fahrenheit, but prefer a temperature of 15.5 to 21 degrees Celsius. If this invariance property is violated in the congestion control problem, the network will be controlled about an operating point determined by an arbitrary choice of utility scale. We introduce a receiver-oriented approach that uses session weights based on the number of receivers and preserves invariance under a change in utility scale. Moreover, we show that such an approach is a necessary condition for satisfying this property.

A consequence of adding session weights based on the number of receivers is that the resulting rate allocations tend to favor sessions with more receivers over those with fewer. Since the weighted sum does not remove the original penalty against sessions that use more resources, it is not immediately clear whether multicast sessions fare better or worse than unicast sessions under our formulation. We show that while our formulation favors multicast sessions, the resulting unfairness can be controlled and remains bounded in the simple network topologies we have considered.

Our work is based on a promising economics-inspired approach called optimization-based congestion control, which casts the congestion problem as one of utility maximization (alternately, cost minimization). This approach provides an elegant theoretical framework in which congestion signals are interpreted as prices, network users are modeled as utility maximizers, and the network sets prices in such a way as to drive a set of self-interested users toward an operating point at which their aggregate utility is maximized. Specific link service disciplines and rate-control algorithms at end-hosts can be thought of as components of a distributed computation to solve the global optimization problem. Thus, improvements in congestion control can proceed in a principled fashion, driven by improvements in the underlying optimization algorithm.

While the optimization-based approach has received much attention [1,2,3,4,5,6,7,8], it has only recently been applied to multicast congestion control [9,10]. Many existing multicast congestion control schemes [11,12] rely on heuristic techniques, such as adapting to a single receiver or a small group of representatives. In contrast, the optimization-based approach offers a formal foundation with which to develop congestion control mechanisms and understand their fairness properties and impact on the global behavior of the network.
The rest of this paper is structured as follows: In Section 2 we extend a unicast congestion control problem formulation to single-rate multicast. In Section 3 we consider multicast session utility functions in detail, presenting an axiomatic argument in favor of a particular definition. The fairness properties of our definition are analyzed in Sections 4-6, where we show that multicast sessions are favored over unicast sessions and present evidence that such unfairness can be controlled. We conclude by briefly discussing the development of practical control mechanisms based on our results and highlighting future work.
2 Problem Formulation
Optimization-based congestion control casts the problem of bandwidth sharing as one of utility maximization. Consider a network modeled as a set of directed links L, with capacity c_l for each link l ∈ L. Let C = (c_l, l ∈ L). The workload for the network is generated by a set of sessions¹ S, which consume bandwidth. The set of links used by a particular session, s, is L(s) ⊆ L. The set of sessions using any particular link, l, is S(l) ⊆ S. Each session is characterized by a utility function U_s, which is assumed to be increasing and concave in the session rate x_s. Session utility may also be a function of other parameters in addition to rate, such as number of receivers, but we will sometimes suppress these additional dependencies in the notation, writing U_s(x_s). The network's objective is to optimize social welfare, defined as the sum of session utilities:

$$\max_{x_s \ge 0} \sum_{s \in S} U_s(x_s) \qquad (1)$$

subject to

$$\sum_{s \in S(l)} x_s \le c_l \qquad \forall l \in L \qquad (2)$$
The problem (1-2) can be solved using convex optimization techniques [13]. Under a standard economic interpretation, the Lagrange multipliers of such techniques are referred to as shadow prices and can be shown to function as prices of network links [14]. The essential step in developing practical rate-control algorithms is to find a distributed algorithm for solving (1-2) in which each individual session need only compute a local optimization to set its own rates. There is a growing body of research devoted to finding such a distributed algorithm and using it as a basis for unicast rate-control in practical protocols [1,2,3,4,5,6,8,7,15]. Observe that the topologies of sessions are not explicit in the formulation. For a unicast session, the links of L(s) are arranged end-to-end, forming an acyclic path between a source and a receiver. However, L(s) can be any subset of links, for example a tree in the case of multicast. There is a requirement that the session employ a single rate x_s on all of its links. Thus, this formulation is readily generalized to single-rate multicast sessions.
¹ The terms 'session' and 'user' are synonymous in this paper.
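As an illustration of such a distributed computation (not an algorithm from this paper), the following sketch iterates the classic dual decomposition for (1)-(2): each link raises or lowers its price according to its local excess demand, and each session answers with the rate that maximizes U_s(x) − λ^s x. Logarithmic utilities are assumed so the session subproblem has the closed form x_s = 1/λ^s; the step size and iteration count are arbitrary.

```python
def dual_decomposition(links, sessions, cap, route, gamma=0.01, iters=5000):
    """Synchronous price/rate iteration for (1)-(2) with U_s(x) = log(x).
    route[s] is the link set L(s) of session s; cap[l] is the capacity c_l."""
    price = {l: 1.0 for l in links}
    for _ in range(iters):
        # Session subproblem: max_x log(x) - lambda_s * x  =>  x_s = 1/lambda_s,
        # where lambda_s is the sum of prices along the session's links.
        rate = {s: 1.0 / sum(price[l] for l in route[s]) for s in sessions}
        # Link update: each link reacts only to its own aggregate load.
        for l in links:
            excess = sum(rate[s] for s in sessions if l in route[s]) - cap[l]
            price[l] = max(0.0, price[l] + gamma * excess)
    return rate, price
```

On a single shared link of unit capacity with two sessions, the iteration converges to the proportionally fair split x_1 = x_2 = 1/2.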
The case of multi-rate multicast is somewhat more complicated since the single-rate requirement is replaced with a constraint that a session's rate on link l be the maximum of its rate on any downstream link. Since a session can now have different rates on different links, it makes little sense to endow the session with a utility that is a function of a scalar rate. Instead, recent treatments of the multi-rate problem [9,10] have altered the model by associating a utility function with each receiver. It is worth observing that this change to the model, while arising naturally from the multi-rate constraint, has been introduced without consideration of its effect on the global operating point. While a complete solution to the multi-rate problem is beyond the scope of this paper², our work provides a formal justification for the use of receiver utility functions even in the case of single-rate multicast where no such modelling pressure exists.
2.1 Application to Multicast
Single-rate multicast represents an important class of multicast applications. Many important applications, such as bulk data transfer [16], typically operate at a single rate. Even for applications such as streaming video, for which multi-rate multicast is often considered well suited, single-rate multicast is used in current practice. It remains unclear whether multi-rate multicast for video is viable on the Internet, where it must be implemented using layered multicast schemes that have substantial overhead [17]. Furthermore, even if layered multicast is used, single-rate congestion control techniques may be appropriate to adapt the rates of individual layers.

It would appear that congestion control for single-rate multicast is a trivial extension of the unicast problem and can take advantage of existing approaches. It is important, however, to evaluate this claim carefully, given the importance of single-rate multicast in practice. Certainly there are implementation issues in multicast that complicate the extension of unicast optimization-based rate control protocols based on packet marking schemes [5,6,4]. Equally serious are the conceptual difficulties that arise in an uncritical application of the unicast solution of the underlying optimization problem. It is not immediately clear what the fairness properties of the resulting rate allocations are and, more fundamentally, what it means to define a utility function for a multicast session.

To develop our intuitions about the conceptual problems mentioned above, consider the approach of Low and Lapsley [5]. This approach finds a solution to problem (1-2) by solving its dual to obtain the following optimality condition for session rate x_s:
$$x_s(\lambda^s) = U_s'^{-1}(\lambda^s) \qquad (3)$$

$$\lambda^s = \sum_{l \in L(s)} \lambda_l \qquad (4)$$
² Solving the resulting multi-rate optimization problem is further complicated by the coupling of problem variables due to the multi-rate constraint and because the max function is nondifferentiable. See recent works by Kar, Sarkar, and Tassiulas [9] and Deb and Srikant [10] for treatments of the multi-rate problem.
where λ_l is the shadow price of link l and U_s'^{-1} is the inverse marginal utility function for session s. It can be shown that U_s'^{-1}(λ^s), and, hence, the rate allocated to session s, is a decreasing function of the total session price λ^s. A large multicast session typically uses many more links than would a unicast session between the source and any single receiver and can therefore expect to see a higher session price than the unicast session. Since U_s'^{-1}(λ^s) is decreasing, multicast sessions will receive lower rates than unicast sessions along the same end-to-end paths, casting doubt on whether individual receivers have any incentive to join multicast groups. It may be reasonable to adopt a new definition of session utility with a bias in favor of multicast sessions to encourage bandwidth sharing. However, we must be careful not to overcompensate for the high session prices seen by multicast sessions, yielding allocations one would not consider fair to unicast sessions. In the following sections we analyze the impact of such definitions on the fairness properties of the resulting congestion control mechanism. It will turn out that the class of receiver-oriented session utility functions, while not absolutely fair to unicast sessions, does not starve them in the presence of larger sessions. Moreover, we will see that utility functions in this class make sense in a way that other functions do not.
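A two-line numeric sketch of this disincentive, under assumptions of ours (MPD utility u(x) = −1/x, so u'^{-1}(λ) = λ^{-1/2}, and a uniform per-link price): the larger link set of the multicast tree alone halves its rate.

```python
lam = 0.01                               # assumed uniform per-link price
unicast_links, multicast_links = 3, 12   # |L(s)| for a path vs. a small tree
x_unicast = (unicast_links * lam) ** -0.5      # ~5.77: u'(x) = 1/x**2 inverted
x_multicast = (multicast_links * lam) ** -0.5  # ~2.89: higher price, lower rate
```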
3 Multicast Utility Functions
In Section 2.1, we generalized the unicast optimization problem formulation to accommodate single-rate multicast sessions. However, there is a subtle problem with this model that makes it difficult to apply to single-rate multicast. The problem concerns the definition of utility for an individual multicast session. An unweighted utility function is used to characterize the benefit of a higher rate to the session. For a unicast session, it makes little difference whether we consider this benefit to belong to the sender or receiver. For the purpose of unicast congestion control, we can treat the sender's and receiver's objectives as being one and the same. For a multicast session with multiple receivers it is unclear whether session utility belongs to the sender or should be split in some way among the receivers.

One approach towards defining multicast session utility ignores the multiplicity of receivers and defines it only with respect to the sender.³ An alternative approach would be to define session utility as a function of the utilities of the receivers in the session. We informally refer to these two approaches as sender-oriented and receiver-oriented, respectively. While a receiver-oriented approach emerges naturally from the model in the case of multi-rate multicast, it is not immediately clear which approach is most appropriate for single-rate multicast. Later in this section we will formalize these definitions and argue in favor of a receiver-oriented approach. Before doing so, however, we will digress briefly to provide some background about the use of utility functions in economics and the theory of social choice.
³ We are assuming that multicast sessions have a single source.
3.1 Utility Functions and Social Welfare
The use of concave increasing utility functions to represent session utility has a natural and intuitive interpretation. Utility is a monotonically increasing function of its input when individuals prefer having as much of the input as possible. The concavity of the utility function captures the idea of diminishing marginal utility⁴. Both concavity and monotonicity are appropriate assumptions in the case of bandwidth for elastic traffic [18], where the input to the utility function is the session rate.⁵

Utility can be difficult to quantify precisely; there is no clear unit of utility and no agreed upon scale. Comparing the utility of two individuals can be tricky, particularly when they do not share the same utility function. Because of the difficulty in performing interpersonal comparisons of utility, economists customarily think of utility as an ordinal magnitude, meaning that the absolute magnitude of utility is meaningless, but that the relative magnitudes of utilities at various rates for an individual session define preferences among rates and the relative differences in magnitude indicate the strength of the preferences [19]. A consequence of considering only ordinal magnitudes is that utility functions are unique only up to a linear transformation. That is, the utility maximizing behavior of an individual with utility function u(x) is indistinguishable from one whose utility function is a linear transformation of u(x). This restriction makes intuitive sense because a linear transformation simply represents a change in scale and a translation of the zero point of the utility function.

The notion of an aggregate utility function is a compelling extension of the concept of individual utility. Aggregate utility is defined by a social welfare function (SWF) that maps the vector of all session utilities to a scalar utility value representing the social desirability of the corresponding vector of rates. Since the SWF is not one-to-one, it induces a partial ordering over allocations of rates, known as the social preference relation (SPR). As with individual utility functions, we are primarily interested in this preference relation rather than the absolute magnitude of the SWF. In optimization-based congestion control, we adopt the sum of individual utilities as the SWF. In general, there are many ways to define the SWF, each carrying with it some subjective judgment about how individual preferences should be combined to determine a social preference.

It is possible to specify desirable properties for a SWF axiomatically. Perhaps the most important result of social choice theory is Arrow's Impossibility Theorem, which states that it is not always possible to satisfy all desiderata [20]. For example, a commonly cited property of SWFs is independence of irrelevant alternatives, which states that the socially preferred allocation should be
⁴ The term 'marginal utility' is used in economics to refer to the first derivative of the utility function.
⁵ In this section, utility will be assumed to be a function of session rate; we do so for the sake of concreteness and continuity with the rest of the paper. It should be understood, however, that the discussion presented here applies to any utility function.
invariant under a change in individual utility functions that leaves individuals' preferences unaffected. It is straightforward to show that the sum of individual utility functions violates this property. Indeed, it is precisely this violation that allows Kelly to associate optimal rates under different functional forms of utility with different formal definitions of fairness [1]. Although independence of irrelevant alternatives is neither required nor (in light of Arrow's Impossibility Theorem) worth pursuing for the congestion control application, a related but weaker property is still worthy of consideration.

– Invariance Under Linear Transformation (ILT): Let u be a vector of utility functions and v be a transformed vector such that v_i(x) = α u_i(x) + β. Let U(u(x)) be a SWF, where u(x) = (u_i(x_i)) is the vector of session utilities for rate vector x. We say that a SWF is invariant under a linear transformation if, for any two rate vectors x and y, U(u(x)) ≥ U(u(y)) ⇒ U(v(x)) ≥ U(v(y)) for any values of α, β. In words, the SWF induces the same preference relation for u and v.

The ILT property builds on the assertion that individual utility functions are unique up to a linear transformation, saying that aggregate preferences, too, should be invariant under such a transformation. We will see shortly that under some definitions of multicast session utility the ILT property is satisfied, while under others it is not.
3.2 Sender- and Receiver-Oriented Utility Functions
We now formally define sender- and receiver-oriented concepts of session utility. Consider a single-rate multicast session s with rate x and receiver set R of size R. In the sender-oriented approach, the session utility function is a concave increasing function u_s of the session rate:

$$U_{snd} = u_s(x) \qquad (5)$$

In the receiver-oriented approach, each receiver i ∈ R has a utility function u_i(x), which is concave and increasing. The session utility function is the sum of receiver utilities:

$$U_{rcv} = \sum_{i \in R} u_i(x) \qquad (6)$$
We can convert these definitions into an alternate form by introducing two requirements. First, we require that all receivers in a session have identical utility functions:

$$u_i(x) = u_r(x) \qquad \forall i \in R$$

We typically think of utility functions as representing application characteristics and sometimes as being imposed by network mechanisms. For example, following
the example of Kunniyur and Srikant [4], we use u(x) = −1/x to model TCP-style congestion control.⁶ To the extent that receivers within a session share the same application requirements, it is also reasonable to assume they share a utility function. We feel that this is a natural assumption in the case of single-rate multicast. The second requirement is that both sender- and receiver-oriented utility functions should reduce to the same standard unicast utility function up to a linear transformation when R = 1.

These two restrictions allow us to express both types of session utility functions as the product of a base utility function u(·) and a scaling function f(·). The base utility function u(·) depends only on the session rate and is concave and increasing. It can be thought of as the utility function of a session with a single receiver. The scaling function f(·) depends on the number of receivers in the session. It must be monotonic in its argument, although it need not be strictly increasing. For a sender-oriented definition of session utility, f(R) = κ, where κ is a constant:

$$U_{snd}(x, R) = \kappa\, u(x) \qquad (7)$$

For a receiver-oriented definition, f(R) = κ R, where κ is a constant:

$$U_{rcv}(x, R) = \kappa R\, u(x) \qquad (8)$$
It is possible to entertain other definitions of session utility. We choose these because they are commonplace and mathematically tractable. One obtains equation (7) by treating all sessions equivalently, regardless of the number of receivers. Equation (8) reflects the idea that multicast session utility is itself a social welfare function, representing the aggregate utility of the receiver set. Under our assumptions, this equation is equivalent to the sum of receiver utilities, a simple and commonly used social welfare function.
3.3 The Session-Splitting Problem
In Section 3.2, we identified two alternative definitions of multicast session utility based on sender- or receiver orientation. Now we consider these two definitions in more detail and determine which makes sense in the context of congestion control. We begin by attempting to capture the effect of flexible group membership using an optimization-based approach. Golestani and Sabnani [23] observe that if receivers in a session can be dropped and reassigned to a different session in response to congestion, it is often desirable to split a multicast group into subgroups with different rates. One can think of this approach as an approximation of multi-rate multicast that does not violate the constraint of having a single rate per session and requires less overhead at the receiver [17]. The presence of additional sessions in the network after splitting may increase contention on existing bottlenecks or even create new bottlenecks. Thus not all 6
⁶ A more accurate TCP utility function is u(x) = (√2/T) tan⁻¹(T x/√2), where T is the round trip time [21,22]. Kunniyur and Srikant's approximate function u(x) = −1/x is valid for small end-to-end loss probabilities.
ways of splitting a session lead to an overall improvement in received rates. Ideally, one would like to find a way to split the session that offers a higher rate to some receivers without reducing the rates of any others. A less ideal, but perhaps still tolerable split might reduce some receivers' rates but improve the utilization of the network and allow many more receivers to receive at a higher rate.

In this section, we consider the use of sender- and receiver-oriented social welfare functions to determine whether splitting a session will improve aggregate utility. In general, the choice of sender- or receiver-oriented utility as well as the form of the base utility function will affect the social welfare function. However, for a fixed choice of these factors, we expect the SWF to be well-defined for all possible ways of splitting the session. Additionally, the optimal way of splitting a session should be invariant under a linear transformation of the base utility function. If this were not the case, an arbitrary rescaling of utility could determine whether splitting a session is preferred over not splitting. We will observe that this invariance holds in the case of a receiver-oriented SWF but not in the case of a sender-oriented one.

We begin by formalizing the session splitting problem in terms of utility maximization. In the session-splitting problem, we have a network (N, L) with link capacities C = (c_l, l ∈ L). A set of receivers R ⊂ N could be served by one or more multicast sessions with source s ∈ N − R. We assume that the number and rates of all other sessions in the network are fixed. Capacities in C thus represent the available capacity for multicast sessions serving receiver set R. Each session's rate is limited by its most constrained receiver, that is, by the receiver with the lowest link capacity along the path between it and the source. If this bottleneck link is not shared by all of the receivers, then it may be possible to split the session into two or more sessions yielding a higher rate to some receivers.

Splitting the session is equivalent to partitioning the receiver set into disjoint subsets P = {P_1, P_2, ..., P_N}. We will use 𝒫 to denote the set of all possible partitions of R. Each partition in 𝒫 represents one possible way to divide the receiver set into sessions. Each element of a partition represents a subset of R to be served by a different session. Rates may vary among sessions, but all receivers within a session must receive at a single rate. Computing the rates for each session is, itself, a non-trivial problem since some links will be shared by more than one session. There are many possible mechanisms for determining session rates. One example is the greedy algorithm suggested by Rubenstein, Kurose and Towsley [24] to achieve max-min fairness among the sessions. For our purposes, it is sufficient to assume that we have some deterministic mechanism to perform this rate assignment, which we model as a rate allocation function X : 𝒫 × Z⁺ → ℝ. Given a partition P and an index i, the rate allocation function returns the rate of the session serving P_i. The session-splitting problem requires us to find a partition that maximizes the aggregate utility of the network. Recall that the optimization-based approach defines aggregate utility as the (possibly weighted) sum of all session utilities.
Thus the optimal splitting is a partition that solves

$$\max_{P \in \mathcal{P}} U(P; f, u, X)$$

where

$$U(P; f, u, X) = \sum_{i=1}^{|P|} f(|P_i|)\, u(X(P, i))$$

is the aggregate utility function. We can choose the scaling function f from equations (7) and (8) to solve this problem for sender- and receiver-oriented definitions of session utility.

The aggregate utility function defines a partial ordering over 𝒫. In economic terms, this ordering is the social preference relation over all possible partitions of the receiver set. As explained in Section 3.1, it is customary to regard utility functions as unique up to a linear transformation. A reasonable restriction, therefore, is only to allow social preference relations that remain invariant under a linear transformation of the base utility function, as captured by the following axiom, similar to the ILT property in Section 3.1:

Axiom 1. Let f(·) be a fixed scaling function and X(·,·) be a fixed rate allocation function. For any base utility function u(·), let v(·) be another base utility function such that v(x) = α u(x) + β, where α and β are constants. Then for all P, Q ∈ 𝒫,

$$U(P; f, u, X) \ge U(Q; f, u, X) \iff U(P; f, v, X) \ge U(Q; f, v, X)$$

Theorem 1. Let f_snd(R) = κ and f_rcv(R) = κ R be sender- and receiver-oriented scaling functions. For any base utility function u and rate allocation function X(·,·), the aggregate utility function U(·; f_rcv, u, X) satisfies Axiom 1, while U(·; f_snd, u, X) does not. Furthermore, Axiom 1 can only be satisfied using the scaling function f(R) = f_rcv(R).

The proof of Theorem 1 is quite straightforward and is omitted here due to space limitations. Interested readers can find it in [25]. One immediate consequence of this theorem is that if one accepts that Axiom 1 is indeed an appropriate requirement for any "reasonable" definition of aggregate utility, then our sender-oriented utility definition is not "reasonable". In fact, the only reasonable definition of session utility is a receiver-oriented one.
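The intuition behind Theorem 1 can be checked by brute force on small receiver sets: under f_rcv, replacing u by αu + β (α > 0) shifts every partition's aggregate utility by the constant βκR, since Σ_i |P_i| = R for every partition, whereas under f_snd the shift is βκ|P|, which varies with the number of blocks and can reorder partitions. The sketch below is our own illustration; the rate allocation function X is a stand-in supplied by the caller.

```python
def partitions(items):
    """Enumerate all set partitions of a list (Bell-number many)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put `first` into an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]              # ... or into a block of its own

def aggregate_utility(P, f, u, X):
    """U(P; f, u, X) = sum_i f(|P_i|) * u(X(P, i))."""
    return sum(f(len(Pi)) * u(X(P, i)) for i, Pi in enumerate(P))

def preference_order(receivers, f, u, X):
    """Partitions sorted from best to worst under the given SWF."""
    return sorted(partitions(receivers), reverse=True,
                  key=lambda P: aggregate_utility(P, f, u, X))
```

Comparing preference_order for u and for lambda x: 2 * u(x) + 5 leaves the receiver-oriented ordering unchanged, while the sender-oriented ordering (f constant) can flip between splitting and not splitting.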
4 Consequences of Receiver-Oriented Utility
In Section 3.3 we argued that receiver-oriented session utility functions are an appropriate model for multicast session utility in the session-splitting problem.
In this section, we return to the original congestion control problem and determine whether using receiver-oriented utility functions leads to fair sharing of bandwidth between unicast and multicast sessions. We rewrite the network optimization problem (1-2) as

$$\max_{x = (x_s,\, s \in S)} \sum_{s} \kappa R_s\, u(x_s) \qquad (9)$$

subject to

$$\sum_{s \in S(l)} x_s \le c_l \qquad \forall l \in L \qquad (10)$$
The Kuhn-Tucker conditions for optimality are

$$\kappa R_s\, du/dx = \sum_{l \in L(s)} \lambda_l \qquad (11)$$

$$\lambda_l (x_l - c_l) = 0, \qquad (x_l - c_l) \le 0 \qquad (12)$$

where the λ_l are Lagrange multipliers or link prices and x_l = Σ_{s∈S(l)} x_s is the aggregate rate seen at link l. As before, we also write λ^s = Σ_{l∈L(s)} λ_l as the total session price seen by session s. From the first Kuhn-Tucker condition (11), we observe that the use of receiver-oriented utility functions creates a bias in favor of sessions with large numbers of receivers. To see this, note that

$$du/dx = \lambda^s (\kappa R_s)^{-1} \qquad (13)$$

The optimal rate for session s, x*_s, is given by

$$x^*_s = u'^{-1}\!\left(\lambda^s (\kappa R_s)^{-1}\right) \qquad (14)$$
Equation (13) states that, at optimality, a session's marginal utility should be proportional to its price divided by the number of receivers. We refer to the ratio λ^s/(κ R_s) as the effective session price. The optimal rate can therefore be obtained by taking the inverse of the marginal utility function as shown in (14). Since U_s is concave, u' is a strictly decreasing function of x and its inverse is also a decreasing function. For a fixed session price, a session with a larger number of receivers has a lower effective session price and thus receives a higher rate. We refer to this effect as "tyranny of the majority" (ToM). ToM is a source of unfairness against unicast flows since multicast flows with the same total session price will receive a higher rate. However, the fact that multicast sessions tend to use more links than unicast sessions, particularly as the number of receivers becomes large, means that the session price λ^s for a multicast flow is likely to be higher than that of a unicast session. To understand the fairness properties of rate allocations under receiver-oriented utility functions we must determine whether the price increase associated with the scaling of multicast trees is sufficient to limit the effect of ToM as more receivers are added.⁷
⁷ If one holds that improving the rate of many receivers at the expense of a few is reasonable, giving a larger share of bandwidth to larger groups may not seem unfair. We take the position that a bounded bias in favor of large groups is a defensible form of "controlled unfairness" but that there must be a mechanism to prevent starvation of unicast flows.
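Equation (14) makes the ToM effect easy to demonstrate numerically; a small sketch under our own choice of base utility (u(x) = log x, so u'^{-1}(y) = 1/y):

```python
def optimal_rate(session_price, receivers, kappa=1.0):
    """Optimal rate (14) for u(x) = log(x): x* = u'^{-1}(effective price)."""
    effective_price = session_price / (kappa * receivers)
    return 1.0 / effective_price   # inverse of u'(x) = 1/x

# Two sessions paying the same total price lambda^s = 2.0: the session with
# ten receivers obtains ten times the rate of the unicast session.
print(optimal_rate(2.0, receivers=1))    # 0.5
print(optimal_rate(2.0, receivers=10))   # 5.0
```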
5 Effect of Multiple Points of Congestion
In the previous section, we saw that ToM and the scaling of multicast trees have opposite effects. As we will see shortly, these effects are not necessarily equal in strength. The effect of ToM is likely to be the stronger of the two, allowing sessions with more receivers to receive a greater share of bandwidth. Whether we choose to accept this form of controlled unfairness or introduce a correction, we require a more precise understanding of the interaction of the two effects. In this section, we show that the functional form of the base utility function can be chosen to limit the strength of the ToM bias.
Fig. 1. A binary multicast tree of depth 3 with a sharing depth of 3.
Consider a multicast session in the form of a complete tree of degree k and depth D, such as the one shown in Fig. 1. Each link of the tree has capacity c. We will use a receiver-oriented definition of session utility, but allow an arbitrary base utility function u(x). The tree has a receiver at each leaf, giving R = k^D receivers in total. The multicast session shares the network with a set of k^D one-hop unicast sessions, which are evenly distributed across the links at depth d. There are k^{D−d} unicast sessions on each of the k^d links at level d. We refer to d, the depth in the multicast tree at which unicast sessions share links, as the sharing depth. Let x = (x_s) be the vector of session rates, where x_0 is the multicast rate and x_1, ..., x_R are the rates of the unicast sessions. Shadow prices are represented by a vector of multipliers λ = (λ_1, ..., λ_L), where L = k^d.⁸ For a particular choice of sharing depth d, we can now form the Lagrangian for the basic optimization problem (1).
⁸ The vector λ contains elements for only those links with nonzero price, namely, the k^d links at depth d.
$$L_d(x, \lambda) = k^D u(x_0) + \sum_{i=1}^{R} u(x_i) - \sum_{j=1}^{L} \lambda_j g_j(x) \qquad (15)$$
where {g_j} is the set of capacity constraints for the shared links:

$$g_j(x) = x_0 + \sum_{l = L(j-1,k,d,D)}^{L(j,k,d,D)-1} x_l - c \le 0 \qquad (16)$$

$$L(j, k, d, D) = j\, k^{D-d}$$

We use the symmetry of the tree topology to reduce the problem to three variables: the multicast session rate x_m, the unicast session rate x_i, and the shadow price of a congested link λ. We rewrite the link capacity constraint as

$$g_j(x) = g(x) = x_m + k^{D-d} x_i - c \qquad (17)$$
Solving the first-derivative conditions of the reduced problem for the logarithmic base utility function u(x) = log(x) gives

$$x_m = c/2, \qquad x_i = \frac{c}{2 k^{D-d}}, \qquad \lambda = \frac{2 k^{D-d}}{c} \qquad (18)$$
We observe the following facts about this result:

– At the system optimum, the multicast session receives rate x_m = c/2. This result is independent of the tree depth D, the sharing depth d, and the tree degree k.
– The invariance of the optimal multicast rate is a direct result of the choice of a logarithmic base utility function. As we will see, this property does not hold for other utility functions.
– The remaining capacity on the shared links is split evenly among the sharing unicast sessions. Since the number of sharing sessions is k^{D−d}, the optimal unicast rate depends on D, d and k.
– The total price charged to the multicast session is λ k^d = 2 k^D/c, which is independent of the sharing depth. Under a receiver-oriented definition of session utility, this price is divided by the number of receivers to obtain the effective session price. Thus, the effective session price is independent of d, D and k under a logarithmic utility function.

Since the invariance of the multicast session rate appears to derive from a special choice of utility function, it is interesting to explore the behavior as we modify the functional form. We can derive the following optimality condition:

$$u'(x_m) = \frac{1}{k^{D-d}}\, u'\!\left(\frac{c - x_m}{k^{D-d}}\right) \qquad (19)$$
Equation (19) relates the marginal utility function u'(x) to the function u*'(x) = a u'(a(c − x)) obtained when we rotate u' about the line x = c and scale both the argument and the result by the same factor a. Any point at which these two functions intersect satisfies the optimality condition. Note that u'(x) is the derivative of a concave and strictly increasing utility function, and therefore must be strictly decreasing. Thus, u'(x) and u*'(x) intersect in exactly one point, establishing the uniqueness of the solution. Observe also that the scaling factor a = 1/k^{D−d} ≤ 1. Scaling the argument of u'(c − x) compresses the function along the horizontal axis and moves the point of intersection to the left, while scaling its result compresses the function along the vertical axis and moves the point of intersection to the right.

Figure 2 shows how the allocated multicast rate changes as we vary the sharing depth d (hence, the number of congested links) in a binary tree for three choices of base utility function: u(x) = log(x), u(x) = −1/x and u(x) = −(−log(x))^α. The first function is the now familiar logarithmic utility function. The second is the minimum potential delay (MPD) utility function introduced by Massoulie and Roberts [26] and shown by Kunniyur and Srikant to model the utility of TCP traffic [4]. The third function is shown by Kelly to yield max-min fairness in the limit as α → ∞ [1].⁹ In all three graphs, the single decreasing function is u'(x), the first derivative of the base utility function, and the family of increasing functions are u*'(x) for decreasing a (increasing D − d). The points where u*'(x) intersects u'(x) give the optimal rates for the multicast session as a fraction of available capacity.

As established above, the intersection point is invariant and equal to c/2 for logarithmic utility. The intersection point is also fixed at c/2 when a = 1 for all three functions, corresponding to a sharing depth equal to the maximum tree depth. In both the MPD and max-min fair utility functions, however, the intersection point moves to the left as a decreases. That is, as the sharing depth moves closer to the top of the tree, the number of bottleneck links decreases. However, as more unicast sessions share each bottleneck, the price on each congested link increases and the multicast session receives a smaller fraction of the available bandwidth.

Under the definition of max-min fairness for single-rate multicast [24], the multicast session must share bandwidth equally with all sessions on its most congested link. Thus, in the max-min fair allocation for our k-ary tree example, x_m = c/(k^{D−d} + 1). In the case of Kelly's max-min fair utility function, we see that the optimal rates approach the max-min fair allocations, indicated by the tick-marks along the x-axis in Fig. 2. It appears that the points of intersection converge to these values as we transform the logarithmic utility function into the max-min fair utility function by increasing the exponent α. Demonstrating this convergence formally is somewhat difficult. We can establish a similar result for a family of utility functions that includes both the logarithmic and MPD utility functions and also yields max-min fairness as a limiting case. Consider the family of utility functions u(x) with first derivatives
⁹ In our numerical analysis, we take α to a reasonably high power (α = 250).
Fig. 2. The effect on the optimal allocation for a binary multicast tree as we vary the sharing depth. These graphs show three different marginal utility functions u'(x) along with their transformations u*'(x) for various choices of D − d, with the y-axis shown in log scale. The x-coordinates of the points of intersection give the optimal session rates as a fraction of available capacity. Max-min fair allocations for different values of D − d are indicated along the x-axis. The figure shows that the logarithmic utility function (top) gives the multicast session half the available bandwidth regardless of the number of sharing unicast sessions, whereas the max-min fair utility function (bottom) splits bandwidth evenly among all sessions on the shared link regardless of the number of receivers. The MPD utility function (center) represents a compromise between these two extremes.
u'(x) = 1/x^{α+1}. Such functions include u(x) = log(x) and u(x) = −x^{−α}/α. This family of functions was originally identified by Mo and Walrand [7]. Members of this family are mathematically tractable since the functions u'(x) are homogeneous, satisfying

$$u'(t x) = t^{-r} u'(x) \qquad (20)$$

where r = α + 1. We can simplify the optimality condition of (19):

$$\frac{x_m}{c - x_m} = a^{(r-1)/r} \qquad (21)$$
As a further simplification, we can express the multicast rate as a fraction, p, of available capacity, x_m = p c. Solving for p, we get

$$p = \left(1 + a^{(1-r)/r}\right)^{-1} \qquad (22)$$

In the limit of large α, p converges to the max-min fair allocation:

$$\lim_{r \to \infty} \left(1 + a^{(1-r)/r}\right)^{-1} = \frac{a}{1 + a} = \frac{1}{k^{D-d} + 1} \qquad (23)$$
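A quick numeric check of (22)-(23) for a binary tree with D − d = 3 (our own illustration): r = 1 recovers the logarithmic result p = 1/2 exactly, and large r approaches the max-min share.

```python
def multicast_share(a, r):
    """Multicast fraction p of link capacity from (22) for u'(x) = 1/x**r."""
    return 1.0 / (1.0 + a ** ((1.0 - r) / r))

a = 1 / 2 ** 3                         # a = k**-(D - d) for k = 2, D - d = 3
print(multicast_share(a, 1))           # r = 1 (log utility): 0.5 exactly
print(multicast_share(a, 2))           # r = 2 (MPD utility): ~0.26
print(multicast_share(a, 250))         # large r: ~0.112 ...
print(a / (1 + a))                     # ... close to the max-min share ~0.111
```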
Following Kelly's example in [1], we can prove that u(x) = −x^{−α}/α always gives max-min fairness in the limit α → ∞, by providing an absolute priority to smaller flows.¹⁰ For two rates such that x_{s*} < x_s,

$$\frac{u'(x_{s*})}{u'(x_s)} = \left(\frac{x_s}{x_{s*}}\right)^{\alpha+1} \to \infty \quad \text{as } \alpha \to \infty$$
6 Fairness to Unicast Sessions
In Section 5, we observed that a multicast session was able to obtain a higher rate than unicast sessions sharing the same bottleneck links. We showed that this unfairness is bounded in the presence of multiple points of congestion. However, this result exploited features of an idealized multicast session topology. Adopting a somewhat more realistic model in this section, we investigate whether the same type of bounded unfairness is possible in a more general setting with receiver-oriented utility functions. We also consider whether there is any multicast utility function that allows a strictly equal split of shared bottleneck bandwidth between a multicast and a unicast session.

Adopting the fairness objective proposed by Handley, Floyd and Whetten [28], that the algorithm be provably fair relative to TCP in the steady state, we define a generalized notion of TCP fairness. We say that a multicast session utility function U(x; R) = f(R) u(x) is strictly unicast-fair if the optimal rate for the multicast session is the same as would be obtained by a unicast session with utility function u(x) along the most congested source-to-receiver path in the multicast tree. This definition is equivalent to TCP-fairness in the case where u(x) = −1/x, the MPD utility function. We first show that neither sender- nor receiver-oriented multicast utility functions lead to strictly unicast-fair allocations and derive a result suggesting that strict fairness is difficult to achieve under any definition of session utility.

Consider the modified star network topology shown in Figure 3. A single multicast session with source node s and receivers {1, ..., R} shares the network with R unicast sessions, one from s to each receiver. Link l_0 from the source to the central node is shared by all sessions and has effectively infinite capacity. Each link l_i from the center to receiver i is shared by the multicast session and one unicast session. Link l_1 is the bottleneck link, with capacity β c, where β < 1 and c is the capacity of all other links l_i, i > 1. Receiver 1 is the most congested receiver in the multicast session.
¹⁰ A similar result is reported in recent work by Bonald and Massoulie [27].
Fig. 3. A multicast tree with a modified star topology. Receiver 1 is most congested.
We give the unicast sessions the MPD utility function u(x) = −1/x. The multicast session has utility function u(x; R) = f(R) u(x). Let x_m be the rate of the multicast session and x_i be the rate of the unicast session to receiver i. A strictly tcp-fair allocation would split the bandwidth on l_1 equally between x_m and x_1: x_m = x_1 = β c/2. We can substitute this rate into the optimality conditions of the optimization problem (9-10) to determine the appropriate scaling function f(R) that will lead to the tcp-fair allocation, obtaining

$$f(R) = 1 + \frac{(R-1)\,\beta^2}{2\sqrt{2}\,(2-\beta)^2} \qquad (24)$$
This result shows that tcp-fairness can be achieved in the optimization-based framework by maximizing a weighted sum of utilities with weights given by a scaling function f(R). However, the presence of β, a topological parameter, in the scaling function suggests that the correct scaling function depends on topological properties of the network.

We now consider a generalized version of the previous example with no explicitly defined network topology. Consider a network containing a set of links L. The network is shared by two sessions v and w, which have rates x_v and x_w, respectively. Each session uses a subset of links in the network and session w only uses a proper subset of links that are also used by v. Formally, L(w) ⊂ L(v) ⊆ L. The sessions have R_v and R_w receivers with R_v > R_w. We assume that the path to the most constrained receiver in both v and w is the same and is therefore entirely contained in L(w). The Lagrangian for the optimization problem is

$$\mathcal{L}(x; \lambda) = f(R_v)\, u(x_v) + f(R_w)\, u(x_w) + \sum_{l \in L(w)} \lambda_l (x_v + x_w - c_l) + \sum_{l \in L(v) - L(w)} \lambda_l (x_v - c_l) \qquad (25)$$
From the Kuhn-Tucker conditions, we derive an optimality condition on the ratio of marginal utilities:

$$\frac{u'(x_v)}{u'(x_w)} = \frac{f(R_w)\, \lambda^v}{f(R_v)\, \lambda^w} \qquad (26)$$

where

$$\lambda^v = \sum_{l \in L(v)} \lambda_l, \qquad \lambda^w = \sum_{l \in L(w)} \lambda_l \qquad (27)$$
Consider the family of base utility functions satisfying u'(x) = 1/x^α, α ≥ 1. Note that this family includes both the MPD and logarithmic utility functions. The ratio of session rates is

$$\frac{x_w}{x_v} = \left(\frac{f(R_w)\, \lambda^v}{f(R_v)\, \lambda^w}\right)^{1/\alpha} \qquad (28)$$
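The limiting behavior claimed below for (28) is easy to see numerically; the prices and weights here are arbitrary placeholders of ours:

```python
# Inner ratio f(R_w)*lam_v / (f(R_v)*lam_w); any fixed value != 1 works.
f_w, f_v, lam_v, lam_w = 1.0, 8.0, 6.0, 1.0   # inner ratio = 0.75
ratio = (f_w * lam_v) / (f_v * lam_w)
for alpha in (1, 2, 10, 100):
    print(alpha, ratio ** (1 / alpha))        # 0.75, 0.87, 0.97, 0.997 -> 1
```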
In a strictly tcp-fair allocation, the ratio x_w/x_v = 1. From equation (28), it is clear that the actual value of this ratio depends on both the choice of scaling function and the ratio λ^v/λ^w. It is also apparent that the ratio x_w/x_v approaches 1 in the limit as α → ∞. Thus, the exponent α offers one way to control unfairness for any choice of scaling function; increasing it moves the resulting rate allocation closer to max-min fairness. However, only the max-min utility function can guarantee strict unicast fairness for an arbitrary choice of scaling function. If a utility function other than max-min is used, providing strict unicast fairness requires careful selection of the scaling function. For example, strict unicast fairness could be achieved by exploiting a scaling law relating the total price of a multicast session to its number of receivers. Chuang and Sirbu propose such a law for static multicast costs [29] with the form

$$\lambda^s \propto R_s^k \qquad (29)$$
The authors empirically evaluate the scaling exponent k, finding its value to be constant over a wide range of network topologies. This law assumes, however, that link costs in the network are static. To be applicable for the purposes of congestion control, such a scaling law would have to be established for dynamically changing prices that reflect link congestion. If such a scaling law can be found, then strict unicast fairness would result from a multicast session utility function U_s(x) ∝ R_s^k u(x). We leave the search for such a scaling law as a direction for future research, but note here that, as presented in Section 3.3, the sum of session utilities under such a multicast utility function would not be invariant under a linear transformation of u(x).
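The cancellation argument can be stated in three lines (a sketch of ours; A and k are hypothetical constants, with k in the range Chuang and Sirbu report for static costs):

```python
A, k = 0.5, 0.8                      # hypothetical price-law constants
for R in (1, 10, 100, 1000):
    lam_s = A * R ** k               # session price under lam_s = A * R**k
    effective = lam_s / R ** k       # weight f(R) = R**k cancels the growth
    print(R, effective)              # effective price = A for every group size
```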
7 Conclusion
This paper presented an optimization-based scheme for multicast congestion control based on utility maximization. Appealing to the economic theory underlying this approach, we proposed the use of a receiver-oriented definition of session utility. By considering the incentive to split multicast sessions into smaller sessions, we showed that only receiver-oriented utility functions ensure that the optimal solution of the session-splitting problem remains invariant under a linear transformation of the utility scale.
We identified two sources of unfairness that arise when maximizing the sum of receiver-oriented utility functions, one favoring unicast sessions and one favoring multicast. Unicast sessions are favored because they tend to use fewer links than multicast sessions and thus are charged a lower price for bandwidth. Multicast sessions are favored by the tyranny of the majority effect because the sum of link prices in the session is divided by the number of receivers and this reduced price is used to compute the session rate. When these two effects are combined, a net unfairness results that favors sessions with many receivers over sessions with few, with unicast sessions faring worst of all. This unfairness can be controlled, however, by choosing the form of the base utility function. While it is difficult to achieve strict fairness between unicast and multicast traffic, we argue that controlled unfairness is a reasonable goal, particularly as it provides an incentive to use multicast by rewarding larger groups.

Much work still remains to be done in this area. In the work presented here, we have focused on the economic interpretation of the optimization-based approach to reason about the fairness properties of resource allocation at system equilibrium. A complementary line of inquiry concerns the convergence and stability properties of the equilibria in the multicast case. A promising direction of future work is to extend the growing body of relevant research for unicast [30,31,32,33] to the multicast case.
References

1. Kelly, F.: Charging and rate control for elastic traffic. European Transactions on Telecommunications 8 (1997) 33-37
2. Kelly, F.P., Maulloo, A., Tan, D.: Rate control in communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society 49 (1998) 237-252
3. Gibbens, R., Kelly, F.: Resource pricing and the evolution of congestion control. Automatica 35 (1999) 1969-1985
4. Kunniyur, S., Srikant, R.: End-to-end congestion control: utility functions, random losses and ECN marks. In: Proc. INFOCOM (2000)
5. Low, S.H., Lapsley, D.E.: Optimization flow control, I: Basic algorithm and convergence. IEEE/ACM Transactions on Networking (1999)
6. Athuraliya, S., Lapsley, D., Low, S.: An enhanced random early marking algorithm for internet flow control. In: Proc. INFOCOM (2000)
7. Mo, J., Walrand, J.: Fair end-to-end window-based congestion control. IEEE/ACM Transactions on Networking (1999)
8. Golestani, S., Bhattacharyya, S.: A class of end-to-end congestion control algorithms for the internet. In: Proc. ICNP'98 (1998)
9. Kar, K., Sarkar, S., Tassiulas, L.: Optimization based rate control for multirate multicast sessions. In: Proc. INFOCOM 2001 (2001)
10. Deb, S., Srikant, R.: Congestion control for fair resource allocation in networks with multicast flows. In: Proc. IEEE Conference on Decision and Control (2001)
11. Kasera, S., Bhattacharyya, S., Keaton, M., Kiwior, D., Kurose, J., Towsley, D., Zabele, S.: Scalable fair reliable multicast using active services. IEEE Network Magazine (2000)
12. Rizzo, L.: pgmcc: A TCP-friendly single-rate multicast. In: Proceedings of SIGCOMM. (2000) 17–28
13. Madden, P.: Concavity and Optimization in Microeconomics. Basil Blackwell (1986)
14. Hillier, F.S., Lieberman, G.J.: Introduction to Mathematical Programming. 2nd edn. McGraw-Hill (1995)
15. Key, P., McAuley, D., Barham, P., Laevens, K.: Congestion pricing for congestion avoidance. Technical Report MSR-TR-99-15, Microsoft Research (1999)
16. Byers, J.W., Luby, M., Mitzenmacher, M., Rege, A.: A digital fountain approach to reliable distribution of bulk data. In: Proc. SIGCOMM '98. (1998)
17. Li, X., Ammar, M.H., Paul, S.: Video multicast over the internet. IEEE Networks Magazine (1999)
18. Shenker, S.: Fundamental design issues for the future internet. IEEE J. Selected Areas Comm. 13 (1995) 1176–1188
19. Hirshleifer, J., Hirshleifer, D.: Price Theory and Applications. 6th edn. Prentice Hall (1997)
20. Arrow, K.J.: Social Choice and Individual Values. 2nd edn. Yale Univ. Press (1963)
21. Kelly, F.P.: Mathematical modelling of the internet. In: Proc. 4th International Congress on Industrial and Applied Mathematics. (1999)
22. Low, S.H.: A duality model of TCP flow controls. In: Proc. ITC Specialist Seminar on IP Traffic Measurement, Modeling, and Management. (2000)
23. Golestani, S.J., Sabnani, K.K.: Fundamental observations on multicast congestion control in the internet. In: Proc. INFOCOM. (1999)
24. Rubenstein, D., Kurose, J., Towsley, D.: The impact of multicast layering on network fairness. In: Proc. SIGCOMM 99. (1999)
25. Shapiro, J.K., Towsley, D., Kurose, J.: Optimization-based congestion control for multicast communications. Technical Report UM-CS-2000-033, University of Massachusetts at Amherst (2000)
26. Massoulié, L., Roberts, J.: Bandwidth sharing: Objectives and algorithms. In: Proc. INFOCOM. (1999)
27. Bonald, T., Massoulié, L.: Impact of fairness on internet performance. In: Proc. ACM SIGMETRICS. (2001)
28. Handley, M., Floyd, S., Whetten, B.: Strawman specification for TCP friendly (reliable) multicast congestion control. Technical report, Reliable Multicast Research Group (1998)
29. Chuang, J., Sirbu, M.: Pricing multicast communications: A cost-based approach. In: Proc. INET'98. (1998)
30. Low, S.H., Paganini, F., Doyle, J.C.: Internet congestion control. IEEE Control Systems Magazine (February 2002)
31. Johari, R., Tan, D.: End-to-end congestion control for the internet: Delays and stability. To appear in IEEE/ACM Transactions on Networking (2001)
32. Massoulié, L.: Stability of distributed congestion control with heterogeneous feedback delays. Technical report, Microsoft Research (2000)
33. Hollot, C., Misra, V., Towsley, D., Gong, W.: On designing improved controllers for AQM routers supporting TCP flows. In: Proc. IEEE Infocom 2001. (2001)
Severe Congestion Handling with Resource Management in Diffserv on Demand
András Császár1,2, Attila Takács1,2, Róbert Szabó1,2, Vlora Rexhepi3, and Georgios Karagiannis3
1 NetLab, Ericsson Research HUNGARY {robert.szabo, andras.csaszar, attila.takacs}@eth.ericsson.se
2 High Speed Networks Laboratory, Department of Telecommunications and Telematics, Budapest University of Technology and Economics, {robert.szabo, andras.csaszar, takacs}@ttt.bme.hu
3 ELN, Ericsson EuroLab Netherlands {vlora.rexhepi, georgios.karagiannis}@eln.ericsson.se
Abstract. Quality of Service (QoS) for the Internet has been discussed for a long time without any major breakthrough. There are several reasons, the main one being the lack of a scalable, simple, fast and low-cost QoS solution. A new QoS framework, called resource management in differentiated services (RMD), aims to correct this situation. This framework has been published in recent papers and extends the IETF differentiated services (diffserv) architecture with new admission control and resource reservation concepts in a scalable way. This paper focuses on proposing and investigating two resource reservation solutions to the problem of severe congestion within a diffserv-aware network utilizing an admission control scheme called Resource Management in Diffserv (RMD). The different severe congestion solutions are compared using extensive simulation experiments.
1 Introduction
Internet QoS has been the most challenging topic of networking research for several years now. The two existing Internet Protocol (IP) quality of service (QoS) architectures, Integrated Services (intserv) and Differentiated Services (diffserv) [1], are the results of the research work in this area. Currently, the increasing popularity of the Internet as well as the growth of mobile communications have driven the development of IP-based solutions for wireless networking. The introduction of IP-based transport in radio access networks (RANs) is one of these networking solutions. When compared to traditional IP networks, an IP-based RAN has specific characteristics (see e.g. [2]) that impose stricter requirements on resource management schemes. Independently of the transport network, the cellular user expects to get the same service as in STM-based transport networks. In addition to this requirement, the situation is further complicated by the fact that the RAN is large in terms of its
geographic size and the number of inter-connected nodes (hundreds or even thousands of nodes) with a high cost of leased transmission lines, and the proportion of real-time traffic may reach 100%. Resource management and CAC schemes working in IP-based RANs will have to enable dynamic admission control and fast resource reservation, and at the same time they need to be simple, low-cost and easy to implement, along with good scalability properties. This paper focuses on proposing and investigating, with simulation experiments, two resource reservation solutions to the problem of severe congestion within a diffserv-aware network utilizing an admission control scheme called Resource Management in Diffserv (RMD) [3], described in the following section. Severe congestion can be considered as an undesirable state, which may occur as a result of a route change or a link failure. Typically, routing algorithms are able to adapt to reflect changes in the topology and traffic volume. In such situations the re-routed traffic will traverse a new path. Nodes located on this new path may become overloaded, since they suddenly might need to support more traffic than their capacity. Moreover, when a link fails, traffic passing through it may be dropped, degrading performance. The rest of the paper is organized as follows: Section 2 lists related work in the resource management field. The severe congestion requirements and solutions are described in Section 3. Section 4 presents the simulation experiment results and their analysis. Finally, Section 5 concludes.
2 Related Works
Resource provisioning and traffic control algorithms use a signaling protocol to communicate the resource needs from end systems to routers, which either rely on information collected by measurements [4,5] or maintain some sort of reservation state. Generally, one group of approaches requires every network entity to maintain per-flow state information [6,7]. Another broad class of algorithms does not require per-flow state information, but rather maintains aggregated states in network core nodes, e.g., [8]. These mechanisms generally assume soft reservation status in the network, and either aim to update it periodically, try to harmonize the actions of routers along the path, or take an economic approach to handle congestion [9].
2.1 Resource Management in DiffServ – RMD Framework
Currently, none of the existing approaches satisfies the requirements for an appropriate resource management scheme within an IP-based RAN. In several recent papers [10,11] and IETF drafts [3,12] a new QoS framework, called Resource Management in DiffServ (RMD), is specified that aims to correct this situation. RMD extends the diffserv architecture with dynamic admission control and resource provisioning; it has good scaling properties and as such a low cost of implementation. Moreover, this framework has a wide scope of applicability in different types of diffserv networks.
In compliance with diffserv concepts, the RMD framework distinguishes between the problem of a complex reservation within a domain and handling a simple reservation within a node. Accordingly, there are two types of protocols defined within the RMD framework: the Per Domain Reservation (PDR) and the Per Hop Reservation (PHR) protocol groups. Per Domain Reservation (PDR) is implemented only at the edges of the RMD domain and handles resource management in the entire diffserv domain. Per Hop Reservation (PHR) is used to perform resource reservation per diffserv class or Per Hop Behaviour (PHB) in each node of the diffserv domain. PHR-aware nodes are not able to differentiate between individual traffic flows, as in, e.g., RSVP, because no per-flow information is stored and packet scheduling is done per aggregate. This way, PHR is optimized to reduce the functionality requirements of interior nodes. In the following, we describe the simplified PHR operation. Before a new user data flow is admitted into the domain at one of the ingress edge nodes, it first has to signal its resource requirement (QoS Request). The ingress node classifies it into an appropriate PHB. These resource requests are transformed to discrete bandwidth values. Then the ingress edge node sends a PHR Resource Request packet to the egress edge, which is marked by any of the intermediate routers that do not have enough free resources. The egress edge node reports the reservation status back to the ingress, as a result of which the ingress can admit or reject the QoS request. If the flow is admitted, then periodic reservation refreshes are sent between the ingress and egress edge nodes. The RMD framework [3] defines two different PHR groups: the reservation-based and the measurement-based groups, which differ in the method by which a core node determines whether to mark a resource request packet, along with some signaling needs for this purpose. Here, we solely focus on the reservation-based PHR methods, where nodes maintain a per-PHB reservation state. This is accomplished by using a combination of the reservation soft state and explicit release principles. This means that the reserved resources can be released either when they are not refreshed regularly (one refresh packet in every PHR refresh period), or when they are explicitly released by a predefined release message. In order to decrease the signaling traffic load on the network, the number of PHR refresh messages has to be minimized. Therefore, the PHR refresh period has to be chosen as large as possible, e.g., 30 seconds. The admittance decision is based on a threshold of maximum available resource units set for each PHB. Currently, there is one reservation-based PHR protocol defined, the Resource Management in Diffserv On DemAnd (RODA) protocol specified in [12].
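A minimal sketch of this threshold-based admittance decision at an interior node, assuming only a per-PHB aggregate counter, may help fix ideas; the class layout and function names below are our own illustration, not part of the RODA specification.

```python
# Hypothetical sketch of reservation-based PHR admission at an interior node.
# State layout and names are illustrative; see the RODA draft [12] for the
# actual protocol.

class PhbState:
    def __init__(self, threshold_units):
        self.threshold = threshold_units  # max units admitted for this PHB
        self.reserved = 0                 # aggregate units currently reserved

def on_resource_request(phb, requested_units, packet):
    """Admit the request if the aggregate stays within the PHB threshold,
    otherwise mark the packet so the egress can report the failure."""
    if phb.reserved + requested_units <= phb.threshold:
        phb.reserved += requested_units   # aggregate state only: no per-flow info
    else:
        packet["marked"] = True           # signals 'not enough free resources'
```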
3 Severe Congestion
3.1 Problem Definition and Requirements
Severe congestion can be considered as an undesirable state that may occur as a result of a route change or a link failure. A severe congestion situation will severely degrade the performance of real-time traffic and therefore it has to be detected and resolved very fast. Typically, in a RAN where the majority
of traffic is real-time, the severe congestion situation has to be detected by the ingress edges within one second. Subsequently, the ingress edge has to undertake predefined policing actions to lower the incoming traffic volume in order to resolve the severe congestion situation. The severe congestion solution can be decomposed into four subsequent phases:
– Detection of severe congestion by interior nodes: An RMD interior node has to detect the severe congestion situation using one of the following methods (a sketch of the counting detection and proportional marking logic appears after this list of phases):
– Volume measurements: by using measurements on the data traffic volume. If the volume of the data traffic increases suddenly, it is deduced that a possible route change and, at the same time, a severe congestion situation occurred.
– Counting: using a counter that counts the number of dropped data packets. The severe congestion state is activated when this number is higher than a pre-defined threshold. This method is similar to the previous one but is much simpler. However, it can only be applied when the traffic characteristics are known.
– Increased number of refreshes: if the number of resource units per PHB refreshed by PHR refresh messages is much higher than the number of resources refreshed previously, then the node deduces that a severe congestion occurred. This procedure is very efficient, but it can only be used when the PHR refresh period is small.
The first two detection methods can be applied to both RMD schemes, i.e., reservation-based and measurement-based. The last method can only be applied to the reservation-based RMD scheme.
– Propagation of severe congestion state to egresses: In this phase, an interior node notifies an egress node of the severe congestion situation. Due to the fact that the interior node does not store and maintain any flow-related information, it is not possible to identify the ID of a passing flow and the IP address of its ingress node. Therefore, the interior node is not able to directly notify the ingress node that a severe congestion situation occurred. One of the following methods is applied:
– Greedy marking: all packets which are passing through a severely congested interior node and are associated with a certain PHB will be remarked to indicate severe congestion;
– Proportional marking: this method is similar to the previous method, with the difference that the number of remarked packets is proportional to the detected overload;
– PHR message marking: only PHR signaling messages that are passing through a severely congested interior node will be marked. The marking is done by setting a special flag in the protocol message, i.e., “S” (see [12]). This procedure is efficient, but it can only be used when the PHR refresh period is small.
The last method can only be applied to the reservation-based scheme, while the other two can be applied to both RMD schemes.
– Egress to ingress state propagation: In this phase, the egress node has to process the severe congestion information received from the interior nodes. Moreover, it notifies the ingress node that a severe congestion occurred. The type of the received severe congestion information depends on the propagation method used by the interior nodes (see above). When either the “greedy marking” or the “PHR message marking” method is used, the severe congestion information simply notifies the egress node that a severe congestion situation occurred. When the “proportional marking” method is used, the egress node is informed that a percentage of the incoming traffic is overloading a certain communication path. The egress node forwards this information to the proper ingress node for all congestion-marked flows.
– Ingress actions on severe congestion: the ingress node processes the severe congestion information received from the egress node and undertakes certain actions to resolve the severe congestion situation. These actions depend on the method used by the interior nodes. One of the following sets of actions can be undertaken:
– Re-allocation: an ingress node blocks new traffic flows and re-initiates all on-going flows that are affected by severe congestion. During the re-allocation procedure the ingress nodes will temporarily release and subsequently re-initiate all on-going flows that were affected by severe congestion.
– Stochastic blocking: an ingress node blocks new traffic flows and terminates some of the on-going flows based on a probabilistic competition. The termination probability of a connection is proportional to its severe-congestion-marked traffic volume.
The first method is applied when either the “greedy marking” or “PHR message marking” procedure is used. The latter can only be applied when the “proportional marking” detection procedure is used.
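The counting detection and proportional marking steps referenced in the list above admit a compact sketch; the interval bookkeeping, threshold value, and class layout below are our own illustrative assumptions, not taken from the RMD drafts.

```python
# Illustrative sketch of 'counting' detection combined with proportional
# marking over one measurement interval of length S.

class CongestionMonitor:
    def __init__(self, drop_ratio_threshold=0.05):
        self.threshold = drop_ratio_threshold
        self.dropped = 0      # packets dropped in the current interval
        self.forwarded = 0    # packets forwarded in the current interval

    def record(self, was_dropped):
        if was_dropped:
            self.dropped += 1
        else:
            self.forwarded += 1

    def end_of_interval(self):
        """Return the fraction of outgoing packets to remark (0.0 if no
        severe congestion was detected in this interval)."""
        total = self.dropped + self.forwarded
        ratio = self.dropped / total if total else 0.0
        self.dropped = self.forwarded = 0
        # severe congestion: remark a share of packets proportional
        # to the detected overload
        return ratio if ratio > self.threshold else 0.0
```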
3.2 Approaches to Handle Severe Congestion
In all of the consecutive approaches we commonly assume the following: i) interior nodes use the counting detection method; in particular, each interior node performs packet drop ratio measurements for every interval of length S per DSCP; ii) ingress nodes maintain per-flow information that includes the flow ID and the requested amount of resources, i.e., bandwidth units; iii) egress nodes maintain per-flow information that includes the flow ID and the IP address of the ingress node; iv) each node is capable of remarking each standardised DSCP into locally defined DSCPs in order to signal severe congestion; v) egress nodes check the DSCP field of each passing data packet in order to identify its severe congestion status.
Solution A – Re-allocation of resources. The main characteristics (Fig. 1) of this scenario are that each interior node uses the greedy marking procedure and each ingress node uses the re-allocation procedure to solve the severe congestion situation.
Fig. 1. Solution A and B Operation (message sequence charts: drop-ratio detection at the interior node, packet marking, PDR congestion reports, and the resulting ingress actions)
Whenever the measured drop ratio in an S interval is higher than a pre-configured threshold value, the interior node deduces that a severe congestion occurred. It remarks the DSCP field of all passing packets into a locally defined DSCP to indicate severe congestion. The egress nodes monitor the DSCP field of each passing data packet. If re-marked DSCP fields are detected, then the egress node deduces that a severe congestion occurred. For each affected flow, the egress node reports the severe congestion situation to the corresponding ingress node by using a signaling PDR congestion report message. When the ingress node receives a PDR congestion report message from the egress, it will block new incoming flows for a certain amount of time. Moreover, the on-going flows that are affected by severe congestion have to re-initiate their reservations: they must temporarily terminate their user data flow, deallocate their reservation with an explicit release request message, and try to reallocate their original resource usage along the path. Note that the release request and reservation request messages are not necessarily sent immediately after congestion notification, as we will show later on, though user data must immediately be terminated. Interior nodes receiving a release request message decrease their aggregate reservation states by the number of units indicated in the corresponding message, but not below zero. Unfortunately, in the case of a link failure and re-route event, flows previously accommodated on different paths will try to release resources on links where they have not yet allocated any. In order to cope with this problem, interior nodes are disallowed to release more resources than previously allocated. Upon receiving a reservation request message, the interior node must check whether the sum of already existing reservations plus the new request is within a threshold. If so, the interior node updates its reservation state, or else it marks the packet to indicate reservation failure. The new reservation might be admitted by all interior nodes and signaled back by the egress to restart the user data. However, if the re-initiated allocation fails or times out (one may allow several retries before permanently terminating any connection), then the connection must be terminated by the end hosts, and the ingress node must initiate the final termination of the flow at the end host.
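A minimal sketch of the interior-node reservation-state handling just described, assuming a per-PHB aggregate counter; the function names and return conventions are our own.

```python
# Sketch of aggregate reservation-state handling at an interior node under
# Solution A; names and the threshold variable are illustrative assumptions.

def on_release(reserved_units, released_units):
    # Re-routed flows may release resources they never allocated on this
    # link, so the aggregate state is clamped at zero.
    return max(reserved_units - released_units, 0)

def on_reservation_request(reserved_units, requested_units, threshold):
    # Admit the re-initiated reservation only if the aggregate stays within
    # the per-PHB threshold; otherwise report failure (the request packet
    # would be marked so the egress can signal the ingress).
    if reserved_units + requested_units <= threshold:
        return reserved_units + requested_units, True
    return reserved_units, False
```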
Solution B – Stochastic (distributed) blocking. The main characteristics (Fig. 1) of this scenario are that each interior node uses the proportional marking procedure and each ingress node uses the stochastic blocking procedure to solve the severe congestion situation. When the ingress node receives the PDR congestion report message, it will block new incoming flows for a certain amount of time. Moreover, it will also terminate some of the ongoing connections: it performs a probabilistic drop on each individual flow for which a congestion report message has been received, according to the following formulas (a small helper transcribing them is sketched after the list):
– Algorithm B1: Pdrop^B1 = (# of marked bytes) / (# of marked bytes + # of unmarked bytes). The underlying idea here is to base the blocking estimation purely on measured data. In this algorithm the blocking probability per connection (per flow) is calculated as the ratio between the dropped bytes and the maximum number of bytes that can be supported by the interior node (dropped / received).
– Algorithm B2: Pdrop^B2 = (# of marked bytes) / (r·Se/8), where r [bps] is the allocated rate for the connection, Se [sec] is the time base used at the egress edge and 8 is for bit/byte conversion. This version aims to eliminate the packet drops of connections by using the administrated reservations (dropped / sent-administrative).
– Algorithm B3: Pdrop^B3 = (# of marked bytes) / (2·(# of marked bytes) + # of unmarked bytes). Here, the blocking probability per connection (per flow) is calculated as the ratio between the dropped bytes and the total volume of user data (associated with the same connection) entering the RMD domain (dropped / sent-measured).
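The three formulas translate directly into code; the following helpers (names and argument conventions are ours) are a plain transcription.

```python
# Transcription of the three blocking-probability formulas of Solution B.

def p_drop_b1(marked_bytes, unmarked_bytes):
    # dropped / received
    return marked_bytes / (marked_bytes + unmarked_bytes)

def p_drop_b2(marked_bytes, r_bps, s_e_sec):
    # dropped / sent-administrative: r*S_e/8 bytes would be sent at rate r
    return marked_bytes / (r_bps * s_e_sec / 8.0)

def p_drop_b3(marked_bytes, unmarked_bytes):
    # dropped / sent-measured, approximating dropped bytes by marked bytes
    return marked_bytes / (2.0 * marked_bytes + unmarked_bytes)
```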
4 Numerical Results
In this paper we compare the severe congestion solutions described in Section 3 by means of performance evaluation through simulation. For these simulation experiments we used the network simulator (ns) [13] environment. This section describes the traffic models used, the network topology, the performed experiments, and their results.
4.1 Simulation Model
Traffic models: Based on the operational description of the RMD protocol, resource demands are handled in bandwidth units, which we will also use when describing our traffic models. First of all, one unit was set to represent a rate allocation of 2000 bytes/second. The reason behind this was that one unit should represent the rate required by an encoded voice communication, e.g., GSM coding. Altogether we examined three different scenarios where i) calls requested only 1 unit homogeneously, where ii) calls requested bandwidths of {1, 2, . . . , 20}
units and iii) where call demands were selected from {1, 2, . . . , 100} units. Calls arrived according to a Poisson process with parameter λi for calls requesting i units of bandwidth. The average call holding time was set to 1/µ = 90 seconds. The call and bandwidth unit requests were generated in a way that the demanded load for each reservation unit class was equalized on the average, or more formally (λi/µ)BWi = (λj/µ)BWj, where BWi = i [units] is the bandwidth demand and λ1 = 0.9 1/sec by default. Hence higher bandwidth demands arrived less frequently than smaller ones. This is not unrealistic, since higher bandwidth requests are probably more expensive. Packet sizes (L) of the connections were determined according to their reservations in the following way: Li = 40·BWi [bytes]. For the sake of simplicity packet inter-arrival times were kept constant (constant bit rate - CBR), hence every flow sent one single packet in each 20 msec interval.
Network topology: For the evaluation of the methods we used a simple delta topology with the graph G(V, E), where V denotes the vertices {e0, e1, e2} and E denotes the edges {(e0, e1), (e0, e2), (e1, e2)}. With this topology, re-routing and its effects are easily traceable. For the sake of simplicity only e0 generated traffic for the other two edges. In order to have effective multiplexing of flows, the capacity (C) of the links was set to be able to accommodate at least 100 flows of the highest bandwidth demands, i.e., to 100, 2,000 and 100,000 units. As discussed earlier, the severe congestion detection is based on packet drop ratio measurements. Hence, it was important to find the proper dimensioning for the network buffers. As our traffic model was based on CBR traffic with 20 msec packet inter-arrival times, we determined the queue lengths so that no packet loss can occur during normal operation. We dimensioned for a target load level of 80% link capacity, hence the buffer sizes (B) were determined using the following formula: B = C ∗ 0.02 ∗ 0.8 [bytes].
Network events: In our simple delta network, after the system achieves stationarity, the link between nodes e0 and e2 goes down at 350 sec of simulation time. Afterwards, the dynamic routing protocol (OSPF) updates its routing table at 352.0 sec and all flows previously taking the e0 − e2 path will be re-routed to the e0 − e1 − e2 alternate path. Hence, the e0 − e1 link will suffer a serious overload resulting from the re-route event.
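For concreteness, the traffic-model arithmetic described above can be collected into a few helpers; the constants follow the text, while the function names are our own.

```python
# Reconstruction of the traffic-model arithmetic described in the text.

MEAN_HOLDING = 90.0          # 1/mu = 90 s
LAMBDA_1 = 0.9               # arrival rate of 1-unit calls [1/s]

def arrival_rate(i):
    # Equalized per-class load: (lambda_i / mu) * BW_i is constant across
    # classes, so lambda_i = lambda_1 / i for a demand of i units.
    return LAMBDA_1 / i

def packet_size(i):
    # L_i = 40 * BW_i bytes; one packet every 20 ms gives 2000*i bytes/s,
    # i.e., exactly i bandwidth units.
    return 40 * i

def buffer_size(link_units):
    capacity_bytes_per_s = link_units * 2000
    return capacity_bytes_per_s * 0.02 * 0.8   # B = C * 0.02 * 0.8 [bytes]
```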
4.2 Numerical Evaluation
Network utilization: It can be seen in Fig. 2/a that with solution A, more reservation messages were accepted by the severely congested node during the re-initiation procedure than the target admission threshold (80%). This phenomenon is due to certain properties of the protocol related to the explicit release of soft state resources, whose basic idea is discussed in [11]. Nevertheless, the above problem can briefly be explained by the following operation: First of all, in order to achieve better utilization, the soft state refresh period (T), which is in all cases set to 30 sec, is sub-divided into 10 cells, where a sliding (or time) window algorithm is used to smooth out the T-long discrete time steps. If a re-route event happens in the system after the link failure, all of
the traffic originally traversing a different path will flow through the single operating link. This will evidently create a severe congestion situation. Here, however, not only data packets but also protocol signaling packets (see [12]) are involved, which will affect the administrated reservations in the following way. With solution A, release messages of the re-initialization procedure (see Section 3.2) will decrease the number of registered reservations; however, not only the originally accommodated flows but also the re-routed ones will try to release their allocations. Hence the volume of releases must be limited, which is done by permitting releases only while the administered reservation state is above zero. Unfortunately, connections that have not yet been notified of severe congestion (longer round trip times (RTT)) will keep on sending their periodic reservation refreshes, which increase the administered reservation. This affects the same reservation state that the release messages compete for, hence allowing more releases and reallocations than desired. This overshoot will only leave the system after a refresh period (see it at around 382 sec in Fig. 2/a). On the other hand, the descendant algorithms of solution B differ in the calculated call dropping probability (see Section 3.2) and do not stop and re-allocate the connections but instead immediately drop some of them in proportion to the detected overload. This blocking probability is the smallest with algorithm B3, and highest with B1. It can be seen that, as expected, the lower the call blocking ratio is, the higher the maintained utilization is. Depending on the measurement time base (S), the retained utilization is above, at, or below the target level. It can be seen that with different measurement time bases, different drop ratio approximations perform best, while solution A is almost indifferent to the measurement time base (see Fig. 2/a and b). Fig. 3 shows short term transients of the algorithms. Solution A is shown in Fig. 3/a for three different measurement time bases. It is interesting to see that since user traffic had a well defined period of 20 msec, measurements with a 50 msec time base introduced a high level of oscillation. It can also be seen that it takes a couple of measurement periods to bring the load back around the desired level, though the control time is still in the order of 100 msec. Solution B variants react very similarly (see Fig. 3/b), where B1 and B2 operations result in exactly the same transients.
Packet drop ratios: Here, only some representative results are presented due to space limitations. As expected, the shorter the measurement time base is, the shorter the congestion period will be (faster actions), hence packet drop periods decrease (see Fig. 4/a-b). The difference in operation between solution A and B can be seen when increasing the measurement windows. Since solution A stops user data flow, its packet drop ratio decreases more rapidly with increasing measurement time base.
Signaling overheads: Fig. 4/c shows the protocol messages and the reservation status for solution A. It can be seen that after the detection of severe congestion all of the 160% traffic load is released and re-allocation is attempted (see the almost coinciding curves at 352 in Fig. 4/a). It can also be well seen that due to the synchronized re-initiation, refresh messages arrive in bursts.
Fig. 2. Utilization of the link between nodes 0 and 1 (Units = 1..20): a) utilization with S = 20 msec; b) utilization with S = 250 msec

Fig. 3. Short term transients of the utilization of the link between nodes 0 and 1 (Units = 1..20): a) short term transients for solution A (S = 20, 50, 250 msec); b) short term transients for solution B descendants (S = 20 msec)
Fig. 4. Protocol performances (Units = 1..20): a-b) drop ratios on the link between nodes 0 and 1 for solution A and solution B2 (S = 20, 50, 250 msec); c) released, reserved and refreshed units for solution A (S = 20 msec); d) signaling overheads of the different algorithms
This characteristic will only fade out over several refresh periods, as connections naturally terminate. Fig. 4/d shows the signaling overhead for the various algorithms. It is obvious that for the solution B descendants there is no increase in signaling overhead, as shown in Fig. 4/d. On the other hand, due to the excess signaling introduced by solution A, overhead spikes appear. This, however, is still quite negligible compared to the link capacity (see Fig. 4/d).
5 Conclusions
In this work we have shown some aspects of severe congestion handling with the RODA protocol [12]. We have designed and presented two basic algorithms that can cope with severe congestion situations in the order of network round trip times. This reaction time can be considered close to optimal, due to the transmission constraints imposed by the system. The presented algorithms differ in their transients, but we can conclude that two of our solution B derivatives performed best in all situations with a measurement time base equal to the framing time of the data traffic. In this very special case the two algorithms resulted in the same operation due to the traffic characteristics, and differences only appeared with higher measurement time bases. Overall, we are aware of the need for further analysis in this area with more general traffic models (e.g. VBR); with multiple traffic classes (e.g. voice, video and best-effort); with more complex network topologies (e.g. a concrete RAN topology); and with a comparison to other resource management protocols (e.g. RSVP). Nevertheless, we believe that our current results can already be applied to certain special networks like RANs. We hope that these results will trigger new dialogue in the community.
Acknowledgments. The authors owe special thanks to the RMD research team located in Sweden, Netherlands and Hungary. This work has partially been done at the High Speed Networks Laboratory, at the DTT of BUTE.
References
1. Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., Weiss, W.: An architecture for differentiated services. Request for Comments 2475, Internet Engineering Task Force (1998)
2. Partain, D., Karagiannis, G., Wallentin, P., Westberg, L.: Resource reservation issues in cellular access networks. Internet Draft, Internet Engineering Task Force (2002) Work in progress.
3. Westberg, L., Jacobson, M., Karagiannis, G., Oosthoek, S., Partain, D., Rexhepi, V., Szabó, R., Wallentin, P.: Resource management in diffserv (RMD) framework. Internet Draft: draft-westberg-rmd-framework-xx.txt, Internet Engineering Task Force (2002) Work in progress.
4. Elek, V., Karlsson, G., Ronngren, R.: Admission control based on end-to-end measurements. In: Proceedings of the Conference on Computer Communications (IEEE Infocom), Tel-Aviv, Israel (2000)
5. Breslau, L., Jamin, S., Shenker, S.: Comments on the performance of measurement-based admission control algorithms. In: Proceedings of the Conference on Computer Communications (IEEE Infocom), Tel Aviv, Israel (2000)
6. Braden, R., Ed., Zhang, L., Berson, S., Herzog, S., Jamin, S.: Resource ReSerVation Protocol (RSVP) – version 1 functional specification. Request for Comments 2205, Internet Engineering Task Force (1997)
7. Feher, G., Nemeth, K., Maliosz, M., Cselenyi, I., Bergkvist, J., Ahlard, D., Engborg, T.: Boomerang – a simple protocol for resource reservation in IP networks. In: IEEE Workshop on QoS Support for Real-Time Internet Applications, Vancouver, Canada (1999)
8. Baker, F., Iturralde, C., Faucheur, F.L., Davie, B.: RSVP reservations aggregation. Internet Draft, Internet Engineering Task Force (2001) Work in progress.
9. Gibbens, R.J., Kelly, F.P.: Resource pricing and the evolution of congestion control. Automatica 35 (1999) 1969–1985
10. Heijenk, G., Karagiannis, G., Rexhepi, V., Westberg, L.: Diffserv resource management in IP-based radio access networks. In: Wireless Personal Multimedia Communications (WPMC'01), Aalborg, Denmark (2001)
11. Marquetant, Á., Pop, O., Szabó, R., Dinnyés, G., Turányi, Z.: Novel enhancements to load control – a soft-state, lightweight admission control protocol. In: QofIS 2001 – 2nd International Workshop on Quality of future Internet Services, Coimbra, Portugal, COST263 (2001)
12. Westberg, L., Jacobsson, M., Karagiannis, G., Oosthoek, S., Partain, D., Rexhepi, V., Wallentin, P.: Resource management in diffserv on demand (RODA) PHR. Internet Draft, Internet Engineering Task Force (2001) Work in progress.
13. The network simulator – ns-2. http://www.isi.edu/nsnam/ns/
Resource Allocation with Persistent and Transient Flows
Supratim Deb1, Ayalvadi Ganesh2, and Peter Key2
1 Coordinated Science Lab., University of Illinois at Urbana-Champaign, 1308 W. Main Street, Urbana, IL 61801, USA, [email protected]
2 Microsoft Research, 7 J J Thomson Ave., Cambridge CB3 0FB, UK, ajg,[email protected]
Abstract. The flow control algorithms currently used in the Internet have been tailored to share bandwidth between users on the basis of the physical characteristics of the network links they use rather than the characteristics of their applications. This can result in a perception of poor quality of service by some users even when adequate bandwidth is potentially available, and is the motivation for seeking to provide differentiated services. In this paper, stimulated by current discussion on Web mice and elephants, we explore service differentiation between persistent and short-lived flows, and between file transfers of different sizes. In particular, we seek to achieve this using decentralized algorithms that can be implemented by end-systems without requiring the support of a complex network architecture. The algorithms we propose correspond to a form of weighted processor sharing and can be tailored to approximate the shortest remaining processing time service discipline. Keywords: Service differentiation, bandwidth allocation, decentralized control, weighted processor sharing, shortest remaining processing time.
1 Introduction
Most data in the current Internet is transferred using TCP. This protocol has two phases: a slow start phase which probes for available bandwidth up to a certain threshold, and a subsequent congestion avoidance phase that attempts to stabilize around a fair share. Despite having an aggressive ramp-up phase, the throughput during slow start is typically much less than in the congestion avoidance mode due to the small size of the initial window, time-outs triggered by packet loss, etc. Moreover, the fair shares reached in the congestion avoidance phase allocate equal bandwidth to all file transfers having the same round-trip time and access bandwidth, irrespective of the sizes of the files being transferred. This results in a poor response time for short file transfers and raises the question of whether it is possible to improve performance for short file transfers without significantly degrading it for long file transfers. This question assumes particular importance in the context of the finding by a number of researchers that file sizes on the Web have a heavy-tailed distribution [6]: when file sizes vary over several
orders of magnitude, treating all file transfers identically may not be appropriate. This has led to research on improving the throughput of short flows, either by altering the slow-start behavior [2,11,3], or by putting short flows into a different class [19,10], or by providing a predictive service to long flows [5]. A related problem is that of sharing bandwidth between file transfers and real-time traffic such as Internet telephony or video conferencing. Real-time flows are usually long-lived and can be treated as persistent sources for purposes of analysis. They have very different quality of service requirements from file transfers. Whereas what matters for file transfers is usually the transfer time, or equivalently, average bandwidth over the entire transfer period, real-time flows typically care about the bandwidth they receive at each instant in time (or, more precisely, averages over time periods much shorter than the lifetime of the connection). The value of bandwidth to a user can be described mathematically by a utility function which captures elements of the quality of service perceived by the user. Utility functions are commonly used in economics to represent individual preferences and to address questions of fair allocation. The resource allocation problem can be cast as one of maximizing the aggregate utility of all users. We model the utility for a file transfer as the negative of the time taken to complete the transfer. For real-time traffic, we assume that the total utility obtained is the integral of an instantaneous utility over the lifetime of the connection; the instantaneous utility, in turn, is modeled as an increasing and concave function of the bandwidth received by the flow at that instant. Such a concave function reflects diminishing marginal utility to the user as the allocated bandwidth increases. Equivalently, concavity models a preference on the part of the user for a fixed constant bandwidth over a fluctuating bandwidth allocation with the same mean. Sources with such a utility are referred to as elastic sources in the literature. There has recently been considerable work on bandwidth sharing between persistent elastic users [14,17]. However, the problem of combining such sources with transient sessions such as file transfers has received little attention. One recent study [15] suggests that, when the two traffic types share a network, file transfers should receive priority. Our main results and the organization of the paper are as follows. In Section 2, we consider persistent elastic sources sharing a link with transient sessions transferring a fixed volume of data. We pose the bandwidth allocation problem as an optimization problem and solve it numerically. We then derive practical flow-control schemes that can be easily implemented in a decentralized manner, and show that these are close to optimal. In Section 3, we consider a scenario where the transient sessions have different amounts of data to transfer. The shortest remaining processing time (SRPT) policy yields the optimal bandwidth allocation. We propose a practical scheme that approximates SRPT and study its performance through simulation. We show that there is an advantage to increasing the throughput given to short flows, and that this can be done without appreciably penalizing long flows. We present our conclusions and discuss directions for future research in Section 4.
2 Bandwidth Sharing between Persistent and Transient Flows
Consider a single link of capacity C shared by a fixed number, K, of persistent flows and a variable number of short-lived flows (also called Mice). The persistent flows are modeled in aggregate as having an increasing and strictly concave instantaneous utility function KUe(xe), where xe is the aggregate bandwidth allocated to these flows at a specified time. The utility over a time period is given by the integral of the instantaneous utility over that period. To simplify technicalities in the analysis below, we assume in addition that Ue is differentiable. Short flows correspond to file transfers. They arrive into the system at the points of a Poisson process of rate λ and leave when the file transfer is complete. The file sizes are assumed to be exponentially distributed with mean f. Let ρ = λf/C denote the load offered by the short flows. We shall assume that ρ < 1. There is a unit holding cost per unit time for each short flow in the system. The goal is to maximize the time average of KUe(xe(t)) − N(t), where N(t) denotes the number of short flows in the system at time t. To this end, we introduce the performance objective

Jπ(n) = lim_{T→∞} (1/T) ∫_0^T E[ KUe(xe(t)) − N(t) | N(0) = n ] dt   (1)

and seek a policy π that maximizes this objective. By Little's law, the time average of N(t) is the same as λ times the mean sojourn time of a file transfer, so the objective is to maximize the utility of long flows, subject to a bound on the mean sojourn time of file transfers. The objective function above is precisely the Lagrangian for this optimization problem. We seek stationary optimal policies for the optimization problem described above. By the assumption of exponential file sizes, the state of the system is fully described by the number of short flows in progress and we have a semi-Markov decision problem. If the number of short flows is restricted to some nmax, and non-zero capacity is allocated to short flows whenever any are present, then the Markov process is irreducible and has a finite state space. Under these conditions, it can be shown that there is a stationary optimal policy, and that it can be computed numerically using value iteration. The proof is omitted due to lack of space, but can be found in [8] along with a discussion of structural properties of the optimal control policy. In order to compare the optimal policy with sub-optimal policies that we shall consider below, we need the following elementary bound on the performance of the optimal policy.

Lemma 1. Suppose the state space is not truncated, i.e., nmax = ∞. Then, for any policy π and any initial state n, we have Jπ(n) ≤ KUe((1 − ρ)C).

Proof. Since the load offered by the short flows is ρ, any policy π that allocates capacity less than ρC to these flows on average will be unstable in the sense that
N(t) → ∞ as t → ∞. Thus, for any such policy, Jπ(n) = −∞, starting from any n. Therefore, we can restrict attention to policies π that, on average, allocate capacity no more than (1 − ρ)C to the persistent flows. Since Ue was assumed to be concave, we now obtain from Jensen's inequality and the non-negativity of N(t) that

(1/T) ∫_0^T [ KUe(xe(t)) − N(t) ] dt ≤ KUe( (1/T) ∫_0^T xe(t) dt ).

Taking expectations and using Jensen's inequality once more, we get

Jπ(n) ≤ KUe( E[ (1/T) ∫_0^T xe(t) dt | N(0) = n ] ) ≤ KUe((1 − ρ)C),   (2)

since E[(1/T) ∫_0^T xe(t) dt] ≤ (1 − ρ)C for all T sufficiently large, and Ue is an increasing function.
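The value iteration mentioned above admits a compact numerical sketch. The code below runs relative value iteration on the uniformized chain for the truncated problem; the parameter values, action grid, and iteration count are our own assumptions for illustration, not taken from [8].

```python
import numpy as np

# Illustrative relative value iteration for the truncated control problem.
C, K, f, lam, nmax = 1000.0, 25.0, 100.0, 5.0, 100   # rho = lam*f/C = 0.5

def Ue(x):
    return -C / x                           # beta = 2 case of (3)

actions = np.linspace(0.01, 0.99, 99) * C   # candidate allocations x_e
Lam = lam + C / f                           # uniformization constant
h = np.zeros(nmax + 1)                      # relative value function
gain = 0.0
for _ in range(5000):
    h_new = np.empty_like(h)
    for n in range(nmax + 1):
        best = -np.inf
        for xe in actions:
            mu = (C - xe) / f if n > 0 else 0.0   # aggregate departure rate
            # one uniformized step: reward rate plus expected next value
            val = (K * Ue(xe) - n
                   + lam * h[min(n + 1, nmax)]
                   + mu * h[max(n - 1, 0)]
                   + (Lam - lam - mu) * h[n]) / Lam
            best = max(best, val)
        h_new[n] = best
    gain = (h_new[0] - h[0]) * Lam    # estimate of the optimal average reward
    h = h_new - h_new[0]              # renormalize (relative value iteration)

print("estimated optimal average reward:", gain)
```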
Implementing the optimal policy requires knowledge of the number of short flows in progress and may not be practical. This leads us to consider simpler policies that are practically realizable. We show for two such policies below that they are close to optimal. In the rest of the paper, we will work with utility functions of the form

Ue(xe) = (1/(1 − β)) (xe/C)^{1−β},  β > 0.   (3)

If β = 1, we take Ue(xe) = log(xe/C). These constitute a fairly general class of utility functions and have been considered by a number of authors; see, for example, [18]. The bandwidth shares assigned by TCP approximately maximize a utility function of this form with β = 2.

Static Policy. A fixed amount of bandwidth C̃ < C is reserved for the persistent sources and the remainder is shared equally among file transfers. This can be implemented by logically partitioning the link between persistent and short flows and using TCP for the short flows, for example. Now, irrespective of the file size distribution, the number of short flows in progress evolves like the queue size in an M/G/1 − PS queue, with load α = λf/(C − C̃). The equilibrium queue length distribution is geometric with parameter α (see [13], for example), and so the mean number of short flows in progress is Eπ[n] = α/(1 − α). The bandwidth allocated to persistent flows, C̃, can be expressed as C̃ = C − (λf/α) = (α − ρ)C/α. Hence,

Eπ[KUe(xe(n)) − n] = KUe( ((α − ρ)/α) C ) − α/(1 − α).   (4)

Taking α = 1 − a/√K and using (3), we obtain

Eπ[KUe(xe(n)) − n] = (K/(1 − β)) ( (1 − ρ − a/√K) / (1 − a/√K) )^{1−β} − √K/a + 1
= K (1 − ρ)^{1−β}/(1 − β) + O(√K) = KUe((1 − ρ)C) + O(√K).   (5)
Recall that ρC is the rate at which work is brought in by short flows, and αC is the capacity allocated to them. The choice α = 1 − a/√K corresponds to allocating most of the available capacity to short flows, reserving only a small fraction a/√K for persistent sources. How much worse than optimal is the static policy? One way to quantify this is to ask how large a capacity Ĉ is needed, so that the total utility achieved using the static policy on a link of capacity C is the same as the utility achieved using the optimal policy on a link of capacity Ĉ. Recall that, by (2),

Eπ[KUe(xe(n)) − n] ≤ (K/(1 − β)) ( (1 − ρ)Ĉ/C )^{1−β}

for a link of capacity Ĉ, using any policy. Comparing this with (5), we see that Ĉ = C(1 − O(1/√K)). In so far as K is large in the typical operating regime of interest, this shows that the static policy is close to optimal. Implementation of the static policy requires that bandwidth partitioning be carried out by network routers. In contrast, the weighted processor sharing policy discussed next can be implemented at end systems.

Weighted Processor Sharing. Suppose each persistent source has weight 1 and each file transfer in progress has weight w, and that capacity is shared between users in proportion to their weights. In particular, each file transfer in progress gets the same share of capacity. Thus, irrespective of the file size distribution, the number of file transfers in progress can be modeled by a symmetric queue (see Lemma 3.9 in Kelly [13]), and has the invariant distribution of a birth-death process with constant birth rate λ, and state-dependent death rate µn = (C − xe(n))/f. Here xe(n) is the capacity allocated to persistent sources when n short flows are in progress, and f is the mean file size. If we assume further that k := K/w is an integer, then it can be shown that the invariant distribution is given by
π(n) = ( (k + n) choose n ) ρ^n (1 − ρ)^{k+1},   (6)

and a simple calculation yields Eπ[n] = (k + 1)ρ/(1 − ρ) for the mean number of short flows in the system. Details are omitted for brevity, but can be found in [8]. It is not possible to obtain a closed-form expression for Eπ[Ue(xe(n))] in general, but we can obtain approximations using a Taylor expansion for Ue when k is large. After routine calculations detailed in [8], we obtain

Eπ[KUe(xe(n)) − n] ≈ K(1 − ρ)^{1−β}/(1 − β) + Kρ(1 − ρ)^{1−β}/k − Kβρ(1 − ρ)^{1−β}/(2k) − (k + 1)ρ/(1 − ρ).   (7)

Recall that k = K/w, where w is the weight given to short-lived flows. It follows from the above that, by choosing w = √K, we get Eπ[KUe(xe(n)) − n] ≥ KUe((1 − ρ)C) − O(√K). Since no policy can achieve a total utility greater than KUe((1 − ρ)C), we conclude that the minimum bandwidth Ĉ required by an optimal policy to achieve the same utility as that achieved by the weighted processor sharing policy is given by Ĉ = C(1 − O(1/√K)). Thus, the weighted processor sharing policy is nearly optimal, in the same sense as the static policy. Moreover, it can be implemented by the end systems rather than the network, for example by having end systems use a weighted analogue of TCP with weights chosen as above. An alternative implementation would be to use a willingness-to-pay scheme, as described in [9], with a willingness-to-pay parameter proportional to the weights above.

Numerical Results. We now derive explicit formulas in the special case β = 2, i.e., Ue(x) = −C/x. Recall that this is the utility function implicitly maximized by TCP. The static policy allocates fixed capacity ρC/α to the short flows; when β = 2, we obtain from (4) that the optimal value of α is α = 1 − (1 − ρ)/(1 + √(Kρ)), and that

Eπ[KUe(xe(n))] = −(K + √(Kρ))/(1 − ρ),   Eπ[n] = (√(Kρ) + ρ)/(1 − ρ).
When β = 2, we can also explicitly calculate Eπ [Ue (xe (n))] for the weighted-PS policy. We obtain
Eπ[Ue(xe(n))] = Eπ[Ue( (k/(k + n)) C )] = −Eπ[(k + n)/k] = −(k + ρ)/(k(1 − ρ)).

A simple calculation now yields that the optimal value of k is √K, i.e., each transient flow should be given a weight w = √K relative to each persistent flow. With this choice of k, we get

Eπ[KUe(xe(n))] = −(K + √K ρ)/(1 − ρ),   Eπ[n] = (√K ρ + ρ)/(1 − ρ).

We compare the mean utility and number in system for the static and weighted-PS policies with those for the optimal policy, obtained numerically. For this purpose, we choose the system parameters C = 1000, K = 25, f = 100, and vary λ so that ρ = λf/C spans the interval [0.1, 0.7]. We truncate the state space at nmax = 100 for the value iterations. The results are plotted below. Figure 1 shows the mean utility for the optimal, static and weighted policies, while Figure 2 shows the mean number of short flows in progress for each policy. The figures show that neither the persistent nor the transient flows suffer much by using the sub-optimal policies considered. Figure 3 shows the additional capacity required by the static policy if it is to achieve the same total utility as the optimal policy; 3(a) corresponds to K = 25 and 3(b) to K = 5. We see that the loss incurred by the sub-optimal policies is small, even for small values of K.
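The β = 2 closed forms above are easy to tabulate; a small script such as the following (our own helper, using the paper's parameter choices) reproduces the comparison, while the optimal policy would still require the value iteration sketched earlier.

```python
import math

def static_policy(K, rho):
    # optimal alpha = 1 - (1 - rho)/(1 + sqrt(K*rho))
    utility = -(K + math.sqrt(K * rho)) / (1 - rho)
    mean_mice = (math.sqrt(K * rho) + rho) / (1 - rho)
    return utility, mean_mice

def weighted_ps(K, rho):
    # k = sqrt(K), i.e., weight w = sqrt(K) per transient flow
    utility = -(K + math.sqrt(K) * rho) / (1 - rho)
    mean_mice = (math.sqrt(K) * rho + rho) / (1 - rho)
    return utility, mean_mice

K = 25
for rho in (0.1, 0.3, 0.5, 0.7):
    print(rho, static_policy(K, rho), weighted_ps(K, rho))
```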
Fig. 1. Average utility of the long flow under three different allocation strategies. C = 1000, mean file size = 100, Ue(x) = −C/x, nmax = 100. K = 5 in the left panel and K = 25 in the right panel. The arrival rate is varied along the x-axis
3 Bandwidth Sharing between Transient Flows
We now consider how capacity should be shared between file transfers when the sizes of the files being transferred might vary over several orders of magnitude. If the objective is to minimize the number of file transfers in progress (equivalently, the mean holding cost or mean sojourn time) and the amount remaining to be transferred is known, then a simple interchange argument shows that the optimal policy is to give priority to the file with the shortest remaining processing time (SRPT). This policy has been proposed in the context of Web servers [12,1]. However, it is not suited to our problem for a couple of reasons. First, it needs a centralized controller to assign priority (or a distributed leader election protocol, which imposes a high overhead). Second, while the concept is clear for a single bottleneck link or resource, it does not generalize easily to multiple bottlenecks. This motivates us to consider a generalization of the weighted PS policy introduced in the previous section and show that it can be made to approximate SRPT. Though the analysis and simulations in this paper pertain to a single link, the algorithms we propose generalize easily to networks. We also note that, in networks, the stability region of priority policies such as SRPT is not easily obtained; it is known that the “ρ < 1” condition that the offered load on each link be smaller than its capacity is not sufficient for stability. On the other hand, this condition does guarantee stability for the algorithms we consider, as shown in [4]. That is another advantage of the proposed algorithms in the network context. We continue to work with the optimization problem posed in the previous section. There, we considered how to split capacity between persistent and transient flows but did not consider further how the capacity allocated to transient flows should be shared between them. If file sizes are exponentially distributed and the allocation decision has to be made without knowing the sizes of all file transfers in progress, then it does not matter how this allocation is made.
462
S. Deb, A. Ganesh, and P. Key 18
Optimal dynamic policy Optimal static policy Weighted PS
10 8
6 4
2 0 0.1
16
Mean number of Mice ----->
Mean number of Mice ----->
14 12
14
Optimal dynamic policy Optimal static policy Weighted PS
12
10 8 6
4 2 0 0.1
0.5 0.6 0.7 0.2 0.3 0.4 0.8 Normalized load offered by the Mice -------->
0.6 0.7 0.5 0.4 0.3 0.2 Normalized load offered by the Mice -------->
Fig. 2. Average number of short flow under three different allocation strategies. C = 1000, Mean file size=100, Ue (x) = −C , nmax = 100. K = 5 in the left panel and x K = 25 in the right panel. The arrival rate is varied along the x -axis 2.8
Fig. 3. Capacity over-provisioning sufficient for optimal static allocation to outperform optimal dynamic allocation. C = 1000, f = 100, Ue(x) = −C/x, nmax = 100. K = 5 in the left panel and K = 25 in the right panel
allocation that doesn’t leave capacity idle achieves the same mean number in system. If file sizes are known or if they aren’t exponentially distributed, then this is no longer true; for example, if file sizes are heavy-tailed, the first-come-first-served policy performs worse than processor sharing. We noted above that, if file sizes are known, then SRPT is optimal. We now consider a weighted processor sharing policy where each transient flow chooses its own weight based on its residual file size. Suppose the weights are chosen according to

wi = wmin + (wmax − wmin) exp(−a f_i^r),    (8)
where wi and f_i^r denote the weight assigned to the i-th flow and its residual file size, and wmin, wmax and a are system parameters. The link capacity C is shared between flows in proportion to their weights, i.e., flow i receives capacity wi C/W, where W denotes the sum of the weights of all flows in the system. A similar policy has been proposed recently in [7].
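To make the sharing rule concrete, here is a small sketch that evaluates (8) and the resulting per-flow rates. It uses the parameter values from the simulations later in this section (C = 1000, 25 persistent flows of weight 1, wmin = 10, wmax = 50, a = 1/f with f = 100); the code itself is our illustration, not the authors' implementation.

```python
import math

def weight(residual, a=0.01, w_min=10.0, w_max=50.0):
    # Eq. (8): a small residual file size gives a weight near w_max,
    # a large residual gives a weight near w_min
    return w_min + (w_max - w_min) * math.exp(-a * residual)

def transient_rates(residuals, capacity=1000.0, n_persistent=25):
    # flow i receives capacity w_i * C / W, with W summed over all flows
    ws = [weight(r) for r in residuals]
    W = sum(ws) + n_persistent * 1.0   # persistent flows have weight 1
    return [capacity * w / W for w in ws]

print(transient_rates([10.0, 100.0, 1000.0]))
# the 10-unit "mouse" gets the largest rate, the 1000-unit "elephant" the smallest
```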
We shall assume that W is constant over time. Such an assumption is plausible in a large system operating in a steady-state regime. In particular, if the system carries a large number of persistent flows, then the fluctuation in W is only due to short flows entering and leaving the system, and can be neglected to a first approximation. With this assumption, we can calculate the sojourn time of a file transfer as a function of the initial file size. Letting fi denote the size of file i, we have

(d/dt) f_i^r(t) = −(wi(t)/W) C,    f_i^r(0) = fi,    (9)

where wi(t) is specified in terms of f_i^r(t) via (8). Here, t denotes the time since the arrival of flow i into the system. Let Ti = inf{t > 0 : f_i^r(t) = 0} denote the sojourn time of flow i. A straightforward calculation using (8) and (9) yields
Ti = (W/(a C wmin)) log[1 + (wmin/wmax)(e^{a fi} − 1)].

The (unweighted) processor sharing policy is recovered in the limit a → 0, in which case Ti = W fi/(C wmax). The sojourn time of a file is thus proportional to its size, which is desirable in terms of fairness but has the disadvantage that small files see poor performance. In order to quantify the extent to which the proposed service discipline favors short flows, we compute the ratio of sojourn times for two different files, of sizes f1 and f2. With plain sharing, this ratio is T(f1)/T(f2) = f1/f2. Denoting the ratio wmin/wmax by α, we obtain for the scheme proposed above that

T(f1)/T(f2) = log[1 + α(e^{a f1} − 1)] / log[1 + α(e^{a f2} − 1)].    (10)

We observe that if f1 and f2 are both large relative to 1/a and if, moreover, α e^{a fi} is much bigger than 1 for i = 1, 2, then T(f1)/T(f2) ≈ f1/f2. In other words, the stretch, defined as the ratio of sojourn time to file size, is roughly constant for large files, meaning that the scheme approximates processor sharing at large file sizes. In particular, it avoids starvation of very large file transfers. On the other hand, if f1 and f2 are both small relative to 1/a, then again T(f1)/T(f2) ≈ f1/f2. Finally, suppose f1 is large and f2 is small relative to 1/a. Then, by (10),

T(f1)/T(f2) ≈ (a f1 + log α)/(α a f2) ≈ (1/α)(f1/f2) = (wmax/wmin)(f1/f2).

In other words, the large file has a stretch approximately 1/α times greater, or equivalently receives a bandwidth share only about α = wmin/wmax times that of a small file. Loosely speaking, files much smaller than 1/a are “mice”, files much larger than 1/a are “elephants”, all mice are treated roughly equally, as are all elephants, but mice are favored over elephants. Note that this is achieved without explicitly splitting files into classes, but simply by having them choose individual weights based on their residual file sizes.
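The closed form for Ti makes this mice/elephants behavior easy to check numerically. The sketch below is ours; W, C and the weight parameters are illustrative values, and W/(C wmax) is not normalized to 1 here.

```python
import math

def sojourn(f, a=0.01, W=75.0, C=1000.0, w_min=10.0, w_max=50.0):
    # T_i = W/(a C w_min) * log(1 + (w_min/w_max)(e^{a f_i} - 1))
    alpha = w_min / w_max
    return W / (a * C * w_min) * math.log(1.0 + alpha * math.expm1(a * f))

for f in (1.0, 10.0, 1000.0, 10000.0):
    print(f, sojourn(f) / f)
# stretch T(f)/f is ~W/(C w_max) for f << 1/a and ~W/(C w_min) for f >> 1/a,
# i.e. elephants see a stretch about w_max/w_min = 1/alpha times larger
```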
The degree to which mice are favored is determined by the ratio 1/α = wmax/wmin. This can be seen clearly in Figure 4, where we have plotted the stretch, T(f)/f, as a function of the normalized file size af over the range [0, 20]. We take W/(C wmax) = 1 for convenience. From top to bottom, the three curves on the plot correspond to 1/α = 20, 10 and 5 respectively. The plots suggest that large files receive much less capacity on average than do short files. It needs to be kept in mind that this is under the assumption that W is constant, which is not valid if there are no persistent flows. A model with no persistent flows and with an SRPT service discipline has been studied in [1], where it is shown that the stretch of long flows remains bounded. The intuition is that there will be epochs when the long flow is competing with very few or no short flows, at which times it is not handicapped by its small weight. A similar intuition applies to our model, and in fact the plots of stretch in Figure 4 correspond to “worst-case” values.

Simulation Results. We simulate a system with capacity C = 1000 carrying K = 25 persistent flows, each of which has weight 1 and utility function Ue(x) = −C/x. File transfers arrive at rate λ, and file sizes have the Pareto distribution P(file size > x) = (1 + x/f)^−2, x ≥ 0, with mean file size f = 100. We take a = 1/f, wmax = 50 and wmin = 10. Performance measures for processor sharing with the scheme described above, and for processor sharing with constant (file-size-independent) weights w = 10, 25 and 50, are shown in Figure 5. The left panel shows the utility received by the persistent flows under each policy. The panel on the right shows the mean number of transient flows in the system. The simulation results are based on 12,000 events (file arrivals), with a burn-in period of 1000 time units for the system to reach stationarity, and are averaged over multiple runs. Clearly, when w = 50, the average stretch of the transient flows goes down, but at the cost of a reduced utility for the persistent flows. When w = 10, the persistent flows perform better, but the average stretch of the short flows increases considerably. By contrast, with the processor sharing scheme described in this section, in which the weights of the transient flows vary dynamically, the transient flows achieve a small stretch without starving the persistent flows. We have also shown the plots for the case when the weights are kept constant at w = 25. The proposed processor sharing scheme still does better.
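The Pareto distribution used here can be sampled by inversion, which is convenient when reproducing such a simulation; a minimal sketch (ours, not the authors' simulator):

```python
import random

def pareto_file_size(f=100.0, rng=random):
    # P(size > x) = (1 + x/f)^-2; inverting at u gives x = f (u^{-1/2} - 1)
    u = 1.0 - rng.random()   # u in (0, 1], avoids u == 0
    return f * (u ** -0.5 - 1.0)

samples = [pareto_file_size() for _ in range(200_000)]
print(sum(samples) / len(samples))
# approaches f = 100, though slowly: the distribution has infinite variance
```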
Fig. 4. Stretch vs. file size for wmax/wmin = 20 (top), 10 (middle) and 5 (bottom)

Fig. 5. Average utility of a persistent flow and average stretch of the transient flows under the policies compared

4 Concluding Remarks
We considered the problem of optimal bandwidth allocation in a system consisting of both persistent and transient flows. Treating all transient flows as identical, we first described simple algorithms that achieve a nearly optimal partitioning of the available bandwidth between the persistent and transient sources. We then studied the problem of how to share the bandwidth allocated to transient flows among file transfers of different sizes. We described a distributed scheme
P[(e-e)k > t] = P[(k − 1)T + u < t1] P[(e-e)k > t | (k − 1)T + u < t1]
 + P[t1 < (k − 1)T + u < t2] P[(e-e)k > t | t1 < (k − 1)T + u < t2]
 + P[t2 < (k − 1)T + u < t3] P[(e-e)k > t | t2 < (k − 1)T + u < t3]
 + P[t3 < (k − 1)T + u] P[(e-e)k > t | t3 < (k − 1)T + u]
For classes 2, 3 and 4 respectively, we obtain

P[(e-e)k > t | t1 < (k − 1)T + u < t2] = P[(CH → BSO → R0 → BSNσ) > t]
P[(e-e)k > t | t2 < (k − 1)T + u < t3] = P[(CH → R1 → R0 → BSNσ) > t]
P[(e-e)k > t | t3 < (k − 1)T + u] = P[(CH → R0 → BSNσ) > t].

The probability for a class 1 packet depends on k and is quite complicated, as it involves the length of the overlap area, the instant the beacon is sent, the time out in the forwarding buffer and the size of this buffer. We distinguish three different cases for a class 1 packet:
- case 1: the k-th packet is directly sent from BSO to MH.
- case 2: the k-th packet is forwarded to MH via BSN.
- case 3: the k-th packet is lost at BSO (due to time out or buffer overflow).
For case 1 we obtain (e-e)k = (CH → R0 → BSOσ), for case 2 (e-e)k = (CH → R0 → BSOσ) + (t1 − (k − 1)T − u) + (BSOσ → R0 → BSNσ), and for case 3 the end-to-end delay can be considered to be infinite. In case 2, we consider a possible loop between BSO and R1 for the first few packets that are forwarded from the FB, and we account for possible extra delay due to the burst of packets that is created when the FB is emptied. The details are omitted in this paper. We also need to determine the probability that the k-th packet finds itself in case 1, 2 or 3 respectively. Again, most details are omitted and case 3 is considered as an example. Clearly,

P[k-th packet is in case 3] = P[(k-th packet is processed by BSO after td) AND (k-th packet is timed out OR pushed out of the FB)],

where td is the instant of disconnection of the MH from the BSO, directly depending on the values of o and τb. This probability yields a rather complex expression in which we need the probability that a packet is pushed out of the circular buffer FB before it could be forwarded:

P[k-th packet is pushed out] = P[(k + buffersize − 1)T + u < t1],

which is the probability that the (k + buffersize)-th packet needs to be forwarded (and therefore pushes out the k-th packet). The probability that a packet is timed out is also required:

P[k-th packet is timed out] = P[t_k^p < t1 + (R0 → BSOσ) − TO]
where t_k^p denotes the instant of the end of the processing of the k-th packet at BSO and TO is the time out value. Note that t1 + (R0 → BSOσ) equals the instant at which the FB is emptied and the buffered packets are forwarded. It is clear that, due to the M/M/1 assumption, all probabilities that occur in the above formulas can be computed through standard conditional probability techniques. Similarly, the model allows us to compute several performance measures, such as the expected number of packets arriving late at a playout buffer in the MH due to the extra delay introduced by the forwarding scheme.
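Since each of these probabilities compares deterministic packet epochs (k − 1)T + u against the random instants t1, t2, t3, they can also be estimated by Monte Carlo as a cross-check of the analytical computation. The sketch below is a toy version for the pushout probability only; the beacon latency and the exponential delay standing in for t1 are placeholder assumptions, not the paper's M/M/1 derivation.

```python
import random

T = 0.020    # packet inter-arrival time: 500-byte packets every 20 ms
BUF = 6      # forwarding-buffer (FB) capacity in packets

def sample_t1(rng=random):
    # placeholder for t1, the instant forwarding of the FB starts:
    # beacon latency plus an assumed exponential network delay (mean 5 ms)
    return 0.200 + rng.expovariate(1.0 / 0.005)

def pushout_prob(k, trials=100_000, rng=random):
    # P[k-th packet is pushed out] = P[(k + BUF - 1)T + u < t1], u ~ U(0, T)
    hits = 0
    for _ in range(trials):
        if (k + BUF - 1) * T + rng.uniform(0.0, T) < sample_t1(rng):
            hits += 1
    return hits / trials

print([round(pushout_prob(k), 3) for k in (1, 3, 5, 7)])
# the pushout probability drops quickly once (k + BUF - 1)T exceeds E[t1]
```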
5 Performance Evaluation of the MSF Scheme
In this section we consider three cases in order of increasing complexity. In the first case (Section 5.1), cells do not overlap (o = 0 ms) and the beacon signal is received by the MH at the moment that it crosses the border between the cell controlled by BSO and the cell controlled by BSN. The time out (TO) and the capacity of the forwarding buffer in BSO (FB) are assumed to be large enough that they do not cause packet loss.
Fig. 3. Expected number of dropped packets vs. playout time, for analysis and simulation: (A) variable transmission rate µ = 4, 8 and 20 Mbps, with τ = 5 ms, FB = 6 packets, TO = 100 ms, o = 0 ms, τb = 0 ms; (B) variable link delay τ = 5, 10 and 15 ms, with µ = 4 Mbps, FB = 15 packets, TO = 200 ms, o = 0 ms, τb = 0 ms.

In the second case (Section 5.2) the
cells overlap (o > 0 ms). The first beacon sent after the MH crosses the middle of the overlap area (determining the handoff instant) may occur while the MH is in the overlap area or after the MH has left the overlap area. Again in this case, no packets are lost due to time out and/or forwarding buffer overflow. In the third case (Section 5.3) we have the same characteristics as the second one, except that the time out value and the forwarding buffer capacity are chosen so that packets may be lost in the forwarding buffer.
Fig. 4. Delay distribution of the k-th packet (µ = 4 Mbps, τ = 5 ms, FB = 6 packets, TO = 100 ms, o = 0 ms, τb = 0 ms).

In all three cases we consider
the system shown in Fig. 1, where: (i) the fixed propagation delay between routers (Rx, Ry) is the same for all routers (also between CH and R0) and we refer to it as τ; (ii) the correspondent node (CH) transmits 500-byte packets every 20 ms to the MH; (iii) the background traffic is modelled as Poisson sources such that each router has a load ρ of 0.8.

5.1 Delay Evaluation
The playout time is the maximum allowed end-to-end delay: if a packet’s end-to-end delay exceeds this playout time, it will be dropped. In Fig. 3 the expected number of forwarded packets that are dropped due to expiration of the playout time is shown as a function of the playout time, for different values of the transmission rate µ in the routers (Fig. 3.A) and for different values of the distance τ between neighbouring routers (Fig. 3.B). The analytical results are compared against simulation results. The difference between simulation and analytical results is due to the M/M/1 approximation in the analytical model, which results in exponential packet service times, while in the simulation packets have constant length. Fig. 4 depicts the distribution of the end-to-end delay of the k-th packet involved in the handoff when there is no overlap between the cells. As can be expected, the delay decreases with increasing sequence number of the packet. Starting from packet 7, the curves converge, since the probability of experiencing extra delay due to forwarding decreases rapidly.

5.2 Influence of the Beacon Latency
Fig. 5 shows the important influence the beacon latency has on the system performance.
Fig. 5. Expected number of dropped packets vs. playout time for variable beacon latency τb = −60, −20, 0, +50 and +100 ms (τ = 5 ms, µ = 4 Mbps, FB = 10 packets, TO = 200 ms, o = 200 ms).

The expected number of forwarded packets dropped due to expiration of the playout time is shown as a function of the playout time, for different values of the time between the instant the MH crosses the coverage area of BSO and the
instant that the first beacon originating from the BSN is received. A positive value of τb means that the beacon arrives after the end of the coverage area of BSO, while a negative value means that the beacon arrives before the end of the coverage area of BSO. Again, the analytical results are validated with the ns simulation. When the beacon arrives after the MH has left the coverage area of BSO, a higher number of packets has to be stored in the forwarding buffer of BSO and forwarded to BSN once the path setup message triggered by the beacon arrival is received in BSO. This is shown in Fig. 2 for a beacon latency of 40 ms. This forwarding increases the end-to-end delay, and therefore a higher number of packets is dropped for a given playout time. Fig. 6 shows the delay distribution of the packets involved in the handoff procedure (i.e. the packets directly sent to the MH or forwarded after the instant the MH crosses the middle of the overlap area).
Fig. 6. Delay distribution of the k-th packet (µ = 4 Mbps, τ = 5 ms, FB = 10 packets, TO = 200 ms, o = 200 ms, τb = −60 ms).

From this figure, it is clear that the first packet
has a high probability of being sent directly to the MH without being forwarded via the BSN. From packet 2 onwards, the probability of being forwarded increases. While these packets are likely to belong to class 1, from packet 7 onwards the probability of belonging to class 2 or 3 increases. Packet 9 and the following packets are likely to be sent directly to the MH via R2 and BSN; their delay distribution is therefore close to that of packet 1.
Fig. 7. Expected number of lost or dropped packets vs. playout time for variable FB and TO (τ = 5 ms, µ = 4 Mbps, o = 0 ms, τb = +200 ms): a) FB = 20 packets, TO = 50 ms; b) FB = 20 packets, TO = 500 ms; c) FB = 10 packets, TO = 500 ms.
5.3 Influence of the BSO Time Out and Forwarding Buffer Capacity
Forwarded packets may be dropped due to the expiration of the playout time, or they may be lost when the time out in the FB expires or when they are pushed out of the circular FB when it is full. The expected number of packets dropped or lost is shown in Fig. 7 as a function of the playout time, for different values of the time out and of the capacity of the FB. There is no overlap here and the beacon latency is τb = 200 ms. In a), with TO = 50 ms and FB = 20, some packets (about 10) stored in the forwarding buffer time out before the path setup message M2 reaches BSO, and thus they are lost. In b), with TO = 500 ms and FB = 20, no packets are lost. In c), with TO = 500 ms and FB = 10, only a few packets (about 2) find the forwarding buffer full with packets to be forwarded when arriving at BSO; they push out the packets at the head of the queue, which are lost.
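These counts follow from simple bookkeeping on the packet spacing: with one packet every T = 20 ms and roughly 240 ms elapsing before M2 reaches BSO, about 12 packets await forwarding. The sketch below reproduces that arithmetic; the 40 ms allowance for the path setup propagation is our assumption, not a value from the paper.

```python
T_MS = 20             # 500-byte packets every 20 ms from CH
BEACON_LATENCY = 200  # tau_b in this scenario (ms)
M2_DELAY = 40         # assumed extra delay until path setup M2 reaches BSO (ms)

def lost_packets(fb_size, timeout_ms):
    wait = BEACON_LATENCY + M2_DELAY
    buffered = wait // T_MS                      # packets awaiting forwarding
    pushed_out = max(0, buffered - fb_size)      # circular-buffer overflow
    kept = min(buffered, fb_size)
    # the oldest kept packets exceed the FB timeout before forwarding starts
    timed_out = sum(1 for i in range(kept) if wait - i * T_MS > timeout_ms)
    return pushed_out + timed_out

print(lost_packets(fb_size=20, timeout_ms=50))   # case a): about 10 packets
print(lost_packets(fb_size=20, timeout_ms=500))  # case b): 0 packets
print(lost_packets(fb_size=10, timeout_ms=500))  # case c): about 2 packets
```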
6 Conclusions
In this paper, the MSF-HAWAII handoff protocol is analyzed by means of an analytical model. Its performance for constant bit rate real-time (UDP) traffic is
characterized by two measures: the expected number of forwarded packets that are dropped due to the expiration of the playout time, together with the expected number of packets lost in the forwarding buffer; and, secondly, the individual end-to-end delay distributions of the packets involved in the handoff. The model includes a number of system implementation characteristics that have a major impact on the system performance: the size of the overlap area between neighboring cells, the frequency of beacon signal generation, the size of the forwarding buffer and the time out value used in the forwarding buffer. The numerical results obtained with the analytical model have been validated with the ns simulator, showing the accuracy of the model. Application of the model to a simple reference network shows that longer forwarding routes (due to longer distances between routers, slower routers or low-capacity transmission links) lead to a higher expected number of packets lost due to the expiration of the playout time and to longer delays experienced by individual packets. Furthermore, it is shown that the expected number of lost packets may increase drastically when the beacon signal reaches the MH after it has left the area covered by the old base station. The numerical examples also show that accurately engineering the forwarding buffer (both its time out value and its capacity) is an important but difficult task, as inappropriate values of the time out or buffer capacity may lead to a major performance degradation due to the loss of several packets.

Acknowledgments. This work was supported in part by the EU under project IST-11591, MOEBIUS. The second and third authors were supported by the Ministry of Education of Spain, grant TIC-1998-115-C02-01.
Evaluating the Performance of a Network Management Application Based on Mobile Agents

Marcelo G. Rubinstein¹, Otto Carlos Muniz Bandeira Duarte², and Guy Pujolle³

¹ Depto. de Eng. Eletrônica e Telecom., Universidade Estadual do Rio de Janeiro, FEN, Rua São Francisco Xavier, 524, 20550-013, Rio de Janeiro RJ, Brazil
² Grupo de Teleinformática e Automação, Universidade Federal do Rio de Janeiro, COPPE/EE, CP 68504, 21945-970, Rio de Janeiro RJ, Brazil
³ Laboratoire LIP6-CNRS, Université Pierre et Marie Curie, 4, Place Jussieu, 75252, Paris Cedex 05, France
Abstract. This paper analyzes mobile agent performance in network management, comparing it with the client-server model used by the SNMP (Simple Network Management Protocol). Response time results show that the mobile agent performs better than the SNMP when the number of managed elements ranges between two limits determined by the number of messages that pass through a backbone and by the mobile agent size that grows with the variables collected on the network elements. Keywords: Mobile agents, network management, and scalability
1 Introduction
Most network management systems use the SNMP (Simple Network Management Protocol) [1] and CMIP (Common Management Information Protocol) [2] protocols, which are based on a centralized paradigm. These protocols use the client-server model, in which the management station acts as a client that provides a user interface to the network manager and interacts with agents, which are servers that manage remote access to local information stored in a Management Information Base (MIB). Performance management is one of the management functional areas identified in OSI Systems Management and addresses the availability of management information, in order to be able to determine the network load [3]. This kind of management needs access to a large quantity of dynamic network information, which is collected by periodic polling. The operations available to the management station for obtaining access to the MIB are very low-level. This fine-grained client-server interaction, called micro-management, together with the periodic polling, generates intense traffic that overloads the management station [4], resulting in scalability problems. Network
management can be distributed and scaled by the use of mobile agents, which are programs that help users to perform tasks on the network, acting on behalf of these users. These agents move to the place where data are stored and select the information the user wants, saving bandwidth, time, and money. This paper analyzes the performance of mobile agents in network management, which is also being investigated by several researchers. Baldi et al. [4] evaluate the tradeoffs of mobile code design paradigms in network management applications by developing a quantitative model that provides the bandwidth used by traditional and mobile-code designs of management functionalities. Bohoris et al. [3] present a performance comparison between mobile agents, CORBA, and Java-RMI on the management of an ATM switch running an SNMP agent. Response time and bandwidth utilization results are presented for the transfer of an array of objects (fictitious data). Gavalas et al. present experimental implementation results for the transfer of an aggregation of multiple variables on a local network of a few nodes. They also describe applications that use mobile agents to acquire atomic snapshots of SNMP tables and to get objects, from SNMP tables, that meet specific criteria [5]. Sahai and Morin [6] perform measurements of bandwidth utilization of mobile agent and client-server applications on an Ethernet LAN of a few nodes. None of these papers addresses the scalability of network management based on mobile agents on a complex network with a large number of nodes, similar in shape to the Internet. In this paper, we compare the scalability of network management based on mobile agents against traditional SNMP through the analysis of simulation and implementation results. Two prototypes of an application that gathers MIB-II variables, one based on mobile agents and the other only based on the SNMP, have been created and tested on a LAN. By acquiring parameters related to the network management and to the agent infrastructure, new results are obtained on large topologies similar in shape to the Internet. This paper is organized as follows. Section 2 describes the main network management systems used nowadays. Section 3 presents the implemented prototypes and measurement results. Section 4 reports simulation results. At last, concluding remarks are presented in Section 5.
2 Network Management Systems
In the SNMP, the operations available to the management station for accessing the MIB are very low-level. This interaction does not scale well because of the generation of intense traffic and the computational overload on the management station [4]. Some steps towards decentralization have already been taken by the IETF and ISO organizations. In event notification, SNMP agents notify the management station upon the occurrence of a few significant events. These agents use traps, i.e., messages sent without an explicit request from the management station, to decrease the intensive use of polling. ISO uses more complex agents that have higher processing capacity. In both approaches, the agent is only responsible for the notification of an event.
A more decentralized approach is adopted in SNMPv2 [1] (SNMP version 2), in which there are multiple top-level management stations, called management servers. Each such server is responsible for managing agents, but it can delegate responsibility to an intermediate manager. This manager, also called a proxy agent, plays the role of a manager in order to monitor and control the agents under its responsibility, and also works as an agent to provide information and to accept control from a higher-level management server. Version 3 of the SNMP, SNMPv3 [1], incorporates a new security scheme to be used with SNMPv2 (preferred) or SNMPv1. SNMPv3 is not a stand-alone replacement for SNMPv1 or SNMPv2. The RMON (Remote MONitoring) [7] uses network monitoring devices called monitors or probes to perform proactive LAN monitoring on local or remote segments. These probes provide information about links, connections among stations, traffic patterns, and the status of network nodes. They also detect failures and misbehaviors, and identify complex events even when not in contact with the management station. These proposals reduce the traffic around the management station, but as the computational power of network nodes increases, it becomes possible to delegate more complex management functions to the nodes. Moreover, in order to satisfy the diverse needs of today’s networks, new network management systems that analyze data, take decisions, and take proactive measures to maintain the Quality of Service (QoS) of the network must be developed. Mobile agents seem to be a good alternative to satisfy these needs. The main advantages that may justify mobile agent utilization in network management are: reduced cost by using semantic compression, which filters and selects only relevant information; asynchronous processing, which allows decoupling from the home node; flexibility, which permits the substitution of the behavior of management agents in real time; and autonomy, since the agent can take decisions, performing reactive management based on task delegation. Since SNMPv2 is not as widespread as SNMPv1, and network management based on SNMPv1 does not scale when the size or complexity of the network increases, mobile agents can be used to increase network management scalability.
3 Implementation of a Management Application
We compare two different solutions for gathering MIB-II [8] variables on managed elements: a mobile agent-based one and one only based on the SNMP. The Mole infrastructure [9] is used in the mobile agent implementation. This system provides the functionality for the agents to move, to communicate with each other, and to interact with the underlying computer system. Two different kinds of agents are provided: system agents and user agents. System agents are usually interface components to resources outside agent systems. They have more rights than non-system agents (e.g., only system agents can read or write to a file), but they are not able to migrate. User agents are agents that have a “foreigner status” at a location, which means that they are not allowed to do
something outside the agent system as long as they cannot convince a system agent to give them access to outside resources [9]. Mole uses TCP to transfer mobile agents, which are implemented in Java. A weak migration scheme is provided, in which only the data state, containing global and instance variables, is transferred. As a consequence, the programmer is responsible for encoding the agent’s execution state, which includes local variables, parameters, and execution threads, in program variables. This migration scheme is implemented by using the object serialization of Java. After an agent thread calls the migrateTo() method, all threads belonging to the agent are suspended. The agent is serialized into a system-independent representation. This serialized version is sent to the target, which reinstantiates the agent. A new thread is started, and as soon as this thread assumes control of the agent, a message is sent to the source, which finishes all threads belonging to the agent and removes it from the system. Both implemented prototypes, one with mobile agents and the other without, use the SNMP protocol to gather MIB-II variables. The AdventNet SNMP library [10] and the snmpd from the ucd-snmp package are used in the prototypes. The AdventNet SNMP package contains APIs that facilitate the implementation of solutions and products for network management. Version 2.2 of the AdventNet SNMPv1 has been used. The daemon snmpd, which is included in Red Hat Linux, is an SNMP agent that responds to SNMP request packets. The package versions used in this experiment were 3.5.3 (for machines running Red Hat 5.2) and 4.0.1 (for Red Hat 6.x).

3.1 The Two Implemented Prototypes
The mobile agent implementation (Figure 1) consists of one mobile agent, which migrates to all network elements to be managed, one SNMP agent, which accesses the MIB-II variables, and one translator agent, which converts the mobile agent request into an SNMP request. The mobile agent migrates to a network element (arc 1 of Figure 1) and communicates by Remote Procedure Call (RPC) with the translator agent (arc 2). This translator agent sends a request (GetRequest PDU of SNMP) to the SNMP agent (arc 3) and obtains the response (arc 4) that is passed to the mobile agent (arc 5). Then, the mobile agent goes to the next element (arc 6) and restarts its execution. After finishing its task, which consists of visiting all network elements to be managed, the mobile agent returns to the management station (arc n). In the implementation that is only based on the SNMP, we have used the traditional model of this protocol. The manager sends an SNMP packet to an SNMP agent that responds to this manager. The manager sends requests to all elements to be managed, one after the other, i.e., a new request is started after receiving the response from the previous one, until the last network element receives a request and sends the response to the manager. This manager has been implemented in Java directly over the Java Virtual Machine.
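The control flow of the two prototypes can be summarized as follows; this sketch is ours, not the authors' Java/C code, and migrate_to(), rpc_translate() and snmp_get() are hypothetical stand-ins for the Mole migration and AdventNet SNMP calls.

```python
def migrate_to(host):     # stub: Mole would serialize and ship the agent here
    pass

def rpc_translate(oid):   # stub: RPC to the translator agent, which issues
    return 0              # the local SNMP GetRequest and returns the value

def snmp_get(host, oid):  # stub: AdventNet-style GetRequest/GetResponse
    return 0

def mobile_agent_run(itinerary, oid="ifInErrors"):
    responses = []                            # carried inside the agent
    for host in itinerary:                    # arcs 1 and 6: migrate per element
        migrate_to(host)
        responses.append(rpc_translate(oid))  # arcs 2-5 at each element
    migrate_to("management-station")          # arc n: return with all values
    return responses

def snmp_manager_run(elements, oid="ifInErrors"):
    # client-server model: one request per element, strictly sequential
    return [snmp_get(host, oid) for host in elements]
```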
Fig. 1. Network management by using a mobile agent.
3.2 Experimental Study
We perform an experimental study in order to evaluate the scalability of the two implementations. The topology used in this experiment consists of one management station (host A) and two managed network elements (hosts B and C) interconnected through a 10 Mbps Ethernet LAN. Host A is a Pentium MMX 233 MHz with 128 Mbytes of memory, running Linux Red Hat 6.2. Hosts B and C are Pentium II 350 MHz machines, with 64 Mbytes and 128 Mbytes of memory respectively, running Linux Red Hat versions 6.1 and 5.2. In order to evaluate the performance of the two prototypes for a large number of managed elements, we alternately repeat the two elements B and C; e.g., if we want 5 elements to be managed, we use the itinerary {B, C, B, C, B, A}. The considered performance parameter is the response time in retrieving the MIB-II [8] variable ifInErrors from the elements to be managed. This variable denotes the number of received packets discarded because of errors. The JDK (Java Development Kit) 1.1.7 version 3 has been used. All measurements have been performed early in the morning or at night in order to limit the variations of network performance, which would influence the response time results. Both implementations have been tested in the same conditions, using the same itinerary. We have run all the tests with the mobile agent platforms running uninterruptedly. The number of managed network elements
has been varied from 1 to 250. For each measured parameter, 10 samples have been observed and we have calculated a 99% confidence interval for the mean. These intervals are represented in the figures by vertical bars. The mobile agent carries with itself the name of the variable to be collected, the itinerary, and the responses already obtained. The SNMP manager sends a GetRequest PDU and receives a GetResponse PDU. The effect of the number of managed elements on response time has been analyzed. In all figures, we present the sample mean. Response time for the SNMP grows proportionally with the number of managed elements, since the time to manage a network element is approximately the same for all network elements (Figure 2). For the mobile agent, response time increases faster as the number of managed elements grows, due to the mobile agent size, which grows with the variables collected on each network element. In the topology used in this experiment, the SNMP performs much better than the mobile agent.
Fig. 2. Response time vs. number of managed elements.
Figure 3 presents the time to access the MIBs and the time for the RPCs related to the communication between the mobile agent and the translator agents. For the SNMP with 250 managed elements, 99.6% of the total time is spent on access to the MIBs. For the mobile agent, the access to the MIBs and the RPCs take 52.8% of the total time for 250 managed elements. For the SNMP, the access to the MIBs grows proportionally with the number of managed elements, at 65 ms per element. For the mobile agent, the MIB access plus the RPCs related to the communication between the mobile agent and the translator agents also grows linearly, at approximately 78 ms per element.
Fig. 3. Time to access the MIBs and for the RPCs.

The mobile agent remaining time is calculated as the difference between the total time for the mobile agent and the time for accessing the MIBs and
for the RPCs (Figure 4). Since, for this experiment, agent transmission time is very small compared to the other times that constitute the total response time, the remaining time corresponds to infrastructure-related times, e.g., serialization/deserialization, thread creation, and internal message transmission.
Fig. 4. Mobile agent remaining time.
The mobile agent remaining time grows exponentially with the number of managed elements, so the curve of Figure 4 can be approximated by

y = a^x, where a = 1.01176.    (1)

This approximation has been chosen to allow its straightforward use in the simulations of more general topologies (Section 4).
4 Performance Analysis by Simulation
The applicability of mobile agents in carrying out network management tasks is also assessed by simulation. The Network Simulator (NS) [11] is used in these simulations. This discrete-event simulator provides several implemented protocols and mechanisms to simulate computer networks with node and link abstractions. In these simulations, we have used the functionalities of Ethernet, topologies similar in shape to the Internet, and the UDP and TCP protocols. Some UDP and TCP modules of the NS had to be modified in order to allow the transmission of mobile agents. The NS works with packets sent through a network and usually does not take into account the processing time of the application layer on each node. For this reason, some parameters related to network management have been added to the simulation model. These parameters depend on the agent infrastructure, on the operating system, and on the computer load, but their use makes the simulation results more faithful to a real implementation. Table 1 contains the parameters used in the simulations.

Table 1. Parameters used in the simulations

Parameter                                      Value
Initial size of the agent                      1500 bytes
Request size for ifInErrors                    42 bytes
Response size for ifInErrors                   51 bytes
MIB access time per node for the agent         78 ms
MIB access time per node for the SNMP          65 ms
Related to the remaining time for the agent    1.01176
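Table 1 also suggests a simple back-of-the-envelope response-time model on the LAN: linear in the number of elements for the SNMP, linear plus the exponential remaining time of Eq. (1) for the agent. The sketch below is our own simplification; it ignores the transmission and propagation times that the NS simulations do model.

```python
def snmp_time(n, mib_ms=65.0):
    # one GetRequest/GetResponse per element; MIB access dominates on the LAN
    return n * mib_ms / 1000.0

def agent_time(n, mib_rpc_ms=78.0, a=1.01176):
    # per-element MIB access + RPC, plus the infrastructure "remaining time"
    # approximated by Eq. (1) (serialization, threads, internal messages)
    return n * mib_rpc_ms / 1000.0 + a ** n

for n in (50, 100, 250):
    print(n, round(snmp_time(n), 1), round(agent_time(n), 1))
# at n = 250 this gives roughly 16 s vs 38 s, consistent with Figure 2
```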
The simulation model assumes that links and nodes carry no other load and that links are error-free. The Maximum Segment Size (MSS) used in the simulations is 1500 bytes; therefore, there is no fragmentation of SNMP messages, since they are small. For the mobile agent, since the initial size is 1500 bytes, after visiting the first element its size exceeds the MSS, and so the agent is fragmented and sent in several packets, hurting performance. Every request for a variable is sent in a separate message. In all simulations, the mobile agent follows a predetermined itinerary. The mobile agent uses TCP-Reno as the transport protocol, because of its wide use in the Internet, and the UDP protocol is used in the SNMP simulations. Two kinds of topologies have been used in the simulations. The first consists of elements in a 10 Mbps Ethernet LAN, with 250 nodes and a latency of 10 µs. The second kind is similar in shape to the Internet. This topology is called transit-stub, because each routing domain in the Internet can be classified as either a stub domain or a transit domain [12]. A domain is a stub domain if
the path connecting any two nodes u and v goes through that domain only if either u or v is in that domain. Transit domains do not have this restriction. The purpose of transit domains is to interconnect stub domains efficiently. A transit domain comprises a set of backbone nodes, which are typically fairly well connected to each other. In a transit domain, each backbone node also connects to a number of stub domains, via gateway nodes in the stubs. These transit-stub topologies can be used for the network management of a headquarters-branch organization, in which a headquarters wants to manage its geographically spread branches. The management strategy used in this experiment for transit-stub topologies considers that the management station belongs to a node of a stub domain and the managed network elements are located in other stub domains (Figure 5). In the headquarters-branch case, the management station at the headquarters manages the branch routers, and each branch is represented by a stub containing some routers.
Fig. 5. Network management on a transit-stub topology (M: management station; E: managed network element).
The considered performance parameter is the response time in retrieving the MIB-II variable ifInErrors. We have used the LAN topology in order to compare the simulation model with the implementation results of Section 3.2. Figure 6 presents the response time for the mobile agent and for the SNMP, in the implementation and simulation studies. The simulated models reproduce the behavior of the implementations. There is a small difference in the response time for the mobile agent due to the approximation of the remaining time used in the simulations (Section 3.2).
Fig. 6. Response time for implementation and simulation studies.
Mobile agent performance is also evaluated in a situation closer to the one found on the Internet, in which latencies are much greater than on LANs. Three different transit-stub topologies created by the topology generator GT-ITM [12] are used. The topologies have 272 nodes, and the links of these topologies have a 2 Mbps bandwidth and latencies of a few milliseconds. The management station controls groups of 16 network elements, which is the number of nodes of a stub domain. Management is performed in a predetermined way: all elements of a stub are accessed, then the next stub is managed, until all 16 stubs are accessed. If not specified, figures present the mean response time over the three topologies. Figure 7 shows that the mobile agent’s behavior does not change with the topology, but for the SNMP there is a small difference in response time across the three topologies. This variation is due to the great number of SNMP packets that traverse the backbone (transit) links and to the configuration of the backbone nodes, which changes with the topology. Figure 7 also presents the mean response time. For a small number of managed elements, the SNMP performs better than the mobile agent, because the SNMP messages are smaller than the initial size of the mobile agent. As the number of managed elements increases, response time for the SNMP grows proportionally, since the time to manage a stub is approximately the same for all stubs. For the mobile agent, response time increases faster as the number of managed elements grows, due to the incremental size of the mobile agent. By extrapolating the analysis, we can conclude that the mobile agent performs better than the SNMP when the number of managed network elements ranges between two limits, a lower and an upper one, respectively determined by the number of messages that pass through the backbone and by the mobile agent size, which grows with the variables collected on the network elements.
Evaluating the Performance of a Network Management Application 140
SNMP on the 3 topologies MA on the 3 topologies Mean time for the SNMP Mean time for the MA
120
Response Time (s)
525
100 80 60 40 20 0
1
64
128
192
240
Number of Managed Elements
Fig. 7. Response time for the mobile agent and for the SNMP.
5 Conclusion
This work has analyzed the scalability of mobile agents in network management. The performance of mobile agents has been compared with that of the SNMP (Simple Network Management Protocol). We have compared two prototype implementations for gathering MIB-II (Management Information Base - II) variables on managed elements: a mobile agent-based one and a pure SNMP one. Results show that the mobile agents require a higher processing capacity, and that the SNMP uses a larger number of messages involving the management station when the number of managed elements exceeds a value related to the overhead of repeated GetRequest PDU retrievals. The mobile agent infrastructure slows down the execution of Java code, mainly because of serialization/deserialization, thread creation, and internal message transmission. The topology used in the measurements is unfavorable to the mobile agent, since the abundant bandwidth of the Ethernet makes message transmission times negligible compared with processing times. Therefore, in this topology, the SNMP performs much better than the mobile agent. Simulations of the two implementations have also been performed in the NS Network Simulator, in order to obtain results on large topologies similar in shape to the Internet. Response time results show that the mobile agent performs better than the SNMP when the number of managed elements ranges between two limits, a lower and an upper one, respectively determined by the number of messages that pass through the backbone and by the mobile agent size, which grows with the variables collected on the network elements. In a general way, we conclude that the mobile agent paradigm significantly improves network management performance when subnetworks must be
managed remotely, mainly if the links between the management station and the elements to be managed have a small bandwidth and a large latency.

Acknowledgements. This work has been supported by UFRJ, FUJB, CNPq, CAPES, COFECUB, and REENGE. We would like to thank FAPERJ for the grant to Mr. Rubinstein during the execution of this work at the Departamento de Engenharia Eletrônica e de Computação da UFRJ.
References
1. Stallings, W.: SNMP and SNMPv2: The infrastructure for network management. IEEE Communications Magazine 36 (1998) 37–43
2. Yemini, Y.: The OSI network management model. IEEE Communications Magazine 31 (1993) 20–29
3. Bohoris, C., Pavlou, G., Cruickshank, H.: Using mobile agents for network performance management. In: IEEE/IFIP Network Operations and Management Symposium (NOMS’00), Honolulu, Hawaii (2000) 637–652
4. Baldi, M., Picco, G.P.: Evaluating the tradeoffs of mobile code design paradigms in network management applications. In: 20th International Conference on Software Engineering (ICSE’98), Kyoto, Japan (1998) 146–155
5. Gavalas, D., Greenwood, D., Ghanbari, M., O’Mahony, M.: Mobile software agents for decentralised network and systems management. Microprocessors and Microsystems 25 (2001) 101–109
6. Sahai, A., Morin, C.: Towards distributed and dynamic network management. In: IEEE/IFIP Network Operations and Management Symposium (NOMS’98), New Orleans, USA (1998)
7. Waldbusser, S.: Remote network monitoring management information base. RFC 1757 (1995)
8. McCloghrie, K., Rose, M.: Management information base for network management of TCP/IP-based internets: MIB-II. RFC 1213 (1991)
9. Baumann, J., Hohl, F., Straßer, M., Rothermel, K.: Mole - concepts of a mobile agent system. World Wide Web 1 (1998) 123–137
10. Advent Network Management Inc.: AdventNet SNMP release 2.0. http://www.adventnet.com (1998)
11. Fall, K., Varadhan, K.: NS notes and documentation. Technical report, The VINT Project (1999)
12. Zegura, E.W., Calvert, K.L., Donahoo, M.J.: A quantitative comparison of graph-based models for internet topology. IEEE/ACM Transactions on Networking 5 (1997) 770–783
Performance Evaluation on WAP and Internet Protocol over 3G Wireless Networks

Hidetoshi Ueno, Norihiro Ishikawa, Hideharu Suzuki, Hiromitsu Sumino, and Osamu Takahashi

NTT DoCoMo, Multimedia Laboratories, 3-5, Hikari-no-oka, Yokosuka, Kanagawa, 239-8536, Japan
{hueno, ishikawa, hideharu, sumino, osamu}@mml.yrp.nttdocomo.co.jp
Abstract. This research analyses the performance of WAP 1.x in comparison to the Internet protocol. We implement a WAP client and a WAP gateway based on WAP version 1.1 and assess the response time by comparing it to that of HTTP and TCP. We use a W-CDMA simulator to evaluate its performance in high-speed wireless networks such as 2.5G and 3G. The results show that both protocols have comparable performance (i.e. response time) except when transmitting large content sets (e.g. multimedia data files), in which case the performance of HTTP/TCP is better than that of WAP 1.x. We also evaluate WAP-specific functions such as the binary encoding of WAP headers and contents. While binary encoding is effective for small content sets, its effectiveness and performance are questionable for large content sets. Finally, we propose a mobile Internet architecture that is suitable for 2.5G and 3G wireless networks, based on the evaluation and our experience with the i-mode service. Our architecture consists of wireless-optimized TCP, TLS, HTTP and XHTML.
1 Introduction

Services that access the Internet from handheld devices, such as weather forecasts, news, and mobile banking, are attracting people’s attention. Handheld devices tend to have many restrictions, such as less powerful CPUs, less memory, and smaller displays. Wireless networks also suffer from higher error rates, lower bandwidth, higher latency, and unexpected circuit failures. Since it was considered that the protocols used in the Internet might not be suitable for wireless environments, the Wireless Application Protocol (WAP) Forum developed the WAP version 1.x (WAP 1.x) protocol [1]. WAP 1.x is designed for various kinds of wireless network bearers (e.g. GSM and CDMA) [1]. However, it is unclear what sort of networks can take full advantage of WAP 1.x, and its performance has not been fully evaluated. Thus, we developed a WAP client and a WAP gateway based on the WAP specifications [1], and then evaluated WAP performance by using a Wideband Code Division Multiple Access (W-CDMA) simulator. We compared the WAP 1.x protocol to the Internet protocol (HTTP/TCP). Finally, we created a mobile Internet architecture that is suitable for
next generation (2.5G) and third generation (3G) wireless networks, since services over the International Mobile Telecommunication 2000 (IMT-2000) networks [2] have started in Japan. The WAP 1.x protocol is overviewed in Section 2, and its implementation in the WAP 1.x test-bed system is described in Section 3. Our evaluation of WML binary encoding is given in Section 4, and WAP 1.x is compared to HTTP/TCP in Section 5. In Section 6, we propose a mobile Internet architecture for high-speed wireless networks such as 2.5G and 3G.
2 WAP 1.x Overview

WAP defines an architecture and protocols with the goal of providing special functionalities such as telephony, push delivery, and suspend & resume. The following summarizes the WAP architecture and protocols. The WAP architecture consists of a WAP client, a WAP gateway and an origin server. It is based on the Internet World Wide Web (WWW) model with a few enhancements. The WAP protocols are used between the WAP client and the WAP gateway, and the Internet protocols (i.e., HTTP and TCP) are used between the WAP gateway and the origin server. Optimizations and extensions have been made in order to satisfy the requirements of the wireless environment. The WAP protocols consist of the Wireless Datagram Protocol (WDP), Wireless Transport Layer Security (WTLS), the Wireless Transaction Protocol (WTP), the Wireless Session Protocol (WSP) and the Wireless Application Environment (WAE) (Figure 1).
Fig. 1. WAP protocol overview and its comparison to the Internet protocol
WDP provides functions for bearer adaptation, which absorbs the differences among the underlying wireless network protocols. When the bearer network supports IP, UDP is used for WDP. WTLS was developed based on TLS and provides the means for supporting security functions such as authentication and confidentiality. WTP provides transaction-type communication and has three transaction classes (i.e. class 0, 1, and 2).
Class 2 transactions in particular realize reliable communication by supporting packet retransmission. WTP supports segmentation and reassembly (SAR), which provides the means for transmitting large content sets whose size exceeds one Maximum Transfer Unit (MTU). WSP provides session management functions, including session initiation and session suspend and resume. WSP provides compact header encoding and push capability in addition to the basic functionality of HTTP. WAE is a general term for the application environment in WAP, and consists of several components. WAP defines the Wireless Markup Language (WML) [1] as the markup language, WML Script as the scripting language, and the Wireless Telephony Application (WTA) as the telephony-related application. WML is based on the Extensible Markup Language (XML), defines its own tags, and has no compatibility with the Hyper Text Markup Language (HTML). WML uses the binary representation format [1] in order to reduce the content transfer volume. WML content is encoded into the binary representation format at the WAP gateway before transmission over the wireless network.

2.1 WAP Related Works

Since there are a lot of Internet contents written in HTML, WAP clients must be able to access HTML contents. To do so, the WAP gateway needs to support content conversion from HTML to WML. Reference [3] investigated the problems associated with the conversion from HTML to WML. It proposed some techniques for converting HTML to WML, but several problems remained. The overhead of content conversion is also an issue, because poor gateway scalability becomes a serious handicap when the number of subscribers increases. Although it is very important to investigate WAP gateway scalability, research has been insufficient to date. As for evaluating the WAP protocol, one paper evaluated the WTP class 2 protocol by implementing the WAP protocol stack [4]. It pointed out some inconsistencies in the WAP specifications but did not investigate WTP performance over wireless networks. Reference [5] analyzed the network traces generated by a mobile browser application. The research observed daily and weekly cycles, and found some evidence of self-similarity in the network traffic produced by the application. The research also compared and contrasted the mobile browser traffic characteristics with the results for WWW traffic published in the literature. The results of this research are very significant for the design of wireless networks. However, since the network characteristics of 3G networks are quite different from those of 1G and 2G networks, in-depth research of 3G network traffic characteristics is needed. While 3G commercial services started in Japan in October 2001, no research has examined WAP 1.x over 3G networks.
3 Implementation of WAP 1.x Client and Gateway

We developed a WAP client and a WAP gateway based on the WAP version 1.1 (WAP 1.1) specifications [1], and simulated a high-speed wireless environment by using a hardware-based W-CDMA emulator. The W-CDMA emulator allows several of the
parameters related to the wireless network (e.g., bearer speed, error rate, and maximum number of retransmissions) to be set up. The parameters used (see Table 1) are based on the FOMA implementation¹. Note that WAP 1.1 is not the newest version of the WAP 1.x series, but there is no measurable difference as regards protocol performance.

Table 1. Principal parameters set in the W-CDMA bearer emulator. In W-CDMA, one to twelve PDUs (i.e., Radio Link Control frames) constitute one Forward Error Correction (FEC) frame [6]. The actual size of the FEC frame depends on the link conditions and bandwidth allocation. Since the error rate value is based on the typical average value of our experimental 3G system, it captures wireless-specific characteristics (e.g., fading behavior). The error rate is per FEC frame.

Parameter       Value
Bearer Speed    (Downlink) 384 and 64 Kbps, (Uplink) 64 Kbps
Layer 2 Bearer  Radio Link Control (RLC) Protocol [6]
MTU             1500 bytes
Error Rate      5% (per FEC frame)
3.1 WAP Test-Bed System Overview

The WAP 1.1 test-bed system is shown in Figure 2.

Fig. 2. The WAP 1.1 test-bed system. The W-CDMA emulator (uplink 64 Kbps; downlink 384/64 Kbps) is set between the WAP client (a WML 1.1 browser on Windows 98) and the WAP gateway (Solaris 2.6), which communicates with the origin server (Apache 1.3.9 on Linux) over HTTP/TCP.
1) WAP client: We wrote the WAP client in C++ for the Windows 98 platform. It consists of a WML 1.1 browser, the WAP protocol module, and so on. The WML 1.1 browser can emulate a mobile client and is capable of displaying WML content.
2) WAP gateway: We wrote the WAP gateway in C for Solaris 2.6. It consists of WAP and Internet protocol modules, a gateway application module, and so on. The gateway application module performs WML content parsing and binary encoding if the content is WML, and forwards the encoded data to the WAP client.
3) Origin server: We used Apache 1.3.9 as the WWW server application. It also supports the Common Gateway Interface (CGI) function, so it is possible to create dynamic WML content.

¹ FOMA is the first 3G commercial service; it started in October 2001. For further information, please see http://www.nttdocomo.co.jp/english/.
3.2 WAP Applications

Table 2 lists the applications we implemented in the test-bed system. GET and POST are used for Web browsing. As for the push application, the WAP gateway becomes a push server and pushes content to the WAP client by using the WSP push or confirmed push method. We also developed an e-mail sending application by using CGI and an e-mail receiving application by using WSP push.

Table 2. Applications developed for the test-bed system
Service   Content    WSP Function
Browsing  WML        GET, POST method
Push      GIF, Text  WSP Confirmed/Unconfirmed Push
E-mail    Text       (Send) POST method; (Receive) WSP Confirmed/Unconfirmed Push
4 Evaluation of WAP 1.x Binary Encoding

WAP 1.x defines two types of binary encoding functions: WML binary encoding and WSP header compact encoding. Since these functions are peculiar to WAP 1.x, we evaluated both of them.

4.1 Evaluation of WML Binary Encoding

The WAP gateway encodes WML content (WML tags as well as control codes) after XML parsing. Since the effectiveness of the binary encoding depends on the content, we evaluated the encoding rate by using several typical examples of WML content, as shown in Figure 3. The content used essential tags such as "wml", "card", and "p", plus text data, in addition to line break tags (i.e., "br"). We evaluated the binary encoding of WML content using several content set sizes (i.e., 500, 1000, 1400, 20K, 100K, and 360K bytes) by changing the size of the data part. The evaluation focused on the following two factors:

• compression rate comparison between WML binary encoding and gzip;
• time taken for WML compression, which includes the XML parsing time.

The result of the evaluation is shown in Figure 4. In the evaluation, we used RXP beta 15 as the XML parser.
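To make the comparison concrete, the following Python sketch mimics the two encodings on a small WML document. The token table is a toy stand-in for the real WBXML tag tokens defined in [1] (the byte values here are invented), and zlib approximates the DEFLATE compression used by gzip.

```python
import zlib

# Toy stand-in for WML binary encoding: frequent tags are replaced by
# single-byte tokens. The byte values below are illustrative only; they are
# NOT the WBXML token assignments of the WAP specification.
TOKENS = {b"<wml>": b"\x7f", b"</wml>": b"\x01", b"<card>": b"\x7e",
          b"</card>": b"\x02", b"<p>": b"\x7d", b"</p>": b"\x03",
          b"<br/>": b"\x7c"}

def toy_binary_encode(wml: bytes) -> bytes:
    for tag, token in TOKENS.items():
        wml = wml.replace(tag, token)
    return wml

# A content example in the spirit of Figure 3: essential tags, text data,
# and line-break tags.
body = b"some text data<br/>" * 50
wml = b"<wml><card><p>" + body + b"</p></card></wml>"

for name, enc in (("binary", toy_binary_encode(wml)),
                  ("gzip", zlib.compress(wml, 9))):
    print(f"{name:6s}: {len(enc):5d} bytes "
          f"({len(enc) / len(wml):.0%} of the original {len(wml)} bytes)")
```

Token substitution removes only the tag overhead, while gzip also compresses the text body; the trade-off is processing time, which for WML binary encoding additionally includes XML parsing.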
1.1//EN" 0 is the unitary cost for LSP capacity allocation. • Excess cost: it takes mainly into account a cost due to packets switching performed in IP mode and their routing on an alternative path that occurs when the LSP capacity is less than the bandwidth request (x(k) < b(k)). Following the criteria that forwarding packets in MPLS mode is less expensive
than in IP mode, we assume that the unitary cost coefficient $c_e$ for the bandwidth request not allocated on the LSP is greater than $c_l$. To emphasize the advantage of MPLS techniques and to promote their utilization, we consider the above-mentioned cost to depend quadratically on the difference between the LSP capacity and the bandwidth request. On the other hand, it may happen that, at a generic time k, the LSP capacity is greater than the bandwidth request ($x(k) > b(k)$). In this case a certain amount of bandwidth is reserved without being utilized; we penalize this event with a cost depending quadratically on the amount of wasted bandwidth. For simplicity we take the coefficient per unit of wasted bandwidth equal again to $c_e$. From the above considerations, at time k, we have the following excess cost term:

$$J_e(k) = c_e \cdot [b(k) - x(k)]^2 \qquad (4)$$
where, as already said, $c_e > c_l$.

• Dimensioning variation cost: this takes into account the cost of varying the LSP dimensioning. Each change of LSP capacity is charged in order to avoid excessively wide LSP capacity re-dimensioning, which in turn affects the dimensioning of the other LSPs in the MPLS network. The same term can also account for the so-called signalling cost, which is incurred at each LSP capacity variation. The dimensioning variation cost at time k is assumed to depend quadratically on the size variation of the LSP capacity:

$$J_v(k) = c_v \cdot \Delta^2(k) \qquad (5)$$
where $c_v > 0$ is the unitary dimensioning variation cost of the LSP. It follows that the total cost function over the control interval is:

$$J_t = \sum_{k=1}^{N} c_l\, x(k) + \sum_{k=1}^{N} c_e\, [b(k) - x(k)]^2 + \sum_{k=1}^{N} c_v\, \Delta^2(k). \qquad (6)$$
From (1), the cost function (6) can be rewritten as follows:

$$J_t = c_e \sum_{k=1}^{N} b^2(k) + c_l \sum_{k=1}^{N} x(k) - 2 c_e \sum_{k=1}^{N} b(k)\,x(k) + (c_v + c_e)\, x^2(N) + c_v x_0^2 + (2 c_v + c_e) \sum_{k=1}^{N-1} x^2(k) - 2 c_v \sum_{k=1}^{N-1} x(k+1)\,x(k) - 2 c_v x_0\, x(1).$$
Our aim is to minimize the total cost $J_t$ with respect to the variables $x(k)$, $k = 1, \ldots, N$, in the presence of the constraints (2). Let us note that the terms $c_e \sum_{k=1}^{N} b^2(k)$ and $c_v x_0^2$ do not depend on $x(k)$, $k = 1, \ldots, N$; therefore we do not consider them in the cost minimization. We can, at this point, formulate the following quadratic programming problem.
Problem 1. Find a global minimum for the cost function:

$$J(x) = x^T H_N\, x + f^T x \qquad (7)$$

in the admissible set:

$$D = \{\, x \in \mathbb{R}^N : x \ge 0 \,\} \qquad (8)$$

where $x$ and $f$ are the following $N$-vectors:

$$x = \begin{pmatrix} x(1) \\ \vdots \\ x(N) \end{pmatrix}, \qquad f = \begin{pmatrix} c_l - 2 c_e\, b(1) - 2 c_v x_0 \\ c_l - 2 c_e\, b(2) \\ \vdots \\ c_l - 2 c_e\, b(N) \end{pmatrix} \qquad (9)$$

and $H_N$ is the following $N \times N$ tridiagonal matrix:

$$H_N = \begin{pmatrix}
2 c_v + c_e & -c_v & 0 & \cdots & 0 & 0 \\
-c_v & 2 c_v + c_e & -c_v & \cdots & 0 & 0 \\
0 & -c_v & \ddots & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & \ddots & -c_v & 0 \\
0 & 0 & \cdots & -c_v & 2 c_v + c_e & -c_v \\
0 & 0 & \cdots & 0 & -c_v & c_v + c_e
\end{pmatrix} \qquad (10)$$
The matrix $H_N$ is positive definite, as can be easily proved by exploiting some results in [10], so that $J$ is strictly convex in $\mathbb{R}^N$. The solution of Problem 1 is given in the following theorem.

Theorem 2. Assuming $c_e > c_l$ and $b(k) \ge 1/2$, $k = 1, \ldots, N$, the unique solution of Problem 1 is:

$$x^o = -\frac{1}{2}\, H_N^{-1} f \qquad (11)$$
Proof. Taking the strict convexity of $J$ into account, the unique global minimum of $J$ in $\mathbb{R}^N$ is the solution of the equation:

$$\left.\left(\frac{dJ}{dx}\right)^{T}\right|_{x^o} = 2 H_N\, x^o + f = 0,$$

that is, (11). In order to prove that (11) is also the unique solution of Problem 1, we will verify that $x^o \in D$, that is, $x^o(k) \ge 0$, $k = 1, \ldots, N$. The generic component of $x^o$ is:

$$x^o(k) = -\frac{1}{2} \sum_{j=1}^{N} \left[H_N^{-1}\right]_{kj} f(j), \qquad k = 1, 2, \ldots, N. \qquad (12)$$
By suitably handling a result given in [10] about the analytical expression of $H_N^{-1}$, we have:

$$\left[H_N^{-1}\right]_{ij} = \frac{c_v^{|i-j|}\, \det H_{N - \max\{i,j\}}\, \det K_{\min\{i,j\} - 1}}{\det H_N} \qquad (13)$$

where $H_i$, $i = 1, \ldots, N$, is an $i \times i$ matrix defined according to (10) and $K_i$ is the following $i \times i$ matrix:

$$K_i = \begin{pmatrix}
2 c_v + c_e & -c_v & 0 & \cdots & 0 \\
-c_v & 2 c_v + c_e & -c_v & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \vdots \\
\vdots & \cdots & -c_v & 2 c_v + c_e & -c_v \\
0 & \cdots & 0 & -c_v & 2 c_v + c_e
\end{pmatrix}, \qquad i = 1, 2, \ldots, N.$$

Noting that $H_i$ and $K_i$, $i = 1, \ldots, N$, are positive definite matrices, as can be proved by exploiting again the results in [10], the positivity of $\left[H_N^{-1}\right]_{kj}$ follows from (13) for $k, j = 1, \ldots, N$. The positivity of $x^o(k)$, $k = 1, \ldots, N$, is then implied by the positivity of $-f(j)$, $j = 1, \ldots, N$. This, in turn, is an obvious consequence of the assumptions and of the positivity of $x_0$.

Remark 3. It is worth noting that the optimal solution $x^o(k)$, for each $k$, depends on all the samples $b(j)$, $j = 1, \ldots, N$, as clearly results from (12).
3 Sub-optimal On-Line Solution
Although the hypothesis of complete knowledge of the Internet traffic demand over the control discrete-time interval [0, N] is partially supported by the long-term TE framework, our aim in this section is to relax this hypothesis. Indeed, we will show that, owing to the particular structure of the inverse matrix $H_N^{-1}$, we can motivate a sub-optimal solution that assumes knowledge of the bandwidth profile only over a narrow sliding window, centered at the current time, much smaller than the total time interval [0, N] considered before. As a consequence, the structure which characterizes the sub-optimal solution can be implemented "on line", while the optimal one is clearly "off line". In order to analyze the behaviour of the sub-optimal solution we are going to introduce, let us define the following parameter:

$$\alpha_h = \max_{i,j:\ |i-j| = h} \left[H_N^{-1}\right]_{ij}, \qquad h = 1, 2, \ldots, (N-1). \qquad (14)$$
The behaviour of the above parameter $\alpha_h$ has been numerically investigated for different values of $c_v$, $c_e$, $c_l$, and N. The analysis has pointed out a monotone decreasing behaviour of $\alpha_h$, as shown for instance in Figs. 1, 2, and 3. Let us now give the definition of the sub-optimal solution for Problem 1.
Definition 4. For a fixed integer N > 1, let $M \le 2N - 3$ be a positive odd integer. We define the following sub-optimal solution:

$$x^{so} = -\frac{1}{2}\, P_{NM} \cdot f \qquad (15)$$

where $P_{NM}$ is the M-diagonal matrix of dimension $N \times N$ with entries:

$$(P_{NM})_{ij} = \begin{cases} \left[H_N^{-1}\right]_{ij} & \text{for } |i-j| = 0, 1, \ldots, \tfrac{M-1}{2} \\[4pt] 0 & \text{for } |i-j| = \tfrac{M+1}{2}, \ldots, (N-1) \end{cases} \qquad (16)$$
Remark 5. From (16) it is obvious that the generic component $x^{so}(k)$, $k = 1, \ldots, N$, depends on a sliding window of no more than M components $f(j)$ of $f$. This means that the sub-optimal solution, at each time k, requires the knowledge of bandwidth requests on a sub-interval containing no more than $\frac{M-1}{2}$ future samples $b(j)$.

In order to verify that $x^{so}$ is a good approximation of $x^o$, we introduce an upper bound on the error, which depends on M and is sufficiently small when M is suitably chosen. In fact, considering the norm $\|\cdot\|_\infty$ and recalling that $b(k) \le A$, $k = 1, \ldots, N$, from (9) it results:

$$\|f\|_\infty = \max_{k=1,\ldots,N} |f(k)| \le c_l + 2 c_e A + 2 c_v x_0 = 2C.$$

Therefore

$$\|x^o - x^{so}\|_\infty = \max_{k=1,\ldots,N} |x^o(k) - x^{so}(k)| = \frac{1}{2} \left\| \left(H_N^{-1} - P_{NM}\right) f \right\|_\infty \le \frac{1}{2} \left\| H_N^{-1} - P_{NM} \right\|_\infty \|f\|_\infty \le C \max_{i,j=1,\ldots,N} \left| \left[H_N^{-1}\right]_{ij} - (P_{NM})_{ij} \right|.$$

From (13) and the positive definiteness of $H_i$, $K_i$, we have $\left[H_N^{-1}\right]_{ij} > 0$, $i, j = 1, \ldots, N$. Then, from (16), taking the definition (14) of $\alpha_h$ into account together with its monotonic property, it results:

$$\max_{i,j=1,\ldots,N} \left| \left[H_N^{-1}\right]_{ij} - (P_{NM})_{ij} \right| = \max_{i,j:\ |i-j| = \frac{M+1}{2}, \ldots, (N-1)} \left[H_N^{-1}\right]_{ij} = \max_{h = \frac{M+1}{2}, \ldots, (N-1)} \alpha_h = \alpha_{\frac{M+1}{2}}.$$

Therefore we have:

$$\|x^o - x^{so}\|_\infty \le C\, \alpha_{\frac{M+1}{2}}. \qquad (17)$$
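A small numerical check of (14)–(17) can be written in a few lines; the sketch below is self-contained and the parameter values are again illustrative:

```python
import numpy as np

N, cv, ce, cl, x0, A, M = 40, 50.0, 3.0, 1.0, 11.3, 35.0, 17

H = (np.diag(np.full(N, 2 * cv + ce)) + np.diag(np.full(N - 1, -cv), 1)
     + np.diag(np.full(N - 1, -cv), -1))
H[-1, -1] = cv + ce
Hinv = np.linalg.inv(H)

# alpha_h of Eq. (14): the maximum entry of H_N^{-1} on the h-th off-diagonal.
alpha = [np.diag(Hinv, h).max() for h in range(1, N)]   # Hinv is symmetric

rng = np.random.default_rng(0)
b = rng.uniform(5.0, A, size=N)       # a profile with b(k) <= A
f = cl - 2 * ce * b
f[0] -= 2 * cv * x0
xo = -0.5 * Hinv @ f                  # optimal solution, Eq. (11)

# P_NM of Eq. (16): keep only the central M diagonals of H_N^{-1}.
i, j = np.indices((N, N))
P = np.where(np.abs(i - j) <= (M - 1) // 2, Hinv, 0.0)
xso = -0.5 * P @ f                    # sub-optimal solution, Eq. (15)

C = 0.5 * (cl + 2 * ce * A + 2 * cv * x0)
print(np.abs(xo - xso).max(), "<=", C * alpha[(M + 1) // 2 - 1])  # bound (17)
```

The printed inequality illustrates the a priori error control that (17) provides for a given window size M.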
In order to analyze the approximation level given by (17), it is useful to observe that if we set:

$$\frac{M+1}{2} = h \qquad (18)$$
when M is an odd integer running from 1 to (2N − 3), h assumes the values 1, 2, ..., (N − 1). Therefore we can rewrite (17) as follows:

$$\|x^o - x^{so}\|_\infty \le C\, \alpha_h$$

and analyze the approximation level by exploiting the behaviour of $\alpha_h$.
Fig. 1. Behaviour of $\alpha_h$ ($c_v = 50$, $c_e = 3$, $c_l = 1$).

Fig. 2. Behaviour of $\alpha_h$ ($c_v = 10$, $c_e = 3$, $c_l = 1$).
Fig. 3. Behaviour of $\alpha_h$ ($c_v = 3$, $c_e = 30$, $c_l = 1$).

Fig. 4. Behaviour of M ($c_v = 50$, $c_e = 3$, $c_l = 1$).
Remark 6. As appears from Figs. 1, 2, and 3, $\alpha_h$ approaches zero quickly, for each fixed N, once h reaches a value of a few units. Therefore, also when N increases, the approximation error can be kept low by assuming a suitably bounded value for h. For instance, in Fig. 1 and in Fig. 2 we have $\alpha_h < 10^{-2}$ when $h \ge 7$ (which amounts to the knowledge of only six future samples of $b(j)$), and this virtually for every N greater than about ten. From Fig. 3, we observe that $\alpha_h < 3 \cdot 10^{-3}$, for every N, even if h is only equal to one (this means that $x^{so}(k)$, $k = 1, \ldots, N$,
depends only on the current request $b(k)$ and no knowledge of the future is required). The same conclusion is also evidenced by Fig. 4, where the behaviour of the parameter M, numerically obtained from (14) taking (18) into account, is given in a 3-dimensional representation for different values of $\alpha_h$ and N, assuming for instance $c_v = 50$, $c_e = 3$, $c_l = 1$. For $\alpha_h$ and N fixed, Fig. 4 allows one to deduce the corresponding value of the parameter M. It appears that M quickly reaches a steady-state value when N increases, for each fixed $\alpha_h$.
4 An Application to Simulated Data
In order to test the optimal and sub-optimal LSP capacity allocation procedures, we have considered a case study obtained by simulating a sequence of bandwidth requests. To generate this bandwidth profile, we consider each request arrival time and each request death time as an event, and we assume that two events occur at the same time with probability zero. In particular, we simulate three stochastic processes:

- the first one generates the request arrival times and is simulated as a Poisson process with parameter λ = 1/2;
- the second concerns the time duration of each request, and is characterized by an exponential distribution with parameter µ = 1/5;
- the last one is related to the amount of bandwidth of each request, and follows a uniform distribution on the integers of the interval [1, 10].

(A sketch of this generation procedure is given after the remarks below.) On the generated bandwidth profile we select a time window containing N = 40 samples. As initial state we take $x_0 = 11.3$, corresponding to the average value of the bandwidth requests. Using the above data, we compute the optimal and the sub-optimal solutions considering, for the parameters $c_v$, $c_e$, $c_l$, the same values as in Figs. 1, 2, and 3. We have considered a sub-optimal solution based, for instance, on the knowledge of only 8 future samples, which means M = 17. Note that, for the choice $c_v = 50$, $c_e = 3$, $c_l = 1$ and assuming A = 35, Fig. 4 allows us to guarantee an "a priori" approximation error with respect to the optimal solution not greater than 6.5.

In Figs. 5, 6, and 7 the computed optimal and sub-optimal solutions are shown together with the simulated bandwidth profile. In particular, from Fig. 5 we easily verify the expected approximation level. Concerning the effects of the cost parameters on the optimal solution, we have the following remarks:

- noting that the optimal solution is defined modulo a positive factor in the cost function, we have normalized the cost coefficients by always assuming $c_l = 1$;
- the parameter $c_v$, which weights the variation size cost, affects the behaviour of the optimal solution considerably; in particular, the higher the value of $c_v$, the smoother the solution becomes;
- the parameter $c_e$ influences the fitting capability of the optimal solution with respect to the requested bandwidth reference. Note also that the same parameter influences the fitting capability of the sub-optimal solution with respect to the optimal one: this is due to the fact that, as the ratio $c_e / c_v$ increases, the matrix $H_N$ tends to a multiple of the identity matrix and consequently $x^{so}$ tends to coincide with $x^o$.
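For completeness, here is a minimal sketch of the request-generation model described at the beginning of this section (Poisson arrivals with λ = 1/2, exponentially distributed durations with µ = 1/5, uniform integer bandwidths on [1, 10]); the sampling convention is ours:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, N = 0.5, 0.2, 40          # lambda = 1/2, mu = 1/5, window of N samples

# Request arrival epochs: a Poisson process on [0, N] via exponential gaps.
gaps = rng.exponential(1.0 / lam, size=8 * N)        # more gaps than needed
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < N]
deaths = arrivals + rng.exponential(1.0 / mu, size=len(arrivals))
bandwidth = rng.integers(1, 11, size=len(arrivals))  # uniform on {1,...,10}

# b(k): aggregate bandwidth of the requests alive at the k-th sampling instant.
b = np.array([bandwidth[(arrivals <= k) & (deaths > k)].sum()
              for k in range(1, N + 1)])
print(b)    # one realization of the simulated bandwidth profile
```

In steady state, the expected profile level is (λ/µ) · 5.5 = 13.75 units of bandwidth; any particular finite window may of course average somewhat less.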
A comparison between the optimal and the sub-optimal solution can be carried out with reference both to the instantaneous approximation error and to the related costs. For the first point, we observe that the maximum deviation between $x^o$ and $x^{so}$ is about 3, 0.2, and 0, respectively, in the three considered cases. It is worth noting that virtually the same numerical results can be foreseen by exploiting the upper bound given by (17). Concerning the related costs, considering for instance the first choice of parameters ($c_v = 50$, $c_e = 3$, $c_l = 1$), we obtain $J(x^o) = 6672$, while for the sub-optimal solution (with M = 17) we have $J(x^{so}) = 7141$, an increase of about 6.56%. The cost increase is thus very low when compared with the advantage (in the best case) of discarding 31 future samples $b(j)$. Finally, note that if we want to reduce the cost increase further, we can increase M; assuming for instance M = 21 (10 future samples), the value of J for the corresponding sub-optimal solution becomes 6827, which amounts to an increase of only 2.27%.
Fig. 5. Optimal and sub-optimal solution ($c_v = 50$, $c_e = 3$, $c_l = 1$, M = 17).
Fig. 6. Optimal and sub-optimal solution ($c_v = 10$, $c_e = 3$, $c_l = 1$, M = 17).

Fig. 7. Optimal and sub-optimal solution ($c_v = 3$, $c_e = 30$, $c_l = 1$, M = 17).
5 Concluding Remarks
This paper has provided a formal description of the optimal capacity provisioning problem for a label switched path in an MPLS network. In particular, by a suitable choice of the cost function, the above problem was reduced to a quadratic programming problem, whose closed-form solution has been easily obtained. This optimal solution requires knowledge of all the future traffic in the control time interval. Being aware that full knowledge of future traffic is a rather unlikely assumption, by exploiting some properties of the optimal solution we have proposed a sub-optimal one, which offers the advantage of requiring knowledge of future traffic only over a small sliding window within the control time interval and, at the
same time, offers a very good approximation of the optimal solution with only a very small increase in cost.

Acknowledgments. The authors are indebted to Professor I. F. Akyildiz for his encouragement and many valuable suggestions. Special thanks are also due to Professor F. Delli Priscoli for some useful discussions and to T. Anjali and J. C. Oliveira for their help in the realization of this paper.
References
[1] P. Trimintzios, D. Griffin, P. Georgatsos, D. Goderis, L. Georgiadis, C. Jacquenet, R. Egan: A Management and Control Architecture for Providing IP Differentiated Services in MPLS-Based Networks. IEEE Communications Magazine, vol. 39, no. 5, May 2001.
[2] R. Callon, E. Rosen, A. Viswanathan: MultiProtocol Label Switching Architecture. IETF, RFC 3031, January 2001.
[3] A. Bergsten, K. Nemeth, I. Cselenyi, G. Feher: Fundamental Questions Regarding End-to-End QoS. IETF, Internet Draft, July 2001.
[4] D. O. Awduche, J. Malcolm, J. Agogbua, M. O'Dell, J. McManus: Requirements for Traffic Engineering over MPLS. IETF, RFC 2702, September 1999.
[5] F. Gonzales, C. Chang, L. Chen, C. Lin: Using MultiProtocol Label Switching (MPLS) to Improve IP Network Traffic Engineering. Proc. Interdisciplinary Telecommunications Program, Spring 2000.
[6] G. J. Armitage: MPLS: The Magic Behind the Myths. IEEE Communications Magazine, vol. 38, no. 1, January 2000.
[7] D. O. Awduche, A. Chiu, A. Elwalid, I. Widjaja, X. Xiao: A Framework for Internet Traffic Engineering. IETF, Internet Draft, July 2001.
[8] C. Scoglio, T. Anjali, J. C. Oliveira, I. F. Akyildiz: A New Optimal Policy for Label Switched Path Setup in MPLS Networks. Proc. 17th International Teletraffic Congress, Brazil, September 2001.
[9] H. Saito, Y. Miyao, M. Yoshida: Traffic Engineering Using Multiple Point-to-Point LSPs. Proc. INFOCOM 2000 (19th Joint Conference of the IEEE Computer and Communications Societies), Tel Aviv, March 2000.
[10] C. F. Fischer, R. A. Usmani: Properties of Some Tridiagonal Matrices and Their Application to Boundary Value Problems. SIAM Journal on Numerical Analysis, vol. 6, no. 1, March 1969.
A New Class of Online Minimum-Interference Routing Algorithms

Ilias Iliadis and Daniel Bauer

IBM Research, Zurich Research Laboratory, 8803 Rüschlikon, Switzerland
{ili,dnb}@zurich.ibm.com
Abstract. On-line algorithms are essential for service providers to quickly set up bandwidth-guaranteed paths in their backbone or transport networks. A minimum-interference routing algorithm uses the information regarding the ingress–egress node pairs for selecting a path in the case of on-line connection requests. According to the notion of minimum interference, the path selected should interfere as little as possible with paths considered to be critical for satisfying future requests. Here we introduce a new class of minimum-interference routing algorithms, called "simple minimum-interference routing algorithms" (SMIRA), that employ an efficient path-selection procedure. These algorithms use static network information comprising the topology and the information about the ingress–egress node pairs, as well as the link residual bandwidth. Two typical algorithms belonging to this class are introduced, and their performance is evaluated by means of simulation. The numerical results obtained illustrate their efficiency, expressed in terms of throughput, as well as their fairness.
1 Introduction
This paper deals with the issue of dynamic bandwidth provisioning in a network. This problem arises in several instances, such as in the context of dynamic label-switched path (LSP) setup in multiprotocol label switching (MPLS) [1] networks and in the context of routing virtual-circuit requests over an ATM backbone network [2]. In particular, this paper considers the issue of establishing bandwidth-guaranteed connections in a network in which connection-setup requests arrive one by one and future demands are unknown. This is referred to as an on-line algorithm, in contrast to an off-line algorithm that assumes a priori knowledge of the entire request sequence, including future requests. On-line algorithms are essential owing to the need of service providers to quickly set up bandwidth-guaranteed paths in their backbone or transport networks. The primary routing problem consists of determining a path through the backbone network that a connection should follow. Clearly, the available bandwidth of all links on the path should be greater than or equal to the requested bandwidth. If there is insufficient capacity, some of the connections cannot be established and are therefore rejected. A significant body of work exists for the on-line path selection problem. Several path selection algorithms proposed in
the literature aim at limiting the resource consumption so that network utilization is increased. The most important of these algorithms can be found in [3,4]. The basic algorithms considered here, along with a short description of their functionality, are listed below. Shortest-path routing (SP) algorithms select a path with the least amount of aggregated cost. Minimum-hop routing algorithms select a path with the least number of links, as such a path uses the smallest amount of resources; when all links have the same cost, this is a special case of an SP algorithm. Widest-shortest-path routing (WSP) algorithms select a shortest path with the largest available (residual) bandwidth. Shortest-widest-path routing (SWP) algorithms select a widest path with the least amount of aggregated cost. A widest path is a path with maximum bottleneck bandwidth or, equivalently, a path with the largest minimum residual bandwidth over its links.

In addition to these algorithms, a more sophisticated algorithm, called the minimum-interference routing algorithm (MIRA), which uses the information regarding the ingress–egress pairs, was recently developed [5]. Although this algorithm uses information pertaining to past, present, and future requests, it is considered to be an on-line algorithm because the future connection requests are unknown. However, the effect of future requests is indirectly incorporated through the notion of minimum-interference routing.

The second problem related to the primary routing is that of admission control. Routing algorithms can be categorized into two classes according to the control of admitting connections [6]. Greedy algorithms always establish a connection as long as sufficient capacity is available. Trunk-reservation algorithms reject a connection if assigning any of the existing paths could result in inefficient use of the remaining capacity regarding future connection requests [7].

Section 2 reviews the concept of minimum-interference routing and briefly describes the MIRA algorithm, the first such algorithm, presented in [5]. The main contributions of this paper are presented in Sections 3 and 4. In Section 3, we present a new class of minimum-interference routing algorithms, called "simple minimum-interference routing algorithms" (SMIRA). This class of algorithms is not based on the principle of calculating maximum flows, but rather uses a more efficient (in terms of computational complexity) approach, hence the term "simple". In Section 4, we examine the efficiency of the proposed algorithms by means of simulation, and compare them with the SP, SWP, WSP, and MIRA algorithms. The efficiency assessment is based on performance metrics including the throughput, expressed as the total bandwidth of the routed (accepted) connections, the blocking-free range, and the fairness achieved among different ingress–egress node pairs. In particular, we demonstrate that the effectiveness of a given algorithm strongly depends on the performance criterion chosen. Finally, we draw conclusions in Section 5.
2 Minimum-Interference Routing
In this section, the notion of minimum-interference routing and the first MIRA algorithm are reviewed. Although for on-line algorithms future connection
requests are unknown, the effect of future requests can be indirectly incorporated through the notion of minimum-interference routing. A new connection should follow a path that does not "interfere too much" with a path that may be critical to satisfy a future request. Note that this notion can only be used in conjunction with knowledge of all ingress–egress pairs. An explicit path between a given ingress–egress pair can in principle be calculated according to a defined interference criterion. For example, the criterion could be the maximization of the smallest maxflow value of all remaining ingress–egress pairs, the maximization of the weighted sum of the remaining maxflows (referred to as WSUM-MAX) [5], or the maximization of the maximum throughput of the corresponding multicommodity flow problem [7]. These problems are quite complex; therefore alternative, simplified approaches are highly desirable. One possibility is to turn the original problem into an equivalent shortest-path routing problem with appropriately selected link-cost metrics.

In [5], this transformation is done as follows. The amount of interference on a particular ingress–egress pair, say (s, d), due to the routing of a connection between some other ingress–egress pair is defined as the decrease of the maximum flow value between (s, d). Then, the notion of critical links is introduced. These are links with the property that whenever a connection is routed over them, the maxflow values of one or more ingress–egress pairs decrease. With this definition, it turns out that the set of critical links corresponding to the (s, d) pair coincides with the links of all minimum cuts for this pair. Links are subsequently assigned weights that are an increasing function of their "criticality": the weight of a link is chosen to be the rate of change in the optimum solution of the original WSUM-MAX problem with respect to changing the residual capacity of the link. This choice results in critical links being assigned the highest weights. Finally, the actual explicit path is calculated using an SP algorithm.
3 Simple Minimum-Interference Routing Algorithms
In this section, we introduce the class of simple minimum-interference routing algorithms (SMIRA). The term "simple" reflects the fact that they do not define the critical links according to maximum-flow calculations, as done by MIRA, but rather employ a simpler approach, which, as will be seen below, has a lower computational complexity than the MIRA maximum-flow approach. To devise an alternative notion for the critical links, we resort to the fundamental objective of minimum-interference routing, namely that a new connection must follow a path that interferes as little as possible with a path that may be critical to satisfy a future request. This requires that paths associated with future requests be taken into account, and also that links associated with such critical paths be identified and weighted accordingly. One possible way to achieve this is the following. Let P be the set of ingress–egress node pairs, and suppose that the connection request is between nodes a and b. For every one of the remaining ingress–egress pairs (s, d) we identify a set of critical paths, and each link of such a path is weighted accordingly. When this process has been completed, links having
minimum weight are associated with paths that do not interfere with future requests. Routing the current connection request on a shortest path with respect to these weights results in a residual network in which the interference of the remaining ingress–egress pairs is kept to a minimum.

3.1 Critical Paths
There are several ways to identify a set of critical paths corresponding to the ingress–egress pair (s, d). Here we introduce a procedure for obtaining the set of critical paths, called K-widest-shortest-path under bottleneck elimination. This procedure identifies a set of critical paths by making use of a WSP algorithm; the paths are enumerated in descending order of their significance. The algorithm starts by selecting the widest-shortest path between the pair (s, d). Let $L_{sd}^{(1)}$ denote the set of links constituting this widest-shortest path, and let $btl_{sd}^{(1)}$ be the corresponding (bottleneck) bandwidth of this path. Let also $Btl_{sd}^{(1)}$ denote the subset of link(s) of this path whose residual bandwidth is equal to the bottleneck value $btl_{sd}^{(1)}$. The next (second) member is found by computing the widest-shortest path when the links of the set $Btl_{sd}^{(1)}$ are removed from the network. This procedure is repeated until either K paths are found or no more paths are available, whichever occurs first. It is realized by using the Dijkstra algorithm [8] in each iteration; therefore its complexity is of order $O(K(n \log n + m))$, where n and m are the numbers of nodes and links, respectively. An alternative procedure can be derived by using an SWP algorithm – or any other path-computation method – to enumerate the critical paths. This procedure, called K-shortest-widest-path under bottleneck elimination, is realized by using the Dijkstra algorithm twice in each iteration [3]; therefore its complexity is also of order $O(K(n \log n + m))$.

For networks of practical relevance, it turns out that the value of K is typically a small number. Therefore, the complexity of the above procedures is of order $O(n \log n + m)$. On the other hand, the complexity of the procedure for determining the set of critical links in the case of the MIRA algorithm is of order $O(n^2 \sqrt{m} + m^2)$ [5]. This complexity results from the two phases used by MIRA. The first consists of a maximum-flow calculation with a computational complexity of $O(n^2 \sqrt{m})$. The second consists of the process of enumerating all links belonging to minimum cuts, with a complexity of $O(m^2)$. Thus, the total complexity is of order $O(n^2 \sqrt{m} + m^2)$. In particular, in the case of sparse topologies, MIRA's complexity is of order $O(n^{5/2})$ as m is of order $O(n)$, whereas that of our algorithm is of order $O(n \log n)$. In the case of dense topologies, MIRA's complexity is of order $O(n^4)$ as m is of order $O(n^2)$, whereas that of our algorithm is of order $O(n^2)$. Therefore, our proposed procedures have a shorter expected execution time, justifying the use of the term "simple".
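The following Python sketch is one possible rendering of the procedure. The lexicographic label (hop count, −bottleneck) is a standard way to obtain widest-shortest paths with a single Dijkstra run; the graph encoding and helper names are ours.

```python
import heapq

def widest_shortest_path(adj, s, d):
    """Widest-shortest path via Dijkstra with lexicographic labels
    (hops, -bottleneck). adj: {node: {neighbour: residual_bandwidth}}.
    Returns (path, bottleneck), or (None, 0.0) if d is unreachable."""
    best = {s: (0, -float("inf"))}
    heap = [((0, -float("inf")), s, [s])]
    while heap:
        label, u, path = heapq.heappop(heap)
        if u == d:
            return path, -label[1]
        if label > best[u]:
            continue                              # stale heap entry
        hops, nbtl = label
        for v, bw in adj[u].items():
            cand = (hops + 1, max(nbtl, -bw))     # i.e. -min(bottleneck, bw)
            if cand < best.get(v, (float("inf"), 0.0)):
                best[v] = cand
                heapq.heappush(heap, (cand, v, path + [v]))
    return None, 0.0

def k_critical_paths(adj, s, d, K):
    """K-widest-shortest-path under bottleneck elimination: repeatedly take a
    widest-shortest path, then remove its bottleneck link(s) and iterate.
    Links are directed here; undirected links appear in both directions."""
    g = {u: dict(nb) for u, nb in adj.items()}    # work on a copy
    paths = []
    for _ in range(K):
        path, btl = widest_shortest_path(g, s, d)
        if path is None:
            break
        paths.append((path, btl))
        for u, v in zip(path, path[1:]):
            if g[u][v] == btl:                    # a bottleneck link
                del g[u][v]
    return paths
```

For instance, `k_critical_paths(adj, "S1", "D1", K=6)` would return up to six critical paths in decreasing order of significance.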
3.2 Link-Weight Assignment
Each link is initially assigned a static cost. For critical links this cost is scaled by a factor that includes two weight components. The first one is associated with the path(s) to which the link belongs; naturally, higher weights are assigned to paths with higher significance. The second reflects the importance of the links constituting each of the critical paths.

We now turn our attention to the first weight component. Let $L_{sd}^{(i)}$ denote the set of the links constituting the i-th path associated with the ingress–egress pair (s, d), and let $w_{sd}^{(i,l)}$ denote the corresponding weight contributed to link l of this set. Let also $btl_{sd}^{(i)}$ be the corresponding bandwidth of this path, and $Btl_{sd}^{(i)}$ be the subset of the corresponding bottleneck link(s). The paths are enumerated in descending order of their significance. Accordingly, the links of the path $L_{sd}^{(i)}$ are weighted with a factor $v_{sd}^{(i)}$, which is a decreasing function in i, such that the weight for link l is proportional to this factor, i.e. $w_{sd}^{(i,l)} \sim v_{sd}^{(i)}$. Our rationale is the following: if all candidate paths for the new connection contain links that have already been marked by this process, i.e. the interference cannot be avoided, then the links associated with the most critical paths corresponding to the other ingress–egress pairs should be avoided by assigning them the highest weights. In this way the interference is relegated to secondary paths. There are infinitely many discounting value functions one could choose. Here we consider the following two functions: $v_{sd}^{(i)} = 1$ and $v_{sd}^{(i)} = (K - i + 1)/K$.

The next consideration is the weight assignment for the links constituting path $L_{sd}^{(i)}$. Let $g_{sd}^{(i,l)}$ denote the corresponding weight for link l. Intuition dictates that bottleneck links should be assigned a higher value than other links. Here again, there are infinitely many discounting value functions one could choose. In this paper, we consider two functions: the inversely proportional function $g_{sd}^{(i,l)} = btl_{sd}^{(i)} / r(l)$ and the step function $g_{sd}^{(i,l)} = \lfloor btl_{sd}^{(i)} / r(l) \rfloor$, where $r(l)$ denotes the residual bandwidth of link $l \in L_{sd}^{(i)}$. Note that $btl_{sd}^{(i)} / r(l)$ is a decreasing function in $r(l)$ with $btl_{sd}^{(i)} / r(l) \le 1$. Consequently, $\lfloor btl_{sd}^{(i)} / r(l) \rfloor = 1$ if and only if $r(l) = btl_{sd}^{(i)}$; otherwise $\lfloor btl_{sd}^{(i)} / r(l) \rfloor = 0$. Thus, in the case of the step function, it holds that:

$$g_{sd}^{(i,l)} = \begin{cases} 1 & \text{if } l \text{ is a bottleneck link of the path } L_{sd}^{(i)}, \\ 0 & \text{otherwise.} \end{cases}$$

We proceed by choosing the weight contributed to link l to be proportional to the factors defined above, i.e. $w_{sd}^{(i,l)} \sim v_{sd}^{(i)}$ and $w_{sd}^{(i,l)} \sim g_{sd}^{(i,l)}$. In particular, we choose $w_{sd}^{(i,l)} = w\, v_{sd}^{(i)} g_{sd}^{(i,l)}$, $l \in L_{sd}^{(i)}$, where w is a scaling factor. By taking into account all contributions, the weight of each link is then calculated by $w(l) = c(l) \big[ 1 + \sum_{(s,d) \in P \setminus (a,b)} a_{sd} \sum_{i=1}^{K} \sum_{l \in L_{sd}^{(i)}} w_{sd}^{(i,l)} \big]$, or

$$w(l) = c(l) \Bigg[ 1 + w \sum_{(s,d) \in P \setminus (a,b)} a_{sd} \sum_{i=1}^{K} v_{sd}^{(i)} \sum_{l \in L_{sd}^{(i)}} g_{sd}^{(i,l)} \Bigg], \qquad (1)$$
where $c(l)$ is a static cost that could, for example, depend on the capacity of the link, and $a_{sd}$ is the weight of the ingress–egress pair (s, d). Note that for non-critical links it holds that $w(l) = c(l)$.
3.3 The SMIRA Algorithm
In summary, the SMIRA algorithm is as follows:

INPUT: A graph G(N, L), the residual bandwidth r(l) for each link, and a set P of ingress–egress node pairs. An ingress node a and an egress node b between which a flow of D units has to be routed.
OUTPUT: A route between a and b with a bandwidth of D units, if it exists.
ALGORITHM:
1. Compute the K critical paths $\forall (s,d) \in P \setminus (a,b)$. Let $L_{sd}^{(i)}$ be the set of the links constituting the i-th path.
2. Assign a weight to each link according to Eq. (1).
3. Eliminate all links that have residual bandwidth smaller than D, forming a reduced network.
4. Use Dijkstra's algorithm [8] to compute the shortest path in the reduced network with w(l) as the weight of link l.
5. Route the demand of D units from a to b along this shortest path, and update the residual capacities.

SMIRA, as defined above, clearly constitutes a general class of algorithms containing an unlimited number of particular instances (implementations). Here we consider two particular algorithmic instances and investigate their performance. The first is called the minimum-interference bottleneck-link-avoidance algorithm (MI-BLA), and is derived from the following choices:

– The set of critical paths is obtained using the K-widest-shortest-path under bottleneck-elimination procedure, with K equal to 6.
– All paths are considered to have the same weight, i.e. $v_{sd}^{(i)} = 1$.
– Links are valued according to the step function, i.e. $g_{sd}^{(i,l)} = \lfloor btl_{sd}^{(i)} / r(l) \rfloor$.
– Setting $a_{sd} = 1$ and $w = 2$, the weight of each link is then calculated by
$$w(l) = c(l) \Bigg[ 1 + 2 \sum_{(s,d) \in P \setminus (a,b)} \sum_{i=1}^{K} \sum_{l \in L_{sd}^{(i)}} \Big\lfloor \frac{btl_{sd}^{(i)}}{r(l)} \Big\rfloor \Bigg].$$

The second is called the minimum-interference path-avoidance algorithm (MI-PA), and is derived from the following choices:

– The set of critical paths is obtained using the K-widest-shortest-path under bottleneck-elimination procedure, with K equal to 4.
– Paths are weighted according to the discounting function $v_{sd}^{(i)} = (K - i + 1)/K$.
– Links are valued with the inversely proportional function, i.e. $g_{sd}^{(i,l)} = btl_{sd}^{(i)} / r(l)$.
– Setting $a_{sd} = 1$ and $w = 2$, the weight of each link is then calculated by
$$w(l) = c(l) \Bigg[ 1 + 2 \sum_{(s,d) \in P \setminus (a,b)} \sum_{i=1}^{K} \frac{K - i + 1}{K} \sum_{l \in L_{sd}^{(i)}} \frac{btl_{sd}^{(i)}}{r(l)} \Bigg].$$
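Given the critical paths, the MI-BLA weight assignment is a direct translation of the formula above. The sketch below assumes the critical paths have already been computed for every pair in P \ (a, b), e.g. with the `k_critical_paths` helper of the previous sketch; all names are ours.

```python
import math

def mi_bla_weights(links, critical, residual, static_cost=None, w=2.0):
    """MI-BLA link weights: v = 1, a_sd = 1, step function g = floor(btl/r).
    links:    iterable of directed links (u, v)
    critical: {(s, d): [(path_as_node_list, bottleneck), ...]} for all
              ingress-egress pairs other than the requesting pair (a, b)
    residual: {(u, v): residual bandwidth r(l)}"""
    static_cost = static_cost or {l: 1.0 for l in links}
    weights = {}
    for l in links:
        contrib = 0.0
        for paths in critical.values():
            for path, btl in paths:
                if l in zip(path, path[1:]):          # l lies on this path
                    contrib += math.floor(btl / residual[l])  # 1 iff bottleneck
        weights[l] = static_cost[l] * (1.0 + w * contrib)
    return weights
```

Steps 3–5 of the algorithm then prune the links with r(l) < D and run a standard Dijkstra shortest-path computation with these weights.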
Fig. 1. Example networks N1 and N2+.
4 Numerical Results
In this section, we compare the performance of the two SMIRA-type algorithms MI-BLA and MI-PA with the shortest-path (SP), shortest-widest-path (SWP), and widest-shortest-path (WSP) algorithms and, where results are available, with the S-MIRA and L-MIRA algorithms. The experiments are carried out using network topology N1 of [5]; see Figure 1. Links are bi-directional with a capacity of 1200 units (thin lines) and 4800 units (thick lines).¹ Each link l is assigned a static cost c(l) of one unit. The network contains the four ingress–egress pairs (S1→D1), (S2→D2), (S3→D3), and (S4→D4); path requests are limited to those pairs only. We have chosen this network so that the performance results can be directly compared with those published in [5]. All experiments are conducted using "static" requests, i.e., the bandwidth allocated for a request is never freed again. Requests are selected randomly and are uniformly distributed among all ingress–egress pairs. In all experiments, 20 test runs were carried out, and the results shown are the mean values obtained; they have a 99% confidence interval not exceeding 1% of the mean values.

4.1 Experiment 1: Uniform Link Costs
In a first experiment, network N1 is loaded with 7000 requests. The bandwidth demand of each request is uniformly distributed in the range of 1 to 3 units (only integer values are used). Because the cost of the links in network N1 is set to 1, the SP algorithm reduces to a minimum-hop algorithm. The bandwidth of accepted requests in experiment 1 is shown in Figure 2. For each algorithm, the bandwidth increases with the number of requests until a saturation point is reached at which no more requests can be accommodated. The first performance measure we use is the bandwidth of successfully routed requests after the saturation point has been reached.

¹ Owing to an error in the production of the final version of [5], links 2-5, 2-3, and 14-15 are erroneously shown as having a capacity of 1200 units. For the experiments described in [5], those links had a capacity of 4800 units.

Fig. 2. Throughput of accepted requests using demands of 1 to 3 in N1.

Fig. 3. Blocked requests using demands of 1 to 3 in N1.

SP shows the weakest
performance, with a saturation point around 10200 bandwidth units. The best performance is shown by MI-BLA with 10800 units, followed very closely by WSP with 10770 units, and MI-PA and SWP with 10610 and 10550 units, respectively. Note that the theoretical maximum is also 10800 units. This maximum results from the solution of the multicommodity flow problem that maximizes the total flow of four commodities between the four ingress–egress pairs. This is referred to as the maximum throughput problem [9].

Because requests are uniformly distributed among all ingress–egress pairs, we are also interested in a solution where the flow of each of the four commodities has the same value. This refers to the maximum concurrent flow variant of the multicommodity flow problem [9]. If the network is uniformly loaded, it is clear that a greedy routing algorithm cannot result in a flow that exceeds the maximum concurrent flow without rejecting any request. In our case, it turns out that the maximum throughput and the maximum concurrent flow have the same value of 10800. This implies that there is a solution where 2700 units can be transported between each ingress–egress pair, resulting in a total throughput of 10800 units. This maximum throughput is indicated by the "Maximum-throughput" line in Figure 2.
Table 1. Blocking rate per ingress–egress pair in N1.

Algorithm  (S1→D1)  (S2→D2)  (S3→D3)  (S4→D4)
MI-BLA     16.88%   16.81%   16.86%   16.82%
WSP        26.26%    8.42%   24.96%    8.43%
MI-PA      32.38%    8.81%   23.81%    8.79%
SWP        18.83%   18.64%   18.59%   18.59%
SP         45.25%    7.55%   25.54%    7.58%

Table 2. Number of blocked requests out of a total of 5000 requests in N1 (left: results from [5]; right: our results).

Algorithm  Avg    Min    Max       Algorithm  Avg  Min  Max
Min-Hop    ≈400   ≈350   ≈450      SP         404  353  448
WSP        ≈340   ≈310   ≈380      WSP         86   28  151
S-MIRA     ≈80    0      ≈150      SWP          0    0    0
L-MIRA     0      0      0         MI-BLA       0    0    0
A second performance measure looks at the number of blocked requests. Figure 3 shows the number of blocked versus total requests. After 3450 requests, the SP algorithm starts to block some requests. MI-PA blocks after 3950 requests, WSP starts to block after 4750 requests, followed by SWP at 5230 and MI-BLA at 5350. From the above, it is clear that the blocking-free range strongly depends on the algorithm used. Note that although WSP starts to block quite early, it still achieves a throughput close to the theoretical maximum. Similarly, MI-PA has a short blocking-free range but achieves a higher total throughput than WSP does. This is due to the fact that the request-blocking rates of WSP and MI-PA differ significantly among the ingress–egress pairs. Table 1 shows the blocking rate per ingress–egress pair of the various algorithms after 6500 requests have been processed, i.e., at a saturation point. MI-BLA and SWP achieve almost perfect fairness among the pairs, whereas WSP, MI-PA, and SP favor pairs 2 and 4.

Our results on the number of blocked requests are directly comparable with some of the results published in [5]. Figure 7 in [5] shows the number of blocked requests out of a total of 5000 requests for minimum-hop, WSP, S-MIRA, and L-MIRA. Table 2 compares the results presented in [5] (first four columns) with our results (columns 5 to 8). For all algorithms, the average, minimum, and maximum numbers of blocked requests are given. We observe that Min-Hop closely matches our SP results; because uniform link costs have been used, SP actually computes minimum-hop paths. The results for WSP, however, do not match: in our experiment, WSP achieves a performance similar to that of S-MIRA. Furthermore, we observe that L-MIRA achieves a perfect score with no blocked requests. In our experiment, we obtain the same result for both SWP and MI-BLA.

4.2 Experiment 2: Costs Inversely Proportional to Link Capacity
In the next experiment, we study the effect of static link costs on the performance. In network N1, all links have a cost of 1. We obtain network N2 by assigning different costs to the links. Following common practice, we assign link costs inversely proportional to the link capacities: links with capacity 1200 are assigned a cost of 4, and links of capacity 4800 are assigned a cost of 1. As shown in Figure 4, all algorithms perform almost equally well. The bandwidth routed by all algorithms is very close to the theoretical maximum of 10800 units. However, SP and in particular MI-PA achieve this maximum later than the other algorithms do.

Fig. 4. Throughput of accepted requests using demands of 1 to 3 in N2.

Fig. 5. Throughput of accepted requests using demands of 1 to 3 in N2+.
4.3 Experiment 3: Additional Ingress–Egress Nodes
In a third experiment, we increase the possibility of "interference" by increasing the number of ingress–egress pairs. To obtain the example network N2+, two additional ingress–egress pairs have been added to N1; see Figure 1. A total of 11000 requests is issued and, as in the previous experiments, the requests are uniformly distributed among the six ingress–egress pairs.
Table 3. Maximum concurrent flow in N2+.

        (S1→D1)  (S2→D2)  (S3→D3)  (S4→D4)  (S5→D5)  (S6→D6)  Sum of flows
Step 1   2000     2000     2000     2000     2000     2000     12000
Step 2   2400     2000     2400     2400     2000     2000     13200
Step 3   2400     2000     2800     2400     2000     2000     13600
As shown in Figure 5, the best performance is achieved by MI-PA, reaching a total throughput of close to 13600 units. WSP and SP exhibit a very similar behavior: both start to block requests early, but are able to successfully route totals of 13400 and 13300 units, respectively. MI-BLA, on the other hand, starts to block later, but saturates earlier, at 13000 units. SWP is the least successful strategy in this environment; it reaches its saturation point already at 12600 units.

To compute the theoretical maximum performance of greedy algorithms, we resort to the multicommodity flow problem that maximizes the flow of the six commodities corresponding to the ingress–egress pairs. In this case, it turns out that the maximum concurrent flow and the maximum throughput of the multicommodity flow problems do not coincide. In a first iteration, we find that a maximum of 2000 units of flow can be transported between each pair, resulting in a maximum concurrent flow of 12000 units. The flow can no longer be increased because some pairs are saturated. In our case it turns out that there is still residual bandwidth left between pairs (S1→D1), (S3→D3), and (S4→D4). If we compute the maximum concurrent flow in the residual network for the unsaturated pairs, we find that these pairs support another 400 units of flow. In a third step, we find that pair (S3→D3) supports yet another 400 units of flow. With this three-step approach, we obtain the maximum throughput of 13600 units. Table 3 summarizes the three maximum concurrent flow computations.

Table 3 also defines how the optimum algorithm works in the setting of experiment 3. Requests for pairs (S2→D2), (S5→D5), and (S6→D6) are blocked after 2000 units of bandwidth have been routed over those pairs. Next, requests for pairs (S1→D1) and (S4→D4) are blocked after 2400 units of bandwidth. Finally, requests for pair (S3→D3) are blocked. At this point, an optimum algorithm reaches its saturation point, with 13600 units of bandwidth routed in total. For an average request size of 2, the saturation point is expected to be reached at 8400 requests.

MI-PA achieves near-optimum performance with respect to the total throughput. Figure 6 shows that MI-PA is also very close to the optimum solution with respect to the request-acceptance rates of the individual pairs; MI-PA slightly over-allocates requests for pair 3 at the expense of pair 6. The request-acceptance rates of individual pairs differ significantly for WSP and SP. Compared with the optimum solution (shown on the left), both WSP and SP over-allocate requests for pairs 3 and 4, while under-allocating requests for the other pairs. MI-BLA and SWP, on the other hand, show a greater fairness among the pairs.
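The first step of this computation can be reproduced in spirit with a small linear program. The sketch below maximizes the common rate t over a fixed set of candidate paths on a toy topology (so it is a path-restricted lower bound on the true multicommodity optimum); the topology, paths, and capacities are illustrative and are not network N2+.

```python
import numpy as np
from scipy.optimize import linprog

capacity = {"e1": 10.0, "e2": 10.0, "e3": 6.0}        # toy link capacities
paths = {                                             # pair -> candidate paths
    "pair1": [["e1"], ["e2", "e3"]],
    "pair2": [["e2"]],
    "pair3": [["e3"]],
}

flat = [(pair, p) for pair, plist in paths.items() for p in plist]
n = len(flat)
c = np.zeros(n + 1)
c[0] = -1.0                                           # maximize t (minimize -t)

# For each pair: the sum of its path flows equals the common rate t.
A_eq = np.zeros((len(paths), n + 1))
A_eq[:, 0] = -1.0
for col, (pair, _) in enumerate(flat, start=1):
    A_eq[list(paths).index(pair), col] = 1.0
b_eq = np.zeros(len(paths))

# For each link: the path flows crossing it respect its capacity.
links = list(capacity)
A_ub = np.zeros((len(links), n + 1))
for col, (_, p) in enumerate(flat, start=1):
    for l in p:
        A_ub[links.index(l), col] = 1.0
b_ub = np.array([capacity[l] for l in links])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("maximum concurrent rate t =", res.x[0])        # step 1 of the scheme
```

Iterating as in Table 3 amounts to fixing the rates of the saturated pairs, subtracting their flows from the link capacities, and re-solving for the remaining pairs.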
Fig. 6. Bandwidth of accepted requests per ingress–egress pair.
5 Conclusions
Here we have addressed the issue of on-line path selection for bandwidth-guaranteed requests. We have presented a new class of minimum-interference routing algorithms called "simple minimum-interference routing algorithms" (SMIRA), designed for a reduced computational complexity compared with the existing MIRA maximum-flow approach. Two typical algorithms belonging to this class, called MI-BLA and MI-PA, were introduced, and their efficiency in terms of the throughput of accepted requests and blocking-free range, as well as their fairness, were assessed by means of simulation. The results obtained in the topologies considered demonstrate that these algorithms can achieve performance similar to that of the earlier MIRA algorithm, however at reduced computational complexity. Comparisons with the performance of some of the established routing algorithms revealed that employing MI-BLA and MI-PA in networks with a high degree of interference improves the performance compared with that of the shortest-path, widest-shortest-path, and shortest-widest-path algorithms. Furthermore, our algorithms exhibit a higher degree of fairness among the ingress–egress node pairs. An investigation and assessment of how the proposed algorithms perform in dynamic environments is a significant area of future work. A more systematic approach for determining the optimum algorithmic instance within the SMIRA class is also a topic for further investigation.
References
1. Rosen, E., Viswanathan, A., Callon, R.: Multiprotocol Label Switching Architecture. RFC 3031 (January 2001).
2. The ATM Forum: Private Network-Network Interface Specification Version 1.0. Specification Number af-pnni-0055.000 (March 1996).
3. Ma, Q., Steenkiste, P.: On Path Selection for Traffic with Bandwidth Guarantees. In: Proc. IEEE Int'l Conf. on Network Protocols, Atlanta, GA (1997) 191-202.
4. Gawlick, R., Kalmanek, C., Ramakrishnan, K. G.: On-line Routing for Permanent Virtual Circuits. In: Proc. IEEE INFOCOM '95, Boston, MA, Vol. 1 (1995) 278-288.
5. Kar, K., Kodialam, M., Lakshman, T. V.: Minimum Interference Routing of Bandwidth Guaranteed Tunnels with MPLS Traffic Engineering Applications. IEEE J. Sel. Areas Commun. 18 (2000) 2566-2579.
6. Gibbens, R. J., Kelly, F. P., Key, P. B.: Dynamic Alternative Routing – Modelling and Behavior. In: Proc. 12th Int'l Teletraffic Congress, Turin, Italy (1988) 1019-1025.
7. Suri, S., Waldvogel, M., Warkhede, P. R.: Profile-Based Routing: A New Framework for MPLS Traffic Engineering. In: Boavida, F. (Ed.), Quality of Future Internet Services, LNCS 2156 (Springer, Berlin, 2001).
8. Dijkstra, E. W.: A Note on Two Problems in Connexion with Graphs. Numerische Mathematik 1 (1959) 269-271.
9. Aumann, Y., Rabani, Y.: An O(log k) Approximate Min-Cut Max-Flow Theorem and Approximation Algorithm. SIAM J. Comput. 27 (1998) 291-301.
Performance Analysis of Dynamic Lightpath Configuration for WDM Asymmetric Ring Networks

Takuji Tachibana and Shoji Kasahara

Graduate School of Information Science, Nara Institute of Science and Technology
Takayama 8916-5, Ikoma, Nara 630-0101, Japan
{takuji-t, kasahara}@is.aist-nara.ac.jp
Abstract. In this paper, we analyze the performance of a lightpath configuration method for optical add/drop multiplexers (OADMs) in a WDM asymmetric ring network. We consider a multiple-queueing system for a node in the ring network and derive the loss probability and the wavelength utilization factor. Numerical examples show how the arrival rate from the access network and the threshold specified in the dynamic configuration method affect the loss probability and the wavelength utilization factor. In addition, a comparison with a static configuration method shows that the loss probability under the proposed method can be almost the same as that under the static method, where lightpaths are pre-established such that all wavelengths in the network are used efficiently.
1 Introduction
An optical add/drop multiplexer (OADM) selectively adds/drops wavelengths to establish lightpaths in a WDM network [3,4,6,7,9,11]. This provides all-optical connection between any pair of OADMs (see Fig. 1). The number of available wavelengths is 16, 32, 64, 128, and so on, and the wavelengths to be added/dropped are pre-selected in each OADM [5,8,10]. Hence significant pre-deployment network planning is required to specify which wavelengths are to be added/dropped and where. Once the network design is determined, it will not be changed unless the network operator is willing to change it. When the traffic pattern changes frequently, the OADM degrades the performance of the network [13]. However, if wavelengths are dynamically allocated, high utilization of wavelengths and large packet throughput are expected [2].

To realize dynamic lightpath configuration for OADMs, we have proposed a dynamic lightpath configuration method [12]. With our proposed method, a lightpath is established according to the congestion state of a node and is released when there are no packets to be transmitted in the buffer for the lightpath. It is not necessary to pre-select added/dropped wavelengths. In [12], we have considered the WDM ring network shown in Fig. 2, where traffic is injected into each node from each access network at the same rate.
Fig. 1. Optical add/drop multiplexer.

Fig. 2. Ring network model.
Under the real network environment, however, the traffic volume injected from an access network depends on its location and the services provisioned, i.e., the traffic volume from each access network is different; we call such a ring network a WDM asymmetric ring network. To analyze the performance of the dynamic lightpath configuration method for the WDM asymmetric ring network, we extend the symmetric ring network model in [12] to an asymmetric one. We model this system as a continuous-time Markov chain and derive the loss probability of packets coming from the access network to the node and the wavelength utilization factor. Through analysis and simulation, we investigate how the arrival rate from the access network and the threshold specified in the lightpath configuration method affect the performance measures for the WDM asymmetric ring network. Finally, we compare the proposed method with a static configuration method and discuss the effectiveness of the proposed method.

The rest of the paper is organized as follows. Section 2 summarizes the dynamic lightpath configuration method, and in Section 3, we present the analytical model of our proposed method for WDM asymmetric ring networks. The performance analysis in the case of light traffic is presented in Section 4 and numerical examples are given in Section 5. Finally, conclusions are presented in Section 6.
2 Dynamic Lightpath Configuration Method
In this section, we summarize the dynamic lightpath configuration method proposed in [12]. Each node consists of an OADM with an MPLS control plane and a label switching router (LSR) [1,2]. The procedure of lightpath configuration is as follows (see Fig. 3). For simplicity, we consider a tandem network with three nodes, namely, nodes A, B and C. Each node is connected to its own access network through the LSR. Suppose W + 1 wavelengths are multiplexed into an optical fiber in our network. Among the W + 1 wavelengths, W are used to transmit data traffic and one is dedicated to carrying and distributing control traffic. Therefore we handle W wavelengths, which consist of one default path and W − 1 lightpaths.
Fig. 3. Dynamic lightpath configuration.
Fig. 4. Traffic from node j to other nodes.
Let w0 denote the wavelength for the default path used between adjacent nodes (A and B, or B and C, in Fig. 3). We define wi (1 ≤ i ≤ W − 1) as a wavelength that is dynamically allocated according to congestion in the default path. If an IP packet whose destination is node C arrives at node A from the access network, the LSR in node A performs label switching by relating the input label to an output label according to the packet's destination node. Through the MPLS control plane, the OADM determines the output wavelength corresponding to the output label. If the default path is not congested and no lightpath is established between nodes A and C, the packet is transmitted to node B on wavelength w0. When the packet arrives at node B, the LSR in node B performs label switching. Then, through the MPLS control plane, the OADM in node B determines the output wavelength and the packet is transmitted to node C on it. The LSR in each node has W buffers corresponding to the W wavelengths. In particular, the buffer for the default path (the default buffer) has a pre-specified threshold. If the number of packets in the default buffer becomes equal to or greater than the threshold, the LSR regards the default path as congested and decides to establish a new lightpath. Here the new lightpath is established between the source and destination nodes of the packet that triggers the congestion. The new lightpath is established in the following manner. We consider two cases: the packet transmitted from node A to C (i) triggers congestion at node A, or (ii) triggers congestion at node B. In case (i), the MPLS control plane in node A requests a wavelength from the MPLS control plane in node C for the establishment of a new lightpath using control traffic (Fig. 3 (1)). By distributing network state information, the MPLS control plane in each node always has the latest information on the lightpath configuration. When the wavelength request of node A arrives at node C, the MPLS control plane in node C searches for an available wavelength for path BC. If wavelength
w1 is available for path BC, node C informs node B that w1 is available using a control signal and adjusts its OADM to drop w1. Subsequently, the MPLS control plane in node B searches for an available wavelength for path AB. If w1 is also available for path AB, node B informs node A about it. Otherwise, node B informs node A of another wavelength, say w2. In the latter case, w2 is converted to w1 at node B for the transmission from A to C. If no wavelengths are available, the new lightpath establishment fails. Finally, node A adjusts its OADM to add w1 or w2. Until the lightpath establishment is completed, wavelength w0 is still used for the packet transmission between A and C. As soon as the establishment is completed, the lightpath becomes available. In case (ii), where congestion occurs at the intermediate node B, the MPLS control plane in node B asks node A to request a new wavelength from node C (Fig. 3 (2)). The subsequent procedure is the same as in case (i). If there are no packets in the buffer after a packet transmission, the timer for the holding time starts. The established lightpath is released if the holding time expires and there are no packets in the buffer (Fig. 3 (iii), (3)). For simplicity, we assume in this paper that multiple lightpaths between any pair of nodes are not permitted.
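The reuse-or-convert walk just described is compact enough to sketch in code. The following toy Python fragment is our illustration (not the authors' implementation) of case (i): starting from the destination node, each hop keeps the current wavelength if it is free and otherwise converts to the smallest-numbered free wavelength (the selection rule stated in Section 3); the link names and wavelength sets are made up for the example.

```python
def establish(links, free):
    """links: hop ids listed from destination back to source;
    free: hop -> set of free wavelength numbers.
    Returns a per-hop wavelength assignment, or None on failure."""
    assignment, chosen = {}, None
    for link in links:
        if chosen not in free[link]:
            if not free[link]:
                return None                  # no available wavelength: establishment fails
            chosen = min(free[link])         # smallest-numbered free wavelength
        assignment[link] = chosen            # conversion happens wherever 'chosen' changes
    return assignment

# w1 is free on BC but not on AB, so w2 is used on AB and converted to w1 at B:
print(establish(["BC", "AB"], {"BC": {1, 2}, "AB": {2}}))  # {'BC': 1, 'AB': 2}
```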
3 System Model
We consider a WDM network where L nodes are connected in a ring topology (see Fig. 2). Each node, as shown in the previous section, consists of an OADM with an MPLS control plane and an LSR, and establishes/releases lightpaths according to the dynamic lightpath configuration method. In addition, each node is connected to its own access network through the LSR. We assume that the number of wavelengths available at each node is W and that wavelength conversion is possible between any pair of wavelengths. One of the W wavelengths is for the default path and the others are for lightpaths, which are dynamically established. The W − 1 wavelengths for lightpaths are numbered from 1 to W − 1. A lightpath is established with the free wavelength that has the smallest number: when there is no idle wavelength among the first i − 1, a lightpath is established with the ith (1 ≤ i ≤ W − 1) wavelength. We have two types of buffers in each node: one for the default path and the others for the lightpaths, which are dynamically established/released. Let Kd denote the capacity of the default buffer and Kl the capacity of each lightpath buffer. Here, the buffer capacity consists of a waiting room, where packets wait for transmission, and a server, where a packet is in transmission. Let Th denote the pre-specified threshold for the default path. For the traffic condition within this WDM asymmetric ring network, we assume that packets arriving at node j (1 ≤ j ≤ L) from the access network are transmitted to their destination nodes in the clockwise direction. Under this assumption, two kinds of packet traffic arrive at node j: one from the access network and the other from the previous node j − 1, as shown in Fig. 4.
In terms of traffic from the access network, we assume that packets arrive at node j from the access network according to a Poisson process with parameter λ_j. We assume that, for 1 ≤ j ≤ L, λ_j is so small that packet loss hardly occurs at intermediate nodes. Moreover, we assume that the destination of a packet which arrives at node j is k (k ≠ j) with probability \(P_k^{(j)}\), which satisfies \(\sum_{k=1, k \ne j}^{L} P_k^{(j)} = 1\). Therefore, packets destined to node k arrive at node j from the access network according to a Poisson process with parameter λ(j, k), which is given by

\[
\lambda(j, k) = P_k^{(j)} \lambda_j. \tag{1}
\]
Next we consider the traffic which arrives at node j from node j − 1. Since the buffers in each node are finite queues, our ring network is not an open Jackson queueing network. Due to the light traffic, however, arriving packets are hardly ever lost and most packets are served by the default path. Therefore we can approximate the arrival process from the previous node with an approach similar to the analysis of open Jackson networks [14]. Let \(\lambda_j^{pre}\) denote the rate of the packet arrival process from node j − 1 to node j. Noting that packets are sent in the clockwise direction and are hardly ever lost under the light traffic assumption, \(\lambda_j^{pre}\) can be approximated as follows:

\[
\lambda_j^{pre} \approx \sum_{k=1}^{j-1} \left( \sum_{n=j+1}^{L} \lambda(k,n) + \sum_{m=1}^{k-1} \lambda(k,m) \right) + \sum_{k=j+2}^{L} \sum_{n=j+1}^{k-1} \lambda(k,n), \quad 1 \le j \le L. \tag{2}
\]
We assume that the packet arrival process at node j from the previous node j − 1 is Poisson with rate \(\lambda_j^{pre}\). The whole of the packets then arrive at node j according to a Poisson process with rate \(\lambda_j^{all} = \lambda_j + \lambda_j^{pre}\), which is given by

\[
\lambda_j^{all} = \sum_{k=1}^{j} \left( \sum_{n=j+1}^{L} \lambda(k,n) + \sum_{m=1}^{k-1} \lambda(k,m) \right) + \sum_{k=j+2}^{L} \sum_{n=j+1}^{k-1} \lambda(k,n), \quad 1 \le j \le L. \tag{3}
\]
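As a sanity check on Eqs. (1)-(3), the rates can be computed mechanically from a traffic matrix. The short Python helper below is our illustration: nodes are 0-indexed (the paper uses 1, ..., L), the predicate encodes membership in the sums of Eq. (2), and all numerical values are assumptions for the example.

```python
import numpy as np

L = 10
lam = np.full(L, 0.015)                     # lambda_j, access arrival rates
P = np.full((L, L), 1.0 / (L - 1))          # P_k^{(j)}, uniform destinations
np.fill_diagonal(P, 0.0)
lam_jk = lam[:, None] * P                   # Eq. (1): lambda(j, k)

def via_previous(k, n, j):
    """True if the clockwise flow k -> n reaches node j from node j - 1 and
    continues past j; these are exactly the flows summed in Eq. (2)."""
    return k != j and (j - k) % L < (n - k) % L

lam_pre = np.array([sum(lam_jk[k, n] for k in range(L) for n in range(L)
                        if via_previous(k, n, j)) for j in range(L)])
lam_all = lam + lam_pre                     # Eq. (3)
print(lam_all)
```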
We define \(D_l^{(j)}(t)\) as the set of destination nodes of the established lightpaths in node j at time t. Then packets arrive at the default path according to a Poisson process with rate \(\lambda_j^{all} - \lambda_j^{light}\), where \(\lambda_j^{light}\) is given by

\[
\lambda_j^{light} = \sum_{k \in D_l^{(j)}(t)} \lambda(j, k). \tag{4}
\]
We also assume that for any node the transmission time of a packet, the lightpath establishment/release time and the holding time are exponentially distributed with rates µ, p and h, respectively.
Fig. 5. Asymmetric ring model with light traffic.
Fig. 6. State transition diagram for a lightpath \(l_i^{(j)}\).

4 Performance Analysis
We consider a multiple queueing system for node j (1 ≤ j ≤ L), illustrated in Fig. 5. Let \(l_i^{(j)}\) (1 ≤ i ≤ W − 1) denote the ith lightpath dynamically established/released at node j. We define the state of lightpath \(l_i^{(j)}\) for node j at time t as

\[
J_{l_i}^{(j)}(t) =
\begin{cases}
n, & (0 \le n \le K_l), \text{ if } l_i^{(j)} \text{ is busy},\\
I, & \text{if } l_i^{(j)} \text{ is idle},\\
S, & \text{if } l_i^{(j)} \text{ is being established},\\
R, & \text{if } l_i^{(j)} \text{ is being released}.
\end{cases}
\]

Let \(N_d^{(j)}(t)\) denote the number of packets in the default path for node j at time t. \(d_{l_i}^{(j)}(t)\) is defined as the destination node directly connected by lightpath \(l_i^{(j)}\) at t and is given by

\[
d_{l_i}^{(j)}(t) =
\begin{cases}
k, & \text{if } l_i^{(j)} \text{ is busy and connected to node } k \in D_l^{(j)}(t),\\
0, & \text{otherwise}.
\end{cases} \tag{5}
\]

Finally, we define the state of the system at t as

\[
\bigl(N_d^{(j)}(t), \boldsymbol{J}_l^{(j)}(t)\bigr), \tag{6}
\]

where \(\boldsymbol{J}_l^{(j)}(t)\) is given by

\[
\boldsymbol{J}_l^{(j)}(t) = \Bigl( \bigl(J_{l_1}^{(j)}(t), d_{l_1}^{(j)}(t)\bigr), \cdots, \bigl(J_{l_{W-1}}^{(j)}(t), d_{l_{W-1}}^{(j)}(t)\bigr) \Bigr). \tag{7}
\]

In addition, we define \(M_l^{I(j)}(t)\) as the number of idle lightpaths at t, expressed as

\[
M_l^{I(j)}(t) = \sum_{i=1}^{W-1} 1_{\{J_{l_i}^{(j)}(t) = I\}}, \tag{8}
\]

where \(1_{\{X\}}\) is the indicator function of event X.
Table 1. State transition rates in the asymmetric ring network model.

Default path (current state (N_d, J_l)):

  M_l^I > 0:
    N_d < Th:
      next state (N_d + 1, J_l); rate λ_j^all − λ_j^light
    Th ≤ N_d < K_d and (J_{l_imin}, d_{l_imin}) = (I, 0):
      next state (N_d + 1, J_l) with (J_{l_imin}, d_{l_imin}) = (S, 0); rate λ_j^all − λ_j^light
    N_d = K_d and (J_{l_imin}, d_{l_imin}) = (I, 0):
      next state (N_d, J_l) with (J_{l_imin}, d_{l_imin}) = (S, 0); rate λ_j^all − λ_j^light
    N_d > 0:
      next state (N_d − 1, J_l); rate µ

  M_l^I = 0:
    N_d < K_d:
      next state (N_d + 1, J_l); rate λ_j^all − λ_j^light
    N_d > 0:
      next state (N_d − 1, J_l); rate µ

Lightpath l_i (current state (N_d, J_l)):
    (J_{l_i}, d_{l_i}) = (S, 0):
      next state (J_{l_i}, d_{l_i}) = (0, k); rate p
    (J_{l_i}, d_{l_i}) = (n, k), n < K_l:
      next state (n + 1, k); rate λ(j, k)
    (J_{l_i}, d_{l_i}) = (n, k), n > 0:
      next state (n − 1, k); rate µ
    (J_{l_i}, d_{l_i}) = (0, k):
      next state (R, 0); rate h
    (J_{l_i}, d_{l_i}) = (R, 0):
      next state (I, 0); rate p
The state transition diagram for \(l_i^{(j)}\) is illustrated in Fig. 6. Let \(U^{(j)}\) denote the whole state space of \((N_d^{(j)}(t), \boldsymbol{J}_l^{(j)}(t))\) and \(U_l^{(j)}\) the space comprised of \(\boldsymbol{J}_l^{(j)}(t)\). In the remainder of this subsection, the argument t is omitted, since we consider the system in equilibrium.

The transition rates from state \((N_d^{(j)}, \boldsymbol{J}_l^{(j)})\) are shown in Table 1. Note that we omit the superscript (j) of all notation for simplicity. \(i_{\min}\) in Table 1 is defined as

\[
i_{\min} = \min\{\, i \, ; \, J_{l_i}^{(j)} = I, \ 1 \le i \le W - 1 \,\}. \tag{9}
\]

For example, when the current state is \((N_d^{(j)}, \boldsymbol{J}_l^{(j)})\) where \(M_l^{I(j)} > 0\), \(T_h \le N_d^{(j)} < K_d\) and the state of \(l_i^{(j)}\) is idle, a packet arrives at the default path with rate \(\lambda_j^{all} - \lambda_j^{light}\). Then \(N_d^{(j)}\) is increased by one and the establishment of lightpath \(l_i^{(j)}\) starts. Similarly, when the current state is \((N_d^{(j)}, \boldsymbol{J}_l^{(j)})\) where an established lightpath \(l_i^{(j)}\) has no packets in its own buffer, the holding time of \(l_i^{(j)}\) expires with rate h and \(l_i^{(j)}\) is released.

Let \(\pi(N_d^{(j)}, \boldsymbol{J}_l^{(j)})\) represent the steady-state probability of \((N_d^{(j)}, \boldsymbol{J}_l^{(j)})\). \(\pi(N_d^{(j)}, \boldsymbol{J}_l^{(j)})\) is uniquely determined by the equilibrium state equations and the following normalization condition:

\[
\sum_{(N_d^{(j)}, \boldsymbol{J}_l^{(j)}) \in U^{(j)}} \pi(N_d^{(j)}, \boldsymbol{J}_l^{(j)}) = 1. \tag{10}
\]

The equilibrium state equations are omitted due to the page limitation.
With \(\pi(N_d^{(j)}, \boldsymbol{J}_l^{(j)})\), the loss probability \(P_{loss}^{(j)}\) and the wavelength utilization factor \(P_{wave}^{(j)}\) for node j are given by

\[
P_{loss}^{(j)} = \sum_{(K_d, \boldsymbol{J}_l^{(j)}) \in U^{(j)}} \left( 1 - \frac{\lambda_j^{light}}{\lambda_j^{all}} \right) \pi(K_d, \boldsymbol{J}_l^{(j)})
+ \sum_{N_d^{(j)}=0}^{K_d} \sum_{i=1}^{W-1} \sum_{\substack{d_{l_i}^{(j)} \in D_l^{(j)},\ \boldsymbol{J}_l^{(j)} \in U_l^{(j)} \\ J_{l_i}^{(j)} = K_l}} \pi(N_d^{(j)}, \boldsymbol{J}_l^{(j)}) \, \frac{\lambda(j,k)}{\lambda_j^{all}}, \tag{11}
\]

\[
P_{wave}^{(j)} = \sum_{(N_d^{(j)}, \boldsymbol{J}_l^{(j)}) \in U^{(j)}} \left( 1_{\{N_d^{(j)} > 0\}} + \sum_{i=1}^{W-1} 1_{\{(J_{l_i}^{(j)}, d_{l_i}^{(j)}) \ne (I, 0)\}} \right) \frac{\pi(N_d^{(j)}, \boldsymbol{J}_l^{(j)})}{W}. \tag{12}
\]
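To make the structure of the chain concrete, the sketch below builds the generator for the special case W = 2 (a single dynamic lightpath) and one destination k, following the transition rates of Table 1, solves the balance equations with the normalization (10), and evaluates (11)-(12). It is a minimal illustration under assumed parameter values, not the authors' implementation.

```python
import numpy as np

Kd, Kl, Th, W = 6, 5, 4, 2
lam_all, lam_jk = 0.05, 0.02        # lambda_j^all and lambda(j, k)
mu, p, h = 1.0, 0.001, 0.001        # transmission, establish/release, holding rates

lp = ["I", "S", "R"] + list(range(Kl + 1))   # lightpath states; ints = busy with n packets
states = [(nd, j) for nd in range(Kd + 1) for j in lp]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for nd, j in states:
    busy = isinstance(j, int)
    lam_def = lam_all - (lam_jk if busy else 0.0)    # Eq. (4): default-path arrivals
    if nd < Th:
        Q[idx[(nd, j)], idx[(nd + 1, j)]] += lam_def
    elif j == "I":                                   # congestion triggers establishment
        Q[idx[(nd, j)], idx[(min(nd + 1, Kd), "S")]] += lam_def
    elif nd < Kd:
        Q[idx[(nd, j)], idx[(nd + 1, j)]] += lam_def
    if nd > 0:
        Q[idx[(nd, j)], idx[(nd - 1, j)]] += mu      # default-path service
    if j == "S":
        Q[idx[(nd, j)], idx[(nd, 0)]] += p           # establishment completes
    elif j == "R":
        Q[idx[(nd, j)], idx[(nd, "I")]] += p         # release completes
    elif busy:
        if j < Kl:
            Q[idx[(nd, j)], idx[(nd, j + 1)]] += lam_jk
        if j > 0:
            Q[idx[(nd, j)], idx[(nd, j - 1)]] += mu
        else:
            Q[idx[(nd, j)], idx[(nd, "R")]] += h     # holding time expires

np.fill_diagonal(Q, -Q.sum(axis=1))
A = np.vstack([Q.T, np.ones(len(states))])           # pi Q = 0 with sum(pi) = 1
pi = np.linalg.lstsq(A, np.r_[np.zeros(len(states)), 1.0], rcond=None)[0]

p_loss = sum(pi[idx[(Kd, j)]] * (1 - (lam_jk if isinstance(j, int) else 0) / lam_all)
             for j in lp)                             # default buffer full, Eq. (11)
p_loss += sum(pi[idx[(nd, Kl)]] for nd in range(Kd + 1)) * lam_jk / lam_all
p_wave = sum(pi[i] * ((nd > 0) + (j != "I")) / W for i, (nd, j) in enumerate(states))
print(p_loss, p_wave)                                 # Eqs. (11)-(12)
```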
5 Numerical Examples
In our numerical examples, we assume that a labeled packet (an IP datagram + a label) is 1250 bytes within the access networks and that the transmission speed of each wavelength is 10 Gbps. Thus, the transmission time of a packet is calculated as

\[
\frac{1250\,[\text{byte}] \times 8\,[\text{bit/byte}]}{10\,[\text{Gbps}]} = 1\,[\mu s]. \tag{13}
\]

We set 1/µ = 1 [µs], where 1/µ is the mean transmission time of a packet. We set both the mean lightpath establishment/release time and the mean holding time to 1.0 [ms], i.e., p = 0.001 and h = 0.001. In this section, we consider a WDM asymmetric ring network with 10 nodes.

5.1 Impact of Traffic Volume from Access Network
Figs. 7 and 8 illustrate how the traffic volume from the access network affects the loss probability. In both figures, we set W = 4, Kd = 6, Kl = 5 and Th = 4, and assume that the destination of each packet is equally likely, i.e., for any pair of nodes j and k (k ≠ j),

\[
P_k^{(j)} = \frac{1}{L-1} = \frac{1}{9}. \tag{14}
\]

Moreover, the arrival rate at node 1 from the access network, λ1, is variable, while the other arrival rates are fixed and equal to 0.015. Fig. 7 shows the numerical results calculated by the approximate analysis, and Fig. 8 represents the simulation results.

Fig. 7. Loss probability vs. arrival rate: approximation analysis.
Fig. 8. Loss probability vs. arrival rate: simulation.

We observe a quantitative discrepancy between Figs. 7 and 8. This is because the loss probability in Fig. 7 is calculated under the assumption of exponentially distributed transmission times, lightpath establishment/release times and holding times, while those times are set to be constant in the simulation. However, both figures show the same tendency, and hence our analytical model is useful for capturing the loss behavior under the proposed method in a qualitative sense. Our numerical experiments also show that our analytical model succeeds in capturing the characteristics of the wavelength utilization factor; we omit those results due to the page limitation. In Figs. 7 and 8, we observe that the loss probability for node 1 increases as the arrival rate at node 1 increases, while the loss probability for node 10 is constant. Since the destinations of the packet streams originating at node 1 are equally likely, the arrival rate from the previous node j − 1, \(\lambda_j^{pre}\), becomes small as the node number j increases. This results in a small (large) loss probability if λ1 is smaller (larger) than λj (j = 2, ..., 10). We investigate this tendency further in the next subsection.

5.2 Impact of Node Position
In this subsection, using our analytical results, we investigate how the loss probability and wavelength utilization factor of each node differ from those of the other nodes. Here, we set W = 4, Kd = 30, Kl = 5 and Th = 20. The destination of each packet is equally likely, i.e., \(P_k^{(j)} = 1/9\). In terms of the traffic volume from each access network, we consider the following two types:

Type A: λi = 0.18 for i = 1, 6, and 0.135 otherwise.
Type B: λi = 0.09 for i = 1, 6, and 0.135 otherwise.

Fig. 9 shows the loss probability against the position of the node. From Fig. 9, we observe that the loss probability depends on the distance from node 1 or node 6. For type A, nodes 1 and 6 have larger loss probabilities than the others, while for type B, nodes 5 and 10 have larger loss probabilities than the others. For type A, λ1 and λ6 are larger than the others, and this congests the LSRs of nodes 1 and 6.
Fig. 9. Loss probability vs. node position.
Fig. 10. Wavelength utilization factor vs. node position.
This causes the large loss probabilities of nodes 1 and 6. On the other hand, the packet streams originating at node 1 leave the ring network at nodes 2, 3, ..., and 10 in this order, and hence the total arrival rate becomes small as the node number increases. This results in the decrease of the loss probabilities from nodes 2 to 5. At node 6, the same traffic volume is injected, and this causes the jump in the loss probability. The decrease of the loss probabilities from nodes 6 to 10 follows for the same reason. For type B, λ1 and λ6 are smaller than the others, and this causes the small loss probabilities at nodes 1 and 6. As the node number increases, the traffic volume larger than λ1 and λ6 congests the network, and this results in the increase of the loss probabilities at nodes 2 (7) to 5 (10). Fig. 10 illustrates how the proposed method establishes lightpaths in the asymmetric network. From this figure, we find that the proposed method can establish lightpaths according to the traffic volume originating at each node. For type A, nodes 1 and 5 establish more lightpaths than the others, and for type B, nodes 6 and 10 establish more lightpaths than the others. From this figure, we observe that the dynamic lightpath establishment function works well for the WDM asymmetric ring network.

5.3 Impact of Threshold
In this subsection, we investigate how the threshold affects the loss probability, using our analytical results. We set W = 4, Kd = 30 and Kl = 5. As in the previous subsections, we assume that the destination of each packet is equally likely, and we consider the type A traffic condition. Fig. 11 shows how the loss probability is affected by the threshold. From Fig. 11, we observe that a smaller threshold gives a smaller loss probability. This is because an LSR with a small threshold frequently regards the node as congested and keeps the lightpaths busy. We also find that the loss probabilities for nodes 5 and 10 do not change much, while those for nodes 1 and 6 decrease as the threshold becomes small. Therefore, a small threshold is effective in improving the loss probabilities of bottleneck nodes.

Fig. 11. Loss probability vs. threshold.
Fig. 12. Comparison of dynamic and static configurations: simulation.

5.4 Comparison of Dynamic and Static Configurations
Finally, we compare the proposed method with a static configuration method where wavelengths are allocated to lightpaths statically. Fig. 12 illustrates the loss probability for each node under the proposed method and under the static configuration method. The loss probabilities in Fig. 12 are calculated by simulation. In this figure we set W = 4, Kd = 6, Kl = 5 and consider the type A traffic condition. In addition, \(P_k^{(j)} = 1/9\) for all j and k (j ≠ k), except for j = 1 and 6. For j = 1 and 6, we set

\[
P_k^{(1)} =
\begin{cases}
0.08, & k = 2, 3,\\
0.12, & \text{otherwise},
\end{cases}
\qquad
P_k^{(6)} =
\begin{cases}
0.08, & k = 7, 8,\\
0.12, & \text{otherwise}.
\end{cases}
\]
1, k = 1, 6, 3, k = 3, 4, 8, 9, Th = 2, k = 2, 7, 4, k = 5, 10. The case of (2) is based on the results of the previous subsections. From Fig. 12, we observe that the loss probability for dynamic configuration with Th = 3 is the largest and that the loss probability for the proposed method with adjusted Th ’s is almost equal to or lower than that for static configuration case. This suggests that the proposed method can establish lightpaths efficiently between pairs of nodes whose traffic volume is large.
6 Conclusion
In this paper, we have analyzed the performance of the dynamic wavelength allocation method for the WDM asymmetric ring network. Numerical examples have shown that the proposed method can establish lightpaths efficiently according to the traffic volume from the access network, even when some nodes are congested. In addition, the loss probability under the proposed method can be almost the same as that under the static method, where lightpaths are pre-established so that all wavelengths in the network are used efficiently.
References
1. D. O. Awduche, "MPLS and Traffic Engineering in IP Networks," IEEE Communications Magazine, vol. 37, no. 12, pp. 42-47, Dec. 1999.
2. D. O. Awduche et al., "Multi-Protocol Lambda Switching: Combining MPLS Traffic Engineering Control With Optical Crossconnects," IETF draft-awduche-mpls-te-optical-03.txt, Apr. 2001.
3. P. Bonenfant and A. R. Moral, "Optical Data Networking," IEEE Communications Magazine, vol. 38, no. 3, pp. 63-70, Mar. 2000.
4. I. Chlamtac, V. Elek, A. Fumagalli, and C. Szabó, "Scalable WDM Access Network Architecture Based on Photonic Slot Routing," IEEE/ACM Trans. Networking, vol. 7, no. 1, pp. 1-9, Feb. 1999.
5. O. Gerstel, R. Ramaswami, and G. H. Sasaki, "Cost-Effective Traffic Grooming in WDM Rings," IEEE/ACM Trans. Networking, vol. 8, no. 5, pp. 618-630, Oct. 2000.
6. N. Ghani, S. Dixit, and T. S. Wang, "On IP-over-WDM Integration," IEEE Communications Magazine, vol. 38, no. 3, pp. 72-84, Mar. 2000.
7. M. W. McKinnon, H. G. Perros, and G. N. Rouskas, "Performance Analysis of Broadcast WDM Networks under IP Traffic," Performance Evaluation, vols. 36-37, pp. 333-358, Aug. 1999.
8. Y. Miyao, "λ-Ring System: An Application in Survivable WDM Networks of Interconnected Self-Healing Ring Systems," IEICE Trans. Commun., vol. E84-B, no. 6, June 2001.
9. B. Ramamurthy and B. Mukherjee, "Wavelength Conversion in WDM Networking," IEEE J. Select. Areas Commun., vol. 16, no. 7, pp. 1061-1073, Sep. 1998.
10. R. Ramaswami and K. N. Sivarajan, Optical Networks: A Practical Perspective. San Francisco: Morgan Kaufmann Publishers, 1998.
11. K. Sato, S. Okamoto, and H. Hadama, "Network Performance and Integrity Enhancement with Optical Path Layer Technologies," IEEE J. Select. Areas Commun., vol. 12, no. 1, pp. 159-170, Jan. 1994.
12. T. Tachibana and S. Kasahara, "Performance Analysis of Dynamic Lightpath Configuration with GMPLS for WDM Ring Networks: The Light Traffic Case," Technical Report of IEICE (NS2001-140), pp. 37-42, Oct. 2001 (in Japanese).
13. W. Weiershausen, A. Mattheus, and F. Küppers, "Realisation of Next Generation Dynamic WDM Networks by Advanced OADM Design," WDM and Photonic Networks, D. W. Faulkner and A. L. Harmer, eds., IOS Press, Amsterdam, pp. 199-207, 2000.
14. R. W. Wolff, Stochastic Modeling and the Theory of Queues. New Jersey: Prentice Hall, 1989.
A Queueing Model for a Wireless GSM/GPRS Cell with Multiple Service Classes

D.D. Kouvatsos, K. Al-Begain, and I. Awan
Department of Computing, School of Informatics, University of Bradford, BD7 1DP, Bradford, West Yorkshire, England, UK
{d.d.kouvatsos, k.begain, i.awan}@bradford.ac.uk
Abstract. A novel analytic framework is devised for the performance modelling and evaluation of a wireless cell using the Global System for Mobile telecommunication (GSM) with the General Packet Radio Service (GPRS), supporting voice and multiple-class data services, respectively, under a complete partitioning scheme (CPS). In this context, a queueing model is proposed consisting of two independent queueing systems, namely an M/M/c/c loss system with Poissonian GSM traffic and a GE/GE/1/N1/FCFS → GE/GE/1/N2/PS system of access and transfer finite capacity queues in tandem having external Compound Poisson GPRS traffic with geometrically distributed batches and generalised exponential (GE) service times under first-come-first-served (FCFS) and processor sharing (PS) scheduling rules, respectively. Although the analysis of the former loss system is straightforward, the solution of the GE-type queues in tandem is rather complex. This investigation focuses on the analysis of the tandem GE-type queueing system, which is valid for both uplink and downlink connections and provides multiple-class data services with different arrival rates, interarrival-time squared coefficients of variation (SCVs), file (burst) sizes and PS discrimination service levels. The principle of maximum entropy (ME) is used to characterise a product form approximation, subject to appropriate GE-type queueing theoretic constraints per class, thus implying a decomposition of the tandem system into GE/GE/1/N1/FCFS and GE/GE/1/N2/PS building block queues, each of which can be analysed in isolation. Subsequently, closed form expressions for state and blocking probabilities are obtained. Typical numerical examples are included to validate the ME solution against simulation and to study the effect of external bursty GPRS traffic upon the performance of the cell. Keywords: Cellular mobile system, Global System for Mobile Telecommunication (GSM), General Packet Radio Service (GPRS), wireless GSM/GPRS cell, complete partitioning scheme (CPS), performance evaluation, maximum entropy (ME) principle, generalised exponential (GE) distribution, first-come-first-served (FCFS) rule, processor sharing (PS) rule.
1 Introduction
Queueing theoretic models are widely recognised as powerful and realistic tools for the performance evaluation and prediction of complex mobile systems. However, there are inherent difficulties and open issues to be resolved before a global network infrastructure for broadband mobile systems can be established. Some of these problems may be attributed to the complexity of mobile traffic characterisation and the assessment of its performance impact based on the much needed derivation of closed form metrics. Most of the published performance studies in the field are based on simulation modelling and numerical solution of Markov models covering different traffic scenarios, mostly at call level, with single or multiple service classes. Earlier proposed models are based on resource network management parameters of the Global System for Mobile Telecommunications (GSM) technology, where the capacity of the radio interface in the wireless cell is divided into discrete channels and operates in circuit-switched mode (e.g., [1]). More recently, extensions of these models have been made to capture the packet-switched behaviour introduced by the General Packet Radio Service (GPRS), which has been added to GSM to allow data communication with higher bit rates than those provided by a single GSM channel (e.g., [2,3]). Recently, Foh et al [4] proposed a single server infinite capacity queue for modelling GPRS in a Markovian environment and applied matrix geometric methods for the evaluation of performance metrics. Simulation is an efficient tool for studying detailed system behaviour, but it becomes costly, particularly as the system size increases. Markov models, on the other hand, provide more flexibility and produce numerical results for many interesting performance measures. Nevertheless, the numerical solution of Markov models may suffer from several drawbacks, such as – state space explosion, limiting the analysis to only small mobile systems, generally consisting of one cell, – restrictive assumptions of independent Poisson arrival processes for all types of homogeneous and uniformly distributed traffic with exponentially distributed call durations (which, if multiplexed, can be bursty and correlated). Thus, there is still a great need to consider alternative analytic methodologies for the analysis of queueing models, based on a balanced trade-off between simplified assumptions to reduce complexity and actual real life system behaviour, leading to both credible and cost-effective approximations for the performance prediction and optimisation of mobile systems. This investigation proposes a novel analytic framework for the performance modelling and evaluation of a wireless GSM/GPRS cell with both voice and multiple data services under a complete partitioning scheme (CPS). The work focuses on the analysis of a tandem generalised exponential (GE)-type queueing model involving a first-come-first-served (FCFS) access queue and a discriminatory processor sharing (PS) transfer queue (air interface) with distinct multiple data service classes and external Compound Poisson GPRS (multiplexed) traffic class streams with geometrically distributed batches. The model is analysed
via the principle of maximum entropy (ME) (c.f., [5,6]), which is used to characterise a product form approximation, subject to GE-type queueing theoretic constraints, thus allowing system decomposition and the separate analysis of each of the two GE-type queues in tandem. Subsequently, closed form expressions for state and blocking probabilities per class are obtained. The paper is organised as follows. Section 2 describes call handling schemes for wireless GSM/GPRS cells. The GE-type tandem queueing model is discussed in Section 3, together with the characterisation of an ME product form approximation. Section 4 presents the ME analysis of the GE/GE/1/N building block queue with either FCFS or discriminatory PS scheduling rules. Numerical examples to validate the ME solution against simulation and to study the effect of external bursty GPRS traffic upon the performance of the cell are included in Section 5. Concluding remarks follow in Section 6.
2 GSM/GPRS Call Handling Schemes
Resources for GPRS traffic can be reserved statically or dynamically, and a combination of both is possible. Different partitioning schemes can be defined where partitions are created for GSM and GPRS traffic but not for individual data services. For GPRS traffic, a complete partition is used for the different data services. However, some data calls may be allocated higher priority and can therefore be given a higher share of the available bandwidth. Whenever voice and data share bandwidth, the voice service is always given the highest priority. The two main partitioning schemes, namely complete partitioning and partial sharing, are described below:
– The complete partitioning scheme (CPS) divides the total cell capacity to serve GSM and GPRS traffic simultaneously. As a consequence, the GSM and GPRS systems can be analysed separately.
– The partial sharing scheme (PSS) allocates Cdata channels for data traffic, and the remaining Cshared = Ctotal − Cdata channels are shared by voice and data calls, with preemptive priority for voice calls.
The CPS has the advantage of requiring a simpler management policy and implementation. Moreover, a definite capacity for GPRS under an efficient Connection Admission Control (CAC) algorithm can make some QoS guarantees feasible, although it will clearly not give the best utilisation of radio resources. Note that the CPS is the limiting case of the PSS under high loads. The GSM partition can clearly be modelled as a loss system. An admitted GSM voice call needs the assignment of a single traffic channel for its entire duration. In the absence of an available channel, a voice call is lost. Moreover, the GPRS partition can be represented by a finite capacity queueing model involving two single server queues in tandem, namely a FCFS access queue and a discriminatory PS transfer queue, where all active data connections share the total capacity of the data partition and may belong to various classes. These classes may have different
characteristics, such as maximum or minimum data rates, delay sensitivity, service discrimination, arrival rates, interarrival-time variability and transferable file (data) length. The transfer queue holds a finite number of data connections, which are served according to a discriminatory PS rule, where the available service capacity is shared evenly amongst all data calls belonging to the same class. However, in the presence of multiple data classes with different priority levels, the service capacity is shared according to discrimination rates favouring the higher priority classes. An admitted data call is initially held in a finite capacity FCFS access queue. If the access queue is full, the incoming call is lost. The access queue models the Packet Control Unit (PCU)/Serving GPRS Support Node (SGSN) buffers of a GPRS network with down-link traffic, or the logical queue of data call requests for transmission in the up-link stream. A call at the head of the access queue will be blocked if the transfer queue is full. Upon the termination of an active data call at the transfer queue, the blocked data call at the access queue is polled into the transfer queue (within the very short time required for signaling) and immediately shares the available capacity in a PS fashion.
3 The GE-Type Tandem Queueing Model
This section introduces a queueing network model for the performance analysis of a wireless GSM/GPRS cell with both voice and multiple data services under the CPS. The model describes the GSM and GPRS partitions, which can be studied separately (c.f., Section 2). Assuming a Poissonian arrival process, the GSM partition can be modelled by the classical birth-and-death M/M/c/c loss system with exponential call durations (which can be analysed via Erlang's loss formula). The GPRS partition, on the other hand, can be modelled as a tandem GE-type GE/GE/1/N1/FCFS → GE/GE/1/N2/PS finite capacity queueing system, where both external and internal traffics are approximated by GE-type interarrival-time distributions, or equivalently, Compound Poisson arrival processes with geometrically distributed batches (c.f., Fig. 1). Under the PS rule, N2 represents the maximum number of connections simultaneously sharing the available service capacity. Note that a batch arrival process is a most suitable model of bursty multiplexed connections (belonging to various classes with different minimum capacity demands) being accepted into the mobile system if there is enough service capacity at the moment of their arrival. Although the stochastic analysis of this GE-type tandem system is rather complex, the principle of maximum entropy (ME) can be used, as in earlier works [5,6], to characterise a product form approximation, subject to appropriate GE-type marginal queueing theoretic constraints.
Fig. 1. The wireless GSM/GPRS cell with CPS: an M/M/c/c loss system for GSM voice traffic and the GE/GE/1/N1/FCFS access queue in tandem with the GE/GE/1/N2/PS transfer queue for GPRS data traffic.
More specifically, consider the ME joint state probability P(k), k = (k1, k2), of the tandem system, where kj is the state vector (kj1, ..., kjR) and kji is the number of calls of class i in queue j, for i = 1, ..., R and j = 1 (access queue), 2 (transfer queue). Subject to normalisation and the existence of the marginal constraints of server utilisation, mean queue length and full buffer state probability per class, its form can be clearly established by applying the method of Lagrange's undetermined multipliers and is given by

\[
P(\mathbf{k}) = P_1(\mathbf{k}_1)\, P_2(\mathbf{k}_2), \tag{1}
\]
where P1(k1) and P2(k2) are the marginal joint state (or queue length) probabilities of the GE/GE/1/N1/FCFS access queue and the GE/GE/1/N2/PS transfer queue, respectively. This product form approximation allows the decomposition of the tandem system into the two aforementioned queues, each of which can be solved in isolation by carrying out ME analysis at the queue level in conjunction with flow formulae relating, approximately, the GE-type interdeparture-time mean and SCV (c.f., Kouvatsos et al [5]), namely

\[
\lambda_{d1i} = \lambda_{1i}, \qquad C_{d1i}^{2} = 2\,\bar{n}_{1i}\,p_{1i}(0) - C_{a1i}^{2}\,\bigl(1 + p_{1i}(0)\bigr), \tag{2}
\]
where \(\{\bar{n}_{1i}, i = 1, 2, \ldots, R\}\) are the marginal mean queue lengths of the GE/GE/1/N1/FCFS access queue and \(\{p_{1i}(0), i = 1, 2, \ldots, R\}\) are the marginal probabilities that there are no data calls of class i, i = 1, 2, ..., R, in the access queue. Note that the proposed GE/GE/1/N1/FCFS → GE/GE/1/N2/PS tandem queueing model with multiple service classes and blocking differs from, and in some respects extends, the MMPP/M/c queueing model suggested by Foh et al [4]. Although the latter incorporates a PSS, a Markov Modulated Poisson Process (MMPP) and multiple channels, it is only applicable to a single service class, assumes exponential transmission times and, being an infinite capacity queueing model, does not capture the adverse effect of blocking on system performance. Moreover, the GE-type queueing models can be solved via closed form expressions, as opposed to computationally expensive matrix geometric methods.
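Since the GSM partition reduces to Erlang's loss formula, it is worth recording how cheaply that side of the model evaluates. The snippet below is the standard Erlang-B recursion, included as our own illustration with made-up numbers rather than as part of the authors' toolchain.

```python
def erlang_b(c: int, a: float) -> float:
    """Blocking probability of an M/M/c/c loss system with offered load a (Erlangs)."""
    b = 1.0
    for k in range(1, c + 1):          # stable recursion, avoids large factorials
        b = a * b / (k + a * b)
    return b

print(erlang_b(c=8, a=5.0))            # ~0.07 for 5 Erlangs offered to 8 channels
```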
4 The ME Analysis of a GE/GE/1/N/{FCFS or PS} Queue
This section applies entropy maximisation to analyse a generic GE/GE/1/N queueing model, as a building block queue, with R (> 0) classes of jobs, censored arrival processes, finite buffer capacity N, a complete buffer management scheme and either FCFS or PS service rules. Note that the GE distribution is of the form (c.f., [5,6])

\[
F(t) = P(X \le t) = 1 - \tau e^{-\tau \nu t}, \quad t \ge 0, \tag{3}
\]

where \(\tau = 2/(C^2 + 1)\), X is the inter-event time random variable, and \(1/\nu\), \(C^2\) are the mean and squared coefficient of variation (SCV) of the inter-event time distribution, respectively. Moreover, the underlying counting process of the GE distribution is a compound Poisson process with geometrically distributed batch sizes and mean batch size \(1/\tau = (C^2 + 1)/2\).

Notation

Without loss of generality, and for the sake of simplifying the notation, the subscript j, j = 1, 2, referring to the access and transfer queues, respectively, is dropped in this section. Let, at any given time, S = (c1, c2, ..., cn), n ≤ N, be a joint system state, where c1 is the class of the job in service and cℓ ∈ {1, 2, ..., R}, ℓ = 2, 3, ..., n, is the class of the ℓth job in the queue; let Q be the set of all feasible states S and P(S) the stationary state probability. For each class i, i = 1, 2, ..., R, let λi be the arrival rate, µi the service rate and πi the blocking probability that an arrival of class i finds the queue full. For each state S ∈ Q and class i, i = 1, 2, ..., R, the following auxiliary functions are defined:

n_i(S) = the number of class i customers present in state S,
s_i(S) = 1 if the job in service is of class i, and 0 otherwise,
f_i(S) = 1 if \(\sum_{i=1}^{R} n_i(S) = N\) and s_i(S) = 1, and 0 otherwise.

The form of the state probability distribution P(S), S ∈ Q, can be characterised by maximising the entropy functional \(H(P) = -\sum_{S} P(S) \log P(S)\), subject to normalisation and the server utilisation, mean queue length and full buffer state probability constraints per class, satisfying the flow balance equations, namely

\[
\lambda_i (1 - \pi_i) = \mu_i U_i, \quad i = 1, \ldots, R. \tag{4}
\]
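Equation (3) implies a convenient sampling recipe, which the sketch below uses: with probability 1 − τ the inter-event time is zero (an arrival inside a batch), otherwise it is exponential with rate τν, so the empirical mean and SCV should come out at 1/ν and C². The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ge_sample(v, c2, size):
    """Inter-event times of the GE distribution of Eq. (3): mean 1/v, SCV c2 (>= 1)."""
    tau = 2.0 / (c2 + 1.0)
    expo = rng.exponential(1.0 / (tau * v), size)
    return np.where(rng.random(size) < tau, expo, 0.0)   # mass 1 - tau at zero

x = ge_sample(v=1.0, c2=5.0, size=200_000)
print(x.mean())                   # ~1.0  (= 1/v)
print(x.var() / x.mean() ** 2)    # ~5.0  (= C^2)
```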
By employing Lagrange's method of undetermined multipliers, the following solution is obtained:

\[
P(S) = \frac{1}{Z} \prod_{i=1}^{R} g_i^{s_i(S)}\, x_i^{n_i(S)}\, y_i^{f_i(S)}, \quad \forall S \in Q, \tag{5}
\]
where Z is the normalising constant and {gi, xi, yi, i = 1, 2, ..., R} are the Lagrangian coefficients corresponding to the server utilisation, mean queue length and full buffer state probability constraints per class, respectively. Defining the sets

S0 = {S ∈ Q : s_i(S) = 0, i = 1, 2, ..., R},
Q_i = {S ∈ Q : s_i(S) = 1, i = 1, 2, ..., R},
Q_{i;k} = {S ∈ Q_i : n_i(S) = k_i and k_i ≥ 1, i = 1, ..., R},

and aggregating P(S) over all feasible states S ∈ Q, the joint 'aggregate' state ME solution is given by

\[
P(S_0) = \frac{1}{Z}, \tag{6}
\]

\[
P(\mathbf{k}) = \sum_{i=1}^{R} \mathrm{Prob}(Q_{i;k}) = \frac{1}{Z}\, \frac{\left( \sum_{j=1}^{R} k_j - 1 \right)!}{\prod_{j=1}^{R} k_j!}\, \prod_{j=1}^{R} x_j^{k_j}\, \sum_{i=1}^{R} k_i\, g_i\, y_i^{\delta(\mathbf{k})}, \tag{7}
\]
where \(\delta(\mathbf{k}) = 1\) if \(\sum_i k_i = N\) and 0 otherwise, \(\mathbf{k} = (k_1, k_2, \ldots, k_R)\), and \(k_i\) is the number of jobs of class i present in the queue, i = 1, 2, ..., R. By using equations (6) and (7), closed form expressions for the aggregate state probabilities {P_N(n), n = 0, 1, ..., N} and the marginal state probabilities {P_i(k), k = 0, 1, ..., N_i, i = 1, 2, ..., R} can be obtained (c.f., [7]). Moreover, the Lagrangian coefficients xi and gi can be approximated analytically by making asymptotic connections to the corresponding GE-type infinite capacity queue. Assuming that xi and gi are invariant to the buffer capacity size N, it can be established that

\[
x_i = \frac{\bar{n}_i - \rho_i}{\bar{n}}, \qquad g_i = \frac{(1 - X)\,\rho_i}{(1 - \rho)\, x_i}, \tag{8}
\]

where \(X = \sum_{i=1}^{R} x_i\), \(\bar{n} = \sum_{i=1}^{R} \bar{n}_i\) and \(\bar{n}_i\) is the asymptotic marginal mean queue length of a multi-class GE/GE/1 queue. Note that closed form expressions for \(\{\bar{n}_i, i = 1, 2, \ldots, R\}\) have been determined in Kouvatsos et al [5] and are given by

\[
\bar{n}_i = \frac{\rho_i}{2}\left(C_{ai}^2 + 1\right) + \frac{\lambda_i}{2(1-\rho)} \sum_{j=1}^{R} \frac{\rho_j^2}{\lambda_j}\left(C_{aj}^2 + C_{sj}^2\right) \quad \text{(for the FCFS rule)}, \tag{9}
\]

\[
\bar{n}_i = \rho_i \left( C_{ai}^2 + \frac{1}{1-\rho} \sum_{j=1}^{R} \frac{h_j}{h_i}\, \rho_j\, C_{aj}^2 \right) \quad \text{(for the PS rule)}, \tag{10}
\]

where \(\rho_i = \lambda_i/\mu_i\), \(\rho = \sum_{i=1}^{R} \rho_i\), and \(h_i\), i = 1, 2, ..., R, is a set of discriminatory weights that impose service discrimination between the different priority classes.
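The expressions (9)-(10) are trivially vectorised; the fragment below evaluates them for two classes, together with the coefficients of Eq. (8). Because the layout of the source equations was damaged, this should be read as a sketch of our reconstruction of them, with illustrative rates, SCVs and PS weights.

```python
import numpy as np

lam = np.array([0.2, 0.1]); mu = np.array([1.0, 0.5])   # per-class rates
ca2 = np.array([5.0, 5.0]); cs2 = np.array([1.0, 1.0])  # interarrival/service SCVs
h = np.array([5.0, 1.0])                                 # discriminatory PS weights
rho_i = lam / mu; rho = rho_i.sum()

n_fcfs = rho_i / 2 * (ca2 + 1) + lam / (2 * (1 - rho)) * np.sum(rho_i**2 / lam * (ca2 + cs2))
n_ps = rho_i * (ca2 + (h * rho_i * ca2).sum() / (h * (1 - rho)))
x = (n_fcfs - rho_i) / n_fcfs.sum()                      # Eq. (8), FCFS case
g = (1 - x.sum()) * rho_i / ((1 - rho) * x)
print(n_fcfs, n_ps, x, g)
```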
Moreover, the blocking probabilities {πi, i = 1, 2, ..., R} of a GE/GE/1/N queue can be approximated by focusing on a tagged data call within an arriving bulk, and are determined by

\[
\pi_i = \sum_{k=0}^{N} \delta_i(k)\,(1 - \sigma_i)^{N-k}\, P_N(k), \tag{11}
\]

where \(\delta_i(k) = \frac{r_i}{r_i(1-\sigma_i) + \sigma_i}\) for k = 0 and 1 otherwise, \(\sigma_i = 2/(1 + C_{ai}^2)\), \(r_i = 2/(1 + C_{si}^2)\), and \(C_{ai}^2, C_{si}^2\) are the squared coefficients of variation of the interarrival and service times of class i, respectively, i = 1, 2, ..., R. By substituting the closed form expressions for the aggregate probabilities {P_N(n), n = 0, ..., N} and the blocking probabilities {πi, i = 1, 2, ..., R} into the flow balance condition (4), and after some manipulation, the following recursive relationships for the Lagrangian coefficients {yi, i = 1, 2, ..., R} can be obtained:

\[
y_i^{(n)} = \frac{1 - \sigma_i}{X}\, y_i^{(n-1)} - \Theta_{1i}\, \frac{1 - \sigma_i - X}{X}, \quad \text{for } n \ge 2, \tag{12}
\]

\[
y_i^{(1)} = \Theta_{1i} - \Theta_{2i}, \tag{13}
\]

where

\[
\Theta_{1i} = \frac{1-\rho}{1-X} + \frac{\rho\,(1-\sigma_i)}{1-\sigma_i-X}, \qquad
\Theta_{2i} = (1-\sigma_i)\left( \frac{1-\rho}{1-X}\,\delta_i(0) + \frac{\rho}{1-\sigma_i-X} \right).
\]
5 Numerical Results
This section presents some typical numerical experiments in order to illustrate the credibility of the proposed ME solution as a simple but cost-effective performance evaluation tool for assessing the effect of external GPRS traffic at the GE/GE/1/N1/FCFS access queue and its propagation into the GE/GE/1/N2/PS transfer queue, in terms of the magnitude of the call rates and the associated interarrival-time squared coefficients of variation (SCVs). The numerical study focuses on two data service classes with different average sizes of 62.5 KBytes (class 1) and 12.5 KBytes (class 2), in conjunction with a range of corresponding SCVs. Note that these two classes may represent two typical Internet applications with different parameters, such as web browsing and email, respectively. It is assumed that the GPRS partition consists of one frequency providing a total capacity of 171.2 Kbps. Among the different performance parameters that can be determined, three important ones are chosen, namely the mean response time, the mean queue length and the blocking probability. The relative accuracy of the ME algorithm has been verified against simulation (QNAP-2 [8]), focusing on the performance measure of channel utilisation (c.f., Figs. 2-3). It can be observed that the ME results are very comparable to those obtained via simulation.
Focusing on the GE/GE/1/N/PS queue under the discriminatory PS rule favouring class 1 (service discrimination weight 1:5), it can be seen that the interarrival-time SCV has an inimical effect, as expected, on the mean response time per class and on the aggregate blocking probability (c.f., Figs. 4, 5). Moreover, relative comparisons assessing the effects of varying degrees of interarrival-time SCV and buffer size, N, at the GE/GE/1/N/FCFS queue upon the ME-generated mean queue lengths are presented in Figs. 6 and 7, respectively. It can be seen that the analytically established mean queue lengths deteriorate rapidly with increasing external interarrival-time SCVs (or, equivalently, average batch sizes) beyond a specific critical value of the buffer size, which corresponds to the same mean queue length for two different SCV values. It is interesting to note, however, that for buffer sizes smaller than the critical buffer size and increasing mean batch sizes, the mean queue length steadily improves with increasing values of the corresponding SCVs. This 'buffer size anomaly' can be attributed to the fact that, for a given arrival rate, the mean batch size of arriving bulks increases, whilst the interarrival time between batches increases as the interarrival-time SCV increases, resulting in a greater proportion of arrivals being blocked (lost) and, thus, a lower mean effective arrival rate; this influence has a much greater impact on smaller buffer sizes.
Fig. 2. Marginal Utilisations for Class 1 Calls
Fig. 3. Marginal Utilisations for Class 2 Calls
Fig. 4. Effect of varying degrees of SCV on Mean Response Time
Fig. 5. Effect of varying degrees of SCV on Aggregate Blocking Probability
Fig. 6. Effect of varying degrees of SCV on MQLs of Class 1 at different buffer sizes
Fig. 7. Effect of varying degrees of SCV on MQLs of Class 2 at different buffer sizes

6 Conclusions
A novel analytic framework is presented for the performance modelling and evaluation of a wireless GSM/GPRS cell with both voice and multiple data services under a CPS, a pessimistic limiting case of the PSS. The proposed model comprises two independent queueing systems, namely an M/M/c/c loss system with Poissonian GSM traffic and a GE/GE/1/N1/FCFS → GE/GE/1/N2/PS system of access and transfer queues in tandem, having Compound Poisson external GPRS traffic with geometrically distributed batches. The paper focuses on the analysis of the GE-type tandem system, which is valid for both uplink and downlink connections and provides voice and multiple-class data services with different arrival rates and interarrival-time SCVs, file (burst) sizes and different PS discrimination service levels allowing a weighted capacity sharing. A product form approximation for the two queues in tandem is characterised, based on the principle of ME, leading to the decomposition of the system and the separate ME analysis of each building block queue under the FCFS and PS rules, respectively, subject to GE-type queueing theoretic constraints per class. Subsequently, closed form expressions for state and blocking probabilities are established. Typical numerical examples are included to investigate the relative accuracy of the ME solution against simulation and to assess the effect of external GE-type bursty traffic upon the performance of the cell.

The paper has several extension possibilities. Firstly, the exponential assumption on the GSM call duration and interarrival time can be replaced by a GE distribution, resulting in a GE/GE/c/c loss system. Secondly, the model can be generalised to capture the dynamics of the data partition capacity under the PSS. In this case, the blocked data calls at the transfer queue will be diverted towards the loss system, which will be able to accommodate R + 1 classes (voice and data calls) under a preemptive resume (PR) priority rule (with voice having the highest priority). Finally, the ME methodology can be extended to model a network of multiple wireless cells using a QNM decomposition based on the principle of entropy maximisation.
References
1. K. Begain, G. Bolch, M. Telek, "Scalable Schemes for Call Admission and Handover Handling in Cellular Networks with Multiple Services," Journal on Wireless Personal Communications, vol. 15, no. 2, Kluwer Academic Publishers, 2000, pp. 125-144.
2. K. Begain, M. Ermel, T. Mueller, J. Schueller, M. Schweigel, "Analytical Call Level Model of GSM/GPRS Network," in SPECTS'00, SCS Symposium on Performance Evaluation of Computer and Telecommunication Systems, Vancouver, BC, Canada, July 16-20, 2000.
3. R. Litjens, R. Boucherie, "Radio Resource Sharing in GSM/GPRS Network," ITC Specialist Seminar on Mobile Systems and Mobility, Lillehammer, Norway, March 22-24, 2000, pp. 261-274.
4. C. H. Foh, B. Meini, B. Wydrowski and M. Zukerman, "Modeling and Performance Evaluation of GPRS," Proc. of IEEE VTC 2001, Rhodes, Greece, pp. 2108-2112, May 2001.
5. D. D. Kouvatsos, P. H. Georgatsos and N. M. Tabet-Aouel, "A Universal Maximum Entropy Algorithm for General Multiple Class Open Networks with Mixed Service Disciplines," Modelling Techniques and Tools for Computer Performance Evaluation, eds. R. Puigjaner and D. Potier, Plenum, pp. 397-419, 1989.
6. D. D. Kouvatsos, "Entropy Maximisation and Queueing Network Models," Annals of Operations Research, vol. 48, pp. 63-126, 1994.
7. D. D. Kouvatsos and I. U. Awan, "Open Queueing Networks with RS-Blocking and Multiple Job Classes," Research Report RR-08-01, Performance Modelling and Engineering Research Group, Department of Computing, Bradford University, August 2001.
8. M. Veran and D. Potier, "QNAP-2: A Portable Environment for Queueing Network Modelling," Techniques and Tools for Performance Analysis, D. Potier (ed.), North Holland, pp. 25-63, 1985.
Integrated Multi-purposed Testbed to Characterize the Performance of Internet Access over Hybrid Fiber Coaxial Access Networks

Hung Nguyen Chan, Belen Carro Martinez, Rafa Mompo Gomez, and Judith Redoli Granados
Department of Signal Theory and Telematics, Aula Cedetel, University of Valladolid, 47011 Valladolid, Spain
[email protected]
http://go.to/hungnc
Abstract. This paper presents an experimental testbed to study the effect of noise on the performance of the transport layer over Hybrid Fiber Coaxial (HFC) networks. We have successfully designed and implemented an integrated, complex testbed which is suitable not only for laboratory environments but is also reusable in real-world networks. The main purpose of the testbed is to model the residential broadband data network using hardware simulation under several noise conditions, and to observe the effects on the performance of popular Internet applications as well as on native TCP/UDP performance. A large number of public domain Internet measurement tools have been evaluated, from which several software tools were selected. In addition, new software has been developed to combine all the software and hardware devices. Based on the testbed, we were able to study several issues of TCP/UDP over HFC networks by making a large number of automatic measurements and analyses. The testbed infrastructure would be very useful for cable operators and end users for monitoring and troubleshooting HFC networks, and can be effectively reused for related studies in similar environments such as wireless and DSL.
1 Motivation

The dramatic growth of the Internet has motivated the booming of broadband access technologies. Among these, Hybrid Fiber Coaxial (HFC) is one of the most popular access technologies, providing users not only with TV programs but also with high-speed Internet access and other applications. Many HFC characteristics affect the performance of the Internet applications running over it, such as asymmetry, the tree-and-branch topology, interference on the reverse path, etc., which require significant consideration. As the performance of Internet applications directly reflects the cable user's satisfaction, cable operators have strong motivations to monitor not only the status of
the cable modem but also the Internet application performance of users' hosts, in order to tackle their problems effectively. Also, it is desirable to quickly classify and isolate network problems. In reality, the Internet access speed of cable users depends on many factors, including RF-related factors, network congestion, QoS parameters, the load of the Cable Modem Termination System (CMTS), etc. As a result, both users and cable operators need software tools for troubleshooting and for obtaining information about network health. This task is rather difficult without the assistance of special Internet measurement software (which most likely runs on UNIX platforms). Regarding these issues, we have set up a multi-purposed experimental testbed to study the performance of the transport layer over HFC networks. The testbed was targeted to be reusable in real-world HFC networks. An additional objective of this study is to answer several questions: [a] the possibility of locating noise-affected areas based on measuring the Internet performance of hosts connected to the same CMTS (which, as a result, are under similar network conditions); [b] the effect of noise on Internet applications; [c] how to quickly distinguish between general network congestion problems and HFC network problems. The rest of this article is organized as follows: Section 2 provides a brief background on HFC networks. Section 3 describes the experimental testbed. In Section 4, we discuss related studies and the contributions of this work. Several results and possible applications of the testbed are illustrated in Section 5. Finally, our conclusions and future work are given in the last section.
2 Overview of HFC Networks

Figure 1 depicts a typical HFC system that provides residential broadband services. In order to bring data to cable users' homes, the digital signal is converted into an analog signal and mixed with the CATV analog signal using frequency multiplexing. The high band, 500-800 MHz, is used for downstream data, and the low band, from 5 to 40 MHz, is used for upstream data from the cable modems.
Fig. 1. A typical HFC system providing broadband data services
HFC networks are highly susceptible to noise funneling, the effect of noise entering the coaxial plant, being amplified through return path amplifiers, and aggregated from other coaxial network branches.
3 The Experimental Testbed

3.1 Hardware Configuration
Figure 2 illustrates the testbed hardware configuration. We used six PCs¹ running multiple operating systems, including Linux RedHat 7.1, FreeBSD 4.3, Windows 2000 Server and Windows 98 SE. These OSes can be switched over from a remote PC, which controls the entire testbed. Simulated noise was generated on a PC-controlled arbitrary waveform generator (HP 33120) and injected into various points of the experimental network. Noise was reproduced using a noise database taken from operating HFC networks. Another generator (HP 3325B) triggered the HP 33120 to control the repetition frequency of the noise bursts. As a result, all noise parameters, such as inter-arrival time, noise form, noise amplitude, etc., of Gaussian and impulse noise can be fully controlled. An oscilloscope (HP 54616) and a CATV analyzer (HP Calan 3010R) were used to monitor the signal during the tests. All the equipment and PCs were controlled and monitored from a remote desktop. With an additional PC-controlled RF switch, the connection configuration can also be changed remotely.
Fig. 2. The testbed hardware configuration

¹ The four PCs connected to cable modems have similar hardware and software configurations.
3.2 Software Configuration
The Overall Software Architecture. The testbed software structure is depicted in Figure 3. A typical client-server network providing Web and FTP services was simulated, along with a distributed measurement network. A server running Linux RedHat 7.1 provided FTP and HTTP services to four client PCs running Win98SE/Linux RedHat. A remote multi-OS PC acted as the master controller, controlling the whole testbed through the Internet.
Fig. 3. Testbed software architecture
The software "Expert" plays the main role in the testbed. On the server side, a master "Expert" node can connect to a number of "Expert" slave nodes, either by listening for incoming connections or by actively connecting to listening "Expert" slaves, through direct TCP connections or SSH (Secure Shell) forwarding. After communication has been successfully set up, the master node runs a master script in order to control these nodes to launch measurement programs, console commands, or Internet applications (e.g., FTP, Web browser) in separate threads². One "Expert" node, acting as the instrument-control server, handled a number of measurement instruments through GPIB/RS232 interfaces and saved data onto a local hard disk for uploading to the Linux server. Data, including log files (from servers and clients) and measurement data (from measurement instruments), is processed by various Perl scripts and then put into a dynamically created array of Round Robin Databases (RRD). Those scripts can also directly obtain run-time data from the
² Slave scripts can also run simultaneously with the master script.
server-side measurement software pool through POSIX pipes. Another set of Perl/CGI scripts (running on top of the Apache Web server) dumps selected data sources from the RRD database pool and makes run-time graphs at the request of remote Web browsers. Optionally, data from the CMTS and routers can also be acquired through SNMP interfaces and put into the RRD pool.

The Main Testbed Software: "Expert". In order to facilitate the experiments, an experimental software tool named "Expert" was developed. The primary objective of the software is to provide communication links between measurement nodes and a simple graphical interface to execute and debug measurement scripts. An integrated measurement script can combine Win32 shell commands, UNIX-ported commands, Windows Scripting Host, scripts written in scripting languages such as Perl/Tcl, and a number of additional internal UNIX-style commands and communication commands. A special communication protocol was implemented in the software so that the measurement nodes can work in both client-server and peer-to-peer architectures. (More details can be found in [15], which was written by the same author.) Moreover, the event-based feedback and the software's capability to control measurement instruments would also be useful for traditional HFC status monitoring at the physical layer. The design of the "Expert" software was based on our previous experience and code [3], [10], and on related software and measurement techniques [4]. "Expert" was written using Visual Basic 6 on the client side and Perl 5.6 on the server side³.

Internet Measurements. During the project, a large number of public-domain tools were evaluated on UNIX (Linux and FreeBSD) and Win32. These measurement tools are the result of many studies by the academic and open-source community. Among these tools, the most useful are: 1) Iperf, to characterize the native TCP/UDP bandwidth from cable PCs to the CMTS; 2) NCS, to characterize the path from CMTSs toward the Internet backbone; 3) a combination of Tcpdump/Windump/Tcptrace/Xplot, to analyze packet-level traces; 4) Ntop, a modern passive measurement software; and 5) the traditional ping.
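As a rough illustration of this master/slave pattern (not the actual "Expert" implementation, which was written in Visual Basic and Perl), a minimal Python sketch might look as follows; the host names, port, and one-line command protocol are invented:

```python
# Minimal, hypothetical analogue of the "Expert" master node: open a TCP
# connection to each slave, send one command line, and collect one reply line.
import socket

SLAVES = [("slave1.example.net", 5000), ("slave2.example.net", 5000)]  # invented

def dispatch(command: str) -> list[str]:
    """Send a command line to every slave node and collect the replies."""
    replies = []
    for host, port in SLAVES:
        with socket.create_connection((host, port), timeout=10) as s:
            s.sendall((command + "\n").encode())
            replies.append(s.makefile().readline().strip())
    return replies

# e.g., start an Iperf UDP run on all measurement nodes at once:
print(dispatch("RUN iperf -u -c server.example.net -b 1M -t 60"))
```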
4 Related Studies and Contributions of This Work

At present, few studies have focused on the performance of Internet applications and protocols running over HFC, since the majority of studies ([1], [5]) have focused on the physical layer in order to find solutions to detect and mitigate cable upstream interference, which severely degrades digital services. Among these studies, S. Chatterjee [6] and P. Tzerefos [12] used OPNET, a commercial simulation package, and R. Cohen [2] used the NS2 simulator, in order to simulate TCP-based applications over HFC networks. However, these studies did not directly consider the effects of
³ Information on the availability of the testbed software can be found on the author's home page: http://go.to/hungnc.
interference on HFC networks. To our knowledge, one reason for this is the complexity of simulating, purely in software, both the digital system providing Internet services and the analog systems and signals. Another reason is the high cost of HFC network equipment such as a CMTS or a broadband router. In addition, measurements taken on real HFC systems must be non-intrusive so as not to affect the operating network. In the Internet measurement field, many studies have been successfully performed for decades, in both traditional network environments [4], [9] and new environments such as wireless [13]. Numerous measurement tools have been introduced as a result of those studies. Our previous work [3] was primarily concerned with monitoring HFC physical-layer performance by means of a set of measurement instruments such as spectrum analyzers and oscilloscopes, which is the traditional approach. However, due to the additional cost, many cable operators do not perform the preventive maintenance routines aimed at monitoring the physical layer, even though these detailed routines have been clearly defined for years. In the next phase of our project [10], the testbed software "Expert" was developed and provided automatic data collection on the Win32 platform. In the current phase, the server-side software has been significantly improved with cross-platform Perl/CGI programs and the Round Robin Database (RRD), which allow real-time display and analysis on the popular Apache Web server, while the client side has gained new feedback capabilities. One of our greatest challenges was the complexity of the experimental testbed. This involved a large range of issues, including: controlling measurement equipment; investigating the characteristics of the HFC network and Internet services from the physical layer up to the application layer; synchronizing distributed network measurements; and dealing with communication with remote measurement instruments through Internet firewalls (which ruled out most commercial software products, such as measurement software implementing protocols like Agilent SICL LAN, TCP/IP VXI, and MS DCOM). In addition, most current Internet measurement tools are primarily available on UNIX platforms, while ATE (Automatic Test Equipment) software is most likely available only on the Win32 platform or some proprietary platform (such as HP-UX). Our work successfully integrated Internet measurement tools with measurement equipment and software, as well as simulated typical Internet services over HFC networks. The testbed infrastructure allowed a large range of studies, such as the interactions between the physical layer and the link layer [11], and between the physical layer and the transport layer [10], and, interestingly, an approach for locating noise-affected areas based on measuring transport-layer performance, which would be very cost-effective. Several results and possible applications of the testbed are presented in the next section.
5 Results: Possible Applications of the Testbed Infrastructure

5.1 Characterizing TCP/IP Performance over HFC under Noise Conditions

The testbed infrastructure allows studying numerous HFC problems, such as the effects of noise funneling on cable users' applications, finding the vulnerable points of the
network on which to focus monitoring and maintenance efforts, and detecting malicious noise injections, as well as the issues mentioned in Section 1. Since most of the tests can be run automatically and unattended, a large number and variety of tests are possible.
Fig. 4. Upstream UDP throughput under noise (run-time graph)
Locating the Noise Injection Based on the Noise Isolation Factor. Based on the relative locations of the computers and the noise injection point shown in Figure 2, the calculation of the noise isolation factor is shown in Table 1.

Table 1. Noise isolation factor
PC  | Noise isolation I (for the connection scheme in Figure 2)
PC2 | 14 (tap) + cable × 3 + (−LNA2 + 220 m) + 220 m × 2.5 dB/100 m (at 40 MHz) + 4 (TNA combiner loss) + LNA1 (19) + cable × 4 + 11 (tap) = 54 dB
PC6 | 14 (tap) + cable × 3 + (−LNA2 + 220 m) + 4 (TNA combiner loss) + 80 m × 2.5 dB/100 m + 20 (tap) = 40 dB
PC3 | 14 (tap) + cable × 2 + 8 (tap) = 22 dB
PC4 | 14 (tap) + cable × 2 + 20 (tap) = 34 dB
When reading the table, it should be noted that the losses of the short cables between taps are assumed to be zero, and (−LNA2 + 220 m) = 0 dB, since the testbed coaxial network was calibrated for unity gain, which means that the gain of an amplifier equals the losses that precede it (see [7]). Figure 4 is a screenshot of a run-time graph displaying the UDP measurement results obtained with Iperf 1.2 (with the bandwidth parameters set in accordance with the QoS parameters assigned to the CMs) on the four clients. In this figure, the noise effects on cable users can be observed. The left-most flat section corresponds to normal operation, before various impulse noise forms of increasing amplitude were injected into the network. The noise injected at point N3 (see Figure 2) causes UDP bandwidth fluctuations. It can be observed that the effects on the four PCs are very different: PC3 seemed not to be affected until the noise exceeded a threshold. The main reason is the difference in the noise isolation factors relative to the noise injection point.
We also realized that, among various parameters such as round-trip delay, jitter, etc., the fluctuation of the UDP bandwidth is a good metric for assessing the noise isolation factor in order to predict the noise injection location. The standard deviation and the sample correlation coefficient are computed as

$$\sigma_x = \sqrt{\frac{1}{n-1}\sum_{k=1}^{n}(x_k - \bar{x})^2} \qquad (1)$$

$$r_{xy} = \frac{\sum_{k=1}^{n}(x_k - \bar{x})(y_k - \bar{y})}{\sqrt{\sum_{k=1}^{n}(x_k - \bar{x})^2}\,\sqrt{\sum_{k=1}^{n}(y_k - \bar{y})^2}} \qquad (2)$$

With the standard deviation and correlation calculated using equations (1) and (2), respectively, the correlation between the standard deviation of the UDP throughput (which represents the UDP bandwidth fluctuation) and the isolation factor (calculated in Table 1) is −0.90338, and the correlation strength is 0.816097, which is satisfactory.
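The computation itself is elementary; a sketch under stated assumptions is given below. The isolation factors come from Table 1, while the per-PC throughput standard deviations are placeholders (the measured values are not reproduced here); note that the reported "correlation strength" equals the squared correlation coefficient.

```python
# Sketch of the statistics in equations (1) and (2). The isolation factors
# are taken from Table 1; the UDP-throughput standard deviations below are
# hypothetical placeholders, not the measured values from the experiment.
import math

isolation = [54, 40, 22, 34]          # dB, for PC2/PC6/PC3/PC4 (Table 1)
udp_std   = [0.05, 0.18, 0.61, 0.29]  # hypothetical per-PC throughput std devs

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(isolation, udp_std)
print(r, r * r)  # correlation coefficient and its square (the "strength")
```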
Fig. 5. Standard deviation of native UDP bandwidth versus noise isolation factor
The importance of this finding is as follows:
- Based on the estimation of the noise isolation factor, together with experience on operating networks, a vulnerability map of the network can be established (see [15]).
- If fluctuations of the UDP bandwidth are detected on a number of cable users or dedicated measurement nodes, it is most likely that noise has affected the areas that have low noise isolation relative to these measurement nodes. Typically, these areas are neighborhood taps and the taps near the bi-directional amplifiers, where upstream noise is amplified ([15]). This fact can assist in locating the noise injection point, which is the solution to issue [a] mentioned in Section 1.

Table 2. Correlations between the noise isolation factor and various performance metrics
Performance metric | Correlation coefficient | Correlation strength
Native UDP upstream | −0.903 | 0.816
Native TCP upstream (measured by Iperf) | −0.845 | 0.714
FTP bandwidth upstream | −0.807 | 0.651
FTP bandwidth downstream | −0.796 | 0.633
As can be seen in Table 2, the native UDP bandwidth, native TCP bandwidth, FTP bandwidth upstream, and FTP bandwidth downstream are in descending order of correlation strength. This is consistent with our expectations.

Characterizing the Effects of Noise on Internet Applications. In order to investigate issue [b] of Section 1, two typical Internet applications, FTP and WWW (using MSIE 5.5), were characterized in the testbed. Several results are shown in Figure 6.
Fig. 6. Characterizing Internet applications: a) FTP upstream; b) FTP downstream; c) a packet-level trace showing retransmissions of an FTP upstream session due to impulse noise; d) excessive HTTP delay and timeouts due to impulse noise.
As can be seen in Figure 6, under similar noise conditions, ACK loss can degrade FTP downstream throughput by up to 50% (Fig. 6b), while packet loss degrades FTP upstream throughput by only up to 25% (Fig. 6a). Web browsing is more susceptible to noise effects than FTP⁴ (Figure 6d). In addition, the effects on Web browsing also depend heavily on Web page size and structure.
⁴ However, in reality, users normally do not wait until a Web page is fully loaded before surfing to another page.
Fig. 7. Measurement network on HFC infrastructure
5.2 Other Applications
Monitoring the HFC Network. For the next phase of the project, we are planning a measurement infrastructure based on the current testbed. Figure 7 shows the suggested measurement network. It will contain low-cost headless servers running FreeBSD or Linux, located at the network distribution centers, each serving 500 to 2000 cable users. The main measurement toolset will be installed on the servers, while end-user software can be downloaded freely to help users monitor and troubleshoot network health. The servers can characterize the path toward the Internet backbone to detect network bottlenecks, as well as the path toward the cable users to detect RF-related problems. Therefore, they are able to solve issue [c] mentioned in Section 1. In this scenario, if software is installed on the user side, it can perform some types of measurement and diagnosis and help users tune their network settings to obtain higher performance in HFC environments, while the measurement results can also be seen on the servers, thanks to the client-server nature of several Internet measurement tools (e.g., Iperf). In addition, user-side software can greatly assist in topology discovery and troubleshooting, and improve the precision of Internet measurements.

Benchmarking Cable/DSL Modems and CMTS/DSLAMs. As described in [8], measurement procedures to benchmark cable/DSL modems and/or a CMTS/DSLAM can be performed with a commercial system such as SmartBits. However, in the absence of such a high-cost system, a modified testbed infrastructure
can be used. Figure 8 illustrates the implementation of the testbed infrastructure as a benchmark system.
Fig. 8. Using the testbed infrastructure as a benchmark system
Other Access Network Environments (DSL, Wireless, etc.). It is widely known that DSL networks suffer from problems similar to those of HFC networks, such as noise and interference. Moreover, since telephone wire is more susceptible to noise than coaxial cable, DSL performance degrades dramatically in the presence of noise. From the very beginning of the design phase, the testbed was intended to be reusable transparently in DSL environments with little or no modification.
6 Conclusions and Future Work

This paper has described an integrated experimental testbed for studying the overall performance of Internet access over HFC networks under simulated noise conditions, and has discussed applications of the testbed infrastructure. From the testbed experience, a large number of measurement tools were collected and evaluated on the HFC infrastructure. The testbed infrastructure is flexible enough to be easily adapted for other purposes as well as other network environments. One of the interesting results obtained from the experimental testbed is a novel approach for locating noise injection points based on the correlation between the noise isolation factor and the variation of the UDP bandwidth. The implementation of this approach in real networks would require some network topology discovery techniques. For this purpose, there are several possibilities: 1) obtaining the ranging parameters of CMs through the SNMP interfaces of the CMTS; 2) cable users providing their CM locations themselves through client software or a database-backed Web page; 3) network maps already available from network design and maintenance procedures; or 4) a combination of the above three solutions. However, the testbed still has several limitations; for example, its scale is small in comparison with real HFC networks. We will address those limitations in the next phase of the project by making non-intrusive measurements on real operating HFC networks. The server-side software will be able to assist automatic analysis more effectively if its database capability is improved. One possibility is connecting the current RRD database pool to an OLAP (online analytical processing) engine, to form multi-dimensional databases of HFC network performance.
Acknowledgements. This work was supported by Cedetel, Spain, and Retecal, a Spanish cable operator. The author also thanks the anonymous reviewers for their comments on the previous version of this paper, and the open-source community, who have made much valuable software and source code available.
References

1. K. Hui Li, "Impulse noise identification for the HFC upstream channel", IEEE Transactions on Broadcasting, vol. 44, no. 3, pp. 324-329, September 1998.
2. R. Cohen, S. Ramanathan, "TCP for high performance in hybrid fiber coaxial broadband-access networks", IEEE/ACM Transactions on Networking, vol. 6, no. 1, pp. 15-29, February 1998.
3. N. Chan Hung, R. Mompo, J. Redoli, B. Carro, "Flexible COM-based software solution for HFC network monitoring", Proceedings of IFIP TC6/WG6.7, Smartnet 2000, pp. 555-568, Kluwer Academic Publishers. Available: http://www.portalvn.com/hungnc/CVRevised.htm
4. CAIDA Internet measurement taxonomy. Available: http://www.caida.org
5. C. A. Eldering, N. Himayat, F. M. Gardner, "CATV return path characterization for reliable communications", IEEE Communications Magazine, pp. 62-69, August 1995.
6. S. Chatterjee, L. Jin, "Performance of residential broadband services over high-speed cable networks", Proceedings of the Workshop on Information Technology and Systems (WITS98), Helsinki, Finland, December 1998.
7. D. Raskin, D. Stoneback, "Broadband Return Systems for Hybrid Fiber/Coax Cable TV Networks", Prentice Hall, 1998.
8. DOCSIS Acceptance Test Suite, Spirent Communications Inc. Available: http://www.spirentcom.com
9. J. Guojun, G. Yang, B. R. Crowley, D. Agarwal, "Network Characterization Service". Available: http://www-didc.lbl.gov/NCS
10. N. Chan Hung, B. Carro, R. Mompo, J. Redoli, "Monitoring the hybrid fiber coaxial on the transport layer", Proceedings of the European Conference on Networks and Optical Communications (NOC2001), pp. 249-256, IOS Press, UK, July 2001.
11. B. Carro, N. Chan Hung, J. Redoli, R. Mompó, "Link-level effect of a noisy channel over data transmission on the return path of an HFC network", accepted paper, Globecom 2001, Texas, USA.
12. P. Tzerefos, "On the performance and scalability of digital upstream DOCSIS 1.0 conformant CATV channels", Ph.D. dissertation, University of Sheffield, UK, October 1999.
13. R. Ludwig, A. Konrad, A. D. Joseph, "Optimizing the end-to-end performance of reliable flows over wireless links", Proceedings of MobiCom 99.
14. N. Chan Hung, "Systematic study on hybrid fiber coaxial network preventative maintenance and the performance of Internet applications over HFC networks", Ph.D. dissertation, University of Valladolid, Spain, January 2002. Available: http://www.portalvn.com/hungnc/CurrWork.htm
802.11 LANs: Saturation Throughput in the Presence of Noise

Vladimir Vishnevsky and Andrey Lyakhov

Institute for Information Transmission Problems of RAS, B. Karetny 19, Moscow, 101447, Russia
{vishn, lyakhov}@iitp.ru, http://www.iitp.ru
Abstract. IEEE 802.11 specifies a technology for wireless local area networks (LANs) and mobile networking. In this paper, we present an analytical method for estimating the saturation throughput of an 802.11 wireless LAN in the presence of noise that distorts transmitted frames. Besides the Basic Access mechanism of the 802.11 MAC protocol, we study the optional RTS/CTS method, which reduces the influence of collisions. In addition to the throughput, our method allows estimating the probability of a packet rejection occurring when the number of packet transmission retries attains its limit. The numerical results obtained by investigating 802.11 LANs with this method are validated by simulation and show high estimation accuracy for all considered values of protocol parameters and bit error rates. These results also show that the method is an effective tool for tuning the protocol parameters.
1 Introduction
In recent years, wireless data communication networks have become one of the major trends in the development of the network industry. Wireless LANs can be considered an extension of the wired network, with a wireless "last mile" link for connecting a large number of mobile terminals. The obvious merit of wireless LANs is the simplicity of implementation: no cables are required, and the topology can change dynamically with the connection, movement, and disconnection of mobile users without much loss of time. The success of wireless networks depends largely on the development of networking products for multiple access to a wireless medium and of the appropriate standards. One such standard is the IEEE 802.11 protocol [1], which specifies the MAC and PHY layers for wireless networks. Leading companies (e.g., Cisco) have developed software and hardware in conformity with this standard. The fundamental access mechanism in the IEEE 802.11 protocol is the Distributed Coordination Function (DCF), which implements the Carrier Sense
This work was partially supported by the NATO Science Programme under Collaborative Linkage Grant PST.CLG.977405, "Wireless Access to Internet exploiting the IEEE 802.11 technology".
Multiple Access with Collision Avoidance (CSMA/CA) method. In this method, sequential transfer attempts by each station are separated by backoff intervals. The number of slots b in this interval is random and defined by a binary exponential backoff rule (see Section 2). In previous works, the performance of the DCF has been evaluated either by simulation (e.g., [2]) or by approximate analytical models [3,4] based on assumptions that simplify the backoff rule considerably. The DCF scheme has been studied in depth in [5]–[7], in which analytical methods have been developed for evaluating the performance of 802.11 wireless LANs under saturation conditions, i.e., when there is always a queue of packets for transmission at every wireless LAN station. This performance index, called the saturation throughput in [5], has been evaluated under the assumption of ideal channel conditions, i.e., in the absence of noise and hidden stations. The assumption of the absence of hidden stations is admissible as a result of the small distance between LAN stations. But if noise is neglected, the throughput may be overestimated, because electromagnetic noise in large cities is inevitable and worsens the throughput due to data distortion. In this paper we develop the methods of [5]–[7] to study the influence of noise on 802.11 LAN performance. Further, in Section 2 we briefly review the DCF operation in saturation and noise. In Sections 3 and 4 we develop a new analytical method for estimating the saturation throughput and the probability of a packet rejection occurring when the number of transmission retries attains its limit. In Section 5, we give some numerical results on the saturation throughput of 802.11 LANs. These results, obtained by both our analytical method and simulation, allow us to validate the developed method. Finally, the obtained results are summarized in Section 6.
2 DCF in Saturation
Now we briefly outline the DCF scheme, considering only the aspects that manifest themselves in saturation and in the absence of hidden stations. This scheme is described in detail in [1].
Fig. 1. Basic Access Mechanism (s - SIFS, b.s - backoff slots)
Under the DCF, data packets are generally transferred via two methods. Short packets, of length not greater than P, are transferred by the Basic Access mechanism. In this mechanism, shown in Figure 1, a station confirms the successful reception of a DATA frame with a positive acknowledgment (ACK) after a short SIFS interval.
Fig. 2. RTS/CTS mechanism
Packets of length greater than the limit P, called the RTS threshold in [1], are transferred via the Request-To-Send/Clear-To-Send (RTS/CTS) mechanism. In this case, shown in Figure 2, an inquiring RTS frame is first sent to the receiver station, which replies with a CTS frame after a SIFS. Only then is the DATA frame transmitted, and its successful reception is confirmed by an ACK frame. Since there are no hidden stations in the considered LAN, all other stations hear the RTS frame transmission and defer their own attempts. This protects the CTS, DATA, and ACK frames from collision-induced distortion. The RTS threshold P is chosen as a reasonable trade-off between the overhead of the RTS/CTS mechanism, which consists in transmitting two additional control frames (RTS and CTS), and the reduction of collision duration. Figures 1 and 2 show that the collision duration is determined by the length of the longest packet involved in the collision for the Basic Access mechanism, whereas in the RTS/CTS mechanism it is equal to the time of transferring a short RTS frame. After a packet transfer attempt, the station passes to the backoff state after a DIFS interval if the attempt was successful (i.e., there was no collision and all frames of the packet were transferred without noise-induced distortion), or after an EIFS interval if the attempt failed. The backoff counter is reset to the initial value b, called the backoff time, which is measured in units of backoff slots of duration σ and chosen uniformly from the set (0, . . . , w − 1). The value w, called the contention window, depends on the number nr of attempts performed for transmitting the current packet:

$$w = W_{n_r}, \quad \text{where } W_{n_r} = W_0\, 2^{n_r} \text{ for } n_r \le m \text{ and } W_{n_r} = W_m \text{ for } n_r > m, \qquad (1)$$

i.e., w is equal to the minimum W0 before the first attempt; then w is doubled after every failed attempt of the current packet transmission, reaching the
maximum Wm = W0·2^m. Note that every transmission attempt of a packet can include transfers of several frames (RTS, CTS, DATA, and ACK). The backoff interval is counted down only while the channel is free: the backoff counter is decreased by one only if the channel was free during the whole previous slot. Counting of the backoff slots stops when the channel becomes busy, and the backoff counters of all stations can next decrement only when the channel has been sensed idle for a duration of σ + DIFS or σ + EIFS, depending on whether the last sensed transmission was successful or failed, respectively. When the backoff counter attains zero, the station starts transmission. In the course of transmission of a packet, a source station counts the numbers of short (ns) and long (nℓ) retries. Let a source station transfer a DATA frame with a packet of length equal to or less than P, or an RTS frame. (Retries for these frames are called short retries in [1].) If a correct ACK or CTS frame, respectively, is received within the timeout, then the ns-counter is zeroed; otherwise ns is advanced by one. Similarly, the nℓ-counter is zeroed or advanced by one upon reception or absence of a correct ACK frame (within the timeout) confirming the successful transfer of a DATA frame with a packet of length greater than P (transfer retries for that sort of DATA frames are called long retries). When either of the counters ns and nℓ attains its limit, Ns or Nℓ respectively, the current packet is rejected. After the rejection or success of a packet transmission, the next packet is chosen (due to saturation), and the values of nr, ns, and nℓ are zeroed. As in [5,6], to study the DCF we adopt the following assumption: all stations change their backoff counters after the DIFS or EIFS interval closing a packet transmission attempt, i.e., the source station (or stations, in case of collision) that has performed a transmission modifies its contention window w and chooses the backoff counter value randomly from the set (0, . . . , w − 1), while the other stations just decrease their backoff counters by 1 (in reality [1], the other stations can do so only after a backoff slot σ has elapsed since the end of the DIFS or EIFS interval). Thus, at the beginning of each slot any station can start a transmission. As shown in [7], this assumption does not significantly affect the throughput estimation results for the W0 values recommended in [1].
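A minimal sketch of this backoff rule follows; W0 = 32 and m = 5 correspond to the parameter set used later in the paper (Wm = W0 · 2^m = 1024).

```python
# Sketch of the binary exponential backoff of Eq. (1).
import random

W0, m = 32, 5

def contention_window(nr: int) -> int:
    """W_nr = W0 * 2**nr for nr <= m, frozen at Wm for nr > m."""
    return W0 * 2 ** min(nr, m)

def draw_backoff(nr: int) -> int:
    """Backoff time b, chosen uniformly from {0, ..., w - 1}."""
    return random.randrange(contention_window(nr))
```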
3 Throughput Evaluation
Let us consider a wireless LAN of N statistically homogeneous stations working in saturation. In fact, by N we mean not the number of all stations of the LAN, but the number of active stations whose queues are non-empty over a sufficiently long observation interval. By statistical homogeneity of stations, we mean that the lengths of the packets chosen by every station from its queue have an identical probability distribution {dℓ, ℓ = ℓmin, . . . , ℓmax}. Since the distance between stations is small, we assume that there are no hidden stations and that noise occurs concurrently at all stations. These assumptions imply that all stations "sense" the common wireless channel identically. As in [5], let us subdivide the time of LAN operation into non-uniform virtual slots such that every station changes its backoff counter at the start of a virtual slot and can begin transmission if the value of the counter becomes zero.
Such a virtual slot is either (a) an "empty" slot, in which no station transmits, (b) a "successful" slot, in which one and only one station transmits, or (c) a "collisional" slot, in which two or more stations transmit. As in [5,6], we assume that the probability that a station starts transmitting a packet in a given slot depends neither on the previous history nor on the behavior of other stations, and is equal to τ, the same for all stations. Hence the probabilities that an arbitrarily chosen virtual slot is "empty" (pe), "successful" (ps), or "collisional" (pc) are

$$p_e = (1-\tau)^N, \qquad p_s = N\tau(1-\tau)^{N-1}, \qquad p_c = 1 - p_e - p_s. \qquad (2)$$
Thus, the throughput S is determined by the formula

$$S = \frac{p_s\, U}{p_e\, \sigma + p_s\, T_s + p_c\, T_c}\,, \qquad (3)$$
where Ts and Tc are the mean durations of "successful" and "collisional" slots, respectively, and U is the mean number of successfully transferred data bytes in a "successful" slot. The duration of a "collisional" slot is the sum of the time of transmitting the longest frame involved in the collision and an EIFS interval. Disregarding the probability of a collision of three or more frames, we obtain the formula for the mean duration of a "collisional" slot:

$$T_c = \sum_{\ell=\ell_{\min}}^{P} t_d(\ell)\, d'_\ell \left[ d'_\ell + 2\left( \sum_{k=\ell_{\min}}^{\ell-1} d'_k + \sum_{k=P+1}^{\ell_{\max}} d'_k \right) \right] + t_{RTS} \left( \sum_{\ell=P+1}^{\ell_{\max}} d'_\ell \right)^{2} + EIFS + \delta, \qquad (4)$$
where td(ℓ) = H + ℓ/V is the transmission time of a DATA frame including a packet of length ℓ, with the header transmitted in time H; V is the channel rate; tRTS is the transfer time of an RTS frame (according to [1], tRTS < H); and δ is the propagation delay, assumed the same for all pairs of stations. Finally, d′ℓ is the probability that the performed attempt relates to a packet of length ℓ. Note that the distribution {d′ℓ, ℓ = ℓmin, . . . , ℓmax} is different from the distribution {dℓ, ℓ = ℓmin, . . . , ℓmax}, because the longer the packet, the greater the number of attempts required for transferring it, due to the higher probability of distortion of the corresponding DATA frame by noise. At the beginning of a "successful" slot, one and only one station initiates an attempt of transmitting a packet of some length ℓ, and this transmission is successful with probability πh(ℓ) if none of the frames exchanged between the sender and receiver in this process is distorted by noise, i.e.,

$$\pi_h(\ell) = [1 - \xi_d(\ell)](1 - \xi_a) \quad \text{for } \ell \le P, \qquad \pi_h(\ell) = (1 - \xi_{rc})[1 - \xi_d(\ell)](1 - \xi_a) \quad \text{for } \ell > P,$$
where ξrc = 1 − (1 − ξr)(1 − ξa) is the probability of distortion of an RTS–CTS sequence by noise, while ξd(ℓ), ξr, and ξa are the probabilities of noise-induced distortion of a DATA frame including a packet of length ℓ (ξd(ℓ)), of an RTS frame (ξr), and of the CTS and ACK frames, which have identical format [1] (ξa). These distortion probabilities are defined by the Bit Error Rate (BER)—the probability of distortion of a single bit—so that an f-byte frame is distorted with probability ξf = 1 − exp{−8f·BER}. Transfer of a packet is terminated when an exchanged frame is distorted. Thus, the mean duration of a transfer attempt in a "successful" slot depends on the length ℓ of the transferred packet and is equal to

$$t_s(\ell) = t_d(\ell) + \delta + [1 - \xi_d(\ell)](t_{ACK} + SIFS + \delta) + \pi_h(\ell)\,DIFS + [1 - \pi_h(\ell)]\,EIFS \quad \text{for } \ell \le P$$

and

$$t_s(\ell) = t_{RTS} + \delta + (1 - \xi_r)(t_{CTS} + SIFS + \delta) + (1 - \xi_{rc})\big\{ [1 - \xi_d(\ell)](t_{ACK} + SIFS + \delta) + t_d(\ell) + SIFS + \delta \big\} + \pi_h(\ell)\,DIFS + [1 - \pi_h(\ell)]\,EIFS \quad \text{for } \ell > P,$$

where tCTS = tACK is the transfer time of a CTS and of an ACK frame. Thus, the mean duration Ts of a "successful" slot and the mean number of successfully transferred bytes U in this slot are

$$T_s = \sum_{\ell=\ell_{\min}}^{\ell_{\max}} t_s(\ell)\, d'_\ell\,, \qquad U = \sum_{\ell=\ell_{\min}}^{\ell_{\max}} \ell\, \pi_h(\ell)\, d'_\ell\,. \qquad (5)$$

Therefore we have found all components of (3), and the throughput S can be computed once the transmission commencement probability τ and the probability distribution {d′ℓ} are known.
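For concreteness, a small sketch of these distortion probabilities follows; it assumes the DATA frame length is the 49-byte MAC+PHY header plus the ℓ-byte packet (frame sizes as in Table 1), which is a simplifying assumption of this sketch rather than a statement from the paper.

```python
# Sketch of the noise model: frame distortion probability and the attempt
# success probability pi_h(l). Assumes a DATA frame of (49 + l) bytes; the
# RTS and CTS/ACK sizes (35 and 29 bytes) follow Table 1.
import math

def xi(frame_bytes: int, ber: float) -> float:
    """An f-byte frame is distorted with probability 1 - exp(-8 f BER)."""
    return 1.0 - math.exp(-8.0 * frame_bytes * ber)

def pi_h(l: int, P: int, ber: float, header=49, rts=35, ack=29) -> float:
    xi_d = xi(header + l, ber)   # DATA frame distortion, xi_d(l)
    xi_a = xi(ack, ber)          # CTS and ACK frames share one format
    if l <= P:                   # Basic Access: DATA + ACK must survive
        return (1.0 - xi_d) * (1.0 - xi_a)
    xi_rc = 1.0 - (1.0 - xi(rts, ber)) * (1.0 - xi_a)  # RTS-CTS exchange
    return (1.0 - xi_rc) * (1.0 - xi_d) * (1.0 - xi_a)
```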
4 Transmission Probability
Now we study the process of transmitting a packet of length ℓ by some station. This process starts at the instant when the packet is chosen from the queue and ends with either the packet's successful transmission or its rejection. Let fℓ and wℓ be the mean numbers of transmission attempts for this packet and of virtual slots in which the considered station defers from transmission during this process. Then

$$\tau = \sum_{\ell=\ell_{\min}}^{\ell_{\max}} d_\ell f_\ell \Bigg/ \sum_{\ell=\ell_{\min}}^{\ell_{\max}} d_\ell (f_\ell + w_\ell)\,, \qquad (6)$$

$$d'_\ell = d_\ell f_\ell \Bigg/ \sum_{k=\ell_{\min}}^{\ell_{\max}} d_k f_k\,, \qquad \ell = \ell_{\min}, \ldots, \ell_{\max}. \qquad (7)$$
Moreover, we will also seek the averaged probability prej of packet rejection, which occurs when one of the counters ns or nℓ attains its limiting value Ns or Nℓ, respectively. This probability can be found from the following sum:

$$p_{rej} = \sum_{\ell=\ell_{\min}}^{\ell_{\max}} d_\ell\, p_{rej}(\ell)\,, \qquad (8)$$

where prej(ℓ) is the probability of rejecting a packet of length ℓ. In the course of transmitting a packet of length ℓ, let exactly i attempts take place, and let ψℓ(i) denote the probability of this event. Obviously,

$$\psi_\ell(i) = \psi^s_\ell(i) + \psi^r_\ell(i)\,, \qquad (9)$$
where ψsℓ(i) and ψrℓ(i) are the probabilities that this transmission process terminates at attempt i with success and with rejection, respectively. In the case when exactly i attempts take place, the mean number of virtual slots in which the station defers from transmission over the whole considered process is

$$\bar W_i = \sum_{k=0}^{i-1} \frac{W_k - 1}{2} = W_{i-1} - \frac{W_0 + i}{2}\,, \qquad 1 \le i \le m+1,$$

$$\bar W_i = \sum_{k=0}^{m} \frac{W_k - 1}{2} + \frac{W_m - 1}{2}\,(i - 1 - m) = W_m\,\frac{i - m + 1}{2} - \frac{W_0 + i}{2}\,, \qquad i > m+1.$$

Then we have

$$f_\ell = \sum_{i=1}^{i_m(\ell)} i\,\psi_\ell(i)\,, \qquad w_\ell = \sum_{i=1}^{i_m(\ell)} \bar W_i\,\psi_\ell(i)\,, \qquad (10)$$
where im(ℓ) is the maximal number of attempts for such a packet, i.e., im(ℓ) = Ns for ℓ ≤ P and im(ℓ) = i¹m = (Ns − 1)Nℓ + 1 for ℓ > P. Now we look for the probabilities ψℓ(i). First we consider the simple case ℓ ≤ P, in which the number i of attempts is bounded by Ns. The probability of an unsuccessful attempt is πcd(ℓ) = 1 − (1 − πc)(1 − ξ(ℓ)), where

$$\pi_c = 1 - (1-\tau)^{N-1} \qquad \text{and} \qquad \xi(\ell) = 1 - (1 - \xi_d(\ell))(1 - \xi_a)$$

are the probabilities of a collision of the current attempt and of distortion of the DATA or ACK frames, respectively. Then the process completes successfully at the i-th attempt with probability

$$\psi^s_\ell(i) = [1 - \pi_{cd}(\ell)]\,[\pi_{cd}(\ell)]^{i-1}\,, \qquad i = 1, \ldots, N_s\,, \qquad (11)$$

or ends in rejection with probability

$$p_{rej}(\ell) = [\pi_{cd}(\ell)]^{N_s}\,, \qquad (12)$$

i.e.,
$$\psi^r_\ell(i) = 0 \ \text{ for } i < N_s \qquad \text{and} \qquad \psi^r_\ell(N_s) = [\pi_{cd}(\ell)]^{N_s}. \qquad (13)$$
Consequently, by (9),

$$\psi_\ell(i) = [1 - \pi_{cd}(\ell)]\,[\pi_{cd}(\ell)]^{i-1}\,, \quad i = 1, \ldots, N_s - 1\,, \qquad \psi_\ell(N_s) = [\pi_{cd}(\ell)]^{N_s - 1}. \qquad (14)$$

Now let ℓ > P. In this case the number id of DATA frame transfer attempts is bounded by Nℓ, and each of these attempts may be preceded by 0, . . . , Ns − 1 unsuccessful attempts of transferring an RTS frame. Moreover, in the case of a packet rejection due to attaining the limit Ns, the packet transmission process completes with Ns failed RTS transfer attempts. Let us express the probability ψrℓ(i) as the sum

$$\psi^r_\ell(i) = p^d_{rej}(\ell, i) + p^r_{rej}(\ell, i)\,, \qquad (15)$$

where pdrej(ℓ, i) and prrej(ℓ, i) are the probabilities of rejection after i packet transmission attempts due to the attainment of the limiting values of the nℓ- and ns-counters, respectively. Note that

$$p_{rej}(\ell) = \sum_{i=1}^{i^1_m} \left[ p^d_{rej}(\ell, i) + p^r_{rej}(\ell, i) \right]. \qquad (16)$$
The probabilities of unsuccessful transfer of DATA and RTS frames are ξ(ℓ) and πcr = 1 − (1 − πc)(1 − ξrc), respectively. Therefore, after simple algebraic operations we obtain

$$\psi^s_\ell(i) = (1 - \pi_{cr})\,[1 - \xi(\ell)]\;\pi_{cr}^{\,i-1} \sum_{h=0}^{\min(i,\, N_\ell)-1} \left( \frac{\rho_\ell}{\pi_{cr}} \right)^{h} g(i - 1 - h,\, h + 1)\,, \quad i = 1, \ldots, i^1_m\,, \qquad (17)$$

$$p^d_{rej}(\ell, i) = 0\,, \quad i = 1, \ldots, N_\ell - 1\,, \qquad p^d_{rej}(\ell, i) = \pi_{cr}^{\,i - N_\ell}\, \rho_\ell^{\,N_\ell}\, g(i - N_\ell,\, N_\ell)\,, \quad i = N_\ell, \ldots, i^1_m\,, \qquad (18)$$

$$p^r_{rej}(\ell, i) = 0\,, \quad i = 1, \ldots, N_s - 1\,, \qquad p^r_{rej}(\ell, N_s) = \pi_{cr}^{\,N_s}\,,$$
$$p^r_{rej}(\ell, i) = \pi_{cr}^{\,i} \sum_{h=1}^{\min(i - N_s,\, N_\ell - 1)} \left( \frac{\rho_\ell}{\pi_{cr}} \right)^{h} g(i - N_s - h,\, h)\,, \quad i = N_s + 1, \ldots, i^1_m\,, \qquad (19)$$
where ρℓ = (1 − πcr)ξ(ℓ) is the probability that an attempt of transmitting a packet of length ℓ fails precisely due to noise-induced distortion of the DATA or ACK frames, while g(u, v) is the number of ways in which u indistinguishable balls (failed RTS transfer attempts) can be placed in v urns (the gaps preceding each of the DATA transfers) so that every urn contains not more than Ns − 1 balls. The function g(u, v) is computed recursively:

$$g(0, v) = 1 \ \forall v > 0\,, \qquad g(u, 1) = 1 \ \text{for } u < N_s \ \text{and } 0 \ \text{for } u \ge N_s\,,$$

$$g(u, v) = \sum_{k=0}^{\min(u,\, N_s - 1)} g(u - k,\, v - 1) \qquad \text{for } v \ge 2,\ u > 0.$$
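This recursion translates directly into code; a memoized sketch (with the short retry limit Ns = 7 of Table 1 as a module-level constant):

```python
# The combinatorial function g(u, v), implemented directly from its recursion.
from functools import lru_cache

Ns = 7  # short retry limit (Table 1)

@lru_cache(maxsize=None)
def g(u: int, v: int) -> int:
    if u == 0:
        return 1 if v > 0 else 0
    if v == 1:
        return 1 if u < Ns else 0
    # v >= 2, u > 0: place k failed RTS attempts into the last urn
    return sum(g(u - k, v - 1) for k in range(min(u, Ns - 1) + 1))
```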
Therefore, the transmission probability τ can be estimated by the following iterative procedure.

Step 0. Define an initial value for τ.
Step 1. For all possible packet lengths ℓ and numbers of attempts i, compute the probabilities ψℓ(i) by (14) if ℓ ≤ P, or by (9), (15), and (17)–(19) if ℓ > P.
Step 2. For all possible packet lengths ℓ, using (10), compute the mean numbers of attempts fℓ and of virtual slots wℓ in which transmission is deferred.
Step 3. Using (6), find the updated value of τ and compare it with the initial value. If the difference exceeds a predefined limit, return to Step 1 using as the new initial value the half-sum of the old initial value and the updated value.

After this iterative procedure, we obtain the averaged rejection probability prej by (8), (12), (16), (18), and (19). Finally, we find the distribution {d′ℓ} by (7) and the throughput by the formulas of the previous section. We do not prove the convergence of this iterative technique rigorously, due to its complexity and lack of space. Intuitively, equation (6) has a unique solution, because a growth of the transmission probability τ leads to an increase of the collision probability and, hence, of the average number wℓ/fℓ of slots anticipating an attempt, for all ℓ. In practice, numerous examples of applying the suggested technique with various values of the wireless LAN parameters have shown that it converges very quickly to the solution and computes the estimated performance indices at high speed: it takes less than a second to calculate S and prej with a program implementation of this technique on an Intel Celeron 400 MHz.
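A skeleton of this procedure is sketched below for the simplest case of a single packet length ℓ ≤ P (Basic Access, Eq. (14)), so that Steps 1–3 stay short; the full method would also evaluate Eqs. (15)–(19) for ℓ > P and average over the packet length distribution.

```python
# Fixed-point iteration for tau (Steps 0-3), restricted to one packet length
# with l <= P so that only Eq. (14) is needed. xi_l is the combined DATA/ACK
# distortion probability xi(l) for that length.
def solve_tau(N, Ns, xi_l, W0=32, m=5, tol=1e-9):
    def W_bar(i):  # mean number of deferral slots, from the formulas above
        if i <= m + 1:
            return W0 * 2 ** (i - 1) - (W0 + i) / 2.0
        Wm = W0 * 2 ** m
        return Wm * (i - m + 1) / 2.0 - (W0 + i) / 2.0

    tau = 0.1                                      # Step 0: initial guess
    while True:
        pc = 1.0 - (1.0 - tau) ** (N - 1)          # collision probability
        pcd = 1.0 - (1.0 - pc) * (1.0 - xi_l)      # failed-attempt probability
        psi = [(1.0 - pcd) * pcd ** (i - 1) for i in range(1, Ns)]
        psi.append(pcd ** (Ns - 1))                # Eq. (14), Step 1
        f = sum(i * p for i, p in enumerate(psi, start=1))
        w = sum(W_bar(i) * p for i, p in enumerate(psi, start=1))  # Step 2
        tau_new = f / (f + w)                      # Eq. (6), one length only
        if abs(tau_new - tau) < tol:
            return tau_new
        tau = 0.5 * (tau + tau_new)                # Step 3: half-sum update

print(solve_tau(N=10, Ns=7, xi_l=0.05))
```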
5 Numerical Results
To validate our model, we have compared its results with those obtained by GPSS (General Purpose Simulation System) simulation [8]. The object of our numerical investigation was a LAN consisting of N statistically homogeneous stations working in saturation, controlled by the DCF scheme of the IEEE 802.11 protocol with the higher-speed physical layer extension (802.11b) [9]. The values of the protocol parameters used to obtain the numerical results for the analytical model and the simulation were the default values [9] for the Short Preamble mode, summarized in Table 1. The information packet size (in bytes) is sampled uniformly from the set {1, . . . , 1999}. In our simulation model, we have tried to take into account all real features of the 802.11 MAC protocol and, of course, have not adopted the assumptions used in the analytical modeling and described at the end of Section 2 and in Section 3. In the course of each run of the simulation model (which took about 2 hours on average), we watched the measured performance index and stopped the simulation when its fluctuations became quite small (within 0.5%).
Table 1. Values of protocol parameters

Slot time, σ: 20 µs | Propagation time, δ: 1 µs
MAC+PHY header: 49 bytes | Length of ACK and CTS: 29 bytes
Header transfer time, H: 121 µs | ACK transfer time, tACK: 106 µs
RTS length: 35 bytes | RTS transfer time, tRTS: 111 µs
SIFS: 10 µs | DIFS: 50 µs
EIFS: 212 µs | Channel rate, V: 11 Mbps
Short retry limit, Ns: 7 | Long retry limit, Nℓ: 4
Minimal contention window, W0: 32 | Maximal contention window, Wm: 1024
Fig. 3. Throughput (Mbps) and rejection probability versus the number of stations with BER = 5 · 10−5 for (a) the Basic Access mechanism, (b) the RTS/CTS mechanism, and (c) the optimal hybrid mechanism
Fig. 4. Optimal RTS threshold (bytes) versus the number of stations with (i) BER = 1 · 10−5, (ii) BER = 5 · 10−5, (iii) BER = 1 · 10−4, and (iv) BER = 1.4 · 10−4
In Figure 3, we present some results of studying the throughput and the averaged rejection probability for the Basic Access and RTS/CTS mechanisms (obtained with P > ℓmax and P = 0, respectively) as the number N of stations varies. The dotted curves have been obtained by simulation, while the other curves have been obtained with our method. First of all, let us note the high accuracy of the analytical model: the errors never exceed 2% for the throughput estimate and 5% for the rejection probability estimate.
Further, as we could expect, the Basic Access mechanism provides the highest throughput when the number N of stations is small (N < 30 in Figure 3), while the RTS/CTS mechanism is better when N is large, providing a nearly constant throughput as the number of stations increases. The bold curves in Figure 3 have been obtained for the hybrid mechanism with the optimal RTS threshold P^opt, which provides the maximal throughput and depends on N. The optimizing curves are shown in Figure 4 for various values of BER and have been determined with our analytical method. (The high calculation speed of our method has allowed us to use exhaustive search for the optimal threshold.) With a low BER (curve (i)), P^opt is quite small for large N, increases monotonically as N decreases until some threshold Nb (where P^opt becomes equal to ℓmax + 1 = 2000 bytes), and remains constant for N ≤ Nb; that is, the Basic Access mechanism is the best for small N. With a high BER (curves (ii)–(iv)), the curve P^opt(N) is not monotonic; that is, an additional threshold N⁰b appears somewhere below Nb, and P^opt decreases as N decreases from N⁰b to 1. For example, Nb = 30 and N⁰b = 15 with BER = 1 · 10−4. Both thresholds increase with BER growth, but N⁰b increases faster, so that the interval where the Basic Access mechanism is the best disappears and these thresholds merge into one at a very high BER (see curve (iv)). Thus, we have obtained the following surprising fact: when stations are few and the BER is high, the best mechanism is not the Basic Access one but some hybrid mechanism, and the throughput improvement achieved by this optimization is significant. For example, with N = 2 and BER = 1 · 10−4, S = 1.44 Mbps for the Basic Access mechanism and S = 1.62 Mbps for the optimal hybrid mechanism with P = P^opt = 1100 bytes. This case of few stations in a LAN can seem "exotic" and negligible, but keeping in mind that we considered only active stations, it corresponds to the real-life situation of low traffic. As Figure 3 shows, the throughput improvement in the considered case is achieved at the expense of a worse rejection probability: in the above example, prej = 0.057 for the Basic Access mechanism and prej = 0.131 for the optimal hybrid mechanism. This can be explained in the following way. When stations are few and the BER is high, the collision probability is small and the failure probability is approximately equal to the probability of a noise-induced distortion of a DATA frame. So we can assume that the maximal number of attempts of transmitting a packet is equal to Nℓ = 4 if the packet is transmitted by the RTS/CTS mechanism and to Ns = 7 with the Basic Access mechanism. For a given packet and BER, the smaller the maximal number of attempts, the larger the rejection probability, the smaller the mean value of the backoff intervals anticipating transmission attempts, and hence the larger the throughput.
6 Conclusions
In this paper, a continuation of [5]–[7], a simple analytical method is developed for estimating the throughput of a wireless LAN controlled by the DCF scheme of the IEEE 802.11 protocol and operating under saturation and in noise. Besides the
throughput, the probability of a packet transfer rejection due to the attainment of the limiting values specified by the Standard [1] for the number of attempts for transferring long and short frames is also evaluated. According to the numerical results, our method is quite exact and can be considered an effective tool both for investigating the influence of the bit error rate on wireless LAN performance indices and for optimally tuning the protocol parameters. Extensions of the developed method to take into account the possible presence of hidden stations, as well as to consider the real-life situations when the traffic generated by wireless LAN stations is non-uniform and non-saturating, seem possible and are proposed as future research activities. In order to tackle new research issues generated by the use of wireless LANs as Internet access networks, we also plan to apply the results of studying the 802.11 MAC layer to investigating the interaction between this protocol and the TCP/IP protocol stack (i.e., the protocols of the Internet).
References

1. Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. ANSI/IEEE Std 802.11, 1999 Edition.
2. Weinmiller, J., Schlager, M., Festag, A., et al.: Performance Study of Access Control in Wireless LANs – IEEE 802.11 DFWMAC and ETSI RES 10 HIPERLAN. Mobile Networks and Applications 2 (1997) 55–76
3. Chhaya, H.S. and Gupta, S.: Performance Modeling of Asynchronous Data Transfer Methods of IEEE 802.11 MAC Protocol. Wireless Networks 3 (1997) 217–234
4. Ho, T.S. and Chen, K.C.: Performance Analysis of IEEE 802.11 CSMA/CA Medium Access Control Protocol. Proc. 7th IEEE Int. Symp. on Personal, Indoor and Mobile Radio Communications (PIMRC'96), Taipei, Taiwan (1996) 407–411
5. Bianchi, G.: Performance Analysis of the IEEE 802.11 Distributed Coordination Function. IEEE Journal on Selected Areas in Communications 18 (2000) 535–548
6. Calì, F., Conti, M., and Gregori, E.: Dynamic Tuning of the IEEE 802.11 Protocol to Achieve a Theoretical Throughput Limit. IEEE/ACM Transactions on Networking 8 (2000) 785–799
7. Vishnevsky, V.M. and Lyakhov, A.I.: IEEE 802.11 Wireless LAN: Saturation Throughput Analysis with Seizing Effect Consideration. Cluster Computing 5 (2002) 133–144
8. Schriber, T.J.: Simulation Using GPSS. John Wiley & Sons (1974)
9. Higher-Speed Physical Layer Extension in the 2.4 GHz Band. Supplement to [1]
Efficient Simulation of Blocking Probabilities for Multi-layer Multicast Streams

Jouni Karvo

Networking Laboratory, Helsinki University of Technology, P.O. Box 3000, FIN-02015 HUT, Finland
[email protected]
Abstract. This paper presents an efficient algorithm for Monte Carlo simulation of time blocking probabilities for multi-layer multicast streams, under the assumption that blocked calls are lost. Users may join and leave the multicast connections freely, thus creating dynamic multicast trees. The earlier published algorithms are applicable only to small networks or networks with few users. The present simulation algorithm is based on the inverse convolution method and is, to the author's knowledge, the only effective way to handle large systems.
1 Introduction
This paper presents an efficient algorithm for Monte Carlo simulation of time blocking probabilities for multi-layer multicast streams, under the assumption that blocked calls are lost. Consider a network with circuit-switched traffic, or packet switching with strict quality guarantees, such as the IntServ architecture in the Internet. Decisions on whether to allow a new connection into the network are made according to the availability of resources. In general, traffic is a mixture of point-to-point (unicast) and point-to-multipoint (multicast) traffic. There are well-known algorithms for calculating blocking probabilities for unicast traffic in the absence of multicast traffic; see e.g. [1, 2]. Multicast, however, gives rise to a multitude of new problems (see e.g. [3]), one of which is blocking probability calculation. A model called the "multicast loss system" has been developed in recent years for calculating blocking probabilities. This system comprises a tree-structured multicast network with dynamic membership. In this network, users at the leaf nodes can join or leave any of the several multicast channels offered by one source, the root of the tree. The users joining the channels form dynamic multicast connections that share the network resources. Blocking occurs when there are not enough resources available in the network to satisfy the resource requirements of a request. Blocked calls are lost. The multicast loss system may be seen as a virtual network over the real one, carrying the multicast traffic of the real network. The time blocking probability is the probability that the system is in a state where a call cannot be established due to unavailable resources, while the call blocking probability is the probability that a user's attempt to establish a call
fails due to unavailable resources. These probabilities are intimately related, and it is possible to calculate the call blocking probability in a multicast loss system from the time blocking probability. Audio and video streams can be coded hierarchically [4]. In hierarchical, or layered, coding, information is separated according to its importance and then coded and transmitted in separate streams. In the present setting a user may, depending on her needs and abilities, subscribe to the most important sub-stream only, in which case she is said to be on layer 1, or subscribe to any number r of the most important sub-streams, in which case she is on layer r. This paper studies the efficient simulation of blocking probabilities for multicast layered streams. The assumption that blocked calls are lost implies that if a user does not get the desired layer (or number of sub-streams) due to blocking, she will not get any layer; that is, there is no re-negotiation for a lower-quality transmission. Chan and Geraniotis [5] studied the system of layered video multicasting. They gave the definition of the state space, but resorted to approximations for the actual calculations. After their work, research concentrated on non-layered multicast streams; see e.g. [6,7]. An efficient Monte Carlo simulation method for dynamic multicast networks with single-layer multicast streams was developed by Lassila et al. [8]. This method was based on the inverse convolution method that Lassila and Virtamo published in [9]. Recently, there has been progress in the case where the multicast streams are layered. Karvo et al. [10] developed an algorithm for calculating blocking probabilities of two-layer streams with Poisson arrivals and exponential holding times. They extended their study in [11] to an arbitrary number of layers, and studied the validity of the insensitivity property for different user models. The present paper provides an efficient simulation algorithm extending the inverse convolution approach of Lassila et al. [8] to this multi-layer case. This paper is organized as follows. Section 2 presents the basic system model and the time blocking probability calculation, whose computational complexity is exponential. The problem of estimating time blocking probabilities is divided into smaller sub-problems in Section 3. Section 4 contains the main contribution of this paper, showing how the inverse convolution method is applied to the layered multicast case. A numerical example is given in Section 5, and the results are summarized in Section 6.
2 Multicast Loss System
This section presents the system model and the notation for the multicast loss system. The model is the same as in [11]. Consider a network consisting of J links, indexed by j ∈ J = {1, . . . , J}, link j having a capacity of Cj resource units. The network is organized as a tree. The set U denotes the set of user populations, located at the leaves of the tree. The leaf links and the user populations connected to them are indexed with the same index u ∈ U = {1, . . . , U}. The set of links on the route from user population u to the root node is denoted by Ru. The user populations downstream of link j, i.e. those for which link j ∈ Ru, are
denoted by Uj; the size of this set is denoted by Uj. Let Mj denote the set of all links downstream of link j (including link j), and Nj the set of neighbouring links downstream of link j (excluding link j). The links of the tree are indexed so that j′ < j for all j′ ∈ Nj; thus, the root link is denoted by J. The multicast network supports I channels, indexed by i ∈ I = {1, . . . , I}. The channels originating from the root node represent different multicast transmissions, from which the users may choose. There are L layers. Each layer l ∈ L = {1, . . . , L} has a capacity requirement of d(l) capacity units. The capacity requirements are distinct, with d(l) < d(l′) for all l < l′; i.e., layer L contains all hierarchically coded sub-streams, layer 2 the two most important ones, and layer 1 only the most important sub-stream.
2.1 State Space
The states of the channels in a link define the state of that link. Each channel is in one of the states {0, 1, . . . , L}, depending on whether the channel is off or on layer 1, . . . , L. That is, the state of channel i on link j is Yj,i ∈ {0, . . . , L}. The vector Yj = (Yj,i; i ∈ I) ∈ {0, . . . , L}^I denotes the state of link j. The tuple (u, i, l) of user population u (leaf link), channel i, and layer l defines a multicast connection. The states Yu of all the leaf links define the network state X,

$$X = (Y_u;\ u \in \mathcal{U}) = (Y_{u,i};\ u \in \mathcal{U},\ i \in \mathcal{I}) \in \Omega\,, \qquad (1)$$
where Ω = {0, . . . , L}^{U×I} denotes the network state space. The network state determines the state of any link j as follows:

$$Y_j = \begin{cases} Y_u\,, & \text{if } j = u \in \mathcal{U}\,, \\ \max_{u \in \mathcal{U}_j} (Y_u)\,, & \text{otherwise}\,, \end{cases} \qquad (2)$$
where max(·) denotes the component-wise max operation. The occupancy of any link j is determined by the link state as

$$S_j = D(Y_j) = \sum_{i=1}^{I} d(Y_{j,i})\,, \qquad (3)$$

where d(0) = 0, i.e., a channel that is off does not need any link capacity. The occupancy generated by all channels other than I is denoted by $S'_j = D'(Y_j) = \sum_{i=1}^{I-1} d(Y_{j,i})$. Finally, in a finite-capacity network, the capacity constraints of the links truncate the state space:

$$\tilde\Omega = \left\{ x \in \Omega \mid D(y_j) \le C_j\,,\ \forall j \in \mathcal{J} \right\}. \qquad (4)$$
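A small sketch of these definitions, with an invented toy configuration (two leaf links, three channels, three layers with d(l) = l):

```python
# Sketch of Eqs. (2)-(4): link state by component-wise max over the leaf
# states, occupancy via d(l), and a per-link capacity check. The two-leaf,
# three-channel configuration and d(l) = l are invented for illustration.
d = [0, 1, 2, 3]  # d(0) = 0; layer l requires d(l) capacity units

def link_state(leaf_states):
    """Component-wise max of the leaf-link states Y_u (Eq. (2))."""
    return [max(col) for col in zip(*leaf_states)]

def occupancy(y):
    """S_j = D(Y_j) = sum_i d(Y_{j,i}) (Eq. (3))."""
    return sum(d[c] for c in y)

y1, y2 = [0, 2, 1], [3, 0, 1]   # leaf states of two user populations
yj = link_state([y1, y2])        # -> [3, 2, 1]
print(occupancy(yj) <= 6)        # capacity check of Eq. (4) for C_j = 6
```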
2.2 Probability Distributions
Let us assume that the user populations of the leaf links are independent, and that the leaf link distributions $\pi_u(y) = P\{Y_u = y\}$, $u \in \mathcal{U}$, are known and represent stationary distributions of reversible Markov processes satisfying the detailed balance equations. Several types of user population models of this kind have been discussed in [7] and in [11]. The steady state probabilities $\pi(x)$ of the network states in a system with infinite link capacities can be calculated from

$$\pi(x) = P\{X = x\} = \prod_{u \in \mathcal{U}} \pi_u(y_u), \qquad (5)$$
since the user populations are independent. The inverse convolution approach also requires all channels to be independent; thus,

$$\pi_u(y_u) = \prod_{i \in \mathcal{I}} p_{u,i}(y_{u,i}). \qquad (6)$$

As already noted in [11], the probabilities $\tilde\pi(x)$, $x \in \tilde\Omega$, of the states in a system with finite link capacities are obtained by truncation:

$$\tilde\pi(x) = P\{X = x \mid X \in \tilde\Omega\} = \frac{\pi(x)}{P\{X \in \tilde\Omega\}}, \qquad (7)$$

where $P\{X \in \tilde\Omega\} = \sum_{x \in \tilde\Omega} \pi(x)$. This follows from the assumed detailed balance; see Kelly [12] for a discussion of truncation.
2.3 Blocking
In a finite capacity network, blocking occurs whenever a user tries to establish a connection for channel i and layer r, and there is at least one link $j \in R_u$ where the channel is in a state $l < r$ and there is not enough spare capacity for setting the channel to the requested layer. Without loss of generality, the channels are ordered so that the blocking probability is calculated for the channel with index I. Consider link j. A request for layer r succeeds if there is enough capacity already reserved for the layer in link j, or there is enough free capacity in the link, i.e. $\max\{d(r), d(y_{j,I})\} \le C_j - D'(y_j)$. The expression "link j blocks" means that this condition does not hold for link j. The set $B_{u,r}$ consists of the states where at least one link blocks for connection (u, I, r), when layer r of channel I is requested by user u, and is defined as

$$B_{u,r} = \{x \in \tilde\Omega \mid \exists j \in R_u : d(r) > C_j - D'(y_j)\}. \qquad (8)$$

Then the time blocking probability for connection (u, I, r) is

$$B_{u,r} = P\{X \in B_{u,r} \mid X \in \tilde\Omega\} = \frac{P\{X \in B_{u,r}\}}{P\{X \in \tilde\Omega\}}. \qquad (9)$$
Call blocking probabilities for users depend on the chosen user model, as discussed in [11]. Calculation of time blocking probabilities for layers is easy, but very time consuming: the number of states in the state space is $(L+1)^{UI}$. The following section attacks this problem using the inverse convolution method.
3 Divide and Conquer
This section discusses efficient estimation of time blocking probabilities by applying the algorithm developed by Lassila and Virtamo [9]. As the form of the stationary distribution $\pi(x)$ is known, a natural choice for simulation is the Monte Carlo method. The main problem in the simulation is to quickly get a good estimate for $P\{X \in B_{u,r}\}$, i.e., the numerator in Eq. (9), especially in the case when the blocking probability $B_{u,r}$ is small. Note that $B_{u,r}$ also depends on $P\{X \in \tilde\Omega\}$, given by the denominator of Eq. (9). This probability is usually close to unity and is easy to estimate using the standard Monte Carlo method. Therefore, the rest of this paper concentrates on efficient methods for estimating $P\{X \in B_{u,r}\}$. First, Section 3.1 divides the task of estimating $P\{X \in B_{u,r}\}$ into simpler sub-problems. Then, each of the sub-problems is solved using importance sampling, as described in Section 3.2.

3.1 Decomposition
In order to divide the task of estimating $P\{X \in B_{u,r}\}$ into simpler sub-problems, $B_{u,r}$ is partitioned into sets $E^j_{u,r}$. $E^j_{u,r}$ is defined as the set of points in $B_{u,r}$ where link j blocks but none of the links closer to user u block,

$$E^j_{u,r} = B_{u,r} \cap \{x \in \tilde\Omega \mid d(r) > C_j - D'(y_j) \,\wedge\, d(r) \le C_{j'} - D'(y_{j'}),\ \forall j' \in R^j_u\}, \qquad (10)$$

where $R^j_u$ denotes the set of links on the path from u to j, including link u but not link j. The $E^j_{u,r}$ form a partitioning of $B_{u,r}$, i.e. $B_{u,r} = \bigcup_{j \in R_u} E^j_{u,r}$, and $E^j_{u,r} \cap E^{j'}_{u,r} = \emptyset$ when $j \ne j'$. From this it follows that

$$P\{X \in B_{u,r}\} = \sum_{j \in R_u} P\{X \in E^j_{u,r}\}. \qquad (11)$$

The probability $P\{X \in E^j_{u,r}\}$ can be thought of as the blocking probability contribution due to link j. It should be noted, however, that blocking in the states where several links block can be arbitrarily attributed to any of the blocking links. I use the convention which attributes it to the blocking link closest to the user.
3.2 Conditioning of $P\{X \in E^j_{u,r}\}$
Equation (11) decomposes the estimation of $P\{X \in B_{u,r}\}$ into independent sub-problems of estimating the $P\{X \in E^j_{u,r}\}$. For these estimation tasks, I introduce the superset $D^j_{u,r} \supset E^j_{u,r}$,

$$D^j_{u,r} = \{x \in \Omega \mid d(r) > C_j - D'(y_j) \ge d(y_{j,I})\}. \qquad (12)$$

This set corresponds to blocking states in a system where link j has a finite capacity $C_j$ but all other links have infinite capacity. Since all links have finite capacity in real systems, and several links could block simultaneously, the sets $D^j_{u,r}$ are not disjoint, unlike their subsets $E^j_{u,r}$. The next step is to use conditional probabilities to estimate $P\{X \in E^j_{u,r}\}$, as follows:

$$P\{X \in E^j_{u,r}\} = P\{X \in E^j_{u,r} \mid X \in D^j_{u,r}\}\, P\{X \in D^j_{u,r}\}. \qquad (13)$$

This relation is useful from the simulation point of view since it is easy to compute $P\{X \in D^j_{u,r}\}$ and to generate samples from the original distribution under the condition $X \in D^j_{u,r}$, as explained later. Monte Carlo simulation is then used to estimate the conditional probability $P\{X \in E^j_{u,r} \mid X \in D^j_{u,r}\}$ instead of $P\{X \in E^j_{u,r}\}$, which is usually much more effective. Let $\hat\eta^j_{u,r}$ denote the estimator for $\eta^j_{u,r} = P\{X \in E^j_{u,r}\}$,

$$\hat\eta^j_{u,r} = \frac{v_j}{N_j} \sum_{n=1}^{N_j} 1_{X^*_n \in E^j_{u,r}}, \qquad (14)$$

where $v_j = P\{X \in D^j_{u,r}\}$ and $X^*_n$ denotes samples drawn from the conditional distribution $P\{X = x \mid X \in D^j_{u,r}\}$. Then, the estimator for $P\{X \in B_{u,r}\}$ is simply

$$\hat{P}(B_{u,r}) = \sum_{j \in R_u} \hat\eta^j_{u,r}. \qquad (15)$$

Given the total number of samples N to be used for the estimator, the number of samples $N_j$ allocated to each sub-problem is a free parameter. This can be exploited by assigning the numbers of samples to the different $\hat\eta^j_{u,r}$ according to their estimated variances during the simulation; see e.g. [8].
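Assuming helper routines for the conditional sampling and membership tests described in Section 4, the composite estimator (14)–(15) might be organized as in the following sketch; sample_Dju and in_Eju are assumed names, not defined in the paper, and the even per-link sample allocation is a simplification of the variance-based allocation of [8].

```python
# Sketch of the composite estimator, Eqs. (14)-(15). sample_Dju(j) is an
# assumed routine drawing a network state conditionally on X in D^j_{u,r}
# (Section 4), in_Eju(x, j) an assumed membership test for E^j_{u,r}, and
# vj[j] = P{X in D^j_{u,r}} as in Eq. (19).

def estimate_blocking(route, vj, N, sample_Dju, in_Eju):
    """Estimate P{X in B_{u,r}} = sum over j in R_u of eta^j_{u,r}."""
    total = 0.0
    Nj = N // len(route)                 # even allocation; [8] allocates by variance
    for j in route:
        hits = sum(in_Eju(sample_Dju(j), j) for _ in range(Nj))
        total += vj[j] * hits / Nj       # Eq. (14)
    return total                         # Eq. (15)
```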
4 Inverse Convolution
This section presents the inverse convolution method (IC) for sample generation. I now consider only the estimation of one $\eta^j_{u,r}$ for fixed $j \in R_u$ and traffic class (u, I, r). The method is based on generating points from the conditional distribution $P\{X = x \mid X \in D^j_{u,r}\}$ by reversing the steps used to calculate the occupancy distribution of the considered link.
Fig. 1. Example of sample generation. A sample in the set $D^j_{u,r}$ is generated for the link j (thick dashed line). States of the links marked by the dashed ellipse are generated by inverse convolution from the state of link j. States for links denoted by ticks are generated by a simple draw. The state of the link denoted by the thick line is calculated directly from the states of the other links.
Note that the condition $X \in D^j_{u,r}$ is a condition expressed in terms of the occupancy, $S_j$, of the considered link. The idea in the inverse convolution method is to first generate a sample of $Y_j$ such that the occupancy of the link is in the blocking region. Then, given the state $Y_j$, the state of the network, i.e. the states of the leaf links, is generated. The mapping $x \to y_j$ is surjective, with several possible network states x generating the link state $y_j$, and one of them is drawn according to their probabilities. The main steps of the simulation can be summarized as follows (see Figure 1):
1. Generate the states for the leaf links u:
   a) Generate a sample state $Y_j$ for link j under the condition $d(r) > C_j - D'(y_j) \ge d(y_{j,I})$.
   b) Generate the leaf link states $Y_u$, $u \in \mathcal{U}_j$, with the condition that the link j state $Y_j = \max_{u \in \mathcal{U}_j}(Y_u)$ is given.
   c) Generate the states $Y_u$, $u \in \mathcal{U} - \mathcal{U}_j$, for the rest of the leaf links as in normal Monte Carlo simulation.
2. The sample state of the network $X^*_n \in D^j_{u,r}$ consists of the set of all sample states of leaf links generated in step 1.
3. To collect the statistics for the estimator $\hat\eta^j_{u,r}$, check whether $X^*_n \in E^j_{u,r}$.

The above steps are repeated to generate $N_j$ samples. Section 4.1 explains the method of generating a sample for link j (step 1a). Section 4.2 explains the method for generating the leaf link states from the link state (step 1b).
4.1 Generating a Sample for $D^j_{u,r}$
As already noted, I have partitioned the set of blocking states into the disjoint sets $E^j_{u,r}$. It is not easy to generate samples directly in these sets, however. Instead, I generate samples in the sets $D^j_{u,r}$, which correspond to the states in which at least link j blocks. After that it is possible to check whether the sample belongs to the set $E^j_{u,r}$, to collect the sum in Eq. (14).

Convolution method for calculating $P\{X \in D^j_{u,r}\}$. First, the link occupancy $S_j$ is easily calculated recursively as follows. Let $S_{j,i}$ denote the link occupancy due to the first i channels,

$$S_{j,i} = \sum_{i' \le i} d(Y_{j,i'}). \qquad (16)$$
Then $S_j = S_{j,I}$ and $S'_j = S_{j,I-1}$. The $Y_{j,i}$ are mutually independent, and $S_{j,i} = S_{j,i-1} + d(Y_{j,i})$, where $S_{j,i-1}$ and $Y_{j,i}$ are independent. Channel I must be dealt with differently than the other channels, since the system can be in a blocking state only if $C_j - S_{j,I-1} < d(r)$, but channel I can be in any state $l < r$. Knowing this, the set $D^j_{u,r}$ can be partitioned into r point-wise disjoint subsets:

$$D^{j,l}_{u,r} = \{x \in \Omega \mid y_{j,I} = l \,\wedge\, d(r) > C_j - D'(y_j) \ge d(l)\}, \quad l \in \{0, \dots, r-1\}. \qquad (17)$$

If a state x belongs to the set $D^{j,l}_{u,r}$, the state is a blocking state for link j, and channel I is on layer l. Thus, the free capacity $C_j - D'(y_j)$ of the link must be at most $d(r) - 1$ for the state to be a blocking state. The other channels may, however, consume at most $C_j - d(l)$ capacity units for the state to be within the allowed states. Now, let $v_j(l)$ denote the probability $P\{X \in D^{j,l}_{u,r}\}$:

$$v_j(l) = p_{j,I}(l) \sum_{i=C_j-d(r)+1}^{C_j-d(l)} q_{j,I-1}(i), \qquad (18)$$

where $q_{j,i}(x) = P\{S_{j,i} = x\}$. The probability mass $v_j$ of the set $D^j_{u,r}$ can be calculated as

$$v_j = P\{X \in D^j_{u,r}\} = \sum_{l=0}^{r-1} v_j(l). \qquad (19)$$
The link occupancy distribution $q_{j,I-1}(\cdot)$ can be calculated recursively by convolution:

$$q_{j,i}(x) = \sum_{y=0}^{x} q_{j,i-1}(x - d(y))\, p_{j,i}(y), \qquad (20)$$

where the recursion starts with $q_{j,0}(x) = 1_{x=0}$. Here, $p_{j,i}(y) = P\{Y_{j,i} = y\}$, and is calculated easily, as shown in Section 4.2.
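As a concrete illustration, the following Python sketch implements the convolution (20) and the blocking masses (18)–(19) for one link; the array layout (p[i][l] for the channel-state distributions, d[l] for the layer requirements) is an assumption made for the example.

```python
# Sketch of Eqs. (18)-(20) for one link: p[i][l] = P{Y_{j,i} = l} for the
# link's channels, d[l] the layer requirements (d[0] == 0), C the capacity.

def occupancy_dists(p, d, C):
    """q[i][x] = P{S_{j,i} = x} for i = 0..len(p), via the convolution (20)."""
    q = [[1.0] + [0.0] * C]                    # recursion start: q_{j,0}(x) = 1_{x=0}
    for pi in p:
        prev = q[-1]
        q.append([
            sum(prev[x - d[y]] * pi[y] for y in range(len(d)) if d[y] <= x)
            for x in range(C + 1)
        ])
    return q

def blocking_mass(p, d, C, r):
    """v_j(l) of Eq. (18) and v_j of Eq. (19) for a layer-r request."""
    qI1 = occupancy_dists(p[:-1], d, C)[-1]    # distribution of S'_j = S_{j,I-1}
    lo = max(0, C - d[r] + 1)
    v = [p[-1][l] * sum(qI1[lo : C - d[l] + 1]) for l in range(r)]
    return v, sum(v)
```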
Inverse convolution. For interpretation of the convolution step, note that the event $\{S_{j,i} = x\}$ is the union of the events $\{Y_{j,i} = y,\ S_{j,i-1} = x - d(y)\}$, $y \in \{0, \dots, L\}$. The corresponding probability is $q_{j,i-1}(x - d(y))\, p_{j,i}(y)$. Conversely, the conditional probability of the event $\{Y_{j,i} = y,\ S_{j,i-1} = x - d(y)\}$, given that $S_{j,i} = x$, is

$$P\{Y_{j,i} = y,\ S_{j,i-1} = x - d(y) \mid S_{j,i} = x\} = \frac{p_{j,i}(y)\, q_{j,i-1}(x - d(y))}{q_{j,i}(x)}. \qquad (21)$$

Generating a sample state in $D^j_{u,r}$ starts by drawing a value l for $Y_{j,I}$ using the distribution

$$P\{Y_{j,I} = l \mid X \in D^j_{u,r}\} = \frac{P\{Y_{j,I} = l,\ X \in D^j_{u,r}\}}{P\{X \in D^j_{u,r}\}} = \frac{v_j(l)}{v_j}, \qquad (22)$$

where $l \in \{0, \dots, r-1\}$. Then, a value for $S'_j = S_{j,I-1}$ is drawn under the condition that $Y_{j,I} = l$, that is, using the distribution

$$p(x \mid l) = P\{S_{j,I-1} = x \mid Y_{j,I} = l,\ X \in D^j_{u,r}\} = \frac{P\{Y_{j,I} = l,\ S_{j,I-1} = x\}}{P\{Y_{j,I} = l,\ X \in D^j_{u,r}\}}, \qquad (23)$$

since $\{Y_{j,I} = l \wedge S_{j,I-1} = x\} \Rightarrow \{X \in D^j_{u,r}\}$, restricting x to $x \in \{C_j - d(r) + 1, \dots, C_j - d(l)\}$, and

$$p(x \mid l) = \frac{p_{j,I}(l)\, q_{j,I-1}(x)}{v_j(l)} = \frac{q_{j,I-1}(x)}{\sum_{y=C_j-d(r)+1}^{C_j-d(l)} q_{j,I-1}(y)}. \qquad (24)$$

Then, given the value of $S_{j,I-1}$, the state $Y_{j,i}$ of each channel (i = I−1, . . . , 1) is drawn in turn using the probabilities in Eq. (21). Concurrently with the state $Y_{j,i}$, the value of $S_{j,i-1}$ becomes determined. This is then used as the conditioning value in the next step to draw the value of $Y_{j,i-1}$ (and of $S_{j,i-2}$), etc. Note that for reasonable link sizes, it is advantageous to store these probabilities for fast generation of samples. The next subsection presents a method for drawing the leaf link states $Y_u$, given the state $Y_j$ of link j.
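Putting Eqs. (21)–(24) together, a sample of the link state could be drawn as in the sketch below, which reuses the occupancy_dists and blocking_mass helpers from the previous sketch; as before, the data layout is assumed, and the probabilities are recomputed rather than stored, purely for brevity.

```python
import random

# Sketch of the draw of Eqs. (21)-(24); assumes occupancy_dists and
# blocking_mass from the previous sketch are available.

def draw_link_state(p, d, C, r, rng=random):
    """Draw a link state (Y_{j,1},...,Y_{j,I}) conditioned on X in D^j_{u,r}."""
    q = occupancy_dists(p[:-1], d, C)          # q[i] = distribution of S_{j,i}
    v, _ = blocking_mass(p, d, C, r)
    l = rng.choices(range(r), weights=v)[0]    # Eq. (22): layer of channel I
    xs = list(range(max(0, C - d[r] + 1), C - d[l] + 1))
    s = rng.choices(xs, weights=[q[-1][x] for x in xs])[0]   # Eq. (24)
    y = [0] * len(p)
    y[-1] = l
    for i in range(len(p) - 1, 0, -1):         # channels I-1, ..., 1, via Eq. (21)
        w = [p[i - 1][yy] * (q[i - 1][s - d[yy]] if d[yy] <= s else 0.0)
             for yy in range(len(d))]
        y[i - 1] = rng.choices(range(len(d)), weights=w)[0]
        s -= d[y[i - 1]]                       # S_{j,i-1} becomes determined
    return y
```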
4.2 Generating Leaf Link States from a Link State
Having drawn a value for the state $Y_j$ of link j, it is possible to draw values of the state vectors $Y_u$, $u \in \mathcal{U}$, of the leaf links. For $u \in \mathcal{U}_j$, the states $Y_u$ are generated under the condition $Y_j = \max_{u \in \mathcal{U}_j}(Y_u)$, using an inverse convolution procedure similar to the one above. Due to the assumed independence of the channels, this condition can be broken down into separate conditions, i.e. for each i there is a separate problem of generating the values $Y_{u,i}$, $u \in \mathcal{U}$, under the condition $Y_{j,i} = \max_{u \in \mathcal{U}_j}(Y_{u,i})$ with a given $Y_{j,i}$. These conditions affect the leaf links $u \in \mathcal{U}_j$. For the other links $u \in \mathcal{U} - \mathcal{U}_j$, the states $Y_u$ are independently generated from the distribution $\pi_u(\cdot)$.

First, let us consider a convolutional approach for generating a link state for channel i and link j when the states for each link $u \in \mathcal{U}_j$ are already known. In this section, I use an index $u_j \in \{1, \dots, U_j\}$ for the subset of leaf links. Let $Z_{u_j,i} = x$ denote the event that channel i is in state x on link j when the leaf links $u = 1, \dots, u_j$ have been accounted for, i.e. $Z_{u_j,i} = \max_{u' \le u_j}(Y_{u',i})$. Note that $Y_{j,i} = Z_{U_j,i}$. The probabilities $\xi_{u_j,i}(s) = P\{Z_{u_j,i} = s\}$ can be calculated recursively as follows:

$$\xi_{u_j,i}(s) = p_{u_j,i}(s) \sum_{s'=0}^{s-1} \xi_{u_j-1,i}(s') + \xi_{u_j-1,i}(s) \sum_{s'=0}^{s} p_{u_j,i}(s'). \qquad (25)$$

The recursion starts with $\xi_{0,i}(s) = 1_{s=0}$. The probabilities $p_{j,i}(s)$ used in the previous section are then simply $p_{j,i}(s) = \xi_{U_j,i}(s)$, where all users have been taken into account. If $Z_{u_j-1,i} = s$, then necessarily $Z_{u_j,i} \ge s$ (due to the nature of the max-operation). Conversely, to generate the state for each leaf link, given the value of $Y_{j,i}$, I first generate $Z_{u_j-1,i}$ from the distribution:
$$P\{Z_{u_j-1,i} = x \mid Z_{u_j,i} = s\} = \begin{cases} \dfrac{\xi_{u_j-1,i}(x) \sum_{s'=0}^{s} p_{u_j,i}(s')}{\xi_{u_j,i}(s)}, & \text{when } x = s, \\ \dfrac{\xi_{u_j-1,i}(x)\, p_{u_j,i}(s)}{\xi_{u_j,i}(s)}, & \text{otherwise.} \end{cases} \qquad (26)$$

Note that the event $Z_{u_j-1,i} < Z_{u_j,i}$ implies directly that $Y_{u_j,i} = Z_{u_j,i}$. If this is not the case, the value of $Y_{u_j,i}$ is drawn from the distribution

$$P\{Y_{u_j,i} = y \mid Z_{u_j-1,i} = Z_{u_j,i} = s\} = \frac{p_{u_j,i}(y)}{\sum_{y'=0}^{s} p_{u_j,i}(y')}. \qquad (27)$$
This procedure is repeated for each channel. The state vectors of each leaf link u ∈ Uj result from this procedure. The rest of the leaf link states must be generated as in the normal Monte Carlo simulation using distribution πu (·).
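The recursions (25)–(27) for one channel of one link might look as follows in code; pu[u][s] is an assumed layout for the leaf distributions $p_{u,i}(s)$, and L is the number of layers.

```python
import random

# Sketch of Eqs. (25)-(27) for one channel i of link j; pu[u][s] is an
# assumed layout for p_{u,i}(s) over the leaf links u in U_j.

def xi_dists(pu, L):
    """xi[u][s] = P{Z_{u,i} = s}, via the recursion (25)."""
    xi = [[1.0] + [0.0] * L]                  # xi_0(s) = 1_{s=0}
    for pk in pu:
        prev = xi[-1]
        xi.append([pk[s] * sum(prev[:s]) + prev[s] * sum(pk[:s + 1])
                   for s in range(L + 1)])
    return xi

def draw_leaf_states(pu, L, yji, rng=random):
    """Draw (Y_{u,i})_{u in U_j} given Y_{j,i} = yji, via Eqs. (26)-(27)."""
    xi, y, s = xi_dists(pu, L), [0] * len(pu), yji
    for u in range(len(pu), 0, -1):           # peel off leaf links U_j, ..., 1
        pk, prev = pu[u - 1], xi[u - 1]
        w = [prev[x] * (sum(pk[:s + 1]) if x == s else pk[s]) if x <= s else 0.0
             for x in range(L + 1)]           # Eq. (26): Z_{u-1} given Z_u = s
        z = rng.choices(range(L + 1), weights=w)[0]
        if z < s:
            y[u - 1] = s                      # Z_{u-1} < Z_u forces Y_u = Z_u
        else:
            y[u - 1] = rng.choices(range(s + 1), weights=pk[:s + 1])[0]  # Eq. (27)
        s = z
    return y
```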
5 Numerical Results
This section gives some numerical examples to illustrate the efficiency of the presented method in Monte Carlo simulation of the blocking probabilities. I consider the same network used in [7]; the network is the one shown in Figure 1. There is a root node, four channels, I = 4, and three layers, L = 3, with d(l) = l for all channels. The capacity of the root link is $C_J = 6$; for the others, $C_j = 5$. Each leaf link has an infinite user population offering traffic to each channel. The probability $p_{u,i}(l)$ that a channel is on layer l is $p_{u,i}(l) = \alpha_l b$ (for all users), where $\alpha_1 = 0.3$, $\alpha_2 = 0.2$ and $\alpha_3 = 0.1$. I simulated blocking for channel I and user u (the longer path) with three values for b: 0.01, 0.05, and 0.1, to compare the simulation methods under light, moderate, and high load conditions.

I also estimated the relative deviation of the estimator for $10^4$ samples and $10^5$ samples, given by $(V[\hat{P}(B_{u,r})])^{1/2} / \hat{P}(B_{u,r})$. For classic Monte Carlo (MC), these were the total numbers of samples used, while for the Inverse Convolution method (MC-IC), one third of the samples was used for each estimate $\hat\eta^j_{u,r}$. For Inverse Convolution with optimal Sample Allocation (MC-ICSA) [8], the total number of samples was allocated optimally among the estimates. The results are shown in Table 1.

Table 1. The relative deviation of the estimates $\hat{P}(B_{u,r})$ for the example network

The table shows that the variance reductions obtained with the inverse convolution method are remarkable. For example, for light load (b = 0.01), the ratio between the deviations of the standard MC and the inverse convolution method (MC-ICSA) is up to 131 for 10 000 samples and 138 for 100 000 samples, corresponding to a decrease by a factor of 17 000 to 19 000 in the required sample sizes. In high load situations, the overhead in sample generation might not be justified, as the traditional Monte Carlo method gives rather good estimates, too.
6 Summary
I presented an algorithm for the efficient simulation of time blocking probabilities for multi-layer multicast streams, under the assumption that blocked calls are lost. Calculating blocking probabilities for this system directly from the steady state probabilities is easy in principle, but excessively time-consuming. The simulation algorithm presented is based on the inverse convolution algorithm. The results for the example network shown convincingly support its efficiency, yielding a decrease in sample size of up to a factor of 19 000 over the traditional Monte Carlo method.

Acknowledgement. This study was funded by the Academy of Finland, and also supported by the Nokia Foundation. I thank Pasi Lassila, Jorma Virtamo and Samuli Aalto for their helpful advice.
References
1. Fortet R. and Grandjean C., "Congestion in a loss system when some calls want several devices simultaneously," Electrical Communication, vol. 39, no. 4, pp. 513–526, 1964.
2. Ross K. W., Multiservice Loss Models for Broadband Telecommunication Networks, Springer Verlag, London, 1995.
3. Diot C., Dabbous W., and Crowcroft J., "Multipoint communication: A survey of protocols, functions, and mechanisms," IEEE Journal on Selected Areas in Communications, vol. 15, no. 3, pp. 277–290, Apr. 1997.
4. Karlsson G. and Vetterli M., "Packet video and its integration into the network architecture," IEEE Journal on Selected Areas in Communications, vol. 7, no. 5, pp. 739–751, June 1989.
5. Chan W. C. and Geraniotis E., "Tradeoff between blocking and dropping in multicasting networks," in ICC '96 Conference Record, June 1996, vol. 2, pp. 1030–1034.
6. Karvo J., Virtamo J., Aalto S., and Martikainen O., "Blocking of dynamic multicast connections," Telecommunication Systems, vol. 16, no. 3–4, pp. 467–481, 2001.
7. Nyberg E., Virtamo J., and Aalto S., "An exact algorithm for calculating blocking probabilities in multicast networks," in Networking 2000, Pujolle G., Perros H., Fdida S., Körner U., and Stavrakakis I., Eds., Paris, May 2000, pp. 275–286.
8. Lassila P., Karvo J., and Virtamo J., "Efficient importance sampling for Monte Carlo simulation of multicast networks," in Proc. INFOCOM'01, Anchorage, Alaska, Apr. 2001, pp. 432–439.
9. Lassila P. E. and Virtamo J. T., "Nearly optimal importance sampling for Monte Carlo simulation of loss systems," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 10, no. 4, pp. 326–347, Oct. 2000.
10. Karvo J., Aalto S., and Virtamo J., "Blocking probabilities of two-layer statistically indistinguishable multicast streams," in Proc. International Teletraffic Congress ITC-17, de Souza J. M., Fonseca N. L. S., and de Souza e Silva E. A., Eds., Salvador da Bahia, Brazil, Sept. 2001, pp. 769–779.
11. Karvo J., Aalto S., and Virtamo J., "Blocking probabilities of multi-layer multicast streams," in 2002 Workshop on High Performance Switching and Routing (HPSR 2002) (to appear), Kobe, Japan, May 2002.
12. Kelly F. P., Reversibility and Stochastic Networks, John Wiley & Sons, 1979.
Aggregated Multicast – A Comparative Study
Jun-Hong Cui, Jinkyu Kim, Dario Maggiorini, Khaled Boussetta, and Mario Gerla
Computer Science Department, University of California, Los Angeles, CA 90095
Abstract. Multicast state scalability is among the critical issues which delay the deployment of IP multicast. In our previous work, we proposed a scheme, called aggregated multicast, to reduce multicast state. The key idea is that multiple groups are forced to share a single delivery tree. We presented some initial results to show that multicast state can be reduced. In this paper, we develop a more quantitative assessment of the cost/benefit trade-offs. We introduce metrics to measure multicast state and tree management overhead for multicast schemes. We then compare aggregated multicast with conventional multicast schemes, such as the source specific tree scheme and the shared tree scheme. Our extensive simulations show that aggregated multicast can achieve significant routing state and tree management overhead reduction while containing the expense of extra resources (bandwidth waste and tunnelling overhead, etc.). We conclude that aggregated multicast is a very cost-effective and promising direction for scalable transit domain multicast provisioning.
1 Introduction

IP multicast has been a very hot area of research, development, and testing for more than a decade, since Stephen Deering established the IP multicast model in 1988 [6]. However, IP multicast is still far from being widely deployed in the Internet. Among the issues which delay its deployment, state scalability is one of the most critical. IP multicast utilizes a tree delivery structure, on which data packets are duplicated only at fork nodes and forwarded only once over each link. By doing so, IP multicast can scale well to support very large multicast groups. However, a tree delivery structure requires all tree nodes to maintain per-group (or even per-group/source) forwarding information, which increases linearly with the number of groups. A growing number of forwarding state entries means more memory and a slower forwarding process, since every packet forwarding action involves an address look-up. Thus, multicast scales well in the number of members within a single multicast group, but it suffers from scalability problems when the number of simultaneously active multicast groups is very large.

To improve multicast state scalability, we proposed a novel scheme to reduce multicast state, which we call aggregated multicast. In this scheme, multiple multicast groups are forced to share one distribution tree, which we call an aggregated tree. This way, the number of trees in the network may be significantly reduced. Consequently, forwarding state is also reduced: core routers only need to keep state per aggregated tree instead of per group. The trade-off is that this approach may waste extra bandwidth to deliver multicast data to non-group-member nodes.

In our earlier work [8,9], we introduced the basic concept of aggregated multicast, proposed an algorithm to assign multicast groups to delivery trees with controllable bandwidth overhead, and presented some initial results to show that multicast state can be reduced through inter-group tree sharing. However, a thorough performance evaluation of aggregated multicast is needed: what level of gain does aggregated multicast offer over conventional multicast schemes? In this paper, we propose metrics to measure multicast state and tree management overhead for multicast schemes. We then compare aggregated multicast with conventional multicast schemes, such as the source specific tree scheme and the shared tree scheme. Our extensive simulations show that aggregated multicast can achieve significant state and tree management overhead reduction at a reasonable expense (bandwidth waste and tunnelling overhead, etc.).

The rest of this paper is organized as follows. Section 2 gives a classification of multicast schemes. Section 3 reviews the concept of aggregated multicast and presents a new algorithm for group-tree matching. Section 4 then discusses the implementation issues for different multicast schemes and defines metrics to measure multicast state and tree management overhead, and Section 5 provides an extensive simulation study of different multicast schemes. Finally, Section 6 summarizes the contributions of our work.

This material is based upon work supported by the National Science Foundation under Grant No. 9805436, and CISCO/CORE fund No. 99-10060.
2 A Classification of Multicast Schemes

According to the type of delivery tree, we classify the existing intra-domain multicast routing protocols into two categories (it should be noted that, in this paper, we only consider intra-domain multicasting): in the first category, protocols construct a source specific tree, and in the second category, protocols utilize a shared tree. For convenience of discussion, we call the former category the source specific tree scheme, and the latter the shared tree scheme. According to this classification, DVMRP [12], PIM-DM [5], and MOSPF [11] belong to the source specific tree scheme category, while CBT [3], PIM-SM [7], and BIDIR-PIM [10] are basically shared tree schemes (of course, PIM-SM can also activate source specific trees when needed).

The source specific tree scheme constructs a separate delivery tree for each source; each source of a group utilizes its own tree to deliver data to the receivers in the group. The shared tree scheme instead constructs trees on a per-group basis, and all the sources of a group use the same tree to deliver data to the receivers. In other words, multiple sources of the same group share a single delivery tree. A shared tree can be unidirectional or bi-directional: PIM-SM is a unidirectional shared tree scheme, while CBT and BIDIR-PIM are bi-directional shared tree schemes.

Fig. 1 shows the different types of trees for the same group G with sources (S1, S2) and receivers (R1, R2). For the source specific tree schemes, two trees are set up for group G. For the unidirectional shared tree scheme, one tree is set up; each source needs to unicast packets to the rendezvous point (RP) or build source specific state on all nodes along the path between the source and the RP. For the last scheme, a single bi-directional tree suffices.
Fig. 1. Different types of trees for group G with sources (S1, S2) and receivers (R1, R2): (a) source specific tree, (b) unidirectional shared tree, (c) bi-directional shared tree, (d) a packet of S1 delivered on the bi-directional shared tree.
A source can unicast a packet to the nearest on-tree node instead of to the RP, and each on-tree node can deliver packets along the bi-directional tree.

Compared with conventional multicast schemes, aggregated multicast raises tree sharing to an even higher level: inter-group tree sharing, where multiple multicast groups are forced to share one aggregated tree. An aggregated tree can be either a source specific tree or a shared tree, while a shared tree can be either unidirectional or bi-directional. We review the basic concept of aggregated multicast and discuss some related issues in the following section.
3 Aggregated Multicast

3.1 Concept of Aggregated Multicast
Aggregated multicast [8,9] was proposed to reduce multicast state, and it is targeted at intra-domain multicast provisioning. The key idea is that, instead of constructing a tree for each individual multicast group in the core network (backbone), multiple multicast groups are forced to share a single aggregated tree.
Fig. 2. Domain peering and a cross-domain multicast tree; tree nodes: D1, A1, Aa, Ab, A2, B1, A3, C1, covering group G0 (D1, B1, C1).
Fig. 2 illustrates a hierarchical inter-domain network peering. Domain A is a regional or national ISP's backbone network; domains D, X, and Y are customer networks of domain A at a certain location (say, Los Angeles), and domain E is a customer network of domain A in another location (say, Seattle). Domains B and C can be other customer networks (say, in Boston) or some other ISPs' networks that peer with A. A multicast session originates at domain D and has members in domains B and C. Routers D1, A1, A2, A3, B1 and C1 form the multicast tree at the inter-domain level, while A1, A2, A3, Aa and Ab form an intra-domain sub-tree within domain A (there may be other routers involved in domains B and C). Consider a second multicast session that originates at domain D and also has members in domains B and C. For this session, a sub-tree with exactly the same set of nodes will be established to carry its traffic within domain A. Now if there is a third multicast session that originates at domain X and also has members in domains B and C, then router X1 instead of D1 will be involved, but the sub-tree within domain A still involves the same set of nodes: A1, A2, A3, Aa, and Ab.

To facilitate our discussion, we make the following definitions. For a group G, we call terminal nodes the nodes where traffic enters or leaves a domain (A1, A2, and A3 in our example), and transit nodes the tree nodes that are internal to the domain (such as Aa and Ab in our example).

In conventional IP multicast, all the nodes in the above example that are involved within domain A must maintain separate state for each of the three groups individually, though their multicast trees are actually of the same "shape". Alternatively, in aggregated multicast, we can set up a pre-defined tree (or establish a tree on demand) that covers nodes A1, A2 and A3, using a single multicast group address (within domain A). This tree is called an aggregated tree (AT), and it is shared by more than one multicast group (three groups in the above example). We say an aggregated tree T covers a group G if all terminal nodes for G are member nodes of T. Data from a specific group is encapsulated at the incoming terminal node using the address of the aggregated tree. It is then distributed over the aggregated tree and decapsulated at the exiting terminal nodes to be further distributed to neighboring networks.

This way, transit routers Aa and Ab only need to maintain a single forwarding entry for the aggregated tree, regardless of how many groups are sharing it. Thus, aggregated multicast can reduce the required multicast state: transit nodes do not need to maintain state for individual groups; instead, they only maintain forwarding state for a smaller number of aggregated trees. The management overhead for the distribution trees is also reduced. First, there are fewer trees that exchange refresh messages. Second, tree maintenance can be a much less frequent process than in conventional multicast, since an aggregated tree has a longer life span.
3.2 Group-Tree Matching in Aggregated Multicast
Aggregated multicast achieves state reduction through inter-group tree sharing: multiple groups share a single aggregated tree. When a group starts, an aggregated tree is assigned to the group following some rules. If a dense set of aggregated trees is pre-defined, things are easy: just choose the minimum-cost tree that can cover the group. In the dynamic case (aggregated trees are established on demand), a more elaborate group-tree matching algorithm is needed. When we try to match a group G to an aggregated tree T, we have four cases:
1. T can cover G and all the tree leaves are terminal nodes for G; this match is called a perfect match for G.
2. T can cover G but some of the tree leaves are not terminal nodes for G; this match is a pure-leaky match (for G).
3. T cannot cover G and all the tree leaves are terminal nodes for G; this match is called a pure-incomplete match.
4. T cannot cover G and some of the tree leaves are not terminal nodes for G; we name this match an incomplete leaky match.

That is, we denote the case when some of the tree leaves are not terminal nodes for the group G as a leaky match, and the case when the tree cannot cover the group G as an incomplete match. Clearly, leaky match includes cases 2 and 4, and incomplete match includes cases 3 and 4. To give examples, the aggregated tree T0 with nodes (A1, A2, A3, Aa, Ab) in Fig. 2 is a perfect match for our earlier multicast group G0, which has members (D1, B1, C1). However, if the aggregated tree T0 is also used for group G1, which only involves member nodes (D1, B1), then it is a pure-leaky match, since traffic for G1 will be delivered to node A3 (and will be discarded there, since A3 does not have state for that group). Obviously, the aggregated tree T0 is a pure-incomplete match for multicast group G2, which has members (D1, B1, C1, E1), and an incomplete leaky match for multicast group G3 with members (D1, B1, E1).

We can see that leaky match helps to improve inter-group tree sharing. A disadvantage of leaky match is that some bandwidth is wasted to deliver data to nodes that are not members of the group. Leaky match may be unavoidable, since usually it is not possible to establish aggregated trees for all possible group combinations. In the incomplete match case, we have two ways to get a tree for the group. One way is to construct a bigger tree, by moving the entire group to a new, larger aggregated tree or by extending the current aggregated tree. Extending a tree might involve a lot of overhead, because all the groups which use the extended aggregated tree need to make the corresponding adjustment. An alternative way is to use "tunnelling". Here we give an example. Suppose member E1 in domain E decides to join group G0 in Fig. 2. Instead of constructing a bigger tree, an extension "tunnel" can be established between edge router A4 (connecting domains A and E) and edge router A1. This solution combines features of multicast inter-group tree sharing and tunnelling; it still preserves core router scalability by pushing complexity to edge routers. We can see that, if we employ tunnelling instead of tree extension, an incomplete match only involves tunnelling, while an incomplete leaky match activates tunnelling and also wastes resources because of leaky matching.

3.3 A New Group-Tree Matching Algorithm

Here we present a new group-tree matching algorithm, which is used in our simulation. To avoid the overhead of tree extension, this algorithm uses tunnelling for incomplete matches. First, we introduce some notation and definitions.

Overhead Definition. A network is modelled as an undirected graph G(V, E). Each edge (i, j) is assigned a positive cost $c_{ij} = c_{ji}$, which represents the cost to transport a
unit of data from node i to node j (or from j to i). Given a multicast tree T, the total cost to distribute a unit of data over that tree is

$$C(T) = \sum_{(i,j) \in T} c_{ij}. \qquad (1)$$

If every link is assumed to have equal cost 1, the tree cost is simply $C(T) = |T| - 1$, where |T| denotes the number of nodes in T. This assumption holds in this paper. Let MTS (Multicast Tree Set) denote the current set of multicast trees established in the network. A "native" multicast tree (constructed by some conventional multicast routing algorithm, denoted by A) for a multicast group G is denoted by $T^A_G$.

For any aggregated tree T, as mentioned in Section 3.2, it is possible that T does not have a perfect match with group G, i.e. the match is a leaky match or an incomplete match. In the leaky match case, some of the leaf nodes of T are not terminal nodes for G, and packets reach some destinations that are not interested in receiving them. Thus, there is bandwidth overhead in aggregated multicast. We assume each link has the same bandwidth and each multicast group has the same bandwidth requirement; then the percentage bandwidth overhead (denoted by $\delta_L(G, T)$) is equal to the percentage link cost overhead:

$$\delta_L(G, T) = \frac{C(T) - C(T^A_G)}{C(T^A_G)}. \qquad (2)$$
Apparently, $\delta_L(G, T)$ is 0 for a perfect match and a pure-incomplete match. In the incomplete match case, T cannot cover all the members of group G, and some tunnels need to be set up. Data packets of G exit from the leaf nodes of T and are tunnelled to the corresponding terminal nodes of G. Clearly, there is tunnelling overhead caused by unicasting data packets to group terminal nodes. Each tunnel's cost can be measured by the link cost along the tunnel. Assume there are $k_G$ tunnels for group G, each denoted by $T^t_{G,i}$, where $1 \le i \le k_G$; then we define the percentage tunnelling overhead for this incomplete match as

$$\delta_I(G, T) = \frac{\sum_{i=1}^{k_G} C(T^t_{G,i})}{C(T^A_G)}. \qquad (3)$$
It is easy to tell that $\delta_I(G, T)$ is 0 for a perfect match and a pure-leaky match.

Algorithm Description. Our new group-tree matching algorithm is based on the bandwidth overhead and the tunnelling overhead. Let lt be the given bandwidth overhead threshold for leaky match, and tt be the given tunnelling overhead threshold for incomplete match. When a new group is started:

1. compute a "native" multicast tree $T^A_G$ for G based on the multicast group membership;
2. for each tree T in MTS, compute $\delta_L(G, T)$ and $\delta_I(G, T)$; if $\delta_L(G, T) < lt$ and $\delta_I(G, T) < tt$, then T is considered to be a candidate aggregated tree for G;
3. among all candidates, choose the one for which $f(\delta_L(G, T), \delta_I(G, T))$ is minimum and denote it by $T_m$; then $T_m$ is used to deliver data for G, and if $T_m$ cannot cover G, the corresponding tunnels are set up;
4. if no candidate is found in step 2, $T^A_G$ is used for G and is added to MTS.

In step 3, $f(\delta_L(G, T), \delta_I(G, T))$ is a function that decides how to choose the final tree from a set of candidates. In our simulations,

$$f(\delta_L(G, T), \delta_I(G, T)) = \delta_L(G, T) + \delta_I(G, T). \qquad (4)$$
Actually, this function can be chosen according to the needs of the real scenario; for example, we can give more weight to the bandwidth overhead if bandwidth is our main concern.
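As an illustration, the whole matching procedure (steps 1–4 with the overheads of Eqs. (2)–(4)) could be sketched as below; native_tree and tunnel_cost are assumed helpers standing in for the routing algorithm A and the tunnel computation, and trees are represented simply as node sets under the unit-link-cost assumption.

```python
# Sketch of the matching algorithm (steps 1-4, Eqs. (2)-(4)). Trees are node
# sets under the unit-link-cost assumption; native_tree and tunnel_cost are
# assumed stand-ins for the routing algorithm A and the tunnel computation.

def cost(tree_nodes):
    return len(tree_nodes) - 1               # C(T) = |T| - 1

def match_group(group, MTS, native_tree, tunnel_cost, lt, tt):
    TGA = native_tree(group)                 # step 1: "native" tree T_G^A
    best, best_f = None, None
    for T in MTS:                            # step 2: screen candidates
        dL = (cost(T) - cost(TGA)) / cost(TGA)         # Eq. (2)
        dI = tunnel_cost(group, T) / cost(TGA)         # Eq. (3)
        if dL < lt and dI < tt:
            f = dL + dI                      # Eq. (4)
            if best_f is None or f < best_f:
                best, best_f = T, f          # step 3: minimise f
    if best is not None:
        return best                          # caller sets up tunnels if needed
    MTS.append(TGA)                          # step 4: no candidate found
    return TGA
```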
4 Experiment Methodology

In an aggregated multicast scheme, sharing a multicast tree among multiple groups may significantly reduce the state at network core routers and, correspondingly, the tree management overhead. However, what level of gain can aggregated multicast achieve over other multicast schemes? In this section, we discuss some implementation issues for the different multicast schemes in our simulations, and define metrics to measure multicast state and tree management overhead. Then, in Section 5, we compare aggregated multicast with other multicast schemes through simulations.

4.1 Implementation of Multicast Schemes in SENSE
We do our simulations using SENSE (Simulation Environment for Network System Evolution) [2], a network simulator developed at the network research laboratory at UCLA to perform wired network simulation experiments. In SENSE, we can support the source specific tree scheme, the shared tree scheme (with unidirectional and bi-directional trees), and the aggregated multicast scheme (with source specific trees, unidirectional shared trees and bi-directional shared trees). It should be noted that the multicast schemes we discuss here are not specific multicast routing protocols, since the goal of this paper is to study the gain of aggregated multicast over conventional multicast schemes; the comparison is between schemes, not protocols.

We implement each multicast scheme in a centralized manner. For each scheme, there is a centralized processing entity (called the multicast controller), which has knowledge of the network topology and the multicast group membership. The multicast controller is responsible for constructing the multicast tree according to the different multicast schemes and then distributing the routing tables to the corresponding nodes. In the implementation, we did not model the membership acquisition and management procedures, which depend on the specific multicast routing protocol. This omission reduces bias and improves fairness in comparing different multicast schemes. The multicast controller reads group and member dynamics from a pre-generated (or generated on-the-fly) trace file.
For the shared tree scheme (either unidirectional or bi-directional) and the aggregated multicast scheme with a shared tree (unidirectional or bi-directional), a core node or rendezvous point (RP) is needed when a tree is constructed. To achieve better load balancing, the core node should be chosen carefully. In our implementation, for all multicast schemes using shared trees, a set of possible core routers is pre-configured. Then, when a group is initialized, the core is chosen so as to minimize the cost of the tree (see the sketch below). In an aggregated multicast scheme, the multicast controller also needs to manage aggregated trees and multicast groups and to run the group-tree matching algorithm. The multicast controller has the same responsibility as the tree manager (mentioned in [8,9]) in aggregated multicast: it collects group join messages and assigns aggregated trees to groups. Once it determines which aggregated tree to use for a group, the tree manager can install the corresponding state at the terminal nodes involved.
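A minimal sketch of this core selection, under the paper's unit-link-cost assumption, might look as follows; the adjacency-dict graph representation and helper names are our own, and the shared tree is approximated as the union of shortest paths from the candidate core to the group members.

```python
from collections import deque

# Sketch of the core (RP) selection: among pre-configured candidate cores,
# pick the one minimising the shared-tree cost, approximated as the union of
# shortest paths (unit link costs) from the core to the group members.

def bfs_parents(adj, src):
    parent, queue = {src: None}, deque([src])
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in parent:
                parent[m] = n
                queue.append(m)
    return parent

def choose_core(adj, cores, members):
    def tree_cost(core):
        parent, nodes = bfs_parents(adj, core), {core}
        for m in members:                    # graft each core->member path
            while m not in nodes:
                nodes.add(m)
                m = parent[m]
        return len(nodes) - 1                # unit-cost tree: C(T) = |T| - 1
    return min(cores, key=tree_cost)
```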
4.2 Performance Metrics
The main purpose of tree sharing is to reduce multicast state and tree maintenance overhead, so multicast state and tree management overhead measures are of most concern here. In our experiments, we introduce the following metrics.

Number of multicast trees (or number of trees for shorthand) is defined as |MTS|, where MTS denotes the current set of multicast trees established in the network. This metric is a direct measurement of the multicast tree maintenance overhead: the more multicast trees, the more memory required and the more processing overhead involved (though the tree maintenance overhead depends on the specific multicast routing protocol).

Forwarding state in transit nodes (or transit state for shorthand). Without loss of generality, we assume a router needs one state entry per multicast address in its forwarding table. As defined in Section 3, a multicast tree has transit nodes and terminal nodes. We note that forwarding state in terminal nodes cannot be reduced by any multicast scheme; even in aggregated multicast, the terminal nodes need to maintain the state information for individual groups. So, to assess the state reduction, we measure the forwarding state in transit nodes only.
5 Simulations

In this section, we compare aggregated multicast with conventional multicast schemes through extensive simulation, and quantitatively evaluate the gain of aggregated multicast.

5.1 Multicast Trace Generation
Multicast Group Models. Given the lack of experimental large-scale multicast traces, we have chosen to develop membership models that exhibit locality and group correlation preferences. In our simulation, we use the group model previously developed in [9], the random node-weighted model. For completeness, we provide here a summary description of this model.
The random node-weighted model. This model statistically controls the number of groups a node participates in based on its weight: for two nodes i and j with weights w(i) and w(j) ($0 < w(i), w(j) \le 1$), let N(i) be the number of groups that have i as a member and N(j) be the number of groups that have j as a member; then it is easy to prove that, on average, $N(i)/N(j) = w(i)/w(j)$. Assume the number of nodes in the network is N and the nodes are numbered from 1 to N. Each node i, $1 \le i \le N$, is assigned a weight w(i), $0 \le w(i) \le 1$. A group is then generated by the following procedure:

for i = 1 to N do
  generate p, a random number uniformly distributed between 0 and 1
  if p < w(i) then
    add i as a group member
  end if
end for

Following this model, the average size of the multicast groups is $\sum_{i=1}^{N} w(i)$.

Multicast Membership Dynamics. Generally, there are two methods to control multicast group member dynamics. The first one is to create new members (sources and receivers) for a group according to some pre-defined statistics (arrival rate, member life time, etc.), and then decide the termination of a group based on the distribution of the group size; this is member-driven dynamics. We call the other method group-driven dynamics: group characteristics (group size, group arrival rate, and group life time) are defined first, and then group members are generated according to the groups. In our experiments, we use the second method, in which the group statistics are controlled first (using the random node-weighted model). The second method appears more reasonable for many real-life multicast applications (such as video conferencing, tele-education, etc.). In any event, the specific method used to control group member dynamics is not expected to affect our simulation results.

In our experiments, given a group life period [t1, t2] and the group member set g, where |g| = n, the join time and leave time of any node $m_i \in g$, $1 \le i \le n$, are denoted by $t_{join}(m_i)$ and $t_{leave}(m_i)$, respectively. The member dynamics are then controlled as follows:

for i = 1 to n do
  t_join(m_i) = get_rand(t1, t2)            (a random time in [t1, t2])
  t_leave(m_i) = get_rand(t_join(m_i), t2)  (a random time in [t_join(m_i), t2])
end for

It is not difficult to see that the average life time of each member is $|t2 - t1|/4$.
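A direct Python rendering of the two procedures above might look as follows; the function and variable names are illustrative.

```python
import random

# A direct rendering of the two procedures above; names are illustrative.

def generate_group(w, rng=random):
    """Random node-weighted model: node i joins with probability w[i]."""
    return [i for i, wi in w.items() if rng.random() < wi]

def member_dynamics(group, t1, t2, rng=random):
    """Group-driven dynamics: join uniform in [t1, t2], leave uniform in
    [join, t2]; the mean membership duration is |t2 - t1| / 4."""
    times = {}
    for m in group:
        tj = rng.uniform(t1, t2)
        times[m] = (tj, rng.uniform(tj, t2))
    return times
```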
5.2 Results and Analysis
We now present results from simulation experiments using a real network topology, the vBNS backbone [1]. In the vBNS backbone there are 43 nodes, among which the FORE ASX-1000 nodes (16 of them) are assumed to be core routers only (i.e. they will not be terminal nodes for any multicast group) and are assigned weight 0. Any other node is assigned a weight from 0.05 to 0.8 according to the link bandwidth of the original backbone router; the rationale is that the more bandwidth on the outgoing (and incoming) links of a node, the more multicast groups it may participate in. So, we assign weight 0.8 to nodes with OC-12C links (OC-12C-linked nodes for shorthand), 0.2 to nodes with OC-3C links (OC-3C-linked nodes), and 0.05 to nodes with DS-3 links (DS-3-linked nodes).

In the simulation experiments, multicast session requests arrive as a Poisson process with arrival rate λ. Session life times have an exponential distribution with mean $\mu^{-1}$. At steady state, the average number of sessions is $\bar{N} = \lambda/\mu$. During the life time of each multicast session, group members are generated dynamically according to the group-driven method introduced earlier, and group membership is controlled using the random node-weighted model. Performance data is collected as a "snapshot" at certain time points (e.g. at T = 10/µ), when steady state has been reached.

First, we design experiments to compare the unidirectional shared tree scheme (UST scheme for shorthand) with the aggregated multicast scheme with unidirectional shared tree (AM w/UST scheme for shorthand). In this set of experiments, each member of a group can be a source and a receiver. Once a multicast session starts up, its core node (or RP) is randomly chosen from the 16 core routers in the network. For the aggregated multicast scheme with unidirectional shared tree, the algorithm specified in Section 3.3 is used to match a group to a tree. When members join or leave a group, its aggregated tree is adjusted according to the matching algorithm. Correspondingly, the routing algorithm A is a PIM-SM-like routing algorithm which uses unidirectional shared trees.
Fig. 3. Results for UST and AM w/UST when only pure-leaky match (tth = 0) is allowed: (a) number of trees and (b) transit state versus the number of groups.
In our first experiment, for aggregated multicast, we only allow pure-leaky match, which means that the tunnelling overhead threshold (denoted tth) is 0. We vary the bandwidth overhead threshold (denoted lth) from 0 to 0.3. For the UST scheme and the AM w/UST scheme with different bandwidth thresholds, we run simulations to show how the aggregation of aggregated multicast "scales" with the average number of concurrent groups. The results are plotted in Fig. 3. As to the number of trees (see Fig. 3(a)), for the UST scheme it is almost a linear function of the number of groups. For the AM w/UST scheme, as the number of groups becomes bigger, the number of trees also increases, but the increase is much smaller than for UST (even for perfect match (lth = 0), the number of trees is only 1150 instead of 2500 for UST when there are 2500 groups). Moreover, this increase slows down as there are more groups, which means that as more groups are pumped into the network, more groups can share an aggregated tree. Fig. 3(b) shows the change of transit state with the number of concurrent groups; it shows a trend similar to that of the number of trees. Transit state is reduced from 12800 to 7400 (above 40% reduction) even for perfect match with 2500 groups. A general observation is that when the bandwidth overhead threshold is increased, that is, more bandwidth is wasted, the number of trees decreases and the transit state falls, which means more aggregation. Therefore, there is a trade-off between state and tree management overhead reduction and bandwidth waste.

In our second experiment, for aggregated multicast, we only allow pure-incomplete match, which means that the bandwidth overhead threshold (lth) is 0. We vary the tunnelling overhead threshold (tth) from 0 to 0.3 to examine the effect of the tunnelling overhead threshold on the aggregation. Fig. 4 plots the results, which give curves similar to Fig. 3. However, we can see that the tunnelling overhead threshold affects the aggregation significantly: when tth = 0.3 and the group number is 2500, almost 5 groups share one tree, and transit state is reduced by about 70 percent. When the group number increases, we can expect even more aggregation. The stronger influence of the tunnelling overhead threshold on aggregation is not a surprise: the higher the tunnelling overhead threshold, the more chance for a group to use a small tree for data delivery, and the more likely it is that more groups share a single aggregated tree.
Fig. 4. Results for UST and AM w/UST when only pure-incomplete match (lth = 0) is allowed: (a) number of trees and (b) transit state versus the number of groups.
Our third experiment considers both bandwidth overhead and tunnelling overhead; the simulation results are shown in Fig. 5. All the results confirm what we expect: more aggregation is achieved when we sacrifice more (bandwidth and tunnelling) overhead.
Fig. 5. Results for UST and AM w/UST when both leaky match and incomplete match are allowed: (a) number of trees and (b) transit state versus the number of groups.
We have shown the results comparing the unidirectional shared tree scheme (UST) with the aggregated multicast scheme with unidirectional shared tree (AM w/UST). Similar results are obtained for the source specific tree scheme (SST) vs the aggregated multicast scheme with source specific tree (AM w/SST), and for the bi-directional shared tree scheme (BST) vs aggregated multicast with bi-directional shared tree (AM w/BST). Due to space limits, we do not show the corresponding results for the other schemes in this paper; interested readers can find more results in [4].

From our simulation results and analysis, the benefits of aggregated multicast are mainly in two areas: (1) tree management overhead reduction, by reducing the number of trees that need to be maintained in the network; and (2) state reduction at transit nodes. The price to pay is bandwidth waste and tunnelling cost. The above simulation results confirm our claim while demonstrating the following trends: (1) if we are willing to sacrifice more bandwidth or tunnelling cost (by lifting the bandwidth overhead threshold and the tunnelling overhead threshold correspondingly), more or better aggregation is achieved; by "more aggregation" we mean that more groups can share an aggregated tree (on average) and correspondingly more state reduction; (2) better aggregation is achievable as the number of concurrent groups increases. The latter point is especially important, since one basic goal of aggregated multicast is scalability in the number of concurrent groups.
6 Conclusions and Future Work

In this paper, we first gave a classification of multicast schemes and briefly reviewed aggregated multicast. For aggregated multicast, we proposed a new dynamic group-tree matching algorithm using tunnelling. We implemented the different multicast schemes in SENSE. Through extensive simulations, we compared aggregated multicast with conventional multicast schemes and evaluated its gain over the other schemes. Our simulations have shown that significant state and tree management overhead reduction (up to 70% state reduction in our experiments) can be achieved with reasonable bandwidth and tunnelling overhead (thresholds of 0.1 to 0.3). Thus, aggregated multicast is a very promising scheme for transit domain multicast provisioning. We are now developing an aggregated multicast routing protocol testbed for real application scenarios. The testbed will allow us to better evaluate the state reduction and control overhead.
References
1. vBNS backbone network. http://www.vbns.net/.
2. SENSE: Simulation Environment for Network System Evolution. http://www.cs.ucla.edu/NRL/hpi/resources.html, 2001.
3. A. Ballardie. Core Based Trees (CBT version 2) multicast routing: protocol specification. IETF RFC 2189, September 1997.
4. Jun-Hong Cui, Jinkyu Kim, Dario Maggiorini, Khaled Boussetta, and Mario Gerla. Aggregated Multicast – A Comparative Study. Technical report, UCLA CSD TR No. 020011, February 2002.
5. S. Deering, D. Estrin, D. Farinacci, and V. Jacobson. Protocol Independent Multicast (PIM), Dense Mode Protocol: Specification. Internet draft, March 1994.
6. Stephen Deering. Multicast routing in a datagram internetwork. Ph.D. thesis, December 1991.
7. D. Estrin, D. Farinacci, A. Helmy, D. Thaler, S. Deering, M. Handley, V. Jacobson, C. Liu, P. Sharma, and L. Wei. Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol Specification. IETF RFC 2362, June 1998.
8. Aiguo Fei, Jun-Hong Cui, Mario Gerla, and Michalis Faloutsos. Aggregated multicast: an approach to reduce multicast state. In Proceedings of the Sixth Global Internet Symposium (GI2001), November 2001.
9. Aiguo Fei, Jun-Hong Cui, Mario Gerla, and Michalis Faloutsos. Aggregated Multicast with Inter-Group Tree Sharing. In Proceedings of NGC2001, November 2001.
10. Mark Handley et al. Bi-directional Protocol Independent Multicast (BIDIR-PIM). Internet draft: draft-ietf-pim-bidir-03.txt, June 2001.
11. J. Moy. Multicast routing extensions to OSPF. RFC 1584, March 1994.
12. C. Partridge, D. Waitzman, and S. Deering. Distance vector multicast routing protocol. RFC 1075, 1988.
New Center Location Algorithms for Shared Multicast Trees
Young-Chul Shim¹ and Shin-Kyu Kang²
¹ Hongik University, Department of Computer Engineering, Seoul, Korea, [email protected]
² Aston Linux, Seoul, Korea, [email protected]
Abstract. Multicast routing algorithms such as PIM, CBT, and BGMP use shared multicast routing trees, and the location of the center of the multicast tree has a great impact on the tree cost and the packet delay. In this paper we propose new center location algorithms and a new center relocation algorithm, and analyze their performance through simulation studies. The proposed center location algorithms try to find the geographic center of the multicast members, considering not only multicast group members but also a few carefully chosen non-member nodes. Simulation results show that the proposed algorithms find a better center than existing algorithms in terms of tree cost and packet delay. After many members have joined and/or left the group, the previously chosen center may no longer be a proper place, and therefore we need to find a new center and build a new tree around this new center. We propose a new center relocation algorithm that determines the moment when the new tree should be built around the new center. The algorithm is based on measured packet delays as well as a parameter indicating how much the group has changed. It not only avoids unnecessary center relocations but also prevents the cost and worst packet delay of the tree from deviating significantly from the optimal values.
1 Introduction
Multicast is an efficient mechanism for sending packets to a group of receivers and is used in many areas [1,2]. To send packets to multicast group members, a multicast routing algorithm builds multicast packet delivery trees among senders and receivers. There are two types of delivery trees: source based trees and shared trees. In the source based tree approach, a shortest path tree is built from a sender to all the receivers, and one tree is built for each sender. DVMRP [3] and MOSPF [4] are examples of routing algorithms building source based trees. The disadvantage of this approach is that there are as many trees as there are senders, and the management of these trees can be very complicated. To solve this problem, in the shared tree approach one shared tree is built among all senders and receivers. (This work was supported by ITRC.)
CBT (Core Based Trees) [5], PIM-SM [6], and BGMP [7] are routing algorithms in this category. In this approach the location of the center of the shared tree greatly affects the multicast tree cost and the packet transmission delay over the tree; therefore, determining the proper location of the center becomes an important issue. In a dynamic environment where members can join and leave during a multicast session, a center location that may have been optimal in the beginning may not remain so after many membership changes. So in the dynamic case, the relocation of the center also becomes an important issue. Center location algorithms can be divided into three categories depending upon which network nodes are considered as candidates for the center. In the first category, all the network nodes can become candidates and the best node is chosen as the center. With this method the optimal center location can be found but, because too many packets are exchanged among all the network nodes, it is never a practical solution. In the second category, only the multicast members are considered as candidates. This approach incurs the least overhead but, because only the members are considered, the chosen center location can be far from optimal. The last category stands between the first and the second: not only the members but also some carefully chosen non-member nodes are considered, and the best node among them is chosen as the center. The method of choosing the non-member candidate nodes affects the overhead of the center location algorithm and the quality of the chosen center. We propose a new center location algorithm called GeoCenter (Geographic Center). The idea is that we try to find the geographic center of the multicast members in the Internet map, and this geographic center becomes the center of the multicast tree. This geographic center can be a member or a non-member router. We introduce three algorithms, GeoCenter1, GeoCenter2, and GeoCenter3, depending upon the method of finding the geographic center. The proposed algorithms try to minimize the packet delay and the tree cost. Then we consider a dynamic case and propose a new algorithm for relocating the center as the membership changes. The center relocation process is such a costly one that its execution should be limited to unavoidable cases. The new algorithm determines the moment when the new tree should be built around the new center. It is based on measured packet delays as well as a parameter indicating how much the group has changed. It not only avoids unnecessary center relocation processes but also prevents the cost and worst packet delay of the tree from deviating too much from the optimal values. We analyze the performance of the center location and relocation algorithms through simulation. The rest of the paper is organized as follows. Section 2 surveys related work. Sections 3 and 4 describe our algorithms for center location and relocation, respectively. Section 5 presents simulation results and is followed by the conclusion in Section 6.
2 Related Work
In this section we first present algorithms that have been proposed for the center location. Before introducing these algorithms we give the definition of the tree cost and explain the weight functions that have been used in those algorithms. The tree cost is the sum of the cost of each link in the tree. The link cost can be the actual monetary value of that link, bandwidth, delay, etc., but in this paper we set the link cost to 1 for every link. A weight function is calculated for each center candidate and the resulting values are compared to select the best one. We introduce the definitions of some weight functions in the following [8]. In the definitions, S is the set of all the senders and members, u and v represent either a sender or a member, root is the candidate for the center, d(u,v) is the distance between u and v, and deg(u) is the degree of u.

Actual_Cost = number of links in the tree rooted at root
Max_Dist = max_{u ∈ S} d(root, u)
Avg_Dist = (1/|S|) Σ_{u ∈ S} d(root, u)
Max_Diam = max_{u ∈ S} d(root, u) + max_{v ∈ S, v ≠ u} d(root, v)
Est_Cost = (Est_Cost_min + Est_Cost_max) / 2

where

Est_Cost_min = max_{u ∈ S} d(root, u) + number of duplicate-distance nodes in S
Est_Cost_max = Σ_{u ∈ S} d(root, u)                         if |S| ≤ deg(root)
Est_Cost_max = [Σ_{u ∈ S} d(root, u)] − [|S| − deg(root)]   otherwise
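As a concrete illustration, the following Python sketch computes these weight functions for a single candidate root. The names dist, S, and deg are placeholders for data the protocol would gather, and the counting of "duplicate distance nodes" is one plausible reading of the definition above, not the authors' reference implementation.

    def weight_functions(dist, S, root, deg):
        # dist[x][y]: hop distance between nodes x and y (assumed known)
        # S: set of senders and members; deg: node-degree map
        d = sorted((dist[root][u] for u in S), reverse=True)
        max_dist = d[0]
        avg_dist = sum(d) / len(S)
        # Max_Diam: the sum of the two largest root-to-member distances
        max_diam = d[0] + (d[1] if len(d) > 1 else 0)
        # Est_Cost_min: farthest distance plus nodes sharing a distance value
        duplicates = sum(1 for x in d if d.count(x) > 1)
        est_min = max_dist + duplicates
        # Est_Cost_max depends on whether root can fan out to all of S directly
        total = sum(d)
        est_max = total if len(S) <= deg[root] else total - (len(S) - deg[root])
        return {"Max_Dist": max_dist, "Avg_Dist": avg_dist,
                "Max_Diam": max_diam, "Est_Cost": (est_min + est_max) / 2}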
Now we introduce several center location algorithms. The OCBT (Optimal Center-Based Tree) algorithm calculates the actual cost of the tree rooted at each node in the network and selects the one which gives the lowest maximum delay over all the roots with the lowest cost. The MCT (Maximum-Centered Tree) algorithm selects the node with the smallest Max_Dist value. The ACT (Average-Centered Tree) algorithm chooses the node with the smallest Avg_Dist value. The DCT (Diameter-Centered Tree) algorithm selects the node with the lowest Max_Diam value. These four algorithms belong to the first category of center location algorithms. The RSST (Random Source-Specific Tree) algorithm chooses the center randomly among the senders and is used in CBT and PIM. In the MIN-MEM (Minimal Member Tree) algorithm, each member or sender node calculates the weight function of the multicast tree rooted at itself and exchanges the calculated value with the other nodes. The node with the lowest weight function
value becomes the center. The weight function can be any of the five functions explained above. These two algorithms belong to the second category. In the HILLCLIMB algorithm, a randomly chosen temporary center calculates the weight function of the tree rooted at itself, and all the routers directly connected to the temporary center do the same calculation. If the temporary center has the lowest value, it becomes the center. Otherwise the node with the lowest value becomes the temporary center and compares its weight function value with those of its direct neighbors. This process continues until a node is found whose value is lower than those of its direct neighbors, or until the distance from the original temporary center to the current temporary center reaches a certain threshold. This algorithm belongs to the third category. Its problem is that it just finds a locally optimal point among nodes within a limited distance from the original center. In a dynamic environment, a center can be relocated by applying any of the above algorithms after some membership changes have occurred. The biggest issue here is when to apply the center location algorithm again. Thaler and Ravishankar introduce the parameter ∆ defined as follows [9]:

∆ = 1 − |G0 ∩ Gi| / max(|G0|, |Gi|)
where G0 is the original group membership, Gi is the current group membership, and ∆ indicates the amount by which the group has changed. They propose to recalculate the center location when ∆ reaches 90%. They show that when 90% of the membership has changed, the tree cost has likewise degraded about 90% of the way toward that of a randomly centered tree. But we show that their algorithm does not improve the quality of the tree center at all in some cases and, therefore, incurs unnecessary center relocation processes.
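For reference, ∆ is straightforward to compute from the two membership sets. The sketch below, with illustrative example sets, is our own rendering of the formula, not code from [9].

    def delta(g0: set, gi: set) -> float:
        # 1 - |G0 ∩ Gi| / max(|G0|, |Gi|): fraction by which the group changed
        return 1.0 - len(g0 & gi) / max(len(g0), len(gi))

    print(delta({1, 2, 3, 4}, {3, 4, 5, 6}))  # 0.5: half of the membership changed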
3 New Algorithms for Center Location
In this section we introduce three new center location algorithms: GeoCenter1, GeoCenter2, and GeoCenter3. These algorithms pick the center based upon information on the routes between members. The route information from a node A to a node B is the list of all the routers visited on the path from A to B, and this information can be obtained by using the traceroute program or the IP record route option. In the GeoCenter1 algorithm, each member finds the route information to all the other member nodes, compiles all the routers appearing in the routes, and sends this information to a temporary center. Upon receiving the route information from all the members, the temporary center finds the routers that appear most frequently in the route information. If there are several such routers, one router is selected randomly. This selected router and the member nodes become the candidates for the center. The center is chosen as the node which has the lowest Max_Dist value among these candidates. GeoCenter2 and GeoCenter3 take different approaches in selecting non-member candidates. In the GeoCenter2 algorithm, each member first collects
route information from all the other members as in GeoCenter1 but, when compiling this information, records not only the address of each router in the route information but also the number of times the router appears in it. This information is sent to the temporary center. In the GeoCenter3 algorithm, each member finds the midpoints on the paths to the other members and sends the list of these midpoints to the temporary center. The way the temporary center selects the center is the same as in GeoCenter1. Based upon the explanation given above, we now present each algorithm in detail; a code sketch follows the three descriptions.

GeoCenter1
1. When a multicast group is created, a temporary center is chosen arbitrarily.
2. The temporary center sends a probe message to each member. The probe message also contains the list of the multicast group members.
3. Upon receiving the probe message, each member finds the route information to all the other members using either the traceroute program or the record route IP option. At the same time the member measures the packet delay to the other members and records the largest packet delay as its weight function value.
4. Each member compiles the list of nodes appearing on the paths from itself to the other members. It sends this list and its weight function value to the temporary center.
5. Upon receiving the lists from all the members, the temporary center selects the node(s) that appear most frequently in the lists. If there are many such nodes, one or more nodes are randomly picked. If the picked node(s) are members, go to step 8.
6. The temporary center sends a probe message to the selected non-member candidate(s).
7. Each non-member candidate measures the packet delay to all the members, records the largest delay as its weight function value, and sends this value to the temporary center.
8. The temporary center selects the node which has the lowest weight function value as the center.

GeoCenter2
GeoCenter2 is the same as GeoCenter1 except for step 4: the member nodes send not only the addresses of the nodes on the paths to the other member nodes but also the visit counts of those nodes.

GeoCenter3
Steps 1-3: the same as in GeoCenter1.
4. Each member finds the midpoints on the paths to the other members and sends the list of midpoints and its weight function value to the temporary center.
5. The temporary center adds up the visit counts for each node appearing in the lists received from the member nodes and picks the node(s) with the largest accumulated visit counts. If the picked node(s) are members, go to step 8.
Steps 6-8: the same as in GeoCenter1.
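The following Python sketch captures the core of the candidate selection; it is a simplification under assumed in-memory data structures (route_lists compiled from traceroute output, a hypothetical max_delay probe callback), not the message protocol itself.

    from collections import Counter

    def select_candidates(route_lists, members):
        # route_lists[m]: routers seen on the paths from member m to the others
        counts = Counter()
        for routers in route_lists.values():
            counts.update(set(routers))  # presence per member list (GeoCenter1);
                                         # GeoCenter2 would keep multiplicities
        top = max(counts.values())
        frequent = [r for r, c in counts.items() if c == top]
        # candidates: all members plus one most-frequent router (picked here
        # deterministically for simplicity; the paper picks randomly)
        return list(members) + frequent[:1]

    def choose_center(candidates, max_delay):
        # Step 8: lowest weight function value, i.e. the smallest
        # largest-delay-to-any-member (the Max_Dist weight)
        return min(candidates, key=max_delay)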
4 The Center Relocation Algorithm in a Dynamic Multicast Environment
In this section we explain the algorithm for relocating the center after many membership changes have occurred. After many members have joined and/or left the group, the center that was carefully chosen earlier may no longer be a good place. The quality of the tree may have deteriorated during the membership changes and, therefore, the tree cost and the maximum delay may have become too high compared with the optimal tree. As explained in the previous section, the most important issue in center relocation is determining the moment when the center location algorithm is applied. Because the center relocation process requires not only the determination of a new center location but also building a new multicast delivery tree around this new center, it is a very costly process. In reality, building a new tree will consume more time than calculating a new center location. So it is imperative to minimize the number of times the multicast delivery tree is rebuilt. As we described in Section 2, Thaler and Ravishankar introduce a parameter ∆ indicating the amount by which the group has changed and use this parameter to determine when to calculate the new center location. They propose to calculate the new center location when ∆ reaches 90%. They also show that the time it takes for ∆ to go from 0 to 90% roughly corresponds to two to three times the average connection duration of a member in a multicast group. But as we will show by simulation, if the area in which members are located does not change very much and members are uniformly distributed in this fixed area, center relocation using only ∆ does not improve the tree cost and the packet delay, and is unnecessary in many cases. Another parameter we can use to determine when to calculate the new center location is the maximum delay from the center to the member nodes. When the center location is calculated, the maximum delay at that moment is recorded as Prev_Max_Delay. When a new member joins the group, the delay from the current center to this node is calculated and compared with Prev_Max_Delay. If the delay to this new node exceeds a certain constant times Prev_Max_Delay, a new center location is calculated. Using this method, the unnecessary recalculation of the center that occurs when just using the parameter ∆ can be avoided, because the center will be recalculated only when there is evidence that the quality of the tree has deteriorated enough. This method of measuring the delay to a joining member and comparing it against Prev_Max_Delay works if the area where members are located remains the same, moves, or gets larger. But it does not work if the member distribution area gets smaller: in that case the delay to a new member will rarely exceed the previously measured Prev_Max_Delay value, yet the quality of the tree may still have deteriorated compared with the optimal tree. We propose a new algorithm for determining when to recalculate the center location and when to actually rebuild the tree around the new center. The proposed algorithm uses both the parameter ∆ and the delay to a new member. We assume that the multicast routing algorithm enables the center to be
notified of all the join events of new members and to calculate the delay to these new members. We now explain our algorithm in detail.
1. The location of a center is calculated.
2. The multicast delivery tree is built around the calculated center. Prev_Max_Delay and Curr_Max_Delay are initialized to the maximum delay of this new tree. Set G0 and Gi to be the current set of members.
3. Wait until the membership changes. If the membership change is a join event, calculate the delay from the current center to the new member and update Curr_Max_Delay to be the maximum of this delay and the current value of Curr_Max_Delay. If Curr_Max_Delay is greater than C1 * Prev_Max_Delay, go to step 1.
4. Update Gi and the value of ∆. If ∆ does not exceed C2, go to step 3. Otherwise calculate the location of a new center. Assuming this new center, calculate the maximum delay of the new tree and set this value to be Opt_Max_Delay. If Curr_Max_Delay is greater than C1 * Opt_Max_Delay, go to step 2. Otherwise, set G0 and Gi to be the set of current members and go to step 3.
In the above algorithm C1 determines the extent to which the worst case packet delay of the current tree is permitted to exceed the worst case packet delay calculated when the center was most recently determined. If C1 is 1.4, an excess of up to 40% is permitted. If ∆ reaches C2, it means that C2 * 100% of the members have changed. In the algorithm given in [9], C2 was set to 0.9. The algorithm calculates a new center location and builds a new tree around it if the delay to a newly joining member is bad enough compared with the maximum delay calculated when the tree was last built. But if the area where the members are distributed gets smaller, this condition will rarely be satisfied. So after we have seen enough changes in the membership, we calculate the new center location and also the new tree around this new center. If the quality of the current tree is bad enough compared with this new tree, we actually rebuild the tree around the calculated new center.
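A compact sketch of the decision logic in steps 3 and 4 follows. The state dictionary and the measure_delay and new_center_max_delay callbacks are our own illustrative stand-ins for the routing layer; the constants C1 = 1.4 and C2 = 0.9 are the values discussed above.

    C1, C2 = 1.4, 0.9

    def on_membership_change(state, event, member, measure_delay, new_center_max_delay):
        # state: {"g0", "gi", "prev_max_delay", "curr_max_delay"}
        if event == "join":
            state["gi"].add(member)
            d = measure_delay(member)  # delay from current center to newcomer
            state["curr_max_delay"] = max(state["curr_max_delay"], d)
            if state["curr_max_delay"] > C1 * state["prev_max_delay"]:
                return "relocate"      # step 3: recompute center, rebuild tree
        else:
            state["gi"].discard(member)
        g0, gi = state["g0"], state["gi"]
        if 1 - len(g0 & gi) / max(len(g0), len(gi)) <= C2:
            return "wait"              # step 4: Delta still below threshold
        opt = new_center_max_delay()   # max delay of a tree around a new center
        if state["curr_max_delay"] > C1 * opt:
            return "rebuild"           # current tree is bad enough: rebuild
        state["g0"] = set(gi)          # otherwise reset the change baseline
        return "wait"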
5 Experimentation Results
In this section we present and analyze simulation results for our center location and relocation algorithms. We first present simulation results for the center location algorithms, assuming that the set of members and senders is fixed, and compare their performance with that of other center location algorithms. Then we show simulation results for our center relocation algorithm in an environment where members join and leave. For the experimentation we used NS (Network Simulator), developed at UC Berkeley, and ran the simulation on Linux 5.1 platforms. The algorithms were implemented in TCL, and network topologies were generated with GT-ITM (Georgia Tech Internetwork Topology Models) provided with NS.
5.1 Experimentation Results for Center Location Algorithms
In this subsection we compare the proposed center location algorithms with other algorithms through simulation, varying the multicast group size, the network size, and the average number of links per node in the network. We measure the tree cost and packet transmission delay of the multicast trees built using the various center location algorithms. We compare the proposed algorithms with the OCBT, MDOT (Minimum Delay Optimal Tree), Random, MIN-MEM, and HILLCLIMB algorithms. The OCBT algorithm is optimal in terms of tree cost. MDOT considers all the nodes as candidates for the center and picks the node such that the tree built around it has the lowest Max_Dist weight function value; if several nodes have the same lowest Max_Dist value, the node with the lowest tree cost is selected. This algorithm is optimal in terms of packet delay when the shared trees are unidirectional, as in PIM-SM, but may not be optimal in the case of bi-directional shared trees, as in CBT. The Random algorithm chooses the center randomly. These three algorithms, OCBT, MDOT, and Random, are not practical but are considered here just for comparison. The MIN-MEM and HILLCLIMB algorithms are practical solutions and have been shown to find good centers [8,9]. For the experimentation we assume that all the senders are also members, and for each simulation we perform 100 experiments and take the average as the result.
Fig. 1. Effects of group size on algorithms
A group size is the number of member nodes in a multicast group. In the first experimentation we assumed that there were 100 nodes in the network and the average number of links per node was 4. We changed the group size from 5 to 60 and measured the tree costs and the packet delays. Figure 1 (a) shows the tree costs of the various algorithms; the tree cost is represented as the ratio to OCBT. The method for measuring the packet delay differs depending on the type of shared tree: unidirectional or bidirectional. In unidirectional shared trees, packets are sent to the center and then distributed to all the members. In bidirectional shared trees, packets are sent along the shortest path on the shared tree from the sender to the members and in many cases may not pass through the center. Figures 1 (b) and (c) show the packet delays of the various algorithms, again represented as the ratio to OCBT. In the figures some algorithms sometimes show a packet delay better than MDOT; this is possible because MDOT is optimal when packet delays are measured between the center and the receivers, not between senders and receivers. The figures show that the tree cost of GeoCenter2 and GeoCenter3 stays within 112% of OCBT and is better than MIN-MEM and HILLCLIMB, and that the packet delay of GeoCenter2 and GeoCenter3 is comparable to MDOT and always lower than that of the other algorithms. GeoCenter2 and GeoCenter3 show better results than GeoCenter1 because the former algorithms find better geographic centers. Next we summarize the results of the second and third sets of simulations without showing the figures, because they were similar to Figure 1. The second set of experiments was performed varying the network size, i.e., the number of nodes in the simulated network. The average number of links per node was set to 4 and the group size was assumed to be 20% of the network size. The tree cost of GeoCenter2 and GeoCenter3 was 12% higher than OCBT in the worst case but always better than MIN-MEM and HILLCLIMB. The packet delay of GeoCenter2 and GeoCenter3 was comparable with MDOT, even better than MDOT at some points, and consistently better than the other algorithms. The third set of experiments was performed varying the average number of links per node in the network. The network size and the group size were set to 100 and 20, respectively. The simulation showed that the tree cost of GeoCenter2 and GeoCenter3 stayed within 114% of OCBT and was always better than MIN-MEM and HILLCLIMB. The packet delay of GeoCenter2 and GeoCenter3 was very similar to MDOT and always better than the other algorithms. From the above three sets of experiments, we conclude that two of the proposed algorithms, GeoCenter2 and GeoCenter3, achieve near optimal packet delay compared with the MDOT algorithm while not incurring too much increase in tree cost compared with the OCBT algorithm.
5.2 Experimentation Results for the Center Relocation Algorithm
In this subsection we consider a dynamic environment where members can join and leave during the lifetime of a multicast session.
Thaler and Ravishankar introduced the parameter ∆ and proposed to calculate the new center and rebuild the tree when ∆ reaches some fixed value [9]. We first show that if the area in which the members are located is fixed and the members are uniformly distributed in this area, their simple method of just using the ∆ value does not improve the quality of the trees at all. We ran experiments with a network of 200 nodes. The members were uniformly distributed in this network and their average number was 40. We compared the quality of the trees in two cases. In the first case the center location is never recalculated, and in the second case the center location is recalculated when ∆ reaches 90%. The simulation results showed that recentering and rebuilding a tree using just ∆ did not improve the quality of the trees at all and in some cases gave worse performance. So we conclude that if members are uniformly distributed in a fixed area, we need not recalculate the center location. This is because the center that was calculated in the beginning remains near optimal if the distribution of the members remains uniform, even though members join or leave the multicast group. From the experiment we can see that the packet delay remains within 140% of the optimal value, so if we set C1 to 1.4 in our center relocation algorithm, the center need not be moved and, therefore, unnecessary overhead can be avoided. The algorithm by Thaler and Ravishankar using only ∆, on the other hand, regularly changes the center location without improving the quality of the multicast tree. Now we consider three cases where the area in which members are distributed changes and show how the center relocation algorithm explained in the previous section performs. In the first case the area expands, in the second case the size of the area remains the same but the area moves gradually, and in the last case the size of the area is reduced. Figure 2 shows the simulation results when the area expands. Each figure has two graphs. The first graph shows the result when the center is calculated once in the beginning and is never recalculated. The second graph describes the result when the center is recalculated by the proposed algorithm using both ∆ and the worst case packet delay measurement. In the experiments the GeoCenter3 algorithm was used to determine the center location. The points on the graphs are represented as the ratio to the value of the optimal tree generated using the OCBT algorithm at each measurement point.
Fig. 2. Center relocation in an expanding area
Fig. 3. Center relocation in a moving area
The figure shows that, without the center relocation algorithm, the tree cost gradually increases as the area expands. If we use the proposed center relocation algorithm, the tree cost never becomes more than 30% higher than the optimal value calculated with the OCBT algorithm at each point. The figure also shows that the packet delay becomes almost 1.8-2 times that of the OCBT tree without the center recalculation but rarely becomes more than 20% higher than the OCBT tree if we use the proposed center relocation algorithm. Figure 3 shows the simulation results when the area is moving and Figure 4 shows the simulation results when the area is reduced. They show the same result as in the case when the area expands. From these simulation results we see that, when the area in which members are distributed is fixed, the algorithm using just the ∆ parameter regularly recalculates the center location and actually rebuilds the multicast tree without much improvement in tree quality. We note that rebuilding a tree is a very costly process. The proposed algorithm uses both the ∆ parameter and the measured worst case packet delay and can avoid unnecessary rebuilding of the multicast tree in this case. In the cases where
Fig. 4. Center relocation in a reducing area
the area expands, moves, or is reduced, the proposed algorithm generates multicast trees of reasonable quality. And by properly adjusting the value of C1, which is the multiplier on the previously measured maximum packet delay and determines when the center should be recalculated, we can bound the maximum packet delay within a certain limit of the maximum delay of the OCBT tree.
6 Conclusion
In this paper we proposed new center location algorithms and a new center relocation algorithm for multicast routing algorithms building shared trees, and analyzed their performance through simulation. The proposed center location algorithms try to find the geographic center of the multicast members, considering not only the multicast group members but also a few carefully chosen non-member nodes. We built multicast trees around the centers found by our algorithms and found that these trees had a slightly higher tree cost than the cost-optimal tree, a packet delay similar to that of the delay-optimal tree, and consistently better cost and packet delay than the trees built around the centers found by algorithms proposed by other researchers. Then we considered a dynamic environment where members can join and leave a multicast session and proposed a center relocation algorithm which determines the moment when the new tree should be built around the new center. The algorithm is based on measured packet delays as well as a parameter indicating how much the group membership has changed. Our algorithm not only avoids unnecessary center relocation processes but also prevents the cost and worst packet delay of the tree from deviating too much from the optimal values.
References
1. T.A. Maufer: Deploying IP Multicast in the Enterprise. Prentice Hall. (1997)
2. B. Quinn: IP Multicast Applications: Challenges and Solutions. Internet Draft draft-ietf-mboned-mcast-apps-01.txt. (June 1999)
3. D. Waitzman, C. Partridge, and S. Deering: Distance Vector Multicast Routing Protocol. RFC 1075. (1988)
4. J. Moy: MOSPF: Analysis and Experience. RFC 1585. (1994)
5. B. Cain, Z. Zhang, and A. Ballardie: Core Based Trees Multicast Routing: Protocol Specification. (1998)
6. D. Estrin et al: Protocol Independent Multicast-Sparse Mode: Protocol Specification. RFC 2362. (1998)
7. S. Kumar et al: The MASC/BGMP Architecture for Inter-Domain Multicast Routing. ACM SIGCOMM Conference. (August 1998)
8. D. Thaler and C. Ravishankar: Distributed Center-Location Algorithms: Proposals and Comparisons. IEEE Infocom. (1996)
9. D. Thaler and C. Ravishankar: Distributed Center-Location Algorithms. IEEE Journal on Selected Areas in Communications, vol. 15, no. 3. (1997)
A Multicast FCFS Output Queued Switch without Speedup
Maurizio A. Bonuccelli and Alessandro Urpi
Dipartimento di Informatica, Università di Pisa, Corso Italia 40, 56100 Pisa, Italy. {bonucce,urpi}@di.unipi.it
Abstract. In this paper we propose an architecture for an output queued switch based on the mesh of trees topology. After establishing the equivalence of our proposal with the output queued model, we analyze its features, showing that it merges the positive features of input queued switches (especially their implementability) with all the characteristics typical of output queued ones. Moreover, such an architecture is able to easily and efficiently manage multicast traffic, which is becoming extremely important in networks that integrate traditional communication services.
1 Introduction
The Internet is evolving into an integrated services network with a large number of users exchanging huge amounts of data, making the efficiency of the switching phase increasingly critical ([1,2,3,4]). This is even more evident since large parts of the Internet are circuit switched (SONET, http://www.sonet.com, just to cite one name), and since link speed is rapidly increasing (for example, 40 Gb/s at OC768c or 160 Gb/s at OC3072), making routers/switches a serious bottleneck. At a suitable level of abstraction, a switch is a box connecting n source inputs that want to exchange messages with m destination outputs. The system is synchronous, and time is slotted. Without loss of generality, we can think of messages as fixed size cells that arrive at the system at the beginning of each slot and are processed during the time interval. Since we assume that message destinations are independently chosen by each input without rules, it can happen that several inputs want to communicate with the same output at the same time, causing a potential collision. Such an event should be avoided, because it results in the loss of all the cells involved in it, and in the retransmission of all of them from the originating source. Competing cells need to be stored in a memory and serialized in some way, in order to keep outputs with queued cells busy and to avoid collisions. There has been deep investigation into buffered switches during the last years, which led to fundamental results. One of the first proposed solutions ([5,6]) was to put a shared memory between inputs and outputs in which to store incoming cells and from which to forward a suitably chosen subset of them. While such an architecture
Fig. 1. Different switch architectures: (a) input queued, (b) output queued.
is quite simple and practical for systems operating at less than 20 Gb/s, it has many problems, the most penalizing of which is perhaps the memory access time (n + m accesses should be granted in every cycle). A natural next step was to introduce a queuing system, which led to input queued (Fig. 1(a)) and output queued (Fig. 1(b)) switches. The former is the implementation of the very simple idea that every cell, on arrival at the switch, should be immediately buffered (with a queue for every input); a scheduler then chooses in every cycle a set of non-conflicting cells (namely, cells bound for different outputs) to forward through a nonblocking interconnection network, for example a crossbar. Easy to implement, the architecture was shown to suffer from limited throughput if a FIFO strategy is used in the queues: conflicting cells at the head of the queues may block other cells that would be free to pass through the switch, causing a performance loss known as head of line (HOL) blocking and limiting the throughput of the system to about 58.6% assuming i.i.d. arrivals ([7]). Moving the queues to the output ports results in efficient switches that do not block cells if their destination is idling and are therefore able to provide quality of service ([1,8]). This is not a solution, though, since such an architecture is clearly equivalent to the shared memory one, and its problem is again scalability: each queue must be able to serve up to n requests per time slot. This introduces the need for a speedup of the switch by a factor n + 1, limiting its implementability to scenarios with few input ports and quite slow links. In order to achieve scalability without performance problems, virtual output queued switches were proposed ([9,2]). Such an architecture avoids HOL blocking by having in each input a different queue for each output. It is clear that the scheduling phase is now critical: a set of cells must be selected for transmission at every time slot to maximize performance. It was shown ([10,11,12,13]) that there exist scheduling algorithms able to achieve 100% throughput and also to avoid starvation of cells ([14]).
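The HOL blocking limit mentioned above is easy to reproduce numerically. The following Monte Carlo sketch, our own illustration with arbitrary port and slot counts, simulates a saturated FIFO input queued switch with uniform i.i.d. destinations; the estimate approaches 2 - sqrt(2), about 0.586, as the switch grows.

    import random

    def hol_throughput(n_ports, n_slots, seed=0):
        # Saturated FIFO IQ switch: every input always has a HOL cell with a
        # uniformly random destination; each output serves one contender per
        # slot, the losers stay blocked at the head of their queues.
        rng = random.Random(seed)
        hol = [rng.randrange(n_ports) for _ in range(n_ports)]
        served = 0
        for _ in range(n_slots):
            contenders = {}
            for inp, dst in enumerate(hol):
                contenders.setdefault(dst, []).append(inp)
            for inps in contenders.values():
                winner = rng.choice(inps)
                hol[winner] = rng.randrange(n_ports)  # next cell's destination
                served += 1
        return served / (n_ports * n_slots)

    print(hol_throughput(64, 20000))  # close to 2 - sqrt(2), about 0.586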
However, such algorithms have several drawbacks, which can be classified as follows:
Complexity: an optimal scheduling can be found by solving a matching problem on bipartite graphs ([15]), or by finding a decomposition of stochastic matrices (see [16] for the switching case). The weak point of these approaches is their complexity; the best known matching algorithm runs in O(N^2 log2(N)) time in the worst case ([17]), while the second method has been proved useful to implement, at most, 4 × 4 switches ([18]).
Throughput: with approximate algorithms it is possible to overcome the complexity problem ([2,15,19,13]). Behind this approach there are good simulation results ([15,20]), and a proof that, if traffic reaches a steady state (i.e. there is always a cell that must be sent to every output), the behavior of these algorithms is optimal ([13]). But with bursty traffic ([21]) such stability is never reached ([20]).
Performance guarantees: although some bounds on average queue sizes and average delays in input queued switches have recently been found ([22]), it is not yet clear how to offer quality of service in this class of switches. This justifies research on output queued like architectures, in order to obtain guarantees on the offered service.
Combined input-output queued switches are another interesting architecture, proposed as a trade-off between input and output queuing: there are queues both at the inputs and at the outputs, and a speedup of k is used, in the sense that it is possible to transfer k cells from every queue at the inputs to the desired queue at the outputs in every time slot. In [23,24] it was proved that a speedup of 2 is sufficient and necessary to emulate an output queued switch with a queuing policy that varies within a well known class. Unfortunately a locally optimal scheduling does not guarantee network optimization: in [25] it is shown that input queued switches with high performance scheduling algorithms, efficient in isolation, cause unbounded delay of cells when put in a network. In order to avoid this problem, and to offer quality of service (QoS), it is thus important to have practical solutions resembling output queued switches. Parallel architectures are an encouraging alternative ([26,27,28,29,30]). In this work we take a completely different approach, and propose an output queued switch obtained by parallelizing the "classical" architecture, with very interesting features like high compositional power (i.e. it is easy to create a greater switch by using smaller ones), no speedup required (in a sense that will be clarified later), real implementability, and efficient multicast management. The paper is organized as follows. Section 2 introduces the notation we will use throughout the paper. Section 3 outlines the idea at the very base of our proposal, relating it to a well known architecture. In Sect. 4 the topology of the mesh of trees is presented, together with some of its most important features. In Sect. 5 we present a new architecture for a switch, proving that it is equivalent to an output queued one. Finally, we conclude in Sect. 6 summarizing our work and proposing future directions.
2 Definitions
The following concepts and terms are used throughout the paper:
Number of ports: without loss of generality, the switches are assumed to have n inputs and n outputs.
Names: Ii is the ith input, Oj is the jth output; Qi is the queue at the ith input (output) in an input (output) queued switch, and Li is its size.
Acronyms: IQ means Input Queued, OQ is Output Queued, while VOQ is Virtual Output Queued and CIOQ is Combined Input-Output Queued.
Mimicking: as defined in [24], a switch S' mimics another switch S if, for any arrival pattern and independently of the switch size, the outputs are exactly the same. In [30] the definition is extended by considering a possible queuing delay for the cells, i.e. the outputs of the two switches are the same but with a temporal shift caused by queuing. So, an architecture X mimics an architecture Y with a delay of f(n) if, under the same arrival process, the outputs of X at time t + f(n) are the same as those of Y at time t.
3 A First (Impractical) Step
We begin by presenting a new point of view on an OQ switch. Later in this section the same intuition will be presented from a different perspective. Let us assume we have an n × n^2 crossbar (actually, a full crossbar is not necessary: a structure containing n selectors or a sorting network would be enough, but it is easier to imagine a crossbar) and that each cell is associated with an integer representing its arrival time (a time stamp). Then, we can think of splitting the queue at each output port into n different queues, one for each input, as in Fig. 2. Such an architecture can be thought of as the complement of a VOQ switch, and it should not be hard to see that it can perfectly emulate an OQ switch. In fact, assuming a FIFO strategy, the division into n queues is equivalent to the distribution of the cells in queues sorted by sender. Qi in an OQ switch with a speedup of n would contain all the cells sent to output i, sorted by the time of arrival to the system, with simultaneous arrivals serialized by a specific rule (for example smaller index of sender first, or randomly). In this way, in the n queues at the ith output, there is a double sorting: by arrival time and by sender. Then, if the Si element chooses the oldest cell from all the queues, breaking ties with the same rule that would have been used in the target OQ switch, we perfectly emulate it. The proposed architecture apparently does not require any speedup to achieve an OQ switch behavior emulation. Actually there is a logarithmic factor to be accounted for. In fact, assuming to be able to compare n time stamps in only one cycle is unrealistic (especially for very large n, which is our final target). The best thing we can do is to use a comparing tree, with log2(n) stages of parallel comparisons. Such a logarithmic factor must also be paid, in terms of scheduling iterations in the worst case, in almost every proposed VOQ switch ([2] for PIM, [15] for iSLIP, [31] for iLPF, just to cite some very popular proposals).
Fig. 2. Output queued switch?
In IQ and VOQ switches, it is usually assumed that these computational steps can be done during a time slot (thus limiting the power of scheduling algorithms). In our proposal, we can avoid this delay with a very simple pipelining technique. In fact, it is not necessary to wait for the entire comparison to be over before starting a new one: because of the tree structure of the comparison part, each element (leaf or internal node) can compare two cells and forward the oldest to its parent (the root sends the oldest outside the switch). Thus, as soon as an element finishes its work, it can start again for another round, waiting only if its successor in the tree (its parent) is not ready to receive. It is therefore possible to perform log2(t) comparisons (one per level) in parallel. The latency of a cell in the switch is proportional to the logarithm of the switch size, but the throughput of the system does not suffer from this. The introduction of a pipelined part in a switch is not a new concept: in [32] the scheduler of a VOQ switch is improved with this technique, but the whole system is very different from (and more complicated than) the one presented here. We make another step in our description, in order to obtain a system that is easier to implement. It is possible to come to the same idea also by starting from well known results. In [33], the architecture of an OQ switch called knockout was presented. Each input is connected to one bus, and each output is connected to every bus. We can think of the output modules as single queues, and still have the mentioned speedup problem. We can also think of increasing the number of queues, in order to avoid speedups at the price of a cell loss probability (namely, the probability of dropping conflicting cells). Of course, by putting one queue for every input in each output module, there is no cell loss (at this stage), and the architecture is very similar to the one we sketched. It is also possible to put fewer queues (say L), preceded by a statistical multiplexer that just chooses L cells, if there are more, discarding the others. A buffering scheme that uses several (L) FIFO queues as just one queue with L inputs and one output in a knockout switch makes it act like an OQ switch without requiring any speedup. It was shown that L = 8 queues are sufficient to reduce the loss probability to 10^-6 for an arbitrarily large switch size n ([33]). However, we are interested in avoiding cell loss (and thus in using n queues without a multiplexer).
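As an illustration of the comparing tree introduced above, the sketch below selects the oldest head-of-line cell in log2(n) pairwise rounds. Cells are modeled here, as an assumption, by (time stamp, input index) tuples, so that tuple comparison also breaks ties by the smaller sender index, one of the serialization rules mentioned earlier. In the hardware version each level would work on a different cell concurrently, pipeline-style.

    def select_oldest(heads):
        # heads: one head-of-line cell per input queue, as
        # (timestamp, input_index) tuples, or None for an empty queue
        level = list(heads)                  # the leaves of the comparing tree
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level), 2):
                a = level[i]
                b = level[i + 1] if i + 1 < len(level) else None
                # forward the older cell (smaller tuple) up one level
                if b is None or (a is not None and a < b):
                    nxt.append(a)
                else:
                    nxt.append(b)
            level = nxt
        return level[0]                      # the root emits the oldest cell

    print(select_oldest([(7, 0), (3, 1), (3, 2), None]))  # (3, 1): tie to lower index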
The architecture, though conceptually interesting, has many problems, such as a number of busses that becomes too high when the number of inputs grows and a number of crossing points that is not feasible when there are many outputs. Moreover, there is a spatial speedup to pay: implementing N adjacent memories is not so different from implementing one with a (temporal) speedup of T. In Sect. 5 we will see why the architecture proposed in this paper can be more practical, while potentially having the same problems.
4 Mesh of Trees
We present here a well known topology called the mesh of trees, recalling only what is helpful to our aim. For more details the reader can refer to [34]. An N × N two-dimensional mesh of trees is a structure obtained from an N × N mesh (or two-dimensional array) by adding nodes so as to form a complete binary tree for every row and every column, with the nodes of the mesh as shared leaves (see Fig. 3). There is also an interesting recursive definition of the topology: given four N/2 × N/2 meshes of trees it is possible to combine them into an N × N one just by using the four smaller meshes as elements of a 2 × 2 mesh, combining the 4N roots pairwise, and adding 2N new roots (for a practical example see Fig. 3(b), where the nodes to be added are represented by hexagons). The total number of nodes in an N × N mesh of trees is 4N^2 − 2N. Communications between root nodes of column trees and root nodes of row trees are interesting in several ways. First of all they have a fixed length of 2 log2(N) hops. Moreover, if we label each destination node with the binary representation of a number between 0 and N − 1 (of course a different label for each node), the routing of a message through the mesh of trees is very simple (i.e. the topology has a self-routing property). For example, the nodes at the ith level forward the message to their right son if the ith digit of the label is 0, and to their left son otherwise. The leaves work as interchange points, and they just have to forward the message from the column tree to the row tree they belong to. The communication is then logically divided in two steps: 1. a selection phase, in which the message is directed to the right row, 2. a gathering phase, in which the message is conveyed to the desired root. In terms of hardware complexity it is clear that each node, if only communication is needed, is very simple. The implementability of meshes of trees in single chips has been widely studied (e.g. see [35]). In the next section, we will see how to combine meshes of trees and the switch architecture we proposed in Sect. 3.
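The self-routing rule can be written down directly. The sketch below is our own illustration, assuming most-significant-bit-first ordering of the label digits; it lists the turns a message takes down a column tree toward the row with the given label.

    def self_route(label, levels):
        # At level i, go right if the i-th digit (MSB first) of the binary
        # label is 0, left otherwise, as described above.
        turns = []
        for i in range(levels):
            bit = (label >> (levels - 1 - i)) & 1
            turns.append("right" if bit == 0 else "left")
        return turns

    # In a 4x4 mesh of trees each tree has log2(4) = 2 levels:
    print(self_route(2, 2))  # label 2 = '10' -> ['left', 'right']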
5 The New Architecture
It is possible to produce an n × n OQ switch equivalent to the one we presented in Sect. 3 by means of an n × n mesh of trees. Assume we associate each input with a column tree root, and each output with a row tree root. In this way, a cell from
Fig. 3. Two views of a 4 × 4 mesh of trees: (a) 4 × 4 mesh of trees, (b) composition of smaller meshes.
input i to output j can be seen as a communication between roots, exactly like those presented in Sect. 4. Thus, the selection stage is exactly equivalent to the crossbar-like element in Fig. 2, while the gathering stage, choosing the oldest cell in case of contention, is just a comparing tree. We can think of putting the queues in the leaves: in this way they become very simple elements encapsulating a queue. It is easy to see that the two architectures are equivalent; later in this section, a formal proof of the above mentioned mimicking will be presented. The logarithmic factor in the selection stage can be amortized in the same way as in the comparing phase: up to log2(t) communications can be present in parallel in the column trees, for a total of at most 2 log2(t) communications carried out in parallel. In the remainder of this section, we shall assume infinite size queues, and we use the following additional notation:
Memories: in the mesh of trees, for each output, the memory is divided into queuing memory (in the leaves) and tree memory (up to one cell can be stored in each internal node of the row tree while waiting to exit the switch).
Symbols: extending (in a natural way) the names given in Section 2, Qij is the queue from input i to output j in the mesh of trees, and Lij is its length.
In order to establish that the mesh of trees switch mimics an OQ switch (with a delay, as we will see), it is useful to introduce an intermediate architecture that will be used as a term of comparison. Fig. 4 shows a queued architecture for a single output (referred to in the remainder of this section as DFIFO, short for Delayed FIFO) composed of n queues, from which the K element chooses the oldest cell to forward (breaking ties in the usual way), and log2(n) − 1 elements that just forward from one end to the other (actually the architecture is just the one shown in Fig. 2, with log2(n) − 1 more stages).
Fig. 4. An intermediate architecture: t queues feeding a K element, followed by log2(t) − 1 forwarding elements.
It is now useful to introduce some straightforward lemmas:
Lemma 1. A switch with the DFIFO queuing architecture mimics a FCFS OQ switch with a delay of log2(n) steps.
Proof. As noted in Sect. 3, the architecture that, for every output, selects the oldest cell from t queues exactly behaves like a FIFO OQ switch. Adding log2(n) forwarding units just introduces a delay in the output.
Lemma 2. A mesh of trees switch mimics a switch with the DFIFO queuing architecture with a delay of log2(n) steps.
Proof. The delay is caused by the selection phase done at the column trees: as we assume infinite size queues, it takes a cell exactly log2(n) time slots to arrive at the queues, while in a DFIFO based switch it would arrive in one step. So it is enough to show that, disregarding the column trees in the mesh of trees, the two architectures are totally equivalent. We focus on an output j, in order to prove that cells bound for that output are handled in the same way (once they arrive at the queues) by the two architectures. Since we make no assumption on j, this will hold for all the outputs, establishing the lemma. We prove the claim by induction on the number of queues (given the mesh of trees features, we only deal with powers of 2, with 2 as the bottom of the induction chain):
basic step: for n = 2 (2 inputs/outputs, the minimum case), the two architectures are exactly the same (the K element that chooses between two queues);
induction step: for n = 2m and m > 1 the row tree can be seen as the composition of two trees with m leaves (queues) each (see Fig. 5(a)). By induction, such an architecture is equivalent to the one shown in Fig. 5(b). It is not hard to show the equivalence of this architecture and a DFIFO with log2(m) (or equivalently log2(2m) − 1) forwarding elements. To avoid tedious details, it suffices to note that
– the number of steps that cells must undergo is the same (log2(2m) after the first selection),
– during any time slot, if at level i of the architecture of Fig. 5(b) there is one cell, then there is one cell also at the same level of the DFIFO architecture,
Fig. 5. Inductive step: (a) trees composition, (b) DFIFO composition.
– in any time slot, if at level i of the architecture of Fig. 5(b) there are two cells, then in the DFIFO architecture there is one cell at level i and one cell at level i + 1,
– inversely, at any time slot, if at level i of the DFIFO architecture there is one cell, then either there is at least one cell at the same level of the architecture of Fig. 5(b), or there are two cells at level i − 1 (note that this holds for i > 0 since the root is unique in both systems).
So, at every time slot there is a cell at the output in one architecture if and only if there is a cell at the output in the other. Since outputs are time ordered, they must be exactly the same.
Lemma 3. Consider three switch architectures A, B and C. If A mimics B with a delay of f(n) and B mimics C with a delay of g(n), then A mimics C with a delay of f(n) + g(n).
Proof. By definition of mimicking with a delay (see Sect. 2), under the same arrivals, the output of C at time t is the same as that of B at time t + g(n), which in turn is the same as that of A at time t + g(n) + f(n).
We have thus established the following
Theorem 1. The mesh of trees switch mimics a FCFS OQ switch with a delay of 2 log2(n).
The mesh of trees architecture is particularly suitable for efficiently providing multicast. An already known addressing technique suffices: the destination of every cell is coded by a t-bit string with the ith bit set to 1 if and only if output i is in the set of receivers. So, during the selection stage, the node at level
i in the column tree must only perform two "or" operations when a cell to route arrives: one over the bits in the left half of the word, and one over the rightmost ones. If the first "or" operation is equal to 1, the cell is forwarded to the left child (with the left half word as destination information); the same happens with the second operation, but the cell is forwarded to the right child (note that at least one operation must be positive, but both can produce a 1). The multicast so implemented is a copy multicast, and it is the most efficient way to implement it: the cells arrive at the queues during the same time slot (because of the synchronism of the selection phase), and will depart during the first empty time slot. We believe this feature makes the proposed switch particularly interesting: the IQ architecture in fact has several problems managing multicast traffic, both from a theoretical point of view ([36]) and from a practical one (e.g. the simulation results in [37]), while in the mesh of trees switch the scheduling of multicast traffic practically comes for free. As previously established, the mesh of trees switch can mimic a FCFS OQ switch. Besides, for very large n the OQ switch can be considered purely theoretical because of the needed speedup, while the mesh of trees scales very well. Moreover, the limit on the time slot length is given just by the memory speed: in fact, the whole architecture behaves like a pipeline, and the cycle time of the system is given by the time of the slowest element. If a comparing step is faster than a memory cycle, we can think of grouping several comparing steps into a single system cycle, in order to reduce the delay of the mesh of trees and to improve performance. The mesh of trees architecture seems to suffer from the same spatial speedup problem as the knockout switch: the queues in the leaves, for graphical presentation reasons, are drawn as adjacent, and at first sight they can be imagined as a single big memory with a speedup problem. In a real physical implementation, the memories are not necessarily positioned as in Fig. 3(a). Moreover, we think that, at least theoretically, the study of this kind of architecture can be interesting, because the positive performance offered can overcome the technical problems.
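The per-node splitting of the destination word is simple enough to state as code. The sketch below treats the bitmap as a string for clarity and is our own illustration of the rule described above, not an implementation from the paper.

    def forward_copies(mask):
        # mask: t-bit destination string, bit i = 1 iff output i is a receiver.
        # One selection-stage step: 'or' each half of the word; forward a copy
        # of the cell to a child only if its half contains a 1.
        half = len(mask) // 2
        left, right = mask[:half], mask[half:]
        return (left if "1" in left else None,
                right if "1" in right else None)

    # A cell for outputs 0 and 2 of a 4-output switch carries mask "1010":
    print(forward_copies("1010"))  # ('10', '10'): copies go to both children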
6 Conclusions
In this paper, we considered a parallel architecture for the implementation of the well known output queued switch. The widely studied mesh of trees topology has been used to propose a switch that can mimic (even if with a logarithmic delay) a FCFS output queued switch without the speedup problem. Future work will be to extend the class of queuing policies that can be emulated, in order to achieve quality of service, and to give some bounds on queue sizes and on the dimension of the time stamps needed.
References
[1] M. G. Hluchyj and M. J. Karol. Queueing in high-performance packet switching. IEEE Journal on Selected Areas in Communications, 6(9):1587–1597, Dec. 1988.
[2] T. E. Anderson, S. S. Owicki, J. B. Saxe, and C. P. Thacker. High-speed switch scheduling for local-area networks. ACM Transactions on Computer Systems, 11(4):319–352, Nov. 1993.
[3] N. McKeown, M. Izzard, A. Mekkittikul, W. Ellersick, and M. Horowitz. The tiny tera: a packet core switch. Hot Interconnects IV (Stanford University), pages 161–173, Aug. 1996.
[4] C. Partridge, P. P. Carvey, E. Burgess, I. Castineyra, T. Clarke, L. Graham, M. Hathaway, P. Herman, A. King, S. Kohalmi, T. Ma, J. Mcallen, T. Mendez, W. C. Milliken, R. Pettyjohn, J. Rokosz, J. Seeger, M. Sollins, S. Storch, B. Tober, G. D. Troxel, D. Waitzman, and S. Winterble. A 50 Gb/s IP router. IEEE/ACM Transactions on Networking, 6(3):237–248, Jun. 1998.
[5] J. P. Coudreuse and M. Servel. PRELUDE: an asynchronous time-division switched network. In Proceedings of IEEE International Conference on Communications ’87, pages 769–773, 1987.
[6] N. Endo, T. Kozaki, T. Ohuchi, H. Kuwahara, and S. Gohara. Shared buffer memory switch for an ATM exchange. IEEE Transactions on Communications, 41(1):237–245, Jan. 1993.
[7] M. J. Karol, M. G. Hluchyj, and S. Morgan. Input versus output queueing on a space division switch. IEEE Transactions on Communications, 35:1347–1356, 1987.
[8] H. Zhang. Service disciplines for guaranteed performance service in packet switching networks. Proceedings of the IEEE, 83(10):1374–1396, Oct. 1995.
[9] M. Karol, K. Eng, and H. Obara. Improving the performance of input-queued ATM packet-switching. In Proceedings of IEEE INFOCOM ’92, pages 110–115, 1992.
[10] L. Tassiulas and A. Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12):1936–1948, Dec. 1992.
[11] L. Tassiulas. Linear complexity algorithms for maximum throughput in radio networks and input queued switches. In Proceedings of IEEE INFOCOM ’98, pages 533–539, 1998.
[12] N. McKeown, V. Anantharam, and J. Walrand. Achieving 100% throughput in an input-queued switch. In Proceedings of IEEE INFOCOM ’96, pages 296–302, 1996.
[13] Y. Li, S. Panwar, and H. J. Chao. On the performance of a dual round-robin switch. In Proceedings of IEEE INFOCOM 2001, 2001.
[14] A. Mekkittikul and N. McKeown. A starvation-free algorithm for achieving 100% throughput in an input-queued switch. In Proceedings of the ICCCN, pages 226–231, 1996.
[15] N. McKeown. Scheduling algorithms for input queued cell switches. PhD thesis, University of California at Berkeley, 1995.
[16] C.S. Chang, W.J. Chen, and H.Y. Huang. On service guarantees for input buffered crossbar switches: a capacity decomposition approach by Birkhoff and von Neumann. In IEEE IWQoS’99, pages 79–86, 1999.
[17] R. E. Tarjan. Data structures and network algorithms. Society for Industrial and Applied Mathematics, 1983.
1068
M.A. Bonuccelli and A. Urpi
[18] C.S. Chang, W.J. Chen, and H.Y. Huang. Birkhoff-von neumann input buffered crossbar switches. In Proc. of IEEE Infocom 2000, 2000. [19] N. McKeown. The islip scheduling algorithm for input-queued switches. IEEE/ ACM Transactions on Networking, 7(2):188–201, Apr. 1999. [20] M. W. Goudreau, S. G. Kolliopoulos, and S. B. Rao. Scheduling algorithms for input-queued switches: randomized techniques and experimental evaluation. In Proc. of IEEE Infocom 2000, 2000. [21] W. Leland, M. Taqqu, W. Willinger, and D. Wilson. On the self-similar nature of ethernet traffic (extended version, 1994. [22] E. Leonardi, M. Mellia, F. Neri, and M. Ajmone Marsan. Bounds on average delays and queue size averages and variances in input-queued cell based switches. In Proc. of IEEE Infocom 2001, 2001. [23] S. T. Chuang, A. Goel, N. McKeown, and B. Prabhakar. Matching output queueing with a combined input output queued switch. IEEE Journal on Selected Areas in Communications, 17(6):1030–1039, 1999. (A preliminary version appears in Proceedings of INFOCOM ’99). [24] B. Prabhakar and N. McKeown. On the speedup required for conbined input and output queued switching. Automatica, 35(12):1909–1920, Dec. 1999. [25] M. Andrews and L. Zhang. Achieving stability in networks of input-queued switches. In Proc. of IEEE Infocom 2001, 2001. [26] F. M. Chiussi, D. A. Khotimsky, and S. Krihsnan. Generalized inverse multiplexing of switched atm connections. In Proc. of IEEE Globecom ’98, 1998. [27] F. M. Chiussi, D. A. Khotimsky, and S. Krihsnan. Advanced frame recovery in switched connection inverse multiplexing for atm. In Proc. of IEEE International Conference on ATM ’99, 1999. [28] D. A. Khotimsky and S. Krihsnan. Stability analysis of a parallel packet switch with bufferless input demultiplexor. In Proc. of IEEE ICC 2001, 2001. [29] S. Iyer, A. Awadallah, and N. McKeown. Analysis of a packet switch with memories running slower than the line-rate. In Proceedings of IEEE INFOCOM 2000, 2000. [30] S. Iyer and N. McKeown. Making parallel packet switches practical. In Proceedings of IEEE INFOCOM 2001, 2001. [31] A. Mekkitikul and N. McKeown. A practical scheduling algorithm to achieve 100% throughput in input-queued switches. In Proceedings of IEEE INFOCOM ’98, pages 792–799, 1998. [32] A. Mekkittikul. Scheduling non-uniform traffic in high speed packet switches and routers. PhD thesis, Stanford University, 1998. [33] Y. S. Yeh, M. G. Hluchyj, and A. S. Acampora. The knockout switch: A simple modular architecture for high performance switching. IEEE Journal on Selected Areas in Communications, SAC-5:1274–1283, Oct. 1987. [34] F. T. Leighton. Introduction to parallel algorithms and architectures: arrays, trees, h ypercubes. Morgan Kaufmann, 1992. [35] F. P. Preparata and J. E. Vuillemin. Area-time optimal vlsi networks for matrix multiplication. 11(2):77–80, 1980. [36] Z. Liu and R. Righter. Scheduling multicast input-queued switches. Journal of scheduling, 2(3):99–114, May 1999. [37] M. Ajmone Marsan, A. Bianco, P. Giaccone, E. Leonardi, and F. Neri. On the throughput of input-queued cell-based switches with multicast traffic. In Proc. of IEEE Infocom 2001, 2001.
Fault-Tolerant Support for Reliable Multicast in Mobile Wireless Systems
Giuseppe Anastasi¹, Alberto Bartoli², and Flaminia L. Luccio³
¹ Dip. di Ingegneria dell'Informazione, Università di Pisa, Italy ([email protected])
² Dip. di Elettrotecnica, Elettronica e Informatica, Università di Trieste, Italy ([email protected])
³ Dip. di Scienze Matematiche, Università di Trieste, Italy ([email protected])
Abstract. In this paper we present a protocol for reliable multicast within a group of mobile hosts that communicate with a wired infrastructure by means of wireless technology. The protocol tolerates failures in the wired infrastructure, i.e., crashes of stationary hosts and partitions of wired links. The wireless coverage may be incomplete, and message losses may occur even within cells, due, for example, to physical obstructions or to the high error rate of the wireless technology. Movements of mobile hosts are accommodated efficiently because they do not trigger any interaction among stationary hosts (i.e., there is no notion of hand-off). We evaluate by simulation the impact of fault-tolerance on the performance of the protocol in normal operating conditions, i.e., in the absence of failures. The results obtained show that the increase in the average latency experienced by messages is limited to a few milliseconds.
1 Introduction
Computing architectures based on portable computers and wireless networking are becoming a reality. Users may be equipped with hand-held computing devices and roam around freely while maintaining connectivity with a wired computing infrastructure through a number of wireless cells. Mobile wireless systems typically require special solutions, for a number of reasons. Traditional network protocols implicitly assume that hosts do not change their physical location over time. Mobile devices have severe resource constraints in terms of energy, processing, and storage. Wireless networks are characterized by limited bandwidth and high error rates. Furthermore, mobility introduces new issues at the algorithmic level. For example, a mobile host may miss messages simply because of its movements, even with perfectly reliable communication links and computers that never crash [1]. All the above reasons imply that specialized protocols are required to extend to mobile hosts functionality that is commonplace for stationary ones. In this paper we present a protocol for reliable and totally-ordered multicast within a group of mobile hosts. By this we mean that: (i) each mobile host delivers all multicasts, without duplicates; and (ii) any two mobile hosts that deliver two multicasts deliver them in the same order.
Reliable and totally-ordered multicast is an important building block for applications composed of remote processes that have to cooperate tightly [13]. This communication primitive has proven its power in the context of traditional, i.e., static and wired, distributed computing. Our proposal makes this primitive available on mobile wireless systems. Moreover, we support this primitive in spite of (a certain number of) crashes of stationary hosts and partitions of wired links. The fault-tolerance properties of our protocol may greatly extend the scope of potential applications of mobile computing, including emergency management, plant control, traffic monitoring, stock market exchange, and on-site data collection. Fault-tolerant support for mobile wireless systems is, in our opinion, an important topic, yet it has not received much attention from the research community so far. We model a mobile wireless system as follows (see Fig. 1). There is a set of stationary hosts (SHs) connected by a wired network and a set of mobile hosts (MHs) that may move and communicate through wireless links. Some SHs, called mobile support stations (MSSs), may also communicate through wireless links. Each MSS defines a spatially limited cell covered by a wireless link. A MSS may broadcast messages to all MHs in its cell and send messages to a specific MH in its cell, whereas a MH may only send messages to the MSS of the cell where it happens to be located. Notice that we do not assume any network support for routing messages to a specific MH.
Fig. 1. Example system with five MHs and seven SHs.
An important feature of our model is the incomplete coverage of wireless cells, i.e., MHs may roam in areas that are not covered by any cell. A MH may move across adjacent cells but it may also "disappear" within the uncovered area and enter any other cell, perhaps after a "long" time. Movements occur without prior negotiation. The resulting scenario is quite general because it accommodates contemporary wireless LANs, infra-red networks requiring line-of-sight connectivity, disconnected modes of operation, long-range movements, and picocellular wireless networks in which the cell size is of the order of a few meters, such as a room in a building. The message pattern of our protocol follows common approaches for reliable multicasting among MHs in mobile wireless systems [1,3,8,14,16]. A MH wishing to issue a multicast sends a request to the MSS of the cell where it happens to be located. The MSS forwards the message to a SH that processes this request, includes the payload, and forwards the payload to all MSSs. MSSs broadcast the payload in the respective cell. More details will be given later.
Our work is based on a design philosophy aimed at improving the reliability of final applications, in particular with respect to failures:
1. The state shared among SHs should not be updated upon each movement of MHs. Otherwise, performance could be penalized and failure handling would be more difficult.
2. One should avoid assuming that wireless coverage is complete. Otherwise, even a single physical obstruction, particularly unfortunate area, or MSS malfunction could compromise correctness.
3. Availability of MSSs should affect only the availability of applications, not their correctness. In particular, a MSS failure should merely shrink the covered area, without affecting correctness.
4. One should avoid making hypotheses about users' movements. Otherwise, even a single inopportune movement could compromise correctness.
5. Critical state information should not be kept on MSSs, but on "ordinary" SHs. This choice allows using systematic and established techniques for improving the availability of these hosts, such as replication.
6. MSSs should be freely added or removed without stopping the system or compromising correctness. MSS addition may be necessary for upgrading or coverage enhancement, whereas MSS removal may be necessary for maintenance or upon failure.
Notice that the above points apply to mobile computing in general, not only to the specific problem of reliable multicast. We have analyzed the performance of the proposed protocol by simulation. In particular, we have focused on the impact of fault-tolerance on the performance of the protocol in normal operating conditions, i.e., in the absence of failures. We have found that the proposal is indeed practical, as the latency increase due to fault-tolerance is of just a few milliseconds.
2 System Model
Each wired link and each wireless cell provides FIFO-ordered communication without duplicates. Messages may be lost. Message loss in a wired link occurs as a result of network partitions. Such partitions may recover. Message loss in a wireless cell may occur because of physical obstructions or because of the intrinsic features of wireless technology, e.g., its high error rate. Hosts communicate solely via messages. Of course, while a MH is out of coverage no communication with it is possible. Similarly, SHs partitioned from each other cannot communicate among themselves. SHs may crash and a crashed SH may recover. MHs do not crash (see also below). The system is asynchronous in the sense that neither message delays nor computing speeds can be bounded with certainty. This characterization is general and realistic, as it allows abstracting away such features as variable loads imposed by users and unknown scheduling strategies on hosts and communication links. Notice that a process cannot determine with certainty whether a remote process that appears to be unresponsive has crashed or happens to be very slow.
The protocol can be easily made resilient also to crashes of MHs, the only problem being that state information about a crashed MH would never be discarded by SHs. A practical implementation might allow SHs to unilaterally garbage-collect state information not accessed for a very long time. Although a MH deemed crashed might show up again, there is no way to exclude such a possibility in an asynchronous system, unless one is willing to wait for an infinite time before deciding whether that MH actually crashed. Another practical issue is that a crashed MH should be able to participate again in the application after its recovery. This feature may be achieved by: (i) supporting a dynamic group of MHs that may exchange multicasts; and (ii) requiring that multicasts be delivered only by current members of the group. The protocol proposed here assumes a static set of MHs but it may be extended towards supporting (i) and (ii) quite simply [6,9].
3 Related Work
To the best of our knowledge, the only work with a scope similar to ours is [2]. This work introduces resilience to failures of MSSs in a non fault-tolerant reliable multicast protocol proposed by Acharya and Badrinath (see below, [1]). However, the system model is much more restrictive than ours because: (i) it assumes that a process can detect with certainty whether a remote process is active or crashed (fail-stop failures); and (ii) communication is reliable, both in the wired network and in wireless cells (thereby excluding, for example, uncovered regions, physical obstructions within cells, and partitions of wired links). The protocol in [2] adds fault-tolerance to the one in [1] by associating each MH with a set of MSSs, denoted S(MH), and by replicating state information about that MH at each member of S(MH). Whenever MH sends a message or there is a message addressed to it, members of S(MH) have to execute a replica control protocol, and this protocol must be able to tolerate host failures. The cited work mentions two alternatives for such a protocol. The more efficient one requires additional mechanisms (network flush or rollback) that are not detailed. Furthermore, no performance analysis is provided and the complex interaction among (i) replica control protocol, (ii) MSS recovery, and (iii) hand-off is only outlined.¹ Our protocol is fully detailed and, in our opinion, is much simpler to understand and implement.
The protocol by Acharya and Badrinath, hereinafter the AB-protocol, was the first multicast protocol ensuring reliable (FIFO) delivery in the context of mobile computing and has been highly influential in the design of later protocols [3,8,14,16]. Although none of these protocols is fault-tolerant, it is useful to discuss them briefly to emphasize the differences with our proposal. Each MSS maintains, for each MH in its cell, an array of sequence numbers describing the multicasts already delivered by that MH. The MSS uses this array to forward pending messages in sequence and without duplicates. If the MH switches cell, the array is moved to the new MSS by means of a proper hand-off procedure. Therefore: (i) the state shared among SHs is updated upon each movement; (ii) MSSs maintain critical state information (i.e., each MSS remembers the sequence numbers of multicasts delivered by each MH in its cell); and (iii) the crash of a MSS affects correctness of the application (i.e., the above sequence numbers are lost for each MH in the cell). These features explain why the fault-tolerant extension in [2] requires a complex interaction among several sub-protocols. The AB-protocol and the protocols derived from it assume reliable communication in the wired network and in the wireless network, much like [2]. The AB-protocol provides reliable delivery without requiring routing support for MHs, e.g., Mobile IP, much like our proposal. Multicast protocols that rely on Mobile IP are generally targeted at different application domains and provide unreliable, best-effort, unsequenced delivery [12,15]. In particular, no messages are delivered during a cell switch and messages possibly lost will not be recovered in the new cell. With respect to the use of Mobile IP, note also that: (i) it would not solve the problem of recovering lost messages; (ii) it would make it more difficult to exploit the broadcast capabilities of the wireless medium when many MHs are in the same cell; (iii) it would generate traffic in the wired network even while no new multicasts are generated, for tracking the location of each MH.
The protocol proposed here is an extension of the protocol in [9], which was not fault-tolerant and assumed reliable communication in the wired network. As an aside, performance analysis by simulation showed that the proposal in [9] outperforms the AB-protocol in terms of latency, scalability, bandwidth usage efficiency, and quickness in managing cell switches of users [4]. The proposal in [9], as well as the one in this paper, borrows a crucial idea from the implementation of reliable multicasts in "static and wired" distributed systems: the use of a centralized sequencer for totally ordering multicasts and for storing multicasts that have not been acknowledged yet [13]. Note, however, that here we refer to a completely different system model: mobile hosts, wireless communication, incomplete spatial coverage.
¹ The paper claims that the composition of S(MH) may change dynamically, but it appears that this issue has been oversimplified, in particular with respect to the interaction just mentioned.
4 Overview of the Protocol
We begin by briefly outlining the non fault-tolerant version of the protocol. Messages have a field of enumerated type, called tag and indicated in SmallCaps, that indicates the purpose of the message. We say that a host H receives a message m when m arrives at the protocol layer at H, and that H delivers m when the protocol forwards m up to the application. A MH wishing to issue a multicast sends the payload to the local MSS with a New message. The MH retransmits this message until receiving an acknowledgment (possibly from a different MSS, if the sending MH moves during the handshake). The MSS forwards the message to a designated SH acting as coordinator. A New message carries a sequence number locally generated by the sending MH, which enables the coordinator to process New messages in sequence and to discard duplicates. The coordinator constructs a Normal message containing the payload of the New message and a locally generated sequence number. The resulting message is then multicast to MSSs, which broadcast it in the respective cell. Each MH uses sequence numbers of Normal messages to deliver these messages in sequence without duplicates (i.e., in total order) and to detect missing messages.
In the latter case, the MH sends a retransmission request to the local MSS. This request is tagged Nack and specifies an interval of missing sequence numbers. When a MSS receives a Nack, it relays the missing Normal messages to the sending MH. The MSS obtains such messages from a local cache or, in case of a miss, from the coordinator. The MSS requests missing messages from the coordinator with a FetchReq specifying an interval of sequence numbers. The coordinator responds with a FetchRep containing the required messages. A Nack from a MH implicitly acknowledges delivery of previous multicasts. MSSs extract this information and forward it to the coordinator with StabInfo messages. Note that: (i) MSSs do not store critical state information: such information is kept by the coordinator and merely cached by MSSs for efficiency; (ii) each MSS reacts to cell switching without interacting with other MSSs. The fault-tolerant extension proposed here is obtained as follows.
1. A MH no longer assumes that a message arrived at a MSS will eventually arrive at the coordinator — the MSS might crash, or a partition might occur. Instead, a MH keeps on retransmitting a New message until receiving the matching Normal message (a MSS that receives a New message does not respond to the sending MH with an acknowledgment, as this acknowledgment would be useless).
2. The role of the single coordinator is played by a set of SHs, called coordinators. This set appears to MSSs as a single "coordinator service". The service is available in spite of (a certain number of) failures of coordinators and connecting links. In particular, availability of the service requires a majority of coordinators. Coordinators interact among themselves through group communication (GC) [10]. GC may be thought of as a software layer exporting to applications a membership service and a communication service for reliable multicasting within a group of processes. These two services are tightly integrated so as to simplify the programming of distributed algorithms in the face of host crashes and recoveries, network partitions and mergers. More details will be given in section 4.1.
3. A MSS sends its messages to a designated coordinator, say C. The MSS might not receive a response for several reasons, including: (i) C is not able to interact with a majority of coordinators (section 4.1); (ii) the message from the MSS to C is lost; (iii) the response from C to the MSS is lost. Should a response not arrive within a specified timeout, the MSS will send the next request to another coordinator. The request not yet answered will be retransmitted by the originating MH, as pointed out above (point 1). The policy for associating coordinators with MSSs is irrelevant to this paper. Of course, timeouts expiring too soon must not affect correctness. To this end, the coordinator service maintains internally information sufficient to detect duplicate requests (section 4.1).
Space constraints preclude a full description of the protocol, which can be found in pseudo-code form in the companion report [5]. We will discuss in the next section only the implementation of the coordinator service.
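Before turning to the coordinator service, the following sketch makes the MH-side rules concrete, combining the delivery logic of the base protocol with point 1 above. It is our own illustration, not the authors' pseudo-code (which is in [5]); in particular, we assume here that a Normal message carries the identity of the originating MH and the sequence number of the New it answers, so that the sender can match them; the paper does not spell out the message format.

class MobileHost:
    """MH-side sketch: in-order delivery of Normal messages plus
    fault-tolerant retransmission of New messages (point 1 above)."""

    def __init__(self, mid, send_to_mss, deliver):
        self.mid = mid             # this MH's identifier
        self.next_seq = 1          # next Normal sequence number to deliver
        self.pending = {}          # buffered out-of-order Normal messages
        self.new_num = 0           # locally generated New sequence number
        self.unacked_new = {}      # New messages not yet answered by a Normal
        self.send_to_mss = send_to_mss   # callback: message -> local MSS
        self.deliver = deliver           # callback: payload -> application

    def multicast(self, payload):
        self.new_num += 1
        self.unacked_new[self.new_num] = payload
        self.send_to_mss(("New", self.mid, self.new_num, payload))

    def on_retransmit_timeout(self):
        # Point 1: keep retransmitting every unanswered New; a MSS crash or
        # a partition must not cause the multicast to be silently dropped.
        for num, payload in self.unacked_new.items():
            self.send_to_mss(("New", self.mid, num, payload))

    def on_normal(self, cseq, origin_mid, origin_new_num, payload):
        if origin_mid == self.mid:
            self.unacked_new.pop(origin_new_num, None)  # matching Normal arrived
        if cseq < self.next_seq or cseq in self.pending:
            return                                      # duplicate: discard
        self.pending[cseq] = payload
        if cseq > self.next_seq:
            # Hole detected: ask the local MSS for the missing interval; the
            # Nack also implicitly acknowledges all multicasts below next_seq.
            self.send_to_mss(("Nack", self.next_seq, cseq - 1))
        while self.next_seq in self.pending:            # deliver in total order
            self.deliver(self.pending.pop(self.next_seq))
            self.next_seq += 1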
4.1 Coordinator Service
Interaction among coordinators occurs through group communication (GC) [10]. A detailed description of GC is beyond the scope of this paper, and we provide below only the necessary background.
GC is implemented by a dedicated software layer at each coordinator. Coordinators form a group (this notion of group has nothing to do with the group of MHs). The GC layer provides consistent information about the set of coordinators that appear to be currently reachable. This information takes the form of views. The GC layer determines a new view as a result of crashes, recoveries, network partitions and mergers. New views are communicated to coordinators automatically, through special messages called view changes. When a coordinator C receives a view change carrying the new view V, C is informed that it can communicate with the coordinators listed in V. To proceed further, we need a few simple definitions: (i) C installed a view V means that C indeed received the corresponding view change; (ii) two views V, W are consecutive means that a coordinator installs V and then installs W; (iii) the view that is current at C is the one specified by the last view change received by C; (iv) C delivers message m in view V means that C delivers m when the view that is current at C is V. The key guarantee of GC is that view changes are globally ordered with respect to the receiving of multicasts: given two consecutive views V and W, any two coordinators that install both views must have received the same set of multicast messages in view V. For example, consider a coordinator C1 that installed V, and suppose W is installed as a result of the crash of C1. If C1 crashed while performing a multicast m, then (i) all coordinators that install V and W receive m (and do so before installing W); or (ii) none of them receives m. Clearly, this property is very powerful for reasoning about fault-tolerant algorithms. The GC layer supports partitionable membership, i.e., it allows multiple views of the group to exist concurrently, to model network partitions. Moreover, the GC layer supports uniform multicast: if any member of view V delivers multicast m, then each member of V delivers m or crashes. We present the algorithm under the hypothesis that a view including a majority of coordinators always exists. The algorithm may be extended to accommodate the more general case in which the majority view temporarily disappears.
The variables maintained by each coordinator include the following: boss, the identifier of a designated member of a majority view; cseq, the sequence number of the last Normal message sent; normal-buffer, a set containing all Normal messages that might not be stable, i.e., that are not known to have been delivered by each MH; finally, member-table, a table with one element for each MH. Each element is a record whose fields are: mid, which identifies the MH; new-num, the sequence number (generated by the MH) of the last New message received from mid; cseq-mid, the cseq assigned to the last Normal message generated upon processing a New sent by mid; and delivered, the highest sequence number of a Normal message that has certainly been delivered by mid.
Each coordinator C executes a loop in which at each iteration it receives either a message or a view change. If the current view is not a majority, C skips to the next iteration: C ignores all messages and waits for a sufficient number of failures to recover. Otherwise, C acts as follows. Receiving of a message provokes the transmission of an Ack to the sending MSS (to prevent expiration of the time-out at the MSS). In addition:
– A New message is forwarded to the boss.
When the boss receives one such message, it multicasts the message within the majority view. Let m denote a message multicast by the boss and let mid denote the MH that originated the associated New message.
Upon receiving m, each coordinator C performs the following actions: (i) extract the entry, say e-mid, of member-table associated with mid; (ii) determine whether m is a duplicate and, in this case, discard m without any further processing (this check is done by comparing field new-num of e-mid to the sequence number in m, selected by mid itself); (iii) update field new-num of e-mid; (iv) increase cseq (the system-wide sequence number); (v) construct a Normal message mN including, in particular, the payload specified by mid and cseq; (vi) store a copy of mN in normal-buffer; finally, (vii) the boss multicasts mN to MSSs. In short, coordinators proceed in lockstep and, in particular, maintain identical copies of their variables.
– A FetchReq message is processed locally (such a message is sent by a MSS whose local cache does not contain a Normal message requested by a MH). The FetchRep reply is constructed based on the normal-buffer.
– A StabInfo message is multicast within the view (such a message describes the Normal messages certainly delivered by a specified MH). Upon receiving this multicast, each coordinator records the related information in the pertinent entry of member-table and clears from normal-buffer the messages that have been delivered by every group member.
Network partitions, mergers, host crashes and recoveries are handled simply. GC reports them automatically to coordinators in the form of a new view. Upon receiving a view change, the boss sends a copy of its variables to each coordinator that was not in the previous majority view. Then, the coordinator service starts processing messages from MSSs again, as all coordinators in the new majority view have identical variables. If the boss has left the majority view (e.g., it crashed), then a new boss is elected by applying a deterministic function to the composition of the view (this function must select a member of the previous majority view). The new boss will send its variables to the coordinators that have possibly entered the majority view. Notice that all coordinators receive the same view, hence they can easily coordinate their reaction to the view change based solely on the view composition, i.e., without dedicated message-exchange rounds. It may be useful to observe the following: (1) the boss might crash during steps (i)-(vii) above, i.e., before actually multicasting the Normal message to MSSs. In this case, MHs will eventually detect a hole in the stream of sequence numbers and request retransmission. (2) When the surviving coordinators receive the view notifying them of the crash of the boss, they will certainly have the same variables: GC ensures that prior to the view change they have delivered the same set of multicasts from the boss.
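A compact rendering of steps (i)-(vii) may help. The sketch below is our own reading of the description above, under the simplifying assumption that GC delivery is abstracted into a single callback; the identifiers (member_table, normal_buffer, and so on) mirror the variables named in the text, not any published code.

class Coordinator:
    """Coordinator state, kept in lockstep by every member of the majority
    view (illustrative sketch; the authors' pseudo-code is in [5])."""

    def __init__(self, is_boss, multicast_to_msss):
        self.cseq = 0                # system-wide Normal sequence number
        self.normal_buffer = {}      # Normal messages possibly not yet stable
        self.member_table = {}       # mid -> record for that MH
        self.is_boss = is_boss
        self.multicast_to_msss = multicast_to_msss

    def on_boss_multicast(self, mid, new_num, payload):
        """Steps (i)-(vii): run by every coordinator when the multicast sent
        by the boss for a New message is delivered by GC."""
        e = self.member_table.setdefault(                        # (i)
            mid, {"new_num": 0, "cseq_mid": 0, "delivered": 0})
        if new_num <= e["new_num"]:                              # (ii) duplicate
            return
        e["new_num"] = new_num                                   # (iii)
        self.cseq += 1                                           # (iv)
        m_normal = ("Normal", self.cseq, mid, new_num, payload)  # (v)
        self.normal_buffer[self.cseq] = m_normal                 # (vi)
        e["cseq_mid"] = self.cseq
        if self.is_boss:
            self.multicast_to_msss(m_normal)                     # (vii)

    def on_stab_info(self, mid, delivered_up_to):
        """StabInfo handling: record the MH's progress, then garbage-collect
        Normal messages delivered by every MH (static set assumed)."""
        self.member_table.setdefault(
            mid, {"new_num": 0, "cseq_mid": 0, "delivered": 0}
        )["delivered"] = delivered_up_to
        stable = min(r["delivered"] for r in self.member_table.values())
        for s in [k for k in self.normal_buffer if k <= stable]:
            del self.normal_buffer[s]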
5 Simulation
Fault-tolerance obviously comes at a cost. A protocol designed to be fault-tolerant is likely to exhibit, even in the absence of failures, performance worse than that of a protocol that does not tolerate failures. In this section we evaluate such costs by simulation. This analysis enables us to capture the inherent cost of fault-tolerance for our proposal. Accordingly, our simulations assume reliable communication in the wired network and SHs that do not crash. The emphasis here is on demonstrating that one can tolerate failures without paying excessive costs in normal conditions, i.e., in the absence of failures.
We set the numerous parameters that characterize the protocol to values similar to those in [6], which provides a simulation analysis of the non fault-tolerant version. There are 40 cells, i.e., 40 MSSs. A MH remains in a cell for a random time interval. The length of this interval is exponentially distributed and its average Tcell is set to 10 seconds for each MH. Wireless coverage is complete and the message loss rate in the wireless network is 0.1%. There are 100 MHs: all of them receive multicasts (Nr = 100) whereas only 10 of them may generate 512-byte messages (Ns = 10). Message generation is a Poisson process, i.e., times between the generation of successive messages are exponentially distributed random variables. Each sender generates, on average, 8 messages/sec, corresponding to a bit rate of approximately 33 Kbps. We consider a wireless bandwidth of 1 Mbps, in line with the bandwidth available in current wireless LANs [7], and a wired bandwidth of 10 Mbps. Therefore, message transmission times in the wired network are one order of magnitude lower. Propagation delays, i.e., the times messages take to travel from one node to another, are as follows. Wireless propagation delays are negligible, as cells are supposed to be very small (e.g., ten meters). Wired propagation delays are assumed to be exponentially distributed. The average for messages from MSSs to coordinators or back is 1.5 msec, whereas that for messages from coordinators to the boss is 2 msec (these point-to-point messages are sent through group communication). Uniform multicast amongst coordinators is modeled as an additional exponential delay with average (Nc + 0.4) msec [11]. We also consider processing time, i.e., the time necessary to process a message. We used the same values reported in [6], as there are no substantial differences from the non fault-tolerant version.
The main metric we consider is the average message latency, i.e., the average time elapsed from the instant at which a message is generated at a sending MH to the instant at which the same message is delivered by a destination MH. We analyzed latency for varying numbers of coordinators, sending MHs, mobility of MHs and message loss rates. Curves labelled Nc = 1 refer to the non fault-tolerant version, while the other curves relate to the fault-tolerant protocol proposed in this paper. For the sake of space we shall focus on the differences between the two versions. The reader can refer to [6] for details about the performance of the non fault-tolerant version. Figure 2-left shows the average latency as a function of the number of receivers for different numbers of coordinators (Nc). Note that the fault-tolerant version maintains the very good scalability properties of the non fault-tolerant one. The fault-tolerant protocol exhibits higher average latency as a result of the following factors:
1. Messages experience an additional step with respect to the non fault-tolerant version: from the coordinator associated with the MH that originated the message to the boss.
2. When the boss receives a New message it does not multicast the related Normal message immediately, but sends a uniform multicast within the view and waits for delivery of this multicast.
3. New messages are implicitly acknowledged by the related Normal message from the boss, while in the non fault-tolerant version they are explicitly acknowledged by the local MSS.
It follows that MHs have to use longer time-outs in order to minimize useless retransmissions, but this delays retransmission of messages that are actually lost. The companion report [5] analyzes in more detail the contribution of each factor.
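As a rough illustration of how these delay components combine, the wired-path signaling delay of one multicast can be sampled from the exponential distributions quoted above. This is a back-of-the-envelope model of ours, not the authors' simulator, and it deliberately ignores transmission and processing times:

import random

def wired_path_delay_ms(n_coordinators, fault_tolerant=True):
    """One sample of the wired signaling delay of a multicast, using the
    average values quoted in the text (all delays are exponential)."""
    delay = random.expovariate(1 / 1.5)            # MSS -> coordinator
    if fault_tolerant:
        delay += random.expovariate(1 / 2.0)       # coordinator -> boss
        # uniform multicast within the view, average (Nc + 0.4) msec
        delay += random.expovariate(1 / (n_coordinators + 0.4))
    delay += random.expovariate(1 / 1.5)           # coordinator/boss -> MSSs
    return delay

# The average extra wired cost of fault-tolerance for Nc = 2 is then roughly
# 2 + (2 + 0.4) = 4.4 msec, consistent with the measured increase of "a few
# milliseconds" (factor 3 above adds further delay on message losses).
runs = [wired_path_delay_ms(2) for _ in range(100_000)]
print(sum(runs) / len(runs))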
Figure 2-right shows the average latency as a function of the number of senders, i.e., of the aggregate message rate. Although the average latency increases with the number of senders, it is important to observe that the curves for different values of Nc are approximately parallel. In other words, the fault-tolerant version maintains approximately the same scalability properties as the non fault-tolerant one. The difference between the two protocols is due, obviously, to points 1, 2 and 3 above.
Fig. 2. Average latency as a function of the number of receivers (left) and of the number of senders (right) for different numbers of coordinators.
Fig. 3. Average latency as a function of mobility (left) and wireless network unreliability (right) for different numbers of coordinators.
Figure 3 shows the influence of MH mobility (left) and wireless network unreliability (right). Mobility is expressed in terms of the number of cell switches per second experienced by each MH, which is the inverse of the Tcell parameter, i.e., the average cell permanence time.
Unreliability of wireless links is expressed as the percentage of lost messages. Both plots exhibit a similar behavior. In particular, the difference between the non fault-tolerant protocol and the fault-tolerant protocol (for example, with Nc = 2) increases as either mobility or the message loss rate grows. This similarity can be easily understood if one considers that mobility of MHs may cause message losses. The increase in the distance between the curves related to Nc = 1 and Nc > 1, respectively, is a consequence of point 3 above: when the fraction of New messages that get lost increases, the delay for recovering them increases accordingly. To summarize, the latency cost induced by fault-tolerance in the absence of failures is in the order of a few msec. Components 1 and 2 of the additional delay cannot be reduced (for a fixed wired network technology and operating environment). On the other hand, component 3 could be partially lowered by using at MHs: (i) a transmission scheme more sophisticated than the simple stop-and-wait approach (e.g., a window-based scheme); and/or (ii) a shorter time-out for New messages. However, the former would induce a higher computational load at MHs, while the latter would cause useless retransmissions and, thus, wastage of wireless bandwidth as well as of computing and energy resources at the MH. Based on the above results, we believe that these solutions would lead to minor performance improvements that would not compensate for their drawbacks. Nevertheless, in a different scenario, e.g., when MSSs are distributed over a geographical area rather than a local area, the use of a window-based transmission scheme could be appealing.
6 Concluding Remarks
We have presented a protocol offering fault-tolerant support for (totally ordered) reliable multicast within a group of MHs. The protocol tolerates crashes of SHs and partitions of wired links. To our knowledge, no other protocol provides these functionalities. Two key features of our protocol are: (i) movements of MHs do not require any interaction among SHs (i.e., no hand-off is required); and (ii) MSSs do not store any critical state information. Both features are crucial for coping with failures simply and efficiently. MSSs merely act as forwarding switches and as caches of state information whose primary copy is kept elsewhere, i.e., at coordinators. Replication through group communication is the main tool for enhancing the availability of this information and for preserving its consistency in spite of failures. Simulation results show that the protocol is indeed practical, in that the latency cost induced by fault-tolerance in normal operating conditions, i.e., in the absence of failures, is limited to a few milliseconds. Moreover, the protocol exhibits very good scalability properties.
References
1. A. Acharya and B. R. Badrinath. A framework for delivering multicast messages in networks with mobile hosts. ACM/Baltzer Journal of Mobile Networks and Applications, 1(2):199–219, 1996.
2. S. Alagar, R. Rajagoplan, and S. Venkatesan. Tolerating mobile support station failures. In Proc. of the First Conference on Fault Tolerant Systems, pages 225–231, Madras, India, December 1995. Also available as a technical report of the University of Texas at Dallas.
3. S. Alagar and S. Venkatesan. Causal ordering in distributed mobile systems. IEEE Transactions on Computers, 46(3):353–361, March 1997.
4. G. Anastasi and A. Bartoli. On the structuring of reliable multicast protocols for mobile wireless computing. Technical Report DII/00-1, Università di Pisa and Università di Trieste, January 2000. Submitted for publication. Available at http://www.iet.unipi.it/~anastasi/papers/tr00-1.pdf.
5. G. Anastasi, A. Bartoli, and F. L. Luccio. Fault-tolerant support for reliable multicast in mobile wireless systems: Design and evaluation. Technical Report DII/02-1, Università di Pisa and Università di Trieste, March 2002. Available at http://www.iet.unipi.it/~anastasi/papers/tr021.ps.
6. G. Anastasi, A. Bartoli, and F. Spadoni. A reliable multicast protocol for distributed mobile systems: Design and evaluation. IEEE Transactions on Parallel and Distributed Systems, 12(10):1009–1022, October 2001.
7. G. Anastasi and L. Lenzini. QoS provided by the IEEE 802.11 wireless LAN to advanced data applications: a simulation analysis. ACM/Baltzer Journal on Wireless Networks, 6(2):99–108, 2000.
8. V. Aravamudhan, K. Ratnam, and S. Rangajaran. An efficient multicast protocol for PCS networks. ACM/Baltzer Journal of Mobile Networks and Applications, 2(4):333–344, 1997.
9. A. Bartoli. Group-based multicast and dynamic membership in wireless networks with incomplete spatial coverage. ACM/Baltzer Journal on Mobile Networks and Applications, 3(2):175–188, 1998.
10. K. Birman. The process group approach to reliable distributed computing. Communications of the ACM, 36(12):37–53, December 1993.
11. R. K. Budhia. Performance engineering of group communication protocols. Ph.D. dissertation, University of California, Santa Barbara (USA), August 1997.
12. V. Chikarmane, C. Williamson, R. Bunt, and W. Mackrell. Multicast support for mobile hosts using Mobile IP: Design issues and proposed architecture. ACM/Baltzer Journal of Mobile Networks and Applications, 3(4):365–379, 1998.
13. F. Kaashoek and A. Tanenbaum. An evaluation of the Amoeba group communication system. In Proc. 16th IEEE ICDCS, pages 436–447, May 1996.
14. R. Prakash, M. Raynal, and M. Singhal. An efficient causal ordering algorithm for mobile computing environments. Journal of Parallel and Distributed Computing, March 1997.
15. G. Xylomenos and G. Polyzos. IP multicast for mobile hosts. IEEE Personal Communications, pages 54–58, January 1997.
16. L. Yen, T. Huang, and S. Hwang. A protocol for causally ordered message delivery in mobile computing systems. ACM/Baltzer Journal of Mobile Networks and Applications, 2(4):365–372, 1997.
JumpStart: A Just-in-Time Signaling Architecture for WDM Burst-Switched Networks
Ilia Baldine¹, Harry G. Perros², George N. Rouskas², and Dan Stevenson¹
¹ MCNC ANR, Research Triangle Park, NC, USA
² NCSU Department of Computer Science, Raleigh, NC, USA
Abstract. We present an architecture for a core dWDM network which utilizes the concept of Optical Burst Switching (OBS) coupled with a Just-In-Time (JIT) signaling scheme. It is a reservation-based architecture whose distinguishing characteristics are its relative simplicity, its amenability to hardware implementation, and its native support for quality of service and multicast. Another important feature is data transparency: the network infrastructure is independent of the format of the data being transmitted on individual wavelengths. In this article we present a brief overview of the architecture and outline its most salient features.
1 Introduction
The adoption of dWDM as the primary means for transporting data across large distances in the near future is a foregone conclusion, as no other technology can offer such vast bandwidth capacities. The currently dominant technology for core networks is wavelength-routed networks, with permanent or semi-permanent circuits set up between end points for data transfer. Many of the proposed architectures treat dWDM as a collection of circuits/channels with properties similar to electronic packet-switched circuits, with customary buffering (potentially done in the optical domain) and other features of electronic packet-switching. In addition, the transport protocols used today (e.g., TCP), developed for noisy low-bandwidth electronic links, are poorly suited to the high-bandwidth, extremely low bit-error-rate optical links. The round-trip times for signaling and the resulting high end-node buffer requirements are a poor match for all-optical networks, characterized by a high bandwidth-delay product. In order to address the processing and buffering bottlenecks characteristic of electronic packet-switching architectures and, by extension, their dWDM derivatives, a wholly new architecture is required, one capable of taking advantage of the unique properties of the optical medium rather than trying to fit it into existing electronic switching frameworks. In addition, in dWDM networks data transparency (i.e., independence of the network infrastructure from the data format, modulation scheme, etc., thus allowing transmission of analog as well as digital signals) becomes not only possible, but desirable. In this paper we present an overview of an architecture for a core dWDM network. The architecture described in this paper is wavelength-routed and burst-switching, with "just-in-time" referring to the particular approach to signaling taken within this architecture.
* This research effort is being supported through a contract with ARDA (Advanced Research and Development Activity, http://www.ic-arda.org).
Signaling is done out of band, with signaling packets undergoing electro-optical conversion at every hop, while data travels transparently. For the history of burst-switching the reader is referred to [6,9]. Just-In-Time (JIT) signaling approaches to burst-switching have been previously investigated in the literature [9,6]. The common thread in all of these is the lack of a round-trip waiting time before the information is transmitted (the so-called TAG, tell-and-go, scheme): the cross-connects inside the optical switches are configured for the incoming burst as soon as the first signaling message announcing the burst is received. The variations among the signaling schemes mainly have to do with how soon before the burst arrival, and how soon after its departure, the switching elements are made available to route other bursts. An example is the Just-Enough-Time (JET) scheme proposed in [7], which uses extra information to better predict the start and the end of the burst and thus occupy the switching elements needed to route the burst inside a switch for the shortest amount of time possible. Schemes have also been proposed for introducing QoS into the architecture [8]. These schemes have been shown to reduce the blocking probability inside an OBS network, with the disadvantage of requiring progressively more complex schedulers [5]. In this short paper we present an overview and describe the salient features of the proposed JumpStart architecture. For a more extensive treatment of the subject the reader is referred to [4,2].
1.1 Guiding Assumptions and Basic Architecture
The basic premise of this architecture is as follows: data, aggregated in bursts, is transferred from one end point to the other by setting up the light path just ahead of the data arrival. This is achieved by sending a signaling message ahead of the data to set up the path. Upon the completion of the data transfer, the connection either times out or is torn down explicitly. Some of the basic architectural assumptions are summarized below:
Fig. 1. Example of a burst: a message sequence among calling host, calling switch, called switch, and called host, showing SETUP and SETUP ACK, cross-connect configuration ahead of the optical burst, CONNECT, and RELEASE (with the cross-connect configured for explicit release).
Out-of-band signaling – The signaling channel undergoes electro-optical conversions at each node, to make signaling information available to intermediate switches.
Data transparency – Data is transparent to the intermediate network entities, i.e., no electro-optical conversion is done at intermediate nodes and no assumptions are made about the data rate or signal modulation method.
Network intelligence at the edge – Most "intelligent" services are supported only by edge switches. Core switches are kept simple.
Signaling protocol implemented in hardware – So as not to create a processing bottleneck for high-bandwidth bursty sources, the signaling protocol must be implemented in hardware.
No global time synchronization – In keeping with the "keep it simple" principle, we do not assume time synchronization between nodes.
The basic switch architecture presumes a number of input and output ports, each carrying multiple wavelengths (envisioned to be in the 100s to 1000s). At least one separate wavelength on each port is dedicated to carrying the signaling traffic. Any wavelength on an incoming port can be switched to either the same wavelength on any outgoing port (no wavelength conversion) or any wavelength on any outgoing port (partial or total wavelength conversion). The switching can be done using MEMS micro-mirror arrays or some other suitable technology. The switching time is presumed to be in the µs range, with the anticipation that it could be reduced further as the technology develops. Additionally, each switch is equipped with a scheduler which keeps track of wavelength switching configurations and configures the cross-connects in time to allow the data to pass through.
Data Transparency. We have briefly alluded to data transparency as being a desirable property of a core network of the future. Indeed, the ability to transmit optical digital signals of different formats and modulations, as well as analog signals, simplifies many problems commonly associated with adaptation layers. In a burst-switched network, which essentially acts as a broker of time on a particular wavelength with high temporal resolution, this feature becomes relatively easy to implement, considering that signaling is done out of band on a separate channel. This is why the JumpStart architecture makes no assumptions about the types of traffic it carries and instead schedules time periods on wavelengths within the network. The particular format that an end node uses to transmit its data to the destination is of no consequence to the network itself.
Processing Delay Prediction. Unlike data, signaling messages propagate through the network and accrue a processing delay inside each intermediate switch. For a SETUP message, which announces the arrival of a new burst to intermediate switches, this means that it has to be sent far enough in advance of the burst that the burst does not catch up with it before the destination is reached. Knowing this delay a priori at the ingress switch, and communicating it to the source node (via SETUP ACK), is part of the network function. This delay can be deduced from the destination address in the SETUP message, and further refined by the ingress switch over time, as CONNECT messages are sent back from the destination, indicating the actual processing delay incurred while the corresponding SETUP message traveled to the destination.
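The refinement of the delay estimate at the ingress switch could, for instance, be realized with an exponentially weighted moving average per destination. The sketch below is purely our illustration of the idea; JumpStart does not prescribe a particular estimator, and all names and constants here are assumptions:

class OffsetEstimator:
    """Per-destination estimate of the SETUP processing delay accrued along
    the path, used to tell the source how far ahead of the burst to signal."""

    def __init__(self, initial_ms=10.0, weight=0.125, margin=1.2):
        self.estimate_ms = {}       # destination -> smoothed delay estimate
        self.initial_ms = initial_ms
        self.weight = weight        # EWMA gain (assumed value)
        self.margin = margin        # safety factor so the burst never
                                    # overtakes its own SETUP message

    def offset_for(self, destination):
        # Returned to the source node in the SETUP ACK.
        return self.margin * self.estimate_ms.get(destination, self.initial_ms)

    def on_connect(self, destination, measured_delay_ms):
        # CONNECT messages report the processing delay actually incurred by
        # the corresponding SETUP on its way to the destination.
        old = self.estimate_ms.get(destination, measured_delay_ms)
        self.estimate_ms[destination] = (
            (1 - self.weight) * old + self.weight * measured_delay_ms)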
Quality of Service. When one talks about Quality of Service (QoS) in the context of contemporary packet-switching networking technologies and protocols (DiffServ, IntServ), the criteria for evaluating the QoS of a given connection involve network bandwidth and buffer management inside the routers and end-nodes. In the context of an all-optical transparent network such as the one proposed here, most of these issues become irrelevant: data is transparent to the network and no buffering is done inside the network switches. The network acts merely as a time-broker on individual links. As a result, when we discuss QoS in JumpStart, it is separated into several areas:
– QoS requirements defined for the specific adaptation layer used;
– optical QoS parameters, onto which specific adaptation-layer requirements may be mapped;
– connection prioritization, which allows less important connections to be preempted in favor of more important ones. It is a stand-alone property, which enables the network to deal with preemption of existing connections in a predictable manner.
Optical QoS parameters allow the network to route a connection along the best-suited route depending on the type of signal the connection carries. Examples of optical QoS parameters are: bit-error rate, dynamic range, signal-to-noise ratio, and optical channel spacing.
Multicast Support. Support for multicast connections is essential for future networks; however, supporting them within an architecture such as JumpStart is not trivial. The optical signal must be split at certain points along the path according to the multicast routing tree in order for the network to remain all-optical, i.e., to avoid electro-optical conversions. Such splitting presents a number of issues for the implementation, namely:
– A switch must be equipped with an optical splitting mechanism (splitting signaling messages does not present a problem, since they undergo electro-optical conversions in each switch).
– The number of splits that can be performed on a single connection is in general bounded by the optical power budget [3]. The result is that each connection may have a limited fan-out (contrary to present-day electronic networks, where such issues are not considered).
Given these restrictions, for our network architecture we presume that switches capable of splitting the optical signal are not common in the network and are, in fact, sparsely dispersed throughout it. Each end-node is assigned one such switch as its multicast server switch, through either an administrative mechanism or a separate signaling mechanism. These special switches also take care of setting up routing trees for multicast connections; so, in addition to the special hardware needed to split the optical signal, they also have special firmware that allows them to manage and route multicast connections. Thus all signaling messages from a source node that pertain to its multicast connections get routed by the network to its assigned multicast switch. Within the multicast variety of connections we can identify two ways to set up a multicast session:
– Source-managed multicast: the source of the multicast knows the addresses of all of the members of the multicast group, and that number is relatively small. In this case, the addresses of the members are directly included in the appropriate signaling messages by the source.
– Leaf-initiated join: in this scenario, a source may announce the existence of a multicast session, with a session id that is unique inside the network. Multicast servers in the network will learn of this new session through means that are outside the scope of this discussion, and end nodes will be able to join existing multicast sessions by communicating with their domain multicast servers.
In practice we would like to allow for a combination of both. A source may begin by specifying a few end nodes and allow the rest to use the leaf-initiated join capability. In the extreme case, the source node simply announces the existence of a session and lets nodes in the network join as they wish. One additional option of multicast sessions is the session scope. The scope limits the availability of the session to nodes belonging only to specific domain(s). Additionally, a source node may specify, as part of the connection options, that only it has the authority to add new leaves to the tree, in which case no leaf-initiated join connections will be allowed.
Label Switching. The label switching concept will be utilized on the signaling channel in order to achieve several goals:
– speed up access to call state in the switch (based on the label, not the call reference number, which is not unique within the network);
– guarantee that forward and backward routes coincide (while connections in a JumpStart OBS network are unidirectional, signaling paths are not, and it is necessary that signaling messages travel on the same path both in the forward and in the backward direction);
– speed up routing (once a connection path has been established, it is desirable that further signaling messages do not consult the routing table but use a pre-established path).
For this purpose, labels similar to those of MPLS will be used as part of the signaling message format. These labels will have link-local significance (unique on one link, but not within a switch or the network). Similarly to ATM and MPLS, these labels will be rewritten as the signaling message traverses the network. Special tables within the switch will be needed to maintain the forward and the backward label mappings. No label stacking will be allowed. Label distribution will be done on the fly, as part of signaling, while the connection is being set up, instead of utilizing a label distribution protocol like LDP or modified RSVP. Additional multicast support in the labeling mechanism will be necessary in those nodes that support multicast routing (multicast nodes). Unlike unicast-only nodes, which only need to maintain one-to-one label mappings for each connection, multicast nodes will require one-to-many and many-to-one mappings in the label mechanism.
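As an illustration of this link-local label handling, the following sketch (ours; the actual JumpStart message and table formats differ) keeps mirrored forward and backward mappings so that replies can retrace the same path:

class SignalingLabelTable:
    """Link-local label rewriting for the signaling channel (sketch).

    Forward entries map (in_port, in_label) to (out_port, out_label); a
    mirror table lets backward-traveling messages retrace the path. No
    label stacking is modeled, matching the text above."""

    def __init__(self):
        self.forward = {}      # (in_port, in_label) -> (out_port, out_label)
        self.backward = {}     # (out_port, out_label) -> (in_port, in_label)
        self._next_label = {}  # out_port -> next unused label on that link

    def install(self, in_port, in_label, out_port):
        # Labels only need to be unique per link, so each outgoing port
        # keeps its own counter.
        out_label = self._next_label.get(out_port, 1)
        self._next_label[out_port] = out_label + 1
        self.forward[(in_port, in_label)] = (out_port, out_label)
        self.backward[(out_port, out_label)] = (in_port, in_label)
        return out_label

    def switch_forward(self, in_port, in_label):
        return self.forward[(in_port, in_label)]    # rewrite on the way out

    def switch_backward(self, out_port, out_label):
        return self.backward[(out_port, out_label)]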
Persistent Connections. For some applications there will be a need for all bursts to travel the same route through the network, especially for applications that are particularly sensitive to jitter or to the sequential arrival of information. Defining a persistent route service that precedes a series of bursts allows the network to "nail down" a route for all subsequent bursts to follow. There are some network traffic engineering implications of persistent routes. If a significant portion of the network connections are established with fixed routes, then dynamic load balancing through routing changes in the network will become inefficient and perhaps fail. To minimize this potential problem, network service providers may choose to treat persistent route connections as a premium service offering; this service would be more expensive for service providers to support. Multicast service as we have defined it also requires persistence. The first phase of establishing multicast service is to declare a session and build a routing tree. This is followed by one or several data transmission phases. Maintaining a persistent session is necessary so that the network can maintain state for multicast session routing as leaves are added and dropped throughout its lifetime.
1.2 Conclusions
In this paper we presented a short description of JumpStart, a newly proposed architecture for all-optical WDM burst-switched networks. We described and justified the need for the most important features of the network. For more information about the project we suggest [1].
References
1. The JumpStart project. http://jumpstart.anr.mcnc.org.
2. I. Baldine, G. N. Rouskas, H. G. Perros, and D. Stevenson. Signaling support for multicasting and QoS within the JumpStart WDM burst switching architecture. Optical Networks Magazine, 2002. Submitted for publication.
3. K. Chandrasekar, D. Stevenson, and P. Franzon. Optical hardware tradeoffs for all-optical multicast. Submitted to OFC, 2002.
4. I. Baldine, G. N. Rouskas, H. Perros, and D. Stevenson. JumpStart: a just-in-time signaling architecture for WDM burst-switched networks. IEEE Communications, page 82, February 2002.
5. P. Mehrotra, I. Baldine, D. Stevenson, and P. Franzon. Network processor design for use in optical burst switched networks. In International ASIC/SOC Conference, September 2001.
6. C. Qiao and M. Yoo. Optical Burst Switching (OBS). Journal of High Speed Networks, 8, 1999.
7. M. Yoo, M. Jeong, and C. Qiao. A high-speed protocol for bursty traffic in optical networks. In SPIE, volume 3230, pages 79–80.
8. M. Yoo and C. Qiao. A new optical burst switching protocol for supporting quality of service. In SPIE, volume 3531, pages 396–405.
9. J. Y. Wei and R. McFarland. Just-in-time signaling for WDM optical burst switching networks. Journal of Lightwave Technology, 18(12):2019–2037, December 2000.
QoS Evaluation of Real-Time Applications over a Multi-domain DiffServ Experimental Test-Bed
G. Carrozzo, V. Chionsini, S. Giordano, and S. Niccolini
Department of Information Engineering, University of Pisa, Via Diotisalvi 2, 56126 Pisa, Italy
Tel. +39 050 568511, Fax +39 050 568522
{g.carrozzo,v.chionsini,s.giordano,s.niccolini}@iet.unipi.it
Abstract. This paper presents a QoS evaluation in a DiffServ experimental test-bed scenario. We implemented our field trial using prototypal routers running under Linux OS, and we arranged them so as to make possible the interconnection with a remote island of a multi-domain DiffServ network. The performance evaluation of real-time applications presented in the paper will make clear how it is possible to provide "mission critical" applications with a toll-quality level of service when appropriate algorithms and resource sharing are chosen, and when these features are associated with a fair degree of aggregation. The paper also describes the results of a Mean Opinion Score (MOS) evaluation campaign, showing how real-time applications (such as voice and video conferencing) may suffer from the lack of QoS. Keywords: Experimental test-bed, Multi-Domain, DiffServ, Real-Time traffic.
1. Introduction
During the past years different proposals for a Next Generation Internet architecture have been suggested. Integrated Services [1] and Differentiated Services [2] were the most promising ones. Unfortunately both of them showed their weaknesses when dealing with end-to-end QoS guarantees (in particular, IntServ lacks scalability, and DiffServ lacks "hard" guarantees). This research work intends to show how, by means of simple DiffServ mechanisms applied to prototypal routers (edge and core DiffServ routers), it is possible to obtain satisfying results in terms of QoS parameters and in terms of user perceived quality. The IntServ access network is supposed to be unchanged compared with the framework of the IntServ over DiffServ architecture [3]. The rest of the paper is organized as follows: in Section 2 we present our design and implementation of a real Multi-Domain DiffServ experimental test-bed, carried out in the framework of the NEBULA project [4], and we show how the DiffServ mechanisms and our DiffServ aggregation strategies [5] behave well when dealing with real-time applications (mostly voice and video). In Section 3 the discussion is about the treatment of audio and video within the real-time class. In Section 4 we analyze and comment on the results, highlighting the goodness of our aggregation strategy assumptions. Finally we present our conclusions and future work.
2. Test-Bed Description
This section presents our Multi-Domain test-bed, built up in order to study the performance obtainable from a DiffServ core network when appropriate QoS mechanisms are used. We emulated three simple access domains interconnected by a DiffServ cloud by means of an ATM link, in order to allow the trial to be operated remotely. Our implementation is related to multiple-site interconnection, in order to insert our trial into a more complex network (as in the scope of the NEBULA project), where the study of QoS performance is more critical. The access domains are emulated by means of source and destination PCs connected to separate private networks. In Fig. 1 we detail our field trial; it is possible to distinguish the access domains where the sources and the destinations are located.
Fig. 1. Field Trial
We developed our field trial under Linux OS running on IA32 platforms (PCs). The interconnecting routers are equipped with two 10/100 Ethernet cards and one MMF ATM card; each source/destination PC has only one 10/100 Ethernet card and is point-to-point connected to its Border Router. The DiffServ backbone is emulated by means of an ATM connection (i.e. two 155 Mbit/s links towards a Newbridge CS1000 ATM switch). The Border Routers (Hertz and Marconi) provide the necessary transformation from packets to cells and vice versa by means of the AAL5 protocol. We implemented, on the BRs, the DiffServ traffic control functionalities (i.e. marking, shaping, metering, dropping) by means of the "TC" package available under Linux. The scheduler used by the BRs is a CBQ (Class Based Queuing) algorithm.
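As a rough illustration of this kind of setup, the sketch below (ours, not the authors' actual scripts; the interface name, handles, and numeric values are examples loosely based on Table 1) emits Linux "tc" commands that create a protected EF class and a BE class under a CBQ root qdisc:

```python
# Hypothetical sketch of a CBQ configuration in the spirit of the test-bed
# (not the paper's actual scripts): an EF class protected from BE traffic,
# built by emitting Linux "tc" commands. Requires root privileges to apply.
import subprocess

DEV = "eth0"  # egress interface of the border router (hypothetical)

COMMANDS = [
    # Root CBQ qdisc on the egress interface.
    f"tc qdisc add dev {DEV} root handle 1: cbq bandwidth 100Mbit avpkt 1000",
    # EF class: small buffer, guaranteed rate (cf. Table 1, flow 1).
    f"tc class add dev {DEV} parent 1: classid 1:10 cbq bandwidth 100Mbit "
    f"rate 390kbit allot 1514 prio 1 avpkt 800 bounded",
    # BE class for the background MGEN/FTP traffic (cf. Table 1, flow 2).
    f"tc class add dev {DEV} parent 1: classid 1:20 cbq bandwidth 100Mbit "
    f"rate 500kbit allot 1514 prio 7 avpkt 1000",
    # Classify on the ToS byte: EF-marked packets (DSCP 46) go to class 1:10.
    f"tc filter add dev {DEV} parent 1: protocol ip prio 1 u32 "
    f"match ip tos 0xb8 0xfc flowid 1:10",
]

for cmd in COMMANDS:
    subprocess.run(cmd.split(), check=True)
```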
3. Real-Time Traffic & Non Real-Time Traffic
The traffic used in this field trial may be classified into two main classes: a) Real-Time traffic: we include both audio and video sources because both of them have strict bounds on their QoS targets, even if with different statistical features; b) Non Real-Time traffic: we include both MGEN (UDP source) traffic and FTP (TCP source) traffic because neither of them requires strict bounds on delay/jitter.
The first test, whose results are presented in Section 4, provided a simple distinction between the two mentioned classes. The adopted combination of scheduling algorithm and TC functionalities aimed to protect the Real-Time class against an "aggressive" and "persistent" BE class, formed by UDP sources. The second test, according to evaluations derived from a previous work [5], provided a refined classification, in order to avoid the performance degradation experienced when voice and video flows are merged together.
4. QoS Evaluation
In all the tests presented in this section, Real-Time traffic was sent across the network. The first test conducted on the DiffServ backbone concerned the flow isolation obtainable using the marking/scheduling algorithm on two classes (this first group of tests is related to the transport of audio or video on the EF class, while the rest of the traffic is forwarded on the BE class). We collected the relevant QoS parameters directly between the ingress and output interfaces of the ingress border router (named Hertz in Fig. 1), because of the synchronization problems that arise in an end-to-end collection of delay measurements.

Table 1. DiffServ EF configuration (video on EF)

Flow | Flow Description                            | ToS | CBQ Class Parameters
1    | Video: 384 kbit/s; avg. packet size = 800 B | EF  | Buffer = 3.2 kB; Rate = 390 kbit/s
2    | MGEN: rate = 1 Mbit/s; packet size = 1 kB   | BE  | Buffer = 60 kB; Rate = 500 kbit/s
In the first test the goal was to compare the QoS parameters of Real-Time traffic, in terms of rate, delay, and MOS, in two cases: a) No DiffServ: all the traffic was sent on the same queue and served in a FIFO way; b) DiffServ: the configuration described in Table 1. In Fig. 2 it is possible to notice that the portion of bandwidth used by the video flow increases when the protecting DiffServ scheduling scheme is activated. Fig. 3 shows that the performance degradation is more evident in the delay measurements. When DiffServ is disabled all the flows share the same class, and so the delay experienced by the video packets is the same as that experienced by the BE packets. On the other hand, when the video flow has its own queue (DiffServ enabled) we obtain the required service differentiation. In order to analyze in more depth the performance of the video related to this test, we conducted a MOS (Mean Opinion Score) campaign; a MOS campaign is a collection of user judgments of perceived quality expressed as numerical scores (1 = lowest quality, 5 = highest quality). This campaign allowed us to evaluate the quality perceived by the user at the application level. We report in Fig. 6(a) the MOS evaluation obtained from the campaign when two video coding rates were considered: 128 kbit/s and 384 kbit/s. From the results it is possible to notice the application-level performance improvement when the DiffServ architecture is enabled.
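As a reminder of how such a campaign is scored, the MOS of a scenario is simply the arithmetic mean of the individual opinion scores; a minimal sketch (ours, with invented scores rather than the paper's data) is:

```python
# Minimal illustration of MOS computation (invented scores, not the paper's
# data): each viewer rates the video from 1 (lowest) to 5 (highest), and the
# MOS is the arithmetic mean per scenario.
scores = {
    "384kbit_no_diffserv": [2, 3, 2, 3, 2],
    "384kbit_diffserv":    [4, 4, 5, 4, 4],
}
for scenario, s in scores.items():
    print(f"{scenario}: MOS = {sum(s) / len(s):.2f}")
```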
Fig. 2. Traffic rates: DiffServ disabled and enabled
Fig. 3. Enabling the DiffServ mechanism on the video flow (delay)
A second test group is mainly focused on evaluating the impact of different aggregation strategies on the QoS parameters at the network level and on the quality perceived by the user at the application level. We compare a scenario where only one class (EF) is used to carry Real-Time traffic with a second scenario where we use separate classes to carry the Real-Time flows (voice on EF and video on AF). The BE class is used in both cases to carry non-RT traffic. In Table 2 we show the scheduling parameters used in this test group.

Table 2. EF audio, AF video configuration

Flow | Flow Description                            | ToS | CBQ Class Parameters
1    | Audio: 64 kbit/s; packet size = 172 B       | EF  | Buffer = 1 kB; Rate = 64 kbit/s
2    | Video: 128 kbit/s; avg. packet size = 800 B | AF  | Buffer = 15 kB; Rate = 128 kbit/s
3    | MGEN: rate = 1 Mbit/s; packet size = 1 kB   | BE  | Buffer = 60 kB; Rate = 500 kbit/s
The EF configuration adopted when audio and video are carried together may be obtained simply by adding the CBQ parameters (i.e. Buffer = 16 kB and Rate = 192 kbit/s). We adopted this configuration in order to perform a fair comparison in terms of allocated resources. Fig. 4 shows the delay experienced by the traffic flows when audio is carried on EF and video on AF, all collected on the DiffServ ingress BR. In this case
the delay experienced by the audio flow is about 3.5 ms (we had to zoom the statistic in order to make its visualization clearer), while the video delay is much more relevant (mean value: 25 ms). The video flow experienced a mean delay lower than that of the BE flow but higher than that of the audio flow, because of its different service class (AF) and its intrinsic burstiness. In any case, the absolute values should not be over-interpreted, because they refer to the crossing of a single device.
Fig. 4. Collection of delay parameter on BR (Audio on EF, Video on AF)
Fig. 5. Collection of delay parameter on BR (Audio and Video on EF)
Fig. 6. MOS evaluation: (a) Video on EF; (b) Audio/Video with different aggregation strategies
When audio and video are merged together (EF class) there is a degradation of the audio performance: being carried together with the video, it suffers the same order of
delay (see Fig. 5). Finally, we present in Fig. 6(b) the MOS evaluation collected in order to compare our proposed two-class aggregation scheme with the one-class aggregation scheme. Fig. 4 compared to Fig. 5 highlights the better performance obtained with the two-class aggregation scheme. As concerns the application-level QoS, the MOS shown in Fig. 6(b) benefits from the forwarding of audio and video on two separate PHBs. As can be noticed, the gap between the two scenarios is approximately one point on the MOS scale: a substantial degradation in terms of the quality perceived by the user at the application level.
5. Conclusions and Ongoing Work
The main goal of the paper was the QoS evaluation of Real-Time applications over a Multi-Domain DiffServ experimental test-bed by means of network-level QoS parameters and application-level parameters. In this framework we have presented a two-class aggregation scheme for the DiffServ architecture in order to improve the obtainable performance. The results collected in the experimental test-bed scenario were satisfactory both at the application level (evaluated by means of a MOS campaign) and at the network level (evaluated by means of rate/delay statistics). This work is a first extract of our ongoing work on developing a real Multi-Domain DiffServ island interconnection. A deeper analysis of the end-to-end delays experienced in such a scenario is going to be conducted by means of a host synchronization tool (GPS system). Acknowledgments. The authors wish to thank Andrea Giorgi for his help in this work. This work was partially supported by the project "NEBULA" of the Italian MURST.
References
1. R. Braden et al., "Integrated Services in the Internet Architecture: An Overview", RFC 1633, June 1994.
2. S. Blake et al., "An Architecture for Differentiated Services", RFC 2475, December 1998.
3. Y. Bernet et al., "A Framework for Integrated Services Operation over Diffserv Networks", RFC 2998, November 2000.
4. NEBULA Project, financed by the Italian MURST (http://cofin98.cineca.it/murst-dae/).
5. R.G. Garroppo, S. Giordano, S. Niccolini, F. Russo, "A Simulation Analysis of Aggregation Strategies in a WF2Q+ Schedulers Network", in Proceedings of the 2nd IP Telephony Workshop, New York, April 2001.
A New Policy Based Management of Mobile IP Users Hakima Chaouchi and Guy Pujolle University of Paris VI, LIP6 networks Lab. 8 rue du capitaine Scott, 75015 Paris {Hakima.CHAOUCHI ; Guy.pujolle}@lip6.fr
Abstract. Policy-based network management is a new paradigm for achieving network management. This paper presents a new policy-based Mobile IP user management architecture based on the Common Open Policy Service (COPS) protocol, which is currently deployed for QoS management. The paper introduces the new concept of a terminal policy enforcement point (TPEP), which allows the terminal to interact with the network by enforcing the network policies defined by the network manager; it is a key feature of our architecture. The paper also presents the global architecture to support Mobile IP users' requirements, based mainly on two extensions of the COPS protocol: COPS-SLS [1] for QoS negotiation and COPS-MU/MT [2] for policy-based user and terminal mobility management.
1 Introduction
Due to the tremendous success of IP technology in the fixed network area, it is commonly accepted today that IP will provide the unifying glue for the increasingly heterogeneous, ubiquitous, and mobile environment [3, 4]. This paper presents a new policy-based architecture for user mobility management which supports nomadic users in the Internet by allowing them to access their personalized computing resources and services from anywhere on the Internet [5]. The IETF has proposed a policy-based model for network management [6, 7, 8] and a TCP-based policy transport protocol, called COPS (Common Open Policy Service) [9]. Policy-based network management currently concerns QoS and security management. Many extensions have been introduced for COPS usage, such as COPS-PR [9] for DiffServ, COPS-RSVP [10] for IntServ, and COPS-MIP [11] for Mobile IP terminal mobility management. The next section presents an overview of user mobility and of policy-based network management. Then a new architecture to support the management of Mobile IP users is presented, followed by a conclusion.
1.1 User Mobility Overview
User mobility concerns terminal mobility and personal mobility. Terminal mobility allows a terminal to change its network point of attachment without being disconnected from the network [12]. IP networks support terminal mobility using the
Mobile IP protocol. Personal mobility allows a user to use any available mobile or fixed terminal, and to use his personal subscribed home network services from any terminal and any network access [13]. Thus, personal mobility is related to user location and service portability management [14]. A universal personal identifier is necessary to achieve personal mobility.
1.2 Policy Based Architecture Overview
The policy-based management networking (PBMN) framework is proposed by the IETF [6, 7]. It is based on two important elements: the policy server, or PDP (Policy Decision Point), and the PEP (Policy Enforcement Point), as illustrated in Fig. 1(a). PBMN intends to manage the network based on business policies; these policies are translated into network policies and stored in the network. They are used to automatically configure the network elements to offer services based on the business-level policies. The protocol used to exchange policy objects is COPS [9]. The PDP and the PEP exchange COPS messages, which are detailed in [15], to achieve policy-based network management. Fig. 1(b) illustrates an example of the PDP/PEP message exchange process, whose messages are briefly explained below:
OPN: Client OPeN; the PEP opens a TCP connection with the PDP.
CAT: Client AccepT; the PDP accepts a connection.
REQ: REQuest; the PEP sends a request to the PDP. The request contains an identifier and the policy objects necessary for the PDP's policy decisions.
DEC: DECision; the PDP sends a policy decision in a DEC message.
RPT: RePorT; the PEP sends a report to the PDP after enforcing a policy contained in a previous DEC message.
CC: Client Close; the PEP closes the connection.
Fig. 1. (a) Policy based architecture. (b) PDP/PEP COPS messages exchange.
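A toy re-enactment of the exchange of Fig. 1(b) (our sketch; the messages are readable placeholders, not the binary COPS message format of RFC 2748) could look like:

```python
# Toy re-enactment of the PDP/PEP exchange of Fig. 1(b) (our sketch; the
# strings are named placeholders, not the COPS wire format of RFC 2748).

class PDP:
    def handle(self, msg):
        if msg == "OPN":
            return "CAT"                       # accept the client connection
        if msg.startswith("REQ"):
            return "DEC install-policy-42"     # send back a policy decision
        if msg.startswith("RPT"):
            return None                        # a report needs no reply
        if msg == "CC":
            return None                        # client closed the connection

class PEP:
    def run(self, pdp):
        print("PEP -> PDP: OPN | PDP ->", pdp.handle("OPN"))
        dec = pdp.handle("REQ ctx=config handle=1")
        print("PEP -> PDP: REQ | PDP ->", dec)
        # Enforce the decision locally, then report the outcome to the PDP.
        pdp.handle("RPT success handle=1")
        print("PEP -> PDP: RPT success")
        pdp.handle("CC")
        print("PEP -> PDP: CC")

PEP().run(PDP())
```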
2 A New Policy Based Mobile IP Users' Management Architecture
We identify four issues related to user mobility management: user registration, terminal registration, service portability, and QoS negotiation. To address these challenging issues, we introduce new components into the IETF policy-based architecture, illustrated in Fig. 2, and a COPS extension called COPS-MU/MT [2] (COPS-Mobile User/Mobile Terminal), which defines new policy objects to support user and terminal registration, user service portability, and QoS negotiation.
2.1 Architecture Components
Some Mobile IP terms [16] are necessary to understand the next sections; they are explained below:
HA. Home Agent; it maintains the mobility binding of an MT in its home network.
FA. Foreign Agent; it maintains a list of visiting terminals in the foreign network.
Mobility binding. An association between the Home address and the CoA of a mobile terminal.
Home address. A routable and permanent address used to locate a mobile terminal even when it changes its point of attachment; it is an address belonging to the home network.
CoA. Care-of Address; the address obtained in a foreign network. The CoA may be an FA address (IPv4) or a co-located CoA (IPv6) [16]. If a mobile terminal has a co-located CoA, it interacts directly with the HA; otherwise, it interacts with the FA, which forwards its messages to the HA.
Fig. 2 illustrates the new components introduced in the COPS-MU/MT architecture.
Fig. 2. Terminal’s and user’s home and foreign networks. (a) MU and MT are subscribed in different networks. (b) MU and MT are subscribed in the same network.
Some components defined for a Mobile User (MU) and a Mobile Terminal (MT) have similar functions: a User Home Policy Decision Point (UHPDP) and a Terminal Home Policy Decision Point (THPDP), which are the policy servers in the User Home Network (UHN) and the Terminal Home Network (THN) respectively; and a User Foreign Policy Decision Point (UFPDP) and a Terminal Foreign Policy Decision Point (TFPDP), which are the policy servers of the User Foreign Network (UFN) and the Terminal Foreign Network (TFN) respectively. A Foreign Policy Decision Point (FPDP) is the policy server of a network (FN) that is foreign to both the mobile user and the mobile terminal. A key feature of our architecture is the Terminal Policy Enforcement Point (TPEP). It is introduced to allow the terminal to interact directly with the network for user and terminal registration, QoS negotiation, and user service portability. The User Home Agent (UHA) and the Terminal Home Agent (THA) maintain the user and the terminal mobility bindings respectively. The User Foreign Agent (UFA) and the Terminal Foreign Agent (TFA) are the FAs of the mobile user and the mobile terminal respectively. The policy servers of the different network providers have to maintain policy information related to home and foreign mobile users, such as the user profile, the service profile, and the
terminal profile, in order to allow a user universal access to network services and resources from anywhere. The different profiles may be stored in the home agents or in the policy servers. The goal of policy-based Mobile IP user management is to allow the user to access his home services from anywhere, with the parameters negotiated with his home network. Thus, policy-based Mobile IP user management performs the user and terminal registration, to support user and terminal location management, as well as the personal service portability and QoS negotiation, to support the user's services anywhere.
2.2 Policy Based User and Terminal Registration
Terminal registration. Terminal registration must be performed only if a terminal is located in a foreign network; if it is a fixed terminal or a mobile terminal located in its home network, then terminal registration is unnecessary. COPS-MT is used to achieve the terminal registration; it supports IPv4 and IPv6 registration by allowing the TPEP to interact directly with the FPDP, so that the mobile terminal can achieve its registration directly with the HA. Fig. 3 illustrates COPS-MT terminal registration for the FA CoA (IPv4) and co-located CoA (IPv6) cases.
Fig. 3. COPS-MT terminal registration. (a) FA CoA. (b) Co-located CoA.
The numbered steps illustrated in Fig. 3(a) correspond to the terminal registration in Mobile IPv4 with an FA CoA, whereas Fig. 3(b) corresponds to the case where a mobile terminal has a co-located CoA, as in IPv6. The steps of Fig. 3(b) are explained below and sketched in the walk-through that follows:
1. The TPEP interacts directly with the TFPDP for terminal registration request policy decisions;
2. The MT sends a registration request to the THA;
3. The THPEP interacts with the THPDP for the terminal registration request;
4. The THA sends a registration reply message to the MT;
5. The TPEP interacts with the FPDP for terminal registration reply policy decisions.
The steps of Fig. 3(a) are related to the case where the MT has an FA CoA; they differ from the steps of Fig. 3(b) in that the FA intercepts the messages sent by the MT and forwards them.
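The following toy walk-through (ours; the entity names follow the paper, while the message contents are invented placeholders) re-traces the five steps of Fig. 3(b):

```python
# Toy walk-through of the five COPS-MT registration steps of Fig. 3(b)
# (co-located CoA). Our sketch: entity names follow the paper, message
# contents are invented placeholders.

def register_colocated_coa(home_address, coa):
    steps = [
        "1. TPEP  -> TFPDP: policy decision for the registration request",
        f"2. MT    -> THA:   Registration Request (bind {home_address} -> {coa})",
        "3. THPEP -> THPDP: policy decision on the registration request",
        "4. THA   -> MT:    Registration Reply (binding accepted)",
        "5. TPEP  -> FPDP:  policy decision for the registration reply",
    ]
    binding = {home_address: coa}  # mobility binding now held by the THA
    return steps, binding

steps, binding = register_colocated_coa("mt.home.example", "coa.foreign.example")
print("\n".join(steps))
print("THA mobility binding:", binding)
```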
User registration. A user registration must be performed every time a user logs in to a terminal, even if the user is in his home network. In COPS-MU, the user mobility registration is similar to the COPS-MT terminal mobility registration. COPS-MU user registration consists of maintaining an association between the terminal home address and a user identifier. The necessary elements for achieving user registration are the UHPDP, UFPDP, UHA, and UFA. The registered user is then reachable on the terminal he is using and may use his home services from anywhere.
2.3 Policy Based Mobile IP User Service Portability and QoS Negotiation
In this work we assume that the network uses policy-based QoS management, such as a DiffServ network based on COPS-PR policy provisioning, and we propose to support QoS negotiation for a Mobile IP user who moves from the home network to a foreign network. In this architecture, COPS-MU/MT is deployed in the wireless access network to achieve user and terminal registration and QoS negotiation; COPS-SLS [1] is deployed between the home PDP and the FPDP for the inter-domain negotiation of the QoS the user subscribed to at home; and COPS-MU is used for inter-domain mobile user and mobile terminal registration and for mobile user service portability negotiation. When a mobile user moves to a foreign network, the FPDP interacts with the UHPDP to determine the QoS the user negotiated with the home network, so that the mobile user does not have to re-negotiate a QoS with the foreign network. This architecture is illustrated in Fig. 4.
Fig. 4. Policy based Mobile IP users QoS negotiation environment.
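A minimal sketch of this idea (ours; the real exchange would use COPS-SLS messages, which we reduce here to a simple lookup between the two policy servers, and all names and values are invented) is:

```python
# Minimal sketch of the inter-domain negotiation idea (ours): when a mobile
# user appears in a foreign domain, the FPDP fetches the QoS profile the user
# subscribed to at home (via COPS-SLS in the real architecture) instead of
# forcing a re-negotiation with the foreign network.

class UHPDP:
    def __init__(self):
        # Home-subscribed QoS per user identifier (invented example values).
        self.subscribed = {"user@home_net": {"class": "EF", "rate_kbit": 64}}

    def get_sls(self, user):
        return self.subscribed.get(user)  # a COPS-SLS exchange in reality

class FPDP:
    def __init__(self, home_pdp):
        self.home_pdp = home_pdp
        self.visitors = {}

    def user_arrived(self, user):
        sls = self.home_pdp.get_sls(user)
        if sls is not None:
            self.visitors[user] = sls  # provision the foreign domain with it
        return sls

fpdp = FPDP(UHPDP())
print(fpdp.user_arrived("user@home_net"))  # -> {'class': 'EF', 'rate_kbit': 64}
```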
This architecture is also used to negotiate user service portability. The FPDP negotiates with the UHPDP to decide where to run the user's personal services. The UHPDP decides, based on the user profile, the personal service profile, and the terminal profile, where to run the user's personal service. This part is not detailed in this paper.
3 Conclusion
In this paper, we have described a new policy-based architecture to support user mobility management in IP networks. The approach taken assumes that mobile users are in IP networks based on the PDP/PEP architecture and using the COPS protocol. We have proposed the use of a Terminal Policy Enforcement Point (TPEP), which allows the terminal to interact directly with the appropriate PDP, and we have also proposed the COPS
extension named COPS-MU/MT (COPS-Mobile User/Mobile Terminal) to support the policy-based user mobility management issues related to user and terminal registration, user service portability, and QoS negotiation. We believe that the use of COPS and the PDP/PEP model offers a good way to achieve a unified IP network policy management of QoS, security, mobility, etc. However, we still need to implement this architecture for performance evaluation. Future work will define all the necessary policy objects related to user mobility registration, terminal mobility registration, service portability, and QoS negotiation.
References
[1] M. Nguyen, "COPS-SLS", IETF draft, November 2001, draft-nguyen-rap-cops-sls-01.txt.
[2] H. Chaouchi, G. Pujolle, "COPS-MU: a new policy based user mobility management", proceedings of MS3G 2001, Lyon, France.
[3] A. Fasbender, F. Reichert, E. Gueulen, J. Hjelm, T. Wierelemann, "Any Network, Any Terminal, Anywhere", IEEE Personal Communications, 1999.
[4] L. Bos, S. Leroy, "Toward an All-IP-Based UMTS System Architecture", IEEE Network, January/February 2001.
[5] Yalun Li, V. Leung, "Protocol Architecture for Universal Personal Computing", IEEE Journal on Selected Areas in Communications, vol. 15, no. 8, October 1997.
[6] A. Westerinen, J. Schnizlein, J. Strassner, M. Scherling, B. Quinn, J. Perry, S. Herzog, A. Huynh, M. Carlson, S. Waldbusser, "Policy Terminology", Internet draft, March 2001, draft-ietf-policy-terminology-02.txt.
[7] B. Moore, E. Ellesson, J. Strassner, A. Westerinen, "Policy Core Information Model", RFC 3060, February 2001.
[8] M. Fine, K. McCloghrie, J. Seligson, K. Chan, S. Hahn, R. Sahita, A. Smith, F. Reichmeyer, "Framework Policy Information Base", Internet draft, March 2001, draft-ietf-rap-frameworkpib-04.txt.
[9] K. Chan, J. Seligson, D. Durham, K. McCloghrie, S. Herzog, F. Reichmeyer, R. Yavatkar, A. Smith, "COPS Usage for Policy Provisioning (COPS-PR)", RFC 3084, March 2001.
[10] S. Herzog, J. Boyle, R. Cohen, D. Durham, R. Rajan, A. Sastry, "COPS Usage for RSVP", RFC 2749, January 2000.
[11] M. Jaseemuddin, A. Lakas, "COPS Usage for Mobile IP", Internet draft, October 2000, draft-ietf-jaseem-rap-cops-mip-00.txt.
[12] G. Forman, J. Zahorjan, "The Challenges of Mobile Computing", IEEE Computer, March 1994.
[13] E. Koukoutsis, C. Kossidas, N. Polydorou, "User Aspects for Mobility", ACTS Guideline SII-G8/0798.
[14] M. Cristina Ciancetta, G. Colombo, R. Lavagnolo, D. Grillo, F. Bordoni, "Convergence Trends for Fixed and Mobile Services", IEEE Personal Communications, April 1999.
[15] D. Durham, J. Boyle, R. Cohen, S. Herzog, R. Rajan, A. Sastry, "The COPS (Common Open Policy Service) Protocol", RFC 2748, January 2000.
[16] C. Perkins, "IP Mobility Support", RFC 2002, October 1996.
A Framework for Policy-Based Management of QoS Aware IP Networks
P. Cremonese(1), M. Esposito(2), S. Giordano(3), M. Mondini(3), S.P. Romano(2), and G. Ventre(2)
(1) NETIKOS, Via Matteucci, 56100 Pisa, Italy - [email protected]
(2) DIS - Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II, Via Claudio 21, 80125 Napoli, Italy - {mesposit, spromano, giorgio}@unina.it
(3) ICA-DSC, EPFL, CH-1015 Lausanne, Switzerland - [email protected]
Abstract. In today's Internet, Policy-Based Network Management is gaining more and more followers. Its appeal is due to the opportunity it offers for a standard and consistent way of configuring networks, independently of the underlying architecture and Quality of Service (QoS) provisioning model assumptions. The event-driven paradigm, well established in the general-purpose programming world, begins, through the policy-based approach, to play its role also in the field of network management. In this paper we describe a policy framework suited for dynamic network management in QoS-enabled IP networks. First, we design an object model conceived to represent policies in a network-independent fashion. Then, we describe a management and configuration system based on the Common Open Policy Service (COPS). Finally, we show a system prototype pointing out the main features of Differentiated Services network management and configuration based on policy system management.
1 Introduction
A policy is a set of rules or methods, representing an object's behavior or a decision strategy to be applied in order to attain a particular goal. Policy-Based Network Management is the application of these organizational policies to the management of networks. With this approach, the role of network management moves from passive network monitoring to active QoS (Quality of Service) and network service-level-agreement provisioning. While this technology is powerful and alluring, it is also generally untested and unproven. Worse, this area still suffers from a lack of standards and from a lack of ad hoc use of the existing ones. There are two key issues that are not yet totally addressed: first, how the vendors will access and control their hardware, and second, how these systems glean information about an organization's users and resources.
Our architecture, developed in the framework of the European IST project CADENUS, tries to address all those problems. First, we are developing a prototype that aims to test and validate the policy-based approach in a real DiffServ network. Secondly, we adopted a layered model. In this way, at the lower layer, we accomplished the device configuration by employing a combination of CLI (command-line interface), COPS, and LDAP. We feel that our work can be a step toward standardized policy-based network management. This document is specifically concerned with the definition of the processes that take place right after a new Service Level Specification (SLS) has been created as a consequence of the negotiation of a new service instance between, for example, an end-user and a Service Provider (SP). We are not focusing on the interactions that lead to the creation of an SLS, but simply assume that a new SLS has been provided by a Service Provider stemming from an even higher-level service description (see [SLA]). This document is organized in five sections. The next section illustrates the proposed architecture and the steps performed from an SLS to the final configuration of the devices. The multiple-layer, policy-based approach, with particular attention to the policy repositories used at each layer, is presented in Section 3. Section 4 expands on the Network Controller, which represents one of the main components of the overall architecture. Finally, Section 5 provides some concluding remarks, together with a discussion of future work.
2 A Framework for Automatic Configuration and Management of QoS-Aware Networks
Policy-Based Management has been conceived to allow network configuration in the sphere of several applications, ranging from security and network engineering to monitoring and measurements. In this work, we delve into the role of policies with respect to Quality of Service (QoS) needs in a QoS-aware network. In order to make network configuration and management an automatic task, independent of the specific devices implementing the network, our architecture is composed of three layers, each related to a different level of abstraction. More precisely, as depicted in Fig. 1, the overall process starts from an abstract service description (contained inside an SLS) and comprises a number of intermediate steps, each needed in order to lower the level of abstraction, thus filling the gap between the human-oriented concept of a "service" and the device-specific configuration commands that eventually enforce the service itself. For each domain (i.e. Autonomous System, AS) we have one functional block, named Resource Mediator (RM), that is in charge of managing the whole underlying network.
Fig. 1. Different layers of the Policy Architecture. Legend: SLS = Service Level Specification; RM = Resource Mediator; NC = Network Controller; NIPR = Network Independent Policy Repository; NDPR = Network Dependent Policy Repository; PDP = Policy Decision Point; PEP = Policy Enforcement Point; PIB = Policy Information Base.
The scenario we analyze is one in which, starting from an SLS instance, we go all the way down through the components shown in order to arrive at the network devices and appropriately configure them. Delving into the details of such a process, we identify the following steps (a toy end-to-end sketch follows the list):
1. A Resource Mediator takes an SLS and translates it into a coherent set of Network Independent Policy Rules (NIPR). As the name itself suggests, such rules are both network and device independent: they are just a well-structured representation of the information contained inside the SLS. The model we plan to adopt is inspired by the various proposals stemming from the Common Information Model [CIM], under standardization inside both the IETF and DMTF research communities [PCIM], [PCIMe], [PQIM].
2. The Network Independent Policy Rules are then passed to a Network Controller (NC), which translates them on the basis of the specific network architecture adopted (MPLS, DiffServ, etc.). The translation process leads to a new set of rules, named Network Dependent Policy Rules (NDPR), which are stored inside an ad-hoc defined Policy Information Base (PIB). The NC also acts as a Policy Decision Point (PDP) [COPS], which exploits a protocol like COPS to send, based on the "provisioning" paradigm, policies to the underlying Policy Enforcement Points (PEPs).
3. Upon reception of a new policy, the PEP is in charge of interacting with a Device Controller (DC), thus triggering the last level of translation, so as to produce the configuration commands needed to appropriately configure the traffic control modules (e.g. allocation and configuration of queues, conditioners, markers, filters, etc.) on the underlying network elements.
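The three translation steps can be caricatured in a few lines of code (our sketch; the toy SLS and all field names are invented and do not reflect the project's actual schemas or the PCIM/PIB models cited above):

```python
# Caricature of the CADENUS three-step translation chain (our sketch; the toy
# SLS and all field names are invented, not the project's actual schemas).

def rm_to_nipr(sls):
    # Step 1: Resource Mediator -> network-independent policy rule.
    return {"if": {"src": sls["customer"], "dst": sls["destination"]},
            "then": {"service": sls["service"], "rate_kbit": sls["rate_kbit"]}}

def nc_to_ndpr(nipr):
    # Step 2: Network Controller -> network-dependent rule (here: DiffServ).
    dscp = {"premium": "EF", "gold": "AF11"}[nipr["then"]["service"]]
    return {"filter": nipr["if"],
            "action": {"mark_dscp": dscp,
                       "police_kbit": nipr["then"]["rate_kbit"]}}

def dc_to_commands(ndpr):
    # Step 3: Device Controller -> device-specific configuration commands.
    return [f"mark {ndpr['filter']['src']}->{ndpr['filter']['dst']} "
            f"dscp={ndpr['action']['mark_dscp']}",
            f"police rate={ndpr['action']['police_kbit']}kbit"]

sls = {"customer": "10.0.1.0/24", "destination": "10.0.2.0/24",
       "service": "premium", "rate_kbit": 384}
for cmd in dc_to_commands(nc_to_ndpr(rm_to_nipr(sls))):
    print(cmd)
```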
3 The Policy-Based and Multiple-Layers Approach
As we introduced in the previous sections, the innovative aspect of the CADENUS architecture is its policy-based, three-layer approach. The sequential steps performed by each layer aim to achieve the ultimate goal of setting up the network in an automatic fashion, without any human intervention. The SLS is an abstract service description, independent both of the network architecture (e.g. DiffServ, MPLS, ATM) and of the device architecture (e.g. a Cisco router, a PC running Linux or FreeBSD). Yet, network-dependent and device-dependent information is still needed in order to configure and manage the network devices. For this purpose our architecture includes three databases containing the views of the requested QoS at different layers: the Network Independent Policy Repository, the Network Dependent Policy Repository, and the Vendor Dependent Policy Repository.
3.1 NIPR: A Repository for Network Independent Policies
The Network Independent Policy Repository (NIPR) is an archive located at the Resource Mediator (RM) level. When a new SLS arrives at the RM from the Service Provider, the RM translates it into a set of policy rules describing conditions and actions related to the requested service (still in a network-independent form) and stores it in the NIPR. The SLS must describe the single service instances in an unambiguous fashion. The peculiarity of such an approach is that the NIPR, being at a high level of abstraction and thus entirely network independent, can represent a common component (for every network architecture) containing the bundle of services to be enforced in the future. In any case, as already stated, the semantic value of the information stored in the NIPR is no different from that contained in the original SLS: the only added feature is the policy-based representation.
3.2 NDPR – A Repository for Network Dependent Policies
As we just explained, the Network Independent Policy Repository is a formal representation of the information contained inside an SLS. It contains, in a standard format, the otherwise fuzzy definition of a service. In order to let such a definition become comprehensible to the lower network management devices, the need arises for a further level of translation. For this step, the Network Controller (see Fig. 1) goes a step further, by taking into consideration the specific network architecture that will support the deployment of the service. These network-dependent policies are stored in the Network Dependent Policy Repository (NDPR). The NDPR contains
policies in a representation independent of the devices implementing the network. It introduces, in the service/flow description, rules deriving from the supported technology (e.g. DiffServ), without going into the details of device characteristics and components. NDPR generation is performed by Network Controllers according to business rules defined for traffic classification. The mapping from the NIPR (which is based on user/service requirements) to the NDPR (based on the network implementation) is local to each domain. Each NC uses business rules for policy generation and policy distribution. Such a policy could, for example, lead to the marking (via the DSCP field) of a packet, or to the dropping, remarking, or delaying of out-of-profile packets. A policy is defined by the instantiation of a filter object (the condition) and an action object. A filter object identifies, for instance, the source and destination, while an action object can include a Classifier object, a Meter object, or a Shaper object.
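In code, such a rule could be represented as a (filter, action) pair along the following lines (our sketch; the attribute names are illustrative and do not come from the actual PIB definitions):

```python
# Illustrative representation of an NDPR policy as a (filter, action) pair
# (our sketch; attribute names do not come from the actual PIB definitions).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Filter:             # the condition part: which packets the rule matches
    src: str
    dst: str

@dataclass
class Action:             # the action part: what to do with matching packets
    dscp: Optional[str] = None        # classifier/marker (e.g. "EF")
    meter_kbit: Optional[int] = None  # meter: rate defining the profile
    out_of_profile: str = "drop"      # drop, remark, or delay (shape)

@dataclass
class PolicyRule:
    filter: Filter
    action: Action

rule = PolicyRule(Filter(src="10.0.1.0/24", dst="10.0.2.0/24"),
                  Action(dscp="EF", meter_kbit=384, out_of_profile="remark"))
print(rule)
```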
3.3 VDPR – A Repository for Vendor Dependent Policies
This layer works with a representation that can be understood and handled by the devices, thus reflecting their specific characteristics. The policies defined at the previous layer are translated into device configuration policies. Vendor- and device-dependent information, such as queue configuration and network interfaces, is added at this layer. The vendor dependency derives from the necessity to build rules according to the specific features of the managed device. The schema for the translation of the PIBs changes with the nature of the device. Therefore, this translation is delegated to a dedicated component, named the Device Controller. In our case, this component has been implemented for Linux-based routers, exploiting the functionality made available by the Linux Traffic Control (TC) module.
4 The Network Controller
The Network Controller (NC) is the component responsible for network management and configuration. Each NC manages a homogeneous network, where "homogeneous" means that only one technology for QoS support is provided within the network. The NC role can be summarized as follows: it performs management and configuration based on requests coming from the RM; it provides the RM with data for updating the local repositories (routing, resources); it provides input to the devices for local Traffic Control configuration; and it manages the network with respect to fault detection and SLA monitoring. The main tasks the NC has to accomplish concern policy generation and instantiation.
4.1 Policy Generation
The NC receives a subscription request from the RM, related to a service to be committed. The request is composed of a set of Network Independent (NI) policies. The NC translates all the involved policies into a network-dependent format; it checks the consistency of these policies (e.g. the availability of the requested resources) and sends the answer back to the RM. The generated set of policies is stored in the NDPR.
4.2 Policy Instantiation
In this phase, the NC identifies the involved devices and sends (via COPS) the set of policies related to the request to the corresponding Device Controllers (DC). The DCs will in turn translate the received policies into the right traffic control commands needed to appropriately configure the network devices.
5 Conclusions and Future Work
In this paper we have shown an innovative approach for QoS-aware network configuration by means of policies. Such an approach has the advantage of providing a completely general way to achieve end-to-end QoS guarantees. Thanks to its layered structure, the architecture we propose is capable of adapting a service instance representation, as it is perceived at an abstract level, into a set of commands to be enforced on the underlying QoS-aware network nodes. This architecture is going to be implemented as a prototype and tested in the framework of the European project CADENUS. The main goals of this work will be: to emphasize the power and attractiveness of the proposed technology; to show its validity by means of a prototype; to give results of tests and trials; to identify current shortcomings and propose solutions; and to accelerate the step toward a standardization of all the elements of policy-based network management.
References
[CIM] Distributed Management Task Force, Inc., "Common Information Model (CIM) Schema", version 2.3, March 2000.
[PCIM] J. Strassner, E. Ellesson, B. Moore and A. Westerinen, "Policy Core Information Model - Version 1 Specification", RFC 3060, February 2001.
[PCIMe] B. Moore, L. Rafalow, Y. Ramberg, Y. Snir, J. Strassner, A. Westerinen, R. Chadha, M. Brunner, R. Cohen, "Policy Core Information Model Extensions", Internet draft.
[PQIM] Y. Snir, Y. Ramberg, J. Strassner, R. Cohen, "Policy Framework QoS Information Model", Internet draft, April 2000.
[COPS] J. Boyle, R. Cohen, D. Durham, S. Herzog, R. Rajan and A. Sastry, "The COPS (Common Open Policy Service) Protocol", RFC 2748, January 2000.
[SLA] S.P. Romano, M. Esposito, G. Ventre and G. Cortese, "Service Level Agreements for Premium IP Networks", Internet draft, work in progress, available at http://www.cadenus.org/papers, Nov. 2000.
[SLS] D. Goderis, Y. T'joens, C. Jacquenet, G. Memenios, G. Pavlou, R. Egan, D. Griffin, P. Georgatsos, L. Georgiadis, P. Van Heuven, "Service Level Specification Semantics, Parameters and Negotiation Requirements", Internet draft, work in progress, June 2001, expires December 2001.
SIP-H323: A Solution for Interworking Saving Existing Architecture
G. De Marco(1), S. Loreto(2), G. Sorrentino(3), and L. Veltri(3)
(1) University of Salerno - DIIIE - Via Ponte Don Melillo - 56126 Fisciano (SA), Italy. Ph.: +39 0974 824700, Fax: +39 0974 824700, [email protected]
(2) Ericsson Lab Italy - Via Madonna di Fatima, 2 - 84016 Pagani, Italy. Ph.: +39 081 5147733, Fax: +39 081 5147660, [email protected]
(3) Coritel - Via Anagnina, 203 - 00040 Roma, Italy. Ph.: +39 06 72589169, Fax: +39 06 72583002, {sorrentino,veltri}@coritel.it
Abstract. In the 3rd-generation multimedia communication world and in the 3GPP standardization consortium, the SIP protocol appears to be the preferred signaling protocol. However, the need to communicate with non-SIP based networks, e.g. networks based on H.323 from the ITU-T, is still a reality. This need can be satisfied with the introduction of network gateways (also named Inter-Working Functions). One of the open issues in SIP-H.323 interworking is address resolution, in other words, the automatic forwarding of a SIP call to an H.323 user. The paper proposes a SIP network architecture which can interoperate with H.323 networks while safeguarding the existing software/hardware components, such as SIP terminal clients, SIP proxy servers, and IWF gateways. Keywords: SIP, H.323, interworking, gateway, call routing
1 Introduction
One of the problems arising in future multimedia networks is the interworking between networks that use different protocols; at the present time, such problems mainly concern the interworking between SIP (developed in the IETF) and H.323 (from the ITU) based multimedia networks. Both protocols are signaling protocols, H.323 being currently the standard for IP-based implementations of multimedia communications ([1], [2], [3]). The 3GPP has selected SIP as the signaling protocol for multimedia communications in the UMTS network. All these considerations lead to the conclusion that the interoperation of H.323 and SIP based networks is becoming a very crucial problem ([5], [6], [7]). Among the various problems that arise when considering the interworking of these two protocols, one important aspect is to allow, for example, a SIP user to reach a remote user on both SIP and H.323 networks; of course, if the remote terminal is an H.323 terminal, then an interworking system (gateway) is needed. A solution proposed so far involves additional protocols [4]. In this work, we propose a new interworking solution that requires no modifications of these network elements. The proposed solution is thus based on the assumption that neither the client applications (the terminals) nor the network servers/gateways should be
modified. Let us consider for example the following scenario: the owner of a big SIP network (> 500 consumers) has already acquired all the necessary servers; he/she has already installed and configured all the multimedia terminals; moreover, he/she has acquired the network nodes/servers and the H.323/SIP gateways (in the following also referred to as interface modules or Inter-Working Functions). Modifying the gateway source code or asking for a new version would be too expensive. In this context, we will see how to solve the addressing and registration aspects of the interworking problem without implementing the TRIP protocol within the SIP and H.323 signaling servers. The main idea consists of the introduction of a new network component that easily allows the forwarding of calls from SIP to SIP domains or from SIP to H.323 domains.
2 System Outline
In a pure SIP network, terminals are named User Agents (UA), while the servers can be classified as SIP Proxy servers, Redirect servers, and Registrar servers [8]. In our scenario, we consider a SIP network composed of UAs, stateful SIP Proxies acting also as Registrar servers, and one or more gateways (GWs) to other non-SIP IP networks. In such a network scenario, SIP terminals communicate directly with other SIP terminals and, via an appropriate GW, with non-SIP terminals. SIP Proxy servers are used to route call signaling among SIP terminals, by querying an internal database (DB). If the DB query gives no match for the current callee address, or if an error on the resulting next hop (SIP proxy) is obtained, the proxy releases the call and sends a Not Found message back to the caller. This may happen also, and particularly, in the presence of a non-SIP called terminal; what really happens is that although the callee receives the SIP setup message, it is unable to generate an appropriate SIP response. In order to forward call setup requests from a SIP based terminal (UA) to an H.323 user, a gateway entity (IWF) should include all the interworking functionality needed to transparently translate the SIP messages into H.323 signaling and vice versa. To make the correct forwarding of calls through the IWF possible, the client agents of the signaling servers (i.e. the SIP servers and the H.323 gatekeepers) should share some information about the presence of users behind the specific IWFs. Such information should be dynamically exchanged among the SIP servers, the IWF, and the gatekeepers. Although this action is not crucial between gatekeepers and gateways in an H.323 domain, there is no straightforward solution for the SIP-to-IWF relationship, and a sort of specific protocol seems to be required. A proposed solution for this issue makes use of the TRIP protocol. Another solution could be based on the adaptation of the SIP protocol and the modification of the SIP proxy functionality. However, both solutions seem not very suitable and will not be followed here. A possible approach that could be used to solve this issue is the insertion of a new module (software or hardware) acting as a SIP proxy server. This module should forward all the calls coming from SIP terminals to both the next-hop SIP server and the SIP-to-H.323 gateway (IWF). However, the main drawback of this approach is that it requires the duplication of call signaling for both SIP-to-SIP and SIP-to-H.323 calls. This solution is fast but it increases (duplicates) the signaling traffic sent through the IP network.
An improved solution can be obtained by starting a new call request at the SIP proxy server as soon as a "Not Found" or "Timeout" message has been received. The new calling process is performed towards the pre-configured IWF. This solution decreases the signaling traffic, but increases the call setup time (up to 2·Tout, where Tout is the time-out for the SIP call). Note that the proposed solution is a compromise between the increase of signaling traffic and the call setup delay.
3 The Network Architecture
To describe the network architecture, let us start by observing what happens if, in a pure SIP network, a SIP user addresses a call to a user that is in an H.323 network. We suppose that the caller and called users are, respectively, a SIP user, whose address is sip:amalfi@a_sip.com, and an H.323 user, whose address is positano@b_h323.com. The address of the gateway is sip:gw.a_sip.com.
Fig. 1. (a) SIP-H323 standard interworking architecture (EP: end-point); (b) simplified structure of the modified network
The user amalfi sends a SIP INVITE message to the pre-configured SIP proxy server. As soon as the SIP Proxy Server receives the INVITE message, it tries to resolve the address contained in the To field of the message, consulting its "contact database" or by means of the DNS. If it cannot find any correspondence in the "contact database" for the user, it forwards an INVITE message to the b_h323.com domain; however, the H.323 domain cannot successfully process the message. Then the SIP Server answers the INVITE request by sending a 404 Not Found error message or a 408 Request Timeout to the amalfi user. The previous result occurs even if an IWF is introduced in the SIP architecture (Fig. 1). This is due to the fact that there is no means of making the SIP server aware of the correct route of the SIP requests through the gateway. In other words, a call initiated by a SIP client and directed to an H.323 user would give negative results because the SIP Server does not know that it could address the call via the IWF. A possible solution proposed by the IETF is to register the IWF at the SIP Server using the TRIP protocol [4]. But even in this case it is necessary to have a new SIP Server in the network which is aware of the IWF and able to interpret the TRIP protocol. By examining the previously described scenario in depth, it is possible to observe that the call failure towards an H.323 user is translated into a 404 Not Found or 408 Request Timeout error message that is first received by the SIP server and then
forwarded to the caller (sip:amalfi@a_sip.com in the previous example). Noticeably, when receiving these error messages, the server might guess that the called user belongs to the H.323 domain and try to forward the call to the IWF. This consideration is the basis of our scenario, in which a SIP call that cannot be forwarded to the called user within the SIP domain is relayed through the IWF to the H.323 domain. For this purpose, a new software component has to be introduced, the SSFI (SIP Server Functional to Interworking). The SSFI can be seen as a very simple and stateless SIP proxy server that just forwards all incoming messages (both requests and responses). The only functionality that it implements is to look for 404 Not Found or 408 Request Timeout error messages and, when one of these messages is received, to translate it into a 302 Moved Temporarily redirection message with the address of the IWF in the Contact field. Any other message that arrives at the SSFI is just forwarded to the client. In order to minimize the impact on the original architecture, the new SSFI can be introduced simply by configuring the SIP terminals to use the SSFI as their default proxy. As an example, if a terminal uses as its default outbound proxy a SIP server at the address A1:P1 (Address:Port), when using the new SSFI module, the latter is configured to accept messages on A1:P1, while the original SIP server will accept messages at A2:P2. If the SSFI runs on the same machine as the SIP proxy server, then A2 = A1 and P2 is one of the other ports available on the server. Obviously the SSFI should forward every incoming call (from SIP user clients) towards the original SIP server using the socket A1:P2 (or, equivalently, A2:P2). We suppose that the SSFI and the SIP proxy are running on the same system. The message flow between the SIP nodes is as follows: an INVITE message sent from the caller reaches the SSFI, and the SSFI forwards it to the SIP server; if the SSFI does not receive a 200 OK or 404 Not Found message within a time t, it starts trying to route the call towards the H.323 network by means of the IWF. We set this time t to Tout/2 (note that this is not necessarily the optimal choice) [8].
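A minimal model of this relay rule (our sketch; SIP messages are reduced to plain dictionaries rather than real SIP syntax, and the gateway URI is the example address used above) could be:

```python
# Minimal model of the SSFI relay rule (our sketch: SIP messages are plain
# dicts, not real SIP syntax). Failures from the SIP side are turned into a
# redirection towards the IWF; everything else passes through untouched.

IWF_URI = "sip:gw.a_sip.com"  # pre-configured gateway address (from the text)

def ssfi_relay(response):
    if response["status"] in (404, 408):   # Not Found / Request Timeout
        return {"status": 302,             # Moved Temporarily
                "reason": "Moved Temporarily",
                "contact": IWF_URI}        # caller will re-INVITE via the IWF
    return response                        # stateless pass-through otherwise

print(ssfi_relay({"status": 404, "reason": "Not Found"}))
print(ssfi_relay({"status": 200, "reason": "OK"}))
```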
4 Temporal Diagram
A client in the home SIP network sends an INVITE. The client asks positano@b_h323.com to establish a two-party conversation. The SSFI accepts the INVITE request and forwards it to the SIP proxy server. Both the SSFI and the Proxy Server set a time-out counter. When the SIP proxy counter reaches the maximum value (Tout), the INVITE request is canceled. If the SIP proxy finds the called user before Tout/2, the terminal will send a 200 OK message. If a 404 Not Found message is sent to the SIP proxy before Tout/2, then the SSFI begins a new calling process in another network using the gateway. The SSFI does not forward this response, but replies to the caller with the status code 301 (Moved Permanently) or 302 (Moved Temporarily), specifying the IWF location in the Contact field. The caller then sends a new INVITE request to the SSFI with the Request-URI set to the address specified in the Contact field.
Fig. 2. (a) Successful transaction at the SSFI; (b) Not Found in the SIP network; (c) a successful response from the H.323 side, after the expiry of the first Tout/2
If no message arrives at the SSFI within a time Tout/2, it starts a parallel search in the H.323 network. The SSFI then sends a new INVITE request to the SIP proxy with the same To (including tags), From (including tags), Call-ID, and CSeq fields, but with a different Request-URI, and resets the time-out counter. The Request-URI of the INVITE request is set to the IWF URI. For the SIP proxy this request corresponds to a new transaction, and it should be proxied. The "branch" parameter in the new INVITE is set to a different value; this token must in fact be unique for each distinct request. The SSFI uses the value of the "branch" parameter to match responses to the corresponding requests. CANCEL and ACK requests must have the same branch value as the corresponding requests they cancel or acknowledge. In this state, if a "not found" message arrives from the SIP network within Tout/2 seconds, the SSFI will remain in a "wait" state. If a "not found" message arrives also from the IWF, the SSFI will forward it to the SIP proxy, which will close the session. If a 200 OK message arrives from one of the two networks, the SSFI will forward it as usual and, if necessary, will send a CANCEL message to the other network. The CANCEL message must be sent if a positive response arrives during the next Tout/2 seconds. Just as an example, if we suppose that a 200 OK response arrives from the IWF within the next Tout/2 seconds, the SSFI must send a CANCEL message to the SIP proxy. If neither the 200 OK message nor the "not found" message arrives from either of the two sides, the SIP proxy server will close the session after 3/2·Tout. We note that, if a 200 OK message arrives from the IWF, it is possible to update the DB of the SIP proxy server in order to route future calls addressed to the called user directly to the IWF. The SSFI could perform this update by sending special REGISTER messages to the SIP proxy.
5 SSFI: State Machine
The idle state of the SSFI is T (Transparent). When the SSFI receives an INVITE message, its state changes to S (SIP context) and its counter is set. In the S state, when a 200 OK arrives from the SIP network, the SSFI goes back to the T state; when a 404 Not Found message arrives instead, the SSFI goes into the H state (H.323 context).
Fig. 3. State machine of the SSFI: states T (Transparent), S (SIP context), H (H.323 context) and W (Waiting); transitions are triggered by 200 OK and 404 Not Found messages from the SIP and H.323 sides and by the Tout/2 and 3/2 Tout timeouts
In the H state, the SSFI begins a new session by sending a "moved" message to the caller. When either a 200 OK or a 404 Not Found message is received, the SSFI goes back to the idle state T. Furthermore, when in the S state, after Tout/2 the SSFI reaches the W (Waiting) state. In this state, if a 404 Not Found message arrives from the SIP network, the SSFI continues waiting for a response from the gateway.
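The four-state behaviour described above and in the previous section can be summarised in a few lines of code. The following Python sketch is illustrative only; the event labels are simplified names for the messages and timeouts of the main text, not SIP-stack API calls.

    # Compact sketch of the SSFI's four states:
    # T = transparent/idle, S = SIP context, H = H.323 context, W = waiting.

    class SSFI:
        def __init__(self):
            self.state = "T"

        def on_event(self, event):
            if self.state == "T" and event == "INVITE":
                self.state = "S"                 # SIP-side search starts, timer armed
            elif self.state == "S":
                if event == "200_OK_SIP":
                    self.state = "T"             # call set up within the SIP domain
                elif event == "404_SIP":
                    self.state = "H"             # redirect the caller towards the IWF
                elif event == "TIMEOUT_Tout/2":
                    self.state = "W"             # parallel search in the H.323 network
            elif self.state == "H" and event in ("200_OK_H323", "404_H323"):
                self.state = "T"
            elif self.state == "W":
                if event == "404_SIP":
                    pass                         # keep waiting for the gateway
                elif event in ("200_OK_SIP", "200_OK_H323",
                               "404_H323", "TIMEOUT_3/2_Tout"):
                    self.state = "T"             # forwarding/CANCEL handled elsewhere
            return self.state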
6 Conclusions
In this paper the problem of interworking between SIP and H.323 networks has been considered. The problem of call forwarding across domains arises for calls generated in a SIP domain and directed to an H.323 domain. A simple solution has been proposed and described, taking into account in particular the problem of backward compatibility with previously installed SIP and H.323 networks and legacy systems. For this reason, the proposed solution does not use new protocols between signaling systems and does not require any modification of either SIP/H.323 terminals or servers. The call can be forwarded to both domains in a serial or parallel manner; a compromise is chosen in order to balance the generated signaling traffic against the call-setup delay.
References
[1] "Packet based multimedia communication systems", Recom. H.323, ITU-T, Feb. 1998
[2] "Call Signaling Protocols and Media Stream Packetization for Packet Based Multimedia Communications System", Recom. H.225.0, Version 2, ITU-T, Feb. 1998
[3] "Control protocol for multimedia communication", Recom. H.245, ITU-T, Feb. 1998
[4] J. Rosenberg, H. Salama, "Usage of TRIP in Gateways for Exporting Phone Routes", March 2000
[5] K. Singh, H. Schulzrinne, "Interworking Between SIP/SDP and H.323", May 12, 2000
[6] H. Agrawal, R. R. Roy, et al., "SIP-H.323 Interworking Requirements", February 22, 2001
[7] H. Agrawal, R. R. Roy, et al., "SIP-H.323 Interworking", July 13, 2001
[8] H. Schulzrinne, J. Rosenberg, et al., "SIP: Session Initiation Protocol", July 20, 2001
High Router Flexibility and Performance by Combining Dedicated Lookup Hardware (IFT1), off the Shelf Switches and Linux

Christian Duret1, Francis Rischette1, Joël Lattmann1, Valéry Laspreses1, Pim Van Heuven2, Steven Van den Berghe2, and Piet Demeester2

1 France Telecom R&D, Issy les Moulineaux, France
{christian.duret, francis.rischette, joel.lattmann, valery.laspreses}@francetelecom.com
2 IMEC, Ghent, Belgium
{pim.vanheuven, svdberg, demeester}@intec.rug.ac.be
Abstract. In this paper we propose a new router architecture that combines flexibility and performance. This architecture aims at combining the best of two worlds: commercial routers, which have a proven track record of stability and performance but lack flexibility, and routers built on an open source operating system. The latter are particularly flexible because the source code is accessible for analysis and modification, as opposed to traditional commercial routers, whose software can be altered only by their manufacturers.
1 Motivation and State-of-the-Art
The exponential growth of Internet traffic has driven a dramatic development effort in IP router technology. Moreover, the deployment of value-added IP service offerings (ranging from QoS-based access to the Internet to real-time services, like IP videoconferencing) has led to substantial development of specific capabilities (traffic conditioning, marking, scheduling and metering) that are supported by some, if not all, of the routers of the Internet. The consequence of activating such enhanced capabilities is twofold: a demand for increased router switching performance, together with the availability of multi-functional and multi-service routers. Other important concerns are IP security, multicast, and Virtual Private Network services. Therefore, IP routers exploited in a multi-service environment need to be flexible enough to address current and future requirements. For the past decade, Linux has received considerable interest not only from the research community, but also from industry. An extensive description of Linux features and a related bibliography can be found in [1]. Recently, an implementation of DiffServ over MPLS [2] has been released by some of the authors of this paper.
1 IP Fast Translator
The main issue raised by the use of Linux-based routers is their switching and forwarding performance: it is bounded by the CPU and is difficult to predict, since both the data and the control planes run on the same CPU. Another problem is the interrupt overhead; note that alternatives based on polling exist [3]. Even more important is that most of these routers are built around a commodity PC, and therefore inherit its shared-bus limitations. Commercial routers provide more than acceptable switching performance. Their main drawback is their lack of flexibility: whenever an IETF standard is not yet implemented, and/or some functionality is missing, it becomes necessary either to rely on the roadmap of a given manufacturer for the introduction of new features, or to add adaptation boxes where feasible. Commercial routers whose architecture is based upon a high-performance CPU and interface cards linked together by a shared bus are no longer sufficient to keep pace with the constant increase of Internet traffic, which outstrips Moore's law. A new class of components dedicated to high-speed network-layer processing has emerged over the past year: the network processor. Unfortunately, network processors are clearly designed in a way opposite to the Linux paradigm.
2 The IFT-Based Experimental Router
Several years ago, FTR&D started a research program on high-speed networking techniques, initially deployed within an ATM context, to address the above-mentioned issues. One way to address the switching performance issue is system optimization. Looking at a conventional router, one can see that less than 5% of the system software runs in the data path, but is responsible for more than 95% of the execution time. Only a small part of the related functions has to be "wired" to reach the performance level needed today, this level being around 1.5 x 10^6 packets/second per Gigabit/s of bit rate at the interface level. Among these functions, classification ("the process by which a data packet is examined and policy decisions are made which affect down-stream processing" [4]) is a critical one, and it clearly requires as much flexibility as a purely software-based implementation to handle forwarding decisions, filtering such as Access Control Lists (ACL), an increasing set of encapsulation headers, forthcoming protocols (IPv6), and so on. Generally speaking, the incoming frame is characterized by a set of fields within a succession of headers, whose respective contents may be analyzed against a set of patterns. Each individual analysis is defined by the position of the field within the frame, the set of patterns against which to compare the content of the field, and an action to be performed in the case of a match (either a link that leads to another field to be analyzed, a final result that indicates where to send the packet, or a default treatment). Figure 1 below shows an example of such behavior for basic IP forwarding. The implemented lookup process is basically a "multibit trie" allowing for exact, range, or longest match. An extensive survey of lookup algorithms can be found in [5]. The complexities reported for this lookup scheme are:
Worst-case lookup time: O(W)
Worst-case update time: O(W/K + 2^K)
Worst-case memory size: O(2^K NW/K)
where N is the number of entries, W the length of the address, and K the size of the bit slice (or "stride" according to [5]).
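A minimal sketch of such a fixed-stride multibit-trie lookup is given below, assuming a node layout of (child table, stored result) with 2^K entries per table; this layout is an illustration, not the IFT's actual pattern-store format.

    K = 4  # stride in bits; a W-bit address is consumed K bits at a time

    def trie_lookup(root, addr, width=32):
        """Longest-match walk: record each node's stored result, then descend
        one K-bit slice at a time until no deeper entry exists."""
        node, best = root, None
        shift = width
        while node is not None:
            children, result = node
            if result is not None:
                best = result                  # longest match seen so far
            if children is None or shift == 0:
                break
            shift -= K
            node = children[(addr >> shift) & ((1 << K) - 1)]
        return best

    # Toy table: a default route at the root and one deeper entry under slice 0.
    leaf = (None, "if1")
    root = ([leaf] + [None] * ((1 << K) - 1), "default")
    print(trie_lookup(root, 0x0A000001))   # -> "if1" (first 4-bit slice is 0)
    print(trie_lookup(root, 0xF0000000))   # -> "default" (no deeper entry)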
Fig. 1. Successive header fields to be processed for basic IPv4 forwarding. The shaded areas within the incoming frame are the header fields that are analyzed through the IFT. The sequence of analyzed fields and the related counter updates are fully defined by the pattern store memory, which implements a finite state machine whose transitions are triggered by the incoming packet (upper part of the figure). A match may also trigger external processes to keep track of layer succession, check the header, update counters, and update the checksum, Time To Live (TTL) and Differentiated Service Code Point (DSCP). The IFT analysis result is mapped onto a VCI (Virtual Channel Identifier) value that implicitly designates the output port. These processes, depicted in the lower part of the figure, are protocol-dependent.
The worst-case lookup time is 120 ns for IPv4 addresses in the present hardware implementation, to be compared with the 2.99 µs reported in [5] for a software implementation executed on a 200 MHz Pentium Pro based computer under Linux. By nature, there is neither a layer nor a field restriction in the analysis: upper layers may be processed through linked tables. The worst-case lookup time is 345 ns for a basic TCP-UDP/IP 5-tuple, to be added to the regular forwarding process time. This classification is performed by implementing a "set-pruning trie" data structure, according to the taxonomy proposed in a recent survey of algorithms for packet classification [6]. The properties of this structure are:
Worst-case lookup time: O(dW)
Worst-case memory size: O(dN)
where d is the "dimension" of the classifier, that is to say the number of header fields of W-bit length on which a number N of classification rules apply. The large amount
of memory is due to the fact that some fields may need as many as dN tables to ensure that every matching rule for a given field will be traversed, depending upon the result of the analysis of the previous field. Neither backtracking nor linear search is needed, allowing each relevant field to be analyzed only once, on the fly. Backtracking, as implemented in "grid-of-tries" [7], reduces the storage complexity to O(NdW) at the expense of an O(W^(d-1)) worst-case lookup time complexity. Incoming packets are analyzed at line rate by reading the IFT control memory. A software driver running on the IFT host is in charge of writing it. This driver offers a set of updating functions: insertion and removal of patterns. It constantly maintains the global consistency of the control memory, without the need for recurrent table reorganization. The IFT driver works on a logical copy of the IFT memory and performs incremental updates; thus the memory bandwidth required for update operations is several orders of magnitude lower than the bandwidth required by incoming packet processing.
Fig. 2. Examples of packet processing. IFT-only functionality: the IFT runs a copy of the kernel Forwarding Information Base (FIB). A datagram whose destination address has been recognized is forwarded directly by the IFT to the switch fabric, where it is forwarded to the output interface that leads to the next hop associated with the contents of the destination address field of the datagram. Linux control path functionality: a datagram destined to the router is forwarded to the router's Linux host. The Linux kernel then processes this datagram. For example, if it is an Internet Control Message Protocol (ICMP) Echo Request message, the kernel sends an Echo Reply message back to the originating host through the switch fabric. Linux control and IFT configuration functionality: a datagram destined to the router is forwarded towards the router's Linux host. The Linux kernel sends this datagram up to the application layer. For example, an Open Shortest Path First (OSPF) Link State Advertisement (LSA) packet is sent to the routing daemon. The daemon updates the kernel FIB if needed. The corresponding message is then copied into the IFT control memory.
Communication within the IFT-based experimental router is performed through an ATM switch fabric that directs IFT-processed packets towards external interfaces (most packets) or internal interfaces for handling by the Linux host and the control plane processes. Thus, aside from the IFT driver, the role of the Linux host is threefold:
In the data plane, processing the datagrams that were sent to the Linux kernel by the IFT module (such as time-exceeded datagrams, datagrams containing options, or those directly addressed to the router); Running the control plane functionality; Configuring the IFT forwarding table through a user relay application. The control path functionality is comparable to a regular Linux-based router. Figure 2 gives the three possible scenarios that can occur when a packet enters the experimental router. Routing protocol packets are an important example, because these packets can update the routing table inside the Linux component. These changes have to be reflected in the IFT forwarding table too. This leads to the third role of the Linux component: the configuration tasks, which consist of mapping the traffic classifier configuration commands and routing updates, received through netlink sockets [8], onto IFT header pattern entries. This has the advantage that software-based routing daemons can be re-used on the experimental platform without any modifications; a sketch of this mirroring is given below.
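The following rough sketch illustrates this third role; the route_events() source and the ift driver methods are hypothetical stand-ins for the parsed netlink messages and the real IFT driver interface.

    # Sketch: mirror kernel FIB changes into the IFT control memory.
    # route_events() yields parsed route-change events (e.g. from netlink
    # RTM_* messages); 'ift' stands in for the IFT driver, which applies
    # incremental updates on a logical copy of the control memory.

    def mirror_fib(route_events, ift):
        for event in route_events:
            if event.kind == "add":
                ift.insert_pattern(event.prefix, event.length,
                                   result=event.next_hop)
            elif event.kind == "del":
                ift.remove_pattern(event.prefix, event.length)
            # consistency is maintained by the driver itself, without
            # wholesale table reorganisation (see main text)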
3 Future Work
The present router design is based upon an ATM switch. Ongoing developments include the support of Fast and Gigabit Ethernet interfaces. The architecture described in this paper applies to a design based upon an Ethernet switch as well. In this case, the IFT analysis result, instead of being mapped onto an ATM connection, is mapped onto a Medium Access Control (MAC) frame whose Destination Address field designates either a host, a gateway, or the Linux host itself. Additionally, most Gigabit Ethernet switches provide priority queuing mechanisms through the implementation of the IEEE 802.1p standard, which may be useful for implementing DiffServ-based routing and QoS mechanisms. The IFT developments have been considered for the implementation of a Multimedia Switch Router [9]. Security applications are also considered [10]. Another application of this kind of platform could be admission control facilities based upon "on the fly" identification of elastic and streaming flows [11].
4 Conclusion
In this paper we explained that current market trends push for both flexible and high-performance routers. Current router options are either high performance (commercial routers) or flexible (open source-based PC routers). As a solution to this problem, we propose a router architecture that combines fast dedicated lookup hardware, off-the-shelf switches, and the Linux OS. The combination of these components provides: A performance level comparable to the switching performance of commercial routers; Scalability through the use of off-the-shelf switching fabrics (currently ATM, Fast and Gigabit Ethernet later on);
Flexibility at the control path equal to that of an open source PC router; The extensive developer support that has been engaged on Linux-based routers; A clear separation between the forwarding and control planes.
Acknowledgements. This work was partly undertaken in the Information Society Technologies (IST) TEQUILA project, partially funded by the Commission of the European Union. Part of the work is also sponsored by the Flemish Government through two IWT scholarships. The authors would also like to thank the rest of the TEQUILA colleagues who have contributed to the ideas presented in this paper, and Jacques Le Moal and Jean Louis Simon (France Telecom R&D) for their contribution to the IFT project.
References
[1] D. Griffin (editor), "D2.1: Selection of Simulators, Network Elements and Development Environment and Specification of Enhancements", http://www.ist-tequila.org
[2] P. Van Heuven, S. Van den Berghe, T. Aernoudt, P. Demeester, "RSVP-TE daemon for DiffServ over MPLS under Linux", http://dsmpls.atlantis.rug.ac.be
[3] B. Chen et al., "The Click Modular Router Project", http://www.pdos.lcs.mit.edu/click/
[4] "Programming & Reprogramming: Keeping the Speed without Losing your Mind", Network Processor Summit, Networld+Interop 2000
[5] M. A. Ruiz-Sanchez, E. W. Biersack, W. Dabbous, "Survey and Taxonomy of IP Address Lookup Algorithms", IEEE Network, March/April 2001
[6] P. Gupta, N. McKeown, "Algorithms for Packet Classification", IEEE Network, March/April 2001
[7] V. Srinivasan et al., "Fast and Scalable Layer Four Switching", Proc. ACM SIGCOMM, Sept. 1998
[8] G. Dhandapani, A. Sundaresan, "Netlink Sockets - Overview", http://qos.ittc.ukans.edu/netlink/html/
[9] M. Accarion, C. Boscher, C. Duret, J. Lattmann, "Extensive Packet Header Lookup at Gb/s Speed for an Application to IP/ATM Multimedia Switch Router", World Telecommunication Congress - International Switching Symposium, Birmingham, May 2000
[10] O. Paul, M. Laurent, S. Gombault, "A Full Bandwidth ATM Firewall", Proc. of the 6th European Symposium on Research in Computer Security, Toulouse, France, October 2000
[11] N. Benameur, S. Ben Fredj, S. Oueslati-Boulahia, J. Roberts, "Integrated Admission Control for Streaming and Elastic Traffic", in M. Smirnov, J. Crowcroft, J. Roberts, F. Boavida (Eds.), "Quality of Future Internet Services", Springer, LNCS 2156, 2001
Group Security Policy Management for IP Multicast and Group Security

Thomas Hardjono1 and Hugh Harney2

1 VeriSign Inc., 401 Edgewater Place, Suite 280, Wakefield, MA 01880, USA
[email protected]
2 Sparta Inc., Secure Systems Engineering Division, 9861 Broken Land Parkway, Suite 300, Columbia, MD 21046, USA
[email protected]
Abstract. The current work focuses on the area of group security policy within secure IP multicast and secure group communications. The work explains the background and context, introduces a Group Security Policy Framework, and describes how this fits within the broader Multicast Security Framework developed within the IETF. Finally, the current status of developments within group security policy in the IETF is discussed.
1 Introduction
Group communications, also commonly called multicast, refers to communications in a group where messages can be sent by any member and are received by all members. Such groups range from mailing lists to conference calls to IP multicasting. Often the need for data protection arises, which requires the group to handle messages in a consistently secure manner. To accomplish this, cryptographic mechanisms and security policy must be shared and supported by the group as a whole. Because of this, special problems arise in managing the cryptographic and policy material as it changes or as the group changes. The current work discusses the need for policies and policy management for secure groups, placing the discussion in the context of the SMuG/MSEC Framework for Multicast Security in the IETF. It describes the Multicast Security Framework and identifies the entities and interactions involved in group security policy management. It then focuses on a framework for group policy management for secure groups, and explains the current status of developments in the IETF.
Fig. 1. Group Security Policy Framework: centralized and distributed designs, comprising the Group Owner/Creator, the Policy Repository, Policy Servers, the Group Controller + Key Server, an announcement mechanism, and Members (1-to-M and M-to-M communication)
2 Group Security: Background & Framework
There is significant interest in the networking industry and the content delivery network (CDN) industry in using IP multicast as a vehicle for data delivery to a large audience. One major hindrance to the successful deployment of IP multicast and other group-oriented communication protocols has been the lack of security for both the content and the content-delivery infrastructure. To this end, the IETF designated in mid-1998 the creation of the Secure Multicast Group (SMuG) under the umbrella of the Internet Research Task Force (IRTF) to research and develop protocols for multicast security. This IRTF group was formalized into an IETF Working Group, called Multicast Security (MSEC), early in 2001. The architecture and designs developed within SMuG have largely been
carried over into the MSEC WG with the aim of further refining and formalizing them into specifications for a set of standards documents (RFCs). The Secure IP Multicast Framework and Building Blocks document [HCBD00] of the IETF describes a number of entities which participate in the creation, maintenance, and removal of secure multicast groups. Those of concern for group security policy are the Group Controller and Key Server (GCKS), the Group Policy Server (GPS) and the Member (Receiver and Sender). The Framework of [HCBD00] identified three broad problem areas that need to be addressed: group key management, data/content handling (i.e., treatment of messages in a cryptographic context) and group policy. It is the latter problem area that is of interest here, and it will be further discussed in the following sections.
3 Group Security Policy Framework
The intent of the Framework of [HCBD00] is to present a high-level roadmap for the development of technologies that implement group and multicast security. Thus, to that extent, it was intended that each problem area would develop its own specific, focused framework or architecture. An example of a more focused architecture is the one for group key management reported in [HBH00, BCD01]. In the following section, we discuss a framework for group security policy, using the Framework of [HCBD00] as the starting point. Figure 1 shows a framework for group security policy where additional entities (over those in [HCBD00]) have been introduced relating to group policy. Both centralized and distributed designs are still shown, though slightly skewed to emphasize the distributed designs involving the policy-related entities.
3.1 Group Owner/Creator (GOC)
The Group Owner/Creator (GOC) represents the entity that is understood by all participants and entities in the network as the ultimate controller of the secure group. The entity is understood as having, among others, the following tasks:
Defining group policy: The GOC defines all types and levels of policies pertaining to the group. This assumes that the network infrastructure for policy creation and assignment exists and can be deployed.
Setting up network services: As the creator/owner of a group, the GOC is assumed to also have network resources at all necessary layers of the network to enable the running of the group.
Defining membership: The GOC defines the constituency of the group which it is setting up. The basis of the membership of the group can be loose or tight, using host/user identity, IP addresses, certificates, or even a predefined access control list.
Sending out announcements/invitations: The GOC is also responsible for putting out an announcement or call to join through the mechanisms it selects. This could be IP broadcast or multicast, advertising on a website, or other mechanisms.
Terminating groups: The GOC is also responsible for concluding a secure group, particularly if that group consumes (network) resources.
3.2 Group Policy Servers (GPS)
The Group Policy Server represents the entity that holds available the policies pertaining to groups. This information can be split into the policy items available to the general public (non-members) and those available only to designated members of a group.
Publicly available policy items: This is information pertaining to a secure group that has been previously announced through some public medium and which can be used by hosts/users to evaluate their eligibility to join a group.
Private policy items: This is information that is only available to entities that have passed the membership eligibility test. The policy items may represent additional group-related policies that a (strongly authenticated) member needs to know in order to proceed further with participating in the group.
3.3 Group Policy Repository (GPR)
The Group Policy Repository (GPR) has the function of storing the secure group policies, each with a suitable protection level and with access subject to appropriate authorization. Typically, authorization to access the GPR is provided only to the Group Owner/Creator (read/write/modify) and to the GCKS and Policy Servers (read). The first aim of the GPR is to make the policies pertaining to secure groups available on-line. The same is true for GCKSs. The second aim of the GPR is to allow dynamic update of policies by the Group Owner/Creator in cases where updating some policies does not endanger a group in progress.
3.4 Group Policy Announcement Mechanisms
The Group Policy Announcement (GPA) is a functionality aimed at making information about groups available to the intended recipients of such announcements. In the case of a Closed Secure Group, the announcement's intended recipients are the members pre-selected by the Group Owner/Creator. In the case of an Open Secure Group, the announcement will be readable by the public.
4 Group Security Policy Token
Current work in the IETF has so far focused more on how to define and represent security mechanism policies in the context of IP multicast security, where IP multicast is seen as the primary transport for group-oriented communications. The Group Security Policy Token (GSPT) [HHMCD01] is a structure that represents the security mechanisms (and their parameters) used within a secure group (Figure 2). Not all elements of a GSPT for an instance of a group are made public through the announcement. The work of [HHMCD01] is a continuation of earlier work on group policies within the framework of GSAKMP [HCHMF01]. The elements of a GSPT (or categories in [MHCPD00]) specify the policies that are to be followed by members of a group, and consist of the following:
Policy Identification: A group must have some means by which it can identify an instance of Group Security Policy in an unambiguous manner.
Authorization for Group Actions: A Group Security Policy must identify the entities allowed to perform actions that affect group members.
Access Control to Group Information: Access control policy defines the entities that will have authorization to hold the key protecting the group data.
Mechanisms for Group Security Services: Identification of the security services used to support group communication is required. For example, policy must state the algorithms used to derive session keys and the types of data transforms to be applied to the group content.
Verification of Group Security Policy: Each policy must present evidence of its validity.
Fig. 2. GSPT Structure: a Token ID (version, group protocol, group ID, timestamp), Authorization (group owner, GCKS, rekey GCKS), Access Control (permissions, access), Mechanisms (category 1-3 security associations), and a Digital Signature (signer ID, certificate info and data)
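For illustration, the GSPT fields shown in Fig. 2 could be encoded as follows; the field names follow the figure, but the concrete types are assumptions made for the example rather than the draft's wire format.

    # Illustrative encoding of the GSPT fields of Fig. 2 (types are assumed).

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GSPT:
        # Token ID: unambiguous identification of this policy instance
        version: int
        group_protocol: str
        group_id: str
        timestamp: float
        # Authorization: entities allowed to perform group-affecting actions
        group_owner: str
        gcks: str
        rekey_gcks: str
        # Access control: who may hold the group data-protection key
        permissions: List[str] = field(default_factory=list)
        # Mechanisms: security associations for key derivation/data transforms
        security_associations: List[str] = field(default_factory=list)
        # Digital signature: evidence of the token's validity
        signer_id: str = ""
        certificate: bytes = b""
        signature: bytes = b""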
5 Remarks and Conclusion
The current short paper has discussed the need for policies and policy management for secure groups, placing the discussion in the context of the SMuG/MSEC Framework for Multicast Security in the IETF. The work then presented a more policy-focused framework/architecture using these existing entities, while introducing others that are relevant to group security policy management. The Group Security Policy Token (GSPT) was then presented and discussed. The GSPT represents the current status of development in the IETF MSEC Working Group with respect to group security policy.
References
[BCD01] M. Baugher, R. Canetti, L. Dondeti, Group Key Management Architecture, draft-ietf-msec-gkmarch-00.txt, June 2001, Work in Progress.
[HBH00] H. Harney, M. Baugher, T. Hardjono, GKM Building Block: Group Security Association (GSA) Definition, draft-irtf-smug-gkmbb-gsadef-01.txt, September 2000, Work in Progress.
[HHMCD01] T. Hardjono, H. Harney, P. McDaniel, A. Colgrove, P. Dinsmore, Group Security Policy Token, draft-ietf-msec-gspt-00.txt, IETF, Work in Progress, Sept 2001.
[HCBD00] T. Hardjono, R. Canetti, M. Baugher, P. Dinsmore, Secure IP Multicast: Problem Areas, Framework and Building Blocks, draft-irtf-smug-framework-01.txt, September 2000, Work in Progress.
[HCD00] T. Hardjono, B. Cain, N. Doraswamy, A Framework for Group Key Management for Multicast Security, draft-ietf-ipsec-gkmframework-03.txt, August 2000, Work in Progress.
[HCHMF01] H. Harney, A. Colegrove, E. Harder, U. Meth, R. Fleischer, Group Secure Association Key Management Protocol (GSAKMP), draft-ietf-msec-gsakmp-sec-02.txt, December 2001, Work in Progress.
[MHCPD00] P. McDaniel, H. Harney, A. Colgrove, A. Prakash, P. Dinsmore, Multicast Security Policy Requirements and Building Blocks, draft-irtf-smug-polreq-00.txt, November 2000, Work in Progress.
[SMuG-MSEC01] www.securemulticast.org
Issues in Internet Radio

Yasushi Ichikawa, Kensuke Arakawa, Keisuke Wano, and Yuko Murayama

Dept. of Software and Information Science, Iwate Prefectural University
152-52 Sugo, Takizawa, Takizawa-mura, Iwate, Japan
{ichikawa,araken}@comm.soft.iwate-pu.ac.jp, [email protected], [email protected]
Abstract. The World-Wide Web (WWW) now serves as the infrastructure of the Internet for multimedia applications. Internet radio is one of those applications, and its growth is explosive. We have been operating an Internet radio station with streaming music services since April 2000. An Internet radio station can broadcast music over the network without the geographic restrictions of traditional radio systems. The Internet also brings its own problems and service opportunities. This paper reports on our operation as well as these issues, and proposes some novel radio services.
1 Introduction
During the 80s, the question was which applications would make the best use of the Internet. We now know that the answer is the WWW. The Internet has grown dramatically since the WWW was introduced at the end of the 80s. Indeed, the WWW is now considered the infrastructure of the Internet for multimedia applications. Internet radio is one of those applications. The growth of the number of Internet radio stations is explosive: there are more than 5000 stations operating over the Internet, and their number has been increasing by about 1000 stations each year. The purpose of our research is to identify the issues to be dealt with by Internet radio. This paper reports our initial effort to set up a radio station as well as its operation over several months. We describe the issues and present an idea for some novel radio services.
2 Internet Radio Systems
Internet radio stations are classified into two types according to their operations: commercial ones and private ones. The private stations operate differently from the commercial ones, including the traditional radio broadcast stations. The private ones select music more from the service provider's viewpoint, whereas commercial ones need to provide the music favored by many listeners. A famous commercial station typically keeps more than 200 simultaneous listeners.
An Internet radio system has a client-server structure. A radio station has a server, and a user needs a client system such as an MP3 player. Multimedia authoring tools and the Internet have made it possible to set up private radio stations easily. There are two types of music streaming service available on the Internet. One is to download music data and then play it. The other is to play music from a music streaming server on a real-time basis. Our radio station uses the latter. There are three systems available for setting up an Internet radio system, viz. the Real System [2], Shoutcast [3], and Icecast [4]. Real System is a server for a specific client system, the RealPlayer. Most Internet radio stations use the Real System. Icecast and Shoutcast provide MP3 streaming servers. MP3 is MPEG Audio Layer 3, a compression format [5].
3 Flip Over Radio (FOR)
In this research we set up our own radio station on the Internet, called "Flip Over Radio (FOR)." At the moment we operate FOR on an experimental and private basis. We broadcast Indie music, which is original music made by unknown artists who work independently of record companies. These artists have limited opportunities for publishing their music, such that listeners can obtain information about them only from specific magazines and music stores in Japan.
Fig. 1. The operational model of FOR
Fig. 1 shows the model of our radio operation. Our Internet radio station provides an opportunity for both artists and listeners to exchange information on music and artists. Our radio site is a medium for this exchange. The artists provide the music that they composed and played, as well as the related information. We provide them with tools, such as one to make their home page and a message board, so that they can communicate with the listeners. Commercial promoters could make use of the information we provide to find new artists and music, so that an artist could have an opportunity to get a commercial contract. Table 1 shows the configuration of the server system of FOR.

Table 1. The server system of FOR
CPU: AMD K6-2 400 MHz (OverDrive Processor)
MEMORY: 48 MB
OS: Laser5 Linux 6.0
HTTP server: Apache
Streaming: Icecast [4], Shout [4]
Application: Icedj [6], Liveice [7]
Icecast is used to broadcast music. Shout selects the music to broadcast and passes the music data to Icecast. Liveice is a real-time re-encoder that passes the encoded data to Icecast; with it we can mix several MP3 streams and audio inputs from a microphone. Icedj is used to run an Icecast radio station that broadcasts music at certain times, as scheduled in a program. It can be used together with Icecast to show information about the music being broadcast on the radio station's WWW page.
4 The Operation Report
We have been operating FOR since April 2000. Fig. 2 shows the number of total user accesses per month. We have not had many users, presumably not because of the Indie music, but because of the small amount of content: users will not listen to an Internet radio station if it broadcasts the same songs repeatedly. During July and August 2001, we upgraded Icecast to the latest version, so that the registration function started working well; this function registers our radio server with an access-ranking server on the Internet. 3 percent of connections were from our university, and 20 percent of connections were from Japan. We found two requirements. One is that a user needs an easy-to-use interface; we may well need a user interface as a Java applet, so that the software is installed automatically. The other is that a large amount of content is required; a station with poor content will not have users who visit the radio station site repeatedly.
Fig. 2. User access per month (2000 - 2002)
5 User's Private Channel
We are planning to provide users with private channels, so that users can listen to their favorite music. This novel type of service is only possible on the Internet, not on traditional radio systems. Fig. 3 shows the design of a private channel. The private channel operations are as follows (a sketch of this flow is given below):
1. A user registers his/her desired channel ID and favorite music information on the web page, and receives a URL from the server.
2. The registration process sends the registered information to a database engine which selects music. The database engine makes a play list of the user's favorite music.
3. The channel-making process makes the user's private channel with the channel ID and the play list. The user accesses the URL and listens to the music with an MP3 player.
We have implemented the first part, and half of the third part, of the above list. For the second part, we are planning to construct a music database with attributes such as quiet and noisy, so that the user's favorite music tunes can be selected automatically according to the user's taste. There is a problem with this service. If we provide users with private channels on demand, we will need to run as many private channel processes as there are user requests. The more private channel processes we have, the higher the load on the server and the slower the system operation becomes. We may need to explore the tradeoff between the performance of the server and the number of private channels. We may well need dynamic channel management.
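The sketch below illustrates the three steps; the attribute-tagged music records and the URL scheme are invented for the example, since the database engine itself is still future work.

    # Sketch of steps 1-3: register a channel, build a play list from a
    # hypothetical attribute-tagged music database, expose it under a URL.

    MUSIC_DB = [  # illustrative records; the real attributes are future work
        {"title": "tune-a", "mood": "quiet"},
        {"title": "tune-b", "mood": "noisy"},
        {"title": "tune-c", "mood": "quiet"},
    ]

    CHANNELS = {}

    def register_channel(channel_id, taste):
        """Steps 1-2: store the user's taste and derive a play list."""
        playlist = [t["title"] for t in MUSIC_DB if t["mood"] == taste]
        CHANNELS[channel_id] = playlist
        return f"http://radio.example/{channel_id}"  # URL returned to the user

    # Step 3: the channel-making process would hand CHANNELS[channel_id]
    # to a per-user streaming channel created on demand.
    print(register_channel("ch42", "quiet"))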
6 Distributed Streaming
Fig. 3. The design of a private channel

If we provide our radio service from only one site, the server site will become a bottleneck as the number of users increases. We propose distributed streaming by setting up relay servers. A relay server is an application-level router which forwards the MP3 data stream. A radio station transmits a single music data stream to a relay server. A user connects to the nearest relay server. The relay server makes copies of the music data and sends them down to the users. This operation looks similar to multicast as in the Resource ReSerVation Protocol (RSVP) [8], whose multicast function operates at the network level. Content Delivery and Distribution Networks (CDNs) may be a possible tool for this [9]. CDNs provide users with access to one of several distributed servers at different locations over the Internet. The servers have a cache of an original content, and a user accesses the nearest CDN server. There are many CDN products and services. We need further investigation into the use of CDNs for the Internet radio service. We plan to provide users with private channels by making use of CDNs, serving music according to the user's taste from the user's nearest relay server. Firstly, the relay server caches the music contents of the original server. Secondly, the private channel server makes a play list according to a request from a user, and sends it to the user's nearest relay server. Thirdly, the relay server creates the channel and provides music according to the user's taste using the play list.
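A minimal application-level relay of the kind proposed here can be sketched as follows; addresses are placeholders and error handling is omitted.

    # Sketch of an application-level MP3 relay: one upstream connection,
    # N downstream listeners.

    import socket, threading

    def relay(upstream_addr, listen_port):
        clients, lock = [], threading.Lock()

        def accept_loop(server):
            while True:
                conn, _ = server.accept()
                with lock:
                    clients.append(conn)     # a new listener joins the relay

        server = socket.create_server(("", listen_port))
        threading.Thread(target=accept_loop, args=(server,), daemon=True).start()

        upstream = socket.create_connection(upstream_addr)
        while True:
            chunk = upstream.recv(4096)      # single stream from the station
            if not chunk:
                break
            with lock:
                for c in clients[:]:
                    try:
                        c.sendall(chunk)     # copy the stream to each listener
                    except OSError:
                        clients.remove(c)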
7 Related Work
Most radio stations operate on a commercial basis. Among them, the following site is one of those with many services and users: http://www.netradio.com. It has more than 100 channels. The channels are classified firstly into global categories such as pop, rock, and so on. Within a category, the channels are classified further into subcategories such as chronological groups.
The commercial sites also provide users with a shopping function, so that users can purchase CDs of their favorite music. Another radio station, http://www.wolffm.com, deals with various types of streaming, such as MP3 streaming, RealPlayer, and Windows Media Player. Our radio station provides only MP3 streaming at a 32 kbps bit rate at the moment; however, we are planning to provide some other MP3 bit rates in the future. Our radio station is managed on a private, non-profit basis with one channel, a catalogue of 150 music tunes, and some artist information at the moment. We provide only a specific type of copyright-free music and information on almost unknown artists in Japan.
8 Conclusion
This paper introduced Internet radio from the viewpoint of a service provider. Internet radio systems have many properties that differ from traditional radio systems. For example, a cultural revolution could be possible in music, since any type of music can be delivered over the Internet, and some of it would never appear in the traditional commercial media. We reported on the operation of our Internet radio station and identified some issues to be dealt with in the future. The issues include providing users with private channels and making the delivery system distributed. Future work includes examining the database engine function of the private channel, making the delivery service distributed, and providing an easier user interface. We plan to implement the required functions in a client system using a Java applet with the Java Media Framework (JMF) [10], provided in the multimedia library of Java.
References
1. Flip Over Radio: http://radio.comm.soft.iwate-pu.ac.jp
2. Real System: http://www.realnetworks.com/
3. Shoutcast: http://www.shoutcast.com
4. Icecast: http://www.icecast.org
5. Fraunhofer Research: http://www.iis.fhg.de/amm/
6. Icedj: http://www.remixradio.com/icedj/
7. Liveice: http://star.arm.ac.uk/~spm/software/liveice.html
8. L. Zhang, S. Deering, D. Estrin, S. Shenker, and D. Zappala: RSVP: A New Resource ReSerVation Protocol, IEEE Network, Vol. 7, Issue 5, pp. 8-18 (Sep. 1993)
9. B. Krishnamurthy, C. Wills, and Y. Zhang: On the Use and Performance of Content Distribution Networks, ACM SIGCOMM Internet Measurement Workshop 2001
10. JMF: http://www.java.sun.com/products/java-media/index.html
All URLs last accessed: Feb. 27, 2002.
I/O Bus Usage Control in PC-Based Software Routers† Oscar-Iván Lepe-Aldama and Jorge García-Vidal Department of Computer Architecture, Universitat Politècnica de Catalunya c/ Jordi Girona 1-3, D6-116, 08034 Barcelona, Spain {oscar,jorge}@ac.upc.es
Abstract. This paper presents a performance analysis of a fair sharing mechanism for PC-based software routers, required when the I/O bus and not the CPU is the bottleneck. The mechanism involves changes to the OS kernel and assumes the existence of certain NIC functions, but does not require any changes to the PC hardware architecture.
1 Introduction
We can define a software router as a computer that executes a program capable of forwarding IP datagrams among network interface cards (NICs) attached to its I/O bus. It is well known that software routers have performance limitations. However, due to the ease with which they can be programmed to support new functionality, software routers are still important at the edge of the Internet. Hence, the question of how to optimize software router performance arises. In addition, if we want to provide QoS guarantees for traffic going through the router, we must find a suitable way of sharing resources. In other pieces of work, the problem of fairly sharing router resources is tackled in terms of protecting [1,4] or sharing [6] the use of the CPU amongst different packets or data flows. However, the increase in CPU speed relative to that of the I/O bus makes it easy for this bus to become a bottleneck, which is why we address this problem. This paper presents our proposal for a resource sharing mechanism that allows QoS levels to be guaranteed in software routers by jointly controlling I/O bus activity and CPU operation. It is a software mechanism that does not require changes to the PC hardware architecture, introduces low overhead, and avoids intrusion. It requires that NICs have several direct memory access (DMA) channels, one for each traffic flow, working independently and having a set of descriptors that store usage information (the NIC's buffer occupancy or the total number of arrivals to the channel). Moreover, this paper presents a study of the properties of the mechanism when considered in isolation, and a system performance evaluation when the mechanism is incorporated into a software router. We concentrate on software routers built on desktop PCs running general-purpose, open source operating systems (FreeBSD), which implement networking functions within the kernel.
† This work was supported in part by the Mexican Government through CICESE Research Center's and CONACyT's grants, and by the Spanish Ministry of Science and Technology through project TIC2001-0956-C04-01.
2 A Mechanism for Implementing I/O Bus Sharing
The mechanism we propose for implementing I/O bus sharing, which we call the Bus Utilization Guard (BUG), manipulates the vacancy space of the message buffer (MBUF) reception input queue of each DMA channel, so that the overall activity on the I/O bus follows a schedule similar to one produced by a WFQ server. (From now on we refer to the I/O bus simply as the bus, and to an MBUF queue simply as a queue.) To minimize intrusion, the mechanism is activated every T cycles and is executed either by the CPU or by a suitable coprocessor placed at the AGP connector. To reduce overhead, the mechanism uses a two-state behavior: monitoring and enforcing. Assume that the mechanism is in the monitoring state at cycle k·T. Then the mechanism gathers Di,k, the number of bytes transferred through the bus during the period ((k-1)·T, k·T) by channel i. If sum(Di,k) < T/bBUS, where bBUS is the cost per bit of a bus transfer, the mechanism remains in the monitoring state and no further actions are taken. Otherwise, the mechanism detects the start of a busy period and enters the enforcing state. In this state, the mechanism polls each NIC to gather Ni,k, the number of bytes stored at the NIC associated with channel i, and computes the amount of bus utilization granted to each channel, gi,k, from the outputs Gi,k of an emulated general processor sharing (GPS) server [5] with batched arrivals. The input for the emulated GPS is the Ni,k at the start of the busy period. Afterwards, the inputs are the amounts of arrived traffic during the last period, Ai,k = Ni,k - Ni,k-1 + Di,k. BUG is work-conserving and thus
gi,k = Gi,k + (T/bBUS - (G1,k + … + GN,k))/N
(1)
Observe that sum(gi,k) = T/bBUS, a situation that can lead to an unfair share. Consequently, BUG is equipped with an unfairness-counterbalancing algorithm. This algorithm computes an unfairness level per channel and, if it detects at least one deprived flow, reduces the gi,k of every depriving flow by the corresponding unfairness value. One problem with this approach is that if unfairness is detected then
(g1,k + … + gN,k) < T/bBUS
(2)
That is, the unfairness-counterbalancing algorithm may artificially produce some bus idle time. This problem also arises when packetizing bus utilization grants, as explained shortly. Happily, a single mechanism, one that allows BUG to vary the length of its activation period, solves both problems. The length T of BUG's activation period keeps, in general, no relationship with any packet bus-transmission time, besides having to be larger than the largest one. Consequently, when packetizing utilization grants it may happen that mod(gi,k, Li) ≠ 0, where Li is the mean packet length for channel i. Hence, some rounding off is required. We have tested rounding off both down and up, and both produce particular problems; however, the former gave us a more stable mechanism. If nothing else is done, some bus idle time is artificially produced and the overall share assigned to the flow would be much less than what it should be. This problem can be solved if we let BUG reduce its next activation period length by some dt time value, where dt is the time due to
rounding off. Evidently, this increases BUG's overhead, but as long as dt is a small fraction of T the increase remains at acceptable levels. BUG switches from the enforcing to the monitoring state, resetting the emulated GPS, any time that sum(Di,k) < T/bBUS.
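Putting the pieces together, one activation of BUG might look like the following sketch; the NIC polling methods, the gps helper and the counterbalance placeholder are assumptions standing in for the driver interfaces and the algorithms of the main text.

    # Condensed sketch of one BUG activation (monitoring/enforcing cycle).
    # nic.* methods and the gps object are hypothetical stand-ins.

    def counterbalance_unfairness(grants):
        return grants   # placeholder for the per-channel unfairness algorithm

    def bug_cycle(nics, T, b_bus, state, gps):
        # D_i,k: bytes moved over the bus by each channel in the last period
        D = [nic.bytes_transferred_last_period() for nic in nics]
        if sum(D) < T / b_bus:
            return "monitoring"                  # bus below capacity: stay passive
        N = [nic.bytes_queued() for nic in nics] # N_i,k from polling each NIC
        if state == "monitoring":
            gps.reset(N)                         # a busy period starts
        G = gps.grants(N, D, T)                  # emulated batched-arrival GPS
        slack = T / b_bus - sum(G)
        g = [Gi + slack / len(nics) for Gi in G] # work-conserving redistribution
        g = counterbalance_unfairness(g)         # trim grants of depriving flows
        for nic, gi in zip(nics, g):
            # packetize: round each grant down to whole mean-length packets;
            # the rounding deficit would shorten the next activation period
            nic.set_rx_vacancy(int(gi // nic.mean_packet_len) * nic.mean_packet_len)
        return "enforcing"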
3 BUG's Dynamics
We devised a series of simulation experiments to assess the performance of a PCI bus controlled by a BUG. For all experiments we compared the responses of three simulated buses: a plain PCI bus, a WFQ bus, and a BUG-regulated PCI bus. We approximate the PCI operation by a round-robin scheduler. Operational parameters were computed after a 33 MHz, 32-bit bus. Besides, we set queue spaces to infinity and set BUG's nominal activation period to 0.1 ms. The traffic load for all experiments was composed of three packet flows, each soliciting 1/3 of the router resources. The flows differ in the size of their packets: small (172 bytes), medium (558 bytes) and large (1432 bytes). Different experiments used different inter-arrival processes to show particular behavior.

In Fig. 1(a) we show responses to unbalanced constant bit rate traffic. Each line in every chart denotes the running sum of output bytes over time. The traffic pattern is as follows. At time zero, flow 1 and flow 2 start loading the system with a load level equivalent to 50% of the PCI bus capacity each; that is, 528 Mbps. Two ms later (first arrow; 20T = 2 ms) flow 3 starts loading the system, also at 528 Mbps. Then, 2 ms later (second arrow), flow 3 multiplies its bit rate by four. From the first chart we can see that the ideal bus behavior allows a 50% bus share between flows 1 and 2 during the first 2 ms. Then, after flow 3 becomes active, it allows a 33% bus share irrespective of the load level of flow 3. From the second chart we can see that a plain PCI bus only adequately follows the ideal behavior during the first 2 ms (first arrow). Then, the round-robin scheduling deprives flow 1 in favor of flow 3. Moreover, although flow 2 is only lightly affected, it also receives more than its solicited share. After time 4 ms (second arrow) everything gets worse. From the third chart we can see that the BUG-equipped bus behaves very much like the ideal bus does. Observe that when flow 3 comes on, the reactive nature of BUG shows. For the first two activation periods, or so, flow 3 gets bus use-grants above its solicited share, depriving the other flows. But then BUG adjusts, and before 1 ms has passed all flows start receiving their solicited share. Before time 10 ms, flow 1 starts lagging a little behind flow 2. This is due to rounding-off mismatches. By algorithm definition, when this mismatch accumulates to a whole packet, BUG will allow flow 1 to catch up. We have performed more experiments like the above, varying the order of the flows and the length and size of the load changes, and we have always found consistent results.

In Fig. 1(b) and Fig. 1(c) we analyze the dynamic behavior of BUG under highly variable random load. For this pair of experiments each packet flow was driven by an on-off source. On-state period lengths were set to a constant value. Packet inter-arrival processes were Poisson with a mean bit rate equal to 3520 Mbps, or 300% of the PCI bus capacity. Off-state period lengths were drawn from an exponential random process with mean value set to 9 times the on-state period length. Consequently, each flow's overall mean bit rate was equal to 30% of the PCI bus capacity, or 352 Mbps. Besides observing the system response to this kind of traffic, with these experiments
we wanted to see if we could find any BUG pathology related to operating-mode cycles, where the continuous but random path into and out of the enforcing mode might produce some wrong behavior. Consequently, we ran several experiments with different on-off cycle lengths. Here we present results for an on-state period length of 8 times the BUG activation period T (Fig. 1(b)) and for one of 0.5T (Fig. 1(c)). In both figures, the charts, left to right, separately compare for each flow (flows 1, 2 and 3) the output processes resulting from each of the considered buses. Each line denotes the running sum of output bytes over time, and thus horizontal segments correspond to off-state periods. For reference, each chart also draws, as a running sum over time, the corresponding flow's input process. From both figures we can see that despite the traffic's fluctuations BUG follows the ideal WFQ policy quite well, while the PCI-
Fig. 1. Simulation results from the BUG dynamics study under (a) unbalanced CBR traffic and (b, c) random and highly variable traffic. BUG's behavior is contrasted with the behavior of the ideal WFQ policy and the behavior of a PCI bus (approximated by a round-robin policy). Note that each chart in (a) contrasts the output processes of the three traffic flows described in the main text for a particular scheduling policy, while in (b, c) each chart contrasts the output processes produced by the three scheduling policies for one traffic flow.
like round-robin policy again favors the largest-packet flow and most affects the smallest-packet flow. Of particular interest is what Fig. 1(c) shows us about BUG's behavior: it seems that BUG is not macroscopically sensitive to a traffic pattern that repeatedly takes it into and out of the enforcing mode.
4 System Performance Study
Here we study the performance of a PC-based software router whose PCI bus is regulated by BUG. Operational parameters for the queuing network model were determined using software profiling, as described in [2]. The target system had a 600 MHz Pentium III CPU, a 100 MHz system bus, 10 ns EDO RAM chips and a 33 MHz, 32-bit PCI I/O bus. Software-wise, the system was powered by FreeBSD 4.1.1. Measurements were not taken for the bus service times; instead, we used the description of the system operation [2]. We assume that data phases take 1 cycle and that frame transfers are never pre-empted. We have considered Poisson input traffic, with the same three-flow configuration as in the previous section. We have performed the simulation with systems configured with two different CPUs: CPU1 works at 1 GHz and CPU2 works at 3 GHz. The system's I/O bus works at 33 MHz and has a 32-bit data path. Note that for the considered traffic, the CPU is the bottleneck for the system with CPU1, while the I/O bus is the bottleneck for the system with CPU2. In Fig. 2(a) we show results for the basic software router. The left chart shows aggregated throughput for offered loads in the range [0, 1400 Mbps]. The other two charts show the share obtained by each traffic flow, first for CPU1 and then for CPU2. It can be seen that the system with CPU1 shows a linear increase of the aggregated throughput for offered loads below 225 Mbps. At this point the CPU utilization is 100% while the bus utilization is around 50%, and the system enters a saturation state. If we further increase the offered load, the throughput decreases until a livelock condition appears, at an offered load of 810 Mbps. During the saturation state most losses occur in the IP input buffer. The system with CPU2 gets its bus saturated before its CPU, at an offered load of 500 Mbps. The system behavior for increasing offered loads depends on which priorities are used by the bus arbiter. In summary, the basic system cannot provide a fair share of the resources when it is in saturation. Fig. 2(b) shows results for the system with WFQ scheduling for the CPU and the BUG mechanism controlling bus usage. We see that the obtained results correspond to almost ideal behavior: under saturation, throughput does not decrease with increasing offered loads, and the system achieves a fair share of both router resources, CPU and bus.
5 Conclusions
Under quite normal operating conditions for today's PC hardware and telecommunication links, the plain PCI bus arbitration mechanism prevents a software router from fulfilling QoS guarantees. The mechanism that we proposed, called BUG (for bus usage control), is effective in controlling the bus share between different flows.
Fig. 5. Performance results for (a) the base BSD router and (b) a router with WFQ for the CPU and BUG for the I/O bus. The charts at the left contrast the router throughput when it uses a 1 GHz CPU and a 2 GHz CPU. The charts at the middle and at the right show the throughput share obtained by each of the three flows described in the main text; the middle charts are for the router using the 1 GHz CPU, the right charts for the 2 GHz CPU.
When we use this mechanism in combination with the known techniques for CPU usage control, we obtain a nearly ideal behavior of the share of the software router resources for a broad range of workloads.
References
1. A. Indiresan, A. Mehra and K. G. Shin, "Receive Livelock Elimination via Intelligent Interface Backoff", December 1997, http://citeseer.nj.nec.com/366416.html
2. O. I. Lepe and J. García, "A Performance Model of a PC-based IP Software Router", to appear in Proc. IEEE ICC2002, April 2002.
3. M. L. Loeb, A. J. Rindos, W. G. Holland and S. P. Woolet, "Gigabit Ethernet PCI Adapter Performance", IEEE Network, 15(2): 42-47, March/April 2001.
4. J. C. Mogul and K. K. Ramakrishnan, "Eliminating receive livelock in an interrupt-driven kernel", ACM Trans. Computer Systems, 15(3): 217-252, August 1997.
5. A. K. Parekh and R. G. Gallager, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Multiple Node Case", Proc. IEEE INFOCOM 1993, pp. 521-530, vol. 2.
6. X. Qie, A. Bavier, L. Peterson and S. Karlin, "Scheduling Computations on a Software-Based Router", Proc. SIGMETRICS 2001, June 2001.
Multiple Access in Ad-Hoc Wireless LANs with Noncooperative Stations
Jerzy Konorski
Technical University of Gdansk, ul. Narutowicza 11/12, 80-952 Gdansk, Poland
[email protected]
Abstract. A class of contention-type MAC protocols (e.g., CSMA/CA) relies on random deferment of packet transmission, and subsumes a deferment selection strategy and a scheduling policy that determines the winner of each contention cycle. This paper examines contention-type protocols in a noncooperative ad-hoc wireless LAN setting, where a number of stations self-optimise their strategies to obtain a more-than-fair bandwidth share. Two scheduling policies, called RT/ECD and RT/ECD-1s, are evaluated via simulation. It is concluded that a well-designed scheduling policy should invoke a noncooperative game whose outcome, in terms of the resulting bandwidth distribution, is fair to non-self-optimising stations.
1 Introduction
Consider N stations contending for a wireless channel in order to transmit packets. In a cooperative MAC setting, all stations adhere to a common contention strategy C, which optimises the overall channel bandwidth utilisation U: $C = \arg\max_x U(x)$. In a noncooperative MAC setting, each station i self-optimises its own bandwidth share $U_i$: $C_i^* = \arg\max_x U_i(C_1^*, \ldots, C_{i-1}^*, x, C_{i+1}^*, \ldots, C_N^*)$. $C_i^*$ is a greedy contention strategy and $(C_1^*, \ldots, C_N^*)$ is a Nash equilibrium [3], i.e., an operating point from which no station has an incentive to deviate unilaterally. Note that noncooperative behaviour thus described is rational in that a station intends to improve its own bandwidth share rather than damage the other stations'. This may result in unfair bandwidth shares for stations using C. For other noncooperative wireless settings, see [1,4]. In ad-hoc wireless LANs with a high degree of user anonymity (for security reasons or to minimise the administration overhead), noncooperative behaviour should be coped with by appropriate contention protocols. A suitable communication model is introduced in Sec. 2. The considered contention protocol, named Random Token with Extraneous Collision Detection (RT/ECD), involves voluntary deferment of packet transmission. We point to the logical separation of a deferment selection strategy and a scheduling policy that determines the winner in a contention cycle. Sec.
3 outlines a framework for a noncooperative MAC setting. A scheduling policy called RT/ECD-1s is described in Sec. 4 and evaluated against the RT/ECD policy in terms of the bandwidth share guaranteed for a cooperative (c-) station (using C) in the presence of noncooperative (nc-) stations (using Ci*). Sec. 5 concludes the paper.
2 Noncooperative MAC Setting with RT/ECD
Our 'free-for-all' communication model consists of the following non-assumptions: neither N nor the stations' identities need to be known or fixed; except for detecting carrier, a station need not interpret any packet of which it is not an intended (uni- or multicast) recipient. This allows for full encryption and/or arbitrary encoding and formatting among any group of stations. To simplify and restrict the model we assume in addition single-hop transfer of packets with full hearability, and a global slotted time axis. Any station is thus able to distinguish between v- and c-slots it senses (for 'void' and 'carrier'). An intended recipient of a successful transmission also recognises an s-slot (for 'success') and reads its contents. This sort of binary feedback allows for extraneous collision detection in the wireless channel, as employed by the following RT/ECD protocol (Fig. 1). In a protocol cycle, a station defers its packet transmission for a number of slots from the range 0..D-1, next transmits a 1-slot pilot and senses the channel in the following slot. On sensing an s-slot containing a pilot, any intended recipient transmits a 1-slot reaction (a burst of non-interpretable carrier), while refraining from reaction if a v- or c-slot is sensed. A reaction designates the sender of a successful pilot as the winner and prompts it to transmit its packet in subsequent slots; a v-slot will mark the termination of this protocol cycle. If pilots collide, no reaction follows and the protocol cycle terminates with a no-winners outcome. In a full-hearability environment, RT/ECD operates much like CSMA/CA in the IEEE 802.11 Distributed Coordination Function [2], with the pilot/reaction mechanism resembling the RTS/CTS option. Note, however, that it is meant to provide ACK functionality rather than solve the hidden terminal problem; moreover, pilots only need to be interpreted by intended recipients, while reactions are non-interpretable.
Fig. 1. RT/ECD: a no-winners protocol cycle followed by one where station 4 wins (timelines of stations 1-4 over the slotted channel)
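To make the winner-determination rule concrete, the following minimal Python sketch plays out one RT/ECD protocol cycle under the full-hearability, slotted-time model; the function name and the uniform deferment draw are illustrative assumptions, not part of the protocol specification.

```python
import random

def rtecd_cycle(deferments):
    """One RT/ECD protocol cycle.

    deferments: dict mapping station -> chosen deferment (slots, 0..D-1).
    Returns the winning station, or None for a no-winners outcome.
    The earliest pilot slot decides: a lone pilot draws a reaction and
    wins; colliding pilots draw no reaction and the cycle terminates.
    """
    earliest = min(deferments.values())
    pilots = [s for s, d in deferments.items() if d == earliest]
    return pilots[0] if len(pilots) == 1 else None

# Example: 4 stations drawing uniform deferments from 0..D-1 (D = 12 assumed)
D = 12
stations = {i: random.randrange(D) for i in (1, 2, 3, 4)}
print(stations, "->", rtecd_cycle(stations))
```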
To account for noncooperative behaviour, we assume that NC out of the N stations are nc-stations that may use greedy deferment selection strategies (NC need not be known or fixed), that the c-stations use a standard deferment selection strategy S, defined by the probabilities $p_l$ of selecting a deferment of l slots ($l \in 0..D-1$), and that all stations adhere to a common scheduling policy. A simple greedy strategy might consist in introducing a downward bias $\in 0..D-1$ into the deferment distribution, e.g., $p_0' = p_0 + \ldots + p_{bias}$ and $p_l' = p_{l+bias}$ for $l > 0$. As shown in Sec. 4, this may leave the c-stations with a tiny fraction of the bandwidth share they would obtain in a cooperative setting (with NC = 0).
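A minimal sketch of this downward-bias transformation, assuming an arbitrary base distribution p over deferments 0..D-1 (mass shifted past D-1 is simply truncated; the helper name is ours):

```python
def bias_distribution(p, bias):
    """Downward-bias a deferment distribution p[0..D-1]:
    p'[0] = p[0] + ... + p[bias];  p'[l] = p[l + bias] for l > 0."""
    D = len(p)
    biased = [0.0] * D
    biased[0] = sum(p[:bias + 1])
    for l in range(1, D):
        biased[l] = p[l + bias] if l + bias < D else 0.0
    return biased

# Example: a uniform strategy over D = 12 slots with a bias of 3
print(bias_distribution([1 / 12] * 12, 3))
```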
3 Framework for a Noncooperative MAC Setting
Besides pursuing a greedy deferment selection strategy, an nc-station might try various 'profitable' departures from the protocol specification, for example, pretending to have transmitted a pilot and sensed a reaction. In RT/ECD-like protocols, however, such cheating must involve making false claims as to the presence or absence of carrier on the channel, which is easily verifiable. Therefore it suffices to design a scheduling policy so as to minimise the benefits of any conceivable greedy strategy vis-a-vis S. A greedy strategy can be expected to be isolated, i.e., not relying on collusion with other nc-stations, and rational, meaning that deferment selection rules observed to increase own bandwidth share are more likely to be applied in the future; however, to stay responsive to a variable environment, no rules are entirely abandoned [3]. A reasonable scheduling policy is constrained to be nontrivial, in that no deferments should be known a priori to render other deferments non-winning (note that RT/ECD is a counterexample, a deferment of length 0 being 'fail-safe'), and incentive compatible, in that channel feedback up to any moment in the deferment phase should not discourage further pilots (as a counterexample, imagine a scheduling policy whereby a second-shortest deferment wins). Let $U_c(NC)$ be the bandwidth share obtained by a generic c-station in the presence of NC nc-stations. A fair and efficient scheduling policy is one that ensures $U_c(NC) \succeq U_c(0) \succeq U_c(0)|_{RT/ECD}$ for any NC and any greedy strategy, where $\succeq$ reads 'not less, or at least tolerably less, than.' This means that the presence of nc-stations should not decrease a generic c-station's bandwidth share by an amount that its user would not tolerate. The latter 'inequality' implies that protection against nc-stations should not cost too much bandwidth in a cooperative setting, RT/ECD being a reference policy supposed, by analogy with IEEE 802.11, to perform well in a cooperative setting.
4 Evaluation of the RT/ECD-1s Scheduling Policy
While RT/ECD prevents any station from winning if a collision of pilots occurs, in RT/ECD-1s the first successful pilot wins no matter how many collisions precede it. A protocol cycle is illustrated in Fig. 2. A slot occupied by a pilot (or a collision of pilots) is paired with a following one, reserved for reactions. Stations whose pilots were not reacted to back off until the next protocol cycle. The lack of a second chance to transmit a pilot in the same protocol cycle creates a desirable 'conflict of interest' for an nc-station selecting its deferment. RT/ECD-1s is arguably nontrivial and incentive compatible. (A family of similar policies can be devised whereby the n-th successful pilot wins, or the last one if there are fewer than n; of these, RT/ECD-1s yields the best winner-outcome vs. scheduling-penalty tradeoff.)
Fig. 2. RT/ECD-1s protocol cycle (deferments: station 1 = 3, station 2 = 4, stations 3 and 4 = 1): stations 3 and 4 back off when no reaction follows; station 1's first successful pilot wins (deferments are frozen during reaction slots)
In a series of simulation experiments, simple models of c- and nc-stations were executed to evaluate RT/ECD-1s against the backdrop of RT/ECD. In each simulation run, D = 12, N = 10 and $NC \in 0..N-1$ were fixed and the packet size was 50 slots. A symmetric heavy traffic load was applied, with one packet arrival per station per protocol cycle. The strategy S at the c-stations used a truncated geometric probability distribution over $l \in 0..D-1$, i.e., $p_l = \mathrm{const} \cdot q^l$ with parameter q = 0.5, 1 or 2 (referred to symbolically as 'aggressive,' 'moderate' and 'gentle'). Two isolated and rational greedy strategies were experimented with at the nc-stations: Biased Randomiser (BR) and Responsive Learner (RL). The former introduced a downward bias as explained in Sec. 2; the bias value was optimised on the fly at each nc-station and occasionally wandered off the optimum to keep the strategy responsive to possible changes in other stations' strategies. The latter mimicked so-called fictitious play [5] by selecting deferments at random based on their winning chances against recently observed other stations' deferments. Once selected, a deferment was repeated consistently throughout the next update period of UP = 20 protocol cycles. For simplicity, strategies were configured uniformly within all stations of either status, producing two noncooperative game scenarios: S vs. BR and S vs. RL. Ideally, a station obtains $U_c(NC) = 1/N$ of the channel bandwidth. Scheduling penalties cause this figure to drop even in a cooperative MAC setting (at NC = 0), whereas nc-stations may bring about a further decrease. For the S vs. BR scenario, Fig. 3 (left) plots $U_c(NC)$ (normalised with respect to 1/N) as measured after the nc-stations have
Fig. 3. C-station bandwidth share Uc(NC) (normalised to 1/N) as a function of NC; left: S vs. BR, right: S vs. RL. Curves are plotted for the 'aggressive,' 'moderate' and 'gentle' settings of q under both RT/ECD-1s and RT/ECD.
Fig. 4. RL vs. RL: Stackelberg 'leader' scenario. Per-station bandwidth shares Ui over protocol cycles for the 'leader' and the non-leader average, under RT/ECD-1s and RT/ECD.
reached a Nash equilibrium with respect to bias. Note that while RT/ECD-1s is generally superior to RT/ECD, much depends on the parameter q: the 'gentle' value is not recommended, especially for a small N, while for the 'aggressive' value, the nc-stations detect that the optimum bias is 0, hence $U_c(NC)$ remains constant. Also, RT/ECD-1s has difficulty coping with NC = 1. Fig. 3 (right) presents similar results for the S vs. RL scenario. Observe that under RT/ECD-1s, the nc-stations' increased intelligence does not worsen $U_c(NC)$ significantly, which it does under RT/ECD. Again, much depends on q: although the 'moderate' value pays off in a cooperative setting, the 'aggressive' value offers more uniform guarantees for $U_c(NC)$ across various N. Lose-lose situations (with both the c- and nc-station bandwidth shares below $U_c(0)$) were observed under RT/ECD, owing to this policy not being nontrivial. Fig. 4 presents an RL vs. RL scenario where, after a third of the simulation run, one nc-station captures more bandwidth by lengthening its update period tenfold
whenever a deferment of length 0 is selected. In doing so, it becomes a so-called Stackelberg ’leader’ [5]. A form of protection, switched on after another third of the simulation run, is for a c-station to monitor its own and other stations’ win counts over the last update period. If the former is zero and the latter nonzero, the station temporarily resorts to S with the ’aggressive’ q. Under RT/ECD-1s, this quickly results in the ’leader’ obtaining a less-than-fair bandwidth share. Under RT/ECD the protection is ineffective; moreover, the overall bandwidth utilisation remains poor.
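For completeness, the standard strategy S used in these experiments, the truncated geometric distribution $p_l = \mathrm{const} \cdot q^l$, is straightforward to reproduce; a sketch follows (the normalisation constant is computed so the probabilities sum to one):

```python
def truncated_geometric(q, D=12):
    """S: p_l = const * q**l over deferments l = 0..D-1."""
    weights = [q ** l for l in range(D)]
    total = sum(weights)
    return [w / total for w in weights]

# q = 0.5, 1 and 2 correspond to the 'aggressive', 'moderate' and
# 'gentle' settings: mass on short, uniform, and long deferments.
for q, label in ((0.5, "aggressive"), (1.0, "moderate"), (2.0, "gentle")):
    print(label, [round(p, 3) for p in truncated_geometric(q)])
```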
5 Conclusion
Ad-hoc wireless LAN systems, with their preference for user anonymity and lack of tight administration, potentially constitute a noncooperative MAC setting. For a class of contention protocols relying on random deferment of packet transmission, c-stations are vulnerable to unfair treatment by nc-stations, which use greedy deferment selection strategies. The design of a scheduling policy has been shown to be quite sensitive in this respect. A framework for a reasonable scheduling policy and for the greedy strategies that might be expected from nc-stations has been outlined. A slotted-time scheduling policy called RT/ECD, analogous to CSMA/CA with the RTS/CTS option, and an improved variant thereof called RT/ECD-1s have been evaluated under heavy load, to find that the latter guarantees the c-stations a substantially higher bandwidth share. This it does assuming that nc-stations behave rationally and seek a Nash equilibrium. In the experiments, RT/ECD-1s coped well with nc-stations using a randomisation bias or a fictitious play-type greedy strategy. Several directions can be suggested for future work in this area: a game-theoretic study of RT/ECD-like scheduling policies aimed at establishing the mathematical properties of the underlying noncooperative games; model extensions to include multihop wireless LAN topologies (in particular, dealing with the problem of hidden stations), for which the development of a suitable extension of RT/ECD-1s is under way; and access delay analysis to investigate the issues of QoS support.
References
1. Heikkinen, T.: On Learning and the Quality of Service in a Wireless Network. In: Proc. Networking 2000, Springer-Verlag LNCS 1815 (2000), 679-688
2. IEEE 802.11 Standard (1999)
3. Kalai, E., Lehrer, E.: Rational Learning Leads to Nash Equilibrium. Econometrica 61 (1993), 1019-1045
4. MacKenzie, A.B., Wicker, S.B.: Game Theory and the Design of Self-Configuring, Adaptive Wireless Networks. IEEE Comm. Magazine 39 (2001), 126-131
5. Milgrom, P., Roberts, J.: Adaptive and Sophisticated Learning in Normal Form Games. Games and Economic Behaviour 3 (1991), 82-100
Next Generation Networks and Services in Slovenia
Andrej Kos, Janez Bešter, and Peter Homan
University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Telecommunications, Tržaška 25, 1000 Ljubljana, Slovenia
{andrej.kos, janez.bester, peter.homan}@fe.uni-lj.si
http://www.ltfe.org
Abstract. This paper provides an overview of the development of telecommunications in Slovenia. Major systems, networks and services are briefly considered. The combination of own generic research and a critical mass of knowledge had, and still has, a very positive influence on the development of telecommunications in Slovenia. We propose a two-level network architecture consisting of a simplified data forwarding plane and a service control plane. Future technological development and the proposed role of Slovenia as a regional telecommunications hub are presented.
1
Introduction
Recent years have been marked by significant advances in telecommunications. The main reasons for the fast development are:
1. Fast development of new technologies
2. Rapidly falling prices of networking equipment and bandwidth
3. Rapidly falling prices of services
4. Changing the basic platform of telecommunications from connection oriented networks to connectionless, packet-based networks
5. Convergence
6. Rapid shift of importance from technology towards services
7. Deregulation and liberalization
In 2000 there were still some doubts about the general development path of telecommunications. Two scenarios were possible: evolution and revolution. The revolutionary scenario anticipated the advent of new, small, specialized, and technically very advanced actors, with all services provided over IP infrastructure. The evolutionary scenario anticipated a gradual transformation of classical telecommunications companies, in 10 to 15 years, from PSTN/ISDN-centric to IP-centric companies. It is now clear that future development in telecommunications will follow the evolutionary path. Telecoms, contrary to new players, typically have large investments in their embedded base and strong revenue-generating existing services (voice) that help fund extensive and expensive network as well as service upgrades. Areas where investment is particularly intense are mobile, broadband, and the Internet.
2
State of Telecommunications in Slovenia
Slovenia has a relatively well-developed telecommunications sector. Some important characteristics of Slovenian telecommunications are summarized in Table 1.
Table 1. Main telecommunications indicators
Slovenia is among the 15 countries in the world that carry out generic telecommunications development and are capable of developing, producing and exporting advanced telecommunications systems and solutions. There is tight cooperation between industry and academic institutions. The combination of its own generic research and a critical mass of knowledge had, and still has, a very positive influence on the development of telecommunications in Slovenia.
Fig. 1. Core infrastructure
The core infrastructure that supports all three main segments (fixed telephony, mobile, and IP) is shown in Fig. 1. It is based on optical cable systems upgraded
with different technologies on different layers, such as DWDM, SDH, FR, ATM, Gigabit Ethernet, MPLS, and IP. It is mainly provided by Telekom Slovenije. To a lesser extent it is also provided by Elektro-Slovenija, Slovenian Railways, and the Motorway Company in the Republic of Slovenia. The latter offer leased line services over SDH infrastructure. The fixed telephone network is currently still the most important Slovenian telecommunications infrastructure. At the end of 2000 the digitalization rate reached 99% and the PSTN/ISDN penetration 45%. The penetration of ISDN and centrex together is 7.3%. The fixed telephone network is structured in a two-level hierarchy: primary (PX) and secondary (SX), the latter hierarchically higher than the primary. Broadband ADSL services over the copper access network have been available since the beginning of 2001. Mobile communications are well developed, with one of the highest penetration rates in Europe. Three mobile operators, Mobitel, Si.mobil, and Western Wireless International, are operating in Slovenia. The service provider Debitel uses Mobitel's GSM network. At the end of 2001 the penetration rate of mobile users was over 70%. A comparison of mobile penetration rates with other European countries is shown in Fig. 2. The data is valid for September 2001.
Fig. 2. Comparison of mobile penetration rates (September 2001)
As in the rest of Europe, the number of people using the Internet continues to grow. In October 2001 there were some 700,000 (35%) Internet users. A user, for the above figure, is defined as someone who has used the Internet at least once in the past three months. Of these 700,000 users:
– 500,000 use the Internet at least once per month
– 400,000 use the Internet at least once per week
– 300,000 use the Internet on a daily basis
The biggest Internet service provider in Slovenia is Siol. It manages the biggest commercial core network and currently offers dial-up access, leased lines, ADSL,
and Ethernet access. In addition to different types of access to the Internet, Siol offers services such as VPNs, web hosting, all standard IP services, and many new application services, such as audio/video, e-commerce, and distance learning. The other big player in the field of the Internet is the Academic and Research Network of Slovenia (Arnes). The main task of Arnes is the development, operation and management of the communication and information network for education and research. There is a variety of smaller commercial ISPs that provide Internet services, such as access to the Internet, web hosting, consulting and similar. Currently there are more than 100 CaTV operators in Slovenia, which provide services to around 250,000 Slovenian households, or some 750,000 users. Thus the CaTV penetration rate is 37.5%. In some urban areas the penetration rate is more than 90%. However, the great majority of operators are small companies owned by local communities.
3
Convergence
As shown in Fig. 1, telecommunications today are based on three pillars: fixed, mobile and IP. Technologically, all three can support voice and data/Internet. Up to now, terminals for the fixed telephone network were classical telephone terminals; with the advent of xDSL, the access telephone network is being used for broadband data as well. GSM mobile networks were primarily built to support voice, but with HSCSD, GPRS and UMTS more and more data traffic will be transported over mobile networks. In the past the typical usage of IP networks was data, but with the advent of VoIP, IP networks are being used for voice as well. In particular, it is expected that the boundary between mobile operators and Internet service providers will blur due to strong cross-area expansion. With the advent of ADSL there is a similar blurring of the boundary between fixed operators and Internet service providers. The general convergence trends that can be identified are:
1. Voice is migrating from fixed to mobile networks (overall voice is growing, whereas there is a decline in fixed voice)
2. Fixed networks will be used for broadband data
3. IP networks are converging into a common infrastructure for all existing and new services through the implementation of MPLS
4
Future Development
Until the end of 2000 the core network worked mainly in a connection oriented transport fashion. The anticipated technological evolution of core networks in general is presented in [1,2]; it should be noted that although today's vision of the next generation core network is IP/MPLS/GMPLS over DWDM, existing, proven and well-known technologies such as SDH, ATM and Gigabit Ethernet will still be used for a long time. As discussed below, only their role might be slightly different.
Fig. 3. Concept of contemporary network architecture and its main usage
Fig. 4. Next generation network
In line with the general evolution of the core network, the concept of the future network architecture and its main usage will change as shown in Fig. 3. The concept is based on the following facts:
1. The IP protocol has become the convergence layer for the majority of services
2. MPLS and its generalization GMPLS have become the core technologies of choice which, in addition to the connection oriented approach, support many new functionalities in terms of routing, signaling, control and QoS support
3. ATM, as a layer-2 technology, is migrating towards the access network, with ADSL and ATM switches at customer sites
4. Voice services will still for some time be accessed via classical terminals, mainly mobile; VoIP functionality will be introduced through media gateways, first mainly in the core as voice trunking
In [3] a framework for the next generation network is proposed. Logically, it is a two-level network architecture, which consists of a service control layer
and a transport layer. The transport is service independent. We propose a next generation network, a technology-aware view of which is shown in Fig. 4 [4,5] (an extended version of this paper can be found at http://www.ltfe.org/pdf/networking2002 extended.pdf). Most of the intelligence is in edge devices. Edge devices' functionalities include termination of different access technologies, data format adaptation for transport over the core network, and service gateways, such as QoS mappings, connection admission control, classification, metering, marking, dropping, authorization, accounting, firewalling, address translation, security, and others.
5
Conclusion
This article presented an overview of the development of telecommunications in Slovenia. The combination of own generic research and a critical mass of knowledge had, and still has, a very positive influence on the development of telecommunications. We propose a two-level network architecture consisting of a simplified data forwarding plane and a service control plane. The service control plane is mostly implemented in edge devices, in the form of different gateways and servers. Slovenia, with fewer than 2 million inhabitants, is a relatively small market and will have to find its place in global markets in niche segments. With a lot of technological know-how, a unique geographic position, a lot of experience, and good relationships with all neighboring countries, one of the most important niche segments is being a telecommunications hub. Acknowledgements. The authors would like to thank Matej Eljon and the anonymous reviewers for their thoughtful review and useful suggestions.
References
1. Kos, A., Bešter, J.: Role of MPLS in Modern Telecommunications Networks. International Symposium Viable Telecommunications VITEL 2000, Technologies and Communication for the Online Society, Ljubljana, Slovenia (2000) C45-C49
2. Banerjee, A., et al.: Generalized Multiprotocol Label Switching: An Overview of Routing and Management Enhancements. IEEE Commun. Mag., vol. 39, no. 1 (2001) 144-150
3. Moridera, A., Murano, K., Mochida, Y.: The Network Paradigm of the 21st Century and Its Key Technologies. IEEE Commun. Mag., vol. 38, no. 11 (2000) 94-98
4. Rockström, A.: Technology as a Driver for New Business Logic. IEEE Commun. Mag., vol. 38, no. 11 (2000) 100-104
5. Žurbi, R.: Signalling and Control Protocols in Next Generation Networks. M.Sc. Thesis, Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia (2001)
Minimizing the Routing Delay in Ad Hoc Networks through Route-Cache TTL Optimization
Ben Liang and Zygmunt J. Haas
School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA
{liang, haas}@ece.cornell.edu
Abstract. This paper addresses the issue of minimizing the routing delay in ad hoc on-demand routing protocols through optimizing the Time-to-Live (TTL) interval for route caching. An analytical framework is introduced to compute the expected routing delay when the source node has a cached route with a given TTL value. Furthermore, a numerical method is proposed to determine the optimal TTL of a newly discovered route cached by the source node. We present simulation results that support the validity of our analysis.
1 Introduction
Node mobility and the lack of topological stability make the routing protocols previously developed for wireline networks unsuitable for ad hoc networks [9][8][11]. A popular family of ad hoc routing protocols is the family of reactive routing protocols, also called on-demand routing protocols. In these protocols a node is not required to maintain a routing table (although route caches may be kept); instead, a route query process is initiated whenever a route is needed. Routing protocols such as ABR, AODV, DSR, the IERP of ZRP, and TORA are examples of reactive protocols [11].1 In an on-demand routing protocol, a newly discovered route should be cached, so that it may be reused the next time that the same route is requested. However, prolonged storage of a route cache may render it obsolete. When an invalid route cache is used, extra traffic overhead and routing delay are incurred to discover the broken links. Depending on the implementation details, data and/or control packets are delivered over the part of the cached route that is still valid, before the broken link can be discovered.2 One approach to minimizing the effect of an invalid route cache is to purge the cache entry after some Time-to-Live (TTL) interval. If the TTL is set too small, valid routes are likely to be discarded, but if the TTL is set too large, invalid route-caches are likely to be used. Thus, an algorithm that optimizes the TTL setting is necessary for the optimal performance of an on-demand routing protocol. As far as we are aware, there is very little reported work in the literature that addresses the issue of ad hoc route-cache TTL optimization. Most existing on-demand protocols, such
1 Due to the page limit, the individual references to these protocols are omitted.
2 It is possible to employ proactive route-cache invalidation initiated by the up-stream node of a broken link, whether or not the link is part of an active route presently delivering data. However, this can lead to large control overhead when the network topology changes frequently. Proactive techniques are outside the scope of this paper.
as AODV, DSR, and TORA, employ route caching in various forms. In AODV, a discovered route is associated with an "active route time-out" value that dictates the duration within which the route can be used. This time-out value is static and identical throughout the network. In DSR and TORA, a cached route is kept indefinitely (i.e., TTL = ∞), until a broken link in the route is detected during data transmission. In this work, we study TTL optimization adaptive to each cached route. In [10], a case study based on DSR suggested that route caching can reduce the average latency of route discovery by more than 10-fold. Further simulation studies reported in [1], [7], [4], [2], and [3] have confirmed the effectiveness of route caching in on-demand routing protocols. However, [7], [4], [2], and [3] have also drawn the conclusion that an indefinite route-cache, as employed in DSR, can lead to many stale routes and hence degrade the routing performance. In addition, [2] and [3] have demonstrated the need for determining a suitable time for route-cache expiration. The simulation results in [3] further show a case study of the optimal route-cache expiry time obtained by exhaustive search. In this work, we approach the problem of adaptive route-cache TTL optimization through analytical studies. We consider the problem of optimizing the TTL of a cached route in order to minimize the expected routing delay of the next request of the same route (i.e., the same source and destination pair). In Section 2, we explain the network model under consideration. In Section 3, we introduce analytical and numerical frameworks to compute the optimal TTL and the corresponding expected routing delay. In Section 4, we present simulation results that support the validity of our analysis, study the system parameters that affect the optimal TTL, and show the performance gain achieved by the optimal TTL. Finally, concluding remarks are provided in Section 5.
2 Network Model
We consider a mobile ad hoc network consisting of a set V of nodes. At any time instant, an edge (u, v), where u, v ∈ V, exists if and only if node u can successfully transmit to node v. In this case, we say that the link from node u to node v is up. Otherwise, the link is down or has failed. In the modeling of general communication networks, it is usually assumed that all edge failures are statistically independent [6]. The modeling of dependent link failures generally requires an exponentially large number of conditional probability distributions. Therefore, though unrealistic, the independence assumption greatly simplifies the analysis of network performance. In this paper, we assume that all links have independent and identical up-time distribution Fu(t) and down-time distribution Fd(t). We assume that route requests to a destination node nd arrive at the source node ns as a stream that has identically distributed inter-arrival intervals with a general distribution Fa(t). We consider only non-trivial networks where the average time between topology changes is larger than the average route-search delay.3 Therefore, we assume that the route-request inter-arrival time is much larger than the route-search delay, since, otherwise, a valid route would already have been found at the last route request. Namely, a burst of
3 Otherwise, the only suitable routing approach is to flood data packets throughout the network.
data packet train sent to a common destination within a very small time frame would constitute a single route request. When a route request is made due to a data packet arrival, if ns has a cached route to nd, it immediately sends out the data packet using the cached route. If the cached route is valid, we assume that this operation does not incur any routing delay. However, if the cached route is invalid, the intermediate node on the up-stream end of a failed link notifies ns via a route-error packet. In this case, and in the case that ns does not have a cached route to nd, the pre-defined routing protocol4 is employed to search for a new route to nd. We further assume that ns renews or re-computes the TTL of a cached route to nd each time a packet is successfully sent through the cached route. A cached route is purged when its TTL expires.5 We assume that all data and control packet transmissions across a link incur an average delay of L seconds.6
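To make these caching assumptions concrete, here is a hedged sketch of a per-destination route cache that purges an entry on TTL expiry and renews the TTL after each successful transmission, exactly as the model assumes; the class and method names are ours, not from any particular protocol implementation.

```python
import time

class RouteCache:
    """Per-destination route cache with TTL, mirroring the model:
    an entry is purged when its TTL expires, and the TTL is renewed
    each time a packet is successfully sent over the cached route."""

    def __init__(self):
        self._routes = {}                     # dest -> (route, expiry time)

    def put(self, dest, route, ttl):
        self._routes[dest] = (route, time.time() + ttl)

    def get(self, dest):
        entry = self._routes.get(dest)
        if entry is None:
            return None
        route, expiry = entry
        if time.time() >= expiry:             # TTL expired: purge the entry
            del self._routes[dest]
            return None
        return route

    def renew(self, dest, ttl):
        """Call after a successful transmission over the cached route."""
        if dest in self._routes:
            route, _ = self._routes[dest]
            self._routes[dest] = (route, time.time() + ttl)
```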
3 Optimizing the Route-Cache TTL to Minimize Routing Delay7
3.1
Computing the Expected Routing Delay
Suppose the source node ns has a cached route to the destination node nd, which is validated by the last route request and has a TTL value of T seconds. Let D be the number of hops in this route. Let the next route request to nd arrive at time ta after the ns-to-nd route is cached. Then, from Section 2, ta has distribution Fa(t). Let fa(t) be the density function of ta, and let $f_a^*(s)$ be the Laplace transform of fa(t). Furthermore, let fc(t) be the density function of ta given ta < T. Then, the Laplace transform of fc(t) is
$$f_c^*(s) = -\frac{1}{F_a(T)} \sum_{\xi \,\in\, \text{poles of } f_a^*(s-z)} \operatorname{Res}_{z=\xi} \frac{1-e^{-zT}}{z}\, f_a^*(s-z),$$
where $\operatorname{Res}_{z=\xi}$ denotes the residue at the pole $z = \xi$. Let $f_u(t) = dF_u(t)/dt$ be the density function of the link up-time and $f_u^*(s)$ be its Laplace transform. The residual lifetime of a link in the cached route has the density function $f_r(t) = \frac{1}{\mu_u}[1 - F_u(t)]$, where $\mu_u$ is the mean up-time of a link. The Laplace transform of $R_r(t) = 1 - \int_0^t f_r(\tau)\,d\tau$ is $R_r^*(s) = \frac{1}{s} - \frac{1}{\mu_u s^2}[1 - f_u^*(s)]$. Let $X_i$ be the minimum of the residual lifetimes of the first i links in the cached route. Let $f_{X_i}(t)$ and $F_{X_i}(t)$ be its density and distribution functions, and let $f_{X_i}^*(s)$
4 The exact mechanism of the on-demand routing protocol is not important here.
5 Some on-demand protocols allow an intermediate node that has a cached route to the destination to reply to the source-initiated route request. Such schemes have been shown to significantly improve the routing performance. However, the quantitative effect of the stale routes provided by the intermediate nodes is not well understood. Therefore, in this work, we only consider the TTL of route caches kept by a source node.
6 For example, in the case study of [10], the average delay is shown to be 14.5 ms/hop. Although packets may be different in length, in a wireless ad hoc network operating at medium to high load, the predominant factor in the aggregate delay of packet transmission across a link is the queuing delay in the MAC layer due to the contention of the shared wireless medium.
7 Due to the page limit, in this section, we only give a brief outline of the important results and leave out the details to the long version of this paper.
and $F_{X_i}^*(s)$ be the corresponding Laplace transforms, respectively. Let $R_{X_i}(t) = 1 - F_{X_i}(t)$ and $R_r(t) = 1 - F_r(t)$, and $R_{X_i}^*(s)$ and $R_r^*(s)$ be their Laplace transforms, respectively. Then $f_{X_i}^*(s)$ can be determined through the following recursion:
$$R_{X_i}^*(s) = -\sum_{\xi \,\in\, \text{poles of } R_r^*(s-z)} \operatorname{Res}_{z=\xi} R_{X_{i-1}}^*(z)\, R_r^*(s-z),$$
along with $f_{X_i}^*(s) = s F_{X_i}^*(s) - F_{X_i}^*(0^+) = 1 - s R_{X_i}^*(s)$. Let $Q_i(T)$ be the probability that, when a route request arrives before the TTL expires, the first i links of the cached route have not failed. We can obtain
$$Q_i(T) = -\sum_{\xi \,\in\, \text{poles of } f_{X_i}^*(-s)} \operatorname{Res}_{s=\xi} \frac{f_c^*(s)}{s}\, f_{X_i}^*(-s).$$
Define $Q_0(T) = 1$. The expected routing delay of the next route request, when the TTL of a D-hop cached route is set to T, is
$$C(T) = 2L\Bigl[D + F_a(T)\sum_{i=1}^{D} i\,\bigl(Q_{i-1}(T) - Q_i(T)\bigr) - F_a(T)\,Q_D(T)\,D\Bigr].$$
The above analytical framework provides a means for evaluating the expected routing delay given the TTL value. However, it is very likely that the optimal TTL value is more important to a system designer. In the next section, we provide a numerical method to compute the optimal TTL.
3.2
Determining the Optimal Route-Cache TTL
Let $q(\tau)$ be the probability that a given link in the cached route is still up at time τ after the last route request. The expected routing delay as defined in the previous section has the following alternate form:
$$C(T) = 2LD - 2L\int_0^T \Bigl[2D\,q^D(\tau) - \frac{q^D(\tau)-1}{q(\tau)-1}\Bigr]\, f_a(\tau)\,d\tau.$$
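As an illustration, this alternate form lends itself to direct numeric evaluation. The sketch below assumes exponentially distributed link up-times, for which the residual lifetime is again exponential, so $q(\tau) = e^{-\tau/\mu_u}$, and exponential inter-arrivals; both are the settings later used in the simulations of Section 4.

```python
import math

def expected_delay(T, D, L=1.0, mu_u=1.0, mu_a=1.0, steps=2000):
    """C(T) by midpoint-rule integration of the alternate form, with
    q(tau) = exp(-tau/mu_u) and f_a(t) = exp(-t/mu_a)/mu_a (assumed)."""
    def integrand(tau):
        q = math.exp(-tau / mu_u)
        qD = q ** D
        # (q^D - 1)/(q - 1) tends to D as q -> 1
        ratio = D if abs(q - 1.0) < 1e-12 else (qD - 1.0) / (q - 1.0)
        return (2 * D * qD - ratio) * math.exp(-tau / mu_a) / mu_a
    h = T / steps
    integral = h * sum(integrand((i + 0.5) * h) for i in range(steps))
    return 2 * L * D - 2 * L * integral

# Example: a 5-hop route; C(0) = 2*L*D, and C(T) dips at the optimal TTL
for T in (0.1, 0.5, 1.0, 5.0):
    print(T, round(expected_delay(T, D=5), 4))
```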
Since $q(\tau)$ is a decreasing function of τ and $0 \le q(\tau) < 1$, it is easy to verify that C(T) is a convex function of T. Therefore, if we let
$$g[q(T)] = -\frac{1}{2L f_a(T)}\frac{dC(T)}{dT} = 2D\,q^D(T) - \frac{q^D(T)-1}{q(T)-1},$$
the minimum of C(T) is achieved when $g[q(T)] = 0$. Therefore, the optimal value of q(T) is the root in [0, 1) of a function of the form $g(x) = 2Dx^D - \frac{x^D-1}{x-1}$. Given any value of D, a numerical method such as bisection or Newton's method can be used to find this root. Since $q(T) = 1 - F_r(T)$, once the optimal value of q(T) is determined numerically, the optimal TTL value can be found by inverting the distribution function of the residual lifetime of a link. The above illustrates an important property of the optimal TTL: it does not depend on fa(t). This property significantly reduces the computational requirement of the adaptive, real-time route-cache TTL optimization performed by individual nodes in an ad hoc network.
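A short sketch of this numerical step: bisection on g(x) over [0, 1), where g(0+) = -1 and g(1-) = D, followed, for the exponential up-time case assumed above, by inversion of $q(T) = e^{-T/\mu_u}$. Note that $f_a(t)$ never enters the computation, as stated above.

```python
import math

def optimal_q(D, tol=1e-10):
    """Root of g(x) = 2*D*x**D - (x**D - 1)/(x - 1) in [0, 1)."""
    g = lambda x: 2 * D * x ** D - (x ** D - 1.0) / (x - 1.0)
    lo, hi = 1e-9, 1.0 - 1e-9                 # g(lo) < 0 < g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Exponential up-times with mu_u = 1: q(T) = exp(-T), so T_opt = -ln(q_opt)
for D in (2, 5, 10, 20):
    q_opt = optimal_q(D)
    print(D, round(q_opt, 4), round(-math.log(q_opt), 4))
```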
4 Simulation and Numerical Evaluation
4.1
Simulation Model and Output Analysis
A simulation model is developed to validate the analytical model. It represents the link establishments and breakages in an ad hoc network based on the network model described in Section 2. In particular, we present the simulation results for a 300-node network where the link up and down-times between any pair of nodes are exponentially distributed with mean values µu = 1 and µd = 48.8 (i.e., the average node degree is 6). Given a source node, the destination node is chosen randomly with uniform distribution
Fig. 1. (a) Expected routing delay (normalized to L) vs. the TTL optimality factor γ, with simulation and analysis curves for µa = 0.1, 0.3, and 1; the vertical lines represent the 99.95% confidence intervals. (b) Performance gain achieved by using the optimal route-cache TTL over TTL=0 and TTL=∞, as a function of the mean route-request inter-arrival time, for α = 0, 1, 2, 3.
among all other nodes in the network. For a chosen source and destination node pair, the route-request inter-arrival time has distribution $f_a(t) = \frac{1}{\mu_a} e^{-t/\mu_a}$. We further define a TTL optimality factor γ such that, when a new route is cached, its TTL is set to $\gamma T_{opt}$, where $T_{opt}$ is the optimal TTL value found as described in Section 3.2. The comparison between our analytical and simulation results is illustrated in Fig. 1(a), which validates both the analytical and the simulation models. In particular, the simulation results demonstrate that the minimal routing delay is indeed achieved at γ = 1, as expected from the analysis. The computed average delay per route request is in some cases 2% higher than the corresponding simulation outcome. This is due to the pessimistic assumption in the analytical model that once a link in a cached route fails, it does not come up again by the time of the next route request. Figure 1(a) also suggests that optimal TTL determination is most important when the route-request inter-arrival time is moderate compared with the mean link-failure time. For systems with different parameter values, the results are similar to Fig. 1(a) and are omitted.
4.2
Performance Gain of the Optimal TTL
Using the proposed analytical framework, we can quantitatively study the advantage of optimizing the route-cache TTL. Due to the page limit, we are unable to show all results. In Fig. 1(b), we illustrate the performance gain of using the optimal TTL over the no-route-cache system (TTL=0) and the never-expiring route-cache system (TTL=∞)8, for different values of µa9 and various degrees of traffic locality. To describe the traffic locality, we have used a power law distribution as follows. Let πD be the probability that a given
8 We define the performance gain as the ratio between the expected delay of using a non-optimal TTL and the expected delay of using the optimal TTL.
9 We have scaled time such that µu = 1. Therefore, 1/µa represents the relative frequency of the route requests with respect to the frequency of topology variation. Also note that the analytical results are valid for any µd as long as µd >> µu.
route request is made to a destination D hops away. If D is upper-bounded by $D_{max}$, the probability distribution function of D is defined as $\pi_D = D^{-\alpha} \big/ \sum_{i=1}^{D_{max}} i^{-\alpha}$, where a larger
value of α indicates a higher level of locality. In this example, Dmax = 20. Figure 1(b) demonstrates that the performance gain is a fast increasing function of α. As a point of reference, when α = 3 and µa = 1, using the optimal TTL can reduce the routing delay of either a non-caching system or a never-expiring caching system by approximately 25%. Therefore, route-cache optimization is especially important in the design of scalable on-demand routing protocols for large mobile ad hoc networks, where it has been proven that the traffic pattern must be localized[5].
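The locality distribution used in Fig. 1(b) is easy to reproduce; a sketch (with the paper's Dmax = 20):

```python
def pi_D(alpha, D_max=20):
    """pi_D proportional to D**(-alpha), D = 1..D_max (alpha = 0: uniform)."""
    weights = [D ** (-alpha) for D in range(1, D_max + 1)]
    total = sum(weights)
    return [w / total for w in weights]

for alpha in (0, 1, 2, 3):
    dist = pi_D(alpha)
    mean_hops = sum(D * p for D, p in enumerate(dist, start=1))
    print(alpha, round(mean_hops, 2))   # mean hop distance shrinks with alpha
```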
5 Conclusions
We have presented analytical and numerical methods to determine the expected routing delay and the optimal route-cache TTL for on-demand routing. The analysis is based on a random-graph model of mobile ad hoc networks. Our analytical results agree very well with the simulation results. Through the proposed analytical framework, one can study the routing delay of a network given various system parameters. The results of our analysis have shown that the optimal route-cache TTL does not depend on the route-request frequency or inter-arrival distribution. Furthermore, our numerical results have demonstrated that optimizing the route-cache TTL is most effective when the traffic pattern is localized.
References
1. J. Broch, D. A. Maltz, D. B. Johnson, Y.-C. Hu, and J. Jetcheva, "A performance comparison of multi-hop wireless ad hoc network routing protocols," ACM/IEEE MOBICOM, 1998.
2. S. R. Das, C. E. Perkins, and E. M. Royer, "Performance comparison of two on-demand routing protocols for ad hoc networks," IEEE INFOCOM, 2000.
3. Y.-C. Hu and D. B. Johnson, "Caching strategies in on-demand routing protocols for wireless ad hoc networks," ACM/IEEE MOBICOM, 2000.
4. G. Holland and N. Vaidya, "Analysis of TCP performance over mobile ad hoc networks," ACM/IEEE MOBICOM, August 1999.
5. P. Gupta and P. R. Kumar, "The capacity of wireless networks," IEEE Trans. Information Theory, vol. 46, no. 2, March 2000.
6. J. J. Kelleher, "Tactical communications network modeling and reliability analysis: overview," JSLAI Report JC-2091-GT-F3, November 1991.
7. P. Johansson, T. Larsson, N. Hedman, B. Mielczarek, and M. Degermark, "Scenario-based performance analysis of routing protocols for mobile ad-hoc networks," ACM/IEEE MOBICOM, August 1999.
8. J. Jubin and J. D. Tornow, "The DARPA packet radio network protocols," Proceedings of the IEEE (Special Issue on Packet Radio Networks), vol. 75, pp. 21-32, January 1987.
9. B. M. Leiner, D. L. Nielson, and F. A. Tobagi, "Issues in packet radio network design," Proceedings of the IEEE, vol. 75, pp. 6-20, January 1987.
10. D. A. Maltz, J. Broch, J. Jetcheva, and D. B. Johnson, "The effect of on-demand behavior in routing protocols for multihop wireless ad hoc networks," IEEE JSAC, Special Issue on Wireless Ad Hoc Networks, vol. 17, no. 8, pp. 1439-1453, August 1999.
11. C. E. Perkins, ed., Ad Hoc Networking, Addison-Wesley Longman, 2001.
Long-Range Dependence of Internet Traffic Aggregates
Solange Lima, Magda Silva, Paulo Carvalho, Alexandre Santos, and Vasco Freitas
Universidade do Minho, Departamento de Informatica, 4710-059 Braga, Portugal
{solange, paulo, alex, vf}@uminho.pt
Abstract. This paper studies and discusses the presence of LRD in network traffic after classifying flows into traffic aggregates. Following DiffServ architecture principles, generic QoS application requirements and the transport protocol in use, a classification criterion for Internet traffic is established. Using fractal theory, the resulting traffic classes are analysed. The Hurst parameter is estimated and used as a measure of traffic burstiness and LRD in each traffic class. The traffic volume per class and per interface is also measured. The study uses real traffic traces collected at a major backbone router of the University of Minho in different periods of network activity.
1
Introduction
The diversity of quality of service (QoS) requirements of current and emerging services will force the network to differentiate traffic so that an adequate QoS level is offered. One of the most promising solutions proposed by the Internet Engineering Task Force (IETF) is the Differentiated Services architecture (DiffServ) [1], which aggregates traffic into a limited number of classes of service according to QoS objectives. This new network traffic paradigm poses renewed interest and challenges to network traffic analysis and characterisation. Although several other studies focus on general Internet traffic characterisation, the effects of aggregating traffic into classes are still unclear. Will a particular traffic class be responsible for the behaviour reported in [2]? Does aggregation affect burstiness at network nodes and links? The major objective of our work is to study fractal properties, such as long-range dependence (LRD), in Internet traffic aggregates. Netflow [3] traffic samples collected at different time periods of network activity in a backbone router at the University of Minho were used. After establishing a traffic classification criterion based on a DiffServ model multi-field approach, all the samples are analysed applying that criterion. The time characteristics of each traffic class are studied using the Mathematica software.
2
The DiffServ Model
In the DiffServ model, network traffic is classified and marked using the DS-field [11]. This identifier determines the treatment, or Per-Hop-Behaviour (PHB) [4,5], that traffic will receive in each network node. The IETF has proposed the Expedited Forwarding PHB [4] and the Assured Forwarding PHB Group [5] (EF and AF PHBs), besides best-effort (BE PHB). The EF PHB can be used to build services requiring low loss, reduced delay and jitter, and assured bandwidth. The AF PHB group, consisting of four classes, can be used to build services with minimum assured bandwidth and different tolerance levels to delay and loss.
3
Network Traffic Characterisation
Knowledge of the characteristics of network traffic as a whole and, in particular, of traffic aggregates is relevant for proper network resource allocation and management, for traffic engineering and traffic and congestion control, and for specifying services realistically. In our study, the analysis is based on fractal time series theory, since recent studies related to the characterisation and modelling of network traffic point to the presence of self-similarity and LRD. This last property may directly affect the items highlighted above, with a strong impact on queuing and on the nature of congestion [6].
3.1
Fractal Traffic Properties
Self-similarity expresses the invariance of a data structure independently of the scale at which the data is analysed. From a network traffic perspective, self-similarity expresses a new notion of burstiness, i.e. there is no natural length for a burst, and the bursty structure of traffic is maintained over several time scales. As an example of processes which exhibit self-similarity and LRD, one may consider X(t), an asymptotically second-order self-similar stochastic process with Hurst parameter $\frac{1}{2} < H < 1$, i.e.,
$$\lim_{m\to\infty} \gamma^{(m)}(k) = \frac{\sigma^2}{2}\bigl((k+1)^{2H} - 2k^{2H} + (k-1)^{2H}\bigr).$$
X(t) has the following properties: long-range dependence - the autocorrelation function ρ(k) decays hyperbolically (ρ(k) is non-summable), $\lim_{k\to\infty} \rho(k)/(ck^{-\beta}) = 1$; slowly decaying variances - the variance of the aggregated series processes $X^{(m)}$, $X_k^{(m)} = \frac{1}{m}\sum_{i=km-m+1}^{km} X_i$ (k = 1, 2, ... and m = 1, 2, ...), is expressed by $\mathrm{var}(X^{(m)}) \sim \mathrm{var}(X)\, m^{-\beta}$, with c > 0 constant, β = 2 - 2H, 0 < β < 1. H is commonly used to measure LRD, and is a valuable indicator of traffic burstiness (burstiness increases with H). If $\frac{1}{2} < H < 1$, the process is long-range dependent. [...] an H > 0.5, and 55% of them have an H > 0.7. Table 3 extends this analysis to the activity periods defined in section 3.2. Similar analysis was also carried out for the other classes.
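The slowly-decaying-variances property is what the variance-time plots mentioned below exploit: plotting $\log \mathrm{var}(X^{(m)})$ against $\log m$ yields slope $-\beta$, whence $H = 1 - \beta/2$. A minimal sketch of this estimator follows; the sampling of aggregation levels is our choice.

```python
import numpy as np

def hurst_variance_time(x, num_levels=20):
    """Variance-time estimate of H: fit log var(X^(m)) vs log m;
    the slope is -beta and H = 1 - beta/2."""
    x = np.asarray(x, dtype=float)
    ms = np.unique(np.logspace(0, np.log10(len(x) // 10),
                               num_levels).astype(int))
    variances = [x[: (len(x) // m) * m].reshape(-1, m).mean(axis=1).var()
                 for m in ms]
    slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
    return 1.0 - (-slope) / 2.0

# Sanity check on white noise (no LRD): the estimate should be near 0.5
rng = np.random.default_rng(0)
print(round(hurst_variance_time(rng.normal(size=100_000)), 3))
```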
It is evident that H increases with network activity, which is consistent with [7]. Although the above tables do not differentiate the results by interface, the analysis of classified traffic per interface shows the same tendency. Excluding class 2, the relation between H and the traffic volumes is not clear for the remaining classes, which may indicate an application-type dependence. In fact, class 1 behaves in the opposite way, and class 4 does not show evidence of burstiness for the different activity periods. This can be due to the regular nature of the traffic it encompasses (e.g. routing traffic). As regards the autocorrelation, almost all the samples with autocorrelation functions decaying slowly to zero (which suggests LRD) had an H above 0.5 in the variance-time plots, which is consistent.
5
Conclusions
In this study, the Hurst parameter was used to measure LRD in real traffic samples classified according to the proposed criterion. Values of H were determined for the defined traffic classes and for periods of different network activity. The results show that classes 2 and 3 (bulk transfer and HTTP traffic) play a major role in the total load per interface. In particular, Class 3 is clearly the one showing the strongest evidence of burstiness, which increases with traffic load. While Class 2 has similar characteristics, Class 1 behaves in the opposite way. Class 4 presents an estimated H below 0.5 independently of the network activity period. For most of the samples, the tests give similar results whether analysing the time series of packets or of bytes. Currently, a larger set of samples is being analysed to consolidate these results. Obtaining complementary statistics is also a matter of concern.
References
1. S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An Architecture for Differentiated Services. Technical report, IETF RFC 2475, 1998.
2. M.S. Taqqu, W. Willinger, W.E. Leland, and D.V. Wilson. On the Self-Similar Nature of Ethernet Traffic. SIGCOMM'93, 1993.
3. Cisco Systems. NetFlow. http://www.iagu.on.net/software/netflow-collector.
4. V. Jacobson, K. Nichols, and K. Poduri. An Expedited Forwarding PHB. Technical report, IETF RFC 2598, 1999.
5. J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski. Assured Forwarding PHB Group. Technical report, IETF RFC 2597, 1999.
6. A. Erramilli and W. Willinger. Experimental Queueing Analysis with Long-Range Dependent Packet Traffic. IEEE/ACM Trans. on Networking, 4(2), April 1996.
7. M.S. Taqqu, V. Teverovsky, and W. Willinger. Estimators for Long-Range Dependence: An Empirical Study. Fractals, 3:785-788, 1995.
8. Trends in Wide Area IP Traffic Patterns. http://www.caida.org/outreach/papers/.
9. A. Bak, W. Burakowski, F. Ricciato, S. Salsano, and H. Tarasiuk. Traffic Handling in AQUILA QoS IP Networks. QoFIS2001, pages 243-260, September 2001.
10. A. Croll and E. Packman. Managing Bandwidth: Deploying QoS in Enterprise Networks. Prentice Hall, 2000.
Improved Initial Synchronisation in the Presence of Frequency Offset in UMTS FDD Mode
Valentina Lomi1, Gianfranco L. Pierobon2, Daniele Tonetto1, and Lorenzo Vangelista1
1 Telit Mobile Terminals S.p.A., via Masini 2, 35129 Padova, Italy
{valentina.lomi, daniele.tonetto, lorenzo.vangelista}@telital.com
2 Università di Padova, Dipartimento di Elettronica e Informatica, via Gradenigo 6/A, 35131 Padova, Italy
[email protected]
Abstract. The UMTS–FDD system, one of the members of the ITU IMT–2000 third generation standard for terrestrial cellular systems, employs pruned Golay sequences to enable initial synchronisation of the mobile terminals to the network. In this paper a low complexity solution for initial synchronisation is proposed, which is able to counteract the performance degradation introduced by large frequency offsets occurring in the mobile station receiver. Simulation results are provided to validate the proposed solution. Keywords: cell search, UMTS, FDD, Golay sequences, synchronisation
1
Introduction
The initial synchronisation is the process by which a mobile station in a cellular CDMA network gets synchronised in time to the (strongest) base station and acquires the scrambling code that base station uses for the downlink traffic channels. To let the mobile get synchronised to the network, the UMTS-FDD system [1] [2] provides two "bursty" pilot channels ("primary" and "secondary" synchronisation channels) and a continuous pilot channel. In this paper we focus on the "primary" channel, which is based on the repetition of a non-scrambled Golay sequence common to all cells and which is needed to perform slot synchronisation (the first step of the initial synchronisation procedure, see [3]). During the standardisation process particular attention has been paid to the necessity of an implementation with requirements of low complexity and of robustness to frequency offsets, as low cost mobile stations in UMTS systems may have large initial frequency offsets, up to 26 kHz [4]. In [4], [5] and [6], algorithms which are able to counteract the effect of the frequency offset on the initial synchronisation procedure are presented. In this paper we propose another solution for slot synchronisation in the presence of a frequency offset, which performs better than [4] in most cases. The theoretical basis for our solution is a theorem, proven in this paper, according
1166
V. Lomi et al.
to which the Golay sequences rotated at the receiver by a frequency offset still preserve the “Golay property”. The paper is organised as follows. Section 2 describes the slot synchronisation procedure in the UMTS–FDD system. Section 3 demonstrates the previously mentioned theorem. Section 4 describes the proposed algorithm and its performance. Conclusions are drawn in Section 5.
2 System Model
For purposes of the slot synchronisation (see [1] for full details), we model the baseband representation of the signal received and sampled at the mobile station at chip rate 1/T_c, T_c = 1/3840000 s, as

x_r(kT_c) = A_{ch} \, e^{j(2\pi \Delta f k T_c + \theta)} \, p_{SCH}\big(((k - k_0) \bmod Q)\, T_c\big) + \tilde{w}(kT_c)    (1)

where

– Q = 2560; QT_c is the time interval called slot in the UMTS specifications;
– A_ch is a real value modelling a constant channel attenuation;
– ∆f is the receiver frequency offset and θ is an unknown phase;
– k_0 T_c is the time offset (unknown to the receiver): its estimation is actually the target of slot synchronisation;
– \tilde{w}(kT_c) is white Gaussian noise with variance σ²;
– p_SCH((k mod Q) T_c) is the primary synchronisation channel, common to all UMTS base stations. The sequence

p_{SCH}(kT_c) = \begin{cases} \frac{1}{\sqrt{2}} \frac{1}{\sqrt{256}} (1 + j)\, a(k) & \text{for } 0 \le k < 256 \\ 0 & \text{for } 256 \le k < Q \end{cases}    (2)

repeats every slot.

a(k) is a pruned Golay complementary sequence generated as follows:

a_0(k) = \delta(k), \qquad b_0(k) = \delta(k)    (3)
a_n(k) = a_{n-1}(k) + W_n \cdot b_{n-1}(k - D_n)    (4)
b_n(k) = \begin{cases} a_{n-1}(k) - W_n \cdot b_{n-1}(k - D_n) & n = 1, 2, 3, 5, 7, 8 \\ a_n(k) & n = 4, 6 \end{cases}    (5)
a(k) = a_8(k)    (6)

with k = 0, 1, 2, ..., 255, where

[D_1, D_2, D_3, D_4, D_5, D_6, D_7, D_8] = [128, 64, 16, 32, 8, 1, 4, 2]    (7)
[W_1, W_2, W_3, W_4, W_5, W_6, W_7, W_8] = [1, -1, 1, 1, 1, 1, 1, 1]    (8)

Let CNR be the ratio between the power of the synchronisation primary channel and the power of noise, hence CNR = A_{ch}^2 / (256 σ²). An efficient estimation of k_0 can be obtained with the Budisin correlator (see [7]) shown in Fig. 1, applied with the following procedure:
[Figure: cascade of delay elements D_8, ..., D_1 with coefficient multipliers W_8*, ..., W_1* and add/subtract nodes.]

Fig. 1. Budisin correlator
1. letting L be the number of slots in a synchronisation time, calculate

m(n) = \sum_{\ell=0}^{L-1} \left| c\big((n + \ell Q)\, T_c\big) \right|, \qquad n = 0, 1, \ldots, Q - 1,

where c(kT_c) is the signal at the output of the correlator;
2. find n_0 such that m(n_0) ≥ m(n), n = 0, 1, ..., Q − 1;
3. calculate \hat{k}_0 = n_0 − 255.
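To make the construction concrete, the following Python sketch (our own illustration, not part of the original paper) generates a(k) by direct iteration of (3)-(6) with the parameters (7)-(8), and implements steps 1-3 with a brute-force matched filter in place of the efficient correlator of Fig. 1. The names pruned_golay and estimate_k0 are ours, and the input x is assumed to contain at least LQ samples.

```python
import numpy as np

D = [128, 64, 16, 32, 8, 1, 4, 2]   # delays D_1..D_8, eq. (7)
W = [1, -1, 1, 1, 1, 1, 1, 1]       # weights W_1..W_8, eq. (8)

def pruned_golay(weights=W):
    """Iterate eqs. (3)-(6) to obtain the 256-chip sequence a(k)."""
    a = np.zeros(256, dtype=complex); a[0] = 1.0   # a_0(k) = delta(k)
    b = a.copy()                                   # b_0(k) = delta(k)
    for n in range(1, 9):
        d, w = D[n - 1], weights[n - 1]
        b_del = np.concatenate((np.zeros(d), b[:-d]))  # b_{n-1}(k - D_n)
        a_next = a + w * b_del                         # eq. (4)
        # pruning: at stages n = 4 and 6, b_n(k) = a_n(k), eq. (5)
        b = a_next.copy() if n in (4, 6) else a - w * b_del
        a = a_next
    return a                                           # a(k) = a_8(k), eq. (6)

def estimate_k0(x, Q=2560, L=15):
    """Steps 1-3: non-coherent accumulation of |c| over L slots."""
    h = np.conj(pruned_golay()[::-1])     # matched filter for a(k)
    c = np.convolve(x, h)                 # c(kT_c); peak 255 chips after k_0
    m = np.abs(c[:L * Q]).reshape(L, Q).sum(axis=0)   # m(n), step 1
    n0 = int(np.argmax(m))                # step 2
    return (n0 - 255) % Q                 # step 3, wrapped into one slot
```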
3 Rotated Golay Sequences
Theorem 1. If the sequence a(k) is a pruned Golay complementary sequence defined by the recursive equations (3), (4), (5) and (6), then the sequence \hat{a}(k) = a(k)\, e^{j2\pi \Delta f k T_c} is a pruned Golay complementary sequence too (from now on called rotated Golay sequence), which can be obtained from the recursive equations (3), (4), (5), and (6) with the substitution W_n \to \hat{W}_n = W_n \cdot e^{j2\pi \Delta f D_n T_c}.

Proof. Equations (3), (4), (5) and (6) can be rewritten in the z–transform domain as

A_n(z) = A_{n-1}(z) + W_n z^{-D_n} B_{n-1}(z)    (9)
B_n(z) = \begin{cases} A_{n-1}(z) - W_n z^{-D_n} B_{n-1}(z) & \text{for } n = 1, 2, 3, 5, 7, 8 \\ A_n(z) & \text{for } n = 4, 6 \end{cases}    (10)
A(z) = A_8(z)    (11)

with the initial condition A_0(z) = B_0(z) = 1. Defining

\hat{a}_n(k) = a_n(k)\, e^{j2\pi \Delta f k T_c}    (12)
\hat{b}_n(k) = b_n(k)\, e^{j2\pi \Delta f k T_c}    (13)

and substituting z \to z e^{-j2\pi \Delta f T_c} in (9), (10) and (11), we have that the sequence \hat{a}(k) can be represented, in the z–transform domain, by the recursive equations

\hat{A}_n(z) = \hat{A}_{n-1}(z) + W_n e^{j2\pi D_n \Delta f T_c} z^{-D_n} \hat{B}_{n-1}(z)    (14)
\hat{B}_n(z) = \begin{cases} \hat{A}_{n-1}(z) - W_n e^{j2\pi D_n \Delta f T_c} z^{-D_n} \hat{B}_{n-1}(z) & \text{for } n = 1, 2, 3, 5, 7, 8 \\ \hat{A}_n(z) & \text{for } n = 4, 6 \end{cases}    (15)
\hat{A}(z) = \hat{A}_8(z)    (16)

with the initial conditions \hat{A}_0(z) = \hat{B}_0(z) = 1.
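Theorem 1 can also be verified numerically: running the generator of Sect. 2 with the rotated coefficients \hat{W}_n reproduces the rotated sequence exactly. A small check, reusing D, W and pruned_golay from the earlier sketch (again our own illustration, with an arbitrary example offset):

```python
import numpy as np

Tc = 1.0 / 3840000.0                    # chip period
df = 20e3                               # example frequency offset, Hz
k = np.arange(256)

W_hat = [W[n] * np.exp(2j * np.pi * df * D[n] * Tc) for n in range(8)]
a_rot = pruned_golay() * np.exp(2j * np.pi * df * k * Tc)  # a(k) e^{j2*pi*df*k*Tc}

assert np.allclose(pruned_golay(W_hat), a_rot)   # Theorem 1 holds exactly
```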
4 The Proposed Synchronisation Algorithm and Its Performance
It is known that the usual synchronisation procedure, described in Section 2, is very sensitive to frequency offsets. It can be shown (see [4]) that the signal-to-noise ratio degradation at the output of the correlator matched to the sequence a(k) is proportional to

\frac{\sin^2(N \pi \Delta f T_c)}{N \sin^2(\pi \Delta f T_c)}    (17)

with N = 256. Hence the correlation peak vanishes altogether when ∆f = ∆f_± = ±1/(N T_c) = ±15 kHz. This consideration, together with Theorem 1, leads us to propose the new solution, depicted in Fig. 2, which uses three Budisin correlators: one matched to the sequence a(k), one matched to a(k) \cdot e^{j2\pi \Delta f_+ k T_c} and one matched to a(k) \cdot e^{j2\pi \Delta f_- k T_c}. (Note that at the critical frequency offsets 0, ∆f_+ and ∆f_−, one of the three correlator outputs reaches its maximum there, while the others have their minima.) The MAX block takes the sequences applied at its input, determines the maximum in the set of all the values assumed by the sequences and produces as an output the input sequence to which the maximum belongs.
[Figure: x_r(kT_c) feeds three complex Budisin correlators with coefficients W_n^+, W_n and W_n^-; their outputs c_+(kT_c), c(kT_c) and c_-(kT_c) pass through magnitude blocks into a MAX block.]

Fig. 2. The proposed algorithm
Unfortunately the proposed synchronisation scheme has a higher implementation complexity than usual schemes, because the coefficients W_n^\pm = W_n \cdot e^{j2\pi \Delta f_\pm D_n T_c} are complex instead of simply ±1 and three correlators, instead of only one, are used. In order to reduce the implementation complexity we make the following approximations for the coefficients of the upper and lower correlators:

W_m^+ \approx W_m    (18)
W_m^- \approx W_m, \qquad \text{for } m = 3, 4, 5, 6, 7, 8    (19)
[Figure: reduced complexity scheme. The real and imaginary parts of x_r(kT_c) are fed through shared delay and add/subtract stages D_8, ..., D_2 with coefficients W_8*, ..., W_3*; only the final D_1 stages are duplicated, producing the real and imaginary parts of c_+(kT_c), c(kT_c) and c_-(kT_c).]

Fig. 3. Reduced complexity scheme
It can be demonstrated that these approximations make the correlators matched to sequences rotated by a staircase phase instead of a linearly increasing phase. Taking the above assumptions into account, the computations depicted in Fig. 2 can be re–organised as shown in Fig. 3 in a reduced complexity scheme. Note that, while the single Budisin correlator (considering both in–phase and quadrature components) needs 26 sums per output, the computations required by the proposed algorithm in this simplified implementation are 34 sums per output. The performance of the proposed algorithm in a flat fading channel with 9.26 Hz Doppler is shown in Fig. 4, with no frequency offset and with a 20 kHz frequency offset. In both cases it is compared with the performance obtained with a single Budisin correlator and with the algorithm described in [4] (indicated with the label "Wang-Ottosson"). The algorithm proposed in this paper shows very good performance both at 0 kHz and at 20 kHz. The synchronisation error is also plotted versus the frequency offset in Fig. 5, for both the algorithm in [4] (indicated with the label "W-O") and the proposed method. The examined CNR values are -21 dB and -13 dB. The proposed algorithm shows the best behaviour over the whole examined frequency range, except for the values around ∆f_+/2.
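For reference, the detector of Fig. 2 (before the coefficient approximation that leads to Fig. 3) can be expressed in a few lines, under one reasonable reading of the MAX block: accumulate m(n) per branch and keep the branch containing the overall maximum. This sketch reuses pruned_golay and Tc from the earlier listings and again uses brute-force correlation, so it reproduces the behaviour, not the reduced complexity, of the proposed scheme:

```python
import numpy as np

def slot_sync_three_branch(x, Q=2560, L=15):
    """Correlate against a(k), a(k)e^{j2pi df+ kTc} and a(k)e^{j2pi df- kTc},
    then let the MAX stage pick the strongest branch (cf. Fig. 2)."""
    k = np.arange(256)
    best_val, best_k0 = -1.0, None
    for df in (0.0, +15e3, -15e3):      # 0, df_+ and df_- = +/- 1/(N Tc)
        seq = pruned_golay() * np.exp(2j * np.pi * df * k * Tc)
        c = np.convolve(x, np.conj(seq[::-1]))
        m = np.abs(c[:L * Q]).reshape(L, Q).sum(axis=0)
        n0 = int(np.argmax(m))
        if m[n0] > best_val:
            best_val, best_k0 = m[n0], (n0 - 255) % Q
    return best_k0
```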
[Plots: first step sync error rate versus CNR [dB], comparing the proposed algorithm, Wang-Ottosson and the single Budisin correlator.]

Fig. 4. First step performance with a 0 (a) and 20 kHz (b) frequency offset (L = 15)

[Plot: first step sync error rate versus frequency offset (kHz), for the proposed algorithm and W-O at CNR = -13 dB and CNR = -21 dB.]

Fig. 5. First step performance in two different CNR conditions (L = 15)
5 Conclusions
We have presented an innovative method for the first step of the UMTS–FDD initial synchronisation procedure which is able to counteract the degrading effect of the frequency offset. The algorithm has a low complexity implementation. Simulations indicate that it offers good performance for all the CNR and frequency offset values of interest.
References
1. 3GPP 3G TS 25.211, "Physical channels and mapping of physical channels onto transport channels (FDD)", version 4.1.0, June 2001.
2. 3GPP 3G TS 25.213, "Spreading and modulation (FDD)", version 4.1.0, June 2001.
3. 3GPP 3G TS 25.214, "Physical layer procedures (FDD)", version 4.1.0, June 2001.
4. Y.-P. E. Wang, T. Ottosson, "Cell search in W–CDMA", IEEE Journal on Selected Areas in Communications, Vol. 18, No. 8, Aug. 2000.
5. K.-M. Lee, J.-Y. Chun, "An initial cell search scheme robust to frequency error in W-CDMA system", PIMRC 2000, Vol. 2, 2000.
6. S.-Y. Hwang, B.-J. Kang, J.-S. Kim, "Performance analysis of initial cell search using time tracker for W-CDMA", GLOBECOM 2001, Vol. 5, 2001.
7. S. Z. Budisin, "Efficient pulse compressor for Golay complementary sequences", Electronics Letters, Vol. 27, No. 3, Jan. 1991.
Scalable Adaptive Hierarchical Clustering

Laurent Mathy¹, Roberto Canonico², Steven Simpson¹, and David Hutchison¹

¹ Lancaster University, UK
{laurent, ss, dh}@comp.lancs.ac.uk
² University Federico II, Napoli, Italy
[email protected]
Abstract. We propose a new application-level clustering algorithm capable of building an overlay spanning tree among the participants of large multicast sessions, without any specific help from the network routers. This algorithm is based on a unique definition of zones around nodes and an innovative adaptive cluster size distribution. The proposed method finds application in many contexts where large-scale overlay trees can be useful: application-level multicasting, peer-to-peer networks and content distribution networks (among others).
1 Introduction
More than a decade of research in multicast technologies demonstrates the need for large-scale (application-level) overlay structures. Tree-based ACKs (TRACKs) have been identified as the most appropriate approach to providing real-time and scalable delivery guarantees to groups of receivers [8]. In this scenario, the overlay provides a control structure. This approach is further reinforced by the recent emergence of the Source-Specific IP multicast model [4][3], which is an asymmetrical service where only a designated source can send in multicast to the group.

More recently, reasons for the lack of widespread deployment of IP multicast have been identified [2]. These indicate that ubiquitous rollout of IP multicast services may, even if at all possible, take a very long time. In such circumstances, overlays represent an attractive alternative to IP multicast for data dissemination among members of multicast groups. This is the case in Content Delivery Networks (CDN), where application-level multicast overlays are often used, for example, for the distribution of multimedia data from primary to secondary servers. Peer-to-peer (p2p) applications also rely heavily on overlays. Here, the overlays are used to propagate search strings among the nodes of the p2p network, in order to discover the location of the desired content. Very often, users of p2p networks statically configure a few nodes to peer with, which can result in a non-efficient, almost "chaotic" overlay.

To date, all the above scenarios lack, but would greatly benefit from, effective algorithms to build large-scale, efficient overlays. In this paper, we propose a new method designed to build such large-scale overlays, without requiring any special support from the network routers. This method is based on the concept of clustering.

¹ This work was supported by the BT Alpine Project.
2 Adaptive Hierarchical Clustering Algorithm

2.1 General Strategy and Goal
The algorithm described in this section is designed to build, recursively, a hierarchy of clusters. A cluster is represented by a cluster head and is composed of the cluster head and other nodes “close” to the cluster head. The algorithm is “recursive” in the sense that each cluster is divided into sub-clusters, whose (sub-)cluster heads are constituent nodes of the original cluster. The hierarchy of clusters is organised into layers, where layer Li is composed of the cluster heads of (sub-)clusters that divide Li−1 -clusters (i.e. clusters whose head is in layer Li−1 ). This is illustrated in figure 1.(a). For instance, in this figure, the L1 -cluster headed by C is composed of C, F and G. This cluster contains two L2 -clusters, respectively headed by F and G.
Fig. 1. Cluster hierarchy: (a) clusters and layers; (b) tree.
The principle of the algorithm is that, starting at layer L0 with a top-level cluster containing all the nodes in the hierarchy and whose cluster head is a well known node called the root of the hierarchy, clusters are recursively divided into sub-clusters, until all clusters obtained are "singleton-clusters" containing only their cluster head (each node in the hierarchy is therefore the cluster head of a (sub-)cluster). The cluster hierarchy thus built forms a logical tree spanning all the cluster heads (e.g. all the nodes) in the hierarchy (see figure 1.(b)). Consequently, the state needed to build and maintain this hierarchy can be distributed among all the nodes in the hierarchy, such that each node in layer Li only needs to record its parent cluster head (i.e. the Li−1-cluster head whose cluster it belongs to) and the Li+1 cluster heads that are members of its own cluster. For instance, in figure 1.(a), B records R as its parent cluster head and E as its child.

2.2 Workings of the Algorithm
The algorithm is distributed and based solely on unicast communications. In other words, it does not rely on any special network support. One of the central ideas in the algorithm is that any node (i.e. any cluster head) sees the rest of the world as a set of concentric rings (which we call zones), centered on the node itself. Each zone starts where the previous one finishes and the zones are numbered in increasing order, starting at 0 for the smallest ring (see figure 2). The actual size of each ring, as well as the distance measurement used to define it (e.g. delays, throughput, etc.), is unimportant for the general workings of the algorithm. With each zone, a distance called a radius is also defined. Again, the size of the radius is unimportant for the workings of the algorithm (but its distance measurement is the same as the measurements used for defining the zones).
Fig. 2. Zones associated with a node.
The scalable hierarchical clustering algorithm works as follows. The cluster hierarchy is rooted on a well known entity called the root. A node desiring to join the cluster hierarchy first measures its distance to the root, and then sends to the root a JOIN message containing this distance. Based on this distance, the root determines the zone of the joining node. Here, two cases are possible:

1. The joining node is the first node joining in the corresponding zone.
2. Other nodes from the same zone have already joined.

In the former case, the root records the presence of the joining node in the corresponding zone and sends the node a NEW CLUSTER ACK message, indicating that the joining node has found its place in the hierarchy (this finishes the algorithm for the joining node). The joining node is now the cluster head of one of the sub-clusters dividing the cluster headed by the root (albeit a "singleton-cluster" for the time being).

In the latter case, the root sends to the joining node, in a TRY message, the list of the cluster heads in the same zone as the joining node, along with the radius associated with this zone. The joining node then measures its distance to each of the nodes in the list. Again, we consider two cases:

1. The distance of the joining node to at least one of the cluster heads in the list is smaller than the given radius. The clusters headed by these cluster heads are called attracting clusters.
2. The distance of the joining node to all the cluster heads in the list is greater than the given radius.

In the former case, the joining node chooses the closest attracting cluster and joins it: that is, the algorithm starts again with the corresponding cluster head acting as the root. In this case, the joining node is said to "go down one layer" (as the cluster it is heading will potentially be part of the partition of the attracting cluster) and it is important to note that the root does not record the presence of the joining node. In essence, from the root's point of view, the members of the attracting cluster are "collapsed" into the attracting cluster head, as this cluster head is the only node in the attracting cluster remembered by the root.

In the latter case, the joining node creates a new sub-cluster by sending a NEW CLUSTER message to the root (including its distance to the root). The root then keeps a record of the new cluster head (i.e. the joining node) and of its zone, and replies with a NEW CLUSTER ACK message, which finishes the algorithm for this joining node.
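The exchange above is compact enough to be captured in a few lines. The sketch below is our own pseudocode-like rendering (message passing is abstracted into direct method calls; distance stands for an RTT-style measurement, and zone_of/radius_of map distances to zone indices and radii as defined in Sect. 3):

```python
class ClusterHead:
    def __init__(self, name):
        self.name = name
        self.heads = {}   # zone index -> list of child cluster heads

    def join(self, new, distance, zone_of, radius_of):
        """Handle a JOIN from 'new' at this (sub-)hierarchy's root."""
        z = zone_of(distance(new, self))
        if z not in self.heads:            # first node in this zone:
            self.heads[z] = [new]          # reply NEW_CLUSTER_ACK
            return
        # TRY: 'new' measures its distance to every head in its zone
        closest = min(self.heads[z], key=lambda h: distance(new, h))
        if distance(new, closest) < radius_of(z):
            # attracting cluster found: go down one layer; note that
            # this root does not record the presence of 'new'
            closest.join(new, distance, zone_of, radius_of)
        else:
            self.heads[z].append(new)      # NEW_CLUSTER + NEW_CLUSTER_ACK
```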
3 Scalability Considerations
From the previous section, it should be clear that the state overhead imposed on each node in the hierarchy is proportional to the number of zones needed for that node to “span” its cluster, times the number of clusters per zone. This number of clusters per zone also influences the scalability of the join procedure, as any joining node must measure its distance to all the cluster heads at the same zone, for all traversed layers. Also, the further away from the central node a zone is, the more nodes – and thus the more clusters – such a zone potentially contains. These observations favour the use of large clusters, within few zones.
On the other hand, large clusters tend to create many layers (as they can contain large sub-clusters which, in turn, will have to be divided), which has a negative impact on the latency of the join procedure. In order to accommodate these conflicting requirements, we propose to define zones based on Round-Trip Time (RTT) measurements, with sizes that follow an "exponential distribution" (see figure 2):

zone_0: \quad 0 < dist \le 1    (1)
zone_i: \quad (1 + \Delta)^{i-1} < dist \le (1 + \Delta)^i, \quad \text{with } \Delta > 0    (2)

This, in turn, allows us to define the size (i.e. radius) of the clusters at zone_i as:

r_i = \frac{(1 + \Delta)^i - (1 + \Delta)^{i-1}}{2}    (3)
The parameter ∆ in the formulae could be either fixed or varied according to which layer the cluster, headed by the corresponding node, belongs to. Other size “distributions” for both zones and radii are of course possible, but the ones we propose prevent an explosion of the number of clusters in far zones while keeping the number of zones down and retaining the desirable property that “detail” (i.e. “fine grain positioning”) matters only for nodes close to a cluster head.
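In code, (1)-(3) reduce to a logarithm and a difference. The sketch below (our naming; ∆ fixed for simplicity, and the radius of zone_0 set to half its width by analogy, since (3) is stated for the outer zones) could serve as the zone_of and radius_of parameters of the join sketch in Sect. 2.2:

```python
import math

DELTA = 0.5   # assumed value; the paper only requires Delta > 0

def zone_of(dist, delta=DELTA):
    """Eqs. (1)-(2): zone_0 = (0, 1]; zone_i = ((1+delta)^(i-1), (1+delta)^i]."""
    if dist <= 1.0:
        return 0
    return math.ceil(math.log(dist, 1.0 + delta))

def radius_of(i, delta=DELTA):
    """Eq. (3): half the width of zone_i (extended to zone_0 by analogy)."""
    if i == 0:
        return 0.5
    return ((1.0 + delta) ** i - (1.0 + delta) ** (i - 1)) / 2.0
```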
4 Discussion and Conclusions
In this paper, we have proposed a method to build a hierarchy of nodes, based on the notion of proximity, in a distributed and scalable way. The hierarchy is built through a series of "local" decisions, each involving only a small subset of the hierarchy's population. This, coupled with an innovative adaptive cluster size distribution approach, yields a simple, yet powerful, approach to building overlay, application-level structures without relying on any special support from network routers. The hierarchy thus built is loopless and spans all the nodes in it. Our scalable adaptive hierarchical clustering algorithm can therefore be seen as a new member in the category of application-level multicast tree building methods (e.g. [1][5][7][6]).

The overlay application-level multicast trees built with our scalable adaptive hierarchical clustering are unconstrained, meaning that nodes in the tree cannot explicitly control their number of children. This may not be a problem for overlay trees built for control purposes [8], but could yield a significant penalty for trees built for data distribution. However, the method presented in this paper can still be very useful in the context of application-level multicast data distribution. Indeed, a constrained application-level multicast overlay tree can be built by having each cluster head and its sub-clusters (i.e. its members populating the next layer in the hierarchy) run any algorithm that builds a constrained overlay tree [1][7][6]. With this approach, each node in the cluster hierarchy would be a member of the overlay tree rooted at its parent cluster, as well as the root of the overlay tree spanning its own cluster. This would allow the building of very large constrained overlay trees.

Another application of the scalable adaptive hierarchical clustering presented in this paper is resource discovery. Indeed, a permanent hierarchy of resources could be built, rooted on a well known node, and "searched" by clients with a modified join procedure which does not declare the creation of a new (sub-)cluster when it finishes (see section 2.2). This could even substitute expanding ring searches in asymmetric network multicast circumstances or when network multicast is unavailable.

In future work, we will investigate the performance of the proposed algorithm under dynamic conditions (e.g. dynamic group membership, failures, etc.).
References
1. Y-H. Chu, S. Rao, and H. Zhang. A Case for End System Multicast. In ACM SIGMETRICS, pages 1–12, Santa Clara, CA, USA, June 2000.
2. C. Diot, B. Levine, B. Lyles, H. Kassem, and D. Balensiefen. Deployment Issues for the IP Multicast Service and Architecture. IEEE Network, 14(1):78–88, Jan/Feb 2000.
3. B. Fenner, M. Handley, H. Holbrook, and I. Kouvelas. Protocol Independent Multicast - Sparse Mode: Protocol Specification (revised). Internet Draft draft-ietf-pim-sm-v2-new-02, IETF, Mar 2001. Work in Progress.
4. H. Holbrook and D. Cheriton. IP Multicast Channels: EXPRESS Support for Large-scale Single-source Applications. ACM Comp. Comm. Review, 29(4):65–78, Oct 1999.
5. J. Jannotti, D. Gifford, K. Johnson, F. Kaashoek, and J. O'Toole. Overcast: Reliable Multicasting with an Overlay Network. In USENIX OSDI, San Diego, CA, USA, Oct 2000.
6. L. Mathy, R. Canonico, and D. Hutchison. An Overlay Tree Building Control Protocol. In Proc. of Intl. Workshop on Networked Group Communication (NGC), pages 76–87, Nov 2001.
7. D. Pendarakis, S. Shi, D. Verma, and M. Waldvogel. ALMI: An Application Level Multicast Infrastructure. In 3rd USENIX Symposium on Internet Technologies, San Francisco, CA, USA, Mar 2001.
8. B. Whetten and G. Taskale. An Overview of Reliable Multicast Transport Protocol II. IEEE Network, 14(1):37–47, Jan 2000.
How to Achieve Fair Differentiation

Eeva Nyberg and Samuli Aalto

Networking Laboratory, Helsinki University of Technology
P.O. Box 3000, FIN-02015 HUT, Finland
{eeva.nyberg,samuli.aalto}@hut.fi
Abstract. We present a simple packet level model to show how marking at the DiffServ boundary node and scheduling and discarding inside a DiffServ node affect the division of bandwidth between two delay classes: elastic TCP flows and streaming non-TCP flows. We conclude that only per flow marking together with dependent discarding thresholds across both delay classes is able to divide bandwidth fairly, according to the load of the network, and in a TCP friendly way. Keywords: DiffServ, TCP, fairness, TCP friendliness
1 Introduction
The main arguments against differentiation are the waste of network resources and the difficulty of guaranteeing fair bandwidth allocation between priority classes. More research in this field has to be done to be able to settle the dispute. Internet research also lacks efforts in coupling the packet level QoS mechanisms of DiffServ [1], e.g. Assured Forwarding (AF) [2], to flow level analysis. On the other hand, flow level bandwidth allocation and fairness research, e.g. [3], [4], continues to assume that weighted fair bandwidth allocations between flows in different service classes are somehow achieved, and evades the question of how to do so without flow control or per flow scheduling. In [5] we introduced both packet and flow level models to study how bandwidth is divided among flows using the packet level differentiation mechanisms of the Simple Integrated Media Access (SIMA) proposal [6]. In the present paper we continue the packet level modelling approach to investigate the key factors of two DiffServ schemes, AF and SIMA. Following Roberts [7], we assume two forwarding classes based on delay requirements: elastic TCP traffic and streaming non-TCP traffic. As a result, we present the role of the conditioning and forwarding mechanisms in dividing bandwidth consistently across delay classes.
2 DiffServ Network Model and Its Analysis
The main elements of DiffServ are traffic classification and conditioning at the boundary nodes and traffic forwarding through scheduling and discarding at the DiffServ interior nodes. In addition, congestion control mechanisms designed for the Internet, such as TCP, and active queue management algorithms, such as RED, may be used for QoS in the Internet. Figure 1 summarizes the components.

[Figure: at the boundary node, flows pass through a meter, a marker and a conditioner; at the interior node, the resulting aggregates are accepted or discarded and fed to a scheduling unit serving a real-time and a non-real-time delay class with weights w_1 and w_2.]

Fig. 1. Components of a DiffServ network
Network model. Consider a DiffServ network with a single bottleneck link, which is loaded by a fixed number of flows. Assume two delay classes, d = 1, 2, and I precedence levels, i = 1, ..., I. Delay class 1 refers to non-TCP flows, and delay class 2 to TCP flows. Precedence level I refers to the highest priority, i.e. flows at that level encounter the smallest packet loss probability, and level 1 to the lowest priority. Note that this is just opposite to, e.g., the definition given in [2]. Therefore, we rather use the term priority level here. Each flow is given a weight φ that reflects the value of the flow. A natural objective of any traffic control algorithm is to allocate bandwidth as fairly as possible. Here fairness refers to weighted fairness in a single link, i.e. the throughput θ of any flow should be proportional to its weight φ. For networks with DiffServ architecture it is not clear how to achieve this objective, since there are no per flow mechanisms available in the core network.

At the conditioner, the packets of a flow are marked to priority levels according to the measured traffic rate compared to the weight of the flow. More specifically, let ν denote the measured packet arrival rate of a flow. As in [6], we assume that the priority level pr of the flow depends on ν and φ as follows:

pr = \max\left\{ \min\left\{ \left\lfloor \frac{I}{2} + 0.5 - \frac{\ln(\nu/\phi)}{\ln 2} \right\rfloor, I \right\}, 1 \right\}.    (1)

Thus, the priority level is decreased by one as soon as the traffic rate doubles. For non-TCP flows we assume a fixed packet arrival rate ν, whereas for TCP flows it depends on the congestion level of the network. Let RTT denote the round trip time of a TCP flow and q the packet loss probability it encounters in the buffer of the bottleneck link. Following [8], we assume that

\nu = \frac{1}{RTT} \sqrt{\frac{2(1-q)}{q}}.    (2)

Assume that there are L_1 different groups of non-TCP flows, each group l with a characteristic packet arrival rate ν(l), and let \mathcal{L}_1 denote the set of such flow groups. Furthermore, assume that there are L_2 different groups of TCP flows, each group l with a characteristic round trip time RTT(l), and let \mathcal{L}_2 denote the set of such flow groups. Finally, let n(l) denote the number of flows in any group l. At the boundary node all the traffic belonging to the same delay class and precedence level is aggregated. Let λ_d(i) denote the aggregate packet arrival rate of delay class d and priority level i. Packets of the flow aggregates are then forwarded or discarded by a scheduling unit that includes two buffers, one for each delay class. Denote by K¹ and K² the sizes of the two buffers in number of packets.

DiffServ mechanisms. Traffic is conditioned at the boundary node by measuring the incoming traffic and, based on the metering result, by marking the packets of the flow. We consider two different marking principles:

– Per flow marking: Once the measured traffic rate of a flow exceeds a marking threshold, all packets of the flow are marked to the same precedence level.
– Per packet marking: Only those packets of a flow that exceed the marking threshold are marked to the lower precedence level.

The marking thresholds for flow group l, determined from (1), are

t(l, 0) = \infty, \quad t(l, I) = 0, \quad \text{and} \quad t(l, i) = \phi(l) \cdot 2^{I/2 - i - 0.5}, \quad i = 1, \ldots, I - 1.    (3)

Per flow marking gives the aggregate arrival intensity λ_d(i) as

\lambda_d(i) = \sum_{l \in \mathcal{L}_d :\, pr(l) = i} n(l)\, \nu(l).    (4)

On the other hand, if per packet marking is applied, then

\lambda_d(i) = \sum_{l \in \mathcal{L}_d :\, pr(l) \le i} n(l) \big( \min[\nu(l), t(l, i-1)] - \min[\nu(l), t(l, i)] \big).    (5)
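As a concrete reading of (1) and (3)-(5), consider the following Python sketch (our own illustration; the floor in pr is our interpretation of (1), and it reproduces the priorities pr = 3, 2, 1 quoted in Sect. 3 for ν(1) ∈ {0.039, 0.079, 0.16} with φ(1) = 0.08 and I = 3):

```python
import math

def pr(nu, phi, I):
    """Eq. (1): the priority drops by one each time the rate doubles."""
    return max(min(math.floor(I / 2 + 0.5 - math.log2(nu / phi)), I), 1)

def t(phi, i, I):
    """Eq. (3): marking thresholds, with t(l,0) = inf and t(l,I) = 0."""
    if i == 0:
        return math.inf
    if i == I:
        return 0.0
    return phi * 2.0 ** (I / 2 - i - 0.5)

def lam_per_flow(flows, i, I):
    """Eq. (4): flows is a list of (n, nu, phi) groups of one delay class."""
    return sum(n * nu for n, nu, phi in flows if pr(nu, phi, I) == i)

def lam_per_packet(flows, i, I):
    """Eq. (5): only the excess packets get the lower precedence level."""
    return sum(n * (min(nu, t(phi, i - 1, I)) - min(nu, t(phi, i, I)))
               for n, nu, phi in flows if pr(nu, phi, I) <= i)
```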
But what are the metering and marking mechanisms that follow these principles? In [9] we demonstrated by simulation experiments that the token bucket scheme marks packets to precedence levels per packet, while the use of an exponentially weighted moving average (EWMA) marks packets per flow. The token bucket scheme is referred to, e.g., in the AF specification. Packets are marked to I precedence levels by I − 1 cascaded token buckets. The EWMA scheme was proposed, e.g., in the SIMA proposal.

Forwarding at the interior node is done to aggregates divided, in our case, into two delay classes. Before forwarding, traffic can be limited by discarding packets based on precedence levels. We consider two different discarding mechanisms:

– Independent discarding: Each buffer acts locally as a separate buffer, discarding appropriate precedence levels according to its buffer content.
– Dependent discarding: The content of both buffers determines which precedence level is discarded, in both buffers.
Let m^d denote the number of packets in the buffer of delay class d. Independent discarding is implemented by giving, separately for each delay class d, thresholds K^d(i) that determine the minimum priority level accepted, PL_a, when compared to m^d. Dependent discarding, proposed in [6], is implemented by giving a two-dimensional monotonic function

PL_a = f\!\left( \frac{m^1}{K^1}, \frac{m^2}{K^2} \right)    (6)
(7)
On the other hand, if per packet marking is applied, then q(l) =
I j=1
pd (j)
min [ν(l), t(l, j − 1)] − min [ν(l), t(l, j)] . ν(l)
(8)
For each TCP flow these packet loss probabilities can be used to determine iteratively the packet arrival rate ν from equation (2). Then these rates are again aggregated as in (4) and (5), and the aggregate rates are used to solve the stationary distribution of the resulting two-dimensional Markov process. By continuing this iteration, the traffic rates of TCP flows converge to some equilibrium values, which reflect the network state, i.e. the number of flows n(l) in different classes l.
1182
3
E. Nyberg and S. Aalto
Numerical Results and Conclusions
We study the combined effect of the three degrees of freedom introduced in the text: marking, discarding thresholds and weighted capacity. We have the following scenario in terms of the free parameters: fL = 1, Kl = 13, K2 = 39 and I = 3. In addition we consider two flow groups, non-TCP flows in group 1 with ¢(1) = 0.08 and v(l) E {0.039, 0.079, 0.16}, and TCP flows in group 2 with ¢(2) = 0.04 and RTT(2) = 1000/ fL. The three values of v(l) are chosen so that, under the per flow marking scheme, the non-TCP flows have priorities pr(l) = 3, pr(l) = 2, and pr(l) = 1, respectively. . tUles , d epIC . t e d·m fi gUle ' 2 sh ow th e rat·10 &(2) &(1) - u(l)(l-q(l)) E ac h se t 0 f pIC u(2)(1-q(2)) between throughputs of flows as function of total number of flows, under the condition n(1)/n(2) = 1/2. The trajectories are solid, gray, and dashed for v(l) = 0.039, v(l) = 0.079, and v(l) = 0.16, respectively. WI
= 0.75 , w 2 = 0.25
.'T~ .") ,-------
e(1~/e(2)
i~ ; } 4 ,
~~
10 15 20 25 30 35 40 45 n(ll _n(2)
4" ~~
10 15 20 25 30 35 40 45 n (l) .n(2)
= 0.5, w 2 = 0.5
WI
"lb "lb
10 15 20 25 30 35 40 4s n (l) .n(2)
One priority, i.e. no differentiation
·(lf~7 ) J ;,
~~
10 15 20 25 30 35 40 45 n(l) .n(2)
.'T~ ."), . . ------4
,
~~
·L2 ") -i~
10 15 20 25 30 35 40 45 n (l) +n(2)
10 15 20 25 30 35 40 4S n (l) +n(2 )
Three priorities, p er packet or p er flow marking, independent discarding
·'T~ ·") ..' 6
__ '
5
4
...
......
~~
10 15 20 25 30 35 40 45 n(l) _n(2)
·'T 6 ;
10 15 20 25 30 35 40 45 n(l) +n(2)
·'T~ ·") 6
; ...................... .... 3
~
;~
·b")
10 15 20 25 30 35 40 45
0
(1) +n(2)
Three priorities, p er p acket m arking, dep endent discarding
·'T~ ·") 6 ;
4
3 -~
·'T~ ·") 6
3-_ ;
4
i~ 10 15 20 25 30 35 40 45 n(l) _n(2)
;~
10 15 20 25 30 35 40 45 n(l) .n(2)
·'T 6 ; 4
~l --~
10 15 20 25 30 35 40 4S n (l) +n(2)
Three priorities, per flow marking, dependent discarding Fig. 2. Effect of marking and discarding when the minimum weights of the rt buffer and nrt buffer change. 66% are TCP flows and 33% non-TCP flows.
The lowest pair in figure 2 shows the effect of per flow marking and dependent discarding. Marking all packets of the flow to the same priority level encourages the TCP mechanism to optimize the sending rate according to the network state. Under congestion, the TCP flows attain a higher priority level by dropping their sending rate. This also encourages the non-TCP traffic to adjust its sending rate accordingly. In all other cases, it is always optimal for the non-TCP flows to send as much as possible, even if packets are then marked to the lowest priority level. The use of per flow marking and dependent thresholds thus gives a powerful incentive for flows to be TCP friendly [11]. The use of dependent discarding controls the throughput of non-responsive flows better than independent discarding. With dependent thresholds, when the nrt buffer is congested, packets in the rt buffer are also discarded to alleviate the congestion. Giving some minimum weight to the nrt buffer protects the TCP traffic from bandwidth exhaustion by the non-TCP flows. However, there is no clear one-to-one relationship between the ratio w_1/w_2 of scheduler weights and the ratio φ(1)/φ(2) of flow group weights.

Further research has to be done in elaborating the TCP congestion control model to include slow start. Furthermore, to properly assess the mechanisms we need to extend the model to networks with more than one bottleneck link.

Acknowledgments. Eeva Nyberg's research is supported by the Academy of Finland and in part by a grant from the Nokia Foundation. The authors would like to thank Jorma Virtamo and Eemeli Kuumola for their cooperation.
References
1. Blake S., Black D., Carlson M., Davies E., Wang Z., and Weiss W., An Architecture for Differentiated Services, Dec. 1998, RFC 2475.
2. Heinanen J., Baker F., Weiss W., and Wroclawski J., Assured Forwarding PHB Group, June 1999, RFC 2597.
3. Kelly F., "Charging and rate control for elastic traffic," Eur. Trans. Telecommun., vol. 8, pp. 33–37, 1997.
4. Massoulié L. and Roberts J., "Bandwidth sharing: Objectives and algorithms," in Proceedings of IEEE INFOCOM, 1999, pp. 1395–1403.
5. Nyberg E., Aalto S., and Virtamo J., "Relating flow level requirements to DiffServ packet level mechanisms," Tech. Rep. TD(01)04, COST279, Oct. 2001.
6. Kilkki K., "Simple Integrated Media Access," available at http://www-nrc.nokia.com/sima, 1997.
7. Roberts J., "Traffic theory and the Internet," IEEE Communications Magazine, vol. 39, no. 1, pp. 94–99, Jan. 2001.
8. Kelly F., "Mathematical modelling of the Internet," in Proc. of Fourth International Congress on Industrial and Applied Mathematics, 1999, pp. 105–116.
9. Nyberg E., Aalto S., and Susitaival R., "A simulation study on the relation of DiffServ packet level mechanisms and flow level QoS requirements," in Intl. Seminar on Telecommunication Networks and Teletraffic Theory, St. Petersburg, Russia, 2002.
10. Laine J., Saaristo S., Lemponen J., and Harju J., "Implementation and measurements of simple integrated media access (SIMA) network nodes," in Proceedings of IEEE ICC 2000, June 2000, pp. 796–800.
11. Floyd S. and Fall K., "Promoting the use of end-to-end congestion control in the Internet," IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 458–472, Aug. 1999.
Measurement-Based Admission Control for Dynamic Multicast Groups in Diff-Serv Networks

Elena Pagani and Gian Paolo Rossi

Dip. di Scienze dell'Informazione, Università degli Studi di Milano
via Comelico 39, I-20135 Milano, Italy
{pagani,rossi}@dsi.unimi.it
Abstract. An appealing approach to the admission control problem for traffic with QoS requirements consists in evaluating the resource availability by means of measurement-based techniques. Those techniques allow QoS to be provided with minimal changes to the current network devices. In this work, we propose a mechanism to perform active measurement-based admission control for multicast groups with dynamically joining receivers. The proposed mechanism has been implemented in the ns-2 simulation framework, to evaluate its performance.
1 Introduction
The Differentiated Services model [1] has been proposed in the literature to provide QoS in a scalable manner. According to that model, bandwidth broker agents [4] exist that take charge of the traffic admission control functionalities. Yet, only a few practical implementations of the diff-serv model have been realized. Moreover, in the diff-serv model it is difficult to support multicast [1]. In this paper we describe the end-to-end Call Admission Multicast Protocol (Camp) [5], which can be used to ensure bandwidth guarantees to multicast sessions in IP networks, thus providing them with the Premium Service [4]. Camp is scalable, operates on a per-call basis and supports group membership dynamics. It performs the functionalities of a distributed bandwidth broker (BB). To perform the admission control, Camp adopts an active-measurement approach. We have implemented Camp in the frame of the ns-2 simulation package to verify its effectiveness under different system conditions.

In the system model we consider, a QoS-sensitive application specifies to the underlying service provider the QoS communication requirements and the behaviour of the data flow it is going to generate (traffic profile). We assume that a session announcement protocol (e.g., sdr) is available to announce the needed session information. We consider sources generating CBR traffic. All the recipients receive the same set of microflows; they have the same QoS requirements.
This work was supported by the MURST under Contract no.MM09265173 “Techniques for end-to-end Quality-of-Service control in multi-domain IP networks”.
We adopt the notation proposed in [1] for the differentiated services (diff-serv, DS) model. We consider the functional architecture of the BB in accordance with the proposal presented in [7]. The BB provides the applications with the interface to access the QoS services. When the QoS aggregate spans multiple domains, an inter-domain protocol is executed amongst peer BBs, to guarantee the proper configuration of the transit and destination domains.
2 Distributed Bandwidth Broker
In this section we outline the end-to-end Call Admission Multicast Protocol (Camp); greater details can be found in [5]. In Figure 1, we show the system architecture in which Camp works. Camp operates within the RTP/RTCP [6] protocol suite and performs the set-up of an RTP session. It receives from the application, via RTP, the profile of the data traffic that will be generated. Camp uses RTCP to monitor the QoS supplied to the recipients. Camp performs the admission control using an active measurement approach [2]. It generates probing traffic with the same profile as the data traffic generated by the application. Both the data and the probe packets are multicast. To this aim, we assume that both a membership protocol and a multicast routing protocol are available. The latter maintains a tree-based routing infrastructure connecting the multicast recipients. Camp is independent of both those protocols.

[Figure: a Camp/BB module sits between the application and RTP/RTCP over UDP at each end station, exchanging service requests/replies and TSpecs with the application, probe and data flows via RTP, and evaluation reports via RTCP; routers apply priority scheduling to data, probe and best effort packets.]

Fig. 1. Architecture of Camp-based end stations

The probing phase has the aim of evaluating whether the available bandwidth is sufficient to support the new traffic. With respect to the classification given in [2], we adopt out-of-band probing with dropping of the probe packets as the congestion signal. All the routers use a priority packet scheduling discipline: the probe packets are marked with a higher priority than the best effort packets, but a lower priority than the QoS packets. This priority assignment ensures that the probing traffic does not affect the established QoS sessions. On the other hand, probe packets can drain the available bandwidth for the new QoS session at the expense of the best effort traffic.

To support multicast, two issues must be considered: (i) the receiver group membership can dynamically change; and (ii) different destinations can experience different QoS in receiving the same traffic. To cope with problem (ii) above, the recipients that receive the probing traffic with low quality prune from the tree and refuse the service, by sending a refusing RTCP report to the source. When all the reports have been received by the source, if the service is accepted by at least one recipient, the source switches from the transmission of probe packets to the transmission of the data packets generated by the application, without discontinuity. The data packets are forwarded along the pruned tree.

We deal with problem (i) above using a proxy mechanism. The source announces the multicast session via sdr and starts transmitting probe packets at the scheduled time, if at least one receiver is listening. A Camp proxy is instantiated in a router either in the initialization phase, or when one or more new downstream output interfaces (oifs) appear in the router for the group (dynamic membership changes). The proxy remarks as probe packets all the incoming packets for the session that must be forwarded to the probing oifs. This way, the data sent to the new destinations do not affect previously established sessions traversing the new branch. The proxy lifetime lasts until, for each probing oif, either it is pruned from the tree (as the result of a service refusal), or an acceptance report is received from it. In the latter case, if the initialization phase is ongoing, the report is forwarded to the source. The source Camp entity switches to the transmission of the data as soon as it receives an accepting report. The proxy mechanism allows the membership to be hidden from the source.
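The two per-packet rules that Camp adds to the forwarding path are small, and the sketch below captures them (our own illustration: QOS/PROBE/BEST_EFFORT stand for the three priority markings, forward is a placeholder for the router's output routine, and oif_state records which output interfaces are still in the probing state):

```python
QOS, PROBE, BEST_EFFORT = 2, 1, 0       # scheduling priorities, high to low

def proxy_remark(packet, oif_state, forward):
    """Camp proxy rule: traffic sent towards interfaces still probing is
    remarked as probe traffic, so a joining branch cannot disturb the
    previously established QoS sessions it traverses."""
    for oif, probing in oif_state.items():
        out = dict(packet)
        if probing and out["prio"] == QOS:
            out["prio"] = PROBE
        forward(out, oif)

def scheduler_pick(queues):
    """Priority scheduling at every router: QoS packets first, then
    probe packets, then best effort."""
    for prio in (QOS, PROBE, BEST_EFFORT):
        if queues[prio]:
            return queues[prio].pop(0)
    return None
```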
3 Performance Evaluation
We have implemented the architecture shown in Figure 1 in the frame of the ns-2 simulation package [3]. The simulations have been performed with a meshed network of 64 nodes, connected by optical links of 2 Mbps bandwidth and variable length in the range 50 to 100 km. Background, best effort traffic is uniformly distributed all over the network; best effort sources generate CBR traffic with a 0.66 Mbps rate. The size of best effort, probe and data packets is 512 bytes. We embedded a real RTP implementation into the RTP template of ns-2. The recipients dynamically join the group; we performed experiments with different join rates. The multicast tree is incrementally built as join events occur; the source is located at the tree root. The source does not know the group of recipients; it generates CBR traffic whose rate assumes different values in the range 0.4 Mbps to 1.9 Mbps. During the probing phase, the recipients compare the received rate with the source rate: if the difference is below a tolerated threshold, a recipient sends a positive report. We performed measures for different thresholds [5]. The recipient decision is sent within the first RTCP report a destination generates after the reception of a number of probe packets (i.e. of samples) sufficient to ensure an accurate measure of the available bandwidth by covering the rate of the slowest traffic source. We performed simulations with different sample sizes.

[Plots: throughput and average end-to-end delay vs. offered load.]

Fig. 2. (a) Throughput vs. offered load for |G| = 10. (b) Average end-to-end delay vs. offered load for |G| = 10
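The receiver-side decision reduces to a rate comparison over the probe samples. A minimal sketch of the acceptance test with the 5% threshold used here (our naming; the sample bookkeeping is simplified to a packet count over a measurement window):

```python
def accept_service(received_packets, window, source_rate, threshold=0.05):
    """Accept iff the measured probe rate differs from the advertised
    source rate by less than 'threshold' (a fraction of the source rate)."""
    measured_rate = received_packets / window      # packets per second
    return abs(source_rate - measured_rate) <= threshold * source_rate
```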
[Plots: jitter and fair delay vs. offered load.]

Fig. 3. (a) Jitter vs. offered load, for |G| = 10. (b) Fair delay vs. offered load, for |G| = 10
By performing simulations with different group cardinalities, we observed that the performance is almost independent of this parameter. We performed simulations with recipients that join the group at different rates. The proxy mechanism has proved to be effective in performing the admission control, and the measured performance is independent of the frequency with which recipients join the group. The results shown in this section have been obtained for a group of 10 recipients, an acceptance threshold set to 5% of the source rate, and a join request inter-arrival time of 1 sec. The measures have been taken 20 sec. after the end of the set-up phase of the last grafted recipient.

Simulations indicate that Camp effectively performs the call admission control. The recipients accepting the transmissions receive at the correct data rate. In Figure 2, we report the throughput (a) and the end-to-end delay (b) averaged over all the recipients; no receiver has refused the service. The delay increases when the offered load approaches the link capacity, while it is independent of the interference of the best effort traffic. This indicates that the session characteristics are preserved from source to destination, independently of the other network load.
[Plots: average end-to-end delay vs. offered load, for best effort packet sizes of 128 B, 512 B and 1 KB, and for receivers at 1, 5 and 10 hops from the source.]

Fig. 4. (a) Average end-to-end delay vs. offered load as a function of the best effort packet size, for |G| = 10. (b) Average end-to-end delay vs. offered load as a function of the receivers' distance from the source
The jitter has been computed according to the algorithm given in the RTP specification [6]; it is reported in Figure 3(a). The jitter behaviour indicates that at the receiver side a delivery agent must be used to perform the playback of the source transmission. The jitter shows a peak in correspondence with the maximum contention between the QoS and the best effort traffic. After that value, the QoS traffic pushes the best effort traffic away from the tree branches, and best effort packets start to be dropped from the queues. The fair delay is the maximum difference between the end-to-end delays perceived by two different destinations. Its behaviour (Figure 3(b)) indicates that the destinations at a greater distance from the source greatly suffer the network congestion. In the worst case, this could result in a lower probability of successful service set-up for the farthest destinations. However, we never observed service refusal.

The measures of the jitter and the fair delay show the effects of the presence of best effort packets at the core routers. As expected, the priority mechanism alone is not sufficient to ensure jitter control at the destinations. To highlight the impact of the best effort traffic over the QoS, we performed simulations with different best effort packet sizes. In Figure 4(a) we report the end-to-end delay observed by the QoS packets that compete with best effort traffic generated as before. As the links cannot be preempted once a packet transmission is ongoing, QoS packets arriving at a node could have to wait at most one best effort packet transmission time before gaining the link, although they have the highest priority. The impact of this delay on the received rate is however negligible.

We performed an experiment with two sources: the former has a 1 Mbps rate; we varied the rate of the latter. In Figure 4(b), we show the average delay measured with respect to the load generated by the second source and the distance of the recipients from the source. The contention probability amongst different sessions increases with the path length: it affects the queueing delays, thus altering the regular traffic profile. The impact on the received throughput is however negligible.

The achieved results show that the devised mechanism effectively performs admission control. Yet, further investigation has to be carried out concerning the interactions amongst several concurrent transmissions and their impact on the probability of successful service establishment.
References 1. Blake S., Black D., Carlson M., Davies E., Wang Z., Weiss W. “An Architecture for Differentiated Services”. RFC 2475, Dec. 1998. Work in progress. 2. Breslau L., Knightly E., Shenker S., Stoica I., Zhang H. “Endpoint Admission Control: Architectural Issues and Performance”. Proc. SIGCOMM’00, 2000, pp. 57-69. 3. Fall K., Varadhan K. “ns Notes and Documentation”. The VINT Project, Jul. 1999, http://www-mash.CS.Berkeley.EDU/ns/ . 4. Nichols K., Jacobson V., Zhang L. “A Two-bit Differentiated Services Architecture for the Internet”. Internet Draft, draft-nichols-diff-svc-arch-00.txt, Nov. 1997. Work in progress. 5. Pagani E., Rossi G.P., Maggiorini D. “A Multicast Transport Service with Bandwidth Guarantees for Diff-Serv Networks”. Lecture Notes in Computer Science 1989, Jan. 2001, pp. 129. Springer, Berlin. 6. Schulzrinne H., Casner S., Frederick R., Jacobson V. “RTP: A Transport Protocol for Real-Time Applications”. RFC 1889, Jan. 1996. Work in progress. 7. Teitelbaum B., Chimento P. “QBone Bandwidth Broker Architecture”. Internet 2 QoS Working Group Draft, Jun. 2000. Work in progress. http://qbone.internet2.edu/bb/
A Framework to Service and Network Resource Management in Composite Radio Environments¹

L.-M. Papadopoulou, V. Stavroulaki, P. Demestichas, and M. Theologou

National Technical University of Athens (NTUA), Electrical and Computer Engineering Department, Telecommunications Laboratory, 9 Heroon Polytechneiou Str, 15773 Zographou, Athens, Greece
[email protected]
Abstract. This paper builds on the assumption that in the future, UMTS, HIPERLAN-2 and DVB-T can be three (co-operating) components of a composite radio infrastructure that offers wideband wireless access to broadband IP-based services. Managing the resources of this powerful, composite-radio infrastructure in an aggregate manner, and in a multi-operator scenario, is a complex task. This paper presents an approach to the overall UMTS, HIPERLAN-2 and DVB-T network and service management problem, describing the internal operation of a system addressing this problem. Key points addressed are the development of an architecture that can jointly optimise the resources of the technologies in the composite radio environment, and the development of open interfaces with Service Provider mechanisms and the heterogeneous managed infrastructure.
1 Introduction
Wireless systems continue to attract immense research and development effort [1], especially in the following areas. First, the gradual introduction of third generation systems like the Universal Mobile Telecommunications System (UMTS) [2] and the development of the IMT-2000 framework [3]. Second, the standardisation, development and introduction of Fixed Wireless Access (FWA) systems, which support radio access to broadband services with limited mobility; a pertinent, promising example is the HIPERLAN (High Performance LAN) initiative [1]. Third, the advent of Digital Broadcasting Systems (DBS) like the Digital Video Broadcasting (DVB) and the Digital Audio Broadcasting (DAB) initiatives [4]. Moreover, a recent trend (compliant with the features envisaged for the Fourth Generation (4G) wireless systems' era) is to assume that UMTS, HIPERLAN-2 and DVB-T will be three co-operating wireless access components. In other words, UMTS, HIPERLAN-2, and DVB-T can be seen as parts of a powerful, composite-radio infrastructure through which their operators will be
1 Work partially funded by the CEC, under the 5th Framework Program, within the IST project MONASIDRE (IST-2000-26144: Management of Networks and Services in a Diversified Radio Environment).
enabled to provide users and service providers (SPs) with alternatives regarding efficient (in terms of cost and QoS) wireless access to broadband IP-based services. This paper presents the development of a UMTS, HIPERLAN-2 and DVB-T network and service management system capable of:
- Monitoring and analysing the statistical performance and QoS levels provided by the network elements (segments) of the managed infrastructure, and the associated requirements originating from the service area (environment conditions, e.g., traffic load, mobility levels, etc.).
- Inter-working with SP mechanisms, so as to allow SPs to dynamically request the reservation (release, etc.) of network resources.
- Performing dynamic reconfigurations of the overall managed UMTS, HIPERLAN-2 and DVB-T infrastructure, as a result of resource management strategies, for handling new environment conditions and SP requests in a cost-efficient manner.

In the following, the management architecture in a composite radio and multi-operator context, and the operation of such a system, are presented.
2 Management Architecture in a Composite Radio and Multi-operator Context
Our model of the composite radio environment includes three different wireless access technologies and has a flexible implementation. In the context of this paper, each wireless access system is considered to belong to a different operator, each operating a network and service management platform. A generic management architecture of such a platform is split into three logical entities, as follows.
- MASPI (Monitoring and Assessment and SP mechanism Interworking). This component captures the (changing with time) conditions that originate from the environment (service area) of the managed UMTS, HIPERLAN-2 and DVB-T infrastructure; this is accomplished by monitoring and assessing the relevant network and service level performance of the managed network elements and segments. This component also interworks with the SP mechanisms, so as to allow SPs to request the reservation of resources (establishment of virtual networks) over the managed network infrastructure. Virtual networks are seen as the realisation of contracts that the managed system should maintain with SPs.
- RMS (Resource Management Strategies). This component applies resource management strategies, so as to dynamically find and impose the appropriate UMTS, HIPERLAN-2 and DVB-T infrastructure reconfigurations, through which the service provider requests and/or the (new) service area conditions will be handled in the most cost-efficient manner.
- NES (UMTS, HIPERLAN-2 and DVB-T Network and Environment Simulator). It provides the means for validating management decisions prior to their application in the real network. This component enables off-line testing, validation and demonstration of the management mechanisms.

A minimal interface sketch of these three components is given below.
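The sketch makes the division of responsibilities concrete. All class and method names, and the placeholder values, are hypothetical illustrations; the paper prescribes the responsibilities of the components, not an implementation.

```python
# Hypothetical interface sketch of the per-operator management platform;
# names, signatures, and placeholder values are illustrative only.

class MASPI:
    """Monitoring and Assessment and SP mechanism Interworking."""
    def __init__(self, rms, peers):
        self.rms = rms          # this operator's RMS component
        self.peers = peers      # MASPIs of the co-operating networks

    def on_sp_request(self, request):
        status = self.acquire_network_status(request)                  # Section 3.3
        offers = [p.condition_and_offer(request) for p in self.peers]  # Section 3.4
        return self.rms.assign_traffic(request, status, offers)        # Section 3.5

    def acquire_network_status(self, request):
        return {"cell_load": {}}                 # placeholder measurements

    def condition_and_offer(self, request):
        return {"capacity": 0.0, "cost": 0.0}    # bandwidth availability and cost

class RMS:
    """Resource Management Strategies: decide reconfigurations."""
    def assign_traffic(self, request, status, offers):
        raise NotImplementedError                # optimisation of Section 3.5

class NES:
    """Network and Environment Simulator: off-line validation."""
    def validate(self, reconfiguration):
        return True                              # accept in this stub
```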
The management system components are distributed in each domain, specialised for handling the specific technology (UMTS, HIPERLAN-2 or DVB-T). However, these components can co-operate for handling SP requests and/or new environment conditions.
3 System Operation
A sample scenario according to which the above components collaborate is provided in Fig. 1. MASPI-U, RMS-U, and NES-U denote the components dedicated to the UMTS network (and similarly for the HIPERLAN-2 and DVB-T networks). The interactions with the NES-U, NES-H and NES-D components are omitted for simplicity.
[Fig. 1 depicts the message flow between the SPM and the MASPI and RMS components of the three networks (MASPI-U/H/D, RMS-U/H/D): 1a. SP Request; 2. Translation; 3. UMTS Network Status Acquisition; 4. Condition and Offer Request; 5a. Translated Request, UMTS Network Status, HL-2 Offer, DVB-T Offer; 5b. Traffic Assignment to Networks and Quality Levels; 5c. Reply (pattern for assignment of traffic to networks and quality levels); 1b. Reply to SP; 6. Acceptance Phase; 7. Network Resource Optimisation and Configuration Phase.]
Fig. 1. Sample operation scenario. The scenario shows how the components of the UMTS, HIPERLAN-2 and DVB-T network and service management system collaborate
As an example, the scenario is initiated from the UMTS network. The procedure would be similar if the MASPI component observed a new environment condition (e.g., an alteration in the traffic demand, mobility or interference levels) in the managed network. Alternatively, the process could have been initiated by a SP request towards the HIPERLAN-2 or DVB-T management system. The network that
receives a SP request or observes a new environment condition is called the originating network. The scenario consists of steps that can be roughly categorised into four phases. In the first phase (step 1), the SP issues a virtual network establishment request. In the second phase (steps 2-5), the request is processed by the UMTS, HIPERLAN-2 and DVB-T network and service management system (this includes translation of the SP request from a service to a network level view, network status acquisition in the originating network, conditions and offers from the co-operating networks, and traffic assignment to networks and quality levels). In the third phase, the proposed solution is accepted by the parties involved (SP and network operators, step 6). Finally, in the fourth phase, the managed systems are appropriately configured (step 7). These steps are addressed in detail in the following sub-sections.

3.1 Service Provider Request
MASPI supports the functionality of handling SP requests and the corresponding replies, as a general framework for the processing, establishment and maintenance of contracts (SLAs) with the SPs. A typical SP request has the following general structure (content). (i) It specifies the service (or set of services) whose provision the SP requests from the management system. (ii) It specifies the distinguishable user classes to which the service is offered; each user class is associated with specific quality levels, a user class profile and a terminal profile. The quality of service levels express the quality levels that are considered acceptable for the provision of services to users (subscribers) that belong to the specific user class. If there is more than one quality level, the SP may also provide the significance factor of each quality level for the given user class. The user class profile includes mobility and traffic characteristics of the users, and is described in a file with a specific format. The terminal profile assists the management system in knowing which networks can be used to satisfy the SP request (e.g., users of a specific user class may have terminals that support only the UMTS network). (iii) It specifies the number of subscribers that correspond to each service and user class. (iv) It includes information about the area (geographic region) to which the request applies, and the time zone, i.e., the time period during the day in which the service should be provided to the users of the specific user class. A data-structure sketch of such a request is given below.
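Purely as an illustration, items (i)-(iv) could be captured in a record such as the following; the field names are hypothetical, since the paper defines the content of the request, not its encoding.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class UserClass:
    name: str
    quality_levels: List[str]          # acceptable quality levels
    significance: Dict[str, float]     # significance factor per quality level
    profile_file: str                  # mobility and traffic characteristics
    terminal_profile: List[str]        # networks the terminals can use

@dataclass
class SPRequest:
    services: List[str]                        # (i) requested service(s)
    user_classes: List[UserClass]              # (ii) user classes
    subscribers: Dict[Tuple[str, str], int]    # (iii) per (service, user class)
    area: str                                  # (iv) geographic region
    time_zone: Tuple[int, int]                 # (iv) daily provision period (hours)
```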
3.2 Service Provider Request Translation
MASPI supports the functionality of translating the SP request from a service level view to a network level view. The information about the requested services and user classes (including user class profiles and quality of service levels) is used to investigate various options regarding the network load required to fulfil the request. In addition, the area information in the SP request can be exploited to detect the cells that will be affected by the SP request.
3.3 Network Status Acquisition
During this task the status of the originating network (e.g., the traffic carried by cells that can be affected by the SP request) is obtained. MASPI maintains interfaces with the underlying network and/or the network element management infrastructure. MASPI collects service level information based on the handled load, the provided performance (measured, e.g., by the blocking probability, the dropping probability or the delay), and the dedicated resources per service provider, service, and user class. Integrating this service level information in the management system enables network level provisioning of the managed infrastructure. In case of performance degradations, MASPI can initiate the procedure of the scenario described in Fig. 1.
3.4 Condition and Offer Request
Each MASPI is capable of requesting from the co-operating networks' MASPIs the amount of resources or the load that these networks can carry, as well as cost related information. Likewise, each MASPI is in a position to respond to such requests. This information (bandwidth availability and cost), as well as other information (e.g., the area and time zone for which the services should be provided), contributes to the decision on splitting the traffic between the three networks.
3.5 Traffic Assignment to Networks and Quality Levels
This is an optimisation problem, aimed at splitting the traffic among the three networks and assigning it to quality levels. Considering the scenario depicted in Fig. 1, this optimisation problem relies on the following input data:
- The translated SP request, which expresses the service demand per user class;
- The benefit deriving from the assignment of (portions of) the service demand to the several quality levels;
- The status of the UMTS, HIPERLAN-2 and DVB-T network segments that are to be affected by the request;
- The UMTS, HIPERLAN-2 and DVB-T offers, i.e., the cost that these networks will impose per quality level of the service.

The optimisation results in an allocation of the service demand to networks and quality levels. The allocation should optimise an objective function, which is associated with the amount of the service demand accommodated, the quality levels at which the service demand will be accommodated, and the benefit deriving from the assignment of service demand to high quality levels. The constraints of the optimisation problem fall into the following categories:
- The service demand should be assigned to acceptable quality levels;
- The capacity constraints (deriving from the UMTS network status, and the HIPERLAN-2 and DVB-T conditions and offers) should not be violated.

A minimal linear-programming sketch of this assignment problem is given below.
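The sketch only illustrates the shape of the problem. The paper does not give the exact objective or a solver; the two quality levels, the benefit and capacity numbers, and the use of scipy.optimize.linprog are assumptions made for illustration.

```python
# Assign service demand to (network, quality level) pairs so that total
# benefit is maximised, demand is fully served, and capacities are respected.
import numpy as np
from scipy.optimize import linprog

networks = ["UMTS", "HIPERLAN-2", "DVB-T"]
qualities = ["high", "low"]
demand = 10.0                                   # translated SP request (Mbps)
benefit = np.array([[5, 2], [4, 2], [3, 1]])    # benefit per unit of demand
capacity = np.array([4.0, 8.0, 6.0])            # residual capacity per network

c = -benefit.flatten()                          # linprog minimises, so negate
A_eq, b_eq = [np.ones(c.size)], [demand]        # all demand must be assigned
# One capacity row per network, summing its quality-level variables.
A_ub = [[1.0 if i // len(qualities) == n else 0.0 for i in range(c.size)]
        for n in range(len(networks))]
res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * c.size)
print(res.x.reshape(len(networks), len(qualities)))  # Mbps per network/quality
```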
3.6 Reply to Service Provider Request – Acceptance Phase
The MASPI component's reply to the initial SP request includes the quality levels to which the user classes are assigned, cost related information per user class, as well as the number of subscribers assigned to each network. This information is valid for the area and time zone specified in the SP request. The acceptance of this reply by the SP leads to the establishment of an SLA.
3.7 Network Resource Optimisation and Configuration
After the RMS decides on the traffic allocation to the three networks, and after the acceptance phase, the MASPI of the originating network makes a resource reservation request to the other networks' MASPIs, so that these systems can accommodate the assigned traffic. Thereafter, optimisation and configuration procedures take place among the three networks. RMS finds an optimal configuration of the managed network segments, so as to guarantee that the traffic assigned to them will be handled (carried) in the most cost-efficient manner. This part of the management system consists of a suite of tools and procedures that optimise functions including, for instance, cost and network performance criteria, under a set of constraints related to target QoS levels, resource utilisation, fault tolerance, etc.
4 Conclusions
This paper builds on the assumption that in the future, UMTS, HIPERLAN-2 and DVB-T can be three (co-operating) components of a composite radio infrastructure that offers wideband wireless access to broadband IP-based services. In this direction, the paper presented an approach to the overall UMTS, HIPERLAN-2 and DVB-T network and service management problem, addressing the operation of a system that deals with this problem. This work can be expanded with more information on the internal functionality of the components, details regarding the design choices followed, and indicative results obtained from case studies. The application of the management architecture in large-scale network test-beds is a future stage of our work.
References
1. U. Varshney, R. Vetter, "Emerging mobile and broadband wireless networks", Commun. of the ACM, Vol. 43, No. 6, June 2000
2. "Wideband CDMA", Feature topic in IEEE Commun. Mag., Vol. 36, No. 9, Sept. 1998
3. "IMT-2000: Standards effort of the ITU", Special issue on IEEE Personal Commun., Vol. 4, No. 4, Aug. 1997
4. Digital Video Broadcasting Web site, www.dvb.org, Jan. 2001
JESA Service Discovery Protocol
Efficient Service Discovery in Ad-Hoc Networks
Stephan Preuß
University of Rostock, Dept. of Computer Science, Chair for Information and Communication Services
mailto:[email protected]
http://wwwiuk.informatik.uni-rostock.de/staff/spr.html
Abstract. Pervasive computing requires management techniques allowing efficient service handling in volatile contexts. The Java Enhanced Service Architecture (JESA) is a service platform addressing this issue for resource limited devices. A major problem in dynamic service networks is the discovery of desired services. The JESA Service Discovery Protocol (JSDP), one of JESA’s core components, is a lightweight, platform independent service discovery protocol for ad-hoc networks. JSDP features transparent operation with or without central service brokers providing scalability from point-to-point connections to larger structured networks. Keywords: service discovery, pervasive computing, ad-hoc networking
Classification (CR 1998): C.2.2, C.2.3, C.2.4
1 Introduction and Motivation
In ad-hoc networking, client nodes enter an initially unknown territory in order to use network services. During the service discovery process, they gather information about their surrounding service context, avoiding the inflexible use of only well-known or preconfigured services. A discovery technology meeting a broad range of application areas is characterized by: scalability concerning service count and resource usage, platform independence concerning computing and network, low complexity for easy application development, and application transparency. The existing discovery technologies (e.g. SLP [1], Ninja SDS [2], SSDP [3], Jini [4], Salutation [5]) do not meet all of the above characteristics. In particular, the need for a service platform applicable to resource limited embedded or mobile devices as well as to desktop and server systems led to the development of the Java Enhanced Service Architecture (JESA) [6] – a lightweight, Java-based middleware for spontaneous service discovery and usage. It is mainly intended to be used in industry and home automation as well as mobile computing. One of its core components is the JESA Service Discovery Protocol (JSDP) – an efficient discovery mechanism offering operation modes of different complexity, applicable to a wide variety of device classes.
Supported by a grant of the Heinz Nixdorf Stiftung.
2 Service Discovery
The establishment of service relations in volatile network environments requires the nodes to become aware of their service context and context changes. The information necessary for using a specific service is gathered in the service discovery process. Service discovery comprises at least one of the following items: locating the service provider, acquiring additional service or provider information, retrieving the provider's access interface (proxy, stub, etc.). Service discovery can be classified according to the level of initial knowledge, the relation, the count, and the activity of the involved entities. The distinct discovery classes build a hierarchical structure as laid out in Fig. 1.

[Fig. 1. Service Discovery Categories – a hierarchy: Service Discovery splits into Preconfigured (with the Location-aware class) and Non-configured; Non-configured splits into Mediated (Non-transparent, Transparent) and Immediate (Active, Passive).]
Service discovery is split up into two top-level categories. Preconfigured discovering entities know either about the desired service provider or whom to ask for that information. In contrast, non-configured entities have no prior knowledge of the service context; this is the typical situation for ad-hoc networks. Further subdivision relates to the number of involved entities. The location-aware and immediate discovery classes are characterized by a direct relation between client and provider: the provider itself supplies the client with all necessary information. In mediated modes, service brokers deal discovery information on behalf of providers. At the bottom level, five discovery categories are distinguished. Location-aware clients know the logical location of their desired service, hence discovery reduces to getting additional information like the service access interface or service characteristics. In active mode, a client initiates a request-response procedure by broadcasting a request for a certain service; appropriate providers respond at least with the location data of their service. Passive mode, in contrast, releases clients from inquiry and obliges providers to announce their services. Both mediated modes work with central information brokers. For proper operation, providers have to register their service data with a broker. For that purpose, a broker may be treated as a special service provider: regular providers become clients of the broker for the registration period and have to discover the broker service by any of the discovery means discussed here. Transparent and non-transparent modes differ in the client's awareness of the broker. If a client intentionally uses a broker for finding services, it discovers non-transparently; whereas if the client believes it interacts directly with a provider but is in fact talking to a broker, it discovers transparently.
3 Related Technologies
There are a number of ad-hoc service platforms deploying different discovery schemes: SLP offers a comprehensive message based discovery system for TCP/IP networks; SSDP is the discovery protocol used in UPnP [7], mainly based on UDP, HTTP, and XML; the discovery protocol of the Ninja project, SDS, enables secure service discovery with an infrastructure of service, accounting, and certificate directories; Salutation provides generic network independent service discovery and access by the use of network and service managers; Jini features Java-based, broker-centric discovery and service usage with proxies.

Table 1. Discovery Classes Supported by Current Protocols.
             Location-aware  Non-transparent  Transparent  Active  Passive
SLP                                +                          +       +
SSDP                                                          +       +
SDS                                +                                  +
Salutation                         +
Jini                               +                          +       +
JSDP                              (+)              +          +       +
Table 1 presents an overview of the protocols’ discovery class conformance, according to the classification given in Section 2. Furthermore, it contains a forecast of the JSDP functionality.
4 JESA Service Discovery Protocol
JSDP has been designed as a lightweight discovery protocol for embedded and mobile systems. It provides generic functionality: locating service providers and retrieving service proxies and service attributes. Almost any discovery feature can be added by leveraging service attributes. Integral security mechanisms have been left out of the protocol to keep it small. A major goal is JSDP's ability to work transparently with or without a central service broker (see Section 2), thus enabling peer-to-peer communities of limited devices on the one side and large, scalable service networks on the other.
4.1 Discovery Strategies
JSDP incorporates three major discovery modes (see Fig. 2). The immediate ones (active, passive) are intended for short range discovery in the local network segment. The transparent mode enables long distance discovery across segment boundaries by using a broker hierarchy. Brokers may either forward service requests or share the registered service information. With the existing means, non-transparent discovery can be realized by special broker services having not only a registration interface but also a query interface; this architecture would be similar to Jini.
Fig. 2. JSDP Operation Modes: a) passive (Provider → Client: Announcement); b) active (Client → Provider: Request/Response); c) transparent (Provider → Broker: Registration; Client → Broker: Request/Response).
Service discovery is performed in two steps. In the first step, provider location information is gathered using the Provider Location Protocol (PLP). If a client decides to further examine or to use a certain service, in the second step, proxy and/or attribute information is retrieved by the Proxy/Attribute Request Protocol (PARP).
4.2 Provider Location Protocol
The Provider Location Protocol (PLP) can be used in request-response mode (active discovery) or in announcement mode (passive discovery) for acquiring service provider locations. For simplicity, it defines only a single message format for requests and announcements (see Fig. 3). Service queries and announcements contain only the service type, because this is the most important selection criterion. If a client needs more information about a service, it will continue with the attribute retrieval and come to a decision according to the service characteristics. The group list can be used to limit the service matches to certain administrative groups. A query will match a service if version and type are identical and the service is a member of at least one listed group. The ID list can contain a set of Universal Unique Identifiers (UUID) which specify services a client does not want to get announced. A provider fills in a single UUID in this list, which is used by clients for further transactions.

Fig. 3. Service Request / Announcement Message:
Version (Int32) | Sender URI (UTF8 String) | Type (UTF8 String) | ID Count (Int32) | ID List ([Int128]) | Group Count (Int32) | Group List ([UTF8 String])

Although PLP is designed in
a Java environment, it is not Java-specific. Hence, it can be used for service discovery in non-Java environments as well. The current PLP implementation is based on UDP. Requests and unsolicited announcements are delivered with UDP multicasts; alternatively, link-local broadcasts can be used. Solicited announcements (service responses) are sent by UDP unicast. The message flow for active discovery follows Fig. 2b. A client uses the sender URI field to tell potential responders where to send the answer. The sender URI field in the answer codes the continuation point for the next discovery steps, in fact where to contact the provider to get the attributes or the service access interface.
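As a sketch of the Fig. 3 layout, the following encodes a PLP message. The field order and types follow Fig. 3, but the byte order and the length-prefixing of strings are assumptions, since the exact wire encoding is not reproduced here.

```python
import struct

def put_string(buf: bytearray, s: str) -> None:
    data = s.encode("utf-8")
    buf += struct.pack(">i", len(data)) + data   # assumed length-prefixed UTF-8

def encode_plp(version, sender_uri, service_type, ids, groups) -> bytes:
    buf = bytearray()
    buf += struct.pack(">i", version)            # Version (Int32)
    put_string(buf, sender_uri)                  # Sender URI (UTF8 String)
    put_string(buf, service_type)                # Type (UTF8 String)
    buf += struct.pack(">i", len(ids))           # ID Count (Int32)
    for uuid in ids:                             # ID List ([Int128])
        buf += uuid.to_bytes(16, "big")
    buf += struct.pack(">i", len(groups))        # Group Count (Int32)
    for group in groups:                         # Group List ([UTF8 String])
        put_string(buf, group)
    return bytes(buf)

msg = encode_plp(1, "udp://192.168.0.7:4711", "printer", [], ["lab"])
```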
Transparent discovery mode uses the same message exchange procedure. From the client's point of view nothing changes. Providers have to register their service data (location information and access interface) with a broker and stop answering requests or announcing services. Brokers should be implemented as regular JESA services discoverable by JSDP means. The broker then answers requests for registered services and delivers their proxies or attributes on request. The unsolicited announcement of registered services (passive discovery) is possible but not encouraged.
4.3 Proxy/Attribute Request Protocol
After successful completion of PLP, a client knows about at least one provider's or broker's location and can fetch a service proxy or service attributes using the Proxy/Attribute Request Protocol (PARP). In contrast to PLP, PARP is more dedicated to a Java environment, because it ships serialized Java objects (JESA service proxies and attributes [8,6]) across the network. PARP uses a stream connection, currently TCP, to request and transmit service data.

Fig. 4. PARP Request:
Tag (int32) | Service UUID (Int128)

The Request message (see Fig. 4) indicates in the tag field whether proxy or
attribute list is to be transmitted. The service UUID field exactly identifies the target service and has been retrieved in a PLP response. In immediate modes, directly talking to a provider, the UUID would be almost superfluous. However, it is possible that in the time between PLP and PARP execution a certain provider leaves the community and a new one enters with the same logical location; here, the UUID avoids the delivery of incorrect service data. In transparent mode, using a broker, the UUID is used to select the appropriate information out of the pool of registered service data. If a service matching the UUID is available, service data according to the tag field will be returned to the client in the form of serialized Java objects. For robustness, the transparent operation mode can be split up: during PLP, the broker does not deliver its own URI for further discovery but that of the original provider. Hence, proxy or attributes will be fetched from the real provider, and successful completion of this step ensures a properly working provider. Clients should not cache service data over long periods and reuse them, because there is no guarantee that a service remains available and does not change its logical location or characteristics. For most cases a "discover-use-forget" strategy will be appropriate.
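A client-side sketch of this two-step, discover-use-forget flow is given below. The multicast address, port numbers, and message helpers are illustrative stand-ins, not JSDP's actual constants or formats.

```python
import socket

MCAST_ADDR, PLP_PORT, PARP_PORT = "239.255.42.42", 4711, 4712

def build_plp_request(service_type: str) -> bytes:
    return service_type.encode("utf-8")            # stand-in for the Fig. 3 format

def parse_plp_announcement(data: bytes):
    host, uuid = data.decode("utf-8").split("|")   # stand-in parsing
    return host, int(uuid)

def discover(service_type: str, timeout: float = 2.0):
    """PLP step: multicast a request, wait for one solicited announcement."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_plp_request(service_type), (MCAST_ADDR, PLP_PORT))
    try:
        reply, _ = sock.recvfrom(2048)             # unicast service response
        return parse_plp_announcement(reply)
    except socket.timeout:
        return None

def fetch_proxy(host: str, uuid: int) -> bytes:
    """PARP step: tag 0 requests the proxy; the UUID guards against churn."""
    with socket.create_connection((host, PARP_PORT)) as conn:
        conn.sendall((0).to_bytes(4, "big") + uuid.to_bytes(16, "big"))
        return conn.makefile("rb").read()          # serialized proxy object

hit = discover("printer")
if hit:
    proxy_bytes = fetch_proxy(*hit)                # use it, then forget it
```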
5 Conclusion and Future Work
JSDP has been developed as the integral discovery component of the Java Enhanced Service Architecture. It focuses on core discovery tasks and allows arbitrary extensions. By the seamless integration of immediate and mediated discovery modes, JSDP scales with the service count. The current implementation is fully functional on top of TCP/IP networks. JSDP itself comes with a memory footprint of about 30K; together with the NetObjects technology [8] for service execution, a properly working JESA system requires about 80K. JESA, and hence JSDP, require a Java 1.1 compliant runtime environment and will fit embedded or mobile systems running PersonalJava, Kaffe, Jeode, or J2ME-CDC. Ongoing developments will produce: 1) the Java Abstract Network (JANet), which decouples JESA from TCP/IP; a reference implementation will run with the CAN-bus; 2) an OSGi [9] interface which automatically transforms OSGi services into JESA services discoverable by JSDP; 3) an integration of the Bluetooth SDP [10] into JSDP, to avoid multiple discovery levels when applying JESA to Bluetooth devices.
References
1. Guttman, E., Perkins, C., Veizades, J., Day, M.: Service Location Protocol, Version 2. IETF RFC 2608 (1999)
2. Czerwinski, S.E., Zhao, B.Y., Hodes, T.D., Joseph, A.D., Katz, R.H.: An Architecture for a Secure Service Discovery Service. In: Proceedings of Mobicom 99, Seattle, Washington, USA, ACM (1999) 24–35
3. Goland, Y.Y., Cai, T., Leach, P., Gu, Y., Albright, S.: Simple Service Discovery Protocol/1.0. http://www.upnp.org/download/draft_cai_ssdp_v1_03.txt (1999)
4. Arnold, K., Wollrath, A., O'Sullivan, B., Scheifler, R., Waldo, J.: The Jini Specification. Addison-Wesley (1999)
5. Salutation Consortium Inc.: Salutation Architecture Specification V2.1. ftp://ftp.salutation.org/salute/s21a1a21.pdf (1998)
6. Preuß, S.: Java Enhanced Service Architecture. http://wwwiuk.informatik.uni-rostock.de/~spr/jesa/ (2001)
7. UPnP Forum: Universal Plug and Play Connects Smart Devices. http://www.upnp.org/ (1999)
8. Preuß, S.: NetObjects – Dynamische Proxy-Architektur für Jini. In: Proceedings of Net.ObjectDays 2000, Erfurt, c/o tranSIT GmbH (2000) 146–155
9. The Open Services Gateway Initiative: OSGi Service Gateway Specification. http://www.osgi.org/ (2000)
10. Bluetooth SIG: Core. In: Specification of the Bluetooth System. Volume 1. Bluetooth SIG (February 2001)
Performance Simulations of a QoS Aware Caching Method
Pertti Raatikainen 1, Mika Wikström 2, and Timo Hämäläinen 2
1 VTT Information Technology, Telecommunications, P.O. Box 1202, FIN-02044 VTT, Finland, [email protected]
2 University of Jyväskylä, Department of Mathematical Information Technology, P.O. Box 35, FIN-40351 Jyväskylä, Finland, [email protected], [email protected]

Abstract. Research of web-servers has recently addressed the problem of content distribution coupled with quality of service (QoS). Due to the explosive growth of services offered over the Internet, novel mechanisms are needed for IP based service delivery to scale in a client-transparent way. This paper addresses the above problem, considering also utilization of the available processing power of servers. Many caching systems developed so far dedicate a fixed portion of the processing power to higher QoS services, leading to lowered overall throughput of the server system. Here we introduce and simulate a QoS aware caching scheme that offers lower response delay for higher quality services and additionally optimizes utilization of the available processing power.
1. Introduction

Distribution of web-content to servers and arbitration of requests among a cluster of servers have been the focus of intense research. Location of content among the servers and service admission control are the major problems in server farm implementations. Since the same content can be located on several servers, an additional problem appears in maintaining an established connection to specific content on a known server throughout a session. A number of different schemes to locate content on servers and balance loading between them have been developed [1, 2, 3, 4, 5]. The most sophisticated ones, often called web-switches, are content aware and offer methods to maintain a connection to a given server all along a session [10]. The content based switching schemes enable categorization of connections, e.g., based on the requested service, the user or a combination of both. Good customers or accesses to certain high quality services may be directed to less loaded servers, enabling lower response delay. This causes skewing of the processing balance, and at worst some servers are overloaded while others remain lightly loaded. Optimum loading implies that the loading degree of each server can be kept equal. Caching combined with content aware switching is a technique that can be used to lower response delay for some customers or services while maintaining the loading balance between the servers. Since the number and size of web-files is normally large compared to the available cache size, a subset of web-pages can be located to cache
[5, 7, 8]. To maximize the number of cache hits when the cache capacity is exceeded, a number of caching algorithms have been developed to overwrite less frequently accessed pages with more frequently accessed ones [3, 6, 7]. This paper introduces a caching method that allows web-content to be distributed based on QoS requirements and simultaneously balances load among a cluster of servers to enable maximum utilization of the aggregate processing power. The objective in locating web-files to cache is to maximize the cache hit-rate and thereby minimize response delay. Section 2 introduces the caching algorithm and the related simulation model. Section 3 gives some simulation results and Section 4 concludes the paper.
2. QoS Aware Caching Scheme

The objective in developing the caching scheme was to improve cache system performance, measured in service response delay and utilization of the server system's processing power. Response delay is lowered by increasing the cache hit-rate, and utilization of processing power is optimized by locating content randomly on the servers. When the number of web-files is large, a random location policy leads to uniformly distributed load among the servers.

2.1 Caching Method

Each server in a cluster is supposed to have a hard disk and a cache memory of known size. The memory sizes, processing power and memory reading delays may vary from one server to another. Web-files stored on servers are categorized into a fixed number of QoS classes. In the general case, the number of files in those classes and the sizes of the files are different. The web-files are located randomly on the servers, and upon storing a file it is associated with one of the QoS classes. If the cache is small compared to the aggregate size of files in a server system, a predetermined percentage (pc) of files from each QoS class can be located to cache. The highest QoS classes have priority over the lower classes, and the most requested files of each QoS class are selected first. Files that cannot be stored in the cache are located on the hard disk. If the cache is large enough to store the fixed percentage pc of files from all QoS classes, then some of the leftovers can also be placed in the cache. The order in which they are stored follows the QoS class priorities and the request rate intensities of the files (a sketch of this placement rule is given below).

2.2 Simulation Model

The simulations are carried out by applying a generic cache simulation model introduced in [9]. It allows manipulation of a number of parameters that characterize the introduced QoS based caching scheme. Performance of the system can be studied by varying the cache sizes, the number and size of web-files, the number of QoS classes, the number of files in each QoS class, the file request rates and the processing power of the servers. Logically the model is divided into five decision-making blocks (see Fig. 1). At the top is a block which decides whether the next event is related to an incoming or outgoing request. A request coming from a client is considered an incoming one, and a request that has been processed and is being directed to a server an outgoing one.
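Before walking through the decision blocks, here is a minimal sketch of the Section 2.1 placement rule. The data model (files as dictionaries with size, QoS class and request rate) is an assumption made for illustration, not the tool's actual structure.

```python
def place_files(files, cache_size, pc=0.8):
    """files: dicts with 'size', 'qos' (1 = highest class) and 'rate'."""
    used = 0
    def try_cache(f):
        nonlocal used
        if not f.get("cached") and used + f["size"] <= cache_size:
            f["cached"] = True
            used += f["size"]
    # Group files per QoS class, most requested first within each class.
    by_class = {}
    for f in sorted(files, key=lambda f: -f["rate"]):
        by_class.setdefault(f["qos"], []).append(f)
    # First pass: a share pc of every class, highest classes first.
    for qos in sorted(by_class):
        for f in by_class[qos][:int(pc * len(by_class[qos]))]:
            try_cache(f)
    # Second pass: leftovers in the same priority and request-rate order.
    for qos in sorted(by_class):
        for f in by_class[qos]:
            try_cache(f)
    return [f for f in files if f.get("cached")]   # the rest stay on disk
```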
In the case of an incoming request, the algorithm first chooses the server that has the requested file. Different sorts of selection algorithms can be implemented, based on the selected file location policy. Then the algorithm checks whether the selected server is available for service. If it is not available, the request is put into a queue where it will stay until the algorithm enters the outgoing leg. If the server is available, the simulation enters the block which checks whether the requested file is in the cache. Different policies can be used in placing files to the cache; here, priority is given to files of the highest QoS classes. If the requested file is in the cache, it is read and marked as the most recently accessed one. If the file is not found in the cache, it must be on the hard disk and the simulation continues to the 'Read file' block. The file is read from the disk and the deployed caching scheme decides how to proceed (store it to the cache or leave it on the disk). This block allows the use of different caching methods and comparison of their performance. If in the topmost block the outgoing leg is chosen, then the server where the file is being served is located and the file is removed from service. After that, it is checked whether there is a queue for that particular service. If there is no queue, the model starts another iteration round, i.e., it starts to process the next event. If there is a queue for that service, then the first request in the queue is serviced. Furthermore, the server system keeps a record of all files, their QoS classes, the file locations in the server system, the sizes of the caches and their degree of fullness. Access rates of the files also need to be recorded to enable request rate based location of files when the servers are running out of cache memory.
Fig. 1. Flow chart of the simulation model. [Decision blocks: choose next event (incoming/outgoing); choose server; server available for request?; requested file in cache?; read file (from cache, or from disk with optional caching); queue handling for busy servers.]
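The flow chart can be skeletonized as in the following sketch; the event representation and the locate function are assumptions about the model's structure, not the tool's actual code.

```python
import heapq

class Server:
    def __init__(self, cached_files, capacity=1):
        self.cache = set(cached_files)
        self.active, self.capacity, self.queue = 0, capacity, []
    def available(self):
        return self.active < self.capacity

def simulate(events, locate):
    """events: heap of (time, kind, filename); locate: filename -> Server."""
    while events:
        t, kind, name = heapq.heappop(events)    # choose next event
        srv = locate(name)                       # choose server holding the file
        if kind == "incoming":
            if not srv.available():
                srv.queue.append(name)           # wait until the server frees up
            else:
                srv.active += 1
                if name not in srv.cache:
                    pass                         # read from disk; the caching
                                                 # policy decides on promotion
        else:                                    # outgoing: service completed
            srv.active -= 1
            if srv.queue:
                srv.queue.pop(0)
                srv.active += 1                  # serve the first queued request
```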
3. Simulation Examples

To demonstrate the performance of the developed caching system, simulation results for a system of three servers are presented. When a new file is inserted into the caching system, it is associated with one of four possible QoS classes, and it is given a size that belongs to one of three possible categories: 1, 5 or 10 kilobytes (kb). Ten per cent (10 %) of the files belong to the highest QoS class (QoS1), 20 % to the second highest (QoS2), 30 % to the next (QoS3) and the remaining 40 % to the lowest class (QoS4).
The simulated files were divided into two equally large groups based on their mean request rate; the higher rate was twice the lower one. Request rate intensities were exponentially distributed. In order to keep the simulation times reasonable, the simulated system had only 300 files. These were located randomly on the servers, giving approximately 100 files (about 550 kb) per server. The objective was to study the performance of the proposed caching system by analyzing the variation of the cache hit rates of the different QoS classes as a function of the cache size, and thereby estimate the system response time. The cache sizes were varied between 0 and 550 kb, while the number of files and their sizes on each server were kept fixed. As a comparison, the performance of a non-QoS aware caching scheme, simulated in [9], was analyzed. The configuration set-up of the non-QoS aware system and the number of files, their sizes, request rates and locations were identical with those of the simulations of the QoS aware system. The only difference was the applied caching method, which did not account for QoS. Instead, it exercised a first-in-first-out (FIFO) discipline. At the start of a simulation, files on each server were randomly located to the cache and the hard disk. The cache performed like a FIFO memory in which the most recently requested files were located at the end and less recently requested ones at the head of the FIFO queue. Each time a file (either on hard disk or in cache) was requested, it was moved to the end of the FIFO. Files already in the cache were shifted towards the head of the FIFO queue. When the FIFO was full, the file at the head of the queue was removed to the hard disk. Figs. 2 and 3 illustrate the performance of the proposed QoS aware caching scheme when the proportion of files (of each QoS class) that could be located to the cache was 80 % (pc = 0.8) and 100 % (pc = 1.0). Fig. 4 shows the corresponding results for the non-QoS aware system, and Fig. 5 demonstrates the average response delay performance of these three simulated cases. The horizontal axis in all these figures gives the size of the cache as a percentage of the aggregate size of all files in the system. In Figs. 2 to 4, the vertical axis gives the proportion of cache hits as a percentage of the total number of simulated file requests. In Fig. 5 the vertical axis gives the response delay normalized to the file reading delay from hard disk; the reading delay from hard disk was assumed to be ten times that from cache. Figs. 2 and 3 show that the cache hit rates of the different QoS classes follow the prefixed priorities quite nicely. The highest QoS classes reach the 100 % cache hit rate limit faster in Fig. 3 than in Fig. 2. The reason for this is that the system in Fig. 2 can store only 80 % of the files of the highest QoS classes in the cache when the cache size is small; the rest of the highest QoS class files can be located to the cache only if the cache is large enough to include more than 80 % of the files of all the QoS classes. When comparing the curves in Figs. 2 and 3 with those in Fig. 4, it is obvious that the proposed caching method is capable of supporting QoS. The non-QoS aware system does not show much difference between the QoS classes. The reason for the slightly differing curves is that the simulation tool assigned a QoS class, file size and request rate group randomly to every file; thus the total size of files in each QoS class and the sizes of the two request rate groups were not exactly the ones given above.
The average response delay (see Fig. 5) of the non-QoS aware system was always better than that of the QoS aware system. The reason for this is that in the QoS aware system, files of the highest QoS classes have (regardless of their request intensity)
priority over the lower QoS class files when locating files to cache. This lowers the average cache hit rate and lengthens the average response delay. Simulations have shown that the performance gap between the QoS and non-QoS aware systems can be reduced by decreasing the difference between the lowest and highest request rate values or by decreasing the value of pc. By allowing pc to change step by step with the size of the cache, it is possible to share the cache memory fairly between the QoS classes, offer a relatively good system level response delay and still maintain QoS awareness.
Fig. 2. Cache hit-rates of QoS aware system (pc = 0.8). [Hit-rate [%] vs. cache size/total size of files [%]; curves: QoS1, QoS2, QoS3, QoS4, Avg.]

Fig. 3. Cache hit-rates of QoS aware system (pc = 1.0). [Same axes and curves as Fig. 2.]

Fig. 4. Hit-rates of non-QoS aware system. [Same axes and curves as Fig. 2.]

Fig. 5. Response delays of simulated systems. [Normalized response delay vs. cache size/total size of files [%]; curves: Non-QoS simu, QoS simu 1, QoS simu 2.]
4. Conclusions

This paper presents a quality of service based caching scheme that supports different QoS classes, offering shorter response delay for higher QoS class requests than for lower class ones. The processing power of the server system is utilized effectively by locating web-files randomly on the different servers, thus dividing the processing load uniformly among the servers. The developed caching scheme was modeled by a generic simulation tool, which had previously been developed by the authors and was here used to evaluate the performance of the QoS aware caching system. The model includes a number of adjustable
parameters that can be varied to find optimal performance in different simulation cases. As a comparison, a non-QoS aware system was also modeled to find out the possible pros and cons of the developed QoS aware scheme. The simulations carried out showed that the QoS aware caching method is clearly able to support different QoS classes. The only drawback was found in the system level cache hit rate: since the highest QoS class files were located first to the cache memories, the cache also included less frequently requested files, and the average cache hit rate was found to be lower than in the comparative non-QoS aware system. However, the discovered difference was acceptable. The average response delay can be lowered by decreasing the portion of the highest QoS class files that can be located to the cache memory, thus giving room for the lower QoS class files. It remains for further study to enhance the developed caching scheme with a more efficient content distribution algorithm. The objective is to add some feedback to the algorithm and let the allocation parameters be adjusted dynamically to respond better to changes in the servers' conditions.
References
[1] Blaze M., Alfonso R.: Dynamic Hierarchical Caching in Large-Scale Distributed File Systems. In: Proceedings of International Conference on Distributed Computing Systems, Yokohama (Japan), June 1992, pp. 521-528.
[2] Dahlin M.D., Wang R., Anderson T.E., Patterson D.: Cooperative caching: Using remote client memory to improve file system performance. In: Proceedings of Operating Systems Design and Implementation Symposium, Monterey (USA), Nov. 1994, pp. 267-280.
[3] Dan A., Towsley D.: An approximate analysis of the LRU and FIFO buffer replacement schemes. In: ACM SIGMETRICS, May 1990, pp. 143-152.
[4] Feeley M., Morgan W., Pighin F., Karlin A., Levy H., Thekkath C.: Implementing global memory management in a workstation cluster. In: Proceedings of the 15th ACM Symposium on Operating Systems Principles, Colorado (USA), Dec. 1995, pp. 201-212.
[5] Patterson R.H., Gibson G.A., Ginting E., Stodolsky D., Zelenka D.: Informed Prefetching and Caching. In: Proceedings of the 15th ACM Symposium on Operating System Principles, Colorado (USA), Dec. 1995, pp. 79-95.
[6] Chou H., DeWitt D.: An evaluation of buffer management strategies for relational database systems. In: Proceedings of the 11th VLDB Conference, Stockholm (Sweden), August 1985, pp. 127-141.
[7] O'Neil E.J., O'Neil P.E., Weikum G.: The LRU-k page replacement algorithm for database disk buffering. In: Proceedings of International Conference on Management of Data, Washington D.C. (USA), May 1993, pp. 297-306.
[8] Cao P., Felten E.W., Li K.: Application-controlled file caching policies. In: Proceedings of 1994 Usenix Summer Technical Conference, June 1994, pp. 171-182.
[9] Hämäläinen T., Wikström M., Raatikainen P.: A Simulation Model for Studying of Caching Algorithms. In: Proceedings of International Conferences on Info-tech and Info-net (ICII 2001), Beijing (China), Nov. 2001, pp. 599-604.
[10] Apostopoulos G., Aubespin D., Peris V., Pradhan P., Saha D.: Design, Implementation and Performance of a Content-Based Switch. In: Proceedings of INFOCOM 2000, IEEE, pp. 1117-1126.
Call Admission Control for Multimedia Cellular Networks Using Neuro-dynamic Programming
Sidi-Mohammed Senouci 1, André-Luc Beylot 2, and Guy Pujolle 1
1 Laboratoire LIP6, Université de Paris VI, 8, rue du Capitaine Scott, 75015 Paris – France, {Sidi-Mohammed.Senouci, Guy.Pujolle}@lip6.fr
2 ENSEEIHT - IRIT/TeSA Lab, 2, rue C. Camichel - BP7122, F-31071 Toulouse Cedex 7 - France, [email protected]
Abstract. We consider, in this paper, the call admission control (CAC) problem in a multimedia cellular network that handles several classes of traffic with different resource requirements. The problem is formulated as a Semi-Markov Decision Process (SMDP) problem. It is too complex to allow for an exact solution, so we use a real-time neuro-dynamic programming (NDP) [reinforcement learning (RL)] algorithm to construct a dynamic call admission control policy. A broad set of experiments shows the robustness of our policies compared to classical solutions such as the Guard Channel scheme.
1 Introduction

The demand for mobile communications that provide reliable voice and data services has grown massively. The service area in these networks is partitioned into cells. Each cell is assigned a set of channels1. As a user moves from one cell to another (handoff), any active call needs to be allocated a channel in the destination cell. If the destination cell has no available channel, the call is aborted. One of the goals of the network designer is to keep the handoff blocking probability low. While this task is simple in a mono-class traffic framework, it is quite complicated in a multi-class context, where it is sometimes preferable to block a call of a less valuable class and to accept another call of a more valuable class. This paper proposes an alternative approach to solving the call admission control (CAC) problem in multimedia cellular networks, using the experience and knowledge that can be gained during real-time operation of the system. The optimal CAC policy is obtained through a form of reinforcement learning algorithm known as Q-learning [1]. This policy is able to reduce the blocking probability for handoff calls and also to generate higher revenues.
1 Channels could be frequencies, time slots or codes, depending on the radio access technique.
The rest of the paper is organized as follows. After the formulation of the CAC problem as an SMDP in Section 2, we detail the two different implementations of the Q-learning algorithm (TQ-CAC and NQ-CAC) that solve this SMDP in Section 3. Performance evaluation and numerical results are presented in Section 4. Finally, Section 5 summarizes the main contributions of this work.
2 Problem Description

We propose an alternative approach to solving the call admission control problem in a cellular network. The approach is based on the observation that CAC can be regarded as a Semi-Markov Decision Process (SMDP), and learning is one of the effective ways to find a solution to this problem [3], [4], [5], [6]. In dynamic programming, we assume that the learner agent exists in an environment described by a set of possible states S = {s1, s2, ..., sn}. It can perform any of the possible actions A = {a1, a2, ..., am} and receives a real-valued reward ri = r(si, ai) indicating the
immediate value of this state-action transition. For the CAC problem, we identify the system states s, the actions a and the associated rewards r as follows:
1. States: We consider two classes of traffic, C1 and C2, but the ideas in this paper can easily be extended to several classes of traffic. We define the state of the system as s = (x, e), where x = (x1, x2) and x1 and x2 are the numbers of calls of each class of traffic (C1 and C2, respectively) in the cell. We do not take into account the states associated with a call departure, because no action needs to be taken there. The event component is e ∈ {e1 = arrival of a new C1 call, e2 = arrival of a new C2 call, e3 = arrival of a C1 handoff call, e4 = arrival of a C2 handoff call}.
2. Actions: Applying an action is to accept or reject the call: a ∈ {1 = accept, 0 = reject}.
3. Rewards: The reward r(s, a) assesses the immediate payoff incurred by accepting a call in state s. We set the reward parameters for each class of traffic as shown in Table 1; to prioritize handoff calls, larger reward values have been chosen for them:

$r(s, a) = \eta_i$ if $a = 1$ and $e = e_i$; $r(s, a) = 0$ otherwise.

Table 1. Immediate Rewards

    η1   η2   η3   η4
    5    1    50   10
This system constitutes an SMDP with a finite state space S = {(x, e)} and a finite action space A = {0, 1}. To solve this SMDP, a particular learning paradigm has been adopted, known as reinforcement learning (RL). There exists a variety of RL algorithms. A particular algorithm that appears to be suitable for the CAC task is called Q-learning [1]. The task of the agent is to learn a policy π : S → A, for selecting its next action a_t = π(s_t) based on the current state s_t, that maximizes the long-term revenue/utility.
For a policy π, the state-action value $Q^\pi(s, a)$ (named the Q-value) is the expected discounted reward for executing action a at state s and then following policy π thereafter. The Q-learning process tries to find the optimal Q-values in a recursive manner. The Q-learning rule is

$Q_{t+1}(s, a) = Q_t(s, a) + \alpha_t \Delta Q_t(s, a)$ if $s = s_t$ and $a = a_t$, and $Q_{t+1}(s, a) = Q_t(s, a)$ otherwise,   (1)

where

$\Delta Q_t(s, a) = r_t + \gamma \max_b [Q_t(s'_t, b)] - Q_t(s, a)$.   (2)
3 Algorithm Implementation

After the specification of the states, actions and rewards, let us describe the two on-line implementations of the Q-learning algorithm that solve the CAC problem (TQ-CAC and NQ-CAC). TQ-CAC uses a lookup table to represent the Q-values; in contrast, NQ-CAC uses a multi-layer neural network. Function approximators such as neural networks are used when the input space consisting of state-action pairs is large or the input variables are continuous. When there is a call arrival (new or handoff call), the algorithms determine the action according to

$a = \arg\max_{a \in A(s) = \{0,1\}} Q^*(s, a)$.   (3)
In particular, (3) implies the following procedure. When a call arrives, the Q-value of accepting the call and the Q-value of rejecting the call are determined. If rejection has the higher value, the call is dropped; otherwise, the call is accepted. In both cases, to learn the optimal Q-values Q*(s, a), the value function is updated at each transition from state s to s' under action a, for the two algorithms, as follows:
1. TQ-CAC: (1) is used to update the appropriate Q-value in the lookup table.
2. NQ-CAC: In this case, ∆Q defined in (2) serves as an error signal which is back-propagated in the back-propagation (BP) algorithm [1].
We compare our policies with the greedy policy2 and with the Guard Channel mechanism [2]. The number of guard channels is determined for each traffic period and each traffic class. The guard channel mechanism is characterized by a vector s of thresholds, s = (s1, s2, ..., sK), where K is the number of classes of traffic. In the present paper, an exact numerical solution has been derived: to determine the optimal vector s*, all the configurations s for which s1 ≤ s2 ≤ ... ≤ sK = N, where N is the number of channels in the cell, were investigated.
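As an illustration of how decision rule (3) and update rules (1)-(2) fit together in TQ-CAC, here is a minimal table-based sketch; the state encoding, parameter values and function names are ours, not the paper's.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95                  # assumed learning rate and discount
REWARDS = {1: 5, 2: 1, 3: 50, 4: 10}      # eta_i values from Table 1
Q = defaultdict(float)                    # lookup table Q[(state, action)]

def admit(state):
    # Equation (3): choose the action with the larger Q-value.
    return max((0, 1), key=lambda a: Q[(state, a)])

def update(s, a, reward, s_next):
    # Equations (1)-(2): update only the visited state-action pair.
    best_next = max(Q[(s_next, 0)], Q[(s_next, 1)])
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

# One transition: state = (x1, x2, event), event 3 = C1 handoff arrival.
s = (3, 2, 3)
a = admit(s)
r = REWARDS[3] if a == 1 else 0
update(s, a, r, (4, 2, 1))
```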
2 Policy that always accepts a call if the capacity constraint will not be violated.
4 Simulation In order to evaluate the benefits of our call admission control algorithms, we simulate a mobile communication system using a discrete event simulation. We consider a fixed channel assignment (FCA) system with N=24 channels in each cell. The performance of the algorithms has been evaluated on the basis of the total rewards of the accepted calls (Total rewards), the total rewards of the rejected calls (Total Lost Rewards), and by measuring the handoff blocking probability. A set of simulations was carried out, including the cases of traffic load varying, and time-varying traffic load. The experimental results are shown in Fig. 1 through Fig. 3. The results show that the reinforcement learning is a good solution for the call admission control problem. The proposed algorithms are considerably powerful compared to the greedy and to the guard channel schemes. In all cases the lost rewards due to rejection of customers and blocking probability of handoff calls are significantly reduced. The total rewards due to acceptance of customers is also significantly increased. The Q-values were first learned during a training period with a constant traffic load for both C1 and C2. The parameters used in the simulation are given in Table 1 and 2. Table 2. Experimental Parameters
Table 2. Experimental parameters

Parameter            C1                        C2
Number of channels   1                         2
Call holding time    40 s                      40 s
Call arrival rate    λ1 = 180 calls/hour       λ2 = λ1/2 = 90 calls/hour
4.1 Varying Traffic Load

In this case we used the same policy learned during the training period, but under six different traffic load conditions (for both classes C1 and C2). Fig. 1 and Fig. 2 show that the proposed algorithms yield significant gains over the alternative heuristics for all considered traffic loads, especially when the traffic load is heavy. TQ-CAC achieves significantly better results than NQ-CAC, because NQ-CAC represents the Q-values with a neural network, which needs more time to converge. We also compare our algorithms' results with those obtained with: (1) the guard channel with fixed thresholds, calculated for the traffic load given in Table 2; and (2) the guard channel with optimized thresholds, where the best thresholds are derived for each input traffic load value.
Fig. 1. (a) Total rewards per hour (b) Total lost rewards per hour (totals ×10² versus traffic load in calls/hour for C1/C2; curves: Greedy, TQ-CAC, NQ-CAC, Fixed Thresh., Optimized Thresh.)
Fig. 2. Handoff blocking with six different traffic loads (handoff blocking probability versus traffic load in calls/hour; curves: Greedy, TRL-CAC, NRL-CAC)
This clearly illustrates that TQ-CAC and NQ-CAC have the potential to significantly improve system performance over a broad range of network loads. We notice in Fig. 1 that the optimal threshold method performs better than Q-learning, but it requires the optimal values to be computed off-line for each traffic load. In contrast, it is interesting to observe that neither the TQ-CAC table nor the NQ-CAC neural network was relearned or retrained for each traffic load, indicating that the system possesses some generalization and adaptability capabilities.

4.2 Time-Varying Traffic Load

The traffic load in a cellular system is typically time-varying. In this case, we use the same policy learned during the training period, but over a typical 24-hour business day with peak hours at 11:00 a.m. and 4:00 p.m. Fig. 3 gives the simulation results under the assumption that the two traffic classes follow the same time-varying pattern. The blocking probabilities were calculated on an hour-by-hour basis. The improvement of the proposed reinforcement learning algorithms over the greedy policy is apparent, especially when the traffic is heavy.
Fig. 3. Performance with time-varying traffic load (handoff blocking probability over the hours of a 24-h day; curves: Greedy, TRL-CAC, NRL-CAC)
5 Conclusion

In this paper, we presented a new approach to the problem of call admission control in a cellular multimedia network. We formulated the problem as a dynamic programming problem (SMDP), but with a very large state space; the solutions are obtained using a self-learning scheme based on the Q-learning algorithm. The benefits of this method can be summarized as follows. First, the learning approach provides a simple way to obtain a solution in cases where an exact solution can be very difficult to find using traditional methods. Second, compared to other schemes like the guard channel, the system offers a generalization capability: unforeseen events due to significant variations in the environment can be treated as new experience that improves its adaptation. Third, the acceptance policy can be determined with very little computational effort. It was also shown that the proposed CAC algorithms result in significant savings.
References
1. T. M. Mitchell, Machine Learning, McGraw-Hill, 1997.
2. C. H. Yoon and C. K. Un, "Performance of personal portable radio telephone systems with and without guard channels", IEEE Journal on Selected Areas in Communications, vol. 11, pp. 911-917, August 1993.
3. P. Marbach, O. Mihatsch, and J. N. Tsitsiklis, "Call admission control and routing in integrated services networks using neuro-dynamic programming", IEEE Journal on Selected Areas in Communications, vol. 18, no. 2, pp. 197-208, Feb. 2000.
4. H. Tong and T. X. Brown, "Adaptive Call Admission Control under Quality of Service Constraints: a Reinforcement Learning Solution", IEEE Journal on Selected Areas in Communications, vol. 18, no. 2, pp. 209-221, Feb. 2000.
5. R. Ramjee, R. Nagarajan, and D. Towsley, "On Optimal Call Admission Control in Cellular Networks", IEEE INFOCOM, pp. 43-50, San Francisco, CA, Mar. 1996.
6. S. Senouci, A.-L. Beylot, and G. Pujolle, "A dynamic Q-learning-based call admission control for multimedia cellular networks", IEEE International Conference on Mobile and Wireless Communications Networks (MWCN 2001), pp. 37-43, Recife, Brazil, Aug. 2001.
Aspects of AMnet Signaling
Anke Speer, Marcus Schöller, Thomas Fuhrmann, and Martina Zitterbart
Universität Karlsruhe, Germany
Abstract. AMnet provides a framework for flexible and rapid service creation. It is based on Programmable Networking technologies and uses active nodes (AMnodes) within the network for the provision of individual, application-specific services. To this end, these AMnodes execute service modules that are loadable on-demand and enhance the functionality of intermediate systems without the need of long global standardization processes. Placing application-dedicated functionality within the network requires a flexible signaling protocol to discover and announce as well as to establish and maintain the corresponding services. AMnet Signaling was developed for this purpose and will be presented in detail within this paper. Keywords: Programmable Networks, Active Nodes, Multicasting, Signaling
1 Introduction
Many new and evolving Internet applications are based on one-to-many or many-to-many communication, e.g., tele-teaching, tele-collaboration, information dissemination through push technologies, and web radio. IP multicast [2] efficiently supports this type of transmission with a receiver-oriented concept: receivers join a particular multicast session group, and traffic is delivered to all members of that group by the network infrastructure. A challenge that comes with this type of communication is the possible heterogeneity in the group members' service requirements. These may vary depending on the individually available performance of the network access, the type of end system being used, the willingness to pay a higher price for better quality of service, and the like. Today, most approaches to heterogeneous group communication adjust the data stream provided to all group members according to the group member with the lowest performance. This, however, is not desirable for many multimedia or distributed applications (e.g., video conferencing, gaming). Besides group communication applications, other Internet applications also benefit from additional network support (e.g., management or control facilities). However, introducing new functionality into the network has to keep pace with newly evolving applications in order to realize proper communication support promptly. Unfortunately, progress in supporting new network functionality is very slow
because the current network infrastructure is inflexible. The introduction of new services and protocols to enhance network functionality typically requires long global standardization processes. AMnet addresses rapid service creation in the context of heterogeneous group communication, allowing the flexible and rapid introduction of new functionality in global networks. AMnet is based on Programmable Networking technologies and aims at building an implicit overlay network on top of the existing IP infrastructure to meet application-specific requirements. According to the Programmable Networking approach, so-called service modules are installed on active intermediate nodes – the AMnodes. AMnodes form the core building blocks of AMnet and operate on the multicast distribution tree used for the communication between sender and receivers. Service modules are responsible for the adaptation of data streams to specific service demands [5]. This paper is structured as follows: the next section presents the inter-domain signaling protocol developed for AMnet in detail. Section 2.1 describes the management of different services within AMnet, followed by the mechanisms for establishing and maintaining these services in Section 2.2. Section 2.3 presents the way a receiver is provided with its dedicated services. Section 2.4 lays special focus on the new concepts developed for evaluating AMnodes to determine their capabilities as possible service providers. The paper closes with a summary and an outlook on future work.
2 Concepts of AMnet Signaling
AMnet aims at dynamically placing application-specific functionality within the network. To this end, several questions have to be answered: how should different services be managed within a session, how should they be established and maintained, how should a receiver be associated with a dedicated service, and where should those services be placed? In this context, a session describes a communication scenario in which a designated sender issues a data stream that can be received by several communication participants, either directly or after adaptation in the AMnodes. To answer these questions, a flexible and lightweight signaling protocol for AMnet was developed [6], which is presented in the following sections.

2.1 Management of Services
Service heterogeneity within a session needs to be bound to a manageable degree of diversity. Therefore, one concept of AMnet signaling is to logically group receivers with similar service demands into distinct multicast receiver groups – the service level groups – distinguishable by their multicast addresses. The receivers join the corresponding group on demand through IGMP [3]. Each service level group within a communication session represents all receivers whose service demands can be satisfied with a single group service.
1216
A. Speer et al.
Therefore, each group represents a different view onto the same original data, corresponding to the adaptation performed by the AMnodes. The service of a group is supported by an AMnode through the use of appropriate service modules. The actual service is then derived either from processing the original data stream (cf., AMnode 2 in Figure 1) or from the service of another service level group (cf., AMnode 3 in Figure 1). The communication service offered by AMnet can therefore be described by a tree of service level groups (cf., Figure 1).

Fig. 1. Multicast Tree with Service Level Groups (sender/IP router, AMnodes, receivers; the original multicast stream is successively adapted and fed into the service level groups)
2.2 Establishment and Maintenance of Services
Service modules are held in distributed databases – the service module repositories – which are administratively managed per domain. Service modules can be stored there by trusted AMnet users or network management procedures. Moreover, current work focuses on establishing a hierarchy of trusted repositories. The stored service modules are grouped into module classes, such as audio transcoding or reliable multicast, and for each module class there exists a distinct evaluation procedure to be downloaded within the evaluation process (cf., Section 2.4). The overall purpose of the repositories is therefore to make service modules and their corresponding evaluation procedures available to an AMnode that is requested to provide a specific service (cf., Figure 2 (4)). Besides multicasting the original data stream of the session (cf., Figure 2 (1)), the sender announces the provided session on a separate multicast group – the session control group (cf., Figure 2 (1a)). In this group, every AMnet session is announced, similarly to the Session Announcement Protocol of the MBone [4]. The session announcement contains a description of the session, including bandwidth and delay requirements as well as content-specific information like data format and compression scheme. This description is used by potential session participants to determine in which way the original data stream has to be adapted by the AMnodes in order to receive it at the desired service level.
Aspects of AMnet Signaling
1217
Moreover, the description contains the multicast address of the original data stream and the multicast address of the service announcement group. In this group, the AMnodes announce the provided services (cf., Figure 2 (5)). Potential session participants join the service announcement group to learn about available services (cf., Figure 2 (3)), i.e., the description of the provided service as well as the address of the corresponding service level group to which the adapted data is sent. Moreover, during multicast distribution, the address of each AMnode traversed by the announcement on its way from the sender to the receiver is included. This information is used for the evaluation process described in Section 2.4. Service modules on an AMnode are maintained in a soft state. After joining a service level group, a participant periodically sends HELLO messages to the AMnode hosting the appropriate service module. If no HELLO messages are received for a given time, the AMnode makes the service module stop issuing data into the corresponding service level group. However, service modules are not immediately deleted from the AMnodes but cached, in case the service is requested again shortly afterwards. The soft state of the service modules prevents service level groups that are no longer used by any participant from persisting, while the caching strategy avoids the unnecessary overhead of re-loading and re-installation.
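As an illustration of this soft-state maintenance, the sketch below models the HELLO timer and caching behaviour of one service module on an AMnode. The timeout value and the class interface are our own assumptions; the paper does not specify them.

```python
import time

HELLO_TIMEOUT = 30.0   # seconds without HELLOs before the module is paused (assumed)

class ServiceModuleState:
    """Soft-state bookkeeping for one service level group on an AMnode."""
    def __init__(self):
        self.last_hello = time.monotonic()
        self.active = True     # currently issuing data into the service level group
        self.cached = False    # kept installed for quick re-activation

    def on_hello(self):
        """A participant refreshed the soft state."""
        self.last_hello = time.monotonic()
        if not self.active:    # re-activate a cached module without re-loading it
            self.active, self.cached = True, False

    def tick(self):
        """Periodic check: stop sending but keep the module cached on timeout."""
        if self.active and time.monotonic() - self.last_hello > HELLO_TIMEOUT:
            self.active, self.cached = False, True
```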
Fig. 2. Overview of the AMnet Service Control ((1) data stream, (1a) session announcement on the session control group, (2)-(3) receiver joins, (4) service module download from the service module repository, (5) service announcement on the service announcement group, (6a)-(6b) delivery of the adapted data stream)
2.3 Association between Receiver and Service
A receiver that wants a special service to adapt the data stream to its requirements processes the service announcements. If one of the announcements matches the receiver's requirements, it simply joins the corresponding service level group to which the adapted data stream is sent. Otherwise, if no matching service is announced, an evaluation process has to be started (cf., Section 2.4). Within this process, AMnodes on the data path
between the sender and the receiver are analyzed to determine whether they are capable of providing the desired service. The AMnodes on the data path are known from the corresponding service announcements of the sender providing the data stream the receiver wants adapted (cf., Section 2.2). Even if no additional service is advertised in the service announcement group, at least the service of the session sender providing the original data stream is announced. The evaluation process results in the address of an AMnode that is considered a good place for supporting the service. The appropriate service module is then downloaded from the service module repository onto this AMnode (cf., Figure 2 (4)). For security reasons, service modules are signed, and only modules with a correct signature are installed on an AMnode. The newly installed service is announced in the service announcement group (cf., Figure 2 (5)), and the receiver can simply join the corresponding service level group. It then receives the data stream adapted according to its requirements (cf., Figure 2 (6a)-(6b)).

2.4 The Evaluation Process
In the original approach to the evaluation process for AMnet [6], the intra-domain evaluation of the AMnodes was realized with active evaluation packets corresponding to the capsules introduced in the Active Networking context [1]. These capsules contained an evaluation program downloaded from the service module repository and initialized by the receiver. This approach, however, was not usable in the context of inter-domain signaling because of security considerations. Therefore, a new approach was developed. Now, the receiver only has to issue a service request to its predecessor AMnode (cf., Figure 3), known from the path information of the service announcements. The service request contains the class of service the receiver wants (e.g., audio transcoding) and the path information. Moreover, specific parameters can be included; in the case of audio transcoding, these may be the maximum data rate the receiver is able to process, the data stream formats the receiver can understand, and so on. The first AMnode that is able to process the receiver's service request contacts the service module repository and downloads the evaluation procedure corresponding to the module class the desired service belongs to. The AMnode is analyzed with this evaluation procedure. The results of the local evaluation, the address of the local AMnode, and the identifier of the used evaluation procedure are packed into an evaluation packet that is forwarded to the next AMnode (cf., Figure 3) known from the path information contained in the service request. This path information is included in the packet as well, together with the address of the first AMnode that processed the receiver's service request and started the whole evaluation process. An AMnode that receives an evaluation packet contacts its responsible service module repository to download the specified evaluation procedure. The AMnode will contact the same service module repository to download a requested service. Therefore, if the specified procedure is not included in the contacted service module repository, the AMnode will not be able to support the desired
Aspects of AMnet Signaling
1219
service at all, because a service class and its corresponding evaluation procedure are always stored as one entity in a repository entry. In that case, the evaluation packet is simply forwarded unchanged to the next AMnode on the path towards the session sender. However, if the contacted service module repository contains the specified evaluation procedure, it is downloaded and executed on the AMnode as described above. Evaluation results are only entered into the evaluation packet if the local AMnode fulfills the given requirements better than the AMnode already registered in the packet. After the evaluation procedure has finished on the local AMnode, the evaluation packet is again forwarded to the next AMnode. This process stops at the last AMnode in front of the session sender. After its evaluation, the final evaluation packet is sent back to the first AMnode, which started the whole procedure (cf., Figure 3). This AMnode now interprets the evaluation result as the address of the AMnode considered best for supporting the service in the current network scenario. That AMnode is then instructed to download and install the requested service module, and the receiver can access the desired service level as described in Section 2.3.
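To make the hop-by-hop procedure concrete, the following sketch walks a known AMnode path towards the sender and keeps the best candidate, mimicking what the evaluation packet accumulates. All interfaces here (repositories as nested dictionaries, scoring functions, the free-CPU metric) are illustrative assumptions, not AMnet's actual API.

```python
def run_evaluation(path_amnodes, repositories, proc_id):
    """path_amnodes: (address, node_state) pairs in receiver-to-sender order."""
    best_addr, best_score = None, float("-inf")
    for addr, state in path_amnodes:
        proc = repositories.get(addr, {}).get(proc_id)
        if proc is None:
            continue                 # repository lacks the class: forward unchanged
        score = proc(state)          # run the class-specific evaluation locally
        if score > best_score:       # record only strictly better candidates
            best_addr, best_score = addr, score
    return best_addr                 # reported back to the first AMnode

# Example with an assumed procedure id and a free-CPU score.
repos = {"a1": {"audio-transcode": lambda s: s["cpu_free"]},
         "a2": {"audio-transcode": lambda s: s["cpu_free"]}}
path = [("a1", {"cpu_free": 0.3}), ("a2", {"cpu_free": 0.7})]
print(run_evaluation(path, repos, "audio-transcode"))   # -> "a2"
```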
Fig. 3. Scheme of the Evaluation Process (the receiver issues a service request to its predecessor AMnode; the evaluation packet travels from AMnode to AMnode towards the sender, passing "regular" IP routers, and the result is returned to the first AMnode)
3 Conclusions and Outlook
AMnet provides an open and generic framework for user-tailored rapid service creation with a specific focus on heterogeneous group services. It is based on Programmable Networking technologies and aims at building an overlay network on top of the available Internet infrastructure. A major goal of AMnet is to provide individual services on demand without complex installation and management overhead. AMnet is based on IP and benefits from its multicast extensions in several ways: distinct IP multicast groups are used for service and session announcements, as well as for disseminating data adapted to individual requirements.
This paper focused on AMnet Signaling – a flexible and active signaling protocol – developed specifically to support the placement and announcement, as well as the establishment and maintenance, of active services in the context of rapid service creation with AMnet. Moreover, the newly developed evaluation process for placing dedicated services inside the network was described. In contrast to the mechanisms presented in [6], this evaluation mechanism can be used inter-domain. Future work will focus on extending or changing the signaling mechanisms so that AMnet can also be used in networking environments where native IP multicast is not provided. Different approaches are being considered and will be evaluated. Moreover, the presented novel evaluation process will be introduced into the current prototype implementation, leading to enhanced experience with automated service discovery and placement.
References
1. D. J. Wetherall, J. V. Guttag, and D. L. Tennenhouse. ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols. In Proceedings of IEEE OPENARCH, pages 117-129, April 1998.
2. S. Deering. Host Extensions for IP Multicasting. RFC 1112, IETF, August 1989.
3. W. Fenner. Internet Group Management Protocol, Version 2. RFC 2236, IETF, November 1997.
4. M. Handley. SAP: Session Announcement Protocol. Internet draft, IETF, November 1996.
5. T. Harbaum, B. Metzler, R. Wittmann, and M. Zitterbart. AMnet: Heterogeneous Multicast Services based on Active Networking. In Proceedings of IEEE OPENARCH 99, pages 98-107, New York, NY, USA, March 1999.
6. A. Speer, R. Wittmann, and M. Zitterbart. Locating Services in Programmable Networks with AMnet Signalling. In Proceedings of the Sixth Conference on Intelligence in Networks (SmartNet 2000), Wien, Austria, September 2000.
Virtual Home Environment for Multimedia Services in 3rd Generation Networks
Orazio Tomarchio, Andrea Calvagna, and Giuseppe Di Modica
Dipartimento di Ingegneria Informatica e delle Telecomunicazioni, Università di Catania, Viale A. Doria 6, 95125 Catania, Italy
{tomarchio, acalva, [email protected]}
Abstract. The Virtual Home Environment (VHE) has been introduced as an abstract concept enabling users to access and personalize their subscribed services whatever the terminal they use and whatever the underlying network used. The European IST VESPER project (Virtual Home Environment for Service Personalization and Roaming Users) aims to provide an architectural solution and an implementation of the VHE, providing ubiquitous service availability, personalised user interfaces and session mobility, while users are roaming or changing their equipment. In this paper we present a multimedia delivery service, one of the trial services selected to demonstrate VHE features, showing its interconnection with the so far defined VESPER VHE architecture.
1 Introduction
The technological evolution of recent years, both in network speed and bandwidth and in the multimedia capabilities of low-end devices, has made possible the convergence of telecommunication networks and data networks, leading to a new generation of integrated, IP-based transport infrastructure that will enable the deployment of ever more valuable services for users, such as real-time video communication. Also, both existing and upcoming wireless technologies enable the support of data services and audio/video communication for "moving" users, that is, users whose network location may change even while a service session is in progress. As these services will be available over heterogeneous networks, users would like to access them in a personalized way, transparently and independently of the underlying network technology and the particular terminal used. In 3GPP [1] this idea is embodied in the Virtual Home Environment (VHE) [2,3,4], defined as a concept for Personal Service Environment (PSE) portability across network boundaries and between terminals. The concept of the VHE is such that users are consistently presented with the same personalized features of subscribed services, in whatever network and on whatever terminal (within the capabilities of the terminal and the network), wherever the user may be located. The IST VESPER (Virtual Home Environment for Service Personalization and Roaming Users) project [12] (funded by the European Community) aims to provide an architectural solution and an implementation of the VHE, providing ubiquitous service availability, personalized user interfaces (i.e., service portability) and session mobility, while users are roaming
or changing their equipment. VESPER VHE should hide from the user the variety of access network types (fixed or wireless), the variety of supporting terminals, and the variety of network and service providers involved during service provision [9,10]. As regards international standardization fora, the project intends to influence the course of standardisation within 3GPP, Parlay [8] (to enhance standard APIs for sustaining the VHE functionality) and OMG [5,6]. In the context of the VESPER project, this paper describes our effort in the realization of a test service-application to validate the design and implementation work done in the project [11]. The service we describe, selected among others as one of the trial services used to demonstrate the features of the VESPER prototype, is a "multimedia content delivery" service, designed to provide a mechanism to distribute multimedia streams (consisting of video and audio, but also pictures, etc.) to end-users. In this paper we show the main benefits gained by such a service (which we named the "multimedia delivery service" (MDS)) when used in the context of a Virtual Home Environment. The rest of the paper is organized as follows. Section 2, after a brief overview of the VESPER VHE architecture, describes the multimedia delivery service and its interactions with the VHE architecture at the current phase of the project. In Section 3 we focus on the adaptation feature of the VHE, and we conclude the paper in Section 4.
2 Vesper Demonstration Services
The VESPER architecture has been designed using a component-based approach: all of the components rely on a CORBA-based environment for their internal communication. A more detailed description of the overall VESPER architecture is outside the scope of this paper and can be found in [10,12]. The VESPER components are embedded in a heterogeneous network and terminal landscape. Figure 1 shows the placement of the VHE architecture in relation to the network and terminal environment. At the server side, VHE functionality is accessed via the VHE API on top and deals via OSA/Parlay [8] gateways with different networks as the transport layer. At the terminal side, VHE functionality is also accessed via the VHE API and deals via USAT (Universal SIM Application Toolkit) [14] or MExE (Mobile Station Application Execution Environment) [13] with terminal core functions. One of the objectives of the VESPER project is to define, design, and implement services which both impose precise requirements on the VHE architecture defined and implemented by the project, and demonstrate that this VHE architecture fulfils those requirements. VESPER will provide an open API to VASPs' applications, enabling the VHE concept within the service. VASPs (Value Added Service Providers) will be able to offer advanced services, abstracting from the terminal used for accessing the service and from the underlying networks, leaving all of the basic VHE services to the VHE provider role. In this scenario, this section describes the MDS service, one of the applications selected for validating the VESPER middleware during the two trials, the first held at the end of 2001 in a lab environment, and the second (at the end of 2002) also involving real networks.
Fig. 1. Embedding of VHE Architecture (terminal-side and network-side VHE components are accessed through the VESPER VHE API; on the network side the components reach circuit-switched, IP, and mobile networks via an OSA/Parlay gateway, while on the terminal side they reach the terminal core functions via the MExE and USAT APIs)
2.1 Multimedia Delivery Service
People are nowadays used to getting every kind of information through the Internet, using common Web browser applications. The next big challenge is to deliver multimedia information to end users independently of the device used to access the network: notebooks, PDAs, and next-generation UMTS phones will be able to display not only text and images but also audio and video. Using VHE functionalities, a multimedia delivery application will be greatly enhanced and widely spread among users. Users will be restricted neither in the set of terminals they can use to access the application and receive data, nor in the network they are connected to. Users will be able to access the application using different terminals, ranging from the powerful multimedia PC to the personal PDA with limited screen size and computing capabilities. Even future smartphones will be able to reproduce small video clips on their small screens. The MDS will benefit from the adaptation, connectivity, and service personalisation functionalities provided by the VESPER VHE. The adaptation feature will make the service delivery transparent to the VASP, thus allowing a larger number of terminals to access the MDS from a wider range of available networks. The service will only provide the stream content to the VHE component responsible for adaptation, whose task is to adapt it to the user preferences, user network, and user terminal, and to deliver it. What is asked of the adaptation is not only user interface adaptation but also content adaptation. So, for instance, if the user terminal is not provided with the right codec to watch a movie, it is the task of the component responsible for adaptation either to "adapt" (transcode) the stream into a format compatible with one of the codecs owned by the user terminal, or to upload the suitable codec to the terminal.
3 Service Adaptation in the Vesper VHE
One of the key features of the VHE is the adaptation it provides to terminal capabilities and user preferences. A service using the VESPER VHE APIs should not need to care about the terminal device used by the end-user: the adaptation takes charge of that. The adaptation is realized by the Adaptation Component, whose main tasks are:
- to adapt the contents a VESPER service provides to the user according to the capabilities of the terminal accessing the VHE Server, the end-user interface and service preferences, and the QoS classes supported by the underlying network(s);
- to manage the user interaction with the service.
Fig. 2. Media adaptation (the content flows from the VASP Server through the CORBA-based VHE API to the media adapter in the VHE system, and from the adapter over the transport network to the terminal)
The VHE service implementation is completely independent of the environment actually in use (network type, terminal type, user preferences) during service usage. The adaptation component offers interfaces in the VHE API to enable this feature. An overview of an adaptation scenario is given in Figure 2. The adaptation to network and terminal capabilities is done at the VHE system. The content flow goes from the VASP Server to the media adapter via a TCP/IP connection through the CORBA-based VHE API. On the terminal side, a connection is established by the Connection Component depending on the transport network via OSA/Parlay. The media adapter supports the network protocol used to provide the terminal with the media stream (e.g., announcements, multimedia/video). The Adaptation Component specification provides for a flexible engineering scheme such that the media adapter can be wrapped as a mobile agent whose itinerary is limited to the VASP Server and/or the End-User's terminal. The adaptation is done at the VASP Server by the agent, providing adaptation to the media stream supported by the terminal and to the network protocol used. At the terminal side, a corresponding agent decodes the stream and provides content output at the terminal. This solution presupposes that the terminal and the VASP Server provide an agent execution environment. The second role of the adaptation is to manage the user interaction with the service: this means that the user interface should be adapted and presented to the user
according to the terminal capabilities (apart from user preferences). In order to offer this kind of adaptation, each service is required to provide a formal description (UIModel) of the user interface it wants to offer to its users: this description is expressed in XML and includes several kinds of logical tools for user interaction (buttons, text fields, text entries, checkboxes, etc.). The actual representation of this graphical model depends on the actual device used by the user to access the service. It is the adaptation component that decides the best way to render the User Interface model (UIModel) provided by the service.
MDS Server
Adaptation Manager
UIModel Interpreter
UIModel
Web Server
Media Adapter
User Terminal
ORRN8S0HGLD$GDSWHU
&KRRVHV0HGLD$GDSWHUDGDSWHGWR7HUPLQDO1HWZRUNDQGDQGXVHU3URILOHV
JHW0HGLD$GDSWHU
*HW8,0RGHO
VHW8,0RGHO
© LQSXW
LQWHUSUHW8,0RGHO
HOHPHQWV
RXWSXW
HOHPHQWV
DFWLRQ
HOHPHQWV
ª
LQWHUSUHW8,0RGHO
$GDSW8,0RGHO
+70/SDJH
85/ DFWLRQ3HUIRUPHG VHW8,0RGHO
9. User action
PRYLHVVHOHFWHG«
«
LQWHUSUHW8,0RGHO
Fig. 3. MDS interaction with the VHE Adaptation component
For a better understanding of this mechanism, Figure 3 shows a step-by-step scenario involving the MDS service's interaction with the adaptation component, supposing the user has accessed the service through a Web browser:
1. The service looks up a media adapter. The VHE Server provides the service with a list of available and appropriate media adapters for the terminal and network currently used.
2. The MDS chooses a media adapter whose presentation characteristics cope best with its logic.
3. The service asks for a UIModel object.
4. The MDS sends the description of the service's user interface to this object as an XML description. This description synthesises the interface to be presented on the End-User's terminal (output messages, input fields, selection lists, buttons, etc.).
5. The MDS asks the Adaptation Component to interpret the UIModel.
6. (and 7) The Adaptation Component interprets the model and formats it into an HTML page, mapping the output, input and action elements into HTML elements: HTML text, HTML text fields/selection fields, and HTML buttons, respectively. The mapping of the UIModel description onto an HTML page takes into consideration the terminal capabilities and the End-User's user interface preferences.
8. Once the adapted HTML page has been produced, the Adaptation Component submits it to the Web server, which then sends the page to the user's browser.
9. The user interacts with the received HTML page, fills in the text fields or selects values in the selection fields, and then clicks on a button.
10. The Web server forwards the request to the registered UIModel Interpreter.
11. The UIModel Interpreter object collects the information in the URL request, builds a description of the user's interaction (entered values, button pressed) in the form of an XML description, and invokes the MDS callback action listener.
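To illustrate steps 4-7, the sketch below parses a hypothetical UIModel vocabulary and renders it to HTML. The element names (output, input, action) are invented for the example, since the paper does not list the exact XML schema, and the mapping ignores terminal capabilities and user preferences for brevity.

```python
import xml.etree.ElementTree as ET

UIMODEL = """<uimodel>
  <output text="Select a movie:"/>
  <input name="movie"/>
  <action name="play" label="Play"/>
</uimodel>"""

def render_html(uimodel_xml):
    """Map output/input/action elements to HTML text, text fields and buttons."""
    parts = ["<form method='get' action='/ui'>"]
    for el in ET.fromstring(uimodel_xml):
        if el.tag == "output":
            parts.append("<p>%s</p>" % el.get("text"))
        elif el.tag == "input":
            parts.append("<input type='text' name='%s'/>" % el.get("name"))
        elif el.tag == "action":
            parts.append("<button name='%s'>%s</button>" % (el.get("name"), el.get("label")))
    parts.append("</form>")
    return "\n".join(parts)

print(render_html(UIMODEL))
```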
4 Conclusions
In this paper an overview of the VESPER project has been presented. The paper focused on the description of a multimedia delivery service, selected as a trial application for demonstrating VHE capabilities. These kinds of applications can be greatly enhanced by VHE features, since users will be able to access this advanced service by means of every available device. The key functionalities of adaptation to terminal capabilities and personal user preferences have been described in more detail.
Acknowledgement. This work has been performed in the framework of the project IST VESPER, which is funded by the European Community. The authors would like to acknowledge the contributions of their colleagues from Intracom Hellenic Telecommunications and Electronics Industry S.A., National Technical University of Athens, Institut National de Recherche en Informatique et Automatique, IKV++ Technologies AG, FOKUS Fraunhofer Institute for Open Communication Systems, Fondazione Ugo Bordoni, Università di Catania, Portugal Telecom Inovação, University of Surrey, Technical Research Centre of Finland, and SIEMENS AG Österreich.
References
1. 3rd Generation Partnership Project (3GPP), http://www.3gpp.org
2. "Virtual Home Environment / Open Service Architecture", TS 23.127, 3GPP project.
3. "The Virtual Home Environment", TS 22.121, 3GPP project, release 2000.
4. 3GPP, 3G TS 22.121 v3.2.0, The Virtual Home Environment, stage 1.
5. 3GPP, 3G TS 29.198 v3.0.0, Open Service Access (OSA) API Part 1, stage 3.
6. 3GPP, 3G TS 23.127 v3.1.0, Virtual Home Environment/Open Service Architecture, stage 2.
7. A. Calvagna, A. Puliafito, and L. Vita, "A Low Cost/High Performance Video on Demand Server", in IEEE Int. Conf. on Computer and Communication Networks (ICCCN'99), Boston, MA, USA, 11-13 October 1999.
8. Parlay Group, Parlay Specifications, http://www.parlay.org/
9. VESPER, IST-1999-10825, Technical Annex.
10. VESPER, IST-1999-10825, D22 – VHE Architectural Design.
11. VESPER, IST-1999-10825, D42 – Initial Demonstration Services Specification.
12. VESPER WWW site: http://vesper.intranet.gr/
13. 3G TS 22.057 v3.0.1 – MExE.
14. ETSI TS 101.267 v8.3.0, Specification of the SAT for the SIM-ME interface.
On Providing End-To-End QoS Introducing a Set of Network Services in Large-Scale IP Networks
E. Tsolakou, E. Nikolouzou, and S. Venieris
National Technical University of Athens, School of Electrical and Computer Engineering, Telecommunications Laboratory, 9 Heroon Polytechniou Str, 15773 Athens, Greece
{evi, enik}@telecom.ntua.gr, [email protected]
Abstract. The Differentiated Services (DiffServ) architecture has been proposed as a scalable solution for providing service differentiation among flows. Towards the enhancement of this architecture, new mechanisms for admission control and a new set of network services are proposed in this paper. Each network service is appropriate for a specific type of traffic and is realized through its own network mechanisms, which are the Traffic Classes. Traffic Classes provide the traffic handling mechanisms for each Network Service and are composed of a set of admission control rules, a set of traffic conditioning rules and a per-hop behavior (PHB). Different traffic-handling mechanisms are proposed for each network service and are implemented with the use of the OPNET simulation tool. A large-scale network is used as a reference topology for studying the performance and effectiveness of the proposed services. Keywords: Network Services, QoS, Traffic Classes
1 Introduction
Motivated by the rapidly changing QoS requirements of newly introduced network applications, the Internet has been evolving towards providing a wide variety of services, in order to meet the quality of information delivery demanded by the applications. For the past few years, there have been two major efforts focusing on augmenting the single-class, best-effort Internet to include different levels of guarantee in quality of service: Integrated Services (IntServ) and Differentiated Services (DiffServ) [1]. The most salient difference between these two approaches lies in the treatment of packet streams. IntServ tends to emulate circuit-switched networks, focusing on guaranteeing QoS for individual packet flows between communication end-points. To ensure the level of guarantee on a per-flow basis, it requires explicit signaling to reserve the corresponding resources along the path between these end-points. One major dilemma faced by this approach is that in the core of the Internet, where several million flows exist, it may not be feasible to maintain and control the forwarding states efficiently. These scalability and management problems have recently been addressed by the DiffServ approach.
The focal point of the DiffServ model lies in the differentiation of flows at the edge routers of a DS domain and the aggregation of flows of the same service class at the core routers of the DS domain. At each ingress interface of an edge router, packets are classified and marked into different classes, using the Differentiated Services CodePoint (DSCP). Complex traffic conditioning mechanisms such as classification, marking, shaping, and policing are pushed to the network edge routers. Therefore, the functionality of the core routers is relatively simple: they classify packets and then forward them using the corresponding Per-Hop Behaviors (PHBs). In this sense, a PHB is a means by which a node allocates resources to behavior aggregates, and it is on top of this basic hop-by-hop resource allocation mechanism that useful differentiated services may be constructed. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms, and the parameters associated with those mechanisms are closely related to those of traffic conditioning.
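As a minimal illustration of this edge behaviour, the sketch below classifies, polices, and marks a packet with a standard DSCP. The classification policy and the profile check are placeholders, not the configuration proposed in this paper.

```python
# Standard code points: EF = 46 (0x2E), AF11 = 10 (0x0A), best effort = 0.
DSCP = {"EF": 0x2E, "AF11": 0x0A, "BE": 0x00}

def classify(pkt):
    """Placeholder policy over the packet's header fields."""
    if pkt["proto"] == "udp" and pkt["size"] <= 128:
        return "EF"      # e.g. small constant-rate voice packets
    return "BE"

def condition(pkt, in_profile):
    """Classify, police and mark one packet at an ingress interface."""
    cls = classify(pkt)
    if not in_profile(pkt, cls):   # out-of-profile traffic is remarked to best effort
        cls = "BE"
    pkt["dscp"] = DSCP[cls]
    return pkt

print(condition({"proto": "udp", "size": 80}, lambda p, c: True)["dscp"])   # -> 46
```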
2 Network Services
In order to provide QoS guarantees in a DiffServ network, it is essential to assure QoS differentiation. Therefore, a set of five Network Services (NS) has been specified and implemented in our framework [2]; these comprise the services sold by the provider to potential customers, either end-users or other providers. The specified NSs are: Premium Constant Bit Rate (PCBR), Premium Variable Bit Rate, Premium Multimedia, Premium Mission Critical, and Standard Best Effort. The PCBR network service is intended to support applications that require VLL-like services, i.e., voice flows, voice trunks, and interactive multimedia applications with low bandwidth requirements. Such flows are usually characterized by an almost constant bit rate (CBR) and low bandwidth requirements, while a great number of them are unresponsive (UDP). In addition, they should have small packets (

While rtx count > 0, the retransmitted segment cannot have reached the receiver, and there is no point in repeating the retransmission. When rtx count expires, rtx count is reset and the sender repeats a complete cycle of all retransmissions still requested by incoming NACKs. SaTPEP also utilizes TCP's Retransmission Timer. Whenever the timer expires, the SaTPEP sender sets rwnd to 1, dupwnd to 0, and retransmits the segment requested by the last ACK received. Figure 1 describes more formally the loss recovery algorithm implemented at the SaTPEP sender.
Initially: dupwnd = 0, recover = null, hinack = null, hiack = null, rtx count = null, rtx allow = 0, rtx stop = null

On arrival of 1st dupACK:
- hiack ← dupACK's ack no, recover ← highest transmitted seq no, dupwnd ← 1
- if rtx count is null or 0: rtx count ← rwnd − 1
- hinack ← NACK's highest seq no
- retransmit segment(s) requested in NACK option
- reduce dupwnd by the number of segments retransmitted
- transmit new data (as much as cwnd allows)

On arrival of any dupACK:
- dupwnd ← dupwnd + 1, rtx count ← rtx count − 1
- if NACK's highest seq no > hinack and rtx allow = 0:
  - retransmit what the NACK requests, hinack ← NACK's highest seq no
- else if rtx allow = 1 and NACK does not contain rtx stop:
  - retransmit segment(s) requested in NACK option
  - if hinack > NACK's highest seq no: hinack ← NACK's highest seq no
- else if rtx allow = 1 and NACK contains rtx stop:
  - rtx allow ← 0, rtx stop ← null, rtx count ← rwnd
- reduce dupwnd by the amount of data retransmitted
- if rtx count = 0: rtx allow ← 1, rtx stop ← NACK's highest seq no
- transmit new data (as much as cwnd allows)

On arrival of Partial ACK:
- reduce dupwnd by (Partial ACK's ack no − hiack)
- hiack ← Partial ACK's ack no, perform actions for any dupACK

On arrival of New ACK:
- dupwnd ← 0, recover ← null, hiack ← New ACK's ack no, rtx count ← null, rtx allow ← 0
Fig. 1. SaTPEP sender reaction to Duplicate Acknowledgements
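The rtx count gate described above can be paraphrased in a few lines. The class below is a loose Python sketch with our own names; it omits the rtx stop handling that Fig. 1 uses to bound repeated cycles.

```python
class RtxGate:
    """After a retransmission, ignore NACKs for the same data for roughly one
    link round-trip, i.e. until rwnd - 1 further dupACKs have arrived."""
    def __init__(self, rwnd):
        self.rwnd = rwnd
        self.count = 0               # rtx_count; 0 means no cycle in progress

    def on_first_dupack(self):
        if self.count == 0:
            self.count = self.rwnd - 1   # arm the gate

    def allow_repeat(self):
        """Call once per dupACK; True when a new retransmission cycle may start."""
        if self.count > 0:
            self.count -= 1
        if self.count == 0:
            self.count = self.rwnd       # re-arm for the next cycle
            return True
        return False
```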
3 Simulation Experiments
We evaluate the performance of SaTPEP in comparison to SACK TCP in a series of simulation experiments using the ns simulator [9]. A bi-directional GEO satellite link is used to establish communication between N data senders and N receivers. The data senders are connected to the Uplink Gateway, and the receivers to the Downlink Gateway. They perform bulk data transfers of various file sizes. The packet size is set to 1500 bytes. The propagation delay of the satellite link is set to 275 ms, resulting in an RTT of 550 ms between UG and DG. Link capacity values range from 2 to 10 Mbps. All other links have a propagation delay of 1 ms, and their capacity is set to 10 Mbps, or 100 Mbps in the case of a 10 Mbps satellite link. The packet loss probability, Ploss, of the satellite link ranges from 10−6 to 10−2. All other links are error-free. Queue sizes are set to 600 packets for all links, so that end-to-end TCP transfers do not experience congestion loss. The focus of our comparison is on goodput (file size / connection duration), as perceived by the receiver hosts, and on fairness between multiple simultaneous connections. In the first series of experiments, N is set to 1. All other parameters cover the full ranges already mentioned. In order for TCP to be able to eventually fully utilize the satellite link, we have set the TCP rwnd to rather high values, from 100 to 500 segments. Figure 2 depicts the goodput achieved by SaTPEP and TCP for different Ploss values. The file size is 1 Mbyte and the link capacity 6 Mbps. SaTPEP performs significantly better than TCP because it is able to fully utilize the link after the first RTT of the connection. Frequent losses cause
TCP’s cwnd to remain low, while SaTPEP still raises cwnd high enough to fully utilize the link. Figure 2 also depicts the goodput ratio for SaTPEP to TCP, which rises significantly for Ploss = 10−2 . For a given Ploss value, SaTPEP’s performance increases for higher file sizes and link capacities, as shown in figure 3. Both graphs are obtained for Ploss = 10−3 .
Fig. 2. Goodput and Goodput Ratio for different Ploss values (left: goodput in kbytes/sec of SaTPEP and TCP; right: SaTPEP/TCP goodput ratio; x-axis: packet loss probability, 10−6 to 10−2)

Fig. 3. Goodput for different file size and link capacity values (goodput in kbytes/sec of SaTPEP and TCP versus file size, 1-10 Mbytes, and link capacity, 2-10 Mbps)
In the second series of experiments, we set N to 21 and the link capacity to 6 Mbps. At time t1 = 1 sec, twenty senders begin transmission of a 2 Mbyte file each. At time t2 = 10 sec, the 21st sender begins transmission of a 500 kbyte file. The rwnd value for TCP connections is set to 25 segments, high enough to result in full utilization of the link without causing congestion during the initial Slow Start phase. Figure 4 depicts the goodput achieved by each of the initial twenty connections for Ploss = 10−3. It is clear that SaTPEP distributes the link capacity in a fairer manner than TCP does. The average goodput achieved by the twenty initial connections, along with the goodput of the 21st connection, is shown in Figure 4 for different Ploss values.
Fig. 4. Goodput of 20 simultaneous connections. Average Goodput of 20 connections, and Goodput of connection 21 for different Ploss values (goodput in kbytes/sec; curves: SaTPEP, TCP, SaTPEP-avg, TCP-avg, SaTPEP-21, TCP-21)
4 Conclusion
In this paper we introduced SaTPEP, aiming at improving TCP performance over satellite links. SaTPEP's flow control is based on link utilization measurements, and its loss recovery is based on Negative Acknowledgements. Simulation experiments show significant performance improvement over TCP in the presence of available link capacity and under high error rates. Under heavy traffic, SaTPEP exhibits remarkable fairness between simultaneous connections.
References
1. C. Partridge and T. Shepard, "TCP/IP Performance over Satellite Links," IEEE Network Mag., pp. 44-49, Sept. 1997.
2. V. Jacobson, "Congestion Avoidance and Control," in Proc. ACM SIGCOMM, Stanford, CA, USA, Aug. 1988.
3. M. Allman, D. Glover, and L. Sanchez, "Enhancing TCP over Satellite Channels using Standard Mechanisms," RFC 2488, Jan. 1999.
4. M. Allman, S. Dawkins, D. Glover, J. Griner, D. Tran, T. Henderson, J. Heidemann, J. Touch, H. Kruse, S. Ostermann, K. Scott, and J. Semke, "Ongoing TCP Research Related to Satellites," RFC 2760, Aug. 2000.
5. J. Border, M. Kojo, J. Griner, G. Montenegro, and Z. Shelby, "Performance Enhancing Proxies Intended to Mitigate Link-Related Degradations," RFC 3135, June 2001.
6. I. Minei and R. Cohen, "High-Speed Internet Access Through Unidirectional Geostationary Satellite Channels," IEEE JSAC, vol. 17, no. 2, pp. 345-359, Feb. 1999.
7. T. Henderson and R. Katz, "Transport Protocols for Internet-Compatible Satellite Networks," IEEE JSAC, vol. 17, no. 2, pp. 326-344, Feb. 1999.
8. I. Akyildiz, G. Morabito, and S. Palazzo, "TCP-Peach: A New Congestion Control Scheme for Satellite IP Networks," IEEE/ACM Transactions on Networking, vol. 9, no. 3, pp. 307-321, June 2001.
9. "NS (Network Simulator)," http://www.isi.edu/nsnam/ns/.
10. S. Keshav, "A Control-Theoretic Approach to Flow Control," in Proc. ACM SIGCOMM, Zurich, Switzerland, Sept. 1991.
An Overlay for Ubiquitous Streaming over Internet
Chai Kiat Yeo1, Bu Sung Lee1, and Meng Hwa Er2
1 School of Computer Engineering
2 School of Electrical & Electronics Engineering
Nanyang Avenue, S639798, Singapore
{Asckyeo, Ebslee, Emher}@ntu.edu.sg
Abstract. Conventional distribution of real-time multimedia data uses multicasting or a series of relays and tunnels for unicast networks. The former is a capability not popularly enabled by a lot of networks while the static relays cannot readily adapt to changing network conditions and are potential bottlenecks in a heavily accessed system. This paper proposes a dynamic overlay framework for streaming multimedia data over heterogeneous networks. The overlay comprises a self-improving tree which is built from client relays on the fly and a lightweight server to manage the tree. The overlay provides a better QoS than conventional relays as it monitors the network and re-configures the tree to adapt to changing environments. Clients can switch parents for better QoS. The robustness of the tree is improved by using a spiral mechanism and failure of the lightweight server will not impact the data distribution functionality of the existing tree.
1 Introduction

The IP multicast [1] has been a highly efficient delivery mechanism for best-effort, large-scale, multi-point delivery of real-time multimedia data. However, Internet Service Providers and organisations deliberately disable multicast traffic to protect their networks against unwanted traffic. With the increasing popularity of multicast and broadband applications, the only way for intranet clients and multicast-disabled networks to access multicast sessions is through a combination of tunnelling and a network of static relays. [2] and [3] are examples of such applications. [2] proposes a hierarchical configuration of reflectors to act as unicast-multicast bridges. It uses a cluster-based approach with the manual placement of distributed servers at bottlenecks in the network to balance the load. The problem with this approach is the inability of the system to respond to rapid changes in the network and the potential of these servers becoming bottlenecks themselves. [3] proposes a centralised framework for developing collaborative applications using a lightweight application-level multicast tunnelling scheme called mTunnel [4]. A centralised server is used to view, manage and effect all tunnelled sessions, with specific gateways employed to unify unicast and multicast clients. Its drawback is the potential bottleneck in host processing capability and network resources.
2 Framework Overview and Design

Fig. 1 shows the architecture and the operation of the framework. It comprises the Directory Server (DS), the Web Server (WS) and the overlay tree of client nodes. The overlay tree is responsible for the distribution of data streams, while the DS is responsible only for the management functions; hence the load on the DS is vastly reduced compared to [3]. The WS provides the GUI for sources to advertise their sessions. A separate overlay is built for each source.
2.1 Overlay Construction and Operation

A source can either be unicasting or multicasting. The former has to advertise its session by contacting the WS (Step 1), while the latter is automatically discovered by the WS via the Session Directory Service (sdr) (Step 1) [5]. The overlay is built using the DS as a point of contact [6]. This tree-only approach is much less complex than the tree-mesh approach adopted in [7] and [8]. Note that should the DS fail, data distribution still functions normally, except that new clients cannot join the tree until the DS recovers. Fig. 2 shows an example of a 4-level overlay tree, rooted at the source. Level 1 clients are multicast-enabled clients (C1 and C2, linked by dotted lines to the source) and proxies (Proxy 1) set up by the framework. The proxies act as relays for unicasting sources as well as a parent for the first unicast client joining the tree; they also double up as static relays in the event of severe client failures. Clients from Level 2 onwards are simply members who join the group over time. A new member selects the session to join from the WS (Step 2 of Fig. 1) and issues a join request to the DS (Step 3). The DS searches its database and returns a list of potential parents (Step 4) using an algorithm similar to Prim's [9], commonly used to derive the minimum spanning tree in multicast routing. The clients are categorized into four groups based on the Round Trip Time (RTT) between the client and the source, with boundaries derived from data provided by [10,11]:
- Cat 1: RTT < 100 ms
- Cat 2: 100 ms < RTT < 200 ms
- Cat 3: 200 ms < RTT < 400 ms
- Cat 4: RTT > 400 ms
Cat 1 clients are always chosen as parents, to ensure that the chosen parent is closest to the originating source. Note that, unlike Prim's algorithm, this does not necessarily mean that the chosen parent is closest to the client. However, the proposed framework is self-improving, such that the clients converge towards the closest parent, ultimately reverting to Prim's algorithm again. The DS returns a list of 5 parents (where available) in ascending order of categories, with a maximum of 3 Cat 1 clients, 1 Cat 2 client and 1 Cat 3 client; the latter two are selected randomly. The client then establishes connections with the given Cat 1 potential parents (Step 5), connects to the parent with the best QoS, i.e., the one closest to it as per Prim's algorithm, and updates the DS (Step 6).
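A small sketch of this category-based parent selection follows; the member record and its fields are our own simplification of whatever the DS actually stores.

```python
import random
from collections import namedtuple

Member = namedtuple("Member", "addr category")   # illustrative client record

def categorize(rtt_ms):
    """Map the RTT to the source onto categories 1-4 (Section 2.1 boundaries)."""
    for cat, limit in enumerate((100, 200, 400), start=1):
        if rtt_ms < limit:
            return cat
    return 4

def potential_parents(members):
    """DS reply: up to 3 Cat 1 nodes plus one random Cat 2 and one random Cat 3 node."""
    by_cat = {c: [m for m in members if m.category == c] for c in (1, 2, 3)}
    reply = by_cat[1][:3]
    for cat in (2, 3):
        if by_cat[cat]:
            reply.append(random.choice(by_cat[cat]))
    return reply
```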
Fig. 1. Overlay Architecture. (Figure: the Web Server with sdr, the Directory Server, a source, multicast networks, and unicast clients; numbered arrows correspond to Steps 1-6 in the text, and the legend distinguishes multicast tunnels from control messages.)
2.2 Overlay Adaptation

To adapt the overlay to changing network conditions, clients monitor the RTT to their respective parents and also gossip [12] with the other potential parents returned by DS. Should the measured RTT rise above the initial category of its parent, the client attempts to switch to a better parent. As illustrated in Fig. 2, C8 gossips with C4, C5 and C6. Note that Cat 2 and Cat 3 nodes are also involved: as the overlay tree strives to improve its quality, the QoS delivered by each client changes, so their inclusion provides the client with a wider list of gossipers and helps avoid partitioning of the tree without reverting to DS. Each client sends its own QoS parameter (RTT) to the potential parents it gossips with. If the client finds that the QoS offered by another potential parent is better than that of its current parent, it performs a parent switch. The client informs its children about the switch so as to avoid an influx of switching among them. Switching oscillation is prevented by three conditions, all of which must hold before the client switches: the QoS history of the potential parent must be better than the client's current value by a threshold, the client must not have switched within a predefined time period, and the client must not have received information that its current parent is itself performing a switch.
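The following Python sketch illustrates the three anti-oscillation conditions described above. The threshold value and refractory period are illustrative assumptions; the paper does not specify them.

    import time

    class SwitchGuard:
        """Gate for parent switching; parameter values are illustrative."""

        def __init__(self, threshold_ms=20.0, min_interval_s=30.0):
            self.threshold_ms = threshold_ms      # required QoS improvement
            self.min_interval_s = min_interval_s  # refractory period
            self.last_switch = float('-inf')

        def may_switch(self, current_rtt_ms, candidate_history_rtt_ms,
                       parent_is_switching):
            # 1) the candidate's QoS history must beat the current value
            #    by a threshold
            improved = candidate_history_rtt_ms < current_rtt_ms - self.threshold_ms
            # 2) the client must not have switched recently
            settled = time.monotonic() - self.last_switch > self.min_interval_s
            # 3) the current parent must not itself be mid-switch
            return improved and settled and not parent_is_switching

        def record_switch(self):
            self.last_switch = time.monotonic()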
Fig. 2. Example of an Overlay Tree with Spiral and Gossip Mechanisms. (Figure: a 4-level tree rooted at the source, with Level 1 clients C1, C2 and Proxy 1 and lower-level clients C3-C11; spiral links connect clients to their grandparents, and gossip flows run from C8 to C4, C5 and C6.)
2.3 Overlay Robustness

Membership of the overlay tree is dynamic, as clients join and leave the tree and experience failures. The spirals shown in Fig. 2 are incorporated to strengthen the tree without incurring the complexity of a full mesh. Spirals can withstand node failures in any overlay tree branch, so long as the failures do not occur at consecutive nodes of the same branch. Each client maintains a connection with its grandparent so that, should its parent fail, it simply connects to its grandparent without needing to request a new list of potential parents from DS. Information about the grandparent is passed to the client when it first establishes a connection with its parent. Fig. 2 shows spirals from C11 to C3 and from C7 to C1, which can withstand the failure of clients C7 and C3 respectively. Level 2 clients, which have no grandparents, spiral with the siblings of their parents, e.g. C3 to C2 and C4 to C2. For consecutive node failures in the same branch, recovery is via the gossip mechanism; if all else fails, the client can simply request a new parent from DS. A client that leaves voluntarily informs its children, parent, grandparent and grandchildren of its impending departure. The child nodes then connect immediately to their grandparent (the leaving client's parent), and children that spiral with the leaving client similarly switch their spirals to the leaving client's parent.
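A minimal Python sketch of the recovery order a client might follow on parent failure; the Peer layout and function names are assumptions for illustration, not the paper's implementation.

    from dataclasses import dataclass

    @dataclass
    class Peer:
        name: str
        alive: bool = True
        mean_rtt_ms: float = 100.0  # QoS history kept by the gossip mechanism

    def recover_from_parent_failure(grandparent, gossip_peers, ds_request_parent):
        """Recovery order from Sect. 2.3: spiral to the grandparent first,
        then fall back to gossip peers, and finally to the Directory Server."""
        # 1) Spiral: reconnect to the grandparent learned at join time.
        if grandparent is not None and grandparent.alive:
            return grandparent
        # 2) Consecutive failures on the branch: pick the live gossip peer
        #    with the best QoS history.
        live = [p for p in gossip_peers if p.alive]
        if live:
            return min(live, key=lambda p: p.mean_rtt_ms)
        # 3) Last resort: ask DS for a fresh parent.
        return ds_request_parent()

    # Example: the grandparent is down, so recovery goes via a gossip peer.
    peers = [Peer("C4", mean_rtt_ms=80), Peer("C5", mean_rtt_ms=150)]
    new_parent = recover_from_parent_failure(Peer("C1", alive=False), peers,
                                             lambda: Peer("Proxy 1"))
    print(new_parent.name)  # -> C4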
3 Performance

The framework is implemented in Java using JDK 1.3 and JMF 2.1 and has been tested on Windows 98/NT and Solaris. Fig. 3 shows the overlay used in the experiments. All the
clients are connected via a 100Base-T switch in a Local Area Network. The clients are Intel Pentium III 500 MHz PCs with 128 MB SDRAM running Windows 98. An MPEG-2 source with a peak rate of 3.5 Mbps is used. Results are compared against 6 unicast clients served by a conventional single static source, and against the ideal case of 6 multicast-enabled clients connected to a multicast source.

3.1 Loss Measurements

The average loss rate per client, shown in Fig. 4, is captured over 10 runs, and the experiment is repeated with the number of Level 2 and 3 clients varied from 2 to 4 to 6. Multicast is the most efficient for streaming multimedia data regardless of the number of clients, with an average loss rate of 0.27%. The static-server unicast setup is the least efficient, as it cannot scale, unlike the overlay tree, which scales much better given its distributed nature. The average packet loss per client increases from 0.72% to 1.25% when the number of clients increases from 4 to 6.
Fig. 3. Overlay used in Experiments. (Figure: a multicast source with clients C1-C7 arranged in Levels 1-3.)

Fig. 4. Average Loss Rate. (Figure: average loss rate per client, 0.0%-3.5%, versus the number of Level 2 and 3 clients, for pure multicast, the overlay tree, and sequential unicast.)
3.2 Inter-Level Latency

The inter-level latency, shown in Table 1, is measured through a series of ping requests between client and parent as the number of clients varies. The delay is insignificant, although it increases with the number of clients, since the parent's response time grows as it services more children.

Table 1. Inter-level Latency
# of Level 2 & 3 Clients    2       4       6
Latency (ms)                0.14    0.195   0.32
4 Conclusion
An application-level overlay for ubiquitous streaming of multimedia data has been proposed. The self-organising and self-improving abilities of the overlay are accomplished through the monitoring of network dynamics. By adapting itself to the prevailing network conditions, the overlay converges to better configurations.
References

1. S. E. Deering, Multicast Routing in a Datagram Internetwork, PhD Thesis, Stanford University (1991).
2. M. H. Willebeek-LeMair, Bamba - Audio and Video Streaming over the Internet, IBM J. Res. Develop., vol. 42, no. 2, Mar (1998) 269-279.
3. P. Parnes et al., mSTAR: Enabling Collaborative Applications on the Internet, IEEE Internet Computing, Sep-Oct (2000) 32-39.
4. P. Parnes et al., Lightweight Application Level Multicast Tunnelling Using mTunnel, Computer Communications, vol. 21 (1998) 1295-1301.
5. R. Finlayson, Describing Session Directories in SDP, Internet Draft, http://search.ietf.org/internet-drafts/draft-ietf-mmusic-sdp-directory-type-02.txt
6. M. Kadansky et al., Reliable Multicast Transport Building Block: Tree Auto-Configuration, IETF RMT WG, draft-ietf-rmt-bb-tree-config-02.txt, 2 Mar (2001).
7. Y. H. Chu, S. G. Rao, S. Seshan, H. Zhang, Enabling Conferencing Applications on the Internet Using an Overlay Multicast Architecture, SIGCOMM, San Diego, California, Aug (2001).
8. P. Francis, Yoid: Extending the Internet Multicast Architecture, http://www.aciri.org/yoid (2000).
9. R. C. Prim, Shortest Connection Networks and Some Generalizations, Bell Sys. Tech. Journal, vol. 36 (1957) 1389-1401.
10. T. Hansen, J. Otero, T. McGregor, H.-W. Braun, Active Measurement Data Analysis Techniques, Int. Conf. on Communications in Computing, Las Vegas, Jun (2000) 105-135.
11. Internet Traffic Report, http://www.internettrafficreport.com/index.html
12. Q. Sun, D. C. Sturman, A Gossip-Based Reliable Multicast for Large-Scale High-Throughput Applications, Proc. Conf. Dependable Systems & Networks (2000) 347-358.
A Measurement-Based Dynamic Guard Channel Scheme for Handover Prioritization in Cellular Networks

Roland Zander and Johan M. Karlsson
Department of Communication Systems, Lund University
Box 118, SE-221 00 Lund, Sweden
{rolandz, johan}@telecom.lth.se
Abstract. The introduction of guard channels in a cellular network is a method for giving priority to on-going calls by reserving channels exclusively for handover purposes. Herein, an adaptive measurement-based dynamic guard channel scheme is introduced. The proposed scheme uses the number of on-going calls in adjacent cells and measurements of handover probabilities to determine the number of guard channels to allocate in a cell. To improve the efficiency of the scheme, the calls are divided into groups depending upon mobility and latest visited cell, with separate measurements performed for every group. Simulations show that the proposed scheme yields lower blocking probabilities than similar guard channel schemes in all investigated scenarios.
1 Introduction
The quality of service (QoS) in a cellular network is determined, among other things, by the blocking probability for a new call due to channel occupancy and the probability of a forced call termination. An on-going call may be terminated prematurely due to a handover attempt failure, or because of a low signal-to-noise ratio and/or high attenuation. It is much more frustrating for a subscriber to have an on-going call dropped than a new call blocked. Since the operator would like to keep the subscribers satisfied, it is worthwhile to lower the probability of forced call termination, which can be achieved by the insertion of guard channels. Guard channels are channels exclusively reserved for handover calls; they lower the handover dropping probabilities at the expense of higher blocking probabilities and, in most cases, reduced throughput. In the fixed guard channel scheme, a fixed number of guard channels, Ng, is allocated in a cell [1]. A new call is accepted if at least Ng + 1 channels are available in the cell at the time of the call arrival, while a handover call needs only one available channel to be accepted. The fixed guard channel scheme is quite simple to implement, but it is not especially efficient due to its poor flexibility. In dynamic guard channel schemes, the number of guard channels allocated in a cell is time varying and depends upon the momentary network conditions [2,3]. A well-designed dynamic guard channel scheme provides a higher QoS than its fixed counterpart, but the most important advantage
with the dynamic schemes is their increased flexibility, which makes it possible to provide a certain average QoS to on-going calls. Unfortunately, dynamic guard channel schemes are more complex, requiring increased communication between base stations and a higher processing load. In this paper, an adaptive measurement-based dynamic guard channel scheme is introduced. To improve its efficiency, the calls are divided into groups depending upon mobility and most recent source cell (latest visited cell), with separate measurements performed for every group.
2 The Proposed Algorithm
The proposed dynamic guard channel scheme uses measurements to estimate the momentary handover arrival intensity of every cell. From these intensities, the number of guard channels to be allocated in each cell is derived. All base stations measure the probability that an on-going call residing in their coverage area makes a handover attempt (handover probability) and the probability that a handover attempt takes place to a specific target cell (handover direction probability). To improve the accuracy of the estimations, the calls are divided into groups depending upon most recent source cell, with separate measurements performed for every group [4,5]. The introduction of source cell groups is very effective, since calls belonging to different groups usually do not have the same movement or mobility patterns. This is especially obvious for a cell covering a highway, where almost all subscribers follow the same path, namely the road; the handover direction probabilities for subscribers travelling in opposite directions on the road differ completely from one another. The handover probability for calls belonging to source cell group x is denoted P_h(x), and the handover direction probability to target cell y for those calls is denoted P_hd(x, y). In formulas (1) and (2), ζ is the number of cells adjacent to the cell for which the measurements are performed, H(x, y) is the number of handover attempts to adjacent cell y made by calls belonging to source cell group x, and D(x) is the number of calls belonging to source cell group x that have departed from the cell, either through a handover or a call termination.

P_h(x) = \frac{\sum_{k=1}^{\zeta} H(x, k)}{D(x)}    (1)

P_{hd}(x, y) = \frac{H(x, y)}{\sum_{k=1}^{\zeta} H(x, k)}    (2)
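As an illustration of estimators (1) and (2), a Python sketch of the per-group counters a base station might keep is given below. The class layout is hypothetical, and the time-window weighting described later in this section is omitted for brevity.

    from collections import defaultdict

    class GroupStats:
        """Per-source-cell-group handover statistics for one cell,
        implementing estimators (1) and (2). Names are illustrative."""

        def __init__(self):
            self.handovers = defaultdict(int)  # H(x, y): attempts to cell y
            self.departures = 0                # D(x): handovers + terminations

        def record_handover(self, target_cell):
            self.handovers[target_cell] += 1
            self.departures += 1

        def record_termination(self):
            self.departures += 1

        def p_h(self):
            """Formula (1): probability that a call in this group hands over."""
            total = sum(self.handovers.values())
            return total / self.departures if self.departures else 0.0

        def p_hd(self, target_cell):
            """Formula (2): probability that a handover goes to target_cell."""
            total = sum(self.handovers.values())
            return self.handovers[target_cell] / total if total else 0.0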
In order to make the proposed scheme sensitive to changes in subscriber behavior, only a limited amount of data is taken into account in the measurements. A data value is considered by the scheme if it is less than t minutes old
or if it belongs to the z most recent data values. These parameters have to be set as a compromise between adaptability and measurement accuracy. To handle more short-term changes in subscriber behavior, recent data are given a larger influence, achieved by weighting the data according to a function with a negative exponential behavior. Most subscribers are stationary while using their mobiles. Consequently, a large portion of the on-going calls is not moving and will accordingly never perform a handover. When predicting subscriber movements, it is useful to distinguish the non-moving calls from the moving calls. By letting the covering base station sample the received signal strength from a mobile and calculate the mean value of the last v samples, a signal strength alteration indicating a changed distance between mobile and base station can be detected. In the proposed scheme, a new call is initially placed in a mobility group for undecided calls. If a call movement is detected, the call is transferred to the moving mobility group and the signal strength measuring stops. After w seconds, calls still belonging to the undecided group are transferred to the non-moving mobility group; in this case, the measuring continues, and if a call movement is eventually detected, the call is transferred to the moving mobility group. Obviously, all calls arriving from adjacent cells are moving, which makes the moving and non-moving mobility groups unnecessary for them. Instead, these calls are divided into low and high mobility groups depending upon their channel holding time in the latest visited cell. The channel holding time is defined as the duration between the time a channel is occupied by a call and the time it is released, either through a call termination or a handover. If the channel holding time of a call is larger/smaller than the mean value of calls from the same source cell group with identical target cell, the call is placed in the low/high mobility group. Separate measurements of P_h and P_hd are performed for all mobility groups belonging to a certain source cell group. From these probabilities and the number of on-going calls in each group, C(k), the number of calls in a cell expected to make a handover attempt to adjacent cell y, G(y), is derived. There exist ζ + 1 different source cell groups: one for every adjacent cell, plus one group for calls with no previous handover. These source cell groups consist of two (low/high) and three (moving/non-moving/undecided) mobility groups respectively. Thus, 2ζ + 3 separate measurements are performed for every cell.

G(y) = \sum_{k=1}^{2\zeta+3} C(k)\, P_h(k)\, P_{hd}(k, y)    (3)
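A sketch of formula (3) in Python, assuming each measurement group is represented by an illustrative (C(k), P_h(k), P_hd(k, .)) tuple; the data layout is an assumption for illustration.

    def handover_expectancy(groups, target_cell):
        """Formula (3): G(y) = sum_k C(k) * P_h(k) * P_hd(k, y) over the
        2*zeta + 3 measurement groups of a cell."""
        return sum(c * p_h * p_hd.get(target_cell, 0.0)
                   for c, p_h, p_hd in groups)

    # Example with two groups and adjacent cell "y1":
    groups = [(12, 0.4, {"y1": 0.5, "y2": 0.5}),    # moving calls
              (30, 0.05, {"y1": 0.3, "y2": 0.7})]   # non-moving calls
    print(handover_expectancy(groups, "y1"))  # 12*0.4*0.5 + 30*0.05*0.3 = 2.85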
G(y) is signaled to the base station covering cell y, where all received handover expectancy numbers are summed up. In formula (4), G_i(y) is the number received from adjacent cell i. The resulting total handover expectancy number gives an indication of the current handover arrival intensity and can therefore be used to determine the number of guard channels to allocate in the cell. Since the handover calls will not all arrive simultaneously, the actual number of guard channels, N_g, is significantly smaller than the total handover expectancy number. In the proposed scheme, N_g is equal to the total handover expectancy number
multiplied by ψ (ψ < 1). Fractional guard channels are used, which means that the number of guard channels in a cell does not have to be an integer.

N_g = \psi \sum_{i=1}^{\zeta} G_i(y)    (4)
In order to provide a guaranteed average QoS to already accepted calls, a requested mean value for the handover dropping probability, P_req, is set for every cell. By letting ψ be time varying, the handover dropping probability can be held at the requested value. Every time an on-going call is dropped due to a handover failure, ψ is multiplied by 1 + κ (0 < κ < 1), which increases the number of allocated guard channels. When (1/P_req) − 1 handover attempts have succeeded, ψ is multiplied by 1 − κ and the counter is reset to zero. κ has to be set as a compromise between a robust scheme (small value) and a scheme sensitive to traffic alterations (large value).
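A minimal Python sketch of this control loop together with formula (4); the class name and default parameter values are assumptions (P_req = 3e-4 matches the value used in the simulations of Sect. 3).

    class GuardChannelController:
        """Adapts psi so the handover dropping probability tracks P_req,
        following the multiplicative rule of Sect. 2. Defaults illustrative."""

        def __init__(self, psi=0.5, kappa=0.05, p_req=3e-4):
            self.psi = psi          # psi < 1, scales the expectancy sum
            self.kappa = kappa      # adaptation step, 0 < kappa < 1
            self.successes_needed = int(round(1.0 / p_req)) - 1
            self.success_count = 0

        def num_guard_channels(self, expectancies):
            """Formula (4): N_g = psi * sum_i G_i(y); may be fractional."""
            return self.psi * sum(expectancies)

        def on_handover_dropped(self):
            self.psi *= 1.0 + self.kappa        # reserve more aggressively

        def on_handover_succeeded(self):
            self.success_count += 1
            if self.success_count >= self.successes_needed:
                self.psi *= 1.0 - self.kappa    # relax the reservation
                self.success_count = 0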
3 Simulation Model and Numerical Results
The simulated network consisted of 100 rectangular-shaped cells covering a rectangular street network. All cells had four adjacent cells and were arranged in a 10*10 ring topology with wrap-around edges. Hence, a cell in the 1st row with coordinates (x,1) was a neighbor of a cell in the 10th row with coordinates (x,10). A Manhattan cell architecture was used, meaning that a base station was placed in every intersection and the cell borders were located midway between adjacent intersections. All cells in the simulated network were identical from a traffic parameter point of view. The number of channels in each cell was set to 50, and the time duration between consecutive new call arrivals and the call lengths were assumed to be exponentially distributed with mean values of 0.13 and 5 minutes, respectively. The traffic parameter values were chosen to obtain realistic simulation settings and a reasonable traffic load. Three kinds of users were simulated: stationary, slow moving and fast moving. 50 percent of the users were stationary, 17 percent slow moving and 33 percent fast moving. The channel holding time distribution was either exponential or rectangular (uniform), with mean values of 1 minute (fast moving) and 5 minutes (slow moving). Six variants of the proposed guard channel schemes, briefly described in Table 1, were investigated. For simplicity, it was assumed that a new call is instantly placed in a moving or non-moving mobility group. This is performed without errors in scheme VI, while 10 percent of the slow moving calls are placed in the non-moving groups in schemes IV and V. In addition, calls with at least one previous handover are placed in a low or high mobility group in schemes V and VI. Three simulation scenarios with different channel holding time distributions and handover direction probabilities were investigated. In the handover direction probability sets shown in Table 2, P1 is the probability for a user to go straight ahead at an intersection, P2 the probability for a right turn, and P3 the probability for a left turn.

Table 1. Investigated guard channel schemes

Scheme  Description
I       Fixed guard channels
II      No source cell or mobility groups
III     Source cell groups
IV      Source cell and moving/non-moving mobility groups, inserted errors
V       Source cell and full mobility groups, inserted errors
VI      Source cell and full mobility groups, no inserted errors

Table 2. Simulation scenarios

Scenario  Distribution  Handover direction probabilities
I         Exponential   P1 = 0.5, P2 = 0.25, P3 = 0.25
II        Exponential   P1 = 0.67, P2 = 0.33, P3 = 0
III       Rectangular   P1 = 0.5, P2 = 0.25, P3 = 0.25
In order to shorten the simulation length, the simulations were performed without the guaranteed average QoS feature. Instead, the handover dropping probabilities were set to 3 × 10^-4 by manual calibration of ψ. This is somewhat unfair towards the fixed guard channel scheme, but that scheme can in any case be discarded for flexibility reasons. The blocking probabilities, Pb, were used to determine the best scheme in each case. In Table 3, 95 percent confidence intervals are given for the blocking probabilities. All schemes were compared to the fixed guard channel scheme; the Gain column, which shows the obtained gain in percent, is calculated from the two closest and the two most distant values of the blocking probability intervals of the respective schemes.

Table 3. Simulation results

          Scenario I             Scenario II            Scenario III
Scheme    Pb (%)       Gain      Pb (%)       Gain      Pb (%)       Gain
I         3.689-3.693  0         3.690-3.694  0         3.898-3.902  0
II        3.646-3.650  1.1-1.3   3.643-3.648  1.1-1.4   3.835-3.839  1.5-1.7
III       3.594-3.598  2.5-2.7   3.541-3.546  3.9-4.1   3.747-3.753  3.7-4.0
IV        3.574-3.579  3.0-3.2   3.520-3.525  4.5-4.7   3.711-3.716  4.7-4.9
V         3.573-3.578  3.0-3.2   3.517-3.522  4.6-4.8   3.678-3.684  5.5-5.7
VI        3.574-3.578  3.0-3.2   3.520-3.524  4.5-4.7   3.675-3.680  5.6-5.8
The gain obtained by dividing the calls into groups depending upon most recent source cell (schemes III-VI) increased with a larger difference in user behavior between calls from different groups, as can be seen by comparing the simulation results of scenarios I and II. The use of non-moving and moving mobility groups (schemes IV-VI) also led to significant improvements. The use of high and low mobility groups (schemes V-VI) was only effective for scenario III. In scenarios I and II, where an exponential distribution was used, a call with a very long channel holding time in the latest visited cell (low mobility) may, purely by chance, have a very short channel holding time in the next cell, and vice versa. Obviously, this reduces the gain obtained from the prediction-oriented mobility group feature. The removal of the inserted error in the mobility classification procedure (scheme VI) did not have a significant impact on the results. In general, a large standard deviation of the channel holding time distribution reduces the gain obtained from prediction-oriented features such as source cell and mobility groups, because of the larger probability of getting a very small sample value [6]. Moreover, if a call makes a handover attempt shortly after arriving in a cell, the allocation of guard channels in the adjacent cells may, due to channel occupancy, not yet have been fully activated.
4 Conclusions
In this paper, an adaptive measurement-based dynamic guard channel scheme was proposed. The scheme uses the number of on-going calls in adjacent cells and their handover probabilities to estimate the handover arrival intensity of every cell. In order to improve the accuracy of the estimations, the calls are divided into measurement groups depending upon most recent source cell and mobility, with separate measurements performed for every group. The channel holding time in the latest visited cell is used to divide the calls into low and high mobility groups, while positioning is used to divide calls with no previous handover into moving and non-moving mobility groups. The proposed scheme was compared to other similar guard channel schemes and showed better results (lower blocking probabilities) for all investigated simulation scenarios. However, the higher complexity of the scheme requires increased communication between base stations and a higher processing load, which has not been taken into account.
References

1. D. Hong and S. S. Rappaport. Traffic Model and Performance Analysis for Cellular Mobile Radio Telephone Systems with Prioritized and Nonprioritized Handoff Procedures. IEEE Transactions on Vehicular Technology, vol. 35, no. 3, pp. 77-92, 1986.
2. K. C. Chua, B. Bensaou, W. Zhuang and S. Y. Choo. Dynamic Channel Reservation (DCR) Scheme for Handoff Prioritization in Mobile Micro/Picocellular Networks. IEEE ICUPC '98, vol. 1, pp. 383-387, 1998.
3. C. Oliveira, J. B. Kim and T. Suda. An Adaptive Bandwidth Reservation Scheme for High-Speed Multimedia Wireless Networks. IEEE Journal on Selected Areas in Communications, vol. 16, pp. 858-874, 1998.
4. C. H. Choi, M. I. Kim, T. J. Kim and S. J. Kim. Adaptive Bandwidth Reservation Mechanism Using Mobility Probability in Mobile Multimedia Computing Environment. IEEE Local Computer Networks 2000, pp. 76-85, 2000.
5. S. Choi and K. G. Shin. Predictive and Adaptive Bandwidth Reservation for Handoffs in QoS-Sensitive Cellular Networks. ACM SIGCOMM '98, pp. 155-166, 1998.
6. R. Zander and J. M. Karlsson. An Adaptive Algorithm for Allocation of Dynamic Guard Channels - Impact of the Channel Holding Time Distribution. Wireless 2001, pp. 300-308, 2001.