Approximation and online algorithms: 6th international workshop, WAOA 2008, Karlsruhe, Germany, September 18-19, 2008: revised papers [1 ed.] 9783540939795, 3540939792

This book constitutes the thoroughly refereed post-workshop proceedings of the 6th International Workshop on Approximation and Online Algorithms, WAOA 2008, held in Karlsruhe, Germany, in September 2008.


Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

5426

Evripidis Bampis Martin Skutella (Eds.)

Approximation and Online Algorithms 6th International Workshop, WAOA 2008 Karlsruhe, Germany, September 18-19, 2008 Revised Papers


Volume Editors

Evripidis Bampis
IBISC, CNRS FRE 3190, Université d'Evry Val d'Essonne
Boulevard François Mitterrand, 91025 Evry Cedex, France
E-mail: [email protected]

Martin Skutella
Technische Universität Berlin
Fakultät II - Mathematik und Naturwissenschaften
Straße des 17. Juni 136, 10623 Berlin, Germany
E-mail: [email protected]

Library of Congress Control Number: Applied for
CR Subject Classification (1998): F.2.2, G.2.1-2, G.1.2, G.1.6, I.3.5, E.1
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
ISSN 0302-9743
ISBN-10 3-540-93979-2 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-93979-5 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2009
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12607129 06/3180 543210

Preface

The 6th Workshop on Approximation and Online Algorithms (WAOA 2008) focused on the design and analysis of algorithms for online and computationally hard problems. Both kinds of problems have a large number of applications from a variety of fields. WAOA 2008 took place in Karlsruhe, Germany, during September 18–19, 2008. The workshop was part of the ALGO 2008 event that also hosted ESA 2008, WABI 2008, and ATMOS 2008. The previous WAOA workshops were held in Budapest (2003), Rome (2004), Palma de Mallorca (2005), Zurich (2006), and Eilat (2007). The proceedings of these previous WAOA workshops appeared as LNCS volumes 2909, 3351, 3879, 4368, and 4927, respectively.

Topics of interest for WAOA 2008 were: algorithmic game theory, approximation classes, coloring and partitioning, competitive analysis, computational finance, cuts and connectivity, geometric problems, inapproximability results, mechanism design, network design, packing and covering, paradigms for design and analysis of approximation and online algorithms, randomization techniques, real-world applications, and scheduling problems.

In response to the call for papers, we received 56 submissions. Each submission was reviewed by at least three referees, and the vast majority by at least four referees. The submissions were mainly judged on originality, technical quality, and relevance to the topics of the conference. Based on the reviews, the Program Committee selected 22 papers.

We are grateful to Andrei Voronkov for providing the EasyChair conference system, which was used to manage the electronic submissions, the review process, and the electronic PC meeting. It made our task much easier. We would also like to thank all the authors who submitted papers to WAOA 2008 as well as the local organizers of ALGO 2008.

November 2008

Evripidis Bampis Martin Skutella

Organization

Program Co-chairs

Evripidis Bampis, University of Evry
Martin Skutella, TU Berlin

Program Committee

Yossi Azar, Microsoft Research and Tel-Aviv University
Philippe Baptiste, CNRS, Ecole Polytechnique
Thomas Erlebach, University of Leicester
Klaus Jansen, University of Kiel
Christos Kaklamanis, University of Patras
Amit Kumar, IIT Delhi
Stefano Leonardi, Sapienza University of Rome
Aris Pagourtzis, National Technical University of Athens
Paolo Penna, University of Salerno
Roberto Solis-Oba, University of Western Ontario
Maxim Sviridenko, IBM T. J. Watson Research Center
Marc Uetz, University of Twente

Referees

Christoph Ambühl, Aris Anagnostopoulos, Eric Angel, Spyridon Antonakopoulos, Arash Asadpour, Vincenzo Auletta, Amitabha Bagchi, Evangelos Bampas, Nikhil Bansal, Luca Becchetti, Boaz Ben-Moshe, Andre Berger, Benjamin Birnbaum, Vincenzo Bonifaci, Ioannis Caragiannis, Panagiotis Cheilaris, Marek Chrobak, Miriam Di Ianni, Ilias Diakonikolas, Florian Diedrich, Daniel Dressler, Christoph Dürr, Tomas Ebenlendr, Khaled Elbassioni, Matthias Englert, Leah Epstein, Bruno Escoffier, William Evans, Eyal Even-Dar, Esteban Feuerstein, Dimitris Fotakis, Iftah Gamzu, Fabrizio Grandoni, Alexander Grigoriev, Birgit Heydenreich, Johann Hurink, Sandy Irani, Vincent Jost, Panagiotis Kanellopoulos, Georgia Kaouri, Hans Kellerer, Alex Kesselman, Rohit Khandekar, Tracy Kimbrel, Stavros Kolliopoulos, Alexander Kononov, Annamaria Kovacs, Christian Laforest, Michael Lampis, Lap Chi Lau, Retsef Levi, Konstantin Makarychev, Yury Makarychev, Euripides Markou, Nicole Megow, Ioannis Milis, Joe Mitchell, Jerome Monnot, Luca Moscardelli, Rudolf Müller, Vishnawath Nagarajan, Alantha Newman, Christos Nomikos, Arindam Pal, Evi Papaioannou, Vangelis Paschos, Fanny Pascual, Jacob Jan Paulus, Britta Peis, George Pierrakos, Katerina Potika, David Pritchard, Ann Robert, Thomas Roeblitz, Heiko Roeglin, Adi Rosen, Gianluca Rossi, George Rouskas, Stefan Schmid, Christiane Schmidt, Andreas S. Schulz, Ulrich Schwarz, Jay Sethuraman, Jiri Sgall, Riccardo Silvestri, Mohit Singh, Rene Sitters, Alexander Souza, David Steurer, Nicolás Stier Moses, Orestis Telelis, Kavitha Telikepalli, Aristeidis Tentes, Ralf Thöle, Carmine Ventre, Anastasios Viglas, Tjark Vredeveld, Matthias Westermann, Andreas Wiese, Gerhard Woeginger, Alexander Wolff, Prudence W. H. Wong, Guochuan Zhang, Vassilis Zissimopoulos

Table of Contents

WAOA 2008

Max-Weight Integral Multicommodity Flow in Spiders and High-Capacity Trees
Jochen Könemann, Ojas Parekh, and David Pritchard ..... 1

Size Versus Stability in the Marriage Problem
Péter Biró, David F. Manlove, and Shubham Mittal ..... 15

Degree-Constrained Subgraph Problems: Hardness and Approximation Results
Omid Amini, David Peleg, Stéphane Pérennes, Ignasi Sau, and Saket Saurabh ..... 29

A Lower Bound for Scheduling of Unit Jobs with Immediate Decision on Parallel Machines
Tomáš Ebenlendr and Jiří Sgall ..... 43

Improved Randomized Online Scheduling of Unit Length Intervals and Jobs
Stanley P.Y. Fung, Chung Keung Poon, and Feifeng Zheng ..... 53

Minimizing Average Flow Time on Unrelated Machines
René A. Sitters ..... 67

Cooperation in Multiorganization Matching
Laurent Gourvès, Jérôme Monnot, and Fanny Pascual ..... 78

Randomized Algorithms for Buffer Management with 2-Bounded Delay
Marcin Bienkowski, Marek Chrobak, and Łukasz Jeż ..... 92

A General Scheme for Designing Monotone Algorithms for Scheduling Problems with Precedence Constraints
Clemens Thielen and Sven O. Krumke ..... 105

Malicious Bayesian Congestion Games
Martin Gairing ..... 119

Stackelberg Strategies and Collusion in Network Games with Splittable Flow
Tobias Harks ..... 133

Peak Shaving through Resource Buffering
Amotz Bar-Noy, Matthew P. Johnson, and Ou Liu ..... 147

On Lagrangian Relaxation and Subset Selection Problems
Ariel Kulik and Hadas Shachnai ..... 160

Approximation Algorithms for Prize-Collecting Network Design Problems with General Connectivity Requirements
Chandrashekhar Nagarajan, Yogeshwer Sharma, and David P. Williamson ..... 174

Caching Content under Digital Rights Management
Leah Epstein, Amos Fiat, and Meital Levy ..... 188

Reoptimization of Weighted Graph and Covering Problems
Davide Bilò, Peter Widmayer, and Anna Zych ..... 201

Smoothing Imprecise 1.5D Terrains
Chris Gray, Maarten Löffler, and Rodrigo I. Silveira ..... 214

Local PTAS for Dominating and Connected Dominating Set in Location Aware Unit Disk Graphs
Andreas Wiese and Evangelos Kranakis ..... 227

Dynamic Offline Conflict-Free Coloring for Unit Disks
Joseph Wun-Tat Chan, Francis Y.L. Chin, Xiangyu Hong, and Hing Fung Ting ..... 241

Experimental Analysis of Scheduling Algorithms for Aggregated Links
Wojciech Jawor, Marek Chrobak, and Mart Molle ..... 253

A (2 − c·(log n)/n)-Approximation Algorithm for the Minimum Maximal Matching Problem
Zvi Gotthilf, Moshe Lewenstein, and Elad Rainshmidt ..... 267

On the Maximum Edge Coloring Problem
Giorgio Lucarelli, Ioannis Milis, and Vangelis Th. Paschos ..... 279

Author Index ..... 293

Max-Weight Integral Multicommodity Flow in Spiders and High-Capacity Trees

Jochen Könemann¹, Ojas Parekh², and David Pritchard¹

¹ University of Waterloo, Dept. of Combinatorics & Optimization, 200 University Avenue West, Waterloo, Ontario, Canada, N2L 3G1. Fax: 519-725-5441
[email protected], [email protected]
² Emory University, Math/CS Department, 400 Dowman Dr., Suite 401, Atlanta, GA, USA, 30322. Fax: 404-727-5611
[email protected]

Abstract. We consider the max-weight integral multicommodity flow problem in trees. In this problem we are given an edge-capacitated tree and weighted pairs of terminals, and the objective is to find a max-weight integral flow between terminal pairs subject to the capacities. This problem was shown to be APX-hard by Garg, Vazirani and Yannakakis [Algorithmica, 1997], and a 4-approximation was given by Chekuri, Mydlarz and Shepherd [ACM Trans. Alg., 2007]. Some special cases are known to be exactly solvable in polynomial time, including when the graph is a path or a star. First, when every edge has capacity at least μ ≥ 2, we use iterated LP relaxation to obtain an improved approximation ratio of min{3, 1 + 4/μ + 6/(μ² − μ)}. We show this ratio bounds the integrality gap of the natural LP relaxation. A complementary hardness result yields a 1 + Θ(1/μ) threshold of approximability (if P ≠ NP). Second, we extend the range of instances for which exact solutions can be found efficiently. When the tree is a spider (i.e. if only one vertex has degree greater than 2) we give a polynomial-time algorithm to find an optimal solution, as well as a polyhedral description of the convex hull of all integral feasible solutions.

1 Introduction

In the max-weight integral multicommodity flow problem (WMCF), we are given an undirected supply graph G = (V, E), terminal pairs (s1, t1), . . . , (sk, tk) where si, ti ∈ V, non-negative weights w1, . . . , wk, and non-negative integral edge capacities ce for all e ∈ E. The goal is to simultaneously route integral si–ti flows of value yi subject to the capacities, so as to maximize the total weight Σi wi yi. The single-commodity version (k = 1) of WMCF is well known to be solvable in polynomial time. If we drop the integrality restriction the problem can be solved in polynomial time via linear programming for any k. However, when integrality is required, even the 2-commodity unit-capacity, unit-weight version


is NP-complete — see Even, Itai, and Shamir [12]. Moreover, recent results of Andrews et al. [1] show that WMCF is hard to approximate to within log^{1/2−ε}(n) unless NP ⊆ ZPTIME(n^{polylog(n)}), even with unit capacities and weights. On the other hand, randomized LP rounding [22] yields an O(log n / log log n)-approximation.

An easier and significant special case is where the supply graph G is a tree, which we denote by WMCFT. Garg, Vazirani and Yannakakis [15] considered the unit-weight case and showed APX-hardness even if G's height is at most 3 and all capacities are 1 or 2; but on the positive side, they gave a 2-approximate polynomial-time primal-dual algorithm. Techniques of Garg et al. show that WMCFT can be solved in polynomial time when G has unit capacity (using dynamic programming and matching) or is a star (this problem is essentially equivalent to b-matching). The case where G is a path (so-called interval packing) is also polynomial-time solvable [4,6,17], e.g. by linear programming, since the natural LP has a totally unimodular constraint matrix. For general WMCFT, without restrictions on capacities or weights, Chekuri, Mydlarz and Shepherd [6] gave a 4-approximation algorithm, and this remains the best ratio known to date.

Results. Throughout the paper, we use μ to denote the minimum capacity of any edge in the WMCFT instance. In the first part of the paper we use iterated rounding/relaxation to develop improved approximation ratios for WMCFT when μ is suitably large. Iterated relaxation yields an integral solution with optimal value or better but exceeding edge capacities by up to 2 (additively). This resolves a problem stated in Chekuri et al. [6]; in their words, we prove "the c-relaxed integrality gap is 1" for c = 2, whereas they could not prove it for any constant c.¹ We remark that our iterated rounding does not require uncrossing.

When the minimum capacity μ is Ω(log |V|), randomized rounding gives a (1 + O(log|V|/μ))-approximation for WMCFT. This suggests that the problem is easier to approximate as the minimum capacity increases. This was anticipated by Chekuri et al. [6], and indeed, by plugging our iterated rounding result into [6, Cor. 3.5] we get a (1 + O(1/√μ))-approximation for WMCFT when μ > 36. In this paper, our first main result is an improvement on the best ratio for all μ ≥ 2.

Theorem 1. For WMCFT, there are polynomial-time algorithms achieving (a) approximation ratio 3 for μ ≥ 2, and (b) approximation ratio 1 + 4/μ + 6/(μ² − μ) for general μ.

A slight modification of Garg et al.'s hardness proof shows that for some ε > 0, for all μ ≥ 2, it is NP-hard to approximate WMCFT within a ratio of 1 + ε/μ; we detail this modification in Section 2.4. Thus (if P ≠ NP) the ratio in Theorem 1(b) is tight up to the constant in the 1/μ term. Our methodology for Theorem 1 is to decrease the additively-violating solution towards feasibility, without losing too much weight. Part (a) uses an

¹ To say that the c-relaxed integrality gap is 1 for an LP means that, where OPT is the LP's optimal value, there is an integral solution of value at least OPT but violating each constraint by up to +c. E.g. when c = 0 this means the LP has an integral optimum.


argument of Cheriyan, Jordán and Ravi [7]. Part (b) relies on an auxiliary covering problem; every feasible cover, when subtracted from the +2-violating WMCFT solution, results in a feasible WMCFT solution. An approach due to Jain [18] shows that iterated LP rounding, applied to the auxiliary problem, leads to a provably low-weight integral solution for the covering problem. A crucial fact in obtaining Theorem 1(b) is that the approximation guarantee of Jain's approach is relative to the optimal fractional solution of the natural LP.

Our second result maps out more of the landscape of "easy" and "hard" WMCFT instances. An all-ror instance is one in which, for some choice of root vertex, each commodity path either goes through the root or is radial. (A path is radial if one of its endpoints is an ancestor of the other with respect to the root.) For example, every instance of WMCFT in which G is a spider is an all-ror instance.

Theorem 2. All-ror WMCFT instances can be solved in strongly polynomial time.

One way to view this result is as an efficient solution for a common generalization of b-matching and interval packing. Our proof of Theorem 2 is via a combinatorial reduction to bidirected flow [9]. This reduction also yields a polyhedral characterization of the feasible solutions for all-ror instances.

We remark that our methodology can be generalized. Generalizing Theorem 1, for flows/covers of undirected/bidirected/vertex-capacitated trees, we can get constant-approximation algorithms, as well as (1 + O(1/μ))-approximation algorithms, where μ is the minimum capacity/requirement; for all these settings (1 + Ω(1/μ))-hardness can be shown. Similarly, Theorem 2 and its polyhedral counterpart generalize from flows to covers.

Related Work. WMCFT appears in the literature under a variety of names, including cross-free-cut matching [15] in the unit-capacity case and packing of a laminar family [7]. One generalization is the demand version [6], in which each commodity i is given a requirement ri and we require yi ∈ {0, ri} for each feasible solution.

The word bidirected has two meanings in the literature. We discuss bidirected flows later. In contrast, a bidirected tree is obtained from an undirected tree by replacing each edge by two antiparallel directed edges. WMCF extends naturally to directed graphs (with directed demands). A slight modification of Garg et al.'s hardness proof shows that WMCF on bidirected trees is APX-hard even for unit capacities and weights. On the other hand, WMCF on bidirected trees admits a (5/3 + ε)-approximation for unit capacities [10] and a 4-approximation [6] for general capacities. On bidirected trees obtained from paths, stars, and spiders the problem can be solved in polynomial time, e.g., by reduction to a max-weight circulation problem; see also [11].

The WMCF problem on unit-capacity trees is equivalent to the weighted edge-disjoint paths (WEDP) problem. The polylog-hardness result on general graphs [1] applies to WEDP even with unit weights; however, for fixed k, WEDP with at


most k commodities is polynomial-time solvable. See e.g. [23, §70.5] for further discussion.

The extreme points of the natural LP for WMCFT arise frequently in the literature of LP-based network design [6,7,10,13,14,16,18,20,25]. From this perspective, WMCFT is a natural starting point for an investigation of how large capacities/requirements affect the difficulty of weighted network design problems. We remark that the min-cardinality k-edge-connected spanning subgraph problem has a similar history to WMCFT: a (1 + 2/(k + 1))-approximation was given by Cheriyan & Thurimella [8], (1 + ε/k)-hardness of approximation was shown by Gabow et al. [14], and the best current algorithm for the problem, due to Gabow & Gallagher [13], uses iterated rounding.

Organization of the Paper. Section 1.1 contains some basic definitions and notation. In Section 2 we give the proof of Theorem 1 and the complementary hardness result. In Section 3 we provide a proof of Theorem 2. In Section 3.1 we state our polyhedral results. Finally, we suggest some directions for future work in Section 4. Some proofs are omitted for reasons of space; they can be obtained in the full version from the authors' webpages.

1.1 Formulation

We define the commodities by a set of demand edges D = {{s1, t1}, . . . , {sk, tk}} on vertex set V with a weight wd assigned to each demand edge d ∈ D; this is without loss of generality since the supply graph and demand edges are undirected. Since we discuss WMCF only on undirected trees, each commodity has a unique path along which flow is sent. For each demand edge d, let its demand path pd be the unique path in G joining the endpoints of d. We thus may represent a multicommodity flow by a vector {yd}_{d∈D}, where yd is the amount of commodity d that is routed (along pd). Then a flow y is feasible if it meets the two conditions

    yd ≥ 0                     ∀d ∈ D    (1)
    Σ_{d: e∈pd} yd ≤ ce        ∀e ∈ E    (2)

The objective of WMCFT is to find a feasible integral y that maximizes w · y.
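For concreteness, the LP given by (1)–(2) with objective w · y can be written down directly with an off-the-shelf solver. The following is a minimal sketch, not part of the paper: the instance encoding and all function and parameter names (solve_p0, path_edges, etc.) are ours, and we assume the PuLP library is available.

    # Minimal sketch (ours): the natural LP P0 for WMCFT, modelled with PuLP.
    # demands is a list of demand edges d; path_edges[d] is the set of tree
    # edges on the unique demand path p_d.
    import pulp

    def solve_p0(edges, capacity, demands, weight, path_edges):
        lp = pulp.LpProblem("P0", pulp.LpMaximize)
        y = {d: pulp.LpVariable("y_%d" % i, lowBound=0)      # constraints (1)
             for i, d in enumerate(demands)}
        lp += pulp.lpSum(weight[d] * y[d] for d in demands)  # maximize w . y
        for e in edges:                                      # constraints (2)
            lp += pulp.lpSum(y[d] for d in demands
                             if e in path_edges[d]) <= capacity[e]
        lp.solve(pulp.PULP_CBC_CMD(msg=False))
        return {d: y[d].value() for d in demands}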

2 Improved Approximation

In this section we obtain a min{3, 1 + 4/μ + 6/(μ² − μ)}-approximation algorithm for WMCFT, assuming ce ≥ μ ≥ 2 for each edge e. The algorithm uses the iterated rounding paradigm, which was first used by Jain [18] and more recently by Lau et al. [20] and others [3,13,25] for network design. The idea is that in each iteration, we use an extreme fractional optimum y* of the natural LP to develop an integral solution. If some demand edge d has value 1 or greater in y*,


we route the integer part and decrease the capacity of edges on pd accordingly. If 0 < y* < 1 we perform a relaxation step; a counting argument (Section 2.3) guarantees that a particular relaxation can always be performed. At the end, we obtain an integral solution which exceeds the capacity of each edge by at most 2, and which has weight at least as large as that of an optimal feasible solution. In Sections 2.1 and 2.2 we show how to compute high-weight feasible solutions from this +2-violating solution. In Section 2.4 we show it is NP-hard to (1 + ε/μ)-approximate WMCFT, for any fixed μ ≥ 2, and for some ε independent of μ.

The natural LP for WMCFT, which we denote by P0, is as follows:

    (P0):  maximize w · y subject to (1) and (2).

This program has a linear number of variables and constraints, and thus can be solved in polynomial time. Any integral vector y is feasible for P0 iff it is feasible for the WMCFT instance. However, the linear program has fractional extreme points in general, and thus solving the LP does not give us the type of solution we seek. Nonetheless, optimal solutions to the LP have certain properties that permit an iterated rounding approach, such as the following.

Lemma 3. Let y* be an optimal solution to P0, define OPT = w · y*, and suppose y*_d ≥ t for some d ∈ D and some integer t ≥ 1. Reduce the capacity of each edge e ∈ pd by t and let OPT′ denote the new optimal value of P0. Then OPT′ = OPT − t·wd.

Proof. Let z denote the vector such that zd = t and zd′ = 0 for each d′ ≠ d. Then it is easy to see that y* − z is feasible for the new LP, and hence OPT′ ≥ w · (y* − z) = OPT − t·wd. On the other hand, where y′ denotes the optimal solution to the new LP, it is easy to see that y′ + z is feasible for the original LP; so OPT ≥ OPT′ + t·wd. Combining these inequalities, we are done. □

From now on, let OPT denote the optimal value of P0. In general terms, our iterated rounding approach works on the following principles. Define the following restricted version of P0, which we denote P̄0:

    (P̄0):  maximize w · y subject to (1) and (2) and ∀d ∈ D : yd ≤ 1.

Assume for the moment that P̄0 also has optimal value OPT. We iteratively build an integral solution to P0 with value at least equal to OPT. The first step in each iteration is to solve P̄0, obtaining solution y*. If y*_d = 0 for some demand edge d, then we can discard d without affecting the optimal value of P̄0. If y*_d = 1 for some d, then we can route one unit of flow along pd and update capacities accordingly. Similar to Lemma 3, the optimal LP value will drop by an amount equal to the weight of the flow that was routed. If neither of these cases applies, we use the following lemma, whose proof appears in Section 2.3.

Lemma 4. Suppose that y* is an extreme point solution to P̄0, and that 0 < y*_d < 1 for each demand edge d ∈ D. Then there is an edge e ∈ E so that |{d ∈ D : e ∈ pd}| ≤ 3.

IteratedSolver

1. Set y = 0
2. If D = ∅, terminate and return y
3. Let y* be an optimal extreme point solution to P̄0
4. For each d such that y*_d = 0, discard d
5. For each d such that y*_d = 1, increase yd by 1, decrease ce by 1 for each e ∈ pd, and discard d
6. If neither step 4 nor 5 applied, find e as specified by Lemma 4 and contract e
7. Go to step 2

Fig. 1. The iterated rounding algorithm
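The loop can be sketched in a few lines of Python; this is our own illustration, not the authors' implementation. We assume a hypothetical LP oracle solve_extreme that returns an extreme point optimum of P̄0 restricted to the still-active demand edges, with the current residual capacities and with contracted edges' constraints dropped.

    # Sketch (ours) of IteratedSolver from Fig. 1. `solve_extreme` is an
    # assumed oracle returning an extreme point optimum of the restricted LP;
    # exact rational LP values are assumed, so equality tests are exact.
    def iterated_solver(demands, capacity, path_edges, solve_extreme):
        y = {d: 0 for d in demands}
        active = set(demands)       # demand edges still in the LP
        contracted = set()          # tree edges whose constraint (2) was dropped
        while active:
            y_star = solve_extreme(active, capacity, contracted)
            done = set()
            for d in list(active):
                if y_star[d] == 0:                 # step 4: discard d
                    done.add(d)
                elif y_star[d] == 1:               # step 5: route one unit
                    y[d] += 1
                    for e in path_edges[d]:        # mutates caller's capacities
                        capacity[e] -= 1
                    done.add(d)
            if done:
                active -= done
            else:                                  # step 6: Lemma 4 edge exists
                candidates = set().union(*(path_edges[d] for d in active))
                e = next(e for e in candidates if e not in contracted and
                         sum(1 for d in active if e in path_edges[d]) <= 3)
                contracted.add(e)
        return y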

Our algorithm discards the capacity constraint (2) for e from our LP. We call this contracting e because the effect is the same as if we had merged the two endpoints of e in the tree G. Pseudocode for our algorithm is given in Figure 1.

To justify our assumption that P0 and P̄0 have the same optimal value, we preprocess the problem as follows. First, we compute any optimal solution y* to P0. Then we route the integer part ⌊y*⌋ of the solution (i.e., we initialize y = ⌊y*⌋) and reduce each capacity ce by Σ_{d: e∈pd} ⌊y*_d⌋. The residual problem has ỹ* := y* − ⌊y*⌋ as an optimal solution, and since 0 ≤ ỹ* ≤ 1, our assumption is justified.

Assuming Lemma 4, we now prove the main properties of our iterated rounding algorithm: it runs in polynomial time, it exceeds each capacity by at most 2, and it produces a solution of value at least OPT.

Property 1. IteratedSolver runs in polynomial time.

Proof. Recall that P0 and P̄0 can be solved in polynomial time. In each iteration we decrease |D| + |E|, so polynomially many iterations occur, and the result follows. □

Property 2. The integral solution computed by IteratedSolver violates each capacity constraint (2) by at most 2.

Proof. Consider what happens to any given edge e during the execution of the algorithm. In the preprocessing and in each iteration, the flow routed through e equals the decrease in its residual capacity. If in some iteration e's residual capacity is decreased to 0, all demand paths through e will be discarded in the following iteration. Thus if e is not contracted, its capacity constraint (2) will be satisfied by the final solution. The other case is that we contract e in step 6 of some iteration because e lies on at most 3 demand paths. The residual capacity of e is at least 1, and at most one unit of flow will be routed along each of these 3 demand paths in future iterations. Hence the final solution violates (2) for e by at most +2. □

Property 3. The integral solution computed by IteratedSolver has objective value at least equal to OPT.


Proof. When we contract an edge e we just remove a constraint from P̄0, which cannot decrease the optimal value of P̄0 since it is a maximization LP. In every other iteration and in preprocessing, Lemma 3 implies that the LP optimal value drops by an amount equal to the increase in w · y. When termination occurs, the optimal value of P̄0 is 0. Thus the overall weight of flow routed must be at least as large as the initial value of OPT. □

2.1 Minimum Capacity μ = 2

As per Property 2, our iterated solver may exceed some of the edge capacities. When ce ≥ μ ≥ 2 for each edge e we can invoke the following theorem, which appears as Thm. 6 in [7], to produce a high-weight feasible solution.

Theorem 5 (Cheriyan, Jordán, Ravi). Suppose that y is a nonnegative integral vector so that for each edge e, the constraint (2) is violated by at most a multiplicative factor of 2 by y. Then in polynomial time, we can find an integral vector y′ with w · y′ ≥ (w · y)/3, and 0 ≤ y′ ≤ y, and such that y′ satisfies all constraints (2).

The algorithm as literally described in [7] is actually pseudo-polynomial, but it is straightforward to modify it to have polynomial running time; the proof appears in the full version. Assuming this fact, we now prove part (a) of Theorem 1.

Proof (of Theorem 1(a)). Let y be the output of IteratedSolver. Since ce ≥ 2 for each edge e, and since by Property 2 each edge's capacity is additively violated by at most +2, Theorem 5 applies. Thus y′ is a feasible solution to the WMCFT instance with objective value w · y′ ≥ (w · y)/3 ≥ OPT/3, using Property 3. Finally, since P0 is an LP-relaxation of the WMCFT problem, OPT is at least equal to the optimal WMCFT value, and so y′ is a 3-approximate feasible integral solution. □

2.2 Arbitrary Minimum Capacity

Given the infeasible solution y produced by IteratedSolver, we want to reduce y in a minimum-weight way so as to attain feasibility. For each edge e let fe = max{0, Σ_{d: e∈pd} yd − ce}, i.e. fe is the amount by which y violates the capacity of e. Note now that a reduction z with 0 ≤ z ≤ y makes y − z a feasible (integral) WMCFT solution if and only if z is a feasible (integral) solution to the following linear program.

    (Pc):  minimize w · z subject to 0 ≤ z ≤ y and ∀e ∈ E : Σ_{d: e∈pd} zd ≥ fe.

Notice that Pc is a covering analogue of P0 (with added upper bounds). Furthermore, the integer program Pc can be 2-approximately solved using Jain’s iterated rounding framework [18].
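A sketch of how Pc can be assembled from the violating solution y, reusing the hypothetical names of the earlier snippets (again ours, and only the LP; the iterated rounding of Theorem 6 below would be layered on top):

    # Sketch (ours): the covering LP (Pc) built from the +2-violating y.
    import pulp

    def solve_pc(edges, capacity, demands, weight, path_edges, y):
        f = {e: max(0, sum(y[d] for d in demands if e in path_edges[d])
                       - capacity[e]) for e in edges}        # violation f_e
        lp = pulp.LpProblem("Pc", pulp.LpMinimize)
        z = {d: pulp.LpVariable("z_%d" % i, lowBound=0, upBound=y[d])
             for i, d in enumerate(demands)}                 # 0 <= z <= y
        lp += pulp.lpSum(weight[d] * z[d] for d in demands)  # minimize w . z
        for e in edges:
            if f[e] > 0:
                lp += pulp.lpSum(z[d] for d in demands
                                 if e in path_edges[d]) >= f[e]
        lp.solve(pulp.PULP_CBC_CMD(msg=False))
        return {d: z[d].value() for d in demands}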


Theorem 6. There is a polynomial-time algorithm which returns an integral feasible solution z for Pc such that w · z is at most twice the LP optimal value of Pc.

Proof. The idea is very similar to the main result of [18] but simpler in that no uncrossing is needed, because we already have a tree structure. Hence we only sketch the details. In each iteration, we obtain an extreme point optimal solution z* to the linear program Pc. We increase z by the integer part of z* and accordingly decrease the requirements f. If z*_d = 0, d is discarded. Finally, if 0 < z* < 1, a token redistribution argument of Jain shows that some d* ∈ D has z*_{d*} ≥ 1/2. In this case we increase z_{d*} by 1 and update the requirements accordingly. Standard arguments then give the claimed bound on the cost of z and polynomial running time. □

Here is how we use Theorem 6 to approximate WMCFT instances on trees.

Proof (of Theorem 1(b)). Notice that z = (2/(μ+2))·y is a feasible fractional solution to Pc. Hence, the optimal value of Pc is at most (2/(μ+2))·(y · w). Thus the solution z produced by Theorem 6 satisfies z · w ≤ (4/(μ+2))·(y · w), so y − z is a feasible solution to the WMCFT problem, with w · (y − z) ≥ (1 − 4/(μ+2))·(y · w) ≥ (1 − 4/(μ+2))·OPT. This gives us a 1/(1 − 4/(μ+2)) = 1 + 4/μ + O(1/μ²) approximation algorithm for WMCFT.

To obtain the exact bound claimed in Theorem 1(b), we refine this slightly by taking a two-round approach. In the first round we set fe to be the characteristic vector of those edges whose capacity y violates by +2, obtaining y′ := y − z. Then y′ has only +1 additive violation, and the same reasoning as before shows y′ · w ≥ (1 − 2/(μ+2))·OPT. The second round analogously extracts from y′ a feasible solution with weight at least (1 − 2/(μ+1))·(y′ · w). This gives approximation ratio 1/((1 − 2/(μ+2))(1 − 2/(μ+1))) = 1 + 4/μ + 6/(μ² − μ), as desired. □
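As a check on the arithmetic in the last step (this derivation is ours, not in the paper):

    \[
    \frac{1}{\left(1-\frac{2}{\mu+2}\right)\left(1-\frac{2}{\mu+1}\right)}
      = \frac{(\mu+2)(\mu+1)}{\mu(\mu-1)}
      = \frac{\mu^{2}+3\mu+2}{\mu^{2}-\mu}
      = 1+\frac{4\mu+2}{\mu^{2}-\mu}
      = 1+\frac{4}{\mu}+\frac{6}{\mu^{2}-\mu}.
    \]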

2.3 Proof of Lemma 4

First, we need the following simple counting argument.

Lemma 7. Let T be a tree with n vertices and let ni denote the number of its vertices that have degree i. Then n1 > (n − n2)/2.

Proof. Using the handshake lemma and the fact that T has n − 1 edges, we have 2(n − 1) = Σ_i i·ni. But Σ_i i·ni ≥ n1 + 2n2 + 3(n − n1 − n2) = 3n − 2n1 − n2, and hence 2n − 2 ≥ 3n − 2n1 − n2. Solving for n1 gives n1 ≥ (n − n2 + 2)/2, as needed. □

Proof (of Lemma 4). Using basic facts from polyhedral combinatorics, it follows that there exists a set E* ⊂ E of edges with |E*| = |D| such that y* is the unique solution to

    Σ_{d∈D: e∈pd} yd = ce    ∀e ∈ E*.    (3)

In particular, the characteristic vectors of the sets {d : e ∈ pd} for e ∈ E* are linearly independent.


Contract each edge of E∖E* in (V, E), resulting in the tree T′ = (V′, E*); call elements of V′ nodes. We now use a counting argument to establish the existence of the desired edge e within E*. We call the two ends of each d ∈ D endpoints, and say that node v′ ∈ V′ owns k endpoints when the degree of v′ in (V′, D) is k.

First, consider any node v′ ∈ V′ that has degree 2 in T′; let e1, e2 be its incident edges in T′. If v′ owns no endpoints then {d : e1 ∈ pd} = {d : e2 ∈ pd}, contradicting linear independence. If v′ owns exactly one endpoint, the symmetric difference {d : e1 ∈ pd} △ {d : e2 ∈ pd} consists of a single demand edge; but since y* satisfies (3), 0 < y* < 1, and c is integral, this is a contradiction. Hence v′ owns two or more endpoints.

If there exists a leaf node v′ of T′ that owns at most 3 endpoints then we are done, since this implies that the edge of E* incident to v′, viewed in the original graph, lies on at most 3 demand paths. Otherwise, we apply a counting argument to T′, seeking a contradiction. Let ni denote the number of nodes of T′ of degree i. Then our previous arguments establish that the total number of endpoints is at least 4n1 + 2n2. Lemma 7 then shows that the total number of endpoints is more than 2(|V′| − n2) + 2n2 = 2|V′| > 2|E*| = 2|D|. This is the desired contradiction, since there are only 2|D| endpoints in total. □

We remark that Lemma 4 is tight in the following sense: if we replace the bound |{d ∈ D : e ∈ pd}| ≤ 3 with |{d ∈ D : e ∈ pd}| ≤ 2, the resulting statement is false. An example of an extreme point solution for which the modified version fails, due to Cheriyan et al. [7], is given in Figure 2.
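As an aside, Lemma 7 is easy to sanity-check empirically; the snippet below (ours, purely illustrative) verifies the bound n1 > (n − n2)/2 on random trees.

    # Sketch (ours): empirical check of Lemma 7 on random recursive trees.
    import random
    from collections import Counter

    def check_lemma7(n, trials=100):
        for _ in range(trials):
            deg = Counter()
            for v in range(1, n):       # attach vertex v to a random earlier one
                u = random.randrange(v)
                deg[u] += 1
                deg[v] += 1
            by_degree = Counter(deg.values())
            n1, n2 = by_degree[1], by_degree[2]
            assert n1 > (n - n2) / 2    # Lemma 7
        return True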

2.4 Hardness of MCFT with Constant Lower Bounds

We use MCFT to stand for the special case of WMCFT where all weights are 1.

Theorem 8. For some ε > 0, for all integers μ ≥ 2, it is NP-hard to approximate MCFT to a factor of less than 1 + ε/μ, even when restricted to instances where all capacities are at least μ.

Proof. The proof appears in the full version. □

Fig. 2. An extreme point solution to P0 . There are 9 edges in the supply graph, shown as thick lines; each has capacity 1. There are 9 demand edges, shown as thin lines; the solid ones have value 1/2, and the dashed ones have value 1/4. This is a tight example for Lemma 4 because each edge lies on at least three demand paths.

3 Exact Solution for Spiders

In this section we show that WMCFT can be exactly solved in polynomial time when the supply graph is a spider. (A spider is a tree with exactly one vertex of degree greater than 2.) Call the vertex of degree ≥ 3 the root of the spider. Call each maximal path having the root as an endpoint a leg of the spider. Observe that in WMCFT when (V, E) is a spider, each demand path pd either goes through the root, or else lies within a single leg. We generalize this observation into the following definition.

Definition 9. Consider an instance of WMCFT on graph (V, E). With respect to a chosen root vertex r ∈ V, a demand edge d is said to be root-using if r is an internal vertex of pd, and radial if one endpoint of d is a descendant of the other, with respect to the orientation of (V, E) induced by the root r. The instance is all-ror (short for "all root-using or radial") if there exists a choice of r ∈ V for which every demand edge is either root-using or radial.

Instances with only radial demand paths can be exactly solved via the LP P0, since the constraint matrix is totally unimodular. Instances with only root-using demand paths can be solved using a matching approach; see e.g. the work of Nguyen [21]. To solve all-ror instances in general we use bidirected flows, which were introduced by Edmonds and Johnson [9]. Bidirected flow problems can be solved via a combinatorial reduction to b-matching (e.g., see [23]) which increases the instance size by a constant factor. We present in this section a reduction from all-ror WMCFT to bidirected flow.

A bidirected graph is an undirected graph together with, for each edge e and each of its endpoints v ∈ e, a sign σv,e ∈ {−1, +1}. Thus an edge can have two negative ends, two positive ends, or one end of each type; these are respectively called negative edges, positive edges, and directed edges. We will speak of directed edges as having the +1 end as their head and the −1 end as their tail. An instance of capacitated max-weight bidirected flow is an integer program of the following form.

    maximize Σ_{e∈E} πe xe                          (4)
    subject to
    ∀v ∈ V : av ≤ Σ_{e∋v} σv,e xe ≤ bv              (5)
    ∀e ∈ E : ℓe ≤ xe ≤ ue                           (6)
    x integral                                      (7)

When a = b = 0 and all edges are directed, (4)–(7) becomes a max-weight circulation problem; when all edges are positive and a = 0, (4)–(7) becomes a b-matching problem. We now describe the reduction.



Fig. 3. An all-ror multicommodity flow instance. The tree graph (V, E) is depicted using thick lines, and the demand edges D are thin. Radial demand edges are dashed and root-using demand edges are solid. The root is r. An arrowhead denotes a positive endpoint, while the remaining endpoints are negative; these signs correspond to the reduction in the proof of Theorem 2.

Proof (of Theorem 2). Let r denote the root vertex, i.e., assume every demand edge is either radial or root-using with respect to r. We construct a bidirected graph whose underlying undirected graph is (V, E ∪ D). Make each edge e ∈ E directed, with head pointing towards r in the tree (V, E). We make each root-using d ∈ D a positive edge; we make each radial d ∈ D a directed edge, with head pointing away from r. See Figure 3 for an illustration. Set ar = −∞, br = +∞ and av = bv = 0 for each v ∈ V∖{r} in the bidirected flow problem (4)–(7).

For each demand edge d ∈ D, call C(d) := {d} ∪ pd the demand cycle of d. For a set F let χ^F denote the characteristic vector of F. Our choices of signs for the endpoints ensure that for each demand cycle C(d), its characteristic vector x = χ^{C(d)} satisfies constraint (5). Moreover, any linear combination of these vectors is easily seen to satisfy (5), and the following converse holds.

Claim 10. Any x satisfying (5) is a linear combination of characteristic vectors of demand cycles.

Proof. Let x′ = x − Σ_d xd·χ^{C(d)}, and observe that x′ also satisfies (5). Moreover, as each particular demand edge d* occurs only in one demand cycle, namely C(d*), we have x′_{d*} = x_{d*} − Σ_d xd·χ^{C(d)}_{d*} = x_{d*} − x_{d*} = 0 for each d* ∈ D. In other words, x′ vanishes on D. Now consider any leaf v ≠ r of G and its incident edge uv ∈ E. Since x′ satisfies (5) at v and x′ is zero on every edge incident to v except possibly uv, we deduce that x′_{uv} = 0. By induction we can repeat this argument to show that x′ also vanishes on all of E, so x′ = 0. Then x = x′ + Σ_d xd·χ^{C(d)} = Σ_d xd·χ^{C(d)}, which proves Claim 10. □

By Claim 10, we may change the variables in the optimization problem from x to instead have one variable yd for each d ∈ D; the variables are thus related by


x = Σ_d yd·χ^{C(d)}. In the bidirected optimization problem, set ℓe = 0, ue = ce for each e ∈ E, and ℓd = 0, ud = +∞ for each d ∈ D. Rewriting (6) in terms of the new variables gives precisely the constraints (1) and (2). In other words, feasible integral flows x correspond bijectively to feasible integral solutions y for the WMCFT instance. Setting πd = wd for d ∈ D and πe = 0 for e ∈ E, the objective function of (4) represents the weight of y, completing the reduction.

As mentioned earlier, this bidirected flow problem can in turn be reduced to a b-matching problem with a constant factor increase in the size of the problem. Using the strongly polynomial b-matching algorithm of Anstee [2], the proof of Theorem 2 is complete. □
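The sign assignment of the reduction can be made concrete as follows. This sketch (ours) assumes the rooted tree is given by parent pointers and that the instance is all-ror; it only builds the signs σv,e, not the b-matching solver.

    # Sketch (ours): signs for the reduction of Theorem 2.  `parent` maps each
    # non-root vertex to its parent; a demand (u, v) is radial iff one
    # endpoint is an ancestor of the other.  +1 marks a head/positive end.
    def build_signs(parent, root, demands):
        def ancestors(v):
            chain = {v}
            while v != root:
                v = parent[v]
                chain.add(v)
            return chain

        signs = {}
        for v, p in parent.items():           # tree edge {v, p}: directed,
            signs[(v, p)] = {v: -1, p: +1}    # head pointing towards the root
        for (u, v) in demands:
            if v in ancestors(u):             # radial, v an ancestor of u:
                signs[(u, v)] = {u: +1, v: -1}    # head points away from r
            elif u in ancestors(v):           # radial, u an ancestor of v
                signs[(u, v)] = {u: -1, v: +1}
            else:                             # root-using: positive edge
                signs[(u, v)] = {u: +1, v: +1}
        return signs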

3.1 Polyhedral Results

The reduction used in the proof of Theorem 2 can also be used to derive the following polyhedral characterization; note that it is independent of which vertex is the root.

Theorem 11. The convex hull of all integral feasible solutions of an all-ror WMCFT problem has the following description:

    yd ≥ 0                                  ∀d ∈ D                     (8)
    Σ_{d∈D: e∈pd} yd ≤ ce                   ∀e ∈ E                     (9)
    Σ_{d∈D} yd·⌊|pd ∩ F|/2⌋ ≤ ⌊c(F)/2⌋      ∀F ⊂ E with c(F) odd       (10)

Proof. The proof appears in the full version. □

Interestingly, results of Garg et al. [15, preliminary version] show that (8)–(10) is also integral in unit-capacity WMCFT. It is possible to synthesize our Theorems 2 and 11 with corresponding results of [15] for the unit-capacity case, by “gluing” all-ror instances at capacity-1 edges. Caprara and Fischetti [5] gave a strongly polynomial-time algorithm to separate over the family (10) of inequalities. The ubiquity of the polyhedral formulation (8)–(10) suggests that it might be useful in designing a better approximation algorithm for WMCFT. One roadblock is that we do not know how to perform uncrossing (see, e.g., [18]) for that LP.

4 Closing Remarks

For the problem of finding a k-edge connected subgraph with the smallest number of edges, it is known [8,14] that the best approximation ratio possible (if P ≠ NP) is 1 + Θ(1/k). We have proven a similar phenomenon for WMCFT (in terms of μ), and these results rely on similar techniques, notably iterated LP rounding. It would be nice to resolve the following outstanding question: for


the problem of finding a min-weight k-edge connected subgraph, does the best possible approximation ratio decrease as k increases?

There is a close relation between WMCFT and its "demand" version, where every flow variable is restricted according to yd ∈ {0, rd} for some constants {rd}_{d∈D}. E.g., combining IteratedSolver and Cor. 3.5 of [6], we obtain a 1 + O(√(rmax/μ)) approximation for demand WMCFT, where rmax is the maximum demand. Shepherd and Vetta [24] showed that when the tree is a star, the O(rmax)-relaxed integrality gap of the demand analogue of P0 is 1, and we can develop this result to get a 1 + O(rmax/μ)-approximation algorithm for demand WMCFT on stars. It would be interesting to demonstrate the same results on arbitrary trees.

When every edge has capacity one, WMCFT is exactly solvable [15], and Theorem 1(a) gives a 3-approximation when there are no capacity-1 edges. Can we combine these results to improve upon the 4-approximation by Chekuri et al. [6] for general instances?

Acknowledgement. We would like to thank Joseph Cheriyan, Jim Geelen, and András Sebő for helpful discussions.

References

1. Andrews, M., Chuzhoy, J., Khanna, S., Zhang, L.: Hardness of the undirected edge-disjoint paths problem with congestion. In: Proc. 46th FOCS, pp. 226–244 (2005)
2. Anstee, R.P.: A polynomial algorithm for b-matchings: An alternative approach. Inf. Process. Lett. 24(3), 153–157 (1987)
3. Bansal, N., Khandekar, R., Nagarajan, V.: Additive guarantees for degree bounded directed network design. In: Proc. 40th STOC, pp. 769–778 (2008)
4. Calinescu, G., Chakrabarti, A., Karloff, H.J., Rabani, Y.: Improved approximation algorithms for resource allocation. In: Cook, W.J., Schulz, A.S. (eds.) IPCO 2002. LNCS, vol. 2337, pp. 401–414. Springer, Heidelberg (2002)
5. Caprara, A., Fischetti, M.: {0, 1/2}-Chvátal–Gomory cuts. Math. Program. 74(3), 221–235 (1996)
6. Chekuri, C., Mydlarz, M., Shepherd, F.B.: Multicommodity demand flow in a tree and packing integer programs. ACM Trans. Algorithms 3(3), 27 (2007); extended abstract in Proc. 30th ICALP, pp. 410–425 (2003)
7. Cheriyan, J., Jordán, T., Ravi, R.: On 2-coverings and 2-packings of laminar families. In: Nešetřil, J. (ed.) ESA 1999. LNCS, vol. 1643, pp. 510–520. Springer, Heidelberg (1999)
8. Cheriyan, J., Thurimella, R.: Approximating minimum-size k-connected spanning subgraphs via matching. SIAM J. Comput. 30(2), 528–560 (2000); preliminary version in Proc. 37th FOCS, pp. 292–301 (1996)
9. Edmonds, J., Johnson, E.: Matching: A well-solved class of integer linear programs. In: Guy, R., Hanani, H., Sauer, N., Schonheim, J. (eds.) Proceedings, Calgary International Conference on Combinatorial Structures and their Applications, pp. 82–92. Gordon and Breach (1970)


10. Erlebach, T., Jansen, K.: Conversion of coloring algorithms into maximum weight independent set algorithms. In: Rolim, J.D.P., et al. (eds.) Proc. ICALP Satellite Workshops, pp. 135–146 (2000)
11. Erlebach, T., Jansen, K.: The maximum edge-disjoint paths problem in bidirected trees. SIAM J. Discrete Math. 14(3), 326–355 (2001); preliminary version in Chwa, K.-Y., Ibarra, O.H. (eds.) ISAAC 1998. LNCS, vol. 1533. Springer, Heidelberg (1998)
12. Even, S., Itai, A., Shamir, A.: On the complexity of timetable and multicommodity flow problems. SIAM Journal on Computing 5(4), 691–703 (1976); preliminary version in Proc. 16th FOCS, pp. 184–193 (1975)
13. Gabow, H.N., Gallagher, S.: Iterated rounding algorithms for the smallest k-edge-connected spanning subgraph. In: Proc. 19th SODA, pp. 550–559 (2008)
14. Gabow, H.N., Goemans, M.X., Tardos, É., Williamson, D.P.: Approximating the smallest k-edge connected spanning subgraph by LP-rounding. In: Proc. 16th SODA, pp. 562–571 (2005)
15. Garg, N., Vazirani, V.V., Yannakakis, M.: Primal-dual approximation algorithms for integral flow and multicut in trees. In: Lingas, A., Carlsson, S., Karlsson, R. (eds.) ICALP 1993. LNCS, vol. 700, pp. 64–75. Springer, Heidelberg (1993)
16. Goemans, M.X.: Minimum bounded degree spanning trees. In: Proc. 47th FOCS, pp. 273–282 (2006)
17. Hartman, I.B.-A.: Optimal k-colouring and k-nesting of intervals. In: Dolev, D., Rodeh, M., Galil, Z. (eds.) ISTCS 1992. LNCS, vol. 601, pp. 207–220. Springer, Heidelberg (1992)
18. Jain, K.: A factor 2 approximation algorithm for the generalized Steiner network problem. Combinatorica 21(1), 39–60 (2001); preliminary version in Proc. 39th FOCS, pp. 448–457 (1998)
19. Kann, V.: On the Approximability of NP-complete Optimization Problems. PhD thesis, Royal Institute of Technology, Stockholm (1992)
20. Lau, L.C., Naor, J.S., Salavatipour, M.R., Singh, M.: Survivable network design with degree or order constraints. In: Proc. 39th STOC, pp. 651–660 (2007)
21. Nguyen, T.: On the disjoint paths problem. Oper. Res. Lett. 35(1), 10–16 (2007)
22. Raghavan, P., Thompson, C.: Randomized rounding: a technique for provably good algorithms and algorithmic proofs. Combinatorica 7, 365–374 (1987)
23. Schrijver, A.: Combinatorial Optimization. Springer, New York (2003)
24. Shepherd, F.B., Vetta, A.: The demand-matching problem. Mathematics of Operations Research 32(3), 563–578 (2007); preliminary version in Cook, W.J., Schulz, A.S. (eds.) IPCO 2002. LNCS, vol. 2337. Springer, Heidelberg (2002)
25. Singh, M., Lau, L.C.: Approximating minimum bounded degree spanning trees to within one of optimal. In: Proc. 39th STOC, pp. 661–670 (2007)

Size Versus Stability in the Marriage Problem

Péter Biró¹⋆, David F. Manlove¹⋆⋆, and Shubham Mittal²⋆⋆⋆

¹ Department of Computing Science, University of Glasgow, Glasgow G12 8QQ, UK
{pbiro,davidm}@dcs.gla.ac.uk
² Department of Computer Science and Engineering, Block VI, Indian Institute of Technology, Delhi, Hauz Khas, New Delhi 110 016, India
[email protected]

Abstract. Given an instance I of the classical Stable Marriage problem with Incomplete preference lists (smi), a maximum cardinality matching can be larger than a stable matching. In many large-scale applications of smi, we seek to match as many agents as possible. This motivates the problem of finding a maximum cardinality matching in I that admits the smallest number of blocking pairs (so is "as stable as possible"). We show that this problem is NP-hard and not approximable within n^{1−ε}, for any ε > 0, unless P=NP, where n is the number of men in I. Further, even if all preference lists are of length at most 3, we show that the problem remains NP-hard and not approximable within δ, for some δ > 1. By contrast, we give a polynomial-time algorithm for the case where the preference lists of one sex are of length at most 2.

1 Introduction

The Stable Marriage problem (sm) was introduced in the seminal paper of Gale and Shapley [7]. In its classical form, an instance of sm involves n men and n women, each of whom specifies a preference list, which is a total order on the members of the opposite sex. A matching M is a set of (man,woman) pairs such that each person belongs to exactly one pair. If (m, w) ∈ M, we say that w is m's partner in M, and vice versa, and we write M(m) = w, M(w) = m. A person x prefers y to y′ if y precedes y′ on x's preference list. A matching M is stable if it admits no blocking pair, namely a (man,woman) pair (m, w) such that m prefers w to M(m) and w prefers m to M(w). Gale and Shapley [7] proved that every instance of sm admits at least one stable matching, and described an algorithm – the Gale / Shapley algorithm – that finds such a matching in time that is linear in the input size. In general, there may be many stable matchings (in fact exponentially many in n) for a given instance of sm [12].

⋆ Supported by EPSRC grant EP/E011993/1 and by OTKA grant K69027.
⋆⋆ Supported by EPSRC grants EP/E011993/1 and GR/R84597/01, and by an RSE / Scottish Executive Personal Research Fellowship.
⋆⋆⋆ Supported by a vacation scholarship from the Department of Computing Science, University of Glasgow.

P. Bir´ o, D.F. Manlove, and S. Mittal

Incomplete lists. A variety of extensions of the basic problem have been studied. In the Stable Marriage problem with Incomplete lists (smi), the numbers of men and women need not be the same, and each person’s preference list consists of a subset of the members of the opposite sex in strict order. A (man,woman) pair (m, w) is acceptable if each member of the pair appears on the preference list of the other. A matching M is now a set of acceptable pairs such that each person belongs to at most one pair. In this context, (m, w) is a blocking pair for a matching M if (a) (m, w) is an acceptable pair, (b) m is either unmatched or prefers w to M (m), and likewise (c) w is either unmatched or prefers m to M (w). Given the definitions of a matching and a blocking pair, we lose no generality by assuming that the preference lists are consistent (i.e., given a (man,woman) pair (m, w), m appears on the preference list of w if and only if w appears on the preference list of m). As in the classical case, there is always at least one stable matching for an instance of smi, and it is straightforward to extend the Gale / Shapley algorithm to give a linear-time algorithm for this case. Again, there may be many different stable matchings, but Gale and Sotomayor [8] showed that every stable matching for a given smi instance has the same size and matches exactly the same set of people. Motivation. The Hospitals/Residents problem (hr) is a many-to-one generalisation of smi, so called because of its applications in centralised matching schemes that handle the allocation of graduating medical students, or residents, to hospitals [20]. The largest such scheme is the National Resident Matching Program (NRMP) [24] in the US, but similar schemes exist in Canada [25], in Scotland [11,26], and in a variety of other countries and contexts. In the 2006-07 run of the Scottish medical matching scheme, called the Scottish Foundation Allocation Scheme (SFAS), there were 781 students and 53 hospitals, with total capacity 789. The matching algorithm (designed and implemented at the Department of Computing Science, University of Glasgow) found a stable matching of size 744, thus leaving 37 students unmatched. Clearly stability is the key property to be satisfied, and it is this that restricts the size of the resultant matching. Nevertheless the administrators asked whether, were the stability criterion to have been relaxed, a larger matching could have been found. We found that a matching of size 781 did exist, but the matching we computed admitted 400 blocking pairs. “Almost stable” maximum matchings. In practical situations, a blocking pair of a given matching M need not always lead to M being undermined, since the agents involved might be unaware of their potential to improve relative to M . For example, in situations where preference lists are not public knowledge, there may be limited channels of communication that would lead to the awareness of blocking pairs in practice. Nevertheless, it is reasonable to assert that the greater the number of blocking pairs of a given matching M , the greater the likelihood that M would be undermined by a pair of agents in practice. In particular, a maximum cardinality matching (henceforth a maximum matching) for the 200607 SFAS data that admits only 10 blocking pairs might be considered to be “more stable” than one with 400 blocking pairs. This motivates the problem of

Size Versus Stability in the Marriage Problem

17

finding a maximum matching that admits the smallest number of blocking pairs (and is therefore, in the sense described above, “as stable as possible”). Eriksson and H¨ aggstr¨ om [6] also argue that counting the number of blocking pairs of a matching can be an effective way to measure its degree of instability; earlier, this approach had already been taken by Khuller et al. [14]. Further applications. Further practical applications of “almost stable” maximum matchings arise in similar bipartite settings, where the size of the matching may be considered to be a higher priority than its stability in a particular matching market. Examples include school placement [1] and the allocation of students to projects in a university department [3]. Furthermore, the US Navy has a bipartite matching problem involving the assignment of sailors to billets [18,23] in which every sailor should be matched to a billet, and meanwhile there are some critical billets that cannot be left vacant. In non-bipartite contexts, applications arise in kidney exchange settings [22,27], for example. Here, both the size and the stability of a matching have been considered as being the most important criteria. Centralised programs have been organised in many countries to match incompatible patient-donor pairs, including the US, the Netherlands and the UK. In most programs, the main goal is to maximise the number of transplants (i.e., the first priority is to find a maximum matching) [22]. However other studies [21] consider stability as the first priority. Another example in a non-bipartite setting involves pairing up chess players [15]. Our results. In this paper we present a range of algorithmic results for max size min bp smi, the problem of finding a maximum matching with the smallest number of blocking pairs, given an instance of smi. We firstly show in Section 2 that this problem is NP-hard and not approximable within n1−ε , for any ε > 0, unless P=NP. We then consider special cases of the problem where the preference lists on one or both sides are short (this is motivated in practice by applications such as SFAS, where students are asked to rank six hospitals in order of preference). We show in Section 3 that, even when preference lists on both sides are of length at most 3, max card min bp smi is NP-hard and not approximable within δ, for some δ > 1, unless P=NP. On the other hand, for the case where the lists on one side are of length at most 2 (and the lists on the other side are unbounded in length), in Section 4, we give a polynomial-time algorithm for max card min bp smi. Section 5 contains concluding remarks. Related work. Matchings with few blocking pairs have previously been studied from an algorithmic point of view in the context of the Stable Roommates problem (sr), a non-bipartite generalisation of sm, as a means of coping with the fact that, in contrast to the case for sm, an sr instance need not admit a stable matching. Abraham et al. [2] showed that, given an sr instance, the problem of finding a matching with the smallest number of blocking pairs is NP-hard and not approximable within n1/2−ε , for any ε > 0, unless P=NP. In the case that preference lists include ties, the lower bound was strengthened to n1−ε . On the other hand, given a fixed integer K, they showed that the problem of finding


a matching with exactly K blocking pairs, or reporting that no such matching exists, is solvable in polynomial time. This paper can be viewed as a counterpart of [2], strengthening its results by moving to the bipartite setting, and answering the questions that remained open in the table shown in Section 5.

2 Unbounded Length Preference Lists

Before presenting the main result of this section, we define some notation and terminology relating to matchings and graphs. Given an instance I of smi, let M denote the set of matchings in I and let M+ denote the set of maximum matchings in I. Given a matching M ∈ M, let bp_I(M) denote the set of blocking pairs with respect to M in I (we omit the subscript when the instance is clear from the context). Let bp+(I) = min{|bp_I(M)| : M ∈ M+}. Define max size min bp smi to be the problem of finding, given an smi instance I, a matching M ∈ M+ such that |bp_I(M)| = bp+(I).

Given a graph G, the subdivision graph of G, denoted by S(G), is a bipartite graph obtained by subdividing each edge {u, w} of G in order to obtain two edges {u, v} and {v, w} of S(G), where v is a new vertex. A matching M in a graph G is said to be maximal if no proper superset of M is a matching in G. Let β(G) denote the size of a maximum matching in G. Define exact-mm to be the problem of deciding, given a graph G and integer K, whether G admits a maximal matching of size exactly K. exact-mm is NP-complete, even for subdivision graphs of cubic graphs [17, Lemma 2.2.1]. We now present a gap-introducing reduction from exact-mm to max size min bp smi.

Theorem 1. max size min bp smi is not approximable within n^{1-ε}, where n is the number of men in a given instance, for any ε > 0, unless P=NP.

Proof. Let ε > 0 be given. We transform from exact-mm restricted to subdivision graphs of cubic graphs, which is NP-complete as noted above. Hence let G = (V, E) (a subdivision graph of some cubic graph G′) and K (a positive integer) be an instance of exact-mm. Then G is a bipartite graph, and V is a disjoint union of two sets U and W, where each edge e ∈ E joins a vertex in U to a vertex in W. Let m = |E|. We lose no generality by assuming that K ≤ β(G) ≤ min{|U|, |W|}. Suppose that U = {u_1, u_2, ..., u_{n_1}} and W = {w_1, w_2, ..., w_{n_2}}. Without loss of generality assume that each vertex in U has degree 2 and each vertex in W has degree 3. For each u_i ∈ U, let w_{p_i} and w_{q_i} be the two neighbours of u_i in G, where p_i < q_i. Also, for each w_j ∈ W, let u_{r_j}, u_{s_j} and u_{t_j} be the three neighbours of w_j, where r_j < s_j < t_j. Let B = ⌈3/ε⌉ and let C = (n_1 + n_2)^{B+1} − (n_1 + n_2) + 1. We create an instance I of smi as follows. The sets of men and women in I are denoted by U and W respectively, where U and W are as defined in Figure 1. It follows that |U| = |W| = 3n_1 + 4n_2 + 2mC − K. Let U^1 = {u^1_i : 1 ≤ i ≤ n_1} and let W^1 = {w^1_j : 1 ≤ j ≤ n_2}. For each u_i ∈ U and w_j ∈ W such that {u_i, w_j} ∈ E, define σ_{j,i} = 1 if w_j = w_{p_i} and σ_{j,i} = 2 if w_j = w_{q_i}, and define τ_{i,j} = 1 if u_i = u_{r_j}, τ_{i,j} = 2 if u_i = u_{s_j} and τ_{i,j} = 3 if u_i = u_{t_j}.
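As a small aside, the subdivision graphs in which the exact-mm instances above live are straightforward to construct; a sketch (ours, with illustrative names):

    def subdivision_graph(edges):
        """Return the edge list of S(G): each edge {u, w} of G becomes a
        path u - v - w through a fresh subdivision vertex v."""
        out = []
        for (u, w) in edges:
            v = ("sub", u, w)            # fresh vertex subdividing {u, w}
            out.append((u, v))
            out.append((v, w))
        return out

    # The subdivision graph of the triangle K3 is the 6-cycle.
    print(subdivision_graph([(0, 1), (1, 2), (2, 0)]))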


U = (∪_{i=1}^{n_1} U_i) ∪ (∪_{{u_i,w_j}∈E} G_{i,j}) ∪ (∪_{i=1}^{n_2} V_i) ∪ X
W = (∪_{j=1}^{n_2} W_j) ∪ (∪_{{u_i,w_j}∈E} H_{i,j}) ∪ (∪_{j=1}^{n_1} Z_j) ∪ Y
G_{i,j} = G^1_{i,j} ∪ G^2_{i,j}              ({u_i, w_j} ∈ E)
G^d_{i,j} = {g^{c,d}_{i,j} : 1 ≤ c ≤ C}      ({u_i, w_j} ∈ E ∧ 1 ≤ d ≤ 2)
H_{i,j} = H^1_{i,j} ∪ H^2_{i,j}              ({u_i, w_j} ∈ E)
H^d_{i,j} = {h^{c,d}_{i,j} : 1 ≤ c ≤ C}      ({u_i, w_j} ∈ E ∧ 1 ≤ d ≤ 2)
U_i = {u^1_i, u^2_i, u^3_i}                  (1 ≤ i ≤ n_1)
V_i = {v^1_i, v^2_i, v^3_i}                  (1 ≤ i ≤ n_2)
W_j = {w^1_j, w^2_j, w^3_j, w^4_j}           (1 ≤ j ≤ n_2)
X = {x_i : 1 ≤ i ≤ n_2 − K}
Y = {y_j : 1 ≤ j ≤ n_1 − K}
Z_j = {z^1_j, z^2_j}                         (1 ≤ j ≤ n_1)

Fig. 1. Men and women in the constructed instance of max size min bp smi

Preference lists for the men and women in I are as shown in Figure 2. In a given person's preference list, the symbol [S] denotes all members of the set S listed in some arbitrary strict order at the point where the symbol appears, and the symbol [[S]] denotes all members of S listed in increasing subscript order at the point where the symbol appears.

We now give some intuition behind this construction. Suppose that M is a maximal matching of size K in G. For each {u_i, w_j} ∈ M, the relevant pair in U_i × W_j (who rank each other in second place) will be added to a matching M′ in I. The n_1 − K men in U (respectively n_2 − K women in W) who are unmatched in M are collectively matched in M′ to the women in Y (respectively men in X). The remaining members of U_i (for each u_i ∈ U) and W_j (for each w_j ∈ W) are collectively matched in M′ to the members of Z_i and V_j respectively. Each of U_i × Z_i and V_j × W_j contributes one blocking pair to M′. It is then possible to extend M′ to a perfect matching in I without introducing any additional blocking pairs by adding a perfect matching between the members of G_{i,j} ∪ H_{i,j} for each {u_i, w_j} ∈ E. Hence |bp(M′)| = n_1 + n_2.

Conversely, from a perfect matching M′ in I, it is straightforward to extract a matching M in G of size K. If M is not maximal then there is some u_i ∈ U and w_j ∈ W, both unmatched in M, such that {u_i, w_j} ∈ E. In this case, for each c (1 ≤ c ≤ C), either (u^1_i, h^{c,1}_{i,j}) ∈ bp(M′) or (g^{c,1}_{i,j}, w^1_j) ∈ bp(M′), and hence |bp(M′)| ≥ C. This introduces the required 'gap' for the inapproximability result.

The formal proof of correctness of the reduction is based on a number of claims as follows, each of which is proved in [5]. Claim 1: I admits a perfect matching. Claim 2: if G admits a maximal matching of size K, then bp+(I) ≤ n_1 + n_2. Claim 3: if G admits no maximal matching of size K then bp+(I) > (n_1 + n_2)^{B+1}. Hence the existence of an (n_1 + n_2)^B-approximation algorithm for max size min bp smi implies a polynomial-time algorithm for exact-mm in subdivision graphs of cubic graphs, a contradiction unless P=NP. Claim 4: (n_1 + n_2)^B ≥ n^{1-ε}, completing the proof. ⊓⊔
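Claim 4 is essentially a calculation; the following sketch spells it out (ours, assuming the value B = ⌈3/ε⌉ used in the construction above; the constants are not tight). Using C ≤ (n_1 + n_2)^{B+1}, m = 2n_1 and K ≥ 0, the number of men satisfies

    n = 3n_1 + 4n_2 + 2mC − K ≤ 7(n_1 + n_2) + 4(n_1 + n_2)^{B+2} ≤ (n_1 + n_2)^{B+3}

for all sufficiently large n_1 + n_2. Hence n^{1-ε} ≤ (n_1 + n_2)^{(B+3)(1-ε)} ≤ (n_1 + n_2)^B, since (B + 3)(1 − ε) ≤ B is equivalent to Bε ≥ 3(1 − ε), which holds whenever B ≥ 3/ε − 3.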


Fig. 2. Preference lists in the constructed instance of max size min bp smi

Let max size exact bp smi denote the problem of finding, given an smi instance I and an integer K′, a matching M ∈ M+ such that |bp_I(M)| = K′.

Corollary 1. max size exact bp smi is NP-complete.

Proof. We use the same reduction as in the proof of Theorem 1 and set K′ = n_1 + n_2 and ε = ∞ (i.e. B = 0 and C = 1). As before, G has a maximal matching of size K if and only if I admits a perfect matching M′ such that |bp(M′)| ≤ K′. However it is straightforward to verify that any perfect matching M′ in I satisfies |bp(M′)| ≥ K′, and hence the result follows. ⊓⊔

Given that smi is a special case of sr, we may reuse results from [2] to obtain the following theorem.

Theorem 2 ([2]). max size exact bp smi is solvable in polynomial time when K′ is fixed.

3 Preference Lists of Length at Most 3

In this section we consider the case where preference lists in a given instance I of smi are of bounded length. Given two integers p and q, let max size min bp


(p, q)-smi denote the restriction of max size min bp smi in which each man's preference list is of length at most p, and each woman's list is of length at most q. We use p = ∞ or q = ∞ to denote the possibility that the men's lists or women's lists are of unbounded length, respectively. We begin by showing that max size min bp (3, 3)-smi is NP-hard and not approximable within some δ > 1 unless P=NP. To prove this, we give a reduction from a restricted version of sat. Given a Boolean formula B in CNF and a truth assignment f, let t(f) denote the number of clauses of B satisfied simultaneously by f, and let t(B) denote the maximum value of t(f), taken over all truth assignments f of B. Let max (2,2)-e3-sat [4] denote the problem of finding, given a Boolean formula B in CNF in which each clause contains exactly 3 literals and each variable occurs exactly twice as an unnegated literal in B and exactly twice as a negated literal in B, a truth assignment f such that t(f) = t(B).

Theorem 3. Given any ε (0 < ε < 1/2032), max size min bp (3, 3)-smi is not approximable within 3557/(3556 + 2032ε) unless P=NP.

Proof. Let ε (0 < ε < 1/2032) be given. Let B be an instance of max (2,2)-e3-sat. Let V = {v_0, v_1, ..., v_{n-1}} and C = {c_1, c_2, ..., c_m} be the set of variables and clauses in B respectively. Then for each v_i ∈ V, each of the literals v_i and v̄_i appears exactly twice in B. Also |c_j| = 3 for each c_j ∈ C. We form an instance I of max size min bp smi as follows. The set of men in I is X ∪ P ∪ Q and the set of women in I is Y ∪ C′ ∪ Z, where X = ∪_{i=0}^{n-1} X_i with X_i = {x_{4i+r} : 0 ≤ r ≤ 3} (0 ≤ i ≤ n − 1), P = ∪_{j=1}^{m} P_j with P_j = {p^1_j, p^2_j, p^3_j} (1 ≤ j ≤ m), Q = {q_j : c_j ∈ C}, Y = ∪_{i=0}^{n-1} Y_i with Y_i = {y_{4i+r} : 0 ≤ r ≤ 3} (0 ≤ i ≤ n − 1), C′ = {c^r_j : c_j ∈ C ∧ 1 ≤ r ≤ 3} and Z = {z_j : c_j ∈ C}.

The preference lists of the men and women in I are shown in Figure 3. In the preference list of an agent x_{4i+r} ∈ X (0 ≤ i ≤ n − 1 and r ∈ {0, 1}), the symbol c(x_{4i+r}) denotes the woman c^s_j ∈ C′ such that the (r + 1)th occurrence of v_i appears at position s of c_j. Similarly if r ∈ {2, 3} then the symbol c(x_{4i+r}) denotes the woman c^s_j ∈ C′ such that the (r − 1)th occurrence of v̄_i appears at position s of c_j. Also, in the preference list of an agent c^s_j ∈ C′, if literal v_i appears at position s of clause c_j ∈ C, the symbol x(c^s_j) denotes the man x_{4i+r-1} where r = 1, 2 according as this is the first or second occurrence of literal v_i in B; otherwise, if literal v̄_i appears at position s of clause c_j ∈ C, the symbol x(c^s_j) denotes the man x_{4i+r+1} where r = 1, 2 according as this is the first or second occurrence of literal v̄_i in B. Clearly each preference list is of length at most 3. For each i (0 ≤ i ≤ n − 1), let T_i = {(x_{4i+r}, y_{4i+r}) : 0 ≤ r ≤ 3} and F_i = {(x_{4i+r}, y_{4i+r+1}) : 0 ≤ r ≤ 3}, where the subscript offset r + 1 is taken modulo 4 (so that x_{4i+3} is paired with y_{4i}). We firstly note that M is a perfect matching of the men and women in I, where

M = (∪_{i=0}^{n-1} T_i) ∪ {(p^1_j, c^1_j), (p^2_j, c^2_j), (p^3_j, z_j), (q_j, c^3_j) : 1 ≤ j ≤ m}.

We now give some intuition behind this construction. The people in X_i ∪ Y_i correspond to variable v_i ∈ V, whilst the people in P_j ∪ {q_j, c^1_j, c^2_j, c^3_j, z_j} correspond to clause c_j ∈ C.

x_{4i}   : y_{4i}   c(x_{4i})   y_{4i+1}   (0 ≤ i ≤ n − 1)
x_{4i+1} : y_{4i+1} c(x_{4i+1}) y_{4i+2}   (0 ≤ i ≤ n − 1)
x_{4i+2} : y_{4i+3} c(x_{4i+2}) y_{4i+2}   (0 ≤ i ≤ n − 1)
x_{4i+3} : y_{4i}   c(x_{4i+3}) y_{4i+3}   (0 ≤ i ≤ n − 1)
p^r_j    : c^r_j z_j                       (1 ≤ j ≤ m ∧ 1 ≤ r ≤ 3)
q_j      : c^1_j c^2_j c^3_j               (1 ≤ j ≤ m)
y_{4i}   : x_{4i}   x_{4i+3}               (0 ≤ i ≤ n − 1)
y_{4i+1} : x_{4i}   x_{4i+1}               (0 ≤ i ≤ n − 1)
y_{4i+2} : x_{4i+1} x_{4i+2}               (0 ≤ i ≤ n − 1)
y_{4i+3} : x_{4i+2} x_{4i+3}               (0 ≤ i ≤ n − 1)
c^r_j    : p^r_j x(c^r_j) q_j              (1 ≤ j ≤ m ∧ 1 ≤ r ≤ 3)
z_j      : p^1_j p^2_j p^3_j               (1 ≤ j ≤ m)

Fig. 3. Preference lists in the constructed instance of max size min bp (3, 3)-smi

The pairs in T_i are added to a matching M in I if v_i ∈ V is true under a truth assignment f of B; otherwise the pairs in F_i are added to M. Crucially, if v_i is false under f then each of x_{4i} and x_{4i+1} (corresponding to the first and second occurrences of literal v_i) has his third choice in M. Similarly, if v_i is true under f then each of x_{4i+2} and x_{4i+3} (corresponding to the first and second occurrences of literal v̄_i) has his third choice in M. Hence if any clause c_j is false under f, then since (q_j, c^s_j) ∈ M for some s ∈ {1, 2, 3}, it follows that (x(c^s_j), c^s_j) ∈ bp(M). Additionally, regardless of the truth values of V under f, the members of X_i × Y_i contribute one blocking pair for each v_i ∈ V, as do the members of P_j × C′ for each c_j ∈ C.

For the formal argument showing the correctness of the reduction, we claim (see [5] for the proof) that t(B) + bp+(I) = n + 2m = (11/4)m, since 3m = 4n. Berman et al. [4] show that it is NP-hard to distinguish between instances B of max (2,2)-e3-sat for which (i) t(B) ≥ (1 − ε)m and (ii) t(B) ≤ (1015/1016 + ε)m. By our construction, it follows that in case (i), bp+(I) ≤ (3556/2032 + ε)m, whilst in case (ii), bp+(I) ≥ (3558/2032 − ε)m. Hence an approximation algorithm for max size min bp (3, 3)-smi with performance guarantee r, for any r ≤ 3557/(3556 + 2032ε), could be used to decide between cases (i) and (ii) for max (2,2)-e3-sat in polynomial time, which is a contradiction unless P=NP. ⊓⊔
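The constants in Theorem 3 are straightforward arithmetic; the following sketch (ours) spells out the calculation. Since 3m = 4n we have n = 3m/4 and hence n + 2m = (11/4)m, so bp+(I) = (11/4)m − t(B). In case (i), t(B) ≥ (1 − ε)m gives bp+(I) ≤ (7/4 + ε)m = (3556/2032 + ε)m; in case (ii), t(B) ≤ (1015/1016 + ε)m gives bp+(I) ≥ (11/4 − 1015/1016 − ε)m = (3558/2032 − ε)m. The ratio of the two bounds is (3558 − 2032ε)/(3556 + 2032ε) ≥ 3557/(3556 + 2032ε) whenever ε < 1/2032, since then 3558 − 2032ε > 3557.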

4 Preference Lists on One Side of Length at Most 2

We now consider instances of smi in which all preference lists on one side are of length at most 2. Let I be an smi instance in which U is the set of men and W is the set of women. Assume without loss of generality that every man has a list of length at most 2. Define the underlying graph of I to be a bipartite graph G = (V, E), where V = U ∪ W and E is the set of mutually acceptable pairs. Let n = |V (G)| and m = |E(G)|. Note that m ≤ 2 · |U| < 2n. Define perfect min bp (p, q)-smi as follows. An instance of this problem is an smi instance I in which each man’s preference list is of length at most p and each woman’s preference list is of length at most q (p = ∞ or q = ∞ denotes


unbounded length preference lists as before). A solution is a perfect matching with the minimum number of blocking pairs in I if I admits a perfect matching, or "no" otherwise.

Lemma 1. perfect min bp (2, ∞)-smi is solvable in O(n) time, where n is the number of men in I.

The algorithm is quite simple; the description can be found in [5]. We continue with the related problem men-cover min bp (2, ∞)-smi. Here, we suppose that the preference lists of the men are of length at most 2, and the problem is to minimise the number of blocking pairs over all matchings that cover the men.

Lemma 2. men-cover min bp (2, ∞)-smi is solvable in O(n^2) time, where n is the number of men in I.

Proof. Suppose that the graph of the instance, G = (U ∪ W, E), is connected; otherwise, we can solve the problem separately for each component. If the number of men |U| is greater than the number of women |W| then we output "no". If |U| = |W| then we get an instance of perfect min bp (2, ∞)-smi. The connectivity of G implies |W| ≤ |U| + 1, so the last possible case is |W| = |U| + 1. Here, for every w_j ∈ W we solve an instance I_j of perfect min bp (2, ∞)-smi after removing w_j from the graph. Note that if a matching M_j is a minimum solution for I_j then M_j is also a minimum for I among the matchings that do not cover w_j, since in those matchings in I where w_j is not covered, every man in w_j's list has only one possible partner. Therefore, we can get the optimal solution for I by solving |W| instances of perfect min bp (2, ∞)-smi and choosing the minimum of these solutions. ⊓⊔

The problem women-cover min bp (2, ∞)-smi can be defined similarly. Here, we suppose that the preference lists of the men are of length at most 2, and the problem is to minimise the number of blocking pairs over all matchings that cover the women.

Lemma 3. women-cover min bp (2, ∞)-smi is solvable in O(n^3) time, where n is the number of men in I.

Proof. Let G = (U ∪ W, E) be the graph of the instance I and let bp(M) denote the set of blocking pairs for a matching M in I. If there is no matching that covers W then we output "no". Otherwise, we deal in this proof only with matchings that cover W, and we assume this property from now on. Let bp_int(M) denote the set of internal blocking pairs for M, those blocking pairs in which the man is covered by M. Furthermore, let bp_ext(M) denote the external blocking pairs, those in which the man is uncovered by M. Note that bp(M) = bp_int(M) ∪ bp_ext(M).

Our algorithm consists of two phases. In the first one, we eliminate the external blocking pairs without creating any new internal blocking pair. In the second one, we try to reduce the number of internal blocking pairs by switching pairs along augmenting paths and cycles. Finally, we prove that if neither of these steps is possible then the solution is optimal.


Eliminating the external blocking pairs. Claim 1: Suppose that for a matching M, bp_ext(M) ≠ ∅. We can construct a matching M* such that bp_int(M) ⊇ bp_int(M*) = bp(M*).

Suppose that (u_i, w_j) ∈ bp_ext(M), and that if (u_i, w_k) is also in bp_ext(M) then u_i prefers w_j to w_k. Let M′ = M \ {(M(w_j), w_j)} ∪ {(u_i, w_j)}. We get bp_int(M′) ⊆ bp_int(M), since only u_i and w_j could be part of a new internal blocking pair. Here (u_i, w_k) cannot be a new blocking pair: either (u_i, w_k) was already blocking for M, in which case u_i prefers w_j to w_k and so (u_i, w_k) does not block M′, or (u_i, w_k) was not blocking for M and w_k's situation is unchanged; and w_j received a better partner, so she cannot be part of any new blocking pair. Therefore, the set of internal blocking pairs can only shrink. We keep applying this elimination step until we obtain a matching M* such that bp_int(M*) = bp(M*). This process must terminate, since the women get better and better partners after each elimination, so no pair can be eliminated twice. The final matching M* satisfies the required condition.

Reducing the number of internal blocking pairs. Alternating paths P and alternating cycles C are defined as follows. For a matching M, a path P = {(u_0, w_1), (w_1, u_1), (u_1, w_2), ..., (u_{k-1}, w_k), (w_k, u_k)} is an alternating path if (w_i, u_i) ∈ M and (u_{i-1}, w_i) ∉ M for every 1 ≤ i ≤ k. If u_0 = u_k then we get an alternating cycle. Let M ⊕ P denote the matching obtained by switching the edges along the alternating path, i.e. by removing the edges (u_i, w_i) from M and adding (u_{i-1}, w_i) to M for every 1 ≤ i ≤ k. Furthermore, let P_W and C_W be the sets of women covered by P and C, respectively, and let P_U = {u_1, u_2, ..., u_k} = M(P_W) and P^0_U = {u_0, u_1, ..., u_{k-1}} = (M ⊕ P)(P_W). Finally, let D(S) denote the set of edges incident with the set of vertices S.

Claim 2: Suppose that for a matching M, bp_ext(M) = ∅. If there is an alternating path P such that |bp_int(M ⊕ P) ∩ D(P_W)| < |bp_int(M) ∩ D(P_W)| then |bp_int(M ⊕ P)| < |bp_int(M)|. Similarly, if there is an alternating cycle C such that |bp_int(M ⊕ C) ∩ D(C_W)| < |bp_int(M) ∩ D(C_W)| then |bp_int(M ⊕ C)| < |bp_int(M)|.

It is enough to show that if w_j ∉ P_W then w_j cannot be involved in any new internal blocking pair for M ⊕ P. Suppose indirectly that (u_i, w_j) is a new internal blocking pair. If u_i ∉ P^0_U then u_i is either uncovered by M ⊕ P or has the same partner as in M, so (u_i, w_j) cannot be a new internal blocking pair. If u_i ∈ P^0_U ∩ P_U then (u_i, w_j) ∉ E(G), since u_i has only two women in his list and both of them are in P_W. Finally, if u_i = u_0 ∈ P^0_U \ P_U then (u_0, w_j) cannot be blocking, since u_0 was uncovered by M and we supposed that no external blocking pair exists for M, a contradiction.

The optimality. The next claim shows that if neither of the above improvements is possible then the solution is optimal.

Claim 3: Suppose that bp_int(M) = bp(M) and that there is a matching M^opt such that |bp(M^opt)| < |bp(M)|. Then there must be either an alternating path P such that |bp_int(M ⊕ P) ∩ D(P_W)| < |bp_int(M) ∩ D(P_W)| or an alternating cycle C such that |bp_int(M ⊕ C) ∩ D(C_W)| < |bp_int(M) ∩ D(C_W)|.

By Claim 1 we can suppose that bp_int(M^opt) = bp(M^opt).


Considering the symmetric difference of M and M^opt we get some alternating paths, some alternating cycles and some pairs that remain matched in M^opt too. Let P_W and C_W denote the sets of women that are involved in an alternating path and an alternating cycle, respectively, and let R_W denote the set of women who get the same partner in M and M^opt. Furthermore, let P_U = M(P_W), P^0_U = M^opt(P_W), C_U = M(C_W) and R_U = M(R_W). Finally, let DIF = C_U ∪ (P_U ∩ P^0_U) denote the set of men who are matched with different partners in M and M^opt.

First we show that every woman w_j in R_W must be involved in the same internal blocking pairs for M and M^opt. Let us consider a pair (u_i, w_j). If u_i ∈ R_U then, obviously, (u_i, w_j) is blocking for M if and only if it is blocking for M^opt too. If u_i ∈ DIF then (u_i, w_j) ∉ E(G), since u_i has only two women in his list, M(u_i) and M^opt(u_i), who are in P_W ∪ C_W. Finally, if u_i ∈ P^0_U \ P_U then u_i is uncovered by M, so (u_i, w_j) cannot be blocking since there is no external blocking pair for M. Similarly, if u_i ∈ P_U \ P^0_U then u_i is uncovered by M^opt, so (u_i, w_j) cannot be blocking since there is no external blocking pair for M^opt.

Therefore, if we sum up the internal blocking pairs according to the sets of women involved in the same alternating path or in the same alternating cycle for M and M^opt, then we get either an alternating path P or an alternating cycle C such that either |bp(M^opt) ∩ D(P_W)| < |bp(M) ∩ D(P_W)| or |bp(M^opt) ∩ D(C_W)| < |bp(M) ∩ D(C_W)|.

If for an alternating path P we have |bp(M^opt) ∩ D(P_W)| < |bp(M) ∩ D(P_W)|, then we can prove that bp_int(M ⊕ P) ∩ D(P_W) ⊆ bp(M^opt) ∩ D(P_W), which implies |bp_int(M ⊕ P) ∩ D(P_W)| < |bp_int(M) ∩ D(P_W)|. To verify this it is enough to show that if, for a woman w_j ∈ P_W, (u_i, w_j) is an internal blocking pair for M ⊕ P, then (u_i, w_j) is an internal blocking pair for M^opt too. Note that (M ⊕ P)(w_j) = M^opt(w_j), and u_i is from the set of men covered by M ⊕ P, which satisfies (M ⊕ P)(W) ⊆ R_U ∪ P^0_U ∪ C_U ∪ P_U = (R_U ∪ P^0_U) ∪ (DIF \ P^0_U) ∪ (P_U \ P^0_U). If u_i ∈ R_U or u_i ∈ P^0_U then (M ⊕ P)(u_i) = M^opt(u_i), so the statement is obvious. If u_i ∈ DIF \ P^0_U then (u_i, w_j) ∉ E(G), since w_j can be neither (M ⊕ P)(u_i) = M(u_i) nor M^opt(u_i). Finally, if u_i ∈ P_U \ P^0_U then u_i is uncovered by M^opt, so again (u_i, w_j) cannot be blocking for M ⊕ P, since there is no external blocking pair for M^opt.

Similarly, if for an alternating cycle C we have |bp(M^opt) ∩ D(C_W)| < |bp(M) ∩ D(C_W)|, then we can prove in the same way that bp_int(M ⊕ C) ∩ D(C_W) ⊆ bp(M^opt) ∩ D(C_W), which implies |bp_int(M ⊕ C) ∩ D(C_W)| < |bp_int(M) ∩ D(C_W)|.

Conclusion of the proof. If a matching M is not optimal and there is no external blocking pair, then Claim 3 implies that we can find an alternating path or cycle that satisfies the condition described in Claim 2, so by switching the edges along this path or cycle the number of internal blocking pairs is reduced. Finally, the overall algorithm has complexity O(n^3) (see [5] for full details). ⊓⊔
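The elimination phase of Claim 1 translates directly into code. A sketch (ours; it reuses the blocking_pairs helper from the earlier sketch, and all names are illustrative):

    def eliminate_external(mpref, wpref, match):
        """Phase 1 of the algorithm in Lemma 3: repeatedly resolve an
        external blocking pair (the man is unmatched) by promoting the
        man to his most-preferred such woman, displacing her partner."""
        while True:
            ext = [(m, w) for (m, w) in blocking_pairs(mpref, wpref, match)
                   if m not in match]              # external: man uncovered
            if not ext:
                return match                       # now bp_int = bp
            m = ext[0][0]
            w = min((w for (x, w) in ext if x == m),
                    key=lambda w: mpref[m].index(w))
            old = next((u for u, v in match.items() if v == w), None)
            if old is not None:
                del match[old]                     # w abandons M(w) for m
            match[m] = w

Termination follows as in the proof: each step strictly improves the partner of some woman, so no pair is eliminated twice.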


Theorem 4. max size min bp (2, ∞)-smi is solvable in O(n^3) time, where n is the number of men in I.

Proof. Let the bipartite graph be G = (U ∪ W, E), where every man in U has a preference list of length at most 2. First, we decompose G by using König's theorem. Let X ⊆ U and Y ⊆ W be such that X ∪ Y is a minimum vertex cover, whose size is equal to the size of a maximum matching of G. Let M be a maximum matching that covers X ∪ Y. Note that there cannot be an edge (x, y) in M with (x, y) ∈ X × Y. Let U_2 be the subset of X such that for every u_i ∈ U_2 there is an alternating path from some y ∈ Y to u_i, and let W_2 = M(U_2). Furthermore, let U_3 = X \ U_2, U_1 = U \ X, W_1 = Y and W_3 = W \ (W_1 ∪ W_2). We claim that W_1 ∪ W_2 ∪ U_3 is also a minimum vertex cover, and moreover that the subgraph induced by U_1 ∪ U_2 ∪ W_1 ∪ W_2 is disconnected from the subgraph induced by U_3 ∪ W_3. The fact that there is no edge between W_1 ∪ W_2 and U_3 is obvious by the definition of U_2. There is no edge between U_1 and W_3 since X ∪ Y is a vertex cover. Finally, for every man u_i in U_2, both women in u_i's list must be in W_1 ∪ W_2 by the definition of U_2, so no woman in u_i's list can be from W_3. Therefore, we can obtain the solution for an instance I of max size min bp (2, ∞)-smi by separately solving a problem of men-cover min bp (2, ∞)-smi for the subinstance restricted to U_3 ∪ W_3 and a problem of women-cover min bp (2, ∞)-smi for the subinstance restricted to U_1 ∪ U_2 ∪ W_1 ∪ W_2. ⊓⊔
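The first step of the proof (a maximum matching together with a König minimum vertex cover) is routine; a sketch assuming the networkx library (the alternating-path split of X into U_2 and U_3 is omitted):

    import networkx as nx
    from networkx.algorithms import bipartite

    def koenig_decomposition(men, women, edges):
        """Maximum matching M and minimum vertex cover X ∪ Y of the
        underlying bipartite graph, as used in the proof of Theorem 4."""
        G = nx.Graph(edges)
        G.add_nodes_from(men)
        G.add_nodes_from(women)
        matching = bipartite.hopcroft_karp_matching(G, top_nodes=men)
        cover = bipartite.to_vertex_cover(G, matching, top_nodes=men)
        X, Y = cover & set(men), cover & set(women)
        return matching, X, Y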

5 Concluding Remarks

In Table 1 we summarise complexity results for problems involving finding stable matchings and finding matchings with the minimum number of blocking pairs, in the context of instances of smi and sr. The table is split into columns according to these problems, and further according to whether the preference lists are strictly ordered or include ties. So far all preference lists in this paper have been strictly ordered; however, ties arise in practice: for example, a large hospital with many applicants may be indifferent between those in certain groups. The rows of the table refer to the case that we seek either a stable matching or a matching with the minimum number of blocking pairs; these rows are further split into the cases that the matching should be of arbitrary or maximum size.

In a given table entry, 'P' denotes that the problem in question is polynomial-time solvable, whilst 'NPc' denotes the NP-completeness of the related decision problem. Furthermore, '=0' denotes the fact that an optimal solution admits 0 blocking pairs, whilst '(*)' indicates that the complexity result is established in this paper. Indeed, the complexity result in the last row shown in boldface implies the result immediately to its right.

Table 1 already indicates that the hardness results of Sections 2 and 3 also apply to the extension of smi to the case where preference lists may include ties. However it remains to extend the algorithms of Section 4 to this setting. Similar remarks apply if we are to consider the extension of the results to hr (and its generalisation hrt, where preference lists may include ties). The inapproximability result established by Theorem 3 leaves open the question as to whether there is a c-approximation algorithm for max size min bp (3, 3)-smi, for some constant c > 1.


Table 1. Complexity results for problems involving finding stable matchings and finding matchings with the minimum number of blocking pairs

                                                 smi instances               sr instances
The problem is to find a matching M ...          strict       with ties      strict      with ties
such that M is stable,           M arbitrary     P [7]        P [7,9]        P [10]      NPc [19,13]
such that M is stable,           M maximum       P [7,8]      NPc [16]       P [10,9]    NPc [19,13]
with min no. of blocking pairs,  M arbitrary     P (=0) [7]   P (=0) [7,9]   NPc [2]     NPc [2]
with min no. of blocking pairs,  M maximum       NPc (*)      NPc (*)        NPc [2]     NPc [2]

We conclude by mentioning an alternative way to minimise the instability of a maximum matching M in an instance of smi. As described by Eriksson and Häggström [6], rather than trying to minimise |bp(M)|, one could try to minimise the number of people who are involved in blocking pairs of M. We can modify the proofs of the results in Sections 2 and 3 so that they hold for this variant of max size min bp smi (the details are omitted for space reasons); however, again it remains to extend the results of Section 4 to this case.

Acknowledgement. We would like to thank Rob Irving and the anonymous referees for helpful comments on earlier versions of this paper.

References

1. Abdulkadiroğlu, A., Sönmez, T.: School choice: A mechanism design approach. American Economic Review 93(3), 729–747 (2003)
2. Abraham, D.J., Biró, P., Manlove, D.F.: "Almost stable" matchings in the roommates problem. In: Erlebach, T., Persiano, G. (eds.) WAOA 2005. LNCS, vol. 3879, pp. 1–14. Springer, Heidelberg (2006)
3. Abraham, D.J., Irving, R.W., Manlove, D.F.: Two algorithms for the Student-Project allocation problem. Journal of Discrete Algorithms 5(1), 79–91 (2007)
4. Berman, P., Karpinski, M., Scott, A.D.: Approximation hardness of short symmetric instances of MAX-3SAT. Electronic Colloquium on Computational Complexity, Report 49 (2003)
5. Biró, P., Manlove, D.F., Mittal, S.: Size versus stability in the marriage problem. Technical Report TR-2008-283, University of Glasgow, Department of Computing Science (2008)
6. Eriksson, K., Häggström, O.: Instability of matchings in decentralized markets with various preference structures. International Journal of Game Theory (2008)
7. Gale, D., Shapley, L.S.: College admissions and the stability of marriage. American Mathematical Monthly 69, 9–15 (1962)
8. Gale, D., Sotomayor, M.: Some remarks on the stable matching problem. Discrete Applied Mathematics 11, 223–232 (1985)
9. Gusfield, D., Irving, R.W.: The Stable Marriage Problem: Structure and Algorithms. MIT Press, Cambridge (1989)


10. Irving, R.W.: An efficient algorithm for the "stable roommates" problem. Journal of Algorithms 6, 577–595 (1985)
11. Irving, R.W.: Matching medical students to pairs of hospitals: A new variation on a well-known theme. In: Bilardi, G., Pietracaprina, A., Italiano, G.F., Pucci, G. (eds.) ESA 1998. LNCS, vol. 1461, pp. 381–392. Springer, Heidelberg (1998)
12. Irving, R.W., Leather, P.: The complexity of counting stable marriages. SIAM Journal on Computing 15(3), 655–667 (1986)
13. Irving, R.W., Manlove, D.F.: The Stable Roommates Problem with Ties. Journal of Algorithms 43, 85–105 (2002)
14. Khuller, S., Mitchell, S.G., Vazirani, V.V.: On-line algorithms for weighted bipartite matching and stable marriages. Theoretical Computer Science 127, 255–267 (1994)
15. Kujansuu, E., Lindberg, T., Mäkinen, E.: The stable roommates problem and chess tournament pairings. Divulgaciones Matemáticas 7(1), 19–28 (1999)
16. Manlove, D.F., Irving, R.W., Iwama, K., Miyazaki, S., Morita, Y.: Hard variants of stable marriage. Theoretical Computer Science 276(1-2), 261–279 (2002)
17. O'Malley, G.: Algorithmic Aspects of Stable Matching Problems. PhD thesis, University of Glasgow, Department of Computing Science (2007)
18. Robards, P.A.: Applying two-sided matching processes to the United States Navy enlisted assignment process. Master's thesis, Naval Postgraduate School, Monterey, California (2001)
19. Ronn, E.: NP-complete stable matching problems. Journal of Algorithms 11, 285–304 (1990)
20. Roth, A.E.: The evolution of the labor market for medical interns and residents: a case study in game theory. Journal of Political Economy 92(6), 991–1016 (1984)
21. Roth, A.E., Sönmez, T., Ünver, M.U.: Kidney exchange. Quarterly Journal of Economics 119, 457–488 (2004)
22. Roth, A.E., Sönmez, T., Ünver, M.U.: Pairwise kidney exchange. Journal of Economic Theory 125, 151–188 (2005)
23. Yang, W., Giampapa, J.A., Sycara, K.: Two-sided matching for the U.S. Navy Detailing Process with market complication. Technical Report CMU-RI-TR-03-49, Robotics Institute, Carnegie Mellon University (2003)
24. http://www.nrmp.org (National Resident Matching Program website)
25. http://www.carms.ca (Canadian Resident Matching Service website)
26. http://www.nes.scot.nhs.uk/sfas (Scottish Foundation Allocation Scheme website)
27. http://www.nepke.org (New England Program for Kidney Exchange website)

Degree-Constrained Subgraph Problems: Hardness and Approximation Results⋆

Omid Amini1, David Peleg2, Stéphane Pérennes3, Ignasi Sau3,4, and Saket Saurabh5

1 Max-Planck-Institut für Informatik, Saarbrücken, Germany
  [email protected]
2 Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel
  [email protected]
3 Mascotte joint project - INRIA/CNRS-I3S/UNSA, Sophia-Antipolis, France
  {Stephane.Perennes,Ignasi.Sau}@sophia.inria.fr
4 Graph Theory and Combinatorics group, Applied Mathematics IV Department of UPC, Barcelona, Spain
5 Department of Informatics, University of Bergen, Bergen, Norway
  [email protected]

Abstract. A general instance of a Degree-Constrained Subgraph problem consists of an edge-weighted or vertex-weighted graph G, and the objective is to find an optimal weighted subgraph, subject to certain degree constraints on the vertices of the subgraph. This paper considers two natural Degree-Constrained Subgraph problems and studies their behavior in terms of approximation algorithms. These problems take as input an undirected graph G = (V, E), with |V| = n and |E| = m. Our results, together with the definitions of the two problems, are listed below.

• The Maximum Degree-Bounded Connected Subgraph problem (MDBCS_d) takes as input a weight function ω : E → R+ and an integer d ≥ 2, and asks for a subset E′ ⊆ E such that the subgraph G′ = (V, E′) is connected, has maximum degree at most d, and Σ_{e∈E′} ω(e) is maximized. This problem is one of the classical NP-hard problems listed by Garey and Johnson in [Computers and Intractability, W.H. Freeman, 1979], but there were no results in the literature except for d = 2. We prove that MDBCS_d is not in Apx for any d ≥ 2 (this was known only for d = 2) and we provide a (min{m/log n, nd/(2 log n)})-approximation algorithm for unweighted graphs, and a (min{n/2, m/d})-approximation algorithm for weighted graphs. We also prove that when G has a low-degree spanning tree, in terms of d, MDBCS_d can be approximated within a small constant factor in unweighted graphs.

⋆ This work has been partially supported by European project IST FET AEOLUS, PACA region of France, Ministerio de Educación y Ciencia of Spain, European Regional Development Fund under project TEC2005-03575, Catalan Research Council under project 2005SGR00256 and COST action 293 GRAAL, and has been done in the context of the crc Corso with France Telecom.


• The Minimum Subgraph of Minimum Degree ≥ d (MSMD_d) problem requires finding a smallest subgraph of G (in terms of number of vertices) with minimum degree at least d. We prove that MSMD_d is not in Apx for any d ≥ 3, and we provide an O(n/log n)-approximation algorithm for the class of graphs excluding a fixed graph as a minor, using dynamic programming techniques and a known structural result on graph minors.

Keywords: Approximation Algorithms, Degree-Constrained Subgraphs, Hardness of Approximation, Apx, PTAS, Excluded Minor.

1 Introduction

In this paper we consider two natural Degree-Constrained Subgraph problems and study them in terms of approximation algorithms. A general instance of a Degree-Constrained Subgraph problem consists of an edge-weighted or vertex-weighted graph G, and the objective is to find an optimal weighted subgraph, subject to certain degree constraints on the vertices of the subgraph. These problems have attracted a lot of attention in the last decades and have resulted in a large body of literature [1, 8, 10, 11, 12, 14, 17, 20]. The most well-studied ones are probably the Minimum-Degree Spanning Tree [10] and the Minimum-Degree Steiner Tree [11] problems.

Beyond the esthetic and theoretical appeal of Degree-Constrained Subgraph problems, the reasons for such intensive study are rooted in their wide applicability in the areas of interconnection networks and routing algorithms, among others. For instance, given an interconnection network modeled by an undirected graph, one may be interested in finding a small subset of nodes having a high degree of connectivity for each node. This translates into finding a small subgraph with a lower bound on the degree of its vertices, i.e., to the MSMD_d problem. Note that if the input graph is bipartite, these problems are equivalent to classical transportation and assignment problems in operations research.

The first problem studied in the paper is a classical NP-hard problem listed in [13] (cf. Problem [GT26] for the unweighted version):

Maximum Degree-Bounded Connected Subgraph (MDBCS_d)
Input: A graph G = (V, E), a weight function ω : E → R+ and an integer d ≥ 2.
Output: A subset E′ ⊆ E such that the subgraph G′ = (V, E′) is connected, has maximum degree at most d, and Σ_{e∈E′} ω(e) is maximized.

For d = 2, the unweighted MDBCS_d problem corresponds to the Longest Path problem. Indeed, given the input graph G (which can be assumed to be connected), let P and G′ be optimal solutions of Longest Path and MDBCS_2 in G, respectively. Then observe that |E(G′)| = |E(P)| unless G is Hamiltonian, in which case |E(G′)| = |E(P)| + 1.
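For very small instances, the problem can be sanity-checked by brute force over edge subsets. The following sketch is ours, not from the paper (exponential time, for illustration only); it checks connectivity on the vertices covered by E′:

    from itertools import combinations

    def mdbcs_brute_force(weighted_edges, d):
        """Maximize total weight over connected subgraphs of max degree <= d."""
        best, best_sub = 0.0, []
        edges = list(weighted_edges)                # (u, v, weight) triples
        for k in range(1, len(edges) + 1):
            for sub in combinations(edges, k):
                deg, adj = {}, {}
                for u, v, _ in sub:
                    deg[u] = deg.get(u, 0) + 1
                    deg[v] = deg.get(v, 0) + 1
                    adj.setdefault(u, []).append(v)
                    adj.setdefault(v, []).append(u)
                if max(deg.values()) > d:
                    continue
                seen, stack = set(), [sub[0][0]]    # connectivity check
                while stack:
                    x = stack.pop()
                    if x not in seen:
                        seen.add(x)
                        stack.extend(adj[x])
                if seen != set(deg):
                    continue
                weight = sum(wt for _, _, wt in sub)
                if weight > best:
                    best, best_sub = weight, list(sub)
        return best, best_sub

    # Triangle with one heavy edge, d = 2: the whole triangle is feasible.
    print(mdbcs_brute_force([(0, 1, 1.0), (1, 2, 1.0), (2, 0, 5.0)], 2))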


One could also ask: what happens when G′ is not required to be connected in the definition of MDBCS_d? It turns out that without the connectivity constraint, both the edge version and the vertex version (where the goal is to maximize the total weight of the vertices of a subgraph respecting the degree constraints) of the MDBCS_d problem are known to be solvable in polynomial time using matching techniques [13, 16]. In fact, without connectivity constraints, even a more general version where the input contains an interval of allowed degrees for each node is known to be solvable in polynomial time.

The most general version of Degree-Constrained Subgraph problems is to find a subgraph under constraints given by lower and upper bounds on the degree of each vertex, the objective being to minimize or maximize some parameter (usually the size of the subgraph). A common variant ignores the lower bound on the degree and just requires the vertices of the subgraph to have a given maximum degree [20], in which case the typical optimization criterion is to maximize the size of a subgraph satisfying the degree constraints. The resulting problem is also called an Upper Degree-Constrained Subgraph problem in [12]. In contrast, we are unaware of existing results considering just a lower bound on the degree of the vertices of the subgraph, except for combinatorial conditions on the existence of such a subgraph [8]. In an attempt to fill this void in the literature, the last problem considered in this paper aims at minimizing the size of a subgraph with a given minimum degree. For a graph H, let δ_H denote the minimum degree of the vertices in H.

Minimum Subgraph of Minimum Degree ≥ d (MSMD_d)
Input: An undirected graph G = (V, E) and an integer d ≥ 2.
Output: A subset S ⊆ V such that for H = G[S], δ_H ≥ d and |S| is minimized.

MSMD_d is closely related to MDBCS_d. Indeed, MSMD_d corresponds exactly to the dual (unweighted) node-minimization version of MDBCS_d. MSMD_d is also a generalization of the Girth problem (finding a shortest cycle), which corresponds exactly to the case d = 2. In Amini et al. [5], the MSMD_d problem was introduced and studied in the realm of parameterized complexity. It was shown that MSMD_d is W[1]-hard for d ≥ 3, and explicit FPT algorithms were given for the class of graphs excluding a fixed graph as a minor and for graphs of bounded local treewidth.

Besides the above discussion, our main motivation for studying MSMD_d is its close relation to the well-studied Dense k-Subgraph (DkS) [9, 15] and Traffic Grooming [4] problems. Indeed, if good approximate solutions could be found for the MSMD_d problem, then one could also find good approximate solutions (up to a constant factor) for the DkS and Traffic Grooming problems. Roughly, the idea is that a small subgraph with minimum degree at least d has density at least d/2, and this provides an approximation for the densest subgraph (in fact, Traffic Grooming can be reduced, essentially, to finding dense subgraphs). See [4,5] for further details.

The above discussion illustrates that the study of the above-mentioned problems is very natural and that the results obtained for them can reverberate in several other important optimization problems, coming from both theoretical and practical domains.


Our Results: In this paper we obtain both approximation algorithms and results on hardness of approximation. All the hardness results are based on the hypothesis P ≠ NP. More precisely, our results are the following:

• We prove that the MDBCS_d problem is not in Apx for any d ≥ 2. On the other hand, we give an approximation algorithm for general unweighted graphs with ratio min{m/log n, nd/(2 log n)}, and an approximation algorithm for general weighted graphs with ratio min{n/2, m/d}. The first algorithm uses an algorithm introduced in [2] that is based on the color-coding method. In the full version [3] we also present a constant-factor approximation when the input graph has a low-degree spanning tree, in terms of the integer d.

• We prove that the MSMD_d problem is not in Apx for any d ≥ 3. The proof is obtained by the following two steps. First, by a reduction from Vertex Cover, we prove that MSMD_d does not admit a PTAS. In particular, this implies that MSMD_d is NP-hard for any d ≥ 3. Then, we use the error amplification technique to prove that MSMD_d is not in Apx for any d ≥ 3. On the positive side, we give an O(n/log n)-approximation algorithm for the class of graphs excluding a fixed graph H as a minor, using a known structural result on graph minors and dynamic programming over graphs of bounded treewidth. In particular, this gives an O(n/log n)-approximation algorithm for planar graphs and graphs of bounded genus.

Organization of the paper: In Section 2 we establish that MDBCS_d is not in Apx for any d ≥ 2, and in Section 3 we present two approximation algorithms for unweighted and weighted general graphs, respectively. The constant-factor approximation for MDBCS_d when the input graph has a low-degree spanning tree is provided in [3] for unweighted graphs. In Section 4 we prove that MSMD_d is not in Apx for any d ≥ 3, and in Section 5 we give an O(n/log n)-approximation algorithm for the class of graphs excluding a fixed graph H as a minor. Finally, we conclude with some remarks and open problems in Section 6. The omitted proofs and some basic definitions can be found in [3].

2 Hardness of Approximating MDBCS_d

As mentioned in Section 1, MDBCS_2 is exactly the Longest Path problem, which is known to admit no constant-factor approximation [14] unless P = NP. In this section we extend this result and prove that, unless P = NP, MDBCS_d is not in Apx for any d ≥ 2, proving first that MDBCS_d is not in PTAS for any d ≥ 2. We refer to [3] for the definitions of the complexity classes Apx and PTAS and for the notion of gap-preserving reduction, which will be used freely throughout the paper.

Theorem 1. MDBCS_d does not admit a PTAS for any d ≥ 2, unless P = NP.

Proof: We prove the result for the case when d ≥ 3; the result for the case d = 2 follows from [14]. We give our reduction from TSP(1, 2), which does not have


a PTAS unless P = NP [19]. An instance of TSP(1, 2) consists of a complete graph G = (V, E) on n vertices and a weight function f : E → {1, 2} on its edges, and the objective is to find a traveling salesman tour of minimum edge weight in G. We show that if there is a PTAS for MDBCS_d for some d ≥ 3, then it is possible to construct a PTAS for TSP(1, 2).

Towards this, we transform the graph G into a new graph G′ with a modified weight function g on its edges. For every vertex v ∈ V we add d − 2 new vertices {v_1, ..., v_{d-2}} and an edge from v to every vertex v_i, 1 ≤ i ≤ d − 2. This concludes the description of G′. Let V′ = {v_1, ..., v_{d-2} | v ∈ V} be the set of new vertices, and let E′ = {(v_i, v) | 1 ≤ i ≤ d − 2, v ∈ V} be the set of new edges. Define the weight function g of G′ as follows: g(e) = 3 − f(e) if e ∈ E (weights of original edges get flipped), and g(e) = 3 if e ∈ E′.

Next we prove a claim concerning the structure of the maximal solutions of MDBCS_d in G′. Essentially, we show that any solution G_1 of MDBCS_d in G′ with value W can be transformed into another solution G_2 of MDBCS_d in G′ with value at least W, such that G_2 contains all the newly added edges and induces a hamiltonian cycle in G. The proof is deferred to [3] due to lack of space.

Claim. Any solution G_1 = (V ∪ V′, E_1) to MDBCS_d in G′ can be transformed in polynomial time into a solution G_2 = (V ∪ V′, E_2) of MDBCS_d in G′ such that (a) G_3 = (V, E ∩ E_2) is a hamiltonian cycle in G, and (b) Σ_{e∈E_2} g(e) ≥ Σ_{e′∈E_1} g(e′).

Suppose that there exists a PTAS for MDBCS_d realized by an approximation scheme A_δ. This family of algorithms takes as input a graph G′′ and a parameter δ > 0, and returns a solution of MDBCS_d of weight at least (1 − δ)OPT_{G′′}, where OPT_{G′′} is the value of an optimum solution of MDBCS_d in G′′. Now we proceed to describe a PTAS for TSP(1, 2). Given a graph G, an instance of TSP(1, 2), and ε > 0, do the following:

• Fix δ = h(ε, d) (to be specified later) and run A_δ on G′ (the graph obtained from G with the transformation described above).
• Apply the polynomial-time transformation described in the Claim above to the solution obtained by A_δ on G′. Let the new solution be G* = (V ∪ V′, E*).
• Return E* ∩ E as the solution of TSP(1, 2).

Now we prove that, as claimed, the solution returned by our algorithm satisfies Σ_{e∈E*∩E} f(e) ≤ (1 + ε)O_T, where O_T is the weight of an optimum tour in G. Let such an optimum tour contain a edges of weight 1 and b edges of weight 2. Then O_T = a + 2b and a + b = n. Equivalently, a = 2n − O_T and b = O_T − n. Let O_D be the value of an optimum solution of MDBCS_d in G′. Then by the Claim above and the flipping nature of the function g, we have that

O_D = 3(d − 2)n + 2a + b.                                          (1)

Let 3(d − 2)n + O*_D be the value of the solution returned by A_δ, where O*_D is the sum of the weights of the edges of the hamiltonian cycle in G, that is, O*_D = Σ_{e∈E*∩E} g(e). Since A_δ is a PTAS,

3(d − 2)n + O*_D ≥ (1 − δ)O_D.                                     (2)

Combining Equation (1) and Inequality (2) gives

O*_D ≥ (1 − δ)O_D − 3(d − 2)n = 3n − O_T + δO_T − n(3d − 3)δ.      (3)

On the other hand, the value of the solution returned by our algorithm for TSP(1, 2) is O*_T = 3n − O*_D (since if O*_D = 2x + y, x being the number of edges of weight 2 and y being the number of edges of weight 1, with x + y = n, then the value of the solution for TSP(1, 2) is x + 2y). Substituting O*_D = 3n − O*_T in Inequality (3) and noting that O_T ≥ n yields

O*_T ≤ O_T − δO_T + n(3d − 3)δ ≤ O_T − δn + n(3d − 3)δ.            (4)

To show that O*_T ≤ (1 + ε)O_T, by (4) it suffices to bound −δn + n(3d − 3)δ ≤ ε · O_T. Rather, we show that −δn + n(3d − 3)δ ≤ εn, which automatically implies the required bound. This can be done by setting δ = h(ε, d) = ε/(3d − 4), yielding a PTAS for TSP(1, 2). Since TSP(1, 2) does not admit a PTAS [19], the last assertion also rules out the existence of a PTAS for MDBCS_d for any d ≥ 3, unless P = NP. ⊓⊔

We are now ready to state the main result of this section. The proof is based on using the inapproximability constant given by Theorem 1 and applying the error amplification technique to rule out the existence of a constant-factor approximation. The proof details of Theorem 2 are deferred to [3] due to lack of space.

Theorem 2. MDBCS_d, d ≥ 2, does not admit any constant-factor approximation, unless P = NP.
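The graph transformation used in the proof of Theorem 1 is easy to state in code; a sketch (ours; pendant vertices are tagged tuples, and all names are illustrative):

    def tsp12_to_mdbcs(n, f, d):
        """Build the weight function g of G' from a TSP(1,2) instance:
        n vertices 0..n-1, f[(u, v)] in {1, 2} for u < v, and d >= 3."""
        g = {}
        for (u, v), w in f.items():
            g[(u, v)] = 3 - w                  # weights 1 and 2 are flipped
        for v in range(n):
            for i in range(d - 2):
                g[(v, ("pendant", v, i))] = 3  # new edge of weight 3
        return g

    # A 3-vertex example with one weight-2 edge, for d = 3.
    print(tsp12_to_mdbcs(3, {(0, 1): 1, (0, 2): 1, (1, 2): 2}, 3))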

3 Approximating MDBCS_d

In this section we focus on approximating MDBCS_d. As seen in Section 2, MDBCS_d does not admit any constant-factor approximation in general graphs. In [3] we show that when the input graph has a low-degree spanning tree (in terms of d), MDBCS_d can be approximated within a constant factor. In this section we deal with general graphs. Concerning the Longest Path problem (which corresponds to the case d = 2 of MDBCS_d, as discussed in the introduction), the best approximation algorithm [6] has approximation ratio O(n(log log n/log n)^2), which improved the ratio O(n/log n) of [2]. Using the results of [2], we provide in Theorem 4 an approximation algorithm for MDBCS_d in general unweighted graphs for any d ≥ 2. Then we turn to weighted graphs,


providing a new approximation algorithm for general weighted graphs in Theorem 5. Finally, we compare both algorithms on unweighted graphs. To the best of our knowledge, these are the first approximation algorithms for MDBCS_d in general graphs. We need a preliminary lemma, which uses the following result:

Proposition 1. [18] Any unordered tree on n nodes can be represented using 2n + o(n) bits, with adjacency queries being supported in O(n) time.

Let T_{n,d} be the set of non-isomorphic unlabeled trees on n nodes with maximum degree at most d.

Lemma 1. The set T_{log n, d} can be generated in time polynomial in n.

Proof: It is well known [21] that |T_{n,n-1}| ∼ Cα^n n^{-5/2} as n → ∞, where C and α are positive constants. Hence, the set T_{log n, log n−1} has a number of elements polynomial in n. In addition, one can efficiently generate all the elements of T_{log n, log n−1}, since by Proposition 1 any unlabeled tree on log n nodes can be represented using 2 log n + o(log n) bits with adjacency queries being supported in O(log n) time. Finally, the set T_{log n, d} is obtained from T_{log n, log n−1} by removing all the elements T with Δ(T) > d, where Δ(T) is the maximum degree of the tree T. ⊓⊔

The main ingredient of the first algorithm is a powerful result of [2], which uses the color-coding method.

Theorem 3. [2] If a graph G = (V, E) contains a subgraph isomorphic to a graph H = (V_H, E_H) whose treewidth is at most t, then such a subgraph can be found in 2^{O(|V_H|)} · |V|^{t+1} · log |V| time.

In particular, trees on log |V| vertices can be found in time |V|^{O(1)} · log |V|. We are ready to describe our algorithm for unweighted graphs.

Algorithm A:
(1) Generate all the elements of T_{log n, d}. Define the set F := {}.
(2) For each T ∈ T_{log n, d}, test if G contains a subgraph isomorphic to T. If such a subgraph is found, add it to F.
(3) If F = ∅ or d > log n, output an arbitrary connected subgraph of G with d edges. Otherwise, output any element in F.

Theorem 4. For all d ≥ 2, algorithm A provides a ρ-approximation algorithm for MDBCS_d in unweighted graphs, with ρ = min{m, nd/2}/log n.

Proof: Let us first see that the running time of algorithm A is polynomial in n. Indeed, steps (1) and (2) can be executed in polynomial time by Lemma 1 and Theorem 3, respectively. Step (3) takes constant time. Algorithm A is clearly correct, since by definition of the set T_{log n, d} the output graph is a solution of MDBCS_d in G. Finally, let us consider the approximation ratio of algorithm A. Let OPT be the number of edges of an optimal solution of MDBCS_d in G, and let ALG be the number of edges of the solution found by algorithm A. We distinguish two cases:


• If OPT ≥ (d · log n)/2, then any optimal solution Ĥ has at least log n vertices. In particular, Ĥ contains a tree on log n vertices, and so does G. Hence, this tree will be found in step (2), and therefore ALG ≥ log n − 1. (We can assume that ALG = log n by replacing everywhere T_{log n, d} with T_{log n+1, d}.) On the other hand, we know that OPT ≤ min{m, nd/2}.
• Otherwise, if OPT < (d · log n)/2, then ALG ≥ d. Note that such a connected subgraph with d edges can be greedily found starting from any node of G.

In both cases, OPT/ALG ≤ max{min{m, nd/2}/log n, (log n)/2} = min{m, nd/2}/log n. ⊓⊔
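Step (1) of algorithm A can be realized with standard tools. A sketch assuming the networkx library (whose nonisomorphic_trees generator enumerates unlabeled trees) and taking log to base 2, a choice of ours:

    import math
    import networkx as nx

    def trees_log_n_d(n, d):
        """Generate T_{log n, d}: unlabeled trees on about log n nodes
        with maximum degree at most d (cf. Lemma 1)."""
        k = max(2, math.ceil(math.log2(n)))
        return [T for T in nx.nonisomorphic_trees(k)
                if max(deg for _, deg in T.degree()) <= d]

    print(len(trees_log_n_d(1024, 3)))   # trees on 10 nodes, max degree <= 3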

Theorem 5. The MDBCS_d problem admits a ρ-approximation algorithm on weighted graphs, with ρ = min{n/2, m/d}.

Proof: Let us describe the approximation algorithm, which we call algorithm B. Let F be the set of the d heaviest edges in the input graph G, and let W be the set of endpoints of those edges. We consider two cases according to the connectivity of the subgraph H = (W, F). Let ω(F) denote the total weight of the edges in F.

If H is connected, then the algorithm returns H. We claim that this yields a ρ-approximation. Indeed, if an optimal solution consists of m* edges of total weight ω*, then ALG = ω(F) ≥ (ω*/m*) · d, since by the choice of F the average weight of the edges in F cannot be smaller than the average weight of the edges of an optimal solution. As m* ≤ m and m* ≤ dn/2, we get that ALG ≥ (ω*/m) · d and ALG ≥ (ω*/(dn/2)) · d = ω*/(n/2).

Now suppose H = (W, F) consists of a collection 𝓕 of k connected components. Then we glue these components together in k − 1 phases. In each phase, we pick two components C, C′ ∈ 𝓕 and combine them into a new connected component Ĉ by adding a connecting path, without touching any other connected component of 𝓕. We then set 𝓕 ← 𝓕 \ {C, C′} ∪ {Ĉ}.

Each phase operates as follows. For every two components C, C′ ∈ 𝓕, compute their distance, defined as d(C, C′) = min{dist(u, u′, G) | u ∈ C, u′ ∈ C′}. Take a pair C, C′ ∈ 𝓕 attaining the smallest distance d(C, C′). Let u ∈ C and u′ ∈ C′ be two vertices realizing this distance, i.e., such that dist(u, u′, G) = d(C, C′). Let p(u, u′) be a shortest path between u and u′ in G. Let Ĉ be the connected component obtained by merging C, C′ and the path p(u, u′).

For the correctness proof, we need the following two observations. First, observe that in every phase, the path p(u, u′) used to merge the components C and C′ does not go through any other cluster C′′, since otherwise d(C, C′′) would be strictly smaller than d(C, C′), contradicting the choice of the pair (C, C′). Moreover, p(u, u′) does not go through any other vertex v in the cluster C except for its endpoint u, since otherwise dist(v, u′, G) < dist(u, u′, G), contradicting the choice of the pair u, u′. Similarly, p(u, u′) does not go through any other vertex v′ in C′.

We now claim that after i phases, the maximum degree of H satisfies Δ_H ≤ d − k + i + 1. This is proved by induction on i. For i = 0, i.e., for the initial graph H = (W, F), we observe that as F consists of d edges arranged in k


separate components, the largest component has no more than d − k + 1 edges, hence Δ_H ≤ d − k + 1, as required. Now suppose the claim holds after i − 1 phases, and consider phase i. All nodes other than those of the path p(u, u′) maintain their degree from the previous phase. The nodes u and u′ increase their degree by 1, so by the inductive hypothesis, their new degree is at most (d − k + (i − 1) + 1) + 1 = d − k + i + 1, as required. Finally, the intermediate nodes of p(u, u′) have degree 2 ≤ d − k + i + 1 (since i ≥ 1 and k ≤ d). It follows that by the end of phase k − 1, Δ_H ≤ d − k + k − 1 + 1 = d. Also, at that point H is connected. Hence H is a valid solution. Finally, observe that the approximation ratio of the algorithm is still at most ρ = min{n/2, m/d}, since this ratio was guaranteed for the originally selected F, and the final subgraph contains the set F. ⊓⊔

Comparing the approximation ratios of algorithms A and B, of Theorems 4 and 5 respectively, on unweighted graphs, we note that algorithm A performs better when d < 2 log n, while algorithm B is better when d ≥ 2 log n. Hence running algorithms A and B in parallel, and selecting the best solution, yields a ρ-approximation algorithm for the MDBCS_d problem on unweighted graphs with ρ = min{n/2, nd/(2 log n), m/d, m/log n}.
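A compact sketch of algorithm B (ours, assuming the networkx library; distances are hop counts, as in the proof, and G is assumed connected):

    import networkx as nx

    def heavy_then_glue(G, d):
        """Take the d heaviest edges of G, then glue the components of
        H = (W, F) along shortest connecting paths (proof of Theorem 5)."""
        F = sorted(G.edges(data="weight"), key=lambda e: -e[2])[:d]
        H = nx.Graph((u, v) for u, v, _ in F)
        comps = [set(c) for c in nx.connected_components(H)]
        while len(comps) > 1:
            best = None                      # closest pair of components
            for i in range(len(comps)):
                for j in range(i + 1, len(comps)):
                    for u in comps[i]:
                        for v in comps[j]:
                            p = nx.shortest_path(G, u, v)
                            if best is None or len(p) < len(best[0]):
                                best = (p, i, j)
            p, i, j = best
            H.add_edges_from(zip(p, p[1:]))  # add the connecting path
            merged = comps[i] | comps[j] | set(p)
            comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
        return list(H.edges())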

4 Hardness of Approximating MSMD_d

The main result of this section, Theorem 7, shows that MSMD_d does not admit a constant-factor approximation on general graphs, for d ≥ 3. We first prove that MSMD_d does not admit a PTAS, and then prove the main result using the error amplification technique. The first result is obtained by a reduction from the Vertex Cover (VC) problem.

Theorem 6. MSMD_d, for d ≥ 3, does not admit a PTAS unless P = NP.

Proof: We prove the theorem for d = 3, deferring the proof for d ≥ 4 to [3] due to lack of space. We give a gap-preserving reduction from Vertex Cover. Let H be an instance of Vertex Cover on n vertices. We construct an instance G = f(H) of MSMD_3. Without loss of generality, we can suppose that H contains 3 · 2^m edges for some integer m, and also that every vertex of H has degree at least three. Let T be the complete ternary rooted tree with root r and height m + 1, in which the root has three children and every other internal vertex has two children, so that every internal vertex has degree three. The number of leaves of T is 3 · 2^m, and T contains 3 · 2^{m+1} − 2 vertices. Let us identify the leaves of T with the edges of H, and call this set E (note that E ⊆ V(T)). We add another copy of E, called F, and a Hamiltonian cycle on E ∪ F inducing a bipartite graph with partition classes E and F, as shown in Fig. 1. Let us also identify the vertices of F with edges in H. Now we add n new vertices A identified with the vertices of H, and join them to the vertices of F according to the incidence relations between the edges and vertices in H, i.e. an element f ∈ F is connected to v ∈ A if the edge of H corresponding to f is incident to the vertex v of V(H). The graph G built in this way is depicted in Fig. 1.


Fig. 1. Graph G built in the reduction of Theorem 6

We claim that minimum subgraphs of G of minimum degree at least three correspond to minimum vertex covers of H and vice versa. To see this, first note that if such a subgraph U of G contains a vertex of T ∪ F, then it should contain all the vertices of T ∪ F, because of the degree constraints. Obviously U cannot consist just of vertices of A, hence U must contain all the vertices of T ∪ F. Note that all the vertices of F have degree two in G[T ∪ F]. Therefore, the problem reduces to finding the smallest subset of vertices in A covering all the vertices in F. This is exactly the Vertex Cover problem for H. Thus, we have that

OPT_MSMD3(G) = OPT_VC(H) + |V(T)| + |V(F)| = OPT_VC(H) + 9·2^m − 2.

To complete the proof, note that Vertex Cover is Apx-hard, even restricted to graphs H of size linear in OPT_VC(H). A PTAS for MSMD3 would provide a PTAS for Vertex Cover, which is a contradiction (assuming Apx ≠ PTAS). ⊓⊔

Theorem 7. MSMDd, d ≥ 3, does not admit any constant-factor approximation, unless P = NP.

Proof: Again we give the details for d = 3, and defer the result for d ≥ 4 to [3]. The proof is by appropriately applying the standard error amplification technique. Let G1 = {G} be the family of graphs constructed above (Fig. 1) from the instances H of Vertex Cover, G being a typical member of this family, and let α > 1 be the factor of inapproximability of MSMD3 that exists by Theorem 6. We construct a sequence of families of graphs Gk, such that MSMD3 is hard to approximate within a factor Θ(α^k) in the family Gk. This proves that MSMD3 does not admit any constant-factor approximation. In the following, Gk denotes a typical element of Gk constructed using the element G of G1. We describe the construction of G2, and obtain the result by repeating the same construction


inductively to obtain Gk. For every vertex v in G (of degree dv), construct a graph Gv as follows. First, take a copy of G, and choose dv arbitrary vertices x1, . . . , xdv of degree three in T ⊂ G. Then, replace each of these vertices xi with a cycle of length four, and join three of the vertices of the cycle to the three neighbors of xi, i = 1, . . . , dv. Let Gv be the graph obtained in this way. Note that it contains exactly dv vertices of degree two. Now, take a copy of G, and replace each vertex v with Gv. Then, join the dv edges incident to v to the dv vertices of degree two in Gv. This completes the construction of the graph G2. Note that |V(G2)| = |V(G)|² + o(|V(G)|²), because each vertex of G is replaced with a copy of G where we had replaced some of the vertices with a cycle of length four. To find a solution of MSMD3 in G2, note that for any v ∈ V(G), once a vertex in Gv is chosen, we have to look for MSMD3 in G, which is hard up to a constant factor α. But approximating the number of v's for which we should touch Gv is also MSMD3 in G, which is hard up to the same factor α. This proves that approximating MSMD3 in G2 is hard up to a factor α². The proof of the theorem is completed by repeating this procedure, applying the same construction to obtain G3, and inductively Gk. ⊓⊔

5 Approximating MSMDd

In this section, it is shown that for fixed d, MSMDd is in P for graphs whose treewidth is O(log n). This is done by giving a polynomial-time algorithm based on dynamic programming. We refer to [3] for the definitions of tree-decomposition and treewidth. This dynamic programming algorithm is then used in Section 5.2 to provide an O(n/ log n)-approximation algorithm for MSMDd for all classes of graphs excluding a fixed graph as a minor. This algorithm relies on a partitioning result for minor-excluded classes of graphs, proved by Demaine et al. in [7].

5.1 MSMDd Is in P for Graphs with Small Treewidth

In order to prove our results we need the following lemma, which gives the time complexity of finding a smallest induced subgraph of degree at least d in graphs of bounded treewidth. The proof is based on standard dynamic programming techniques, and can be found in [3].

Lemma 2. Let G be a graph on n vertices with a tree-decomposition of width at most t, and let d be a positive integer. Then in time O((d + 1)^t · (t + 1)^(d²) · n) one can either find a smallest induced subgraph of minimum degree at least d in G, or identify that no such subgraph exists.

As usual in algorithms based on tree-decompositions, the proof relies on a dynamic programming approach based on a given nice decomposition, which at the end either produces a connected subgraph of G of minimum degree at least d and size at most k, or decides that G does not have any such subgraph.


Given a tree-decomposition (T, X), first suppose that the tree T is rooted at a given fixed vertex r. A {0, 1, 2, . . . , d}-coloring of the vertices in Xi is a function ci : Xi → {0, 1, . . . , d − 1, d}. Let supp(c) = {v ∈ Xi | c(v) ≠ 0} be the support of c. For any such {0, 1, . . . , d}-coloring c of the vertices in Xi, denote by a(i, c) the minimum size of an induced subgraph H(i, c) of G[Xi ∪ ⋃_{j a child of i} Xj] which has degree c(v) for every v ∈ Xi with c(v) ≠ d, and degree at least d on its other vertices. Note that H(i, c) ∩ Xi = supp(c). If such a subgraph does not exist, define a(i, c) = +∞. We can then develop recursive formulas for a(i, c), starting from the leaves of T. Looking at the values of a(r, c) we can decide if such a subgraph exists in G. The complete proof of Lemma 2 is given in [3].

A graph G is q-degenerate if every induced subgraph of G has a vertex of degree at most q. It is well known that there is a constant c such that for every h, every graph with no Kh minor is ch√(log h)-degenerate. This implies that M-minor-free graphs with |M| = h are ch√(log h)-degenerate, and hence the largest value of d for which MSMDd is non-empty is ch√(log h), a constant. The above discussion, combined with the time complexity analysis mentioned in Lemma 2, implies the following corollary.

Corollary 1. Let G be an n-vertex graph excluding a fixed graph M as a minor, with a tree-decomposition of width O(log n), and let d be a (constant) positive integer. Then in polynomial time one can either find a smallest induced subgraph of minimum degree at least d in G, or conclude that no such subgraph exists.
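As a concrete aside (our own illustration; the paper only uses the notion, not an algorithm for it), the degeneracy q of a graph can be computed by the standard peeling procedure: repeatedly delete a minimum-degree vertex and record the largest degree seen at deletion time.

```python
def degeneracy(adj):
    """Degeneracy of a graph given as {vertex: set_of_neighbours}:
    the smallest q such that every induced subgraph has a vertex of
    degree at most q."""
    adj = {v: set(nb) for v, nb in adj.items()}   # local copy
    q = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # min-degree vertex
        q = max(q, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return q

# Example: K4 is 3-degenerate, while any tree is 1-degenerate.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
assert degeneracy(k4) == 3
```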

5.2 Approximation Algorithm for M-Minor-Free Graphs

The following result of Demaine et al. [7] provides a way of partitioning the vertices of a graph excluding a fixed graph as a minor into subsets inducing graphs of small treewidth.

Theorem 8. [7] For a fixed graph M, there is a constant cM such that for any integer k ≥ 1 and for every M-minor-free graph G, the vertices of G (or the edges of G) can be partitioned into k + 1 sets such that any k of the sets induce a graph of treewidth at most cM·k. Furthermore, such a partition can be found in polynomial time.

One may assume without loss of generality that the minimum degree of the minor-free input graph G = (V, E) is at least d (by removing all the vertices of lower degree), and also that |V(G)| = n = 2^p for some integer p ≥ 0 (otherwise, replace log n with ⌈log n⌉ in the description of the algorithm).

Description of the algorithm:
(1) Relying on Theorem 8, partition V(G) in polynomial time into log n + 1 sets V0, . . . , Vlog n such that any log n of the sets induce a graph of treewidth at most cM log n, where cM is a constant depending only on the excluded graph M.
(2) Run the dynamic programming algorithm of Section 5.1 on each of the log n + 1 subgraphs Gi = G[V \ Vi], i = 0, . . . , log n; each such subgraph is induced by log n of the sets.


(3) This procedure finds all the solutions of size at most log n. If no solution is found, output the whole graph G.

This algorithm clearly provides an O(n/ log n)-approximation for MSMDd in minor-free graphs, for all d ≥ 3. The running time of the algorithm is polynomial in n, since in step (2), for each Gi, the dynamic programming algorithm runs in O((d + 1)^(ti) · (ti + 1)^(d²) · n) time, where ti is the treewidth of Gi, which is at most cM log n.
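A compact way to organize steps (1)–(3) in code is sketched below (under stated assumptions: `partition_fn` stands for the polynomial-time partition of Theorem 8 and `dp_fn` for the dynamic program of Section 5.1; both names are ours, not the paper's):

```python
import math

def approx_msmd_minor_free(G, d, partition_fn, dp_fn):
    """O(n/log n)-approximation sketch for MSMDd on M-minor-free G.
    partition_fn(G, k) -> the k+1 vertex classes of Theorem 8;
    dp_fn(H, d) -> a smallest induced subgraph of min degree >= d in a
    bounded-treewidth graph H, or None if no such subgraph exists."""
    n = G.number_of_nodes()
    k = math.ceil(math.log2(n))
    parts = partition_fn(G, k)          # V_0, ..., V_k
    best = None
    for i in range(k + 1):
        H = G.subgraph(set(G.nodes()) - set(parts[i]))  # G_i = G[V \ V_i]
        sol = dp_fn(H, d)               # treewidth of H is O(log n)
        if sol is not None and (best is None or len(sol) < len(best)):
            best = sol
    # A solution of size <= log n misses some class V_i entirely (there
    # are log n + 1 classes), so it is found above; otherwise output G.
    return best if best is not None else G
```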

6 Conclusions

This paper considered two Degree-Constrained Subgraph problems and studied their behavior in terms of approximation algorithms and hardness of approximation. Our main results and several interesting questions that remain open are discussed below.

We proved that the MDBCSd problem is not in Apx for any d ≥ 2, and provided a deterministic approximation algorithm with ratio min{n/2, m/d} (respectively, min{m/ log n, nd/(2 log n)}) for general weighted (resp., unweighted) graphs. Finally, we gave a constant-factor approximation when the input graph has a low-degree spanning tree. Closing the huge gap between the hardness bound and the approximation ratio of our algorithm looks like a promising research direction.

We proved that the MSMDd problem is not in Apx for any d ≥ 3. It would be interesting to strengthen this hardness result using the power of the PCP theorem. On the positive side, we gave an O(n/ log n)-approximation algorithm for the class of graphs excluding a fixed graph M as a minor. Finally, finding an approximation algorithm for MSMDd in general graphs seems to be a challenging open problem. It seems that MSMDd remains hard even for proper minor-closed classes of graphs.

References

1. Addario-Berry, L., Dalal, K., Reed, B.: Degree constrained subgraphs. Discrete Appl. Math. 156(7), 1168–1174 (2008)
2. Alon, N., Yuster, R., Zwick, U.: Color-coding: a new method for finding simple paths, cycles and other small subgraphs within large graphs. In: Proc. 26th ACM Symp. on Theory of Computing, New York, USA, pp. 326–335 (1994)
3. Amini, O., Peleg, D., Pérennes, S., Sau, I., Saurabh, S.: Degree-Constrained Subgraph Problems: Hardness and Approximation Results. Technical Report 6690, INRIA, accessible from Ignasi Sau's homepage (2008)
4. Amini, O., Pérennes, S., Sau, I.: Hardness and Approximation of Traffic Grooming. In: Tokuyama, T. (ed.) ISAAC 2007. LNCS, vol. 4835, pp. 561–573. Springer, Heidelberg (2007)


5. Amini, O., Sau, I., Saurabh, S.: Parameterized Complexity of the Smallest Degree-Constrained Subgraph Problem. In: Grohe, M., Niedermeier, R. (eds.) IWPEC 2008. LNCS, vol. 5018, pp. 13–29. Springer, Heidelberg (2008)
6. Björklund, A., Husfeldt, T.: Finding a Path of Superlogarithmic Length. SIAM J. on Computing 32(6), 1395–1402 (2003)
7. Demaine, E., Hajiaghayi, M., Kawarabayashi, K.C.: Algorithmic Graph Minor Theory: Decomposition, Approximation and Coloring. In: Proc. 46th IEEE Symp. on Foundations of Computer Science, pp. 637–646 (October 2005)
8. Erdős, P., Faudree, R., Rousseau, C.C., Schelp, R.H.: Subgraphs of minimal degree k. Discrete Math. 85(1), 53–58 (1990)
9. Feige, U., Peleg, D., Kortsarz, G.: The Dense k-Subgraph Problem. Algorithmica 29(3), 410–421 (2001)
10. Fürer, M., Raghavachari, B.: Approximating the minimum-degree spanning tree to within one from the optimal degree. In: Proc. 3rd ACM-SIAM Symp. on Discrete Algorithms, USA, pp. 317–324 (1992)
11. Fürer, M., Raghavachari, B.: Approximating the minimum-degree Steiner tree to within one of optimal. J. Algorithms 17(3), 409–423 (1994)
12. Gabow, H.: An efficient reduction technique for degree-constrained subgraph and bidirected network flow problems. In: Proc. 15th ACM Symp. on Theory of Computing, USA, pp. 448–456. ACM Press, New York (1983)
13. Garey, M., Johnson, D.: Computers and Intractability. W.H. Freeman, San Francisco (1979)
14. Karger, D., Motwani, R., Ramkumar, G.: On approximating the longest path in a graph. Algorithmica 18(1), 82–98 (1997)
15. Khot, S.: Ruling out PTAS for graph min-bisection, densest subgraph and bipartite clique. In: Proc. 45th IEEE Symp. on Foundations of Computer Science, pp. 136–145 (2004)
16. Lovász, L., Plummer, M.: Matching Theory. Annals of Discrete Math., vol. 29. North-Holland, Amsterdam (1986)
17. Lund, C., Yannakakis, M.: The Approximation of Maximum Subgraph Problems. In: Proc. 20th Int. Colloq. on Automata, Languages, and Programming (1993)
18. Munro, J., Raman, V.: Succinct Representation of Balanced Parentheses and Static Trees. SIAM J. on Computing 31(3), 762–776 (2001)
19. Papadimitriou, C., Yannakakis, M.: The traveling salesman problem with distances one and two. Mathematics of Operations Research 18(1), 1–11 (1993)
20. Ravi, R., Marathe, M., Ravi, S., Rosenkrantz, D., Hunt III, H.B.: Approximation algorithms for degree-constrained minimum-cost network-design problems. Algorithmica 31(1), 58–78 (2001)
21. Otter, R.: The Number of Trees. Annals of Mathematics, Second Series 49(3), 583–599 (1948)

A Lower Bound for Scheduling of Unit Jobs with Immediate Decision on Parallel Machines

Tomáš Ebenlendr and Jiří Sgall

Institute of Mathematics, AS CR, Žitná 25, CZ-11567 Praha 1, Czech Republic
[email protected], [email protected]

Abstract. Consider scheduling of unit jobs with release times and deadlines on m identical machines with the objective to maximize the number of jobs completed before their deadlines. We prove a new lower bound for online algorithms with immediate decision. This means that the jobs arrive over time and the algorithm has to decide the schedule of each job immediately upon its release. Our lower bound tends to e/(e − 1) ≈ 1.58 for many machines, matching the performance of the best algorithm.

1 Introduction

Suppose that we have unit jobs that arrive over time. Each job arrives at its release time and has a deadline; these times are integers. The goal is to schedule as many jobs as possible before their deadlines, on m identical machines. In the online setting, at each time t the algorithm chooses at most m jobs to be started at time t (among the jobs released before or at t, with a deadline strictly after t, and not scheduled yet). This is a very simple online problem: At each time t we schedule the m jobs with the earliest deadlines. This generates an optimal schedule.

In this note, we study a modification of this problem called scheduling with immediate decision, introduced and studied in [5,4]. In this variant, the online algorithm has to decide the schedule of the newly released jobs immediately after they are released. This means that at time t, the schedule of jobs with release time t is fixed, and even if a job is scheduled to start only at time t′ > t, its schedule cannot be changed later. Obviously, this is harder for the online algorithm, and, for example, the optimal algorithm described above does not work in this model.

In [4], Ding et al. presented an online algorithm with immediate decision with competitive ratio decreasing to e/(e − 1) for m → ∞. It works even for the more general case when the processing times are equal (but possibly larger than 1), with the same competitive ratio. This algorithm is actually very simple: The machines are kept sorted by decreasing completion times, i.e., the first machine is the one that would complete the currently assigned jobs latest. The newly released jobs are processed one by one, so that each job is scheduled on the first machine at the completion time of that machine; if that would violate the deadline, try the second machine, and so on; if no machine works, the job is rejected.
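To make the rule concrete, here is a minimal sketch of this assignment step for unit jobs (the function name and data layout are ours; the paper gives no code, and tie-breaking and efficiency are ignored):

```python
def assign_with_immediate_decision(completion, release, deadline):
    """Place one newly released unit job: try machines in order of
    decreasing completion time; start the job at the machine's
    completion time (or its release time, if later) on the first
    machine where the deadline is still met."""
    completion.sort(reverse=True)
    for i, c in enumerate(completion):
        start = max(c, release)
        if start + 1 <= deadline:      # unit processing time
            completion[i] = start + 1
            return start
    return None                        # no machine works: reject

# Two machines, three jobs released at time 0 with deadline 2:
machines = [0, 0]
for r, d in [(0, 2), (0, 2), (0, 2)]:
    print(assign_with_immediate_decision(machines, r, d))  # 0, 1, 0
```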


Table 1. Summary of new and previous results

                                                  m = 2     3      4      5    · · ·  m → ∞
a lower bound with immediate decision,
  unit jobs [new result]                          1.678  1.626  1.607  1.598          1.582
an algorithm with immediate decision,
  equal processing times [4]                      1.8    1.730  1.694  1.672          1.582
a lower bound with immediate decision,
  equal processing times [4]                      1.8      −      −      −            1.333
an algorithm without immediate decision,
  equal processing times [5,7]                    1.5      −      −      −              −

The obvious question is: Is there a better algorithm with immediate decision at least for unit jobs?

Our results. We prove that no algorithm for unit jobs with immediate decision on m machines has a competitive ratio smaller than

rm = e^((m−1)/m) / (e^((m−1)/m) − m/(m+1)).


Thus, for example, a new job can arrive when all the machines are committed to process other jobs. All the results below are for equal processing times. It is known that a greedy algorithm is 2-competitive for any number of machines and this is optimal among the deterministic algorithms for a single machine [1]. For a single machine without immediate decision there also exists a 5/3-competitive randomized algorithm [3] and no better than 4/3-competitive randomized algorithm is possible [6]. For the case of two machines, two 1.5-competitive deterministic algorithms without immediate decision were designed independently in [5,7]. This competitive ratio is optimal for m = 2 without immediate decision. So, for m = 2 and equal processing times, immediate decision increases the competitive ratio from 1.5 to 1.8; the lower bound of 1.8 for immediate decision is from [4]. For m ≥ 3, the algorithm of [4] is the best algorithm currently known (and in fact the only one better than the 2-competitive greedy algorithm). We have no better algorithm even without immediate decision for equal processing times. A standard example gives a lower bound of 4/3 with immediate decision, as noticed in [4], while for general algorithms the lower bound approaches 6/5 for large m [5]. Preliminaries and notations. We are given m, the number of the machines, and n jobs. Each job is described by a pair of numbers: a release time rj and a deadline dj ; these numbers are integers. Each job has a unit processing time. The goal is to maximize the number of jobs completed by the deadline (the number of early jobs). We allow the algorithm to reject some jobs; in fact, w.l.o.g., we restrict ourselves to schedules where each job is either rejected or completed by its deadline. In the online version, each job is released at its release time rj , at this time also its deadline dj becomes known. The algorithm does not know that the job exists before this time. Moreover, if the online algorithm has the immediate decision property, it must decide the schedule of the job as soon as it is released and cannot change this decision later. Unrestricted online algorithms decide which jobs to start at each time, leaving the remaining jobs uncommitted. We use the standard competitive analysis: The algorithm is c-competitive if, for every input, it schedules at least 1/c fraction of the number of jobs scheduled by the optimum schedule. We denote the starting time of a job (that is not rejected) by sj . The job then occupies the time interval [sj , sj + 1) on some machine. This means that each job (that is not rejected) must be scheduled so that rj ≤ sj ≤ dj − 1. With unit processing times and integral release times, we restrict ourselves to integral starting times, both for the optimum and the online algorithms without loss of generality: Whenever non-integral starting times would occur, we can move the jobs forward one by one to start at ⌊sj ⌋, with no loss in the performance. Since the jobs are aligned, we do not need to know which particular machine a job is assigned to. A valid machine assignment is available if and only if |{j | sj = t}| ≤ m for each time t. Our goal is to maximize the number of properly scheduled jobs.


2

The Idea of the Proof

In this section we describe the idea of the lower bound. The exact proof is given in the next section. As usual, our lower bound is formulated as an adversary strategy in a game between a deterministic algorithm and an adversary who has the power to react to the actions of the algorithm. The adversary releases some jobs at time t (thus rj = t for these jobs and dj is set by the adversary for each job independently). Once the jobs are released, the algorithm schedules (or rejects) all these jobs and then the time advances to the next release time decided by the adversary. Adversary strategy. The adversary starts with a sufficiently long interval [0, T ). This means that the adversary first releases a few jobs with the release time 0 and the deadline T . Due to the immediate decision property, the algorithm has to commit to the schedule of these jobs. By averaging, we can find a big part of the interval where the algorithm schedules at least the average number of jobs and such that the adversary can schedule all the jobs outside of this part. Then the adversary uses the same procedure recursively. Now we do a few rough calculations to see how this idea gives the lower bound of e/(e − 1), disregarding various rounding issues. So now we describe the recursive process in more detail. For simplicity, let us also assume that the algorithm always schedules the released jobs so that they are spread uniformly over the feasible interval. (Later we show that no other algorithm performs much better against our adversary.) During the process, at time t, the adversary has scheduled all the previously released jobs before t, while the algorithm has already scheduled x jobs in the remaining interval [t, T ) of length l = T − t. We call [t, T ) the active interval and we say that its density is ρ = x/(ml). Then the adversary at time t releases εml jobs with deadlines equal to T , for a small ε. The adversary schedules them before time t′ = t + εl. The density increases to ρ + ε on [t, T ) as well as on the interval [t′ , T ) (due to the uniform spreading assumption). The adversary then increases time to t′ and continues until the density increases to 1. We express the length l of the active interval as a function of the density ρ. When ρ increases by ε, then l decreases by εl. Taking ε infinitesimally small, we get a differential equation dl/dρ = −l. We have the initial condition l(0) = T , and thus the equation is solved by the function l(ρ) = e−ρ · T . So, starting with the length T , the adversary ends with an interval of length at least l = l(1) = T /e, during which all time steps have m jobs scheduled in the schedule of the algorithm but no jobs in the schedule of the adversary. At this point, both the adversary and the algorithm have scheduled m(T − l) = (1 − 1/e)mT jobs, as all the released jobs exactly fit before the active interval. Now the adversary simply releases the final batch of lm jobs that cannot be scheduled outside the active interval. The adversary schedules all of these jobs while the algorithm has to reject them. The adversary schedules the total of mT jobs, while the algorithm only (1 − 1/e)mT jobs, yielding the lower bound of e/(e − 1).


Technical issues. The sketch of the proof above needs to be properly formalized. In the end, the proof is somewhat technical, as there are many issues that we have to deal with. Here we sketch the main issues and the ways to solve them.

Finding the dense part. First, we cannot assume that the algorithm spreads the jobs evenly. We need to find a dense part of a given length on which we focus in the recursion. This is done essentially by an averaging argument. Unfortunately, the dense part does not necessarily form a single interval. Instead, it can be composed of two non-overlapping intervals (and this is sufficient). This, in turn, makes the recursion more difficult. The number of intervals increases exponentially. The recursive procedure arranges them naturally in a tree of nested intervals. At any time, we have a list of active intervals instead of just one, and we release jobs corresponding to the interval which starts first. This corresponds to traversing the tree of intervals in the depth-first order. To analyze the length of the intervals, however, we always need to argue about the total length of the intervals on one level of the tree.

Discretization and rounding. We need to argue that by taking a small ε, the bound obtained from the continuous version of the recursion can be approximated arbitrarily well. We also need to account for various rounding errors and the fact that the adversary cannot release two batches of jobs at the same time. To this end, we use an initial active interval of length exponential in 1/ε and carefully bound the errors. In general, we release slightly more jobs than can fit in the adversary schedule and let the adversary reject some of them, to make the calculations simpler.

Improving the bound for small m. To improve the bound for small m, we stop the iterative process at density (m − 1)/m instead of 1. Instead of increasing the density by ε, we increase it by almost 1/m in a single last phase. Then we present the final set of jobs as m tight jobs for each time step in the schedule where the density is 1.

Handling the rejected jobs. So far we have assumed that the algorithm schedules all the released jobs. This is not necessarily true, and with immediate decision, it may seem that it could possibly be an advantage for an algorithm to reject a job and keep more options for later steps. However, a simple exchange argument shows that every algorithm can be modified so that no jobs are rejected, unless all machines are occupied during the whole feasible interval.

Taking all this into account, we give an adversary strategy which for any ε > 0 generates an instance showing that the competitive ratio is at least

e^((m−1)/m) / (e^((m−1)/m) − m/(m+1) + O(ε)).

3 The Lower Bound

We first define the density and state the lemma which ensures that the adversary can find dense intervals after releasing some jobs.


The density of an interval is defined as the number of jobs scheduled in it by the algorithm, divided by the maximal number of jobs that can fit. Notice that the density depends on the schedule of the algorithm; in particular, it may increase during the online process but it can never decrease.

Definition 1. Suppose that the algorithm has scheduled x jobs during the time interval [t1, t2). Then the density of the interval is ρ[t1,t2) = x/(m(t2 − t1)). For an empty interval, i.e., t2 = t1, we define the density to be 1.

Lemma 1. Given an interval [t1, t2) with density ρ and an integer l ≤ t2 − t1, we can find one or two non-overlapping intervals with total length l, each of them having density at least ρ.

Proof. Let t be the smallest time such that t ≥ t2 − l and ρ[t,t2) ≥ ρ. (Note that t = t2 is always eligible.) If t = t2 − l, then we have a single dense interval of length l and we are done. Otherwise we take [t, t2) as one of the intervals and look for another interval of length l′ = l − (t2 − t). Let q = ⌈(t − t1)/l′⌉ − 1; we have q > 0 as l < t2 − t1. By the choice of q, we have t1 + ql′ ≥ t − l′ = t2 − l. Thus the interval [t1 + ql′, t2) has density less than ρ, as otherwise we would choose t ≤ t1 + ql′. Considering the density of the whole interval, [t1, t1 + ql′) has density at least ρ. It can be covered by disjoint intervals [t1 + (i − 1)l′, t1 + il′) for i = 1, . . . , q; thus one of these intervals has density at least ρ as well, and it can be chosen as the desired second interval of length l′. ⊓⊔

Now we are ready to prove the main result.

Theorem 1. Let A be a deterministic online algorithm for scheduling unit jobs with release times and deadlines on m machines with the objective to maximize the number of accepted jobs. If A satisfies the restriction of immediate decision, then its competitive ratio is at least

rm = e^((m−1)/m) / (e^((m−1)/m) − m/(m+1)).
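Before turning to the proof of Theorem 1, the averaging argument of Lemma 1 can be made concrete. The sketch below is our own illustration of the procedure from the lemma's proof: `x[t]` plays the role of the number of jobs the algorithm has placed in slot [t, t+1), densities are compared via averages (so the factor m cancels), and l < t2 − t1 is assumed as in the two-interval case.

```python
def dense_subintervals(x, t1, t2, l):
    """Find one or two disjoint subintervals of [t1, t2) of total
    length l, each with density at least that of [t1, t2) itself."""
    def dens(a, b):                      # average load; empty = +inf,
        return sum(x[a:b]) / (b - a) if b > a else float('inf')
    rho = dens(t1, t2)                   # mirroring Definition 1
    # smallest t >= t2 - l with dens(t, t2) >= rho; t = t2 is eligible
    t = next(s for s in range(t2 - l, t2 + 1) if dens(s, t2) >= rho)
    if t == t2 - l:                      # a single dense interval
        return [(t, t2)]
    lp = l - (t2 - t)                    # remaining length l'
    q = -(-(t - t1) // lp) - 1           # q = ceil((t - t1)/l') - 1 > 0
    # [t1, t1 + q*lp) has density >= rho, so some block of length lp does
    for i in range(q):
        a = t1 + i * lp
        if dens(a, a + lp) >= rho:
            return [p for p in ((a, a + lp), (t, t2)) if p[1] > p[0]]
```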

Proof. We proceed in several major steps roughly following the sketch in the previous section. Handling rejections. First we observe that any algorithm can be modified so that no job is rejected, unless all machines are occupied during the whole feasible interval of that job. Suppose that the online algorithm A does not satisfy this restriction. Using A, we construct a new algorithm B which never rejects a job, unless it has to, and accepts at least the same number of jobs. At a given time, B schedules all the newly released jobs at the same time and the same machine as A, if that slot is empty. The remaining jobs, both those rejected by A and those not yet scheduled by B because the corresponding slot was not empty, are processed one by one in an arbitrary order. If there is an empty slot (i.e., a pair of machine and time) during the feasible interval of a job, it is scheduled to any such slot. Otherwise the job is rejected. We claim that at any moment, the algorithm B has scheduled a job to any slot where A has scheduled one. For a given slot, this property can be violated


only at the time when A schedules a job to that slot. However, if B has this slot still empty, it puts the same job there. Thus, at the end, B has scheduled at least as many jobs as A. From now on, we assume that A is the modified algorithm which never rejects a job unless during its feasible interval all the machines are full. This is important as our adversary strategy relies on gradually increasing the density.

An overview of the adversary strategy. Choose a sufficiently large integer k. Let ε = (m − 1)/(mk) and T = ⌈2^k/ε²⌉. The adversary starts with one active interval [0, T). Throughout the process, the adversary maintains a list of disjoint non-empty active intervals such that all these intervals start at the current time or later. Upon reaching the start of the interval, after some action of the adversary (typically releasing some jobs and waiting for the algorithm to schedule them), the interval is removed from the list and possibly replaced by one or two disjoint subintervals with strictly larger starting time. Then we let the time advance and continue the process as long as there is any active interval on the list.

Each interval has a level: The initial interval [0, T) has level 0. Each subsequent interval created while processing an interval at level i has level i + 1. During the process we guarantee that each active interval at level i has density at least iε. The maximal level of an interval is k; at this point the density is at least (m − 1)/m. Overall, we create and process less than 2^(k+1) intervals during the whole process. The action of the adversary at the start time of the interval depends on the level of the interval. If the level is less than k, the adversary increases the density as described below, with the exception of the intervals of length at most 2/ε, which are ignored. If the level of the processed interval is k, we do a more complicated phase described later, which guarantees that the algorithm rejects many jobs; in this case no new interval is introduced.

Increasing the density by ε. Suppose that the first active interval is [t1, t2) at level i < k. Thus its density is at least ρ[t1,t2) ≥ iε. Denote the length of the interval by l = t2 − t1. If l ≤ 2/ε, the adversary removes this interval without any further action. Otherwise, the adversary submits εlm + m jobs with rj = t1 and dj = t2. The density ρ[t1+1,t2) increases to at least ρ[t1,t2) + ε ≥ (i + 1)ε after the algorithm schedules the released jobs, as at most m jobs (old or new) may be scheduled at time t1. Let l′ = ⌈e^(−ε) l⌉ be the desired length of the dense subintervals. Note that we use a factor of e^(−ε) in place of 1 − ε in the intuitive description; this approximation is good for small ε and makes it possible to bound the error from discretization. Using elementary calculus we verify that 1 − e^(−x) ≥ x(1 − x) for all x, and we obtain

l − l′ > (1 − e^(−ε)) l − 1 ≥ ε(1 − ε) l − 1.   (1)

In particular, we have l > l′ due to our restriction l > 2/ε and ε < 1/2 (which can be guaranteed by taking a large k).


Now the adversary applies Lemma 1 to find one or two disjoint subintervals of [t1 + 1, t2) with total length l′ = ⌈l e^(−ε)⌉ with density at least (i + 1)ε; we can apply the lemma to this shorter interval as l′ < l. These one or two intervals are added to the list of active intervals, and [t1, t2) is removed. Note that the new active intervals are at level i + 1 and they have the density (i + 1)ε required for this level. The adversary schedules (l − l′)m of the new jobs during [t1, t2) but outside the new active intervals and rejects the remaining jobs (if any). Using (1), it follows that the number of jobs rejected by the adversary is at most

εlm + m − (l − l′)m ≤ εlm + m − (ε(1 − ε)lm − m) = 2m + ε²lm.   (2)

The final phase (at level k). In the remaining case, the first active interval is [t1, t2) at level k. Denote its length by l = t2 − t1. Note that the density is at least (m − 1)/m. The adversary releases ⌈lm/(m + 1)⌉ jobs with rj = t1 and dj = t2. Considering the total number of jobs and the fact that at most m jobs may be scheduled during each time step, it follows that after the algorithm schedules the new jobs, there are at least ⌈lm/(m + 1)⌉ − 1 time steps in the interval [t1 + 1, t2) where the algorithm scheduled m jobs. The adversary chooses exactly ⌈lm/(m + 1)⌉ − 1 of these full time steps, and for each chosen time step [t, t + 1), it releases m tight jobs with rj = t and dj = t + 1. All these jobs are rejected by the algorithm. The adversary schedules the jobs released at t1 during [t1, t2) but outside the chosen full time steps. The number of available time steps is

l − (⌈lm/(m + 1)⌉ − 1) = ⌊l/(m + 1)⌋ + 1 ≥ l/(m + 1),

thus no job is rejected. The tight jobs are then scheduled during their corresponding time steps, so the adversary does not reject any of these jobs, either.

Bounding the competitive ratio. To bound the competitive ratio, we first bound the number of jobs rejected by the adversary and by the algorithm A; we denote these by Radv and RA, respectively. Let L be the total length of the intervals at level k. The total length of all intervals that are removed because they are too short, over all levels, is at most 2^k · 2/ε. The total length of the remaining intervals decreases by at most a factor of e^(−ε) at each level. Thus the overall length is at least

L ≥ e^(−kε) T − 2^(k+1)/ε = e^(−(m−1)/m) T − 2^(k+1)/ε.

At a level smaller than k, using (2), the adversary rejects at most 2m jobs per interval plus ε²lm for each interval of length l. Since the intervals at the same level are disjoint, their total length is at most T for each level. No jobs are rejected at level k. Thus, overall, the number of jobs rejected by the adversary is at most

Radv ≤ 2m · 2^k + k · ε²mT = 2m · 2^k + ε(m − 1)T ≤ 3ε · mT,

where the last inequality follows by our choice of T.
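For completeness, the arithmetic behind the last inequality is as follows (our own expansion, using T ≥ 2^k/ε², kε = (m − 1)/m, and ε ≤ 1):

```latex
\[
  2m\cdot 2^k \le 2m\,\varepsilon^2 T \le 2\varepsilon\, mT,
  \qquad
  k\,\varepsilon^2 mT = \varepsilon\,\tfrac{m-1}{m}\, mT
    = \varepsilon (m-1) T \le \varepsilon\, mT,
\]
```

and adding the two bounds gives Radv ≤ 3ε·mT.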


The algorithm A rejects jobs at level k; for an interval of length l it rejects at least lm²/(m + 1) − m jobs. Since there are at most 2^k intervals at level k, the number of rejected jobs is at least

RA ≥ (m²/(m + 1)) · L − m · 2^k ≥ (m/(m + 1)) · e^(−(m−1)/m) · mT − m · 2^(k+1)/ε − m · 2^k ≥ ((m/(m + 1)) · e^(−(m−1)/m) − 3ε) · mT,

where the last inequality follows by our choice of T again. The competitive ratio is at least (n − Radv)/(n − RA), where n is the number of released jobs. As the ratio is larger than 1, the bound decreases with n, and we need to upper bound n. Using the simple fact that the adversary schedules at most mT jobs, we have n ≤ mT + Radv. Thus the competitive ratio is at least

(n − Radv)/(n − RA) ≥ mT/(mT + Radv − RA) ≥ mT/(mT + 3ε · mT − ((m/(m + 1)) · e^(−(m−1)/m) − 3ε) · mT) = 1/(1 − (m/(m + 1)) · e^(−(m−1)/m) + 6ε).

For a sufficiently large k, we have ε arbitrarily small, and thus the lower bound approaches the claimed bound of

rm = 1/(1 − (m/(m + 1)) · e^(−(m−1)/m)) = e^((m−1)/m) / (e^((m−1)/m) − m/(m+1)),

and the proof is complete. ⊓⊔

4 Conclusions

With immediate decision, our lower bound still leaves a gap for small m for scheduling unit jobs. It gives an intuition of what is needed for an algorithm to perform better than the algorithm of [4]: If a number of jobs with the same deadline is released, they should be spread uniformly between the release time and the deadline, and not packed starting from the release time. However, it seems quite hard to define and analyze such an algorithm once there are many different deadlines.

It is interesting to consider randomized algorithms instead of deterministic ones. Our problem can be viewed as a special case of online bipartite matching (matching jobs to slots in the schedule). Thus there exists an e/(e − 1)-competitive algorithm. (A simplified proof and further references can be found in [2]. Their proof actually gives only an asymptotic result, but a simple padding argument proves the bound of e/(e − 1) for any instance.) Our lower bound can be modified to yield a lower bound of e/(e − 1) for randomized algorithms for any number of machines m. Since it is based on averaging arguments, it can be used for a randomized algorithm if we simply replace density by expected


density. The only change that is needed is that we omit the last phase, which is used to improve the deterministic lower bound for small m. We postpone the details to the journal version of this paper.

A more fundamental problem in this area is to close the gap for general algorithms. It is quite surprising that for m ≥ 3 we have no algorithms that would go beyond the immediate decision restriction.

Acknowledgments

We are grateful to an anonymous referee for drawing our attention to the randomized case. We thank Marek Chrobak and Rob van Stee for pointers and discussions regarding online matching.

Partially supported by Institutional Research Plan No. AV0Z10190503, by Inst. for Theor. Comp. Sci., Prague (project 1M0545 of MŠMT ČR), and grant IAA1019401 of GA AV ČR.

References

1. Baruah, S.K., Haritsa, J., Sharma, N.: On-line scheduling to maximize task completions. J. Comb. Math. Comb. Comput. 39, 65–78 (2001); a preliminary version appeared in: Proc. 15th Real-Time Systems Symp., pp. 228–236. IEEE, Los Alamitos (1994)
2. Birnbaum, B.E., Mathieu, C.: On-line bipartite matching made simple. SIGACT News 39, 80–87 (2008)
3. Chrobak, M., Jawor, W., Sgall, J., Tichý, T.: Online scheduling of equal-length jobs: Randomization and restarts help. SIAM J. Comput. 36, 1709–1728 (2007); a preliminary version appeared in: Díaz, J., Karhumäki, J., Lepistö, A., Sannella, D. (eds.) ICALP 2004. LNCS, vol. 3142, pp. 358–370. Springer, Heidelberg (2004)
4. Ding, J., Ebenlendr, T., Sgall, J., Zhang, G.: Online scheduling of equal-length jobs on parallel machines. In: Arge, L., Hoffmann, M., Welzl, E. (eds.) ESA 2007. LNCS, vol. 4698, pp. 427–438. Springer, Heidelberg (2007)
5. Ding, J., Zhang, G.: Online scheduling with hard deadlines on parallel machines. In: Cheng, S.-W., Poon, C.K. (eds.) AAIM 2006. LNCS, vol. 4041, pp. 32–42. Springer, Heidelberg (2006)
6. Goldman, S.A., Parwatikar, J., Suri, S.: Online scheduling with hard deadlines. J. Algorithms 34, 370–389 (2000)
7. Goldwasser, M.H., Pedigo, M.: Online, non-preemptive scheduling of equal-length jobs on two identical machines. In: Arge, L., Freivalds, R. (eds.) SWAT 2006. LNCS, vol. 4059, pp. 113–123. Springer, Heidelberg (2006); to appear in ACM Transactions on Algorithms

Improved Randomized Online Scheduling of Unit Length Intervals and Jobs⋆

Stanley P.Y. Fung¹, Chung Keung Poon², and Feifeng Zheng³

¹ Department of Computer Science, University of Leicester, United Kingdom
[email protected]
² Department of Computer Science, City University of Hong Kong, China
[email protected]
³ School of Management, Xi'an Jiaotong University, China
[email protected]

Abstract. We study the online interval scheduling problem and the online job scheduling problem (with restart). In both problems, the intervals or jobs have unit length and arbitrary weights, and the objective is to maximize the total weight of completed intervals (or jobs). We first give a 2-competitive randomized algorithm for the case of intervals. The algorithm is barely random in the sense that it randomly chooses between two deterministic algorithms at the beginning and then sticks with it thereafter. The algorithm is surprisingly simple and improves upon several previous results. We then extend the algorithm to the scheduling of jobs with restarts, and prove that it is 3-competitive. We also prove a lower bound of 2 on the competitive ratio of all barely random algorithms that choose between two deterministic algorithms for scheduling intervals (and jobs).

1 Introduction

In this paper, we study two online preemptive scheduling problems. In the interval scheduling problem, we are to schedule a set of weighted intervals which arrive online so that at any moment, at most one interval is being processed. We can abort the interval currently being processed in order to start a new one. The goal is to maximize the sum of the weights of completed intervals. The problem can be viewed as a job scheduling problem in which each job has, besides its weight, an arrival time, a processing time and a deadline. Moreover, the deadline is always tight, i.e., deadline always equals arrival time plus processing time. Thus, if one does not start an interval immediately upon its arrival, or if one aborts it before its completion, that interval will never be completed. The problem is fundamental in scheduling and is clearly relevant to a number of online problems such as call control and bandwidth allocation (see e.g., [2,5,17]). We also study the more general problem of job scheduling with restart. Here, the deadline of

⋆ The work described in this paper was fully supported by a grant from NSFC (Grant No. 70702030).



a job need not be tight and we can abort a job and restart it from the beginning some time later. Both problems are in fact special cases of the broadcast scheduling problem, which has gained much attention recently due to its applications in video-on-demand, stock market quotation, etc. (see, e.g., [12,16,18]). In that problem, a server holding a number of pages receives requests from its clients and schedules the broadcasting of its pages. A request is satisfied if the requested page is broadcasted completely before the deadline of the request. The page currently being broadcasted can be aborted in order to start a new one, and the aborted page can be re-broadcasted from the beginning later. Interval and job scheduling with restart can be seen as a special case in which each request asks for a different page.

In this paper, we focus on the case of equal-length intervals/jobs, which is already highly non-trivial. As we will see shortly, most previous works on these problems put further restrictions on the inputs (such as requiring jobs to be unweighted or arrival times to be integral). The power of randomization for these problems is especially unclear. Much of our results concern barely random algorithms, i.e. randomized algorithms using only very few (a constant number of) random bits. A typical use is to choose between a small number of deterministic algorithms randomly. A fair amount of previous work in online scheduling considers the use of barely random algorithms (see e.g. [1,8,15]); it is interesting to consider how the competitiveness improves (upon their deterministic counterparts) by using only a small number of random bits.

Previous work. We first mention results for interval scheduling. The deterministic case was settled in [17] where a 4-competitive algorithm and a matching lower bound were given. Miyazawa and Erlebach [14] were the first to give a better randomized algorithm: its competitive ratio is 3, but it only works for a special case where the weights of the intervals form a non-decreasing sequence. They also gave the first randomized lower bound of 5/4. For the general case, the first randomized algorithm that has competitive ratio better than 4 (the bound for deterministic algorithms) was devised in [11]. It is 3.618-competitive and is barely random, choosing between two deterministic algorithms with equal probability. In the same paper, a lower bound of 2 for such barely random algorithms and a lower bound of 4/3 for general randomized algorithms were also proved. In the case of jobs with restarts, Zheng et al. [18] gave a 4.56-competitive deterministic algorithm. The algorithm was for the more general problem of scheduling broadcasts but it works for job scheduling with restarts too. We are not aware of previous results in the randomized case. Nevertheless, Chrobak et al. [8] considered a special case where the jobs have no weights and the objective is to maximize the number of completed jobs. For the randomized nonpreemptive case they gave a 5/3-competitive barely random algorithm and a lower bound of 3/2 for barely random algorithms that choose between two deterministic algorithms. They also gave an optimal 3/2-competitive algorithm for the deterministic preemptive (with restart) case, and a lower bound of 6/5 for the randomized preemptive case.


We can also assume time is discretized into unit-length slots and all (unit) jobs must start at the beginning of each slot. This version of unit job scheduling is a special case of the problem we consider in this paper; it has been widely studied and has applications in buffer management of QoS switches. For this problem, an e/(e − 1)-competitive randomized algorithm was given in [6], and a randomized lower bound of 1.25 was given in [7]. The current best deterministic algorithm is 1.828-competitive [9]. An alternative preemption model is to allow the partially-executed job to resume its execution from the point where it is preempted. This was studied, for example, in [3,13].

Our results. In this paper we give a new randomized algorithm for the online interval scheduling problem. It is barely random and has a competitive ratio of 2, thus substantially improving previous results. It should be noted that although the algorithm and its analysis are very simple, it was not discovered in several previous attempts by other researchers and ourselves [10,11,14]. In particular, Epstein and Levin [10] very recently studied a related problem (called "benevolent instances"), and some of their results apply to the case of unit length intervals as well: they gave a 2.4554-competitive randomized algorithm, and a 3.22745-competitive barely random algorithm. This should be compared with our simple 2-competitive barely random algorithm. Next we extend this algorithm to the case of job scheduling (with restarts), and prove that it is 3-competitive. This is the first randomized algorithm we are aware of for this problem. The extension of the algorithm is very natural, but the proof is considerably more involved. Finally we prove a lower bound of 2 for barely random algorithms for scheduling intervals (and jobs) that choose between two deterministic algorithms, not necessarily with equal probability. Thus it matches the upper bound of 2 for this class of barely random algorithms. Although this lower bound does not cover more general classes of barely random or randomized algorithms, we believe this is still of interest. For example, a result of this type appeared in [8]. Also, no barely random algorithm using 3 or more deterministic algorithms with a better performance is known. The proof is also much more complicated than the one in [11] with the equal-probability assumption. Due to space constraints, some proofs are omitted and will only appear in the full version of the paper.

2 Preliminaries

We partition the time axis into unit-length segments called slots such that the i-th slot covers time [i − 1, i) for i = 1, 2, . . . A slot s = [i − 1, i) is an odd slot if i is odd, and is an even slot otherwise. The notation [s1..s2] denotes a range of slots from s1 to s2 inclusive, where s1 is before s2. Arithmetic operators on slots carry the natural meaning, so s + 1 is the slot immediately after s, s − 1 is the slot immediately before s, s1 < s2 means s1 appears before s2, etc.


A job x is specified by its release time r(x), its deadline d(x), its weight w(x) and its length. Without loss of generality, we assume all jobs have length 1. An interval is a job with a tight deadline, i.e. d(x) = r(x) + 1. The remaining terminology applies to both jobs and intervals; for simplicity we only mention jobs. We say a job is completed by an algorithm A in slot s if it is started by A in slot s and is then completed without interruption. The job that is completed by A in slot s is denoted by A(s). (Clearly, at most one job can be completed in a slot in any given schedule.) The inverse A^(−1)(x) denotes the slot s with A(s) = x. The value of a schedule is the total weight of the jobs that are completed in the schedule. The performance of online algorithms is measured using competitive analysis [4]. An online randomized algorithm A is c-competitive (or its competitive ratio is c) if the expected value obtained by A is at least a 1/c fraction of the value obtained by the optimal offline algorithm, for any input instance. We use OPT to denote the optimal algorithm (and its schedule).

3 A 2-Competitive Barely Random Algorithm for Intervals

In this section we describe and analyse a very simple algorithm RAN for the online scheduling of intervals. RAN is barely random and consists of two deterministic algorithms A and B, described as follows. Intuitively, A takes care of odd slots and B takes care of even slots. Let si = [i − 1, i). Within each odd slot si, A starts the interval arriving first. If a new interval arrives in this slot while an interval is being processed, A will abort and start the new interval if its weight is larger than that of the current interval; otherwise the new interval is discarded. At the end of this slot, A is running (or about to complete) an interval with the largest weight among those that arrive within si; let Ii denote this interval. A then runs Ii to completion without any abortion during the next (even) slot. It then stays idle until the next odd slot. B runs similarly on even slots. RAN chooses one of A and B with equal probability 1/2 at the beginning.

Theorem 1. RAN is 2-competitive for the online scheduling of unit-length intervals.

Proof. Each Ii is completed by either A or B. Therefore, RAN completes each Ii with probability 1/2. On the other hand, OPT can complete at most one interval in each si, with weight at most w(Ii). It follows that the total value of OPT is at most 2 times that of RAN. ⊓⊔

Trivial examples show that RAN is not better than 2-competitive (e.g. a single interval). In fact we will show in Section 5 that no barely random algorithm that chooses between two deterministic algorithms is better than 2-competitive. But first we consider how this result can be generalized to the case of job scheduling.
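Since A ends up completing exactly the heaviest interval of each odd slot and B of each even slot, RAN's outcome can be simulated directly in a few lines (a sketch with our own naming; it computes the completed weight per choice rather than replaying the abort mechanics, and assumes release times r ≥ 0 and positive weights):

```python
import random

def run_RAN_intervals(intervals, seed=None):
    """Simulate RAN for unit intervals: intervals is a list of
    (release_time, weight) pairs; slot s_i = [i-1, i)."""
    best_in_slot = {}          # I_i: heaviest interval released in slot i
    for r, w in intervals:
        i = int(r) + 1         # release time r lies in slot s_i = [i-1, i)
        best_in_slot[i] = max(best_in_slot.get(i, 0), w)
    # A completes I_i for odd i, B completes I_i for even i.
    valA = sum(w for i, w in best_in_slot.items() if i % 2 == 1)
    valB = sum(w for i, w in best_in_slot.items() if i % 2 == 0)
    rng = random.Random(seed)
    return rng.choice([valA, valB])    # pick A or B with probability 1/2
```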


4 A 3-Competitive Barely Random Algorithm for Jobs

In this section we extend RAN to the online scheduling of unit-length jobs with restarts. The algorithm remains very simple but the analysis is more involved. Again RAN chooses between two deterministic algorithms A and B, each with probability 1/2, and again A takes care of odd slots and B takes care of even slots. At the beginning of each odd slot, A considers all pending jobs that can still be completed, and starts the one with the largest weight. (If there are multiple jobs with the same maximum weight, start an arbitrary one.) If another job of a larger weight arrives within the slot, A aborts the current job and starts the new one instead. At the end of this odd slot, the job that is being executed will run to completion (into the following even slot) without abortion. A will then stay idle until the next odd slot. Even slots are handled by B similarly. Our approach to the proof is to map (or charge) the weights of jobs completed in OP T to jobs completed in slots in A or B, so that each slot in A or B receives a charge at most 1.5 times the weight of the job in the slot. In some cases it is not possible, and we pair up slots with large charges with slots with small charges so that the overall ratio is still at most 1.5. Since each job in A or B is completed with probability 1/2 only, the expected value of the online algorithm is half the total value of jobs completed by A and B. This gives a competitiveness of 3. We first define a charging scheme to map the value of jobs completed in OP T to slots in either A or B. Consider a slot s in OP T where the job OP T (s) is completed. Suppose s is odd (A is executing). If w(A(s)) ≥ w(OP T (s)), charge the value of OP T (s) to s. We call this a downward charge. Otherwise, A must have completed OP T (s) at some earlier slot s′ . Charge half the value of OP T (s) to this slot s′ . This is called a self charge. For B, either it has completed the job OP T (s) before s, in which case we charge the remaining half to that slot (this is also a self charge); or OP T (s) is still pending at slot s − 1, which means at slot s − 1, B completes a job with value at least w(OP T (s)). Charge the remaining half to the slot s − 1. This is called a backward charge. The charges for the case when s is an even slot are similarly defined. Clearly, all values in OP T are charged. Observe that for each charge from OP T to A/B, the slot receiving the charge in A/B contains a job with weight at least that of the job in OP T generating the charge. A slot is said to receive x units of charge if the job completed in the slot has weight w and the total charges to the slot is x × w. Each slot in A or B receives at most 2 units of charges, because each downward charge has at most 1 unit, while each self or backward charge has at most 0.5 unit. For simplicity we can assume charges are always integral multiples of 0.5. Slots receiving 2 units of charges are called bad; they must receive a backward charge. Slots with at most 1 unit charge are called good. Each bad slot s can be characterized by a pair (x, y) where x is the job A(s) or B(s), and y is the job OP T (s + 1) generating the backward charge. If a slot receives at most 1.5 units of charge, we leave it as it is. For each bad slot, however, we will need to pair it up with a good slot.
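As an aside, here is a sketch of how algorithm A could be simulated in the job setting (our own simplification, with integer times: a job arriving in the middle of an odd slot is treated as pending only from the next odd slot, so the within-slot preemption described above is not replayed):

```python
def run_A_jobs(jobs):
    """At the start of every odd slot, start the heaviest pending job
    that can still meet its deadline; jobs is a list of
    (release, deadline, weight) triples with integer times."""
    if not jobs:
        return 0
    horizon = max(d for _, d, _ in jobs)
    done, value = set(), 0
    for t in range(0, horizon, 2):       # odd slots [t, t+1), t even
        pending = [(w, j) for j, (r, d, w) in enumerate(jobs)
                   if j not in done and r <= t and t + 1 <= d]
        if pending:
            w, j = max(pending)          # heaviest feasible job
            done.add(j)                  # runs to completion, uncontested
            value += w
    return value
```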


Lemma 1. For each bad slot s = (x, y), there is a good slot s′ such that the weight of the job in s′ is at least w(y). Moreover, any two bad slots are paired with different good slots.

If Lemma 1 is true, then we have

Lemma 2. Slots s and s′ as defined in Lemma 1 together receive a charge at most 1.5 times the total weight of the jobs in A/B in the two slots.

Proof. Let ws and ws′ be the weights of the jobs in A/B in s and s′ respectively. The charge to s is at most 1.5ws + 0.5w(y) while the charge to s′ is at most ws′. The overall ratio is therefore (1.5ws + 0.5w(y) + ws′)/(ws + ws′) ≤ 1.5 since w(y) ≤ ws′. ⊓⊔

Theorem 2. RAN is 3-competitive for the online scheduling of unit-length jobs with restarts.

Proof. All the weights of jobs completed in OPT are mapped to jobs in A or B. Each job in A and B receives charges at most 1.5 times its own weight, either as a single job or as a pair of jobs as defined in Lemma 1. Since each job in A or B is only completed with probability 1/2, the expected value of the online algorithm is half the total weight of jobs in A and B. The competitiveness of 3 follows. ⊓⊔

The following simple example shows that the bound 3 is tight. Consider three jobs x, y, z, where r(x) = 0, d(x) = 3, w(x) = 1 + ε for arbitrarily small ε > 0; r(y) = 0, d(y) = 1, w(y) = 1; and r(z) = 1, d(z) = 2, w(z) = 1. Both A and B will do x only, but OPT can do all three.

Before proving Lemma 1 we first show some properties of bad slots in the following lemma. (Although the lemma is stated in terms of odd slots, the case of even slots is similar.)

Lemma 3. For each bad slot s = (x, y), where s is an odd slot, (i) both x and y are completed by B before s (call the slots s1 and s2, where s1 is before s2), (ii) all odd slots in A[s1 + 1..s2 − 1] contain jobs with weights ≥ w(B(s1)) ≥ w(y), and all odd slots in A[s2 + 1..s] contain jobs with weights ≥ w(B(s2)) ≥ w(y).

Proof. (i) Since slot s + 1 makes a backward charge instead of a downward charge, we have w(B(s + 1)) < w(y). Hence y must be completed in B before s, or else y could be completed in B(s + 1). Clearly w(x) ≥ w(y). Job x = A(s) receives a self charge from a slot after s + 1, and therefore must also be completed in B before s, or otherwise it could be completed in B(s + 1).

(ii) If B(s1) = y, then y has already been released in slot s1 but is not completed by A until s. Hence all odd slots in A[s1 + 1..s2 − 1] must contain jobs with weights at least w(y). If B(s1) = x then the same reasoning implies that these slots contain jobs with weights at least w(x), which is at least w(y). The same argument holds for A[s2 + 1..s]. ⊓⊔

Improved Randomized Online Scheduling of Unit Length Intervals and Jobs

y

OPT y2

A B

59

x

y0

y1

s1

s2

s

Fig. 1. Lemma 3 and Step 1 of the procedure

We now prove Lemma 1. We give a step-by-step procedure for identifying a good slot for every bad slot. Consider an odd bad slot s = (x, y). (The case for even slots is similar.) Denote B(s2 ) by y1 (which is either x or y) and B(s1 ) by y0 , where s1 and s2 are as defined in Lemma 3. See Fig. 1. Step 1.1. B(s2 ) has weight at least w(y); thus if s2 receives at most 1 unit of charge, we have identified a good slot and we can stop. Otherwise, go to Step 1.2. Step 1.2. From Lemma 3(ii), all odd slots in A[s1 ..s] contain jobs with weights at least w(y). In particular, consider the job A(s2 − 1). Denote this job by y2 . Then w(y2 ) ≥ w(y). Slot s2 −1 does not receive backward charge (since otherwise the slot s2 does not receive downward charge, which means it receives at most 1 unit of charge so we would have stopped in Step 1.1 before). If the slot s2 − 1 does not receive a self charge, then it is a good slot (charge at most 1.0) and we stop. Otherwise go to Step 1.3. Step 1.3. Now, s2 − 1 receives a self charge. This implies that OP T will satisfy y2 at some slot s′ that is after s2 , and s′ cannot receive a downward charge. (s′ = s2 since s2 receives a downward charge.) We claim (proof can be found in full paper) that B must have completed y2 at some slot s3 before min(s, s′ ). We then consider the two slots s1 and s3 , and re-label the earlier one as s1 and the other as s2 . Then we move on to the next step; they become the new slots s1 and s2 in Step 2. In each subsequent Step i ≥ 2, we consider the slots s1 and s2 , both identified in some previous steps and one of which is identified in the immediately preceding step. Both slots contain jobs among {y0 , y1 , ..., yi }. The discussion is similar to that in Step 1. We first state the following lemma which is basically a generalization of Lemma 3. Lemma 4. Suppose y0 , y1 , . . . , yi have been identified in B. Let sp and sq be the two slots containing two consecutive yp and yq in B (where “consecutive” refers to their locations in B), with sp before sq . If yp is already the last so there is no such yq , take sq = s. Then for all odd slots in A[sp + 1..sq − 1], they contain jobs with weights at least w(yp ). Moreover, w(yj ) ≥ w(y) for all j. Step i.1. If the job in B(s2 ) (=yj for some j ≤ i) has at most 1 unit of charge, then we stop. Step i.2. Otherwise, by Lemma 4, A(s2 − 1) contains a job of weight at least w(y); let yi+1 be this job. It cannot receive backward charge because s2 receives downward charge (otherwise we would have stopped before). If it receives no self charge, then it is a good slot and we are done.

60

S.P.Y. Fung, C.K. Poon, and F. Zheng

Step i.3. Otherwise it receives a self charge and hence is completed by OP T at a slot s′ after s2 . Similar to the reasoning in Step 1.3, we can conclude that B must also complete yi+1 at a slot s3 where s3 < min(s, s′ ). Move on to Step (i + 1) with s1 and s3 being the new s1 and s2 . Note that in each step, the new job yi+1 identified cannot be the same job as the yi ’s in previous steps. Each step identifies a new slot in B for these yi ’s, all of which are before s. Since there are only a finite number of slots before s, this procedure must end after a finite number of steps, at which point we have identified a good slot. We still need to prove that, using the above procedure, any two bad slots are paired with different good slots. This part of the proof is deferred to the full paper.

5

Lower Bound

In this section, we show a lower bound of 2 for barely random algorithms that choose between two deterministic algorithms, possibly with unequal probability. Theorem 3. No barely random algorithm choosing between two deterministic algorithms has a competitive ratio better than 2. As a warm-up, we start with a weaker lower bound of 1.657 in the next subsection. We then strengthen it to a lower bound of 2 in subsection 5.2. 5.1

A 1.657 Lower Bound

We let c be the positive root of the equation c2 + 8c − 16 = 0, so c ≈ 1.657. Let ALG be an arbitrary barely random algorithm that chooses between two deterministic algorithms A and B with probability p and q respectively such that p + q = 1 and 0 < p ≤ q < 1. We will be using sets of intervals similar to that in Woeginger [17]. More formally, let ǫ be an arbitrary positive real number and let v, w be any pair of real numbers such that 0 ≤ v ≤ w. We define SET (v, w, ǫ) as a set of intervals of weight v, v + ǫ′ , v + 2ǫ′ , . . . , w (where ǫ′ is the largest number such that ǫ′ ≤ ǫ and w − v is a multiple of ǫ′ ) and their relative arrival times are such that intervals of smaller weight come earlier and the last interval (i.e., the one that arrives last and has weight w) arrives before the first interval finishes. Thus, there is overlapping between any pair of intervals in the set. See Figure 2(a). This presents a difficulty for the online algorithm as it has to choose the right interval to process without knowledge of the future. For convenience and when the context is clear, we will refer to an interval by its weight. If x is an interval in SET (v, w, ǫ) where x > v (i.e., x is not the earliest interval in the set), then x− denotes the weight of the interval that arrives just before x in SET (v, w, ǫ). Suppose the two algorithms of ALG are processing intervals x and y, not necessarily distinct and possibly from some SET (·, ·, ·). We are interested in the worst case ratio of what OP T gains on these two sets and any subsequently

Improved Randomized Online Scheduling of Unit Length Intervals and Jobs w

61

w’

w

y

w

y y x

v+ ε’ v (a)

v’ v

v (b)

Fig. 2. (a) SET (v, w, ǫ). On the left is the actual set of intervals; the vertical arrow on the right is the notation we use to denote such a set. (b) Local competitive ratio, when qx + py ≤ (x + y)/2.

released intervals to the expected gain of ALG on the same collection of intervals; and will refer to this ratio as the local competitive ratio of ALG with respect to the relevant intervals. The actual competitive ratio is always at least the local competitive ratio because OP T may have completed some intervals before, but ALG cannot (as will become clear in the proof below). We first make a simple observation. Lemma 5. Suppose A and B of ALG are respectively processing an interval y in SET (v ′ , w′ , ǫ′ ) and an interval x in SET (v, w, ǫ) where x > v and y > v ′ . Further suppose that x ≤ y, x precedes and overlaps y. Then the local competitive ratio of ALG is at least 2. (Note that SET (v, w, ǫ) and SET (v ′ , w′ , ǫ′ ) can be the same set; and x and y can be the same interval.) Proof. Recall that A and B are executed with probability p and q respectively. Thus the expected gain by ALG upon completion of the current intervals x and y is qx + py ≤ (x + y)/2. To defeat ALG, the adversary releases another interval of weight y just before x finishes. We illustrate the scenario in Figure 2(b). There, the horizontal lines labelled with y and x touching the vertical lines represent the intervals chosen by A and B respectively. If ALG (more precisely, algorithm B) continues with x, then OP T gains x− + y ≥ x + y − ǫ while ALG gains qx + py ≤ (x + y)/2. If B aborts x and starts the new y, then the adversary releases yet another y just before the y in SET (v ′ , w′ , ǫ′ ) finishes. Then OP T gains y − + y ≥ 2y − ǫ′ while ALG gains qy + py = y. ⊓ ⊔ Our lower bound proof takes a number of steps. In each step, the adversary will release some set of intervals SET (·, ·, ·) adaptively according on how ALG reacts in the previous steps. In each step, the adversary forces ALG not to finish any interval (and hence gain no profit) while OP T will gain some. Eventually, OP T will accumulate at least c − δ times of what ALG can gain no matter what ALG does in the last step, for any arbitrarily small positive real δ. Step 1. The adversary releases SET (v1 , w1 , ǫ1 ) where v1 is some positive real number, w1 = c((q/p)/ǫ)v1 and ǫ, ǫ1 are some sufficiently small positive real numbers. Let x1 and y1 (x1 ≤ y1 ) be the intervals chosen by ALG. By Lemma 5, we

62

S.P.Y. Fung, C.K. Poon, and F. Zheng w2

w1

y1

w1

y1

y1

x1 v1

x1 v1

v2

Fig. 3. Step 1. (left) Case 1, (right) Case 2.

assume that x1 is processed by A and y1 is processed by B. So, the expected gain by ALG is px1 + qy1 ≤ y1 . We can assume y1 > w1 /c or else OP T schedules w1 . We claim that x1 > v1 . Assume to the contrary that x1 = v1 . Then the interval x1 will just have a negligible contribution to the overall gain by ALG. More precisely, v1 = (w1 /c)ǫ/(q/p) ≤ ǫ(w1 /c) < ǫy1 and px1 = pv1 < pǫy1 < ǫy1 . Thus, the adversary releases another y1 just before the y1 in SET (v1 , w1 , ǫ1 ) finishes. Then ALG gains at most p(x1 + y1 ) + qy1 = pv1 + y1 ≤ (1 + ǫ)y1 . OP T gains y1− + y1 ≥ 2y1 − ǫ1 . Hence the claim follows. Now we consider two cases depending on the relative size of x1 and y1 . Case 1: y1 ≥ (4/c)x1 (i.e., y1 is large). The adversary releases another y1 just before the y1 in SET (v1 , w1 , ǫ1 ) finishes. See Figure 3. Upon finishing x1 , algorithm A can go on to finish the new y1 . Therefore, ALG has an expected gain of p(x1 + y1 ) + qy1 = px1 + y1 ≤ ((c/8) + 1)y1 . On the other hand, OP T gains y1− + y1 ≥ 2y1 − ǫ1 . Hence the competitive ratio is approximately 2/(1 + c/8) = c by definition of c. Case 2: y1 < (4/c)x1 (i.e., y1 is small). In this case, the adversary releases a new set of intervals, SET (v2 , w2 , ǫ2 ), just before x1 finishes where v2 = x1 , w2 = max{c(px1 + qy1 )− x1 , v2 } and ǫ2 = ǫ1 /2. See Figure 3. Note that ALG cannot ignore SET (v2 , w2 , ǫ2 ) and continue with both x1 and y1 . Otherwise, its expected gain is px1 + qy1 while OP T gains x− 1 + w2 ≥ c(px1 + qy1 ) − ǫ1 . Suppose ALG (algorithm B) aborts y1 and starts some y2 in SET (v2 , w2 , ǫ2 ) while algorithm A continues to process x1 . Then the adversary releases a new y2 just before the y2 in SET (v2 , w2 , ǫ2 ) finishes. OP T can complete x− 1 in SET (v1 , w1 , ǫ1 ), − y2− in SET (v2 , w2 , ǫ2 ) and the new y2 . So, it gains x− + y + y 2 ≥ x1 + 2y2 − 1.5ǫ1 . 1 2 On the other hand, A can complete x1 and then the new y2 while B can complete y2 . Thus ALG gains at most p(x1 + y2 ) + qy2 = px1 + y2 ≤ (1/2)x1 + y2 . Now, suppose B continues with y1 but A aborts x1 to start some x2 in SET (v2 , w2 , ǫ2 ). Note: it must be the case that x2 < y1 (otherwise, we can apply Lemma 5). Then the adversary releases a y1 just before the y1 in SET (v1 , w1 , ǫ1 ) finishes. OP T gains y1− + y1 ≥ 2y1 − ǫ1 while ALG gains qy1 plus at most py1 (thus total gain by ALG is at most y1 ). Based on the above discussion, the only remaining sensible response for ALG is to abort both x1 and y1 and start some x2 and y2 in SET (v2 , w2 , ǫ2 ) (where

Improved Randomized Online Scheduling of Unit Length Intervals and Jobs

63

x2 ≤ y2 ). By Lemma 5, we can further assume that x2 < y2 and that x2 and y2 are started by A and B respectively. Moreover, we claim that x2 > v2 . Assume to the contrary that x2 = v2 = x1 . Then the adversary releases a y2 just before − the y2 in SET (v2 , w2 , ǫ2 ) finishes. OP T gains x− 1 + y2 + y2 ≥ x2 + 2y2 − 1.5ǫ1 while ALG gains p(x2 + y2 ) + qy2 ≤ (1/2)x2 + y2 . Hence the claim follows. This also finishes our discussion on Step 1 and we now proceed to Step 2. Step i ≥ 2. In general, at the beginning of Step i ≥ 2, we have the following situ− − ation: OP T has gained x− 1 +x2 +· · · xi−1 while ALG has not gained anything yet. Moreover, A and B of ALG are respectively serving xi and yi in SET (vi , wi , ǫi )  i−1 where vi = xi−1 , wi = max{c(pxi−1 + qyi−1 ) − i−1 and j=1 xj , vi }, ǫi = ǫ1 /2 vi < xi < yi . We go through a similar case analysis to that in Step 1: Case 1: yi ≥ (4/c)xi . The adversary releases another yi just before the yi in SET (vi , wi , ǫi ) finishes. Then ALG has an expected gain of p(xi + yi ) + qyi = pxi + yi ≤ ((c/8) + 1)yi . On the other hand, OP T gains yi− + yi ≥ 2yi − ǫi . Hence the competitive ratio is (approx.) 2/(1 + c/8) = c. Case 2: yi < (4/c)xi . The adversary releases SET (vi+1 , wi+1 , ǫi+1 ) just before xi finishes where vi+1 = xi , wi+1 = max{c(pxi + qyi ) − xi − · · · − x2 − x1 , vi+1 } and ǫi+1 = ǫ1 /2i . See Figure 4. As in Step 1, ALG cannot continue with both xi and yi . Otherwise, OP T − − schedules (on top of x− 1 , . . . , xi−1 ) xi and then wi+1 , thus gaining at least c(pxi + qyi ) − 2ǫ1 . Suppose B aborts yi in order to start some yi+1 in SET (vi+1 , wi+1 , ǫi+1 ) while A continues with xi . Then the adversary releases a new yi+1 just before the − yi+1 in SET (vi+1 , wi+1 , ǫi+1 ) finishes. So, OP T gains (on top of x− 1 , . . . , xi−1 ) − − xi +yi+1 +yi+1 ≥ xi +2yi+1 −1.5ǫi while ALG gains at most p(xi +yi+1 )+qyi+1 = pxi + yi+1 ≤ (1/2)xi + yi+1 . Suppose B continues with yi while A aborts xi to start xi+1 . By Lemma 5, we have that xi+1 < yi . Then the adversary releases a yi just before the yi − − in SET (vi , wi , ǫi ) finishes. Thus, OP T gains (on top of x− 1 , . . . , xi−1 ) yi + yi ≥ 2yi − ǫi while ALG gains qyi plus at most pyi (thus total gain by ALG is at most yi ). Based on the above, we conclude that ALG has to abort both xi and yi and start some xi+1 and yi+1 in SET (vi+1 , wi+1 , ǫi+1 ) (where xi+1 ≤ yi+1 ). By Lemma 5, we can assume that xi+1 < yi+1 and that A and B respectively started xi+1 and yi+1 . wi+1 wi yi xi ...

xi−1

vi+1 vi

x1

Fig. 4. Step i

64

S.P.Y. Fung, C.K. Poon, and F. Zheng

− We now proceed to Step i+1. Note that OP T has already gained x− 1 +· · ·+xi while ALG still has not gained anything. We can also argue, as in Step 1, that xi+1 > vi+1 . Note that c(pi xi + qi yi ) − (x1 + · · · + xi ) ≤ cyi − (x1 + · · · + xi ) < 4xi − (x1 + · · · + xi ) where the last inequality holds since we have yi < (4/c)xi . From Woeginger [17], we know that for all 2 < d < 4, any strictly increasing sequence a1 , a2 , . . . cannot satisfy a1 + a2 + · · ·+ ai−1 + ai + ai+1 ≤ dai for every finite i. So eventually, c(pi xi +qi yi )−(x1 +· · ·+xi ) ≤ xi and hence we set wi+1 = vi+1 (= xi ). In that situation, if ALG continues both xi and yi , its expected − gain is pxi + qyi . OP T schedules wi+1 and gains in total x− 1 + · · · + xi + wi+1 ≥ c(pi xi + qi yi ) − 2ǫ1 . On the other hand, it makes no difference even if ALG aborts xi to start wi+1 (which is equal to xi ).

5.2

A Lower Bound of 2

In this subsection, we let c = 2 − ǫ for some small positive ǫ. The previous lower bound can be strengthened to c by refining the analysis in Case 1 of the proof in Section 5.1. Below we only consider Step 1 of the analysis. We omit the adaptation to the i-th step, which is just a matter of renaming variables. Consider Case 1 of Section 5.1. Observe that when OP T has just completed y1− , the ratio of what OP T gains to that of ALG is y1− /(px1 ) ≥ (4/c)x1 /((1/2)x1 ) = 8/c. This is much larger than what is needed (i.e., 2). However, with the range of probability p being [0, 1/2], it is difficult to design one good SET (·, ·, ·) that makes ALG perform poorly. Thus, we break down the range [0, 1/2] into subranges [0, h1 ), [h1 , h2 ), . . . , [hn , hn+1 = 1/2] and design an appropriate SET (·, ·, ·) for each subrange. More formally, we define two sequences h1 , . . . , hn+1 and u1 , u2 , . . . , un as 4ǫ = (2−ǫ) follows. h1 = 4(2−c) 2 and for i ≥ 1, c2 ui =

hi+1 =





1−hi c2 /4 c(1−hi )−1 , 1−2ǫ+ǫ2 /4 , ǫ

ui +1−c cui +c2 /4−c , 1 2,

if hi
0 and ui > 1−2h > 1. i

i

2

/4 > 1 for sufficiently small ǫ. Let n be the smallest i Otherwise, ui = 1−2ǫ+ǫ ǫ such that hi+1 = 1/2. The following lemma (proof omitted) shows that such an n does exist.

Lemma 6. There exists a finite n such that hn+1 = 1/2. If

1−ǫ 2−ǫ

≤ hn < 21 , then un =

then un =

2

1−hn c /4 c(1−hn )−1

1−2ǫ+ǫ2 /4 ǫ

by definition of ui . Otherwise, if hn
1+u 1+h +1 ((c/4)+u −1) = c, since p < hi+1 and (c/4) + ui − 1 > 0. The last equality is by definition of hi+1 . Case 1.2.2. ALG aborts y1 and starts some x′ and y ′ (≥ x′ ) in SET (0, ui , ǫ1 ) with probability p and q respectively. (Note that by Lemma 5, ALG cannot start x′ with probability q ≥ 1/2.) In this case, another y ′ will be released just before the x′ in SET (0, ui , ǫ1 ) finishes. If ALG aborts x′ to start the newly arrived y ′ , a third y ′ will be released just before the first y ′ in SET (0, ui, ǫ1 ) finishes and then the local competitive ratio is 2 with the same reasoning as the proof in Lemma 5. On the other hand, if ALG does not abort x′ , then its expected gain is p(x1 + x′ ) + qy ′ ≤ p((c/4) + x′ ) + (1 − p)y ′ . OP T will complete (x′ )− and y ′ , and it has completed y1− before. (Note that if x′ is the first interval with weight of v2 = 0 in SET (0, ui , ǫ1 ), then we set (x′ )− to be of weight 0.) The x′ +1+y ′ 1+x′ +y ′ local competitive ratio is (approx.) p((c/4)+x ′ )+(1−p)y ′ = px′ +p(c/4)+(1−p)y ′ . It can be shown that this fraction is at least c. i

i

i

i

i

Case 1.3: hn ≤ p < hn+1 = 1/2. The analysis of this case is similar to that of Case 1.2 and is omitted. Hence, in each case, the local competitive ratio is at least c.

References 1. Albers, S.: On randomized online scheduling. In: Proc. 34th ACM Symposium on Theory of Computing, pp. 134–143 (2002) 2. Awerbuch, B., Bartal, Y., Fiat, A., Rosen, A.: Competitive non-preemptive call control. In: Proc. 5th ACM-SIAM Symposium on Discrete Algorithms, pp. 312– 320 (1994) 3. Baruah, S., Koren, G., Mao, D., Mishra, B., Raghunathan, A., Rosier, L., Shasha, D., Wang, F.: On the competitiveness of on-line real-time task scheduling. RealTime Systems 4, 125–144 (1992)

66

S.P.Y. Fung, C.K. Poon, and F. Zheng

4. Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis. Cambridge University Press, New York (1998) 5. Canetti, R., Irani, S.: Bounding the power of preemption in randomized scheduling. SIAM Journal on Computing 27(4), 993–1015 (1998) 6. Chin, F.Y.L., Chrobak, M., Fung, S.P.Y., Jawor, W., Sgall, J., Tich´ y, T.: Online competitive algorithms for maximizing weighted throughput of unit jobs. Journal of Discrete Algorithms 4(2), 255–276 (2006) 7. Chin, F.Y.L., Fung, S.P.Y.: Online scheduling with partial job values: does timesharing or randomization help? Algorithmica 37(3), 149–164 (2003) 8. Chrobak, M., Jawor, W., Sgall, J., Tich´ y, T.: Online scheduling of equal-length jobs: randomization and restarts help. SIAM Journal on Computing 36(6), 1709– 1728 (2007) 9. Englert, M., Westermann, M.: Considering suppressed packets improves buffer management in QoS switches. In: Proc. 18th ACM-SIAM Symposium on Discrete Algorithms, pp. 209–218 (2007) 10. Epstein, L., Levin, A.: Improved randomized results for that interval selection problem. In: Proc. 16th European Symposium of Algorithms (2008) 11. Fung, S.P.Y., Poon, C.K., Zheng, F.: Online interval scheduling: Randomized and multiprocessor cases. In: Lin, G. (ed.) COCOON 2007. LNCS, vol. 4598, pp. 176– 186. Springer, Heidelberg (2007) 12. Kim, J.-H., Chwa, K.-Y.: Scheduling broadcasts with deadlines. Theoretical Computer Science 325(3), 479–488 (2004) 13. Koren, G., Shasha, D.: Dover : An optimal on-line scheduling algorithm for overloaded uniprocessor real-time systems. SIAM Journal on Computing 24, 318–339 (1995) 14. Miyazawa, H., Erlebach, T.: An improved randomized on-line algorithm for a weighted interval selection problem. Journal of Scheduling 7(4), 293–311 (2004) 15. Seiden, S.S.: Randomized online interval scheduling. Operations Research Letters 22(4–5), 171–177 (1998) 16. Ting, H.-F.: A near optimal scheduler for on-demand data broadcasts. In: Calamoneri, T., Finocchi, I., Italiano, G.F. (eds.) CIAC 2006. LNCS, vol. 3998, pp. 163–174. Springer, Heidelberg (2006) 17. Woeginger, G.J.: On-line scheduling of jobs with fixed start and end times. Theoretical Computer Science 130(1), 5–16 (1994) 18. Zheng, F., Fung, S.P.Y., Chan, W.-T., Chin, F.Y.L., Poon, C.K., Wong, P.W.H.: Improved on-line broadcast scheduling with deadlines. In: Chen, D.Z., Lee, D.T. (eds.) COCOON 2006. LNCS, vol. 4112, pp. 320–329. Springer, Heidelberg (2006)

Minimizing Average Flow Time on Unrelated Machines Ren´e A. Sitters⋆ Technische Universiteit Eindhoven [email protected]

Abstract. We give an O(Q)-approximation for minimizing average flow time on unrelated machines, where Q is the maximum number of different process times on a machine. Consequently, the ratio is O(log P/ log ǫ) if all process times are a power of ǫ. Here, P is the ratio of the maximum and minimum process time of a job.

1

Introduction

A natural measure for the efficiency of processing a job is the time that elapses between the earliest moment it can start and the time it completes. This is often called the flow time of the job. Given a set of jobs that become available over time, the objective of a scheduler might be to minimize the total sum of flow times, or equivalently, the average flow time of a job. We consider the unrelated machine model in which we have m machines and n jobs and the time to process job j on machine i is given by an integer pij . We allow a job to be preempted and continued later on the same machine but do not allow a job to migrate to another machine. Hence, we consider preemptive, non-migratory schedules on unrelated machines with the objective of minimizing the total flow time. The problem in which all machines are identical is well-studied [1,3,4,9,10]. Leonardi and Raz [9] showed that the Shortest Remaining Processing Time (SRPT) rule gives an O(log P )-approximate schedule, where P is the ratio of the maximum and minimum processing time. Recently, Garg and Kumar [7] showed that for non-migratory schedules, no polynomial time algorithm can be better than Ω( log P/ log log P )-approximate, unless P=NP. In an upcoming paper [8], this bound is increased to Ω(log P/ log log P ). For the related machine model in which machines have different speeds, Garg and Kumar [5,6] give an O(log P )-approximation algorithm. The same authors [7] also give an O(log P )-approximation algorithm for the problem in which each job is given a process time pj and a subset of the machines on which it may be processed, i.e., the process time pij of job j on machine i is either pj or infinite. ⋆

Supported by a research grant from the Netherlands Organization for Scientific Research (NWO-veni grant).

E. Bampis and M. Skutella (Eds.): WAOA 2008, LNCS 5426, pp. 67–77, 2009. c Springer-Verlag Berlin Heidelberg 2009 

68

R.A. Sitters Table 1. Approximability of average flow time scheduling Problem Single machine Identical machines Related machines Subset model Unrelated machines

Lower bound 1 Ω(log P/ log log P ) Ω(log P/ log log P ) Ω(log P/ log log P ) Ω(log P/ log log P )

[8] [8] [7] [7]

Upper bound 1 [2] O(log P ) [9,10] O(log P ) [5,6] O(log P ) [7] O(Q)

The problem of minimizing total flow time on unrelated machines is wide open. We present an O(Q)-approximation, where Q is the maximum number of different process times on a machine, i.e., Q = maxi |{pi1 , pi2 , . . . , pin }|. Although Q may be larger than the O(log P ) ratio we have for the more restricted models, there are several interesting corollaries. For example, it yields an O(log P )approximation for unrelated machines, provided that all process times are a power of some constant ǫ. Further, it gives an O(1)-approximation if the number of different process times is constant. Independently, Garg et al. [8] gave an O(K)-approximation for this problem, where K is the total number of different process times. Their algorithm is more general in the sense that it also achieves the best ratios for the identical machines and the subset model. On the other hand, it uses the single source unsplittable flow problem as a subroutine whereas our algorithm uses simply unweighted bipartite matching instead. Our algorithm consists of four simple steps: linear programming, reordering jobs on each machine, matching, and again reordering per machine. Except for the linear program, it differs substantially from any of the previous algorithms for minimizing total flow time on parallel machines.

2

Algorithm

The following algorithm gives an O(Q)-approximation for minimizing total flow time on unrelated machines, where Q is the number of different process times per machine. Outline of the algorithm: 1. 2. 3. 4. 2.1

Solve linear program (LP). Apply preemptive SPT on each machine: → LP-schedule. Find an assignment of jobs to machines. Apply preemptive SPT on each machine: → Schedule A. Step 1: The Linear Program

We use a time-indexed LP-formulation. The variable xijt gives the time that job j is processed on machine i between time t − 1 and t. It is defined for all

Minimizing Average Flow Time on Unrelated Machines

69

i ∈ {1, 2, . . . , m}, j ∈ {1, 2, . . . , n}, and t ∈ {1, 2, . . . , T }, where T an upper bound on the schedule length. Constraint (3) ensures that each job to be completely processed. Constraint (4) says that any timeslot is filled for at most one time unit. To explain (1) and (2) consider an optimal non-migratory schedule σ. Let xσijt , Pjσ , and Mjσ be the corresponding values. Clearly, Pjσ is a lower bound on the flow time of job j since it is the total time that job j is processed. Since all release dates and process times are integer we may assume that in σ any job starts, ends, or is preempted only at integer time points. Consequently, each value xσijt will be either zero or one. The flow time of job j is exactly max{t − rj | xσi,j,t = 1 for some i}. From constraint (2) we see that also Mjσ is a lower bound on the flow time of job j in schedule σ. In the LP-objective function we take the average of Pjσ and Mjσ . Hence, the optimal LP-value, say Z LP , is a  lower bound on the minimal total flow time, say  F ∗ . Moreover, if we let P LP = j PjLP and M LP = j MjLP , then also 12 P LP and 12 P LP are a lower bound on F ∗ . Lemma 1. The optimal LP-value Z LP is at most the minimal total flow time F ∗ . Further, for any optimal LP-solution we have P LP ≤ 2F ∗ and M LP ≤ 2F ∗ .  1   LP Pj + MjLP 2 j  = xijt

(LP) minimize Z LP = subject to PjLP

MjLP =

  xijt i

  xijt i



t

for all j (1)

t

i

pij

t

=1

xijt ≤ 1

pij

(t − rj )

for all j (2) for all j (3) for all i, t (4)

j

2.2

xijt = 0

if t ≤ rj − 1, for all i, j, t (5)

xijt ≥ 0

for all i, j, t (6)

Step 2: Preemptive SPT

Given an optimal LP-solution we apply the preemptive Shortest Processing Time procedure independently for each machine. This procedure simply reorders the processing of jobs on a machine and does not change the time that a job is processed on a machine. We assume that jobs are labeled in order of non-decreasing release dates, i.e., rj ≤ rk if j < k. To simplify notation further we introduce for each machine i a total order ≺i on the set of jobs by setting j ≺i k if pij < pik or pij = pik and j < k.

70

R.A. Sitters

 Let yij = t xi,j,t be the time that job j is processed on machine i in the LP-solution. Starting at time zero we schedule jobs of length yij in the order ≺i and preempt job k only at the moment a job j arrives with j ≺i k. In other words we schedule preemptively in non-decreasing order of process times pij , and jobs with the same process time are ordered by release dates. We do this for each machine i and call the resulting schedule the LP-schedule. Note that this schedule may not be feasible since a job may be processed simultaneously on different machines. This pseudo schedule has two nice properties: (i) On each machine, jobs with the same process time pij are scheduled in order of release dates. (ii) If yij > 0 then there is no idle time on machine i between time rj and the time that the fraction of job j on machine i is completed. We will use both properties in our proof. 2.3

Step 3: Assignment of Jobs to Machines

Let K be the total number of different process times and denote the set by {p1 , p2 , . . . , pK } = {pij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}. Let Ki be the number of different process times on machine i. Then, Q = maxi Ki . We say that job j is of type h on machine i if pij = ph . Given the LP-schedule, let  n(i, h) = yij /pij j of type h on i

be the sum of the fractions of type h jobs on machine i. To simplify the analysis we restrict for the moment to the case that n(i, h) is integer for all i and h. We discuss how to handle the general case in Section 3.1. For each machine i and type h we partition the LP-schedule in intervals. Let Time(i, h, q) be the earliest point in time at which the fractions of jobs of type h on machine i add up to q. (See Figure 1.) Define Time(i, h, 0) = 0 and define for all q ∈ {1, 2, . . . , n(i, h)} the interval Interval(i, h, q) = [Time(i, h, q − 1), Time(i, h, q)]. 1 /3

1 /3

2 /3

1 /2

1 /6

M a c h in e E 0

tim e I n te r v a l( E,D ,1 ) ty p e D -jo b s

I n te r v a l( E,D ,2 ) o th e r jo b s a n d id le tim e

Fig. 1. The partition into intervals for machine i and job-type h. This is part of step 3 in the algorithm. The numbers indicate fractions of h-jobs.

Minimizing Average Flow Time on Unrelated Machines

71

Let Set(i, h, q) be the set of h-jobs processed in Interval(i, h, q). The sum of fractions of the jobs add up to exactly one for each set. Consequently, the total number of sets defined in this way is the number of jobs, n. Note that any union of k sets contains at least k jobs and, conversely, any collection of k jobs hits at least k sets. Hence, by Hall’s theorem there is a perfect matching of jobs to sets. Now we compute a perfect matching Π. Let Job(i, h, q) be the job matched with Set(i, h, q) (or equivalently, to Interval(i, h, q)). 2.4

Step 4: Once More Preemptive SPT

Given the assignment Π of step 3, we apply preemptive SPT on each machine in the same way as we did in step 2. For each machine i we process all jobs assigned to i in step 3 using the SPT procedure, i.e., starting at time zero we process the available jobs in the order ≺i and preempt job k only at the moment a job j arrives with j ≺i k. Let this result in schedule A.

3

Analysis

Given any schedule σ we define the mean flow time of a job as the average time at which a job is processed minus its release time. To be more precise, let δij (t) = 1 if job j is processed at machine i at time t ≤ T and zero otherwise. Here, T is an upper bound on the length of the schedule. The mean flow time is  1  T δij (t) t dt − rj . fjσ := pij t=0 i Now let f LP be the total mean flow time of the LP-schedule constructed in step 2 and let M LP = j MjLP correspond to the LP-solution.

Lemma 2. f LP ≤ M LP

Proof. Given any schedule σ the contribution of job j on machine i to the total T mean flow time is p1 t=0 δij (t) t dt − rj . For given values yij the expression is minimized if we schedule preemptively in non-decreasing order of pij . ⊓ ⊔ i j

The total time that jobs are processed in the LP-schedule is exactly P LP since in step 2 we only reorder the jobs on each machine. Clearly, step 4 preserves volume as well. The next lemma shows that the same holds for step 3. Lemma 3. The total volume of the schedule is unchanged by step 3. Proof. The Set(i, h, q) only contains the h-jobs from Interval(i, h, q) and these fractions add up to exactly one. Therefore, the volume of these fractions is exactly ⊓ ⊔ ph , which is the process time of job Job(i, h, q) on machines i. Since the volume of the schedule is invariant, we denote it from now on simply by V . Let f A and F A be, respectively, the total mean flow time and the total flow time of schedule A.

72

R.A. Sitters

Lemma 4. F A ≤ f A + QV Proof. By our SPT-rule no job of type h is preempted by another job of type h in schedule A. Moreover, there is no idle time between start and completion of a job. Thus, for each machine i and type h the sum of completion times Cj of h-jobs scheduled on i is at most the sum of the start times sj plus the total time that other jobs are processed on machine i. In other terms,  CjA − sA j ≤ Vi , j of type h on i

where Vi is the total process time (volume) of machine i. Further, the mean flow time fjA is at least sA j − rj . A A A A A FjA = CjA − rj = sA j − rj + Cj − sj ≤ fj + Cj − sj .

Taking the sum over all types h and machines i completes the proof.     A A A i Ki Vi ≤ f + QV. i h j of type h on i Fj ≤ f +

⊓ ⊔

Instead of analyzing the total flow time of schedule A we will construct another schedule B for the same assignment Π found in step 3 and we will bound its total flow time F B instead. Since we applied preemptive SPT in step 4, the total mean flow time f A is minimal given assignment Π. Hence, f A ≤ f B where f B is the total mean flow time of schedule B. Further, f B ≤ F B . With Lemma 4 we get F A ≤ f A + QV ≤ f B + QV ≤ F B + QV. (7) From Lemma 1 we know that V ≤ 2F ∗ . Hence, it suffices to show that B is an O(Q)-approximation for the total flow time. Of course, this implies that we could have replaced step 4 by the definition of B as given below. However, we prefer to keep the algorithm simple and put the complexity in the analysis. Now we construct B. We schedule the jobs one by one in a greedy manner as follows. Consider the machines in an arbitrary order and for each machine consider the types in an arbitrary order. For ease of notation let this order be 1, 2, . . . , K. For machine i and type h consider the jobs Job(i, h, q) in increasing order of q (1 ≤ q ≤ n(i, h)). (If n(i, h) = 0 we do nothing.) We use the following rule to schedule job Job(i, h, q): (a) Schedule it preemptively and as early as possible but not before its release time and, (b) do not use the timeslots assigned in the LP-schedule to jobs of type g > h (the types we did not schedule yet on this machine in this procedure). This defines schedule B. Note that by property (i) (Section 2.2) we ensure that job Job(i, h, q1 ) ≺i Job(i, h, q2 ) if q1 ≤ q2 . Therefore, for each type h the order of the jobs is preserved, i.e, they are processed in order ≺i . Consequently, no job of type h will be preempted by another job of type h in schedule B.

Minimizing Average Flow Time on Unrelated Machines

73

To simplify notation we consider from now on one specific machine i and type h and denote the jobs assigned by Π to machine i and being of type h on i simply by 1, 2, . . . , n(i, h). Further, we denote Job(i, h, q) by q. Let Cq be the completion time of job q in B. Lemma 5. Cq ≤ Time(i, h, q + 1) for any q ≤ n(i, h) − 1. Proof. The proof is by induction on q. Job 1 is released no later than Time(i, h, 1) since it is scheduled in the LP-schedule before this time. In the LP-schedule the fractions of type h jobs processed between Time(i, h, 1) and Time(i, h, 2) is exactly 1. By rule (b) we ensure that job 1 completes not later than Time(i, h, 2). Now assume that Cq ≤ Time(i, h, q + 1) for some q ≤ n(i, h) − 2. By the same arguments we have rq+1 < Time(i, h, q + 1). Since we schedule the jobs in order of release dates and as early as possible and by rule (b) there is enough idle time left to complete job q + 1 before Time(i, h, q + 2). ⊓ ⊔ For all q ∈ {1, 2, . . . , n(i, h)} we define: ◦ VqB as the total process time (over all types) in B between Cq−1 and Cq . (Define C0 = 0); ◦ VqLP as the total process time (over all types) in the LP-schedule between Cq−1 and Cq ; ◦ fjLP as the contribution of the type h jobs in interval Interval(i, h, j) to the total mean flow time of the LP-schedule. Define VqB = VqLP = fqLP = 0 for any q ≤ 0. The next lemma forms the core of the proof. It bounds the flow time of a job in terms of its mean flow time in the LP-schedule and the volume of the schedule. Lemma 6. For any q ∈ {1, 2, . . . , n(i, h)}, the flow time of q is LP Cq − rq ≤ fq−1 +

q    B Vj + VjLP .

j=q−2

Proof. For q = 1 we have C1 − r1 ≤ V1B + V1LP , since we schedule the job as early as possible while considering the already scheduled jobs and the timeslots used by other types in the LP-schedule. Therefore, C1 − r1 is at most the total volume of all jobs between r1 ≥ C0 = 0 and C1 plus the volume of the slots used by the LP-schedule in this interval. Similarly, for q = 2 we have C2 − r2 ≤ V1B + V1LP + V2B + V2LP . For q ≥ 3 we consider two cases. If rq ≥ Cq−3 then by the same arguments we have q    B Vj + VjLP . Cq − rq ≤ j=q−2

74

R.A. Sitters

Now assume rq ≤ Cq−3 . Note that any h-jobs appearing in the LP-schedule in the interval Interval(i, h, q − 1) was released not later than any of the hjobs appearing in the interval Interval(i, h, q), and in particular not later than rq . By definition, Interval(i, h, q − 1) starts at Time(i, h, q − 2) and also by definition, fractions in any interval add up to exactly one. Hence, LP fq−1 ≥ Time(i, h, q − 2) − rq ≥ Cq−3 − rq ,

where the last inequality follows from Lemma 5. We obtain Cq − rq = Cq − Cq−3 + Cq−3 − rq LP ≤ Cq − Cq−3 + fq−1

(8)

Since rq ≤ Cq−3 and by the similar argument we used for q ≤ 2 we know that Cq − Cq−3 ≤

q   B  Vj + VjLP .

j=q−2

Combined with (8) this completes the proof of the lemma. LP Cq − rq ≤ fq−1 +

q  

j=q−2

 VjB + VjLP .

⊓ ⊔

When we apply Lemma 6 to every job and take the sum we see that the total flow time of schedule B is at most the mean flow time of the LP-schedule plus 3Q times the volume of B plus 3Q times the volume of the LP-schedule: F B ≤ f LP + 6QV, where V is the volume of B, or equivalently, of the LP-schedule. Finally, we combine this inequality with inequality (7) and Lemma’s 2 and 1. F A ≤ F B + QV ≤ f LP + 7QV ≤ M LP + 7QV ≤ 2F ∗ + 14QF ∗ = (2 + 14Q)F ∗ . 3.1

Refined Algorithm and Analysis

So far, we restricted the algorithm and proof to the case where n(i, h) is integer for all i and h. We sketch what to do if this is not the case. Step 1 and 2 are not changed. In step 3 we define for each pair i and h a total of ⌊n(i, h)⌋ intervals as before. Let n′ be the total number of intervals defined. The remaining fractions are partitioned in n − n′ groups such that the fractions in each group add up to exactly one. Hence, a group spans multiple machines and/or multiple types. Now we describe how the grouping is done. Take an arbitrary ordering of the machines, for example, 1, 2, . . . , m and assume that the processing times are indexed in non-decreasing orde: p1 < p2 < · · · < pK . Consider the following process. Place all remaining fractions in a list such that:

Minimizing Average Flow Time on Unrelated Machines

75

(i) a fraction of type h is placed in the list before a fraction of type k if h < k (i.e. ph < pk ), and (ii) a fraction of type h from machine i is placed in the list before a fraction of type h from machine j if i < j. Partition this list into n−n′ groups of size one and label them n′ +1, n′ +2, . . . , n (where the fractions of jobs of type 1 on machine 1 are placed in group n′ + 1. ). We defined n′ sets (intervals) and n − n′ groups and by Hall’s theorem there is a perfect matching of the n jobs to the n′ sets and n − n′ groups. A job that is matched to a set is uniquely assigned to a machine. We assign a job that is matched to a group to an arbitrary machine that has a fraction of the job in the group. Given the assignment, we define a feasible schedule as follows. We schedule the jobs that arise from the intervals as in step 4 above. Then, the total flow time of these jobs is O(Q) times the total optimal flow time. Now we schedule the jobs that arise from the groups. Let job j be the job assigned to the last group in the list as described above. We schedule each of the remaining jobs, except for job j, as early as possible on the machine to which we assigned it. Finally, we schedule job j on its fastest machine. Of course, we do this as early ′ as possible given the schedule of the other jobs. Let this schedule be A′ and V A its volume. ′

Lemma 7. V A ≤ 2V . Proof. Number the groups in the list n′ +1, n′ +2, . . . , n. Since groups are ordered by processing time we guarantee that the process time of the job assigned to group i is at most the volume of group i + 1, for any i ∈ {n′ + 1, n′ + 2, . . . , n− 1}. Hence, the total volume of all jobs, except for j, is at motst V . Clearly, the assignment of job j adds no more than V . ⊓ ⊔ It remains to bound the flow time of the jobs matched to groups. By construction of the list, for each machine i and type h there are at most two groups that contain fractions of jobs that are assigned to i in the LP-schedule and are of type h on machine i. Hence, groups contribute at most two jobs of each type to any machine in the assignment. When we also note that there is no idle time between release date and completion time of any of these jobs in A′ we see that the total flow time of these jobs is O(QV ), which is O(Q) times the optimal total flow time.

4

Interval-Indexed LP

The time indexed LP we used is not of polynomial size if the upper bound T on the schedule length is not polynomially bounded. We can easily get around this problem using an interval indexed LP. Assume that the smallest process time is 1 and the largest is P . Then any job j completes latest at rj + nP . For each job j we define a set of time points 2 {rj , rj + 1, rj + 2, rj + 4, . . . , rj + 2⌈ log(nP )⌉ . This gives a total of O(n log(nP ))

76

R.A. Sitters

time points, which partition the time line in a set I of the same number of intervals indexed by {1, 2, . . . , |I|}. Now let zijs denote the time that job j is processed on machine i during interval s ∈ {1, 2, . . . , |I|}. Let ts be the end point of interval t and let t0 = 0.  1   LP2 Pj + MjLP2 3 j  subject to PjLP 2 = zijs

(LP2) minimize

MjLP 2 =

  zijs i

  zijs i



s

for all j

s

i

pij

s

pij

=1

zijs ≤ ts − ts−1

(ts − rj )

for all j for all j for all i, s

j

zijs = 0 zijs ≥ 0

if ts ≤ rj − 1 for all i, j, s for all i, j, s

For any feasible schedule σ with the corresponding values zijs we have PjLP2 = Pjσ . On the other hand, MjLP 2 may be larger than the actual flow time of job j. But not by more than a factor two since for any time point t in interval s, we have (ts − rj )/(t − rj ) ≤ (ts − rj )/(ts−1 − rj ) ≤ 2. Hence, MjLP 2 ≤ 2Mjσ . We changed the factor in the objective from 12 to 13 such that the optimal LP-solution is still a lower bound on the minimal total flow time F ∗ . Further, given an optimal LP-solution, we have P LP 2 ≤ 3F ∗ and M LP 2 ≤ 3F ∗ . Hence, apart from the change in the constant from 2 to 3, Lemma 1 remains the same. Lemma 2 remains valid as well. The other lemma’s do not use the LP-formulation but only assume that the schedule after step 2 is a preemptive SPT schedule.

5

Open Problems

It is not clear wether or not an O(log P )-approximation would be possible for unrelated machines. From Table 1 one might be inclined to believe it is the case. But the unrelated machine model could very well be much more difficult than related machines or the subset model. As a comparison, note that average completion time in the preemptive setting is polynomial time solvable in the subset model [11] but is APX-hard for unrelated machines [12]. A possible approach for a negative answer would be to prove that there is no constant approximation ratio possible in the case that all process times are within the interval [1, 2]. Further, the recently presented lower bounds [7,8] only apply to the nonmigratory setting. It remains to prove existence or non-existence of an O(1)approximation for minimizing total flow time on parallel machines if migrations of jobs is allowed.

Minimizing Average Flow Time on Unrelated Machines

77

References 1. Awerbuch, B., Azar, Y., Leonardi, S., Regev, O.: Minimizing the flow time without migration. In: STOC 1999: Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, pp. 198–205 (1999) 2. Baker, K.R.: Introduction to sequencing and scheduling. Wiley, Chichester (1974) 3. Bansal, N.: Minimizing flow time on a constant number of machines with preemption. Oper. Res. Lett. 33(3), 267–273 (2005) 4. Baptiste, P., Brucker, P., Chrobak, M., D¨ urr, C., Kravchenko, S.A., Sourd, F.: The complexity of mean flow time scheduling problems with release times. J. of Scheduling 10(2), 139–146 (2007) 5. Garg, N., Kumar, A.: Better algorithms for minimizing average flow-time on related machines. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4051, pp. 181–190. Springer, Heidelberg (2006) 6. Garg, N., Kumar, A.: Minimizing average flow time on related machines. In: STOC 2006: Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, pp. 730–738 (2006) 7. Garg, N., Kumar, A.: Minimizing average flow-time: Upper and lower bounds. In: FOCS 2007: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, pp. 603–613 (2007) 8. Garg, N., Kumar, A., Muralidhara, V.N.: Minimizing total flow-time: the unrelated case. In: ISAAC 2008: Proceedings of 19th International Symposium on Algorithms and Computation (2008) 9. Leonardi, S., Raz, D.: Approximating total flow time on parallel machines. In: STOC 1997: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pp. 110–119 (1997) 10. Leonardi, S., Raz, D.: Approximating total flow time on parallel machines. J. Comput. Syst. Sci. 73(6), 875–891 (2007) 11. Sitters, R.A.: Complexity of preemptive minsum scheduling on unrelated parallel machines. J. of Algorithms 57(1), 37–48 (2005) 12. Sitters, R.A.: Approximability of average completion time scheduling on unrelated machines. In: Halperin, D., Mehlhorn, K. (eds.) Esa 2008. LNCS, vol. 5193, pp. 768–779. Springer, Heidelberg (2008)

Cooperation in Multiorganization Matching Laurent Gourv`es1,2, J´erˆome Monnot1,2 , and Fanny Pascual3 1 CNRS, UMR 7024, F-75775 Paris, France Universit´e de Paris-Dauphine, LAMSADE, F-75775 Paris, France 3 LIP6, Universit´e Pierre et Marie Curie - Paris 6 104 avenue du Pr´esident Kennedy, 75016 Paris, France {laurent.gourves,monnot}@lamsade.dauphine.fr, [email protected] 2

Abstract. We study a problem involving a set of organizations. Each organization has its own pool of clients who either supply or demand one unit of an indivisible product. Knowing the profit induced by each buyerseller pair, an organization’s task is to conduct such transactions within its database of clients in order to maximize the amount of the transactions. Inter-organizations transactions are allowed: in this situation, two clients from distinct organizations can trade and their organizations share the induced profit. Since maximizing the overall profit leads to unacceptable situations where an organization can be penalized, we study the problem of maximizing the overall profit such that no organization gets less than it can obtain on its own. Complexity results, an approximation algorithm and a matching inapproximation bound are given.

1

Introduction

We are given a two-sided assignment market (B, S, A) defined by a set of buyers B, a disjoint set of sellers S, and a nonnegative matrix A = (aij )(i,j)∈B×S where aij represents a profit if the pair (i, j) ∈ B × S trade. In this market products come in indivisible units, and each participant either supplies or demands exactly one unit. The units need not be alike and the same unit may have different values for different participants. We study a problem involving a set of organizations {O1 , . . . , Oq } which forms a partition of the market. A buyer (resp. seller) is a client of exactly one organization. It is assumed that for every transaction (i, j), organizations of i and j make an overall profit aij which is divided between the seller’s organization and the buyer’s organization as follows. The seller’s organization receives ps aij while the buyer’s organization gets pb aij , where ps and pb are fixed numbers between 0 and 1 and such that pb + ps = 1. Thus aij is a sort of commission that these two organizations divide according to pb and ps . We assume without loss of generality that 0 ≤ pb ≤ ps ≤ 1. In this model, buyers and sellers do not make pairs by themselves, but these pairs are formed by their organizations. Each organization acts as a selfish agent who only knows its list of clients and only cares about its profit. Thus, each organization Oi shall maximize the weight of a matching on its own list of clients E. Bampis and M. Skutella (Eds.): WAOA 2008, LNCS 5426, pp. 78–91, 2009. c Springer-Verlag Berlin Heidelberg 2009 

Cooperation in Multiorganization Matching

79

(this task can be done in polynomial time). However the global profit can be better if transactions between clients of distinct organizations are allowed. This leads to a situation of cooperation where the agents accept to disclose their lists of clients by reporting them to a trusted entity. This trusted entity can conduct transactions between a buyer and a seller from distinct organizations, and of course, it can also do it for two clients of the same organization. The trusted entity shall maximize the collective profits. However, maximizing the collective profits by returning a maximum weight matching may lead to unacceptable situations: each organization is selfish so it does not want to cooperate if its profit is worse than it could obtain on its own. The optimization problem faced by the trusted entity is then to maximize the collective profit so that no organization is penalized. 1.1

The Multiorganization Assignment Problem

The market is modelled with a weighted bipartite graph G = (B, S; E; w) and q sets (or organizations) O1 , . . . , Oq forming a partition of B ∪ S. Every buyer (resp. seller) is represented by a vertex in B (resp. S), E ⊆ B × S is the edge set representing pairs and w : E → R+ is a nonnegative weight function. The subgraph of G induced by Oi is denoted by Gi . We have Gi = (Bi , Si ; Ei , w) where Bi = B ∩ Oi and Si = S ∩ Oi . A set M ⊆ E is an assignment (or a matching) iff each vertex in (B, S; M, w) has degree at most one. The weight of an assignment M (i.e. the sum of the weights of its edges) is denoted by w(M ), and the profit of organization Oi in M is denoted by wi (M ) and defined as   ps w([x, y]) pb w([x, y]) + wi (M ) = {[x,y]∈M: (x,y)∈Bi ×S}

{[x,y]∈M: (x,y)∈B×Si }

where ps and pb are two nonnegative rational numbers such that ps + pb = 1 and 0 ≤ pb ≤ ps ≤ 1. We say that an edge whose endpoints are in the same organization (resp. in distinct organizations) is internal (resp. shared). The maximum weight matching ˜ . Let M ˜ i be the restriction of G reduced to its internal edges is denoted by M ˜ of M to Gi . The multiorganization assignment problem (moa for short) is to ˜ ) for all find a maximum weight matching M of G such that wi (M ) ≥ wi (M ˜ ˜ i ∈ {1, . . . , q}. Here wi (M ) is what organization Oi can get on its own. Then M ∗ is a feasible solution to the moa problem. As a notation, M denotes a maximum ∗ weight matching of G whereas Mcont is an optimum for moa. 1.2

Applications

We give here two applications where moa arises. The “agencies problem”. Each organization has its own pool of sellers (S) and buyers (B) who either supply or demand one unit of an indivisible product. Consider for example that organizations are real estate agencies. Each organization

80

L. Gourv`es, J. Monnot, and F. Pascual

receives a commission on each transaction it deals, and its goal is to maximize its profit. Therefore each organization accepts the assignment given by a trusted entity if and only if its profit is at least equal to the profit it would have had without sharing its file with the other organizations. The overall aim is then to find an assignment which maximizes the total amount of transactions done, while guaranting that no organization decreases its profit by sharing its file. A scheduling example. Each organization (which can be a university, laboratory, etc.) owns unit tasks (given by its users), and several (possibly different) machines. During some given time slots, the machines are available to schedule the tasks of the users. Each user gives her preferences for a given machine and a given time slot. These preferences are represented by integers (aij ) between 0 (a task cannot be scheduled on this machine at this time), and a given upper bound. The goal of each organization is to maximize the average satisfaction of its users, represented by the sum of the satisfactions of its users divided by the number of users, in the returned assignment. Therefore an organization will accept a multiorganization assignment if and only if the average satisfaction of its users is at least as high as when the organization accepts only the tasks from its users. Here, an unmatched user’s satisfaction is 0. This corresponds to moa when S is the set of users, B the set of couples (time slot, machine), ps = 1 and pb = 0. 1.3

Related Work

The multi-organization assignment problem is a variant of the old assignment problem (see [11] for a recent survey). Besides its combinatorial structure, moa involves self-interested agents whose cooperation can lead to significant improvements but a solution is feasible only if it does not harm any local utility. Non cooperative game theory studies situations inC D volving several players whose selfish actions affect each C 3, 3 0, 4 other [10]. In Tucker’s prisoner’s dilemma, two players D 4, 0 1, 1 can either cooperate (C), i.e. stay loyal to the other pristhe prisoner’s dilemma oner, or defect (D), i.e. agree to testify against the other. A social optimum is reached if both play C but the situation where both prisoners defect is the only stable situation (a Nash equilibrium). In fact, the game designer of the prisoner’s dilemma filled the payoff matrix in way that any prisoner has incentive to defect. moa models the opposite situation where the game designer tries to fill the payoff matrix such that each organization’s (weakly) dominant strategy is to cooperate, i.e. disclose its list of clients and follow the trusted entity. The game designer has to compute a Nash equilibrium (a stable matching) that optimizes the social welfare (total profit). The maximum weight matching M ∗ is sometimes unstable because the or∗ ganizations are selfish. Then, one has to consider a different optimum Mcont which is the maximum weight Nash equilibrium (no organization can increase its profit by using its own maximum weight matching instead of the solution

Cooperation in Multiorganization Matching

81

returned by the trusted entity). Interestingly, a theoretical measure of this loss of profit due to the selfishness of the organizations exists. Known as the price of stability (PoS) [12,1], it is defined as the (worst case) ratio between the most socially valuable state and the worth of the best Nash equilibrium. For moa, ∗ PoS= w(Mcont )/w(M ∗ ). moa is also related to cooperative game theory [10]. A central issue in this field is to allocate the worth of a coalition to its members. Shapley and Shubik associate to any two-sided assignment market (B, S, A) a cooperative game with transferable utility (the assignment game) and show that its core is nonempty, and has a lattice structure [13]. moa is close in spirit to other works which study, at an algorithmic level, how to make organizations cooperate. In [9], the authors study a scheduling problem involving several organizations. Each of them has a set of jobs to be completed as early as possible and its own set of processors. A selfish schedule is such that the processors only execute jobs of their owner. The authors propose an algorithm which returns schedules with good makespans and in which the organizations cooperate without being penalized. In [6,5], the authors study the selfish distributed replication problem. This problem involves several nodes of a network whose task is to fetch electronic contents (objects) located at distant servers. Instead of taking an object from its server at each request, the nodes can save time by making a local copy. An intermediate strategy is to get an object from another node which is closer than the server. The optimization problem is to fill the (bounded) memory of each node in order to minimize the overall expected response time. Since an optimum solution can be unacceptable to selfish nodes (e.g. a node’s memory is filled with objects that it rarely requests), the authors of [5] propose equilibrium placement strategies where no one is penalized. 1.4

Contribution

We investigate the computational complexity of moa in Section 2. In particular, we show that the problem is strongly NP-hard if the number of organizations if not fixed. It is weakly NP-hard for two organizations. A possible proof of strong NP-hardness for a fixed number of organizations is discussed and some pseudo-polynomial and polynomial cases are given as well. We provide an approximation algorithm with performance guarantee pb and a matching proof of inapproximation in Section 3. We also show in this section that the price of stability of moa is pb . Section 4 is devoted to generalizations of moa and also generalizations of the results of this article. We conclude in Section 5. Our results apply for any values of ps and pb such that 0 ≤ pb ≤ ps ≤ 1 and pb + ps = 1. Some proofs are omitted due to space limitations.

2 Complexity Results

We prove that moa is strongly NP-hard in the general case, even if the weights are polynomially bounded. We also show that the restriction of moa to two


organizations is weakly NP-hard. Next, we exhibit pseudo-polynomial and polynomial cases.

2.1 Computationally Hard Cases

Given a positive profit P and an instance of moa, the decision version asks whether the instance admits a matching M such that wi(M) ≥ w(M̃i) for all i ∈ {1, . . . , q} and w(M) ≥ P.

Theorem 1. The decision version of moa is strongly NP-complete.

We make a reduction from 3-partition, which is strongly NP-complete (problem [SP15] in [3]).

Theorem 2. The decision version of moa is NP-complete, even if there are 2 organizations and the underlying graph is of maximum degree 2.

Proof. Let ps and pb be two reals such that 1 ≥ ps ≥ pb ≥ 0 and ps + pb = 1. The reduction is done from partition: given a set {a1, . . . , an} of n integers such that Σ_{i=1}^{n} ai = 2W, decide whether a set J ⊂ {1, . . . , n} such that Σ_{j∈J} aj = W exists. partition is known to be NP-complete (problem [SP12] in [3]). From I, an instance of partition, we build I′, an instance of moa, in the following way:
• we are given 2 organizations O1 and O2;
• O1 has n + 1 sellers and n + 1 buyers, respectively denoted by s1,i and b1,i for i = 1, . . . , n + 1;
• O2 also has n + 1 buyers and n + 1 sellers, respectively denoted by b2,i and s2,i for i = 1, . . . , n + 1;
• the edge set of the underlying graph is {[s1,n+1, b2,n+1]} ∪ {[b2,n+1, s2,n+1]} ∪ {[s2,n+1, b1,n+1]} ∪ {[b1,i, s1,i], [s1,i, b2,i], [b1,i, s2,i] : i = 1, . . . , n}.

The weights are defined by:
• w([b1,i, s1,i]) = 6ai and w([b2,i, s1,i]) = w([s2,i, b1,i]) = 3ai for i = 1, . . . , n;
• w([b2,n+1, s2,n+1]) = 6W and w([s1,n+1, b2,n+1]) = w([b1,n+1, s2,n+1]) = 3W + 1.

The underlying graph is made of a collection of n + 1 disjoint paths of length 3. Figure 1 gives an illustration of this construction.

Organization O1 can make a profit w1(M̃) = (ps + pb) Σ_{i=1}^{n} 6ai = 12W if it works alone. The local profit of organization O2 is w2(M̃) = (ps + pb)6W = 6W. Thus, globally, the weight of this matching is 18W.

We claim that I′ admits a feasible assignment M̂ such that w(M̂) ≥ 18W + 2 if and only if I admits a set J ⊆ {1, . . . , n} with Σ_{j∈J} aj = W.

Let J be a subset of {1, . . . , n} such that Σ_{j∈J} aj = W (and then Σ_{j∉J} aj = W). We build the assignment M̂ as follows: M̂ = {[b2,j, s1,j], [s2,j, b1,j] : j ∈ J} ∪ {[b1,j, s1,j] : j ∉ J} ∪ {[s1,n+1, b2,n+1], [b1,n+1, s2,n+1]}.


Fig. 1. The construction of I′

Clearly, the cost of M̂ is given by w(M̂) = 18W + 2. Now, let us verify that M̂ is a feasible solution. The local profit of organization O1 is (ps + pb) Σ_{j∉J} 6aj + (ps + pb) Σ_{j∈J} 3aj + (ps + pb)(3W + 1) = 12W + 1 ≥ w1(M̃), whereas the profit of organization O2 becomes (ps + pb) Σ_{j∈J} 3aj + (ps + pb)(3W + 1) = 6W + 1 ≥ w2(M̃).

Conversely, let M̂ be a feasible assignment such that w(M̂) ≥ 18W + 2. The following property can be easily proved.

Property 1. Any feasible solution of moa can be supposed maximal with respect to inclusion.

Now, remark that M̂ necessarily contains the edges [s1,n+1, b2,n+1] and [b1,n+1, s2,n+1] since, on the one hand, the weight of any maximal matching on the graph induced by all vertices except {s1,n+1, s2,n+1, b1,n+1, b2,n+1} is 12W, and on the other hand, w([b2,n+1, s2,n+1]) = 6W. Thus, M̂ must contain some edges [b2,j, s1,j] or [b1,j, s2,j] in order to compensate the loss of edge [b2,n+1, s2,n+1]. Let J = {j ≤ n : [b2,j, s1,j] ∈ M̂}. By Property 1, M̂ is completely described by M̂ = {[b2,j, s1,j], [b1,j, s2,j] : j ∈ J} ∪ {[b1,j, s1,j] : j ∉ J} ∪ {[s1,n+1, b2,n+1], [b1,n+1, s2,n+1]}.

The profit of organization O2 is (ps + pb) Σ_{j∈J} 3aj + (ps + pb)(3W + 1) = 3 Σ_{j∈J} aj + 3W + 1. Since that profit is at least w2(M̃) = 6W, we deduce that Σ_{j∈J} aj ≥ W − 1/3. Since Σ_{j∈J} aj must be an integer, Σ_{j∈J} aj ≥ W. On the other hand, the profit of organization O1 is given by (ps + pb) Σ_{j∉J} 6aj + (ps + pb) Σ_{j∈J} 3aj + (ps + pb)(3W + 1) = 6 Σ_{j=1}^{n} aj − 3 Σ_{j∈J} aj + 3W + 1. This quantity must be at least w1(M̃) = 6 Σ_{j=1}^{n} aj. Since Σ_{j∈J} aj is an integer, we obtain Σ_{j∈J} aj ≤ W. In conclusion, Σ_{j∈J} aj = W, which means that {a1, . . . , an} can be partitioned into two sets of weight W. □

Is moa strongly NP-complete for two organizations? We were not able to answer this question, but we can relate it to another one, stated more than 25 years ago and still open: is the exact weighted perfect matching problem in bipartite graphs strongly NP-complete?

Given a graph whose edges have an integer weight and given a bound W, ExactPM is to decide whether the graph contains a perfect matching M of


total weight exactly W [2,4,7,8]. Papadimitriou and Yannakakis [8] prove that ExactPM is (weakly) NP-complete in bipartite graphs. Barahona and Pulleyblank [2] propose a pseudo-polynomial algorithm in the case of planar graphs, and Karzanov [4] gives a polynomial algorithm when the graph is either complete or complete bipartite and the weights are restricted to 0 or 1. Mulmuley, Vazirani and Vazirani [7] show that ExactPM has a randomized pseudo-polynomial-time algorithm. However, the deterministic complexity of this problem remains unsettled, even for bipartite graphs. (Papadimitriou and Yannakakis conjectured that it is strongly NP-complete [8].) ExactPM is an auto-reducible problem, that is, finding a perfect matching of weight W is polynomially equivalent to deciding whether such a matching exists. Here, we prove that there is a Turing reduction from moa with 2 organizations to ExactPM. Thus, we conclude that if moa with 2 organizations is strongly NP-complete, then ExactPM is also strongly NP-complete in bipartite graphs. Notice that this result also holds when there is a constant number of organizations.

Proposition 1. If ExactPM is polynomial in bipartite graphs when the weights are polynomially bounded, then moa with two organizations and polynomially bounded weights is polynomial for all values of ps, pb such that 1 ≥ ps ≥ pb ≥ 0 and ps + pb = 1.

Proof. Let pb, ps be two rational numbers such that 1 ≥ ps ≥ pb ≥ 0 and ps + pb = 1, and let I = (G, w) be an instance of moa with two organizations, where G = (V, E). W.l.o.g., w(e), ps·w(e) and pb·w(e) are integers for every edge e ∈ E. Moreover, w(e) ≤ P(|V|) for every e ∈ E and some polynomial P. Let R be the weight of a maximum weight matching of G. Consider the bipartite graph G′ = (V′, E′) built from G by adding dummy vertices and edges with weight 0 such that any matching of G can be completed into a perfect matching of G′ with the same value. Formally, we add a copy of K|S|,|B|, and each new B-vertex (resp., S-vertex) is completely linked to the S-vertices (resp., B-vertices) of G. Then, each shared edge e = [u, v] ∈ E is replaced by a path of length 3, [u, ue], [ue, ve], [ve, v], where ue, ve are new vertices. Remark that either {[u, ue], [ve, v]} or {[ue, ve]} is included in a perfect matching of G′. Consider the weight function w′ defined as w′(e) = (R+1)^3 w(e) if e is internal to organization O1 and w′(e) = (R+1)^2 w(e) if e is internal to organization O2. Moreover, if e = [u, v] ∈ E is a shared edge, then w′([u, ue]) = (R+1)·ps·w([u, v]) if u ∈ S ∩ O1 and w′([u, ue]) = (R+1)·pb·w([u, v]) otherwise (i.e., u ∈ B ∩ O1). We also set w′([v, ve]) = ps·w([u, v]) if v ∈ S ∩ O2 and w′([v, ve]) = pb·w([u, v]) otherwise. The weight of each remaining edge of G′ is 0. It is clear that G′ is built within polynomial time and that w′ remains polynomially bounded. Let I′ = (G′, w′). For any matching M, we denote by M1 (resp., M2) the restriction of M to organization O1 (resp., O2) and by Mshared the set of shared edges of M. Denote by W1 (resp., W2) the contribution of the shared edges of M to the profit of organization O1 (resp., O2). We have w(Mshared) = W1 + W2 since ps + pb = 1.


We claim that w(M) = w(M1) + w(Mshared) + w(M2) if and only if there exists a matching of I′ with weight W = (R+1)^3 w(M1) + (R+1)^2 w(M2) + (R+1)W1 + W2. Moreover, M is a feasible solution to moa iff w(Mi) + Wi ≥ wi(M̃) for i = 1, 2. One direction is trivial. So, let M′ be a matching of I′ with value w′(M′) = W = (R+1)^3 A + (R+1)^2 B + (R+1)C + D. By the choice of R, we must get w(M′1) = A, w(M′2) = B and w(M′shared) = C + D, where C (resp., D) is the contribution of M′shared to the profit of organization O1 (resp., O2). The profit of organization O1 (resp., O2) according to M′ is A + C (resp., B + D). In conclusion, by applying at most R^4 times the polynomial algorithm for ExactPM, we find an optimal solution of moa: by an exhaustive search, we try all values of A, B, C, D at most equal to R such that A + C ≥ w1(M̃) and B + D ≥ w2(M̃). □

Proposition 2. moa with a constant number of organizations can be solved in pseudo-polynomial time when the underlying graph has maximum degree 2.

Actually, dynamic programming can improve the time complexity given in Proposition 2.
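For concreteness, the exhaustive search from the proof of Proposition 1 can be sketched as follows. This is a minimal illustration under assumed names: exactpm is a hypothetical oracle for ExactPM on (G′, w′), and w1_local, w2_local stand for w1(M̃) and w2(M̃); none of these identifiers comes from the paper.

import itertools

# A sketch of the Turing reduction in the proof of Proposition 1 (illustrative).
# exactpm(G2, w2, target) is a hypothetical oracle deciding whether G' has a
# perfect matching of w'-weight exactly `target`.
def solve_moa_via_exactpm(G2, w2, R, w1_local, w2_local, exactpm):
    best = None
    for A, B, C, D in itertools.product(range(R + 1), repeat=4):
        # Feasibility: no organization may lose w.r.t. its local optimum.
        if A + C < w1_local or B + D < w2_local:
            continue
        target = (R + 1) ** 3 * A + (R + 1) ** 2 * B + (R + 1) * C + D
        if exactpm(G2, w2, target):
            value = A + B + C + D  # total weight of the corresponding matching
            if best is None or value > best[0]:
                best = (value, (A, B, C, D))
    return best  # at most (R + 1)^4 oracle calls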

2.2 Polynomial Cases

moa is trivially polynomial when there is a unique organization or when the underlying graph is of maximum degree 1. Furthermore, an exhaustive search can efficiently solve the problem if the underlying graph G = (V, E) contains O(log |E|) shared edges. Let moa0,1 be the subcase where w([i, j]) ∈ {0, 1} for all (i, j) ∈ B × S. We prove that an optimum to moa0,1 is a maximum cardinality assignment of the underlying graph, though a maximum cardinality assignment is not necessarily a solution of moa0,1.

Theorem 3. moa0,1 is polynomial.

Proof. Let M be an assignment on an unweighted bipartite graph G = (B, S; E). Recall that a path in G is alternating with respect to M if it alternates edges of M and edges of E \ M. Furthermore, an alternating path π is augmenting if no edge of M is incident to its extremal nodes. The word "augmenting" means that (M \ π) ∪ (π \ M) is a matching of size |M| + 1. It is well known that M is of maximum size on G if G does not admit any augmenting alternating path with respect to M.

Let I be an instance of moa0,1 defined upon G. Let M̂ be the matching built as follows: start with the feasible matching M̃ and increase its size with augmenting alternating paths while it is possible.

Let M̂^j be the matching produced at step j. We suppose that t steps are needed to reach M̂. Hence, M̂^0 = M̃ and M̂^t = M̂. We mainly prove

  wi(M̂^{j+1}) ≥ wi(M̂^j), ∀i ∈ {1, . . . , q}   (1)

for all j ∈ {0, . . . , t − 1}. This inequality states that the use of an augmenting alternating path cannot deteriorate the profit of any organization.


Given v ∈ V and a matching M, let c(v, M) be the contribution of v to the profit of its organization in M:

  c(v, M) = ps if v ∈ S and an edge of M is incident to v,
  c(v, M) = pb if v ∈ B and an edge of M is incident to v,
  c(v, M) = 0 otherwise.

Let V′ be the vertices of π′, the augmenting alternating path such that M̂^{j+1} = (M̂^j \ π′) ∪ (π′ \ M̂^j). We deduce that

  wi(M̂^{j+1}) − wi(M̂^j) = Σ_{v∈V} ( c(v, M̂^{j+1}) − c(v, M̂^j) )   (2)

for all i ∈ {1, . . . , q}. One can observe that c(v, M̂^j) = c(v, M̂^{j+1}) if v ∈ V′ and v is not an extremal node of π′. Indeed, a buyer b ∈ V′ matched with a seller s ∈ V′ in M̂^j is still matched in M̂^{j+1}, but with another seller. Similarly, a seller s ∈ V′ matched with a buyer b ∈ V′ in M̂^j is still matched in M̂^{j+1}, but with another buyer. If v ∈ S ∩ V′ (resp. v ∈ B ∩ V′) and v is an extremal node of π′, then c(v, M̂^j) = 0 and c(v, M̂^{j+1}) = ps (resp. c(v, M̂^j) = 0 and c(v, M̂^{j+1}) = pb). Hence,

  c(v, M̂^{j+1}) − c(v, M̂^j) ≥ 0   (3)

for all v ∈ V because ps ≥ pb ≥ 0. Using (2) and (3), we obtain wi(M̂^{j+1}) − wi(M̂^j) ≥ 0 for all i ∈ {1, . . . , q}. M̂ is a feasible assignment because wi(M̂^t) ≥ wi(M̂^{t−1}) ≥ . . . ≥ wi(M̂^0) = w(M̃i) for all i ∈ {1, . . . , q}. In addition, w(M̂) = w(M∗) because the algorithm stops when no augmenting alternating path exists. In conclusion, M̂ is optimal because w(M∗) ≥ w(M∗cont). □

3 Approximation

Recall that ps and pb are any values such that 0 ≤ pb ≤ ps ≤ 1 and ps + pb = 1. We start with the following property.

Property 2. wi(M∗) ≥ pb · w(M̃i), and this bound is asymptotically tight.

Let us consider algorithm Approx given below.

Algorithm Approx
– Construct the graph G′ = (V′, E′) from G = (V, E) as follows: V′ = V and E′ = E, except that the weights of the edges are modified. For each edge [u, v] such that u belongs to organization Oi and v belongs to organization Oj, set w′([u, v]) = w([u, v]) if u and v belong to the same organization (i = j), and w′([u, v]) = pb · w([u, v]) otherwise.
– Return a maximum weight matching of G′.
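Approx is a single reweighting pass followed by a maximum weight matching computation. A minimal sketch in Python, assuming a networkx graph with edge attribute "weight" and a map org from vertices to organizations (this representation is illustrative, not prescribed by the paper):

import networkx as nx

# A sketch of algorithm Approx: shared edges are scaled by p_b, internal
# edges keep their weight; then a maximum weight matching of G' is returned.
def approx(G, org, p_b):
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v, data in G.edges(data=True):
        w = data["weight"]
        shared = org[u] != org[v]  # edge between two different organizations
        H.add_edge(u, v, weight=p_b * w if shared else w)
    return nx.max_weight_matching(H, weight="weight")

The reweighting encodes the worst-case share pb that an organization receives from a shared edge; this is the reason why, as the first part of the proof of Theorem 4 below shows, the returned matching never makes any organization worse off.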


Theorem 4. Approx is a pb-approximate algorithm for moa, and this bound is asymptotically tight.

Proof. Let ps, pb be two numbers such that 1 ≥ ps ≥ pb ≥ 0 and ps + pb = 1. Let M be a matching returned by algorithm Approx on graph G. We first show that the profit of each organization Oi in M is at least w(M̃i); thus M is a solution of moa. Let M^int(i) be the set of edges of M such that both endpoints belong to Oi, and let M^ext(i) be the set of edges of M such that exactly one endpoint belongs to Oi. Since M is a maximum weight matching of G′, w′(M^int(i)) + w′(M^ext(i)) ≥ w(M̃i), otherwise we could obtain a matching with a larger weight by replacing the edges of M^int(i) ∪ M^ext(i) in M by the edges of M̃i. Thus the profit of Oi is at least w(M^int(i)) + pb · w(M^ext(i)) = w′(M^int(i)) + w′(M^ext(i)) ≥ w(M̃i) = wi(M̃).

Let us now show that Approx is pb-approximate. The edges of G′ are the same as the ones of G, except that the weight of some of them has been multiplied by pb < 1. Thus M, which is a maximum weight matching of G′, has a weight w(M) ≥ pb · w(M∗) ≥ pb · w(M∗cont).

Let us show that this bound is asymptotically tight by considering the following instance. Here, we assume pb > 0; recall that pb ≤ 1/2 since 1 ≥ ps ≥ pb ≥ 0. Let ε > 0 be such that ε < 1/pb − 1. There are two organizations: organization O1, which owns two vertices b1 and s1, linked by an edge of weight 1, and organization O2, which owns two vertices b2 and s2, linked by an edge of weight 1. There are two shared edges, between b1 and s2 and between b2 and s1; both edges have weight 1/pb − ε. Algorithm Approx returns the matching M = {[b1, s1], [b2, s2]} with weight 2 in G′ because the weight of {[b1, s2], [b2, s1]} in G′ is 2(1 − pb·ε) < 2. The optimal solution would have been M∗cont = {[b1, s2], [b2, s1]}. The ratio between the weights of these two solutions is w(M)/w(M∗cont) = 2/(2/pb − 2ε), which tends towards pb when ε tends towards 0. □

Theorem 4 implies that the price of stability of moa, defined as w(M∗cont)/w(M∗), is at least pb. In fact, we are able to prove that PoS = pb.

Proposition 3. The price of stability is pb.

Proof. It follows from Theorem 4 that w(M∗cont)/w(M∗) ≥ pb, since Approx returns a matching M such that w(M∗cont) ≥ w(M) ≥ pb · w(M∗). Let us now show that this bound is tight. There are two organizations: organization O1, which owns two vertices b1 and s1, linked by an edge of weight W1, and organization O2, which owns one vertex s2, linked to b1 by an edge of weight W2. When pb = 0, suppose that W1 = ε with 0 < ε < 1 and W2 = 1. The ratio w(M∗cont)/w(M∗) = ε tends towards 0 = pb when ε tends towards 0. When pb > 0, suppose that W1 = 1 and W2 = 1/pb − ε with 0 < ε < 1/pb − 1. The ratio w(M∗cont)/w(M∗) = pb/(1 − ε·pb) tends towards pb when ε tends towards 0. □

We can prove that Theorem 4 is best possible unless P = NP, i.e., we cannot obtain a (pb + ε)-approximation for any ε > 0. Actually, we prove a slightly stronger result, where n denotes the number of vertices.


Fig. 2. The instance It resulting from the above reduction

Theorem 5. For any polynomial P, it is NP-hard to obtain a (pb + Θ(1/2^{P(n)}))-approximation for moa where at least three organizations are involved.

Proof. We describe a gap reduction. We start with an instance of partition given by a set of n integers {a1, . . . , an} such that Σ_{i=1}^{n} ai = 2W. For any real t > 1, we construct an instance It of moa as follows:
• we are given 3 organizations O1, O2 and O3;
• O1 has n + 1 buyers and n + 1 sellers, respectively denoted by b1,i and s1,i for i = 1, . . . , n + 1;
• O2 has 2 buyers, denoted by b2,1 and b2,n+1, and n + 1 sellers, denoted by s2,i for i = 1, . . . , n + 1;
• O3 has one seller s3,1;
• the edge set of the underlying graph is {[s1,i, b1,i], [b1,i, s2,i] : i = 1, . . . , n} ∪ {[s1,n+1, b2,1]} ∪ {[b1,n+1, s2,n+1], [s2,n+1, b2,n+1], [b2,n+1, s3,1]}.

The weights are given by:
• w([s1,i, b1,i]) = w([b1,i, s2,i]) = ai for i = 1, . . . , n;
• w([s1,n+1, b2,1]) = ps·W, w([b1,n+1, s2,n+1]) = ps·W, w([s2,n+1, b2,n+1]) = t·pb·W + 2ps·W, and w([b2,n+1, s3,1]) = t·W.

An illustration of this construction is given in Figure 2. If t = O(2^{P(|V|)}), where |V| = 3n + 6 is the order of the underlying graph, then it is not difficult to see that the above construction is computed within polynomial time.

The profits the organizations can make on their own are respectively w1(M̃) = (ps + pb) Σ_{i=1}^{n} ai = 2W, w2(M̃) = (ps + pb)(t·pb·W + 2ps·W) = t·pb·W + 2ps·W, and w3(M̃) = 0.

We prove that there are only two distinct values for the optimal value of moa, namely OPT(It) = t·pb·W + 3ps·W + 2W or OPT(It) = t·W + 2ps·W + 2W, and that OPT(It) = t·W + 2ps·W + 2W if and only if {a1, . . . , an} admits a partition. Observe that t·W + 2ps·W + 2W > t·pb·W + 3ps·W + 2W if and only if t > 1, since pb = 1 − ps and ps > 0. Let M∗cont be an optimal solution of moa (with value OPT(It)). Let us consider two cases.

Case [s2,n+1, b2,n+1] ∈ M∗cont. An optimal solution can be described by {[s1,i, b1,i] : i = 1, . . . , n} ∪ {[s1,n+1, b2,1], [s2,n+1, b2,n+1]}.


Actually, [s1,n+1, b2,1] ∈ M∗cont because M∗cont is maximal by Property 1. Moreover, the weight of any maximal matching on the graph induced by {s1,i, b1,i, s2,i : i = 1, . . . , n} has the same value 2W. In this case, we get OPT(It) = t·pb·W + 3ps·W + 2W.

Case [s2,n+1, b2,n+1] ∉ M∗cont. The edges {[b1,n+1, s2,n+1], [b2,n+1, s3,1], [s1,n+1, b2,1]} belong to M∗cont by Property 1. The contribution of these 3 edges to the profit of O2 is ps·w([b1,n+1, s2,n+1]) + pb·w([b2,n+1, s3,1]) + pb·w([s1,n+1, b2,1]) = t·pb·W + ps·W < t·pb·W + 2ps·W = w([s2,n+1, b2,n+1]), since ps > 0. Hence, a subset of shared edges between O1 and O2 must belong to M∗cont. Let J∗ = {j ≤ n : [b1,j, s2,j] ∈ M∗cont} be this subset. Then, M∗cont is entirely described by {[b1,n+1, s2,n+1], [b2,n+1, s3,1], [s1,n+1, b2,1]} ∪ {[b1,j, s2,j] : j ∈ J∗} ∪ {[s1,j, b1,j] : j ∉ J∗}.

To be feasible, M∗cont must satisfy w1(M∗cont) ≥ w(M̃1), i.e., Σ_{j∉J∗} aj + pb Σ_{j∈J∗} aj + (ps + pb)·ps·W ≥ Σ_{j=1}^{n} aj, from which we deduce W ≥ Σ_{j∈J∗} aj because pb = 1 − ps and ps > 0. M∗cont must also satisfy w2(M∗cont) ≥ w(M̃2), i.e., ps Σ_{j∈J∗} aj + (ps + pb)·ps·W + t·pb·W ≥ t·pb·W + 2ps·W, which is equivalent to Σ_{j∈J∗} aj ≥ W. Then, we obtain Σ_{j∈J∗} aj = Σ_{j∉J∗} aj = W. On the one hand, OPT(It) = t·W + 2ps·W + 2W, and on the other hand, {a1, . . . , an} has a partition given by J∗. Conversely, if {a1, . . . , an} admits a partition, then it is not difficult to prove that OPT(It) = t·W + 2ps·W + 2W.

Now, assume that there is a (pb + 1/(c·2^{P(|V|)}))-approximation of moa computable within polynomial time for some c > 0. Consider t0 = 5c·2^{P(|V|)} and let apx(It0) denote the value of the approximate solution on instance It0.

• If {a1, . . . , an} does not admit a partition, then OPT(It0) = 5c·2^{P(|V|)}·pb·W + 3ps·W + 2W, and hence apx(It0) ≤ 5c·2^{P(|V|)}·pb·W + 3ps·W + 2W.
• If {a1, . . . , an} admits a partition, then OPT(It0) = 5c·2^{P(|V|)}·W + 2ps·W + 2W. Since apx(It0) ≥ (pb + 1/(c·2^{P(|V|)}))·OPT(It0) by hypothesis and ps ≤ 1, we deduce apx(It0) > 5W + 5c·2^{P(|V|)}·pb·W ≥ 5c·2^{P(|V|)}·pb·W + 3ps·W + 2W.

In conclusion, apx allows us to distinguish within polynomial time whether {a1, . . . , an} has a partition or not, which is impossible unless P = NP. □

4 Generalizations

4.1 Relaxation of the Selfishness of the Organizations

Suppose that each organization Oi accepts a proposed global matching if its own profit is at least w(M̃i)/x, where x ≥ 1 is fixed. This means that each organization accepts to divide by x the profit it would have without sharing its file with the other organizations. The problem, denoted by moa(x), is then to find a maximum weight matching M such that wi(M) ≥ w(M̃i)/x for all i ∈ {1, . . . , q}. Let M∗cont(x) denote such a maximum weight matching.


If x = 1, an organization does not accept to reduce its profit, and the problem is the one stated in the Introduction. If x ≥ 1/pb, the organizations accept to divide their profits by 1/pb; Property 2 shows that in a maximum weight matching M∗, the profit of organization Oi is at least pb · w(M̃i), so M∗cont(x) = M∗. Our aim is now to solve moa(x) for 1 ≤ x < 1/pb. With a slight modification of the proof of Theorem 1, we can show that this problem is strongly NP-hard for each value of x smaller than 1/pb. One can also extend Approx to a slightly modified algorithm Approx(x), in which the weight of shared edges is multiplied by x·pb instead of pb, and prove that it is an (x·pb)-approximate algorithm for moa(x) and that this bound is tight. In addition, the price of stability is x·pb for this generalization.
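Under the same illustrative assumptions as the sketch of Approx in Section 3, Approx(x) is a one-line variation:

# A sketch of Approx(x): shared edges are scaled by x * p_b instead of p_b
# (note that x * p_b < 1 in the interesting range 1 <= x < 1/p_b).
def approx_x(G, org, p_b, x):
    return approx(G, org, x * p_b)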

4.2 General Graphs

One can extend moa to general graphs when ps = pb = 1/2. In this case, the distinction between buyers and sellers is lost. For example, the problem has the following application: numerous web sites offer to conduct home exchanges during holidays. The concept is simple: instead of booking expensive hotel rooms, pairs of families agree to swap their houses for a vacation. We model the situation with a graph G = (V, E) whose vertices are candidates for house exchange. The vertex set is partitioned into q sets/organizations O1, . . . , Oq. Vertices within an organization are its clients. Every edge [a, b] ∈ E has a weight w([a, b]) representing the satisfaction of candidates a and b if they swap. Pairs are formed by the organizations, which only care about the satisfaction of their clients. In case of a mixed-organization exchange [a, b], it is assumed that the satisfaction of each participant is w([a, b])/2. The problem is to maximize the collective satisfaction while no organization is penalized. Theorems 3 to 5 and Proposition 3 (where pb is replaced by 1/2) hold for general graphs, since the proofs do not use the fact that G is bipartite.

5 Conclusion

We studied cooperation, at an algorithmic level, between organizations. We showed that the price of stability is pb, and we studied the complexity of moa. We presented polynomial cases and showed that the problem is NP-hard in the general case. We also gave an approximation algorithm matching the inapproximability bound when there are at least 3 organizations. Some open problems remain: is it possible to obtain an algorithm with a better approximation ratio when there are two organizations? Is the problem strongly NP-hard in this case (we note that this question is related to the open Exact Perfect Matching problem)? When we consider that each organization accepts a solution if it does not reduce its profit by a factor larger than x, is it possible to get an algorithm with an approximation ratio better than x·pb? An interesting direction would also be to study fairness issues in this problem: for example, among all the solutions of the same quality, return the one which maximizes the minimum wi(Mcont) − w(M̃i), that is, the minimum increase of profit of the organizations.

References

1. Anshelevich, E., Dasgupta, A., Kleinberg, J.M., Tardos, É., Wexler, T., Roughgarden, T.: The Price of Stability for Network Design with Fair Cost Allocation. In: Proc. of FOCS 2004, pp. 295–304 (2004)
2. Barahona, F., Pulleyblank, W.R.: Exact arborescences, matchings, and cycles. Discrete Appl. Math. 16, 91–99 (1987)
3. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, New York (1979)
4. Karzanov, A.V.: Maximum matching of given weight in complete and complete bipartite graphs. Cybernetics 23, 8–13 (1987); translation from Kibernetika 1, 7–11 (1987)
5. Laoutaris, N., Telelis, O., Zissimopoulos, V., Stavrakakis, I.: Distributed Selfish Replication. IEEE Trans. Parallel Distrib. Syst. 17(12), 1401–1413 (2006)
6. Leff, A., Wolf, L., Yu, P.S.: Efficient LRU-based buffering in a LAN remote caching architecture. IEEE Trans. Parallel Distrib. Syst. 7(2), 191–206 (1996)
7. Mulmuley, K., Vazirani, U., Vazirani, V.V.: Matching is as easy as matrix inversion. Combinatorica 7, 105–113 (1987)
8. Papadimitriou, C.H., Yannakakis, M.: The complexity of restricted spanning tree problems. J. ACM 29, 285–309 (1982)
9. Pascual, F., Rzadca, K., Trystram, D.: Cooperation in multi-organization scheduling. In: Kermarrec, A.-M., Bougé, L., Priol, T. (eds.) Euro-Par 2007. LNCS, vol. 4641, pp. 224–233. Springer, Heidelberg (2007)
10. Osborne, M., Rubinstein, A.: A Course in Game Theory. MIT Press, Cambridge (1994)
11. Pentico, D.W.: Assignment problems: A golden anniversary survey. EJOR 176, 774–793 (2007)
12. Schulz, A.S., Stier Moses, N.: On the performance of user equilibria in traffic networks. In: Proc. of SODA 2003, pp. 86–87 (2003)
13. Shapley, L.S., Shubik, M.: The assignment game I: The core. International Journal of Game Theory 1, 111–130 (1972)

Randomized Algorithms for Buffer Management with 2-Bounded Delay

Marcin Bienkowski¹⋆, Marek Chrobak²⋆⋆, and Łukasz Jeż¹⋆

Randomized Algorithms for Buffer Management with 2-Bounded Delay Marcin Bienkowski1,⋆ , Marek Chrobak2,⋆⋆ , and L  ukasz Je˙z1,⋆ 1

Institute of Computer Science, University of Wroclaw, 50-383 Wroclaw, Poland 2 Department of Computer Science, University of California, Riverside, CA 92521, USA

Abstract. In the problem of buffer management with bounded delay, packets with weights and deadlines arrive at a network switch over time, and the goal is to send those packets on the outgoing link while maximizing the total weight of the packets that are sent before their deadlines expire. In the 2-bounded delay case, each packet has to be sent either in the step of its release or in the next step. In the deterministic case, the optimal competitive ratio for this case is φ ≈ 1.618. In the randomized case, against oblivious adversaries, the optimal competitive ratio is 1.25. The only yet unresolved case is that of randomized algorithms against adaptive adversaries. For this case, we give a complete solution by proving that the optimal competitive ratio is 4/3. Additionally, we give a lower bound of 1.2 for the 2-uniform case.

1

Introduction

In this paper, we consider the problem of buffer management with bounded delay, introduced by Kesselman et al. [9]. This problem models the behavior of a single network switch. We assume that time is slotted and divided into steps. At the beginning of a time step, any number of packets may arrive at a switch and are stored in its buffer. A packet has a positive weight and a deadline, which is an integer denoting the last step in which packet may be transmitted. Only one packet can be transmitted in a single step. Packets whose deadline already passed are lost and removed from the buffer. The goal is to maximize the gain, defined as the total weight of the transmitted packets. We note that this problem is equivalent to a scheduling problem, in which packets are jobs of unit length, with given release time, deadline and weight. Release times and deadlines are restricted to integer values. In this setting, the goal is to maximize the total weight of jobs which are completed before their deadlines. As the process of managing packet queue is inherently a real-time task, we model it as an online problem. This means that the algorithm, deciding which packets to transmit, has to base its decision solely on the packets which have already arrived at a switch, without the knowledge of the future. ⋆

⋆⋆

Supported by MNiSW grants number N206 001 31/0436, 2006–2008 and N N206 1723 33, 2007–2010. Supported by NSF grants OISE-0340752 and CCF-0729071.



Competitive Analysis. To measure the performance of an online algorithm, we use the standard notion of competitive analysis [3], which, roughly speaking, compares the gain of the algorithm to the gain of the optimal solution on the same input sequence. For any algorithm Alg, we denote its gain on input sequence I by GALG(I); we denote the optimal offline algorithm by Opt. We say that a deterministic algorithm Alg is R-competitive if on any input sequence I, it holds that GALG(I) ≥ (1/R) · GOPT(I).

When analyzing the performance of an online algorithm Alg, we view the process as a game between Alg and an adversary. The adversary controls what packets are injected into the buffer and chooses which of them to send. The goal is then to show that the adversary's gain is at most R times Alg's gain. If the algorithm is randomized, we consider its expected gain, E[GALG(I)], where the expectation is taken over all possible random choices made by Alg. However, in the randomized case, the power of the adversary has to be further specified. Following Ben-David et al. [2], we distinguish between an oblivious and an adaptive-online adversary (called adaptive for short). An oblivious adversary has to construct the whole input sequence in advance, not taking into account the random bits used by the algorithm. The expected gain of Alg is compared to the gain of the optimal offline solution on I. An adaptive adversary may decide about the next packets injected into the buffer upon seeing which packets are transmitted by the algorithm. However, it has to provide an answering entity Adv, which creates a solution in parallel to Alg; Adv's solution may not be changed afterwards. We say that Alg is R-competitive against an adaptive adversary if for any input sequence I created adaptively and any answering algorithm Adv, it holds that E[GALG(I)] ≥ (1/R) · E[GADV(I)]. We note that Adv is deterministic, but as Alg is randomized, so is the input sequence I.

In the literature on online algorithms (see, e.g., [3]), the definition of the competitive ratio sometimes allows an additive constant, i.e., a deterministic algorithm is R-competitive if there exists a constant α ≥ 0 such that for any input sequence I, it holds that GALG(I) ≥ (1/R) · GOPT(I) − α. An analogous definition applies to the randomized case. In this paper, our upper bound has α = 0, while our lower bounds hold for any constant α.

Bounded Sequences. The delay of a packet is the number of steps in which the packet may be transmitted. In an s-bounded sequence the delay of each packet is at most s, whereas in an s-uniform sequence the delay of each packet is exactly s.

1.1 Related Work

The currently best, 1.828-competitive deterministic algorithm for general instances was given by Englert and Westermann [7]. The lower bound of φ ≈ 1.618 for the competitive ratio of deterministic algorithms was shown in [1,5,8]. The input sequences which incur this lower bound are 2-bounded. On the other hand, for 2-bounded and 3-bounded instances, deterministic φ-competitive algorithms are known (see [9] and [4], respectively). If we restrict the set of inputs


                      deterministic    (rand.) adaptive        (rand.) oblivious
general input upper   1.828 [7]        1.582 [4]               1.582
general input lower   1.618            1.25 → 1.333 (new)      1.25
2-bounded upper       1.618 [9]        1.582 → 1.333 (new)     1.25 [4]
2-bounded lower       1.618 [1,5,8]    1.25 → 1.333 (new)      1.25 [5]
2-uniform upper       1.377 [6]        1.377 → 1.333 (new)     1.25
2-uniform lower       1.377 [6]        1.172 → 1.2 (new)       1.172 [4]

Fig. 1. Comparison of known and new results. The results of this paper are marked (new). The results without citations are implied by other entries of the table.

to 2-uniform instances, the optimum competitive ratio equals approximately 1.377 [6]. These bounds are summarized in Fig. 1. The best known randomized solution is the algorithm RMix by Chin et al. [4]. RMix achieves the competitive ratio e/(e − 1) ≈ 1.582, which holds even against an adaptive adversary. The best lower bound of 1.25 for randomized algorithms was given by Chin and Fung [5]. This lower bound holds even against an oblivious adversary and uses 2-bounded instances. A matching upper bound for 2-bounded instances and oblivious adversaries was given by Chin et al. [4]. For 2-uniform instances, the currently best known lower bound on the competitive ratio against an oblivious adversary is approximately 1.172 [4].

1.2 Our Contribution

In our paper we consider 2-bounded and 2-uniform sequences and adaptive adversaries. In addition to purely theoretical interest, the adaptive adversary model is in fact quite reasonable in the network traffic setting. This is because, in reality, the traffic through a switch is not at all independent of the packet scheduling algorithm. For example, lost packets are typically resent, and low throughput in a node can affect the choice of routes for data streams in a network. These phenomena are not captured by the oblivious adversary model. Prior to this work, the only result on packet scheduling involving these adversaries was the algorithm RMix by Chin et al. [4]. The lower bounds for this variant are implied by the corresponding lower bounds for oblivious adversaries; in particular, the previously best lower bound for general input sequences was 1.25 [5]. We improve this bound by presenting an adaptively created 2-bounded instance which forces any randomized algorithm to have a competitive ratio of at least 4/3. We present an optimal algorithm Rand, which achieves this ratio for 2-bounded sequences. Obviously, the 4/3 upper bound applies to 2-uniform instances as well. We also give a lower bound of 1.2 for 2-uniform instances.

1.3 Preliminaries

For any algorithm A, let BA denote the state of its buffer, that is, the set of packets stored in the buffer. In particular, BADV denotes the state of the buffer of the adversary Adv. We denote the (expected) gain of algorithm A by GA .


We assume that time is divided into steps. In step t, the adversary first injects any number of packets. By (w, d) we denote a packet with weight w and deadline d. For 2-bounded instances, d ∈ {t, t + 1}, whereas for 2-uniform ones, d = t + 1. Then Adv and the algorithm decide, at the same time, which packet to transmit. We give a fine-grained description of a step (packet disappearances due to deadlines and removal of superfluous packets) in the upper bound section.

2 Lower Bound for 2-Bounded Sequences

First, we present a finite strategy of the adversary which forces the gain of any randomized algorithm Alg to be at least (4/3 − ǫ) times smaller than the gain of the adversary. As the buffers of both the algorithm and the adversary are empty after executing this strategy, we may repeat it, achieving an arbitrarily high gain of the adversary. This, together with the fact that the ǫ term can be made arbitrarily small, implies the lower bound of 4/3 on the ratio of any randomized algorithm. Without loss of generality, we assume that Alg always transmits a packet in a step if its buffer is non-empty.

Input Construction. First, we show how the adversary creates a finite input sequence of length at most n. Our construction preserves the following invariant: at the very beginning of each step t, before the adversary injects new packets, both Alg and Adv have at most one pending packet, (2^t, t). To this end, we assume that before we start, both Alg and Adv have packet (2, 1) in their buffers (this packet is simply injected by the adversary in the first step). If at the beginning of a step BALG = ∅, the sequence is finished. Otherwise, we describe what packets are injected in step t, distinguishing three disjoint cases.

Case 1. BALG = {(2^t, t)} and t < n. Then the adversary injects (2^t, t) and (2^{t+1}, t + 1).
Case 2. BALG = ∅. Then no new packet is injected.
Case 3. BALG = {(2^t, t)} and t = n. Then the adversary injects (2^t, t).

If case i occurs in step t, we call such a step an i-step. Since after any 2-step or 3-step BALG = BADV = ∅, the sequence ends immediately. It might happen that in a 2-step the adversary has no packet. In this case, neither the algorithm nor the adversary transmits anything in this step; we call such a step an empty 2-step.

We now present the strategy of the adversary in a single step t. In a 2-step, the adversary simply transmits a packet if it has one; in a 3-step, the adversary has one or two packets (2^t, t) and transmits one of them. In a 1-step, both the algorithm and the adversary have packet (2^{t+1}, t + 1) and at least one copy of packet (2^t, t). In this case, the adversary tries to transmit a packet different from the one transmitted by the algorithm. Formally, let pt denote the probability that Alg transmits the packet with the later deadline. If pt ≥ 1/2, then Adv transmits the packet with the earlier deadline; otherwise it chooses the packet with the later deadline.
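The case analysis above translates directly into code. Here is a minimal sketch of a single adversary step, where packets are (weight, deadline) pairs and p_t is the algorithm's probability of transmitting the later-deadline packet (the interface is illustrative, not from the paper):

# One step of the adversary's 2-bounded strategy (cases 1-3 above).
def adversary_step(t, n, alg_buffer, p_t):
    if alg_buffer == {(2 ** t, t)} and t < n:       # case 1: a 1-step
        injected = [(2 ** t, t), (2 ** (t + 1), t + 1)]
        # Adv tries to transmit a packet different from the algorithm's choice.
        if p_t >= 0.5:
            adv_choice = (2 ** t, t)                # earlier deadline
        else:
            adv_choice = (2 ** (t + 1), t + 1)      # later deadline
    elif not alg_buffer:                            # case 2: a 2-step
        injected = []
        adv_choice = None   # Adv transmits its pending packet, if any
    else:                                           # case 3: t = n, a 3-step
        injected = [(2 ** t, t)]
        adv_choice = (2 ** t, t)
    return injected, adv_choice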


Fig. 2. Tree of the game between Alg and Adv for 2-bounded sequences

Game Tree. Consider the following thought experiment: assume that n is infinite, i.e., case 3 never occurs, and inspect a 1-step t. With probability 1 − pt, Alg transmits (2^t, t) and leaves (2^{t+1}, t + 1) in the buffer; in this case the next step is a 1-step. With probability pt, the algorithm transmits (2^{t+1}, t + 1) and ends this step with an empty buffer; in this case, the next step is a 2-step, after which the sequence ends. Thus, we may identify Alg with an infinite sequence (pt)t. Note that this observation holds also if the sequence is finite, as the algorithm does not know the value of n until step n.

We view the whole process as a game between the algorithm Alg and the adversary Adv. If we fix the sequence (pt)t of Alg and n, the length of the game, this uniquely defines the game tree Tn. An example of such a tree is depicted in Fig. 2. The tree represents all the possible outcomes of the random choices made by Alg, i.e., a possible run of Alg is represented by a path starting from the root and ending in a leaf. We say that a node v occurs in the game if it is on such a path. Level t of the tree corresponds to step t; the root is on level 1. The probability that a particular state represented by a tree node v actually appears throughout the game is equal to the product of the probabilities assigned to the edges on the path from the root to v. Let Et denote the event of reaching step t on the leftmost path of the tree, hence Pr[Et] = ∏_{i=1}^{t−1} (1 − pi). We note that the tree Tn itself is well defined even if Pr[En] = 0, i.e., if the algorithm never reaches the n-th level of the tree.

We split the expected gain of Alg on the sequence into the gains in individual steps (corresponding to appropriate tree nodes). We define GALG(v) as the contribution to the expected gain of Alg gathered in the step represented by v. In other words, GALG(v) is the (unconditioned) expected gain of Alg in this step. For any subset K ⊆ Tn, we define GALG(K) = Σ_{v∈K} GALG(v). Although this gain


is well defined for any subset of vertices K, we consider only subsets which are either single nodes, paths or whole subtrees. We introduce an analogous definition for GADV. In these terms, GALG(Tn) and GADV(Tn) are the expected gains of Alg and Adv, respectively, on the whole input sequence of length n. Thus, the lower bound is implied by the following theorem, proved in the subsection below.

Theorem 1. For any ǫ > 0 and any probability sequence (pt)t of Alg, there exists an integer n such that GADV(Tn) ≥ (4/3 − ǫ) · GALG(Tn).

Comparing Expected Gains. To prove Theorem 1, we partition Tn into the following parts. Let Xt denote the set consisting of the node corresponding to a 1-step in step t and a 2-step in step t + 1. Let Yn be the node corresponding to the only 3-step in tree Tn, see Figure 2. Below, we compute the expected gains on these parts.

Lemma 1. For any algorithm Alg, the corresponding game tree Tn and any 1 ≤ t < n, we have
(i) GALG(Xt) = (1 + pt) · 2^t · Pr[Et],
(ii) GADV(Xt) = max{2, 1 + 2·pt} · 2^t · Pr[Et],
(iii) GADV(Yn) = GALG(Yn) = 2^n · Pr[En].

Proof. Fix any set Xt. The probability of reaching its 1-step is Pr[Et]. In this 1-step, both Alg and Adv have (2^t, t) and (2^{t+1}, t + 1) in their buffers. The algorithm transmits the latter with probability pt and, by the construction, does not have any packet in the 2-step. Thus (i) follows, as GALG(Xt) = Pr[Et] · (pt · 2^{t+1} + (1 − pt) · 2^t). As for (ii), if pt < 1/2, then Adv transmits packet (2^{t+1}, t + 1) and the subsequent 2-step is empty. If pt ≥ 1/2, then Adv transmits packet (2^t, t) and is left with (2^{t+1}, t + 1) in the buffer; this packet is then transmitted in the next 2-step. Therefore, the gain of the adversary is Pr[Et] · 2^{t+1} if pt < 1/2 and Pr[Et] · (2^t + pt · 2^{t+1}) if pt ≥ 1/2. Hence, (ii) holds. As in the only 3-step of Yn both Alg and Adv transmit (2^n, n), property (iii) follows. □

The following lemmas imply that the ratio of the expected gains of Adv and Alg on Tn is 4/3 if we neglect either the gain in Yn or some constant gain.

Lemma 2. For any integer n, GADV(Tn) ≥ (4/3) · GALG(Tn) − (1/3) · GALG(Yn).

Proof. From Lemma 1, parts (i) and (ii), and using the inequality max{2, 1 + 2·pt} ≥ (4/3)·(1 + pt), we get GADV(Xt) ≥ (4/3) · GALG(Xt). The lemma follows immediately by noting that Tn = ⊎_{t=1}^{n−1} Xt ⊎ Yn and applying Lemma 1, part (iii). □

Lemma 3. For any integer n, GADV(Tn) ≥ (4/3) · GALG(Tn) − 2/3.

Proof. We implicitly exploit the idea that if the gain on Yn is large, then the probabilities pt are small (at least on average), implying that the ratio of gains of Adv and Alg on Xt is in fact higher than 4/3.


Formally, let Tn^t be the subtree of Tn rooted at the node corresponding to the 1-step t. We prove, by a backward induction on t, that for each 1 ≤ t ≤ n,

  3 · GADV(Tn^t) ≥ 4 · GALG(Tn^t) − 2^t · Pr[Et].   (1)

Since Pr[E1] = 1, this inequality with t = 1 immediately implies the lemma. As Tn^n = Yn, (1) holds for t = n. Assuming (1) holds for t + 1, we prove it holds for t as well:

  3 · GADV(Tn^t) = 3 · GADV(Xt) + 3 · GADV(Tn^{t+1})
    ≥ 3 · max{2, 1 + 2·pt} · 2^t · Pr[Et] + 4 · GALG(Tn^{t+1}) − 2^{t+1} · Pr[E_{t+1}]
    = 4 · GALG(Tn^{t+1}) + 2^t · Pr[Et] · (3 · max{2, 1 + 2·pt} − 2 · (1 − pt))
    ≥ 4 · GALG(Tn^{t+1}) + 2^t · Pr[Et] · (4 · (1 + pt) − 1)
    = 4 · GALG(Tn^{t+1}) + 4 · GALG(Xt) − 2^t · Pr[Et]
    = 4 · GALG(Tn^t) − 2^t · Pr[Et],

where the second inequality follows as 3 · max{2, 1 + 2·pt} ≥ 5 + 2·pt. □

Proof (of Theorem 1). First, we compare the expected gain of Alg in the case where the adversary cuts the sequence after n steps to the case where the sequence is cut after n + 1 steps. In the former case, if En occurs, step n is a 3-step, in which Alg transmits packet (2^n, n). In the latter case, if En occurs, step n is a 1-step, in which Alg has packets (2^n, n) and (2^{n+1}, n + 1) and transmits one of them. Hence GALG(Xn) ≥ GALG(Yn), which implies

  GALG(Tn+1) ≥ GALG(Tn) + GALG(Yn+1).   (2)

Fix any ǫ > 0. We consider the non-decreasing sequence (GALG(Tn))n and distinguish two cases, depending on whether (GALG(Tn))n converges or not. If lim_{n→∞} GALG(Tn) = ∞, choose n such that GALG(Tn) ≥ 2/(3·ǫ). Lemma 3 then implies GADV(Tn) ≥ (4/3 − ǫ) · GALG(Tn). Note that in this case, to obtain a high gain, the adversary need not repeat this strategy; it may simply choose a sufficiently large n. Otherwise, let g = lim_{n→∞} GALG(Tn). Without loss of generality, we may assume that ǫ ≤ 1/3. In this case, for a sufficiently large n, GALG(Tn) ≥ g − (3/2)·ǫ·g ≥ g/2. By (2), it holds that GALG(Yn+1) ≤ GALG(Tn+1) − GALG(Tn) ≤ g − (g − (3/2)·ǫ·g) = (3/2)·ǫ·g ≤ 3·ǫ·GALG(Tn) ≤ 3·ǫ·GALG(Tn+1). Hence, by Lemma 2, GADV(Tn+1) ≥ (4/3 − ǫ) · GALG(Tn+1). If this case occurs, the adversary has to repeat the process to ensure that the overall gain is arbitrarily high. □

3 Lower Bound for 2-Uniform Sequences

In this section we adapt the previous lower bound construction so that it holds for 2-uniform sequences as well. The strategy of the adversary has to be changed,


because injecting packets with deadline t in step t is no longer possible. Adapting the adversary's strategy to the 2-uniform setting results in a weaker lower bound of 6/5 on the competitive ratio.

Let us start with an informal description of the changes to the adversary's strategy in reference to the game tree Tn. Suppose that at some 1-step t of the game tree in Fig. 2, the algorithm uses pt < 1/2 as the probability of transmitting (2^{t+1}, t + 1). Consequently, the adversary transmits (2^{t+1}, t + 1). In the following 1-step t + 1, the adversary is unable to inject (2^{t+1}, t + 1). Instead, before each 1-step in the tree, we add a new kind of step, a 0-step, obtaining the tree depicted in Fig. 3. In a 0-step the adversary injects two copies of the packet from the algorithm's buffer, ensuring that both players transmit the same packet and are left with essentially the same buffers afterwards. Thus, in every 1-, 2-, or 3-step, the adversary can behave as previously. As each 1-step and the sole 3-step is preceded by a 0-step, we treat each 0-step and the step following it as a single entity.

Input Construction. For the sake of similarity with the previous lower bound, we number the steps differently. The steps' numbers now form the sequence 1/2, 1, 3/2, 2, 5/2, 3, . . . A packet injected in step t has deadline t + 1/2. Our construction preserves the following invariant: at the very beginning of each step t, before the adversary injects new packets, (a) if t is an integer, both Alg and Adv have at least one copy of a packet (2^t, t) in their buffers, and (b) if t is not an integer, both Alg and Adv have at most one copy of a packet (2^{t+1/2}, t) in their buffers, and no packet of other weight. If BALG = ∅ at the very beginning of some step t ≥ 1, the sequence is finished. We describe in detail what packets are injected at step t, distinguishing four disjoint cases.

Case 0. t < n is not an integer and BALG = {(2^{t+1/2}, t)}, or t = 1/2. Then the adversary injects 2 copies of (2^{t+1/2}, t + 1/2).
Case 1. t < n is an integer (in this case the algorithm has at least one copy of (2^t, t) in its buffer). Then the adversary injects (2^{t+1}, t + 1/2).
Case 2. t > 1 is not an integer and BALG = ∅. Then the adversary injects nothing.
Case 3. t = n. Then the adversary injects nothing.

If case i occurs in step t, then we call such a step an i-step. We now present the strategy of the adversary in a single step t. In a 2-step, the adversary transmits a packet if it has one (by the invariant, this packet is (2^{t+1/2}, t)); if it has no packet, then we call this 2-step empty. In a 3-step, the adversary transmits the packet (2^t, t), which it has by the invariant. Note that such steps end the sequence. In a 0-step, both the adversary and the algorithm have two copies of (2^{t+1/2}, t + 1/2); each of them may also have the packet (2^{t+1/2}, t). Both can transmit any of these packets and are left with at least one copy of (2^{t+1/2}, t + 1/2). In a 1-step, both the algorithm and the adversary have packet (2^{t+1}, t + 1/2) and at least one copy of packet (2^t, t). In this case, the adversary tries to transmit a packet different from the one transmitted by the algorithm. Formally, let pt be


Fig. 3. Tree of the game between Alg and Adv for 2-uniform sequences

the probability that Alg transmits the packet with the later deadline. If pt ≥ 1/2, Adv transmits the packet with the earlier deadline; otherwise it chooses the packet with the later deadline. Again, to obtain the lower bound, it suffices to prove the following theorem.

Theorem 2. For any ǫ > 0 and any probability sequence (pt)t of Alg, there exists an integer n such that GADV(Tn) ≥ (6/5 − ǫ) · GALG(Tn).

Comparing Expected Gains. Let us now define analogues of Et, Xt and Yt from the previous section. By Et, for integer t, we denote the event of reaching step t; therefore, Pr[Et] = ∏_{i=1}^{t−1} (1 − pi). Let Xt denote the set consisting of the nodes corresponding to the 0-step in step t − 1/2, the 1-step in step t and the 2-step in step t + 1/2. Let Yn denote the set consisting of the two nodes corresponding to the only 3-step in tree Tn and the 0-step preceding it. We remark that the only difference between the sets Xt and Yn defined now and in the previous section are the 0-steps. Thus, the gains of both Alg and Adv on Xt increase by Pr[Et] · 2^t; the same holds for Yn. Therefore, the following result holds.

Lemma 4. For any algorithm Alg, the corresponding game tree Tn and any 1 ≤ t < n,
(i) GALG(Xt) = (2 + pt) · 2^t · Pr[Et],
(ii) GADV(Xt) = max{3, 2 + 2·pt} · 2^t · Pr[Et],
(iii) GALG(Yn) = GADV(Yn) = 2^{n+1} · Pr[En].


The following analogues of Lemmas 2 and 3 are proved in the appendix. Together, these lemmas yield Theorem 2; its proof, a straightforward modification of the proof of Theorem 1, is left out.

Lemma 5. For any integer n, GADV(Tn) ≥ (6/5) · GALG(Tn) − (1/5) · GALG(Yn).

Lemma 6. For any integer n, GADV(Tn) ≥ (6/5) · GALG(Tn) − 4/5.

4 Upper Bound

In this section, we present our memoryless algorithm Rand, whose competitive ratio against an adaptive adversary is 4/3. Without loss of generality, we assume that in each step t the adversary injects at most one packet with deadline t and at most two packets with deadline t + 1, as any reasonable algorithm will drop superfluous packets with smaller weights. In fact, we assume that exactly these three packets are injected in each step, as the adversary may use 0-weight packets.

Algorithm Rand. We describe the algorithm's behavior in step t. We can assume that at the beginning of the step the algorithm has exactly one packet and its deadline is t. (At the very beginning, we can assume it is a dummy packet of weight 0.) We divide the step into two stages:
1. The adversary adds a packet with deadline t. Rand drops the lighter of its two packets with deadline t (breaking ties arbitrarily). The remaining one is denoted a = (a, t).
2. The adversary adds two packets c = (c, t + 1) and d = (d, t + 1), where c ≤ d. Rand transmits a with probability min{a/d, 1} and d with the remaining probability.
Finally, at the end of step t, Rand removes expired packets (all packets with deadline t) and superfluous packets (all but the most valuable packet with deadline t + 1).

Looking at the definition of Rand, we observe that c, the lighter of the two packets with deadline t + 1, is never transmitted by Rand in step t. Below, we prove that, without loss of generality, c is also not transmitted in step t by Adv. Recall that Adv is a deterministic online algorithm, the answering part of the adversary.

Lemma 7. For any online (deterministic) algorithm Alg, there is an online (deterministic) algorithm Alg′ with the following properties: (i) the gain of Alg′ on every sequence is at least the gain of Alg; (ii) if, at step t, BALG′ contains x = (x, t + 1) and y = (y, t + 1) where x > y, then y is not transmitted in step t.

Proof. We transform Alg into Alg′ iteratively: we take the minimum t0 such that Alg first violates property (ii) in step t0 and transform it into an algorithm with gain no smaller than that of Alg which satisfies property (ii) up to step t0, possibly violating it in further steps; repeating this transformation yields Alg′.


Let t0 be the first step in which property (ii) is violated. Let y = (y, t0 + 1) be the packet transmitted by Alg and x = (x, t0 + 1) be the heaviest packet with deadline t0 + 1. The transformed algorithm transmits the same packets as Alg up to step t0 − 1, but in step t0 it transmits x, and in the remaining steps it tries to transmit the same packets as Alg. This is impossible only in step t0 + 1 and only if Alg transmits x in that step; in this case the transformed algorithm transmits y in step t0 + 1. Clearly, its gain is at least as large as the gain of Alg. □

By the lemma above, the adversary may as well inject d = (d, t + 1) at step t and c = (c, t + 1) at step t + 1, rather than injecting them both at step t. Such a change does not influence the behavior of Rand and, by Lemma 7, does not hinder the performance of Adv. We obtain the following corollary.

Corollary 1. Without loss of generality, at each step t the adversary injects exactly one packet with deadline t and exactly one packet with deadline t + 1.
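A single step of Rand is equally short in code. The following minimal sketch uses the simplified injection pattern of Corollary 1; the function name and representation are illustrative, not from the paper.

import random

# One step of Rand: at the start the buffer holds one packet with deadline t.
def rand_step(pending, injected_now, injected_next):
    # Stage 1: keep the heavier of the two packets with deadline t.
    a = max(pending, injected_now)
    # Stage 2: d is the single packet with deadline t + 1 (Corollary 1).
    d = injected_next
    if d == 0 or random.random() < min(a / d, 1.0):
        return a, d   # transmit a; d stays pending for step t + 1
    return d, 0       # transmit d; the buffer is empty (weight-0 dummy)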

4.1 Computing Competitive Ratio

We introduce a potential function Φ. It is well defined at the beginning and at the end of any stage, i.e., when |BADV| = |BRAND| = 1. Let rand and adv be the weights of the packets in the buffers of Rand and Adv, respectively. Then

  Φ = max{adv − rand, 0}.

Let Φt be the potential at the very beginning of step t + 1. We prove the following lemma.

Lemma 8. For any step t, E[Φt − Φt−1] + GADV(t) ≤ (4/3) · E[GRAND(t)].

Proof. We show that for any stage of any step t, it holds that E[ΔΦ] + GADV ≤ (4/3) · E[GRAND], where ΔΦ is the change of the potential in this stage. In the first stage, the adversary injects a packet with deadline t. Since the algorithm and the adversary are not transmitting anything, GRAND = GADV = 0. Without loss of generality, we assume that both Rand and Adv drop their lighter packet at this moment; the potential cannot increase by such an action. Assume that at the beginning of the second stage, BRAND = {a}, BADV = {b}, and the adversary injects a packet d, where a = (a, t), b = (b, t), and d = (d, t + 1). We consider two cases.

(i) a ≥ d. In this case, Rand transmits a, gaining a, and Adv transmits b or d, and thus GADV ≤ max{b, d}. Additionally, after this step BRAND = {d} and BADV ⊆ {d}, hence Φt = 0. Therefore, it holds that

  GADV + ΔΦ ≤ max{b, d} − max{b − a, 0} = max{b, d} − max{b, a} + a ≤ a ≤ (4/3) · a.


(ii) a < d. In this case, Rand transmits a with probability a/d and d with the remaining probability. Thus,

  E[GRAND] = (a/d) · a + (1 − a/d) · d = ((a − d/2)^2 + (3/4) · d^2) / d ≥ (3/4) · d.

It remains to prove that GADV + E[ΔΦ] ≤ d. We consider two cases.
(a) If the adversary transmits d, then GADV = d and Φt = 0. Therefore, GADV + ΔΦ ≤ GADV = d.
(b) If the adversary transmits b, then GADV = b. In this case, with probability a/d, at the end of this step BRAND = BADV = {d} and Φt = 0. With the remaining probability, BRAND = ∅ and hence Φt = d. Thus, E[Φt] = (1 − a/d) · d = d − a. Summing up, we obtain GADV + E[ΔΦ] = b + (d − a) − max{b − a, 0} ≤ d. □

Finally, as Φ0 = 0 and Φ is always non-negative, summing the inequality yielded by Lemma 8 over all steps of the input sequence, we obtain the following bound.

Theorem 3. Rand is 4/3-competitive against an adaptive-online adversary.
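As a quick numerical sanity check of the case analysis in Lemma 8 (purely illustrative and not part of the paper's argument), one can verify the per-stage inequality GADV + E[ΔΦ] ≤ (4/3) · E[GRAND] on random weights:

import random

# Brute-force check of Lemma 8's second-stage inequality for random a, b, d.
def check_lemma8(trials=100000):
    for _ in range(trials):
        a, b, d = (random.uniform(0, 10) for _ in range(3))
        phi_before = max(b - a, 0.0)
        if a >= d:                       # case (i): Rand transmits a, Phi_t = 0
            lhs = max(b, d) - phi_before
            rhs = 4.0 / 3.0 * a
        else:                            # case (ii): Rand randomizes
            p = a / d
            e_gain = p * a + (1 - p) * d
            adv_d = d - phi_before                   # adversary transmits d
            adv_b = b + (1 - p) * d - phi_before     # adversary transmits b
            lhs = max(adv_d, adv_b)
            rhs = 4.0 / 3.0 * e_gain
        assert lhs <= rhs + 1e-9

check_lemma8()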

References

1. Andelman, N., Mansour, Y., Zhu, A.: Competitive queueing policies for QoS switches. In: Proc. of the 14th ACM-SIAM Symp. on Discrete Algorithms (SODA), pp. 761–770 (2003)
2. Ben-David, S., Borodin, A., Karp, R.M., Tardos, G., Wigderson, A.: On the power of randomization in on-line algorithms. Algorithmica 11, 2–14 (1994); also in: Proc. of the 22nd STOC, pp. 379–386 (1990)
3. Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis. Cambridge University Press, Cambridge (1998)
4. Chin, F.Y.L., Chrobak, M., Fung, S.P.Y., Jawor, W., Sgall, J., Tichý, T.: Online competitive algorithms for maximizing weighted throughput of unit jobs. Journal of Discrete Algorithms 4, 255–276 (2006)
5. Chin, F.Y.L., Fung, S.P.Y.: Online scheduling for partial job values: Does timesharing or randomization help? Algorithmica 37, 149–164 (2003)
6. Chrobak, M., Jawor, W., Sgall, J., Tichý, T.: Improved online algorithms for buffer management in QoS switches. ACM Transactions on Algorithms 3(4), 50 (2007); also in: Albers, S., Radzik, T. (eds.) ESA 2004. LNCS, vol. 3221, pp. 204–215. Springer, Heidelberg (2004)
7. Englert, M., Westermann, M.: Considering suppressed packets improves buffer management in QoS switches. In: Proc. of the 18th ACM-SIAM Symp. on Discrete Algorithms (SODA), pp. 209–218 (2007)
8. Hajek, B.: On the competitiveness of online scheduling of unit-length packets with hard deadlines in slotted time. In: Conference on Information Sciences and Systems, pp. 434–438 (2001)
9. Kesselman, A., Lotker, Z., Mansour, Y., Patt-Shamir, B., Schieber, B., Sviridenko, M.: Buffer overflow management in QoS switches. SIAM Journal on Computing 33(3), 563–583 (2004); also in: Proc. of the 33rd STOC, pp. 520–529 (2001)


Appendix

Proof (of Lemma 5). By Lemma 4, GADV(X_t) ≥ (6/5) · GALG(X_t). The lemma follows immediately by noting that T_n = ⊎_{t=1}^{n−1} X_t ⊎ Y_n and applying Lemma 4. ⊓⊔

Proof (of Lemma 6). Again, let T_n^t be the subtree of T_n rooted at the node corresponding to 0-step t − 1/2. We prove, by a backward induction on t, that for each 1 ≤ t ≤ n,

5 · GADV(T_n^t) ≥ 6 · GALG(T_n^t) − 2^{t+1} · Pr[E_t].   (3)

Since Pr[E_1] = 1, for t = 1 this inequality immediately implies the lemma. As T_n^n = Y_n, (3) holds for t = n. Assuming (3) holds for t + 1, we prove it holds for t as well:

5 · GADV(T_n^t) = 5 · GADV(X_t) + 5 · GADV(T_n^{t+1})
  ≥ 5 · max{3, 2 + 2p_t} · 2^t · Pr[E_t] + 6 · GALG(T_n^{t+1}) − 2^{t+2} · Pr[E_{t+1}]
  = 6 · GALG(T_n^{t+1}) + 2^t · Pr[E_t] · (5 · max{3, 2 + 2p_t} − 4 · (1 − p_t))
  ≥ 6 · GALG(T_n^{t+1}) + 2^t · Pr[E_t] · (6 · (2 + p_t) − 2)
  = 6 · GALG(T_n^{t+1}) + 6 · GALG(X_t) − 2^{t+1} · Pr[E_t]
  = 6 · GALG(T_n^t) − 2^{t+1} · Pr[E_t],

where the second inequality follows since 5 · max{3, 2 + 2p_t} ≥ 14 + 2p_t. ⊓⊔

A General Scheme for Designing Monotone Algorithms for Scheduling Problems with Precedence Constraints

Clemens Thielen and Sven O. Krumke

Department of Mathematics, University of Kaiserslautern, Paul-Ehrlich-Str. 14, D-67663 Kaiserslautern, Germany
{thielen,krumke}@mathematik.uni-kl.de

Abstract. We provide a general scheme for constructing monotone algorithms for a wide class C of scheduling problems Q|prec, rj |γ on related machines with precedence constraints and/or release dates. Our scheme works in the offline and the online setting. It takes as input two approximation/competitive algorithms for the (simpler) scheduling problems P |prec, rj |γ on identical machines and 1|prec, rj |γ on a single machine and then generically constructs a monotone approximation/competitive algorithm for the problem on related machines. Monotone algorithms are necessary and sufficient for the design of truthful scheduling mechanisms in the setting with selfish machines. The algorithms constructed by our scheme are among the first monotone algorithms for scheduling problems with precedence constraints. For example, we show that our scheme applies to the problems of minimizing the makespan or the weighted sum of completion times when the jobs have precedence constraints and/or release dates.

Keywords: Monotone algorithms, scheduling, precedence constraints, algorithmic mechanism design.

1 Introduction

We consider the problem of scheduling a sequence of n jobs on m machines. There are precedence constraints between the jobs, which are given by a directed acyclic graph G = (V, E). The vertices of the graph correspond to the n jobs, and a directed edge from job j1 to j2 indicates that j1 has to be completed before j2 can be started (j1 ≺ j2). Each job j has a processing requirement pj ≥ 0 and a release date rj ≥ 0 before which it cannot be started. If j1 ≺ j2, we assume that rj2 ≥ rj1. In the online setting, job j and its processing requirement pj become known at the release date rj. Each machine i has a fixed speed si > 0 at which it runs and which does not depend on the job being executed. We always assume that the speeds are normalized such that the largest speed is 1. Processing job j on machine i needs time pj/si. Each job must be scheduled without preemption and each machine can process at most one job at a time. In the three-field classification scheme widely used in the literature (see, e.g., [1]),
the scheduling problems Π considered in our paper are of the form Q|prec, rj |γ, where γ denotes the objective function.

In our context of algorithmic mechanism design, each machine belongs to a selfish agent. The private value of agent i is equal to 1/si, that is, the cost of doing one unit of work on its machine. A mechanism for such a scheduling problem Π is a pair M = (A, P) consisting of an (online) algorithm A for Π and a payment scheme P. In the offline setting, the mechanism collects the claimed private data (bids). Based on these bids, it uses the algorithm A to schedule the jobs in an effort to minimize the (social) objective γ and hands out the payment Pi to each agent i, where P = (P1, ..., Pm) depends on the schedule produced by A and on the bids.

In the online setting, the jobs arrive over time, where each job becomes known at its release date and must be scheduled immediately. Once a job has been scheduled, it may not be interrupted or reallocated. The mechanism collects the bids of the agents before the first job arrives. Based on the bids, it then uses the online algorithm A to schedule the jobs during the online phase. The payment scheme P in the online setting is given by sequences (Pik)_k for i = 1, ..., m and k = 1, ..., n̄. Here, n̄ ≤ n denotes the number of distinct values r̄1 < r̄2 < ⋯ < r̄n̄ of release dates. Pik denotes the payment given to agent i immediately after the jobs released at time r̄k are scheduled and depends on the assignment of all jobs released at times r̄1, ..., r̄k and on the bids. All payments Pik are assumed to be nonnegative, i.e., the mechanism cannot ask money back from an agent. In the online setting, Pi = Σ_{k=1}^{n̄} Pik denotes the total payment to agent i.

The profit of agent i in both settings is Pi − Li, where Li denotes the total size of the jobs assigned to machine i divided by its speed si. The agents know the mechanism and are computationally unbounded in maximizing their profit. Note that our objective function need not be directly related to the profits of the agents.

The general class C of scheduling problems considered in this paper consists of all (offline and online) scheduling problems Π = Q|prec, rj |γ on related machines where some objective function γ has to be minimized subject to precedence constraints and/or release dates and, informally, the objective does not worsen too much if, instead of on m machines (out of which many are slow), we schedule on a single machine. Moreover, we require that increasing the speed of a subset of the machines can only decrease the optimal objective value. A formal definition of the class C is given in Sect. 2.

Definition 1. An offline scheduling algorithm is called monotone if for every machine the amount of work assigned to it does not increase if its bid increases (i.e., its speed decreases). An online scheduling algorithm is called monotone if for every machine and every k = 1, ..., n̄ the amount of work assigned to the machine after the jobs released at times r̄1, ..., r̄k are scheduled does not increase if its bid increases.
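Definition 1 suggests a direct empirical test: fix the instance and the other bids, raise one machine's bid, and verify that its assigned work does not increase. A minimal sketch, where the algorithm interface (a callable returning per-machine workloads) is our own assumption:

```python
def is_monotone_on(alg, jobs, speeds, machine, slower_speeds):
    """Check Definition 1 for one machine on one instance.

    alg(jobs, speeds) -> list with the total work assigned to each machine.
    slower_speeds is a decreasing sequence of alternative speeds for
    `machine` (a higher bid means a lower speed).
    """
    work = alg(jobs, speeds)[machine]
    for s in slower_speeds:
        bid_up = list(speeds)
        bid_up[machine] = s
        new_work = alg(jobs, bid_up)[machine]
        if new_work > work + 1e-12:
            return False  # work increased although the bid increased
        work = new_work
    return True
```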

1.1 Previous Results

For the offline setting, Archer and Tardos [2] showed that if the profits of the agents are defined as above, a truthful mechanism M = (A, P) can be based on
the algorithm A if and only if A is a monotone (approximation) algorithm. Using this result, monotone approximation algorithms (and, hence, truthful mechanisms) were designed for several classical problems like scheduling related machines to minimize the makespan, where the bid of a machine is the inverse of its speed [2,3,4,5,6]. However, to the best of our knowledge, the algorithm for Q|prec|Cmax presented in [7] is the only monotone approximation algorithm for a scheduling problem with precedence constraints.

For the online setting, the definition of a monotone online algorithm and the above result of Archer and Tardos [2] immediately imply that an online truthful mechanism can be based on the algorithm A if and only if A is a monotone online algorithm. Inspired by this result, Auletta et al. [8] showed how to translate an arbitrary ρ-competitive online algorithm for the two-machines makespan minimization problem Q2||Cmax without precedence constraints and release dates into a monotone online algorithm, which achieves a competitive ratio of max{ρ · t, 1 + 1/t} for an arbitrary t > 1.

We apply our scheme to two particular problems: the makespan minimization problem Q|prec, rj |Cmax and the problem Q|prec, rj |Σ wj Cj of minimizing the weighted sum of completion times, both of which are known to be NP-hard to solve [9]. A survey on online algorithms and competitiveness results for the problems can be found in [10].

1.2 Our Results

We show that, for each offline/online problem Π = Q|prec, rj |γ of the class C, a monotone approximation/competitive algorithm can be obtained in a generic way given an arbitrary approximation/competitive algorithm Asing for the problem 1|prec, rj |γ on a single machine and an approximation/competitive algorithm Aid for the problem P |prec, rj |γ on identical machines that satisfies the natural increase m-property:

Increase m-property: For every pair of integers m2 ≥ m1 > 0, there exists a constant δ(m1, m2) ∈ R+ such that, for any instance of P |prec, rj |γ, the objective value γAid produced by Aid on m1 machines satisfies γAid ≤ δ(m1, m2) · opt_{m2}, where opt_{m2} denotes the optimal objective value for the instance on m2 machines.

Note that the property, in particular, implies that Aid is δ(m, m)-approximate/competitive on m machines, so the approximation/competitive ratio of the algorithm is encoded in the constants δ(m, m).

We show that the two specific scheduling problems with precedence constraints and release dates Q|prec, rj |Cmax and Q|prec, rj |Σ wj Cj in the offline and the online version are contained in the class C. We obtain monotone O(m^{2/3})-approximation algorithms for the offline versions of both problems and also an O(m^{2/3})-competitive algorithm for Q|prec, rj |Cmax in the online case. We note that, for the basic case without precedence constraints and release dates, there are better (constant factor) monotone approximation algorithms; see, e.g., [5,6]. However, the best approximation algorithm known for Q|prec, rj |Cmax and
Q|prec, rj |Σ wj Cj is the (nonmonotone) algorithm of Chudak and Shmoys [11], which achieves an approximation ratio of O(log m). The idea of our generic algorithm is based on the monotone algorithm from [7], where a monotone algorithm for the offline version of Q|prec|Cmax was presented. However, our results are far more general. Moreover, our algorithm has a polynomial running time provided that both algorithms Asing and Aid run in polynomial time.

2 The Class C

We start our discussion by introducing the class C of scheduling problems. It consists of all offline and online scheduling problems Π = Q|prec, rj |γ with the following two properties:

1-machine property: There exist constants c_{m,q,α} ∈ R+ such that the following holds: If opt_{m,q,α} denotes the optimal objective value for any given instance of Π on m ≥ q > 0 machines, of which q have speed 1 and the remaining m − q have speed α ≤ 1, and opt_1 denotes the optimal objective value for the corresponding instance of the single machine version 1|prec, rj |γ of Π on a machine of speed 1, then opt_1 ≤ c_{m,q,α} · opt_{m,q,α}.

Speed change property: If the speed of all machines is decreased by a constant factor 0 < ϕ ≤ 1, then the objective value of any feasible schedule for an instance of Π gets worse by a factor of at most 1/ϕ. If the speed of any number of machines is increased by a constant factor ϕ > 1, then the optimal objective value for any instance of Π cannot get worse.

Note that the online version of a problem Π = Q|prec, rj |γ is in C if and only if the offline version of Π is in C. This follows since the 1-machine property and the speed change property only depend on relations between optimal objective values of Π on different machines and between objective values of given schedules, both of which do not depend on whether the problem is online or not. Hence, we do not need to specify whether we consider the offline or the online version of a problem Π when writing Π ∈ C. For every problem Π ∈ C, we denote the corresponding problem P |prec, rj |γ on identical machines by Πid and the corresponding problem 1|prec, rj |γ on a single machine by Πsing.

We now show that the two problems Q|prec, rj |Cmax and Q|prec, rj |Σ wj Cj are contained in the class C. An example of a problem that does not belong to C is the problem of minimizing the maximum lateness. Here, each job j has a due date dj and its lateness is its completion time minus dj. Consider the situation when there is one job of size 1, due date 1, and release date 0. If at least one machine has speed at least 1, the optimum value is 0, but if the speed of all machines is decreased below 1, the optimum value is strictly positive, so the speed change property is violated.


Theorem 1. The problem Q|prec, rj |Cmax has the 1-machine property with c_{m,q,α} = q + (m − q) · α and the speed change property and, hence, belongs to the class C.

Proof. The speed change property is obvious since the objective is to minimize the makespan. To show that the problem has the 1-machine property, we fix m, q, α and scale the job sizes in the instance considered such that opt_{m,q,α} = 1. We denote an optimal schedule on m machines, of which q have speed 1 and m − q have speed α ≤ 1, by Sopt. Since q machines have speed 1, the total amount of work assigned to each of these machines (say machine k) in the schedule Sopt is at most 1 − t^k_idle, where t^k_idle denotes the total amount of time in which machine k is idle in Sopt. The total amount of work assigned to each of the m − q machines of speed α (say machine k′) in Sopt is at most α · (1 − t^{k′}_idle). Hence, the total amount of work to be done is bounded from above by Σ_{k=1}^{q} (1 − t^k_idle) + Σ_{k′=q+1}^{m} α · (1 − t^{k′}_idle), so the optimal makespan on a single machine of speed 1 is at most

x(m) = t_idle + Σ_{k=1}^{q} (1 − t^k_idle) + Σ_{k′=q+1}^{m} α · (1 − t^{k′}_idle)
     ≤ q + (m − q) · α + t_idle − Σ_{k=1}^{m} t^k_idle,

where t_idle is the total amount of time the single machine is idle in an optimal single machine schedule S. We will now show that t_idle − Σ_{k=1}^{m} t^k_idle ≤ 0, which then yields x(m) ≤ q + (m − q) · α, so the claim follows.

Let t be the last time before x(m) where the single machine switches from load to idle in the schedule S, and denote the time where the next job is started in the schedule S by t + l. Then, it follows by optimality of S that t + l is the minimum of the release dates of all jobs that were not processed in S before time t. Hence, none of these jobs can be started in any feasible schedule before time t + l. In particular, the total amount of work done by all machines in the schedule Sopt until time t + l cannot be larger than the amount of work done by the single machine in the schedule S until time t + l, which implies that the idle time of the fastest machine (say machine k0) in Sopt until time t + l must be at least as large as t_idle, i.e., t^{k0}_idle ≥ t_idle, so t_idle − Σ_{k=1}^{m} t^k_idle ≤ 0 as claimed. ⊓⊔

Theorem 2. The problem Q|prec, rj |Σ wj Cj has the 1-machine property with c_{m,q,α} = q + (m − q) · α and the speed change property and, hence, belongs to the class C.

Proof. The speed change property is again obvious since the objective is to minimize the weighted sum of completion times. To show that the problem has the 1-machine property, we first consider the case that all release dates are zero. We fix m, q, α and an instance of the problem. To introduce notation, we first consider m machines, of which the first q have speed 1 and the remaining m − q have speed α ≤ 1. For this setting, we let S denote a schedule that
minimizes the weighted sum of completion times. The objective value of S (i.e., the optimal objective value in this setting) is denoted by opt_{m,q,α}, and Cj denotes the completion time of a job j in S. opt_1 denotes the minimal sum of weighted completion times on a single machine of speed 1. Now we consider the schedule S′ on a single machine of speed 1 obtained by scheduling the jobs in order of nondecreasing Cj, and we denote the completion time of a job j in the schedule S′ by C′_j. Note that the schedule S′ does not contain any idle times and it respects the precedence constraints since S does so. Moreover, the following inequalities hold for every job j:

C′_j ≤ Σ_{j̃ : C_j̃ ≤ Cj} p_j̃ ≤ q · Cj + (m − q) · α · Cj   (1)

The first inequality follows from the construction of the schedule S′ and the second one from the fact that the m machines used in the schedule S can do at most q + α · (m − q) units of work per time due to their speeds. Using these inequalities, we now obtain

opt_1 ≤ Σ_{j=1}^{n} wj C′_j ≤ Σ_{j=1}^{n} wj · (q · Cj + (m − q) · α · Cj)
      ≤ q · Σ_{j=1}^{n} wj Cj + (m − q) · α · Σ_{j=1}^{n} wj Cj = (q + (m − q) · α) · opt_{m,q,α},

which proves the 1-machine property with c_{m,q,α} = q + (m − q) · α for the case that all release dates are zero. In the case where some release dates are nonzero, we may still assume that the smallest release date is zero since no work can be done on any number of machines before the smallest release date. Then, the argumentation above still goes through except for the fact that the single machine schedule S′ can now contain idle time. But if the single machine schedule S′ contains an idle slot [t, t + l], then all jobs j with release date rj < t + l must have been completed in S′ at time t, and t + l is the smallest release date of the remaining jobs. For the jobs that are completed until time t, the argumentation above then yields the inequalities (1) as before. Moreover, it follows by optimality of the schedule S that all these jobs are also completed in S until time t, and all the remaining jobs cannot be started before time t + l in S due to their release dates. Hence, the argumentation above can be applied to the schedules S and S′ from time t + l onwards and yields the inequalities (1) for the remaining jobs. ⊓⊔

3 The General Scheme

We now present our general scheme for constructing a monotone approximation/competitive algorithm for a problem Π ∈ C from given approximation/competitive algorithms Aid for Πid and Asing for Πsing.


Algorithm 1 (General Monotone Algorithm for a Problem Π ∈ C)
1. Fix an ordering of the machines that does not depend on the speeds.
2. Ignore all machines with speed less than α, where 0 < α ≤ 1 is a parameter to be fixed later.
3. If less than g(m) machines remain, use Asing to schedule all jobs on the fastest machine, where g : N+ → N is a function to be fixed later with the property that g(m) ≤ m for all m.
4. If at least g(m) machines remain, choose the g(m) fastest among them. Treat them as g(m) identical machines numbered according to the ordering fixed in step 1 and use Aid to schedule the jobs on them.

Ties among machines of the same speed are always broken by choosing the smaller machine with respect to the ordering fixed in step 1. Note that Algorithm 1 is clearly an online algorithm if the two algorithms Aid and Asing are online algorithms, since the decision about the algorithm to use is based only on the machine speeds, which are known before the first job arrives. A code sketch of this selection logic follows Theorem 4 below.

Proposition 1. Algorithm 1 has polynomial running time if Aid and Asing have polynomial running time. ⊓⊔

Theorem 3. Algorithm 1 is monotone.

Proof. Fix a machine i, an instance of the problem, and the speeds of all other machines. If i is the fastest machine and gets all the jobs, a decrease in the speed of i will clearly not increase the load assigned to i any further. Hence, we can assume for the remainder of the proof that this case does not occur.

If there are at least g(m) “fast machines”, we distinguish two cases: i belongs to the g(m) fastest machines or not. In the first case, as long as i stays among the g(m) fastest machines, the load assigned to i only depends on the position of i in the ordering fixed in step 1, since the g(m) fastest machines are treated as identical machines and the schedule is produced by Aid. On the other hand, if i decreases its speed so much that it drops out of the set of the g(m) fastest machines, it will receive no jobs any more and its load will clearly decrease. In the second case, i is not among the g(m) fastest machines before it decreases its speed. Hence, it received no load at all. After decreasing its speed, i will not belong to the g(m) fastest machines either, so it will still receive no load.

If there are strictly less than g(m) “fast machines”, our algorithm schedules all jobs on the fastest machine, which is not i by our initial assumption. Hence, i receives no load at all. When i decreases its speed, the number of “fast machines” will not increase since the speed of the fastest machine remains unchanged, so all jobs will still be assigned to the fastest machine, which is not i. ⊓⊔

Theorem 4. If Asing has approximation/competitive ratio ρsing, then, with an appropriate choice of α, Algorithm 1 has an approximation/competitive ratio of

min_{0<α≤1} max{ ρsing · c_{m,g(m),α} , δ(g(m), m)/α }.
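For concreteness, the machine-selection logic of Algorithm 1 can be sketched as follows. Asing and Aid are treated as black boxes, the interface is our own, and the default g(m) = ⌈m^{2/3}⌉ is only an illustrative choice consistent with the O(m^{2/3}) bounds of Sect. 1.2:

```python
import math

def monotone_schedule(jobs, speeds, alpha, a_sing, a_id,
                      g=lambda m: math.ceil(m ** (2 / 3))):
    """Sketch of Algorithm 1; speeds are normalized so max(speeds) == 1.

    Ties are broken by machine index, the speed-independent ordering of step 1.
    a_sing(jobs) and a_id(jobs, k) are the given sub-algorithms (black boxes).
    """
    m = len(speeds)
    fast = [i for i in range(m) if speeds[i] >= alpha]       # step 2
    if len(fast) < g(m):                                     # step 3
        fastest = max(range(m), key=lambda i: (speeds[i], -i))
        return {fastest: a_sing(jobs)}
    # step 4: the g(m) fastest remaining machines, treated as identical
    chosen = sorted(fast, key=lambda i: (-speeds[i], i))[: g(m)]
    chosen.sort()  # number them according to the fixed ordering of step 1
    slots = a_id(jobs, len(chosen))  # identical-machine slot -> jobs
    return {chosen[k]: slots[k] for k in range(len(chosen))}
```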

Lemma 1. Let m1, m2 > 0 be integers with m2 ≥ m1. Let M denote the makespan of the schedule produced by Algorithm 3 when scheduling on m1 machines, and let opt_{m2} denote the optimal makespan for scheduling the given jobs on m2 machines. Let R := max_{j=1,...,n} rj. Then:

M ≤ 2 · (1 + (m2 − 1)/m1) · opt_{m2} + 2R ≤ 2 · (2 + (m2 − 1)/m1) · opt_{m2}

Proof. The second inequality in the claim follows from the fact that opt_{m2} ≥ R, so it just remains to show that the first inequality holds. According to Graham [12],

LST′_{m1} ≤ (1 + (m2 − 1)/m1) · LST′_{m2},

where LST′_{m1} and LST′_{m2} denote the makespans of arbitrary schedules produced by list scheduling on m1 and m2 machines, respectively, without release dates. Since without release dates there always exists a job list (i.e., an ordering of the
jobs) such that the schedule produced by list scheduling with this job list has optimal makespan, this, in particular, yields

opt′_{m1} ≤ (1 + (m2 − 1)/m1) · opt′_{m2},

where opt′_{m1} and opt′_{m2} denote the optimal makespans on m1 and m2 machines, respectively, without release dates. Clearly, introducing release dates cannot decrease the optimal makespan, so opt′_{m2} ≤ opt_{m2}. Moreover, we know that opt_{m1} ≤ opt′_{m1} + R (where opt_{m1} denotes the optimal makespan for scheduling on m1 machines with release dates) since starting the optimal schedule without release dates at time R yields a feasible schedule with release dates. So since M ≤ 2 · opt_{m1} by Proposition 4, we get

M ≤ 2 · opt_{m1} ≤ 2 · opt′_{m1} + 2R ≤ 2 · (1 + (m2 − 1)/m1) · opt′_{m2} + 2R ≤ 2 · (1 + (m2 − 1)/m1) · opt_{m2} + 2R

as claimed.

⊓ ⊔
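Lemma 1 is driven by Graham's list scheduling bound. Algorithm 3 itself is not restated in this excerpt; the following is only a generic sketch of greedy list scheduling with release dates on identical machines (precedence constraints omitted for brevity, and the job representation is ours), which illustrates the mechanics behind bounds such as M ≤ 2 · opt_{m1} of Proposition 4:

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling on m identical machines.

    jobs: list of (release_date, processing_time) pairs, in list order.
    Each job is given to the machine that becomes idle first; it starts
    no earlier than its release date. Returns the makespan.
    """
    free_at = [0.0] * m          # min-heap of times at which machines idle
    heapq.heapify(free_at)
    makespan = 0.0
    for r, p in jobs:
        t = heapq.heappop(free_at)
        start = max(t, r)        # respect the release date
        finish = start + p
        heapq.heappush(free_at, finish)
        makespan = max(makespan, finish)
    return makespan
```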

Corollary 1. Algorithm 3 has the increase m-property.

Proof. Directly from Lemma 1 by setting δ(m1, m2) := 2 · (2 + (m2 − 1)/m1) > 0. ⊓⊔

Hence, we can use Algorithm 2 and Algorithm 3 in Algorithm 1, and since both are clearly polynomial-time algorithms, Theorem 4 and Proposition 2 immediately yield the following result:

Theorem 5. Using Algorithm 2 as Asing and Algorithm 3 as Aid in Algorithm 1 yields a monotone polynomial time competitive algorithm for the problem Q|prec, rj |Cmax, which has competitive ratio

min_{0<α≤1} max{ g(m) + (m − g(m)) · α , (1/α) · 2 · (2 + (m − 1)/g(m)) }.

≥ 2 − 3p + n · p = Δ + 2 − 3p. The theorem follows for p → 0, which implies n → ∞.

⊓ ⊔

Recall that the Price of Anarchy of (non-malicious) congestion games with affine latency functions is 5/2 [5]. By combining this with Corollary 1 and Theorem 5 we get:

Corollary 2. Consider the class of malicious Bayesian congestion games G(Δ) with affine latency functions and identical type probability p. Then, PoM(Δ) = Θ(Δ).

For certain congestion games, introducing malicious types might also be beneficial to the system, in the sense that the social cost of the worst case equilibrium (one that maximizes social cost) decreases. To capture this, we define the Windfall of Malice; the term is due to [4]. For a malicious Bayesian congestion game Ψ, denote by WoM(Ψ) the ratio between the social costs of the worst case Nash equilibrium of the corresponding congestion game ΓΨ and the worst case Bayesian Nash equilibrium of Ψ. We show:

Theorem 6. For each ε > 0 there is a malicious Bayesian congestion game Ψ with linear latency functions and identical type probability p, such that WoM(Ψ) ≥ 5/2 − ε.

Proof (Sketch). Define Ψ = Ψ(α) as in Example 1 with n = 3 and α = 1. For the congestion game ΓΨ that corresponds to Ψ, all players u choosing s_u^2 is a Nash equilibrium s that maximizes social cost, and SC(ΓΨ, s) = 5. For the malicious congestion game Ψ (where p > 0), there is a unique (pure) Bayesian Nash equilibrium σ where σ(u_s) = s_u^1 and σ(u_m) = s_u^3 for all players u ∈ N. For its social cost we get SC(Ψ, σ) = 2 + 4p. So, for each ε > 0 there is a sufficiently small p, such that

WoM(Ψ) = SC(ΓΨ, s)/SC(Ψ, σ) = 5/(2 + 4p) ≥ 5/2 − ε.

This completes the proof of the theorem.

⊓ ⊔

We remark that this is actually a tight result, since for the considered class of malicious Bayesian congestion games the Windfall of Malice cannot be larger than the Price of Anarchy of the corresponding class of congestion games, which was shown to be 5/2 in [5].

5 Conclusion and Open Problems

In this paper, we have introduced and studied a new extension of congestion games, which we call malicious Bayesian congestion games. More specifically, we have studied problems concerned with the complexity of deciding the existence of pure Bayesian Nash equilibria. Furthermore, we have presented results on the Price of Malice. Although we were able to derive multiple interesting results, this work also gives rise to many interesting open problems. We conclude this paper by stating those that we consider most prominent.

– Our NP-completeness result in Theorem 1 holds even for linear latency functions, identical type probabilities, and if all strategy sets are singleton sets of resources. However, if such games are further restricted to symmetric games and identical linear latency functions, then deciding the existence of a pure Bayesian Nash equilibrium becomes a trivial task. We believe that this task can also be performed in polynomial time for non-identical linear latency functions and symmetric strategy sets.
– Although the upper bound in Corollary 1 and the corresponding lower bound in Theorem 5 are asymptotically tight, there is still potential to improve. We conjecture that in this case PoB(Δ) = Δ + O(1).
– We believe that the concept of malicious Bayesian games is very interesting and deserves further study also in other scenarios. We hope that our work will encourage others to study such malicious Bayesian games.

Acknowledgments

We are very grateful to Christos Papadimitriou and Andreas Maletti for many fruitful discussions on the topic. Moreover, we thank Florian Schoppmann for his helpful comments on an early version of this paper.

References

1. Ackermann, H., Röglin, H., Vöcking, B.: On the Impact of Combinatorial Structure on Congestion Games. In: Proc. of the 47th Annual Symposium on Foundations of Computer Science (FOCS 2006), pp. 613–622 (2006)
2. Aland, S., Dumrauf, D., Gairing, M., Monien, B., Schoppmann, F.: Exact price of anarchy for polynomial congestion games. In: Durand, B., Thomas, W. (eds.) STACS 2006. LNCS, vol. 3884, pp. 218–229. Springer, Heidelberg (2006)
3. Awerbuch, B., Azar, Y., Epstein, A.: The Price of Routing Unsplittable Flow. In: Proc. of the 37th Annual ACM Symposium on Theory of Computing (STOC 2005), pp. 57–66 (2005)
4. Babaioff, M., Kleinberg, R., Papadimitriou, C.H.: Congestion Games with Malicious Players. In: Proc. of the 8th ACM Conference on Electronic Commerce (EC 2007), pp. 103–112 (2007)
5. Christodoulou, G., Koutsoupias, E.: The Price of Anarchy of Finite Congestion Games. In: Proc. of the 37th Annual ACM Symposium on Theory of Computing (STOC 2005), pp. 67–73 (2005)
6. Conitzer, V., Sandholm, T.: New Complexity Results about Nash Equilibria. In: Proc. of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), pp. 765–771 (2003)
7. Dunkel, J., Schulz, A.S.: On the Complexity of Pure-Strategy Nash Equilibria in Congestion and Local-Effect Games. In: Spirakis, P.G., Mavronicolas, M., Kontogiannis, S.C. (eds.) WINE 2006. LNCS, vol. 4286, pp. 62–73. Springer, Heidelberg (2006)
8. Fabrikant, A., Papadimitriou, C.H., Talwar, K.: The Complexity of Pure Nash Equilibria. In: Proc. of the 36th Annual ACM Symposium on Theory of Computing (STOC 2004), pp. 604–612 (2004)
9. Fotakis, D.A., Kontogiannis, S.C., Spirakis, P.G.: Symmetry in network congestion games: Pure equilibria and anarchy cost. In: Erlebach, T., Persiano, G. (eds.) WAOA 2005. LNCS, vol. 3879, pp. 161–175. Springer, Heidelberg (2006)
10. Gairing, M., Monien, B., Tiemann, K.: Selfish Routing with Incomplete Information. Theory of Computing Systems 42(1), 91–130 (2008)
11. Gairing, M., Schoppmann, F.: Total Latency in Singleton Congestion Games. In: Deng, X., Graham, F.C. (eds.) WINE 2007. LNCS, vol. 4858, pp. 381–387. Springer, Heidelberg (2007)
12. Goemans, M.X., Mirrokni, V., Vetta, A.: Sink Equilibria and Convergence. In: Proc. of the 46th Annual Symposium on Foundations of Computer Science (FOCS 2005), pp. 142–154 (2005)
13. Gottlob, G., Greco, G., Mancini, T.: Complexity of Pure Equilibria in Bayesian Games. In: Proc. of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), pp. 1294–1299 (2007)
14. Harsanyi, J.C.: Games with Incomplete Information Played by Bayesian Players, I, II, III. Management Science 14, 159–182, 320–332, 468–502 (1967)
15. Karakostas, G., Viglas, A.: Equilibria for networks with malicious users. In: Ibaraki, T., Katoh, N., Ono, H. (eds.) ISAAC 2003. LNCS, vol. 2906, pp. 696–704. Springer, Heidelberg (2003)
16. Koutsoupias, E., Papadimitriou, C.H.: Worst-case equilibria. In: Meinel, C., Tison, S. (eds.) STACS 1999. LNCS, vol. 1563, pp. 404–413. Springer, Heidelberg (1999)
17. Libman, L., Orda, A.: Atomic Resource Sharing in Noncooperative Networks. Telecommunication Systems 17(4), 385–409 (2001)
18. Milchtaich, I.: Congestion Games with Player-Specific Payoff Functions. Games and Economic Behavior 13(1), 111–124 (1996)
19. Moscibroda, T., Schmid, S., Wattenhofer, R.: When Selfish Meets Evil: Byzantine Players in a Virus Inoculation Game. In: Proc. of the 25th Annual ACM Symposium on Principles of Distributed Computing (PODC 2006), pp. 35–44 (2006)
20. Nash, J.F.: Non-Cooperative Games. Annals of Mathematics 54(2), 286–295 (1951)
21. Rosenthal, R.W.: A Class of Games Possessing Pure-Strategy Nash Equilibria. International Journal of Game Theory 2, 65–67 (1973)
22. Roughgarden, T., Tardos, É.: How Bad Is Selfish Routing? Journal of the ACM 49(2), 236–259 (2002)
23. Tovey, C.A.: A Simplified NP-complete Satisfiability Problem. Discrete Applied Mathematics 8, 85–89 (1984)

Stackelberg Strategies and Collusion in Network Games with Splittable Flow

Tobias Harks⋆

Institute of Mathematics, Technical University Berlin, Germany
[email protected]

⋆ Research supported by the Federal Ministry of Education and Research (BMBF grant 03MOPAI1).

Abstract. We study the impact of collusion in network games with splittable flow and focus on the well established price of anarchy as a measure of this impact. We first investigate symmetric load balancing games and show that the price of anarchy is bounded from above by m, where m denotes the number of coalitions. For general networks, we present an instance showing that the price of anarchy is unbounded, even in the case of two coalitions. If latencies are restricted to polynomials, we prove upper bounds on the price of anarchy for general networks, which improve upon the current best ones except for affine latencies. In light of the negative results even for two coalitions, we analyze the effectiveness of Stackelberg strategies as a means to improve the quality of Nash equilibria. We show that for a simple strategy, called SCALE, the price of anarchy reduces to 1 + α for general networks and a single atomic follower. Finally, we investigate SCALE for multiple coalitional followers, general networks, and affine linear latencies. We present the first known upper bound on the price of anarchy in this case. Our bound smoothly varies between 1.5 when α = 0 and full efficiency when α = 1.

1 Introduction

Over the past years, the impact of the behavior of selfish, uncoordinated users in congested networks has been investigated intensively in the theoretical computer science literature. In this context, network routing games have proved to be a reasonable means of modeling selfish behavior in networks. The basic idea is to model the interaction of selfish network users as a noncooperative game. We are given a directed graph with latency functions on the arcs and a set of origin-destination pairs, called commodities. Every commodity is associated with a demand, which specifies the rate of flow that needs to be sent from the respective origin to the destination. In the nonatomic variant, every demand represents a continuum of agents, each controlling an infinitesimal amount of flow. The latency that an agent experiences to traverse an arc is given by a (non-decreasing) function of the total flow on that arc. Agents are assumed to act selfishly and route their flow along a minimum-latency path from their origin
to the destination; a solution in which no agent can switch to a path with smaller travel time corresponds to a Wardrop equilibrium [27].

In this paper, our focus is to study the impact of coalitions of subsets of agents on the price of anarchy. Consider a nonatomic network game and a subset of agents forming a coalition. We assume that this coalition aims at minimizing the average delay experienced by this coalition. In this setting, we study Nash equilibria: stable points, where no coalition can unilaterally improve its cost by rerouting its flow. The considered model is relevant in today's traffic networks, where route-guidance systems are increasingly popular. A route-guidance operator can be thought of as a coalition in the sense that the operator determines routes so as to minimize the travel time of its customers.

Even though the model under consideration has been studied by many researchers, see among others Cominetti et al. [6], Hayrapetyan et al. [13], Korilis et al. [16], and Roughgarden and Tardos [24], several intriguing open questions persist. Cominetti et al. [6] discovered that the price of anarchy in these games may exceed that of corresponding nonatomic games without coalitions. More precisely, Cominetti et al. [6] presented an instance showing that for polynomial latency functions of degree d, the price of anarchy grows as Ω(d). On the positive side, they presented upper bounds of 1.5, 2.56, and 7.83 for polynomial latency functions of degree d = 1, 2, 3, respectively. For polynomials of larger degree, the best known upper bound is O(2^d · d^{d+1}), which is due to Hayrapetyan et al. [13]. Despite these efforts, the gap between Ω(d) and O(2^d · d^{d+1}) is still large.

Our Results. We investigate nonatomic network routing games with coalitions. Our contribution in this setting is the following:

– First, we consider symmetric load balancing games, that is, we are given parallel arcs that connect a common source and a common sink. In this setting, we show that for convex latencies the price of anarchy is bounded by m, where m denotes the number of coalitions. This result generalizes the previous result of Cominetti et al. [6] in the sense that we do not require that the flow is evenly distributed among coalitions. On the other hand, our result is more restrictive as it only holds for parallel arcs.
– We then investigate the efficiency of Nash equilibria for general networks. We show that the price of anarchy in such games is unbounded, even for two coalitions. If the class of allowable latencies is restricted to polynomials with nonnegative coefficients and maximum degree d, we prove upper bounds on the price of anarchy of approximately (√d)^d for d ≥ 2. This result improves all previously known bounds, except for the affine linear case.

Due to the large efficiency loss of Nash equilibria, researchers have proposed different approaches to reduce the price of anarchy in network routing games. One of the most promising approaches is the use of Stackelberg routing, see [16,22]. In this setting, it is assumed that a fraction α ∈ [0, 1] of the entire demand is controlled by a central authority, termed Stackelberg leader, while the
remaining demand is controlled by the selfish players, also called the followers. In a Stackelberg game, the Stackelberg leader first routes the centrally controlled flow according to a predetermined policy, called the Stackelberg strategy, and then the remaining demand is routed by the selfish followers. The aim is to devise Stackelberg strategies so as to minimize the price of anarchy of the resulting combined flow.

– In light of the negative results that hold even for only two coalitions, we investigate Stackelberg strategies as a way to improve the quality of Nash equilibria. Recently, Bonifaci et al. [3] showed that for nonatomic followers and single commodity networks, no Stackelberg strategy can reduce the price of anarchy to a constant. This result, however, does not rule out the existence of a Stackelberg strategy inducing a constant price of anarchy when the number of coalitional followers is small. Indeed, we prove that the SCALE strategy reduces the price of anarchy to 1 + α when there is a single atomic follower. This result holds for convex latencies and general networks.
– Finally, we consider general networks and multiple coalitional followers. For affine linear latencies, we prove that the SCALE strategy yields an upper bound on the price of anarchy which smoothly varies between the best known bound on the price of anarchy of 1.5 when α = 0 and full efficiency when α = 1.

Related Work. Koutsoupias and Papadimitriou [17] initiated the investigation of the efficiency loss caused by selfish behavior. They introduced a measure to quantify the inefficiency of Nash equilibria which they termed the price of anarchy. The price of anarchy is defined as the worst-case ratio of the cost of a Nash equilibrium over the cost of a system optimum. In a seminal work, Roughgarden and Tardos [24] showed that the price of anarchy for network routing games with nonatomic players and linear latency functions is 4/3; in particular, this bound holds independently of the underlying network topology. The case of more general families of latency functions has been studied by Roughgarden [20] and Correa et al. [7]. (For an overview of these results, we refer to the book by Roughgarden [23].) For discrete network games, where players control a discrete amount of flow, Roughgarden and Tardos examined the price of anarchy for the unsplittable variant [24]. Awerbuch et al. [2], Christodoulou and Koutsoupias [5], and Aland et al. [1] studied the price of anarchy for weighted and unweighted congestion games with polynomial latency functions.

Close to our work are the papers by Hayrapetyan et al. [13] and Cominetti et al. [6]. The former presented a general framework for studying congestion games with colluding players. Their goal is to investigate the price of collusion: the factor by which the quality of Nash equilibria can deteriorate when coalitions form. Their results imply that for symmetric nonatomic load balancing games with coalitions the price of anarchy does not exceed that of the game without coalitions. For weighted congestion games with coalitions and polynomial latencies they proved upper bounds of O(2^d · d^{d+1}). They also presented examples
showing that in discrete network games, the price of collusion may be strictly larger than 1, i.e., coalitions may strictly increase the social cost. Cominetti et al. [6] studied the atomic splittable selfish routing model, which is a special case of the nonatomic congestion game with coalitions. They observed that the price of anarchy of this game may exceed that of the standard nonatomic selfish routing game. Based on the work of Catoni and Pallotino [4], they presented an instance with affine latency functions, where the price of anarchy is 1.34. Using a variational inequality approach, they presented bounds on the price of anarchy for linear and polynomial latency functions of degree two and three of 1.5, 2.56, and 7.83, respectively. For polynomials of larger degree, their approach does not yield bounds. For single commodity networks with symmetric demands (every coalition controls the same amount of flow), Cominetti et al. [6] proved an upper bound of m on the price of anarchy. Fotakis, Kontogiannis, and Spirakis [12] studied algorithmic issues in the setting of atomic congestion games with coalitions and unsplittable flows. They proved upper bounds on the price of anarchy, where the cost of a coalition is defined as the maximum latency, see also the KP-model [17]. The idea of using Stackelberg strategies to improve the performance of a system was first proposed by Korilis, Lazar, and Orda [16]. The authors identified necessary and sufficient conditions for the existence of Stackelberg strategies that induce a system optimum; their model also considers atomic splittable followers. In particular, they showed that for a single atomic splittable follower, parallel arcs, and M/M/1 latencies, there exists an optimal Stackelberg strategy that reduces the price of anarchy to one. Roughgarden [22] proposed some natural Stackelberg strategies, e.g., SCALE and Largest-Latency-First (LLF). For parallel-arc networks he showed that the price of anarchy for LLF is bounded by 4/(3 + α) and 1/α for linear and arbitrary latency functions, respectively. Both bounds are tight. Moreover, he also proved that it is NP-hard to compute the best Stackelberg strategy. Kumar and Marathe [18] gave a PTAS to compute the best Stackelberg strategy for the case of parallel-arc networks. Karakostas and Kolliopoulos [15] proved upper bounds on the price of anarchy for SCALE and LLF. Their bounds hold for arbitrary multi-commodity networks and linear latency functions. Swamy [26] obtained upper bounds on the price of anarchy for SCALE and LLF for polynomial latency functions. He also proved a bound of 1+1/α for single-commodity, series-parallel networks with arbitrary latency functions. Bonifaci et al. [3] proved that even for single-commodity networks no Stackelberg strategy can induce a bounded price of anarchy for any α ∈ (0, 1). Correa and Stier-Moses [9] proved, besides some other results, that strategies in which the Stackelberg leader sends no more flow on every edge than the system optimum, does not increase the price of anarchy. Sharma and Williamson [25] considered the problem of determining the smallest value of α such that the price of anarchy can be improved. They obtained results for parallel-arc networks and linear latency functions. Kaporis and Spirakis [14] studied a related question of finding the minimum demand that the Stackelberg leader needs to control in order to enforce an optimal flow.

2 The Model

In a network routing game we are given a directed network G = (V, A) and k origin-destination pairs (s1, t1), ..., (sk, tk) called commodities. For every commodity i ∈ [k], a demand ri > 0 is given that specifies the amount of flow with origin si and destination ti. Let Pi be the set of all paths from si to ti in G and let P = ∪i Pi. A flow is a function f : P → R+. The flow f is feasible (with respect to r) if for all i, Σ_{P∈Pi} fP = ri. For a given flow f, we define the flow on an arc a ∈ A as fa = Σ_{P∋a} fP. Moreover, each arc a ∈ A has an associated variable latency denoted by ℓa(·). For each a ∈ A the latency function ℓa is assumed to be nonnegative, nondecreasing, and differentiable. If not indicated otherwise, we also assume that ℓa is defined on [0, ∞) and that x ℓa(x) is a convex function of x. Such functions are called standard [20]. The latency of a path P with respect to a flow f is defined as the sum of the latencies of the arcs in the path, denoted by ℓP(f) = Σ_{a∈P} ℓa(fa). The cost of a flow f is C(f) = Σ_{a∈A} fa ℓa(fa). The feasible flow of minimum cost is called optimal. We will denote the optimal flow by o.

In a nonatomic network game, infinitely many agents are carrying the flow and each agent controls only an infinitesimal fraction of the flow. The continuum of agents of type j (traveling from sj to tj) is represented by the interval [0, rj]. It is well known that for this setting flows at Nash equilibrium exist and their total latency is unique, see [23]. Furthermore, the price of anarchy, which measures the worst case ratio of the cost of any Nash flow and that of an optimal flow, is well understood, see Roughgarden and Tardos [24], Correa et al. [7,8], Perakis [19], and Roughgarden [23].

In this paper, we study the impact of coalition formation of subsets of agents on the price of anarchy. Consider a discrete quantity of flow that forms a coalition. We assume that this coalition aims at minimizing the average delay experienced by this coalition. Let [C] = {c1, ..., cm} denote the set of coalitions. Each coalition ci ∈ [C] is characterized by a tuple (ci1, ..., cik), where every cij is a subset of the continuum [0, rj] of agents of type j. We assume that every agent of type j belongs to exactly one coalition ci, i.e., the disjoint union of the cij, i ∈ [m], represents the entire continuum [0, rj]. The tuple (G, r, ℓ, C) is called an instance of the nonatomic network game with coalitions. This model has been proposed by Hayrapetyan et al. [13], and it includes the special case where we have exactly k coalitions, each of which controls the flow of one commodity. This case corresponds to the atomic splittable selfish routing model studied by Roughgarden and Tardos [24], Correa et al. [8], and Cominetti et al. [6].

We denote by f^{ci} the flow of coalition ci and define f^{ci}_a to be the aggregated flow of coalition ci on arc a. The cost of coalition ci is defined as C(f^{ci}; f^{−ci}) := Σ_{a∈A} ℓa(fa) f^{ci}_a, where f^{−ci} denotes the flow of all other coalitions. In a Nash equilibrium, every coalition routes its flow so as to minimize C(f^{ci}; f^{−ci}), with the understanding that coalition ci optimizes over f^{ci} while the flow f^{−ci} of all other coalitions is fixed.
If latencies are restricted to be standard, minimizing C(f^{ci}; f^{−ci}) is a convex optimization problem, see for example Roughgarden [21]. The following conditions are necessary and sufficient to characterize a flow at Nash equilibrium for a nonatomic network game with coalitions.

Lemma 1. A feasible flow f is at Nash equilibrium for a nonatomic network game with m coalitions if and only if for every i ∈ [m] the following inequality is satisfied:

Σ_{a∈A} (ℓa(fa) + ℓ′a(fa) f^{ci}_a) (f^{ci}_a − x^{ci}_a) ≤ 0   for all feasible flows x^{ci}.   (1)

The proof is based on the first and second order optimality conditions for minimizing C(f^{ci}; f^{−ci}), see Dafermos and Sparrow [10].

3 Nonatomic Network Games with Coalitions

3.1 Symmetric Load Balancing Games

A symmetric load balancing game is a network game where the underlying digraph simply connects two distinguished nodes with parallel links.

Theorem 1. For symmetric load balancing games with m coalitions and nondecreasing, differentiable, and convex latency functions, the price of anarchy is bounded from above by m.

Proof. As usual, let f denote a Nash flow and o an optimal flow. We bound the cost of each coalition individually. Assume the flow of coalition i carries αi units of flow. We claim that there exists a feasible flow g^i such that g^i_a + f^{−i}_a ≤ oa for all a ∈ A with g^i_a > 0. To see this, we define the flow ḡ = [o − f^{−i}]_+, where the positive projection is applied component-wise, that is, for arc a we have [ḡa]_+ = ḡa if ḡa ≥ 0, and 0 otherwise. It is straightforward to verify that ḡ is a feasible flow for β ≥ αi units of flow. Hence, the flow g = (αi/β) · ḡ is feasible for coalition i. The cost of coalition i when applying strategy g can be bounded by

C^i(g; f^{−i}) = Σ_{a∈A} ℓa(ga + f^{−i}_a) ga ≤ Σ_{a∈A} ℓa(oa) ga ≤ Σ_{a∈A} ℓa(oa) oa.

The first inequality is valid since for arcs a with ga > 0, we have (αi/β) · [oa − f^{−i}_a]_+ + f^{−i}_a ≤ oa, because oa ≥ f^{−i}_a and αi/β ≤ 1. The second inequality follows since g is by definition opt-restricted, that is, ga ≤ oa for all a ∈ A. Using that coalition i plays a best response in equilibrium, we have C^i(f^i; f^{−i}) ≤ C(o). We apply the same argument for every coalition; thus, C(f) = Σ_{i∈[m]} C^i(f^i; f^{−i}) ≤ m · C(o). ⊓⊔
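The deviation flow used in the proof is simple to construct. A per-arc sketch for parallel links (the list representation is ours):

```python
def opt_restricted_response(o, f_other, alpha_i):
    """Build the flow g = (alpha_i / beta) * [o - f_other]_+ from the proof.

    o, f_other: per-arc flow values of the optimum and of the other
    coalitions; alpha_i: demand of coalition i. By the feasibility argument
    in the proof, beta >= alpha_i > 0, so the scaling is well defined and
    the result satisfies g_a <= o_a on every arc.
    """
    g_bar = [max(oa - fa, 0.0) for oa, fa in zip(o, f_other)]  # [o - f^{-i}]_+
    beta = sum(g_bar)            # g_bar routes beta units in total
    scale = alpha_i / beta
    return [scale * ga for ga in g_bar]
```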

3.2 Multi-commodity Networks

We prove the following result.

Proposition 1. Let M > 0. There is a multi-commodity instance I = (G, r, ℓ, C) with |C| = 2 such that for the Nash flow f and the optimal flow o, we have C(f) ≥ Ω(M) · C(o).


Fig. 1. The graph G, used in the proof of Proposition 1

Proof. We have two players, where one player has a demand of size M from s0 to t0. The second player has a demand of size 1 from s1 to t1. All latencies are constant (1 or 0 as indicated in Fig. 1) except for the latency function ℓ(x), which is defined as ℓ(x) = max{0, x − M}. In a Nash equilibrium, the second player will route 1/2 of its flow along the upper path. Indeed, in this case the marginal latency evaluates to ℓ(1/2 + M) + ℓ′(1/2 + M) · 1/2 = 1. The total cost of the combined flow f evaluates to C(f) = 1/2 · (M + 1/2) + 1/2 = Ω(M). The optimum routes the flows along the direct paths, leading to the cost C(o) = 1. ⊓⊔
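The equilibrium computation in the proof is easy to verify numerically: with ℓ(x) = max{0, x − M} and the second player routing z units on the upper path (on top of the first player's M), the marginal latency is ℓ(z + M) + ℓ′(z + M) · z = 2z, which equals the constant latency 1 of the direct path exactly at z = 1/2:

```python
def ell(x, M):                   # the latency of the marked arc
    return max(0.0, x - M)

M, z = 100.0, 0.5                # z: second player's flow on the upper path
marginal = ell(z + M, M) + 1.0 * z   # l(z+M) + l'(z+M)*z, with l' = 1 here
assert marginal == 1.0               # indifferent to the direct path

cost_nash = (M + z) * ell(M + z, M) + (1 - z) * 1.0  # = 1/2*(M+1/2) + 1/2
print(cost_nash / 1.0)           # C(f)/C(o) grows linearly in M
```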

The Price of Anarchy for Restricted Latencies

For every arc a, latency function ℓa , and nonnegative parameter λ we define the following nonnegative value:   c c  (fa i xai − (faci )2 ) (ℓa (fa ) − λ ℓa (xa )) xa + ℓ′a (fa ) ω(ℓa ; m, λ) :=

i∈[m]

sup

ℓa (fa ) fa

fa ,xa ≥0

.

(2) We assume 0/0 = 0 by convention. For a given class of functions L, we further define ωm (L; λ) := sup ω(ℓa ; m, λ). Given a class of latency functions L, the set ℓa ∈L   of feasible λ is defined as Λ(L) := {λ ∈ R+ | 1 − ωm (L; λ) > 0}. Theorem 2. Let L be a class of continuous, nondecreasing, and standard latency functions. Then, the price of anarchy for the nonatomic network game   −1 with m coalitions is at most inf λ∈Λ(L) λ (1 − ωm (L; λ) ) .

Proof. Let f be a flow at Nash equilibrium, and x be any feasible flow. Then,          C(f ) ≤ (3) ℓa fa + ℓ′a fa faci (xcai − faci ) ℓa (fa ) fa + a∈A

i∈[m]

       λ ℓa (xa ) xa + ℓa (fa ) − λ ℓa (xa ) xa + = ℓ′a fa faci (xcai − faci ) a∈A

≤ λ C(x) + ωm (L; λ) C(f ).

i∈[m]

(4)

140

T. Harks

Here, (3) follows from the variational inequality stated in Lemma 1. The last inequality (4) follows from the definition of ωm (L; λ). Taking x as the optimal flow the claim is proven. ⊓ ⊔ Note that whenever Λ(L) = ∅ or Λ(L) = {∞}, the approach does not yield a finite price of anarchy. Our definition of ωm (L; λ) is closely related to the parameter β m (L) in Cominetti, Correa, and Stier-Moses [6] and αm (L) in Roughgarden [21] for the atomic splittable selfish routing model. For λ = 1 we have β m (L) = ωm (L; 1) and αm (L) = (1 − ωm (L; 1))−1 . As we show in the next section, the generalized value ωm (L; λ) implies improved bounds for a large class of latency functions, e.g., polynomial latency functions. The previous approaches with β m (L) (or αm (L)) failed for instance to generate upper bounds for polynomials of degree d ≥ 4 since this value exceeds 1 (or is infinite). The advantage of Theorem 2 is that we can tune the parameter λ and, hence, ωm (L; λ) so as to minimize the price of anarchy given by λ/(1 − ωm (L; λ)). We make use of a result due to Cominetti, Correa, and Stier-Moses [6]. Theorem 3 (Cominetti et al. [6]). The value β m (ℓa ) = ω(ℓa ; m, 1) is at most     2 ℓa (fa ) − ℓa (xa ) xa + ℓ′a (fa ) (xa )2 /4 − fa − xa /2 /m . ℓa (fa ) fa xa ,fa ≥0 sup

Since the necessary calculations to prove the above claim only affect the last term in (2), which is the same for ω(ℓa ; m, λ) and β m (ℓa ), this bound carries over for arbitrary nonnegative values of λ. Corollary 1. If λ ≥ 0, the value ω(ℓa ; m, λ) is at most     2 ℓa (fa ) − λ ℓa (xa ) xa + ℓ′a (fa ) (xa )2 /4 − fa − xa /2 /m . sup ℓa (fa ) fa xa ,fa ≥0 Linear and Affine Linear Latency Functions. Cominetti et al. [6] proved an upper bound of 1.5 for affine latencies. In the following, we present a stronger result for linear latencies. We also show that for affine latencies the best bound can be achieved by setting λ = 1. In this case, we have β m (L) = ωm (L; 1). Theorem 4. Consider linear latency functions in L∗1 = {a1 z : a1 ≥ 0} and m ≥ 2 coalitions. Then, the price of anarchy is at most √



  √ 2 m + 2 m (m + 1) m + 1 + 2 m (m + 1) 2

. P (m) = 8 m (m + 1) (m + 1) √ Furthermore, lim P (m) = 34 + 21 2 ≈ 1.46. m→∞

Proof. For proving the first claim, we start with the bound on ω(ℓa; m, λ) given in Corollary 1. We define μ := xa/fa for fa > 0 and μ := 0 otherwise, and replace xa = μ fa. This yields

ω(ℓa; m, λ) ≤ max_{μ≥0} { μ² · (m − 1 − 4λm)/(4m) + μ · (m+1)/m − 1/m }.

For λ > (m−1)/(4m) this is a strictly convex program with a unique solution given by μ* = −2(m+1)/(m − 1 − 4λm). Inserting the solution yields ω(ℓa; m, λ) ≤ (m + 3 − 4λ)/(4λm + 1 − m). The condition λ ∈ Λ(L*1) is equivalent to λ > max{(m−1)/(4m), 1/2} = 1/2. We define λ = 1/2 + (1/4) · √(2(m+1)/m) ∈ Λ(L*1). Applying Theorem 2 with this value proves the claim. ⊓⊔

The proof for affine latencies is similar and leads to C(f) ≤ min_{λ≥1} (4λ² − λ)/(4λ − 2) · C(x), showing that the best bound is achieved when λ = 1.
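The optimization in the proof is one-dimensional, so the bound of Theorem 4 is easy to sanity-check numerically: maximize the quadratic in μ on a grid and compare λ/(1 − ω) with the closed form P(m). A short sketch:

```python
import math

def poa_bound_linear(m):
    """Evaluate lambda/(1 - omega(lambda)) for the lambda chosen in the proof."""
    lam = 0.5 + 0.25 * math.sqrt(2 * (m + 1) / m)
    omega = max(
        mu * mu * (m - 1 - 4 * lam * m) / (4 * m) + mu * (m + 1) / m - 1 / m
        for mu in (i / 1000.0 for i in range(3000))  # mu* < 2, so [0, 3) suffices
    )
    return lam / (1 - omega)

def P(m):
    """The closed form of Theorem 4."""
    s = math.sqrt(2 * m * (m + 1))
    return s * (2 * m + s) * (m + 1 + s) / (8 * m * (m + 1) ** 2)

for m in (2, 5, 50):
    print(m, round(poa_bound_linear(m), 4), round(P(m), 4))  # should agree
```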

Polynomial Latency Functions. We start this section by bounding the value ω(ℓa; m, λ) for standard latency functions. The following proposition is based on results obtained by Cominetti et al. [6]. In addition to their approach, which is based on analyzing the parameter β^m(L), we need to keep track of restrictions on the parameter λ. We focus in the following on the general case m ∈ N ∪ {∞}. Therefore, we define

ω(ℓa; ∞, λ) := sup_{xa,fa≥0} [ (ℓa(fa) − λ ℓa(xa)) xa + ℓ′a(fa) (xa)²/4 ] / (ℓa(fa) fa).   (5)

Then, it follows from Theorem 3 that ω(ℓa; m, λ) ≤ ω(ℓa; ∞, λ), since the square is nonnegative and lim_{m→∞} (fa − xa/2)²/m = 0. The following characterization of ω(ℓa; m, λ) via a differentiable function s(z) can be proved with techniques from Cominetti et al. [6].

Proposition 2. Let L be a class of continuous, nondecreasing, and convex latency functions. Furthermore, assume that λ ≥ 1 and ℓa(κ fa) ≥ s(κ) ℓa(fa) for all κ ∈ [0, 1], where s : [0, 1] → [0, 1] is a differentiable function with s(1) = 1. Then, ω(ℓa; ∞, λ) ≤ max_{0≤u≤1} u (1 − λ s(u) + s′(1) u/4).

We consider the class Ld := {ad x^d + ⋯ + a1 x + a0 : as ≥ 0, s = 0, ..., d}.

Corollary 2. For latency functions in Ld, d ≥ 1, the price of anarchy is at most

inf_{λ∈Λ(Ld)∩R≥1} λ · (1 − max_{0≤u≤1} u (1 − λ u^d + d u/4))^{−1}.   (6)

We omit the proof for the sake of brevity. An asymptotic approximation for general d is provided in the next theorem.

Theorem 5. Consider latency functions in Ld, d ≥ 2. Then, the price of anarchy is bounded from above by

((1/2) · √d + 1/2)^d · (d + √d + 1)/(√d + 1).

Proof. We define λ(d) as follows:

λ(d) := ((1/2) · √d + 1/2)^d · (d + √d + 1) / ((√d + 1)(d + 1)).

The proof proceeds by proving a claim, which yields a bound on ω(ℓa; ∞, λ(d)) for ℓa ∈ Ld.

Claim. max_{0≤u≤1} T(u) := u (1 − λ(d) u^d + d u/4) = d/(d+1), for all d ≥ 2.
Inserting the definition of u1 (d) and rewriting yields 2 d+24 d+2 > 12 , which holds for all d ≥ 1. Furthermore, it is easy to check that T ′′ (u) has at most one zero in (0, 1), so we omit the details. ⊓ ⊔

We can now invoke Proposition 2, which implies that ω(ℓa; ∞, λ(d)) is bounded by d/(d+1). Hence, λ(d) ∈ Λ(Ld), so we can use Theorem 2 to obtain the claimed bound of (d+1) λ(d). ⊓⊔

In the following we analyze the growth of the derived upper bound for large d (d ≥ 4). The proof consists of standard calculus and is omitted.

Corollary 3. ((1/2)√d + 1/2)^d · (d + √d + 1)/(√d + 1) ≤ (√d)^d for d ≥ 4.

Note that there is still a large gap between this upper bound and the best known lower bound, which is Ω(d); see Cominetti et al. [6].
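As a quick numerical sanity check of the claim, one can evaluate T and its first derivative at u1(d) for small d; a minimal Python sketch (the function name is ours, for illustration only, using λ(d) as written in the proof of the claim):

    import math

    def check_claim(d):
        # u1(d) = 2/(sqrt(d)+1), the candidate maximizer of T from the proof
        u1 = 2.0 / (math.sqrt(d) + 1.0)
        # lambda(d), written as in the proof of the claim
        lam = (d * u1**2 + 4 * u1 - 4.0 * d / (d + 1)) / (4 * u1**(d + 1))
        T = u1 * (1 - lam * u1**d + d * u1 / 4)        # should equal d/(d+1)
        dT = 1 - (d + 1) * lam * u1**d + d * u1 / 2    # T'(u1), should be 0
        return T, dT

    for d in range(2, 7):
        T, dT = check_claim(d)
        print(d, T, d / (d + 1), abs(dT) < 1e-9)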

4 Stackelberg Strategies with Coalitional Followers

Since the price of anarchy in network games with only two coalitions is already unbounded (Proposition 1), we investigate Stackelberg routing as a means to improve the quality of Nash equilibria. In a Stackelberg network game, we are given, in addition to G, r, ℓ, and C, a parameter α ∈ (0, 1). A (strong) Stackelberg strategy is a flow g feasible with respect to r′ = (α1 r1, . . . , αk rk), for some α1, . . . , αk ∈ [0, 1] such that Σ_{i=1}^{k} αi ri = α Σ_{i=1}^{k} ri. If αi = α for all i, g is called a weak Stackelberg strategy. Thus, both strong and weak strategies route a fraction α of the overall traffic, but a strong strategy can choose how much flow of each commodity is centrally controlled. For single-commodity networks the two definitions coincide. A Stackelberg strategy g is called opt-restricted if ga ≤ oa for all a ∈ A. Given a Stackelberg strategy g, let ℓ̃a(x) = ℓa(ga + x) for all a ∈ A and let r̃ = r − r′. Then a flow h is induced by g if it is a Nash flow for the instance


(G, r̃, ℓ̃, C). The Nash flow h can be characterized by the following variational inequality (see Lemma 1): h is a Nash flow induced by g if, for all flows x feasible with respect to r̃,

    Σ_{i∈[m]} Σ_{a∈A} ( ℓa(ga + ha) + ℓ′a(ga + ha) h_a^i ) (x_a^i − h_a^i) ≥ 0.    (7)

We will mainly be concerned with the cost of the combined induced flow g + h, given by C(g + h) = Σ_{a∈A} (ga + ha) ℓa(ga + ha). In particular, we are interested in bounding the ratio C(g + h)/C(o), called the price of anarchy. It will be convenient to separate the cost C(g + h) into C1(g; h) := Σ_{a∈A} ℓa(ga + ha) ga and C2(h; g) := Σ_{a∈A} ℓa(ga + ha) ha.

4.1 Symmetric Load Balancing Games

For symmetric load balancing games we observe the following. Let g be the Largest-Latency-First (LLF) strategy introduced by Roughgarden [22]. LLF simply calculates o and saturates the arcs with the largest latencies first. Roughgarden showed that the price of anarchy drops to 1/α in this case. Fotakis [11] extended this result to unsplittable flows. Combining the results of Hayrapetyan et al. [13] and Roughgarden [22], it is easy to show that this result also holds in the case of coalitional followers. To see this, we simply observe that for nonatomic followers, the LLF strategy induces a flow of cost at most (1/α) C(o). Hayrapetyan et al. [13] showed that for symmetric load balancing games colluding nonatomic players only decrease the total cost. Thus, the bound of (1/α) C(o) carries over.

4.2 General Networks with a Single Follower

In the following, we will analyze the SCALE strategy defined by g = α o. We consider a single follower that forms a coalition, that is, the follower coordinates its flow so as to minimize its cost. This setting has been previously studied by Korilis et al. [16] for the special case of parallel arcs and M/M/1 latencies. They showed that one can efficiently compute a Stackelberg strategy which reduces the price of anarchy to one. We show in this section a result for general networks, though with a weaker guarantee of (1 + α).

Theorem 6. Let g be the SCALE strategy and let there be a single atomic follower. Then, the price of anarchy of the equilibrium flow g + h is bounded from above by 1 + α.

Proof. We bound the costs C1(g; h) and C2(h; g) separately. For the follower, we know that h̄ defined by h̄a = (1 − α) oa is a feasible flow. Since the follower plays a best response in equilibrium, we have C2(h; α o) ≤ C2((1 − α) o; α o) = Σ_{a∈A} ℓa(oa) (1 − α) oa ≤ (1 − α) C(o). Now we bound the cost of the leader. Let h denote the best response of the follower. We consider the following cases. (i) 0 ≤ ha ≤ (1 − α) oa. In this case it follows that ℓa(α oa + ha) α oa ≤ α ℓa(oa) oa. (ii) ha > (1 − α) oa. Then this implies oa < ha/(1 − α), and we have ℓa(α oa + ha) α oa ≤ (α/(1 − α)) ℓa(α oa + ha) ha.


Using both cases, we have C1(α o; h) ≤ α C(o) + (α/(1 − α)) C2(h; α o) ≤ 2 α C(o), where the last inequality follows since C2(h; α o) ≤ (1 − α) C(o). Summing both inequalities for C1 and C2 proves the claim. ⊓⊔

Based on a single-commodity instance, one can show that no Stackelberg strategy can induce a price of anarchy of one, even if there is only a single atomic follower. We will present the details in the full version of this paper.

4.3 SCALE Strategy with Multiple Followers

In this section, we allow for multiple coalitional (atomic splittable) followers.

Lemma 2. Let g be the SCALE strategy and let α be the fraction that the Stackelberg leader controls. Then, for the remaining followers with flow h, we have:

    Σ_{k∈[m]} Σ_{a∈A} ( ℓa(α oa + ha) + ℓ′a(α oa + ha) h_a^k ) ( (1 − α) o_a^k − h_a^k ) ≥ 0.

The lemma follows directly from (7) by taking xa = (1 − α) oa, which is a feasible flow for the remaining (1 − α) r demand. For every arc a, latency function ℓa, and nonnegative number λ1, we define the following nonnegative value:

    ω1(ℓa; α, λ1) := sup_{oa,ha≥0} [ ℓa(α oa + ha) α oa − λ1 ℓa(oa) oa ] / [ ℓa(α oa + ha) (α oa + ha) ].    (8)

We assume by convention 0/0 = 0. For L, we further define ω1(L; α, λ1) := sup_{ℓa∈L} ω1(ℓa; α, λ1). Similarly,

    ω2(ℓa; α, m, λ2) := sup_{oa,ha≥0} [ (1 − α) ℓa(α oa + ha) oa − λ2 ℓa(oa) oa + za(f, h) ] / [ ℓa(α oa + ha) (α oa + ha) ],

with za(f, h) := ℓ′a(α oa + ha) Σ_{k∈[m]} [ (1 − α) h_a^k o_a^k − (h_a^k)² ]. Note that the value za(f, h) is at most ℓ′a(α oa + ha) (1 − α)² (oa)²/4. We define ω2(L; α, m, λ2) := sup_{ℓa∈L} ω2(ℓa; α, m, λ2).

Proposition 3. Let 0 ≤ α ≤ 1 be the fraction that the Stackelberg leader controls according to the SCALE strategy and let L be the class of allowable latency functions. Then, C1(g; h) ≤ λ1 C(o) + ω1(L; α, λ1) C(g + h) and C2(h; g) ≤ λ2 C(o) + ω2(L; α, m, λ2) C(g + h).

The proof simply uses Lemma 2 and the definitions of ω1 and ω2. We define Λ(L; α) := {(λ1, λ2) ∈ R_+^2 | 1 − (ω1(L; α, λ1) + ω2(L; α, m, λ2)) > 0}. Then, using similar arguments as in Theorem 2, we get the following upper bound on the price of anarchy:

    inf_{(λ1,λ2)∈Λ(L;α)} (λ1 + λ2) / ( 1 − (ω1(L; α, λ1) + ω2(L; α, m, λ2)) ).

4.4 Linear Latency Functions

We will use the above result to prove upper bounds on the price of anarchy for linear latencies. First, we need two technical lemmas (proofs are omitted).

Lemma 3. For λ1 ∈ R_+, ω1(L1; α, λ1) ≤ max{ (α − λ1)/α, α²/(4 λ1) }.

Lemma 4. For λ2 ≥ (1 + α − 2α²)/4, ω2(L1; α, λ2) ≤ max{ (1 − α − λ2)/(1 − α), (1 − α)²/(4 (λ2 + α − 1)) }.

Theorem 7. Let 0 ≤ α ≤ 1 and let L1 be the class of affine linear latency functions. Then, the price of anarchy for the SCALE strategy and atomic followers is at most

    (1 + 2√(1−α)) (1 + √(1−α))² / (4 + 4√(1−α) − 3α)    for α ∈ [0, (1/2)√3], and

    (−3α − 2α√(1−α) − 1 + 2α²) (1 + √(1−α)) α / (−3α − 3α√(1−α) + 1 + √(1−α) + α²)    for α ∈ [(1/2)√3, 1].

The proof follows from the previous lemmas.

References

1. Aland, S., Dumrauf, D., Gairing, M., Monien, B., Schoppmann, F.: Exact price of anarchy for polynomial congestion games. In: Durand, B., Thomas, W. (eds.) STACS 2006. LNCS, vol. 3884, pp. 218–229. Springer, Heidelberg (2006)
2. Awerbuch, B., Azar, Y., Epstein, A.: The price of routing unsplittable flow. In: STOC 2005: Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pp. 57–66. ACM Press, New York (2005)
3. Bonifaci, V., Harks, T., Schäfer, G.: The impact of Stackelberg routing in general networks. Technical report, Technical University Berlin (July 2007)
4. Catoni, S., Pallottino, S.: Traffic equilibrium paradoxes. Transportation Science 25, 240–244 (1991)
5. Christodoulou, G., Koutsoupias, E.: The price of anarchy of finite congestion games. In: STOC 2005: Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pp. 67–73. ACM Press, New York (2005)
6. Cominetti, R., Correa, J.R., Stier-Moses, N.E.: Network games with atomic players. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4051, pp. 525–536. Springer, Heidelberg (2006)
7. Correa, J.R., Schulz, A.S., Stier-Moses, N.E.: Selfish routing in capacitated networks. Math. Oper. Res. 29, 961–976 (2004)
8. Correa, J.R., Schulz, A.S., Stier-Moses, N.E.: On the inefficiency of equilibria in congestion games. In: Jünger, M., Kaibel, V. (eds.) IPCO 2005. LNCS, vol. 3509, pp. 167–181. Springer, Heidelberg (2005)
9. Correa, J.R., Stier-Moses, N.E.: Stackelberg routing in atomic network games. Technical report, Columbia Business School (February 2007)
10. Dafermos, S.C., Sparrow, F.T.: The traffic assignment problem for a general network. J. Res. Natl. Bur. Stand., Sect. B 73, 91–118 (1969)
11. Fotakis, D.A.: Stackelberg strategies for atomic congestion games. In: Arge, L., Hoffmann, M., Welzl, E. (eds.) ESA 2007. LNCS, vol. 4698, pp. 299–310. Springer, Heidelberg (2007)
12. Fotakis, D.A., Kontogiannis, S.C., Spirakis, P.G.: Atomic congestion games among coalitions. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4051, pp. 572–583. Springer, Heidelberg (2006)
13. Hayrapetyan, A., Tardos, É., Wexler, T.: The effect of collusion in congestion games. In: STOC 2006: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pp. 89–98. ACM Press, New York (2006)
14. Kaporis, A.C., Spirakis, P.G.: The price of optimum in Stackelberg games on arbitrary single commodity networks and latency functions. In: Proc. of the 18th Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 19–28. ACM Press, New York (2006)
15. Karakostas, G., Kolliopoulos, S.G.: Stackelberg strategies for selfish routing in general multicommodity networks. Technical report, McMaster University (June 2006)
16. Korilis, Y.A., Lazar, A.A., Orda, A.: Achieving network optima using Stackelberg routing strategies. IEEE/ACM Transactions on Networking 5(1), 161–173 (1997)
17. Koutsoupias, E., Papadimitriou, C.H.: Worst-case equilibria. In: Meinel, C., Tison, S. (eds.) STACS 1999. LNCS, vol. 1563, pp. 404–413. Springer, Heidelberg (1999)
18. Kumar, V.S.A., Marathe, M.V.: Improved results for Stackelberg scheduling strategies. In: Widmayer, P., Triguero, F., Morales, R., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 776–787. Springer, Heidelberg (2002)
19. Perakis, G.: The price of anarchy when costs are non-separable and asymmetric. In: Bienstock, D., Nemhauser, G.L. (eds.) IPCO 2004. LNCS, vol. 3064, pp. 46–58. Springer, Heidelberg (2004)
20. Roughgarden, T.: The price of anarchy is independent of the network topology. Journal of Computer and System Sciences 67, 341–364 (2002)
21. Roughgarden, T.: Selfish routing with atomic players. In: Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 973–974 (2005)
22. Roughgarden, T.: Stackelberg scheduling strategies. SIAM Journal on Computing 33(2), 332–350 (2004)
23. Roughgarden, T.: Selfish Routing and the Price of Anarchy. MIT Press, Cambridge (2005)
24. Roughgarden, T., Tardos, É.: How bad is selfish routing? Journal of the ACM 49(2), 236–259 (2002)
25. Sharma, Y., Williamson, D.: Stackelberg thresholds in network routing games or the value of altruism. In: Proc. of the 8th ACM Conference on Electronic Commerce, pp. 93–102 (2007)
26. Swamy, C.: The effectiveness of Stackelberg strategies and tolls for network congestion games. In: Proc. of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1133–1142 (2007)
27. Wardrop, J.G.: Some theoretical aspects of road traffic research. Proceedings of the Institute of Civil Engineers 1(Part II), 325–378 (1952)

Peak Shaving through Resource Buffering⋆

Amotz Bar-Noy, Matthew P. Johnson, and Ou Liu

The Graduate Center of the City University of New York

Abstract. We introduce and solve a new problem inspired by energy pricing schemes in which a client is billed for peak usage. At each timeslot the system meets an energy demand through a combination of a new request, an unreliable amount of free source energy (e.g. solar or wind power), and previously received energy. The added piece of infrastructure is the battery, which can store surplus energy for future use. More generally, the demands could represent required amounts of energy, water, or any other tenable resource which can be obtained in advance and held until needed. In a feasible solution, each demand must be supplied on time, through a combination of newly requested energy, energy withdrawn from the battery, and free source. The goal is to minimize the maximum request. In the online version of this problem, the algorithm must determine each request without knowledge of future demands or free source availability, with the goal of maximizing the amount by which the peak is reduced. We give efficient optimal algorithms for the offline problem, with and without a bounded battery. We also show how to find the optimal offline battery size, given the requirement that the final battery level equals the initial battery level. Finally, we give efficient Hn -competitive algorithms assuming the peak effective demand is revealed in advance, and provide matching lower bounds.

1 Introduction

There is increasing interest in saving fuel costs by use of renewable energy sources such as wind and solar power. Although such sources are highly desirable, and the power they provide is in a sense free, the typical disadvantage is unreliability: availability depends, e.g., on weather conditions (it is not "dispatchable" on demand). Many companies seek to build efficient systems to gather such energy when available and store it, perhaps in modified form, for future use [16]. On the other hand, power companies charge some high-consumption clients not just for the total amount of power consumed, but also for how quickly they consume it. Within the billing period (typically a month), the client is charged for the amount of energy used (usage charge, in kWh) and for the maximum amount requested over time (peak charge, in kW). If demands are given as a sequence (d1, d2, . . . , dn), then the total bill is of the form c1 Σ_i di + c2 max_i{di} (for some constants c1, c2 > 0), i.e., a weighted sum of the total usage and the maximum usage. (In practice, the discrete timeslots may be 30-minute averages [2].) This means that a client who powers a 100kW piece of machinery for one hour and then uses no more energy for the rest of the

⋆ This work was supported by grants from the National Science Foundation and the New York State Office of Science, Technology and Academic Research.


month would be charged more than a client who uses a total of 100kWh spread evenly over the course of the month. Since the per-unit cost for peak charges may be on the order of 100 times the per-unit cost for total usage [3], this difference can be significant. This suggests a second use for the battery: to store purchased energy for future use. Indeed, at least one start-up company [1] is currently marketing such a battery-based system intended to reduce peak energy charges. In such a system, a battery is placed between the power company and a high-consumption client site, in order to smooth power requests and shave the peak. The client site will charge to the battery when demand is low and discharge when demand is high. Spikes in the demand curve can thus be rendered consistent with a relatively flat level of supplied power. The result is a lower cost for the client and a more manageable request curve for the provider. We may generalize this problem of minimaxing the request to any resource which is tenable in the sense that it may be obtained early and stored until needed. For example, companies frequently face shortages of popular products: “Plentiful supply [of Xboxes] would be possible only if Microsoft made millions of consoles in advance and stored them without releasing them, or if it built vast production lines that only ran for a few weeks–both economically unwise strategies,” a recent news story asserted [11]. A producer could smooth the product production curve by increasing production and warehousing supply until future sales. But when should the producer “charge” and “discharge”? (In some domains, there may also be an unpredictable level of volunteer help.) A third application is the scheduling of jobs composed of generic work-units that may be done in advance. Although the problem is very general, we will use the language of energy and batteries for concreteness. In the online version of our problem, the essential choice faced at each timeslot is whether (and by how much) to invest in the future or to cash in a prior investment. The investment in our setting is a request for more energy than is needed at the time. If the algorithm only asks for the minimum required, then it is vulnerable to spikes in demand; if it asks for much more energy than it needs, then the greater request could itself introduce a new, higher peak. The strictness of the problem lies in the fact that the cost is not cumulative: we want every request to be low. Background. Experimental work applying variations of these online algorithms to settings lacking provable guarantees was recently presented [6]. The present paper focuses on settings that allow guaranteed competitiveness. There is a wide literature on commodity production, storage, warehousing, and supply-chain management (see e.g. [13,17,9,14]). More specifically, there are a number of inventory problems based on the Economic Lot Sizing model [8], in which demand levels for a product vary over a discrete finite time-horizon and are known in advance. A feasible solution in these problems must obtain sufficient supply through production (sometimes construed as ordering) or through other methods, in order to meet each of the demands on time, while observing certain constraints. The nature of solution quality varies by formulation. One such inventory problem is Single-Item Lot-Sizing, in which sufficient supplies must be ordered to satisfy each demand, while minimizing the total cost of ordering charges and holding charges. 
The ordering charge consists of a fixed charge per order plus a charge linear in order size. The holding charge for inventory is per-unit and


per-timeslot. There is a tradeoff between these incentives, since fixed ordering charges encourage large orders while holding charges discourage them. Wagner & Whitin [15] showed in 1958 that this problem can be solved in polynomial time. Under the assumption of non-speculative costs, in which case orders should always be placed as late as possible, the problem can be solved in linear time. Such "speculative" behavior, however, is the very motivation of our problem. There are many lot-sizing variations, including constant-capacity models that limit the amount ordered per timeslot. (See [14] and references therein.) Our offline problem differs in that our objective is minimizing this constant capacity (for orders), subject to a bound on inventory size, and we have no inventory charge.

Another related inventory problem is Capacity and Subcontracting with Inventory (CSI) [5], which incorporates trade-offs between production costs, subcontracting costs, holding costs, and the cost for maximum per-unit-timeslot production capacity. The goal in that problem is to choose a production capacity and a feasible production/subcontracting schedule that together minimize total cost, whereas in our problem choosing a production capacity, subject to storage constraints, is the essential task.

In the minimax work-scheduling problem [12], the goal is to minimize the maximum amount of work done in any timeslot over a finite time-horizon. Our online problem is related to a previously studied special case in which jobs with deadlines are assigned online. In that problem, all work must be done by deadline but cannot be begun until assigned. Subject to these restrictions, the goal is to minimize the maximum work done in any timeslot. While the optimization goal is the same, our online problem differs in two respects. First, each job for us is due immediately when assigned. Second, we are allowed to do work (request and store energy) in advance. One online algorithm for the jobs-by-deadlines problem is the α-policy [12]: at each timeslot, the amount of work done is α times the maximum per-unit-timeslot amount of work that OPT would have done, when running on the partial input received so far. One of our online algorithms adopts a similar strategy.

Contributions. We introduce a novel scheduling problem and solve several versions optimally with efficient combinatorial algorithms. We solve the offline problem for two kinds of batteries: unbounded battery in O(n) time and bounded in O(n²). Separately, we show how to find the optimal offline battery size, for the setting in which the final battery level must equal the initial battery level. This is the smallest battery size that achieves the optimal peak.

The online problem we study is very strict. A meta-strategy in many online problems is to balance expensive periods with cheap ones, so that the overall cost stays low [7]. The difficulty in our problem lies in its non-cumulative nature: we optimize for the max, not for the average. We show that several versions of the online problem have no algorithm with non-trivial competitive ratio (i.e., better than n or Ω(√n)). Given advance knowledge of the peak demand D, however, we give Hn-competitive algorithms for batteries bounded and unbounded. Our fastest algorithm has O(1) per-slot running-time. Hn is the (optimal) competitive ratio for both battery settings.

Examples. Although there is no constant-ratio competitive algorithm for unbounded n, our intended application in fact presumes a fixed time-horizon.
If the billing period is one month, and peak charges are computed as 30-minute averages, then for this setting

150

A. Bar-Noy, M.P. Johnson, and O. Liu

Hn is approximately 7.84. If we assume that the battery can fully recharge at night, so that each day can be treated as a separate time period, then for a 12-hour daytime time-horizon Hn is approximately 3.76.
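For reference, Hn denotes the nth harmonic number, which is trivial to compute directly; the figures above correspond roughly to n = 30 · 48 = 1440 half-hour slots per month and n = 24 slots per 12-hour day (our assumed slot counts; the exact values depend on the precise n used):

    def harmonic(n):
        # n-th harmonic number H_n = 1 + 1/2 + ... + 1/n
        return sum(1.0 / i for i in range(1, n + 1))

    print(harmonic(30 * 48))  # about 7.85 for a 30-day month of 30-minute slots
    print(harmonic(24))       # about 3.78 for a 12-hour day of 30-minute slots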

2 Model and Preliminaries

Definition 1. At each timeslot i, di is the demand, ri is the request, bi is the battery charge level at the start of the timeslot, and fi is the amount of free source available. By d̂i we indicate the effective demand di − fi. We sometimes refer to the sequence over time of one of these value types as a curve, e.g., the demand curve. D is the maximum effective demand max_i{d̂i}, and R is the maximum request max_i{ri}.

The problem instance comprises the demands, the free source curve, battery size B, initial charge b1, and required final charge bn+1 (in the offline case). The problem solution consists of the request curve.

Definition 2. Let overflow be the situation in which ri + fi − di > B − bi, i.e., there is not enough room in the battery for the amount we want to charge. Let underflow be the situation in which di − ri − fi > bi, i.e., there is not enough energy in the battery for the amount we want to discharge. Call an algorithm feasible if underflow never occurs.

The goal of the problem is to minimize R (for competitiveness measures this is construed as maximizing D − R) while maintaining feasibility. In the absence of overflow/underflow, the battery level at timeslot i is simply bi = bi−1 + ri−1 + fi−1 − di−1. It is forbidden for bi to ever fall below 0. That is, the request ri, the free source fi, and the battery level bi must sum to at least the demand di at each timeslot i. Notice that effective demand can be negative, which means that the battery may be charged (capacity allowing), even if the request is 0. We assume D, however, is strictly positive. Otherwise, the problem instance is essentially trivial. We use the following to simplify the problem statement:

Observation 1. If effective demands may be negative, then free source energy need not be explicitly considered.

As such, we set aside the notion of free source, and for the remainder of the paper (simplifying notation) allow demand di to be negative. In the energy application, battery capacity is measured in kWh, while instantaneous request is measured in kW. By discretizing we assume wlog that battery level, demand, and request values are expressed in common units. Peak charges are based linearly on the max request, which is what we optimize for. The battery can have a maximum capacity B or be unbounded. The problem may be online, offline, or in between; we consider the setting in which the peak demand D is revealed in advance, perhaps predicted from historical information.

Threshold algorithms: For a particular snapshot (di, ri, bi), demand di must be supplied through a combination of the request ri and a change in battery bi+1 − bi. This means that there are only three possible modes for each timeslot: request exactly the


demand-free, request more than this and charge the difference, or request less and discharge the difference. We refer to our algorithms as threshold algorithms. Let T1, T2, ..., Tn be a sequence of values. Then the following algorithm uses these as request thresholds:

    for each timeslot i:
        if di < Ti:
            charge min(B − bi, Ti − di)
        else:
            discharge di − Ti

Intuitively, the algorithm amounts to the rule: at each timeslot i, request an amount as near to Ti as the battery constraints will allow. Our offline algorithms are constant threshold algorithms, with a fixed T; our online algorithms compute Ti dynamically for each timeslot i. A constant-threshold algorithm is specifiable by a single number. In the online setting, predicting the exact optimal threshold from historical data suffices to solve the online problem optimally. A small overestimate of the threshold will merely raise the peak cost correspondingly higher. Unfortunately, however, examples can be found in which even a small underestimate eventually depletes the battery before peak demand and thus produces no cost-savings at all.

The offline problem can be solved approximately, within additive error ε, through binary search for the minimum feasible constant threshold value T: simply search the range [0, D] for the smallest value T for which the threshold algorithm has no underflow, in time O(n log(D/ε)). If the optimal peak reduction is D − T, then the algorithm's peak reduction will be at least D − T − ε.

It is straightforward to give a linear programming formulation of the offline problem; it can also be solved by generalized parametric maxflow [4]. Our interest here, however, is in efficient combinatorial optimal algorithms. Indeed, our combinatorial offline algorithms are significantly faster than these general techniques and lead naturally to our competitive online algorithms. Online algorithms based on such general techniques would be intractable for fine-grain timeslots.
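The constant-threshold simulation and this binary search are easy to implement; the following Python sketch is one possible rendering (the initial battery level b1 is taken as a parameter, and demands may be negative, per Observation 1):

    def run_threshold(demands, B, b1, T):
        """Simulate the constant-threshold algorithm; return False on underflow."""
        b = b1
        for d in demands:
            if d < T:
                b += min(B - b, T - d)   # charge, but never past capacity B
            else:
                b -= (d - T)             # discharge the shortfall
                if b < 0:                # underflow: threshold T is infeasible
                    return False
        return True

    def min_feasible_threshold(demands, B, b1, eps):
        """Binary search on [0, D] for the smallest feasible constant threshold.
        Feasibility is monotone in T: raising T only raises battery levels."""
        lo, hi = 0.0, max(demands)
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if run_threshold(demands, B, b1, mid):
                hi = mid
            else:
                lo = mid
        return hi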

3 Offline Problem

We now find optimal algorithms for both battery settings. For unbounded, we assume the battery starts empty; for bounded, we assume the battery starts with amount B. For both, the final battery level is unspecified. We show below that these assumptions are made wlog. The two offline threshold functions, shown in Table 1, use the following definition:

Definition 3. Let μ(j) = (1/j) Σ_{t=1}^{j} dt be the mean demand of the prefix region [1, j], and let μ̂(k) = max_{1≤j≤k} μ(j) be the maximum mean among the prefix regions up to k. Let ρ(i, j) = (−B + Σ_{t=i}^{j} dt) / (j − i + 1) be the density of the region [i, j], and let ρ̂(k) = max_{1≤i≤j≤k} ρ(i, j) be the maximum density among all subregions of [1, k].

Bounded capacity changes the character of the offline problem. It suffices, however, to find the peak request made by the optimal algorithm, Ropt.

Table 1. Threshold functions used for offline algorithm settings

    Alg.   battery      threshold Ti   run-time
    1.a    unbounded    μ̂(n)           O(n)
    1.b    bounded      ρ̂(n)           O(n²)

Clearly Ropt ≥ D − B, since the ideal case is that a width-one peak is reduced by size B. Of course, the peak region might be wider.

Theorem 2. Algorithm 1.a (threshold Ti = μ̂(n), for unbounded battery) and Algorithm 1.b (threshold Ti = ρ̂(n), for bounded battery) are optimal, feasible, and run in times O(n) and O(n²), respectively.

Proof. First, let the battery be unbounded. For any region [1, j], the best we can hope for is that requests for all demands d1, ..., dj can be spread evenly over the first j timeslots. Therefore the optimal threshold cannot be lower than the maximum μ(j), which is Algorithm 1.a's threshold. For feasibility, it suffices to show that after each time j, the battery level is nonnegative. But by time j, the total input to the system will be j · μ̂(n) ≥ j · μ(j) = Σ_{t=1}^{j} dt, which is the total output of the system up to that point. For complexity, just note that μ(j+1) = (j · μ(j) + d_{j+1})/(j+1), so the sequence of μ values, and their max, can be computed in linear time.

Now let the battery be bounded. Over the course of any region [i, j], the best that can be hoped for is that the peak request will be reduced to B/(j − i + 1) less than the average di in the region, i.e., ρ(i, j), so no threshold lower than Algorithm 1.b's is possible. For feasibility, it suffices to show that the battery level will be nonnegative after each time j. Suppose j is the first time underflow occurs. Let i − 1 be the last timeslot prior to j with a full battery. Then there is no underflow or overflow in [i, j), and so for each t ∈ [i, j] the discharge at t is bt − bt+1 = dt − T (possibly negative, meaning a charge), and so the total net discharge over [i, j] is Σ_{t=i}^{j} dt − (j − i + 1) T. Total net discharge greater than B implies T < (−B + Σ_{t=i}^{j} dt)/(j − i + 1), which contradicts the definition of T. The densest region can be found in O(n²), with n separate linear-time passes, each of which finds the densest region beginning in some position i, since ρ(i, j+1) can be computed in constant time from ρ(i, j).
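Both threshold computations are straightforward to implement from the proof; a compact Python sketch:

    def mu_hat(demands):
        """Algorithm 1.a threshold: the maximum prefix mean, in O(n)."""
        best, prefix = float("-inf"), 0.0
        for j, d in enumerate(demands, start=1):
            prefix += d
            best = max(best, prefix / j)
        return best

    def rho_hat(demands, B):
        """Algorithm 1.b threshold: max density (sum - B)/length over all
        regions [i, j], in O(n^2) via n linear passes."""
        n = len(demands)
        best = float("-inf")
        for i in range(n):
            s = 0.0
            for j in range(i, n):
                s += demands[j]
                best = max(best, (s - B) / (j - i + 1))
        return best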

3.1 Battery Level Boundary Conditions

We assumed above that the battery starts empty for the unbounded offline algorithm and starts full for the bounded offline algorithm, with the final battery level left indeterminate for both settings. A more general offline problem may require that b1 = β1 and bn+1 = β2, i.e., the battery begins and ends at some charge levels specified by parameters β1 and β2. We argue here that these requirements are not significant algorithmically, since by pre- and postprocessing, we can reduce to the default cases for both the unbounded and bounded versions.


First, consider the unbounded setting, in which the initial battery level is 0. In order to enforce that b1 = β1 and bn+1 = β2, run the usual optimal algorithm on the sequence (d1 − β1, d2, ..., dn−1, dn + β2). (Recall that negative demands are allowed.) Then bn+1 will be at least β2. To correct for any surplus, manually delete a total of bn+1 − β2 from the final requests. For the bounded setting, the default case is b1 = B and bn+1 indeterminate. To support b1 = β1 ≤ B and bn+1 = β2, modify the demand sequence as above, except with d1 + (B − β1) as the first demand, and then do similar postprocessing as in the unbounded case to deal with any final surplus.

3.2 Optimal Battery Size

A large component of the fixed initial cost of the system will depend on battery capacity. A related problem therefore is finding the optimal battery size B for a given demand curve di, given that the battery starts and ends at the same level β (which can be seen as an amount borrowed and repaid). The optimal peak request possible will be (1/n) Σ_{i=1}^{n} di = μ(n), and the goal is to find the smallest B and β that achieve peak μ(n). (A completely flat request curve is possible given a sufficiently large battery.) This can be done in O(n).

Since we will have b1 = bn+1 and Σ_i ri = Σ_i di, there must be ri = μ for all i, and no overflow. Let d′i = di − μ, i.e., the amount by which di is above average (positive means discharge, negative means charge). Then the minimum possible β = b1 is the maximum prefix sum of the d′ curve (which will be at least 0). It could happen that the battery level will at some point rise above b1, however. (Consider the example d = (0, 0, 1, 0, 0, 0), for which μ = 1/6, d′ = (−1/6, −1/6, 5/6, −1/6, −1/6, −1/6) and β = 1/2.) The needed capacity B can be computed as β plus the maximum prefix sum of the negation of the d′ curve (which will also be at least 0). (In the example, we have B = β + 2/6 = 5/6.) Although B is computed using β, we emphasize that the computed β is the minimum possible regardless of B, and the computed B is the minimum possible regardless of β.
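A direct Python sketch of this O(n) computation, checked against the worked example above:

    def optimal_battery(demands):
        """Return (beta, B): minimal start/end level and minimal capacity
        achieving the flat peak mu(n) = average demand."""
        n = len(demands)
        mu = sum(demands) / n
        dprime = [d - mu for d in demands]       # surplus over the average
        prefix, max_pos, max_neg = 0.0, 0.0, 0.0
        for x in dprime:
            prefix += x
            max_pos = max(max_pos, prefix)       # deepest net discharge so far
            max_neg = max(max_neg, -prefix)      # deepest net charge so far
        beta = max_pos
        B = beta + max_neg
        return beta, B

    print(optimal_battery([0, 0, 1, 0, 0, 0]))   # (0.5, 0.8333...) = (1/2, 5/6)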

4 Online Problem

We consider two natural choices of objective function for the online problem. One option is to compare the peak requests, so that if ALG is the peak request of the online algorithm ALG and OPT is that of the optimal offline algorithm OPT, then a c-competitive algorithm for c ≥ 1 must satisfy ALG/OPT ≤ c for every demand sequence. Although this may be the most natural candidate, we argue that for many settings it is uninteresting. If the peak demand is a factor k larger than the battery capacity, for example, then the trivial online algorithm that does not use the battery would be (k/(k − 1))-competitive. If we drop the assumption of a small battery, then no reasonable competitive factor is possible in general, even if D is revealed in advance.

Proposition 1. With peak demand D revealed in advance and finite time horizon n, no online algorithm for the problem of minimizing peak request can have competitive


ratio 1) better than n if the battery begins with some strictly positive charge, or 2) better than Ω(√n) if the battery begins empty.¹

Proof. For part 1, suppose that the battery begins with initial charge D. Consider the demand curve (D, 0, ..., 0, ?), where the last demand is either D or 0. ALG must discharge D at time 1, since OPT = 0 when dn = 0. Thus ALG's battery is empty at time 2. If ALG requests nothing between times 2 and n − 1, and dn = D, then we have OPT = D/n and ALG = D; if ALG requests some α > 0 during any of those timeslots, and dn = 0, then we have OPT = 0 and ALG = α. This yields a lower bound of n.

For part 2, suppose the battery begins empty, which is a disadvantage for both ALG and OPT. Consider the demand curve (0, 0, ..., 0, D), in which case OPT = D/n. If an algorithm is c-competitive for some c ≥ 1, then in each of the first n − 1 timeslots of this demand curve ALG can charge at most amount cD/n. Now suppose that the only nonzero demand, of value D, arrives possibly earlier, at some timeslot k ∈ [1, n], following k − 1 demands of zero, during which ALG can charge at most (k − 1)cD/n. In this case, we have OPT = D/k and ALG ≥ D − (k − 1)cD/n, which yields the competitive ratio:

    c ≥ (D − (k − 1)cD/n) / (D/k) = (1 − (k − 1)c/n) / (1/k) = k − k²c/n + kc/n.

Solving for c, and then choosing k = √n, we have:

    c ≥ k / (1 + k²/n − k/n) = √n / (1 + 1 − 1/√n) = Ω(√n),

thus establishing the lower bound.

Instead, we compare the peak shaving amount (or savings), i.e., D − R. For a given input, let OPT be the peak shaving of the optimal algorithm, and let ALG be the peak shaving of the online algorithm. Then an online algorithm is c-competitive for c ≥ 1 if c ≥ OPT/ALG for every problem instance. For this setting, we obtain the online algorithms described below. The online algorithms (see Def. 4) are shown in Table 2.

Definition 4. Let Ti^opt be the optimal threshold used by the appropriate optimal algorithm when run on the first i timeslots. At time i during the online computation, let si be the index of the most recent time prior to i with b_{si} = B (or 1 in the unbounded setting).

Table 2. Threshold functions used for online algorithms

    Alg.   battery   threshold Ti                          per-slot time
    2.a    both      D − (D − Ti^opt)/Hn                   O(n)
    2.b    both      D − (D − ρ(si, i))/H_{n−si+1}         O(1)

¹ If the peak is not known, a lower bound of n can be obtained also for the latter case.
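A Python sketch of Algorithm 2.b follows (one possible rendering: ρ(si, i) is maintained from a running demand sum since the last time the battery was full, and the harmonic numbers are precomputed, so each slot takes O(1) time; floating-point tolerance for detecting a full battery is omitted for brevity):

    def online_2b(demands, B, D):
        """One possible rendering of online Algorithm 2.b (bounded battery)."""
        n = len(demands)
        H = [0.0] * (n + 1)                   # precompute H_0..H_n
        for j in range(1, n + 1):
            H[j] = H[j - 1] + 1.0 / j
        b = float(B)                          # battery starts full
        s, run = 1, 0.0                       # s_i and demand sum over [s_i, i]
        requests = []
        for i, d in enumerate(demands, start=1):
            if b >= B:                        # battery full at start of slot i
                s, run = i, 0.0
            run += d
            rho = (run - B) / (i - s + 1)     # rho(s_i, i)
            T = D - (D - rho) / H[n - s + 1]  # threshold from Table 2
            if d < T:
                charge = min(B - b, T - d)    # charge toward the threshold
                b = min(B, b + charge)
                requests.append(d + charge)
            else:
                b -= d - T                    # discharge; feasible by Theorem 4
                requests.append(T)
        return requests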


4.1 Lower Bounds for D − R

Since the competitiveness of the online algorithms holds for arbitrary initial battery level, in obtaining lower bounds on competitiveness, we assume particular initial battery levels.

Proposition 2. With peak demand D unknown and finite time horizon n, there is no online algorithm 1) with any constant competitive ratio for unbounded battery (even with n = 2) or 2) with competitive ratio better than n for bounded battery.

Proof. For part 1, assume b1 = 0, and suppose d1 = 0. Then if ALG requests r1 = 0 and we have d2 = D, then OPT = D/2 and ALG = 0; if ALG requests r1 = a (for some a > 0) and we have d2 = a, then OPT = a/2 and ALG = 0.

For part 2, let b1 = B, and assume ALG is c-competitive. Consider the demand curve (B, 0, 0, . . . , 0). Then OPT clearly discharges B at time 1 (decreasing the peak by B). For ALG to be c-competitive, it must discharge at least B/c in the first slot. Now consider curve (B, 2B, 0, 0, . . . , 0). At time 2, OPT discharges B, decreasing the peak by B, so at time 2, ALG must discharge at least B/c. (At time 1, ALG already had to discharge B/c.) Similarly, at time i for (B, 2B, 3B, ..., iB, ..., nB), ALG must discharge B/c. Total discharging by ALG is then at least Σ_{i=1}^{n} B/c = nB/c. Since we must have nB/c ≤ B, it follows that c ≥ n.

The trivial algorithm that simply discharges amount B/n at each of the n timeslots and never charges is n-competitive (since OPT ≤ B) and so matches the lower bound for the bounded case.

Proposition 3. With peak demand D known in advance and finite time horizon n, no online algorithm can have 1) competitive ratio better than Hn if the battery begins nonempty or 2) competitive ratio better than Hn − 1/2 if the battery begins empty, regardless of whether the battery is bounded or not.

Proof. First assume the battery has initial charge b. (The capacity is either at least b or unbounded.) Suppose ALG is c-competitive. Consider the curve (D, 0, 0, . . . , 0), with D ≥ b. Then OPT clearly discharges b at time 1 (decreasing the peak by b). For ALG to be c-competitive, it must discharge at least b/c. Now consider curve (D, D, 0, 0, . . . , 0). At times 1 and 2, OPT discharges b/2, decreasing the peak by b/2. At time 2, ALG will have to discharge at least (b/2)/c = b/(2c). Similarly, at time i on (D, D, ..., D), ALG must discharge b/(ic). Total discharging by ALG is then at least Σ_{i=1}^{n} b/(ic) = Hn b/c. Since we discharge at each timeslot and never charge, we must have (b/c) Hn ≤ b, and so it follows that c ≥ Hn.

Now let the battery start empty. Assume the battery capacity is at least D or is unbounded. Repeat the argument as above, except now with a zero demand inserted at the start of the demand curves, which gives both ALG and OPT an opportunity to charge. Then for each time i ∈ [2, n], ALG must discharge at least D/(ic) since OPT may discharge (and so save) D/i (in which case it would have initially charged D(1 − 1/i)). ALG is then required to discharge (Hn − 1) D/c during the last n − 1 timeslots. Obviously it could not have charged more than D during the first timeslot. In fact, it must charge less than


this. On the sequence (0, D, 0, 0, ..., 0), OPT charges D/2 at time 1 and discharges it at time 2, saving D/2. To be c-competitive at time 2, ALG must discharge D/(2c) on this sequence, and so reduce the peak by D/(2c). Therefore at time 1, ALG cannot charge more than D − D/(2c). Therefore we must have D − D/(2c) ≥ (Hn − 1)D/c, which implies that c ≥ Hn − 1/2.

4.2 Bounded Battery

Our first online algorithm bases its threshold at time i on a computation of the optimal offline threshold Ti^opt for the demands d1, ..., di. The second bases its threshold at time i on ρ(si, i) (see Defs. 3 and 4). Assuming the algorithms are feasible (i.e., no battery underflow occurs), it is not difficult to show that they are competitive.

Theorem 3. Algorithms 2.a and 2.b are Hn-competitive, if they are feasible, and have per-timeslot running times of O(n) and O(1), respectively.

Proof. First observe that ρ̂(i) ≥ ρ(si, i) implies (D − ρ̂(i))/Hn ≤ (D − ρ(si, i))/H_{n−si+1}, which implies Ti^a ≥ Ti^b for all i. Therefore it suffices to prove competitiveness for Algorithm 2.a. Since Ti^opt is the lowest possible threshold up to time i, D − Ti^opt is the highest possible peak shaving as of time i. Since the algorithm always saves a 1/Hn fraction of this, it is Hn-competitive by construction. Since ρ(si, i+1) can be found in constant time from ρ(si, i), Algorithm 2.b is constant-time per-slot. Similarly, Algorithm 2.a is, recalling the proof of Theorem 2, linear per-slot.

We now show that indeed both algorithms are feasible, using the following lemma, which allows us to limit our attention to a certain family of demand sequences.

Lemma 1. If there is a demand sequence (d1, d2, ..., dn) in which underflow occurs for Algorithm 2.a or 2.b, then there is also a demand sequence (for the same algorithm) in which underflow continues to the end (i.e., bn+1 < 0) and no overflow ever occurs, i.e., one in which the battery level decreases monotonically from full to empty.

Proof. The battery is initialized to full, b1 = B. Over the course of running one of the algorithms on a particular problem instance, the battery level will fall and rise, and may return to full charge multiple times. Suppose underflow were to occur at some time t, i.e. bt < 0, and let s be the most recent time before t when the battery was full. We now construct a demand sequence with the two desired properties, for both algorithms. First, if s > 1, then also considering region [1, s − 1] when defining the threshold Ti for Algorithm 2.a or 2.b can only raise the threshold over what it would be if only region [s, t] were considered. Therefore shifting the region leftward from [s, t] to [1, t′] = [1, t − s + 1] will only lower the thresholds used, which therefore preserves the underflow. Second, since any underflow that occurs in region [1, t′] can be extended to the end of the sequence by setting each demand after time t′ to D, we can assume wlog that t′ = n.

Theorem 4. Algorithms 2.a and 2.b are feasible.


Proof. For a proof by contradiction, we can restrict ourselves by Lemma 1 to regions that begin with a full battery, underflow at the end, and have no overflow in the middle. For such a region, the change in battery-level is well behaved (bi − bi+1 = di − Ti), which allows us to sum the net discharge and prove it is bounded by B. We now show that it is impossible for the battery to fall below 0 at time n, by upperbounding the net discharge over this region.

Let Δbi = bi − bi+1 = di − Ti be the amount of energy the battery discharges at step i. (Δbi will be negative when the battery charges.) We will show that Σ_{1≤i≤n} Δbi ≤ B. Let Δbi^a and Δbi^b refer to the change in battery levels for the corresponding algorithms. Because, as we observed above, Ti^a ≥ Ti^b, we have:

    Δbi^a = di − Ti^a ≤ Δbi^b = di − Ti^b.

Therefore it suffices to prove the feasibility result for Algorithm 2.b, and so we drop the superscripts. Expanding the definition of that algorithm's threshold, we have:

    Δbi = di − Ti = di − ( D − (1/Hn)(D − ρ(1, i)) ) = di − D + (1/Hn) ( D − (1/i)(Σ_{k=1}^{i} dk − B) ).    (1)

By summing Eq. 1 for each i, we obtain:

    Σ_{i=1}^{n} Δbi = Σ_{i=1}^{n} [ di − D + (D − (Σ_{k=1}^{i} dk − B)/i)/Hn ]
                    = Σ_{i=1}^{n} [ di − D + (D − (Σ_{k=1}^{i} dk)/i)/Hn ] + Σ_{i=1}^{n} B/(i Hn)
                    = Σ_{i=1}^{n} [ di − D + (D − (Σ_{k=1}^{i} dk)/i)/Hn ] + B.

Therefore it suffices to show that:

    Σ_{i=1}^{n} [ di − D + (D − (Σ_{k=1}^{i} dk)/i)/Hn ] ≤ 0,    (2)

which is equivalent to:

    Σ_{i=1}^{n} di − nD + (1/Hn) ( nD − Σ_{i=1}^{n} (Σ_{k=1}^{i} dk)/i ) ≤ 0
    iff  Σ_{i=1}^{n} di + nD (1/Hn − 1) − (1/Hn) Σ_{i=1}^{n} Σ_{k=1}^{i} dk/i ≤ 0
    iff  Σ_{i=1}^{n} Hn di − Σ_{i=1}^{n} Σ_{k=1}^{i} dk/i ≤ nD (Hn − 1).    (3)


With the following derivation:

    Σ_{i=1}^{n} Hn di − Σ_{i=1}^{n} Σ_{k=1}^{i} dk/i = Σ_{k=1}^{n} Hn dk − Σ_{k=1}^{n} dk Σ_{i=k}^{n} 1/i = Σ_{k=1}^{n} (Hn − (Hn − H_{k−1})) dk = Σ_{k=1}^{n} H_{k−1} dk,

we can rewrite Eq. 3 (replacing the parenthesized expression) as:

    Σ_{i=1}^{n} H_{i−1} di ≤ n (Hn − 1) D.    (4)

Since di ≤ D and D > 0, it suffices to show that Σ_{i=1}^{n} H_{i−1} ≤ n(Hn − 1). In fact, this holds with equality (see [10], Eq. 2.36).

4.3 Unbounded Battery and Boundary Conditions

Both online algorithms, modified to call appropriate subroutines, also work for the unbounded battery setting. The algorithms are feasible in this setting since ρ(i) ≤ Ti^opt still holds, where Ti^opt is now the optimal threshold for the unbounded battery setting. (Recall that offline Algorithm 1.a can "greedily" run in linear total time.) The algorithm is Hn-competitive by construction, as before.

Corollary 1. Algorithms 2.a and 2.b are feasible in the unbounded battery setting.

Proof. The proof is similar to that of Theorem 4, except that b1 (which may be 0) is plugged in for all occurrences of B (resulting in a modified ρ), and overflow is no longer a concern.

We also note that the correctness and competitiveness of both algorithms (with minor preprocessing) holds for the setting of arbitrary initial battery levels. In the case of Algorithm 2.a, each computation of Ti^opt is computed for the chosen b1, as described in Section 3.1, and the online algorithm again provides a fraction 1/Hn of this savings. Although in the bounded setting "increasing" b1 for the Ti^opt computation by modifying d1 may raise the peak demand in the offline algorithm's input, the D value for the online algorithm is not changed. Indeed, examining the proof of Theorem 4, we note that the upper bound on the di is only used for i > 1 (see Eq. 4).

5 Conclusion

In this paper, we formulated a novel peak-shaving problem, and gave efficient optimal offline algorithms and optimally competitive online algorithms. In work in progress, we are testing our online algorithms on actual client data from Gaia [1]. (See [6] for preliminary results.) There are several interesting extensions to the theoretical problem that we plan to address, such as adapting the algorithms to inefficient batteries that lose a percentage of charge instantly or over time, or batteries with a charging speed limit. We could also optimize for a moving average of demands rather than a single peak. Finally, online algorithms could also be granted additional predictions about the future.

Acknowledgements. We thank Ib Olson of Gaia Power Technologies and Ted Brown of CUNY and its Institute for Software Development and Design for posing this problem to us.


References

1. Gaia Power Technologies, gaiapowertech.com
2. ConEd electricity rates document, www.coned.com/documents/elec/043-059h.pdf
3. Orlando Utilities Commission website, www.ouc.com/account/rates/electric-comm.htm
4. Ahuja, R.K., Magnanti, T.L., Orlin, J.B.: Network Flows. Prentice-Hall, Englewood Cliffs (1993)
5. Atamtürk, A., Hochbaum, D.S.: Capacity acquisition, subcontracting, and lot sizing. Management Science 47(8) (2001)
6. Bar-Noy, A., Feng, Y., Johnson, M.P., Liu, O.: When to reap and when to sow – lowering peak usage with realistic batteries. In: McGeoch, C.C. (ed.) WEA 2008. LNCS, vol. 5038, pp. 194–207. Springer, Heidelberg (2008)
7. Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis. Cambridge University Press, Cambridge (1998)
8. Florian, M., Lenstra, J.K., Rinnooy Kan, A.H.G.: Deterministic production planning: algorithms and complexity. Management Science 26 (1980)
9. Goh, M., Ou, J., Teo, C.-P.: Warehouse sizing to minimize inventory and storage costs. Naval Research Logistics 48(4) (2001)
10. Graham, R.L., Knuth, D.E., Patashnik, O.: Concrete Mathematics: A Foundation for Computer Science, 2nd edn. Addison-Wesley Professional, Reading (1994)
11. Harford, T.: The great Xbox shortage of 2005. Slate (December 15, 2005), www.slate.com/id/2132071/
12. Hunsaker, B., Kleywegt, A.J., Savelsbergh, M.W.P., Tovey, C.A.: Optimal online algorithms for minimax resource scheduling. SIAM J. Discrete Math. (2003)
13. Lee, M.-K., Elsayed, E.A.: Optimization of warehouse storage capacity under a dedicated storage policy. Int. J. Prod. Res. 43(9) (2005)
14. Pochet, Y., Wolsey, L.: Production Planning by Mixed Integer Programming. Springer, Heidelberg (2006)
15. Wagner, H., Whitin, T.: Dynamic version of the economic lot size model. Management Science 5 (1958)
16. Wald, M.L.: Storing sunshine. The New York Times (July 16, 2007), www.nytimes.com/2007/07/16/business/16storage.html
17. Zhou, Y.-W.: A multi-warehouse inventory model for items with time-varying demand and shortages. Computers and Operations Research 30(14) (December 2003)

On Lagrangian Relaxation and Subset Selection Problems

Ariel Kulik and Hadas Shachnai⋆

Computer Science Department, The Technion, Haifa 32000, Israel
{kulik,hadas}@cs.technion.ac.il

Abstract. We prove a general result demonstrating the power of Lagrangian relaxation in solving constrained maximization problems with arbitrary objective functions. This yields a unified approach for solving a wide class of subset selection problems with linear constraints. Given a problem in this class and some small ε ∈ (0, 1), we show that if there exists a ρ-approximation algorithm for the Lagrangian relaxation of the problem, for some ρ ∈ (0, 1), then our technique achieves a ratio of ρ/(ρ+1) − ε to the optimal, and this ratio is tight. The number of calls to the ρ-approximation algorithm, used by our algorithms, is linear in the input size and in log(1/ε) for inputs with cardinality constraint, and polynomial in the input size and in log(1/ε) for inputs with arbitrary linear constraint. Using the technique we obtain approximation algorithms for natural variants of classic subset selection problems, including real-time scheduling, the maximum generalized assignment problem (GAP) and maximum weight independent set.

1 Introduction

Lagrangian relaxation is a fundamental technique in combinatorial optimization. It has been used extensively in the design of approximation algorithms for a variety of problems (see e.g., [12,11,18,16,17,5] and a comprehensive survey in [20]). In this paper we prove a general result demonstrating the power of Lagrangian relaxation in solving constrained maximization problems of the following form. Given a universe U, a weight function w : U → R+, a function f : U → N and an integer L ≥ 1, we want to solve

    Π : max_{s∈U} f(s)    (1)
        subject to: w(s) ≤ L.

We solve Π by finding an efficient solution for the Lagrangian relaxation of Π, given by

    Π(λ) : max_{s∈U} f(s) − λ · w(s),    (2)

for some λ ≥ 0.

⋆ Work supported by the Technion V.P.R. Fund.


A traditional approach for using Lagrangian relaxation in approximation algorithms (see, e.g., [11,16,5]) is based on initially finding two solutions, SOL1, SOL2, for Π(λ1), Π(λ2), respectively, for some λ1, λ2, such that each of the solutions is an approximation for the corresponding Lagrangian relaxation; while one of these solutions is feasible for Π (i.e., satisfies the weight constraint), the other is not. A main challenge is then to find a way to combine SOL1 and SOL2 into a feasible solution which yields an approximation for Π. We prove (in Theorem 1) a general result which allows us to obtain a solution for Π based on one of the solutions only; namely, we show that with appropriate selection of the parameters λ1, λ2 in the Lagrangian relaxation we can obtain solutions SOL1, SOL2 such that one of them can be used to derive an efficient approximation for our original problem Π. The resulting technique leads to fast and simple approximation algorithms for a wide class of subset selection problems with linear constraints.
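To make the setup concrete, the following Python sketch shows the standard way such a pair SOL1, SOL2 is typically obtained: a binary search on λ around the point where the relaxed solution switches from violating to satisfying the budget. The oracle solve_relaxation is a stand-in for a (possibly approximate) algorithm for Π(λ); it, the choice of lam_max, and the stopping tolerance are assumptions of this illustration only (and with an approximation oracle, the monotonicity of w(s) in λ need not be exact):

    def find_solution_pair(solve_relaxation, weight, L, lam_max, tol=1e-6):
        """Binary search on lambda. solve_relaxation(lam) is assumed to return
        a solution s (approximately) maximizing f(s) - lam * w(s); larger lam
        penalizes weight more, so w(s) tends to decrease in lam."""
        lo, hi = 0.0, lam_max          # assume weight(solve_relaxation(lam_max)) <= L
        sol_hi = solve_relaxation(hi)  # feasible endpoint (SOL1)
        sol_lo = solve_relaxation(lo)  # possibly infeasible endpoint (SOL2)
        if weight(sol_lo) <= L:
            return sol_lo, sol_lo      # already feasible at lambda = 0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            s = solve_relaxation(mid)
            if weight(s) <= L:
                hi, sol_hi = mid, s    # feasible side
            else:
                lo, sol_lo = mid, s    # infeasible side
        return sol_hi, sol_lo          # SOL1 (feasible), SOL2 (infeasible)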

1.1 Subset Selection Problems

Subset selection problems form a large class encompassing such NP-hard problems as real-time scheduling, the generalized assignment problem (GAP) and maximum weight independent set, among others. In these problems, a subset of elements satisfying certain properties needs to be selected out of a universe, so as to maximize some objective function. (We give a formal definition in Section 3.) We apply our general technique to obtain efficient approximate solutions for the following natural variants of some classic subset selection problems.

Budgeted Real Time Scheduling (BRS): Given is a set A = {A1, . . . , Am} of activities, where each activity consists of a set of instances; an instance I ∈ Ai is defined by a half-open time interval [s(I), e(I)) in which the instance can be scheduled (s(I) is the start time, and e(I) is the end time), a cost c(I) ∈ N, and a profit p(I) ∈ N. A schedule is feasible if it contains at most one instance of each activity, and for any t ≥ 0, at most one instance is scheduled at time t. The goal is to find a feasible schedule, in which the total cost of all the scheduled instances is bounded by a given budget L ∈ N, and the total profit of the scheduled instances is maximized. Budgeted continuous real-time scheduling (BCRS) is a variant of this problem where each instance is associated with a time window I = [s(I), e(I)) and length ℓ(I). An instance I can be scheduled at any time interval [τ, τ + ℓ(I)), such that s(I) ≤ τ ≤ e(I) − ℓ(I). BRS and BCRS arise in many scenarios in which we need to schedule activities subject to resource constraints, e.g., storage requirements for the outputs of the activities.

Budgeted Generalized Assignment Problem (BGAP): Given is a set of bins (of arbitrary capacities), and a set of items, where each item has a size, a value and a packing cost for each bin. Also given is a budget L ≥ 0. The goal is to pack in the bins a feasible subset of items of maximum value, such that the total packing cost is at most L. BGAP arises in many real-life scenarios (e.g., inventory planning with delivery costs).

Budgeted Maximum Weight Independent Set (BWIS): Given is a budget L and a graph G = (V, E), where each vertex v ∈ V has an associated profit pv (or weight) and an associated cost cv. Choose a subset V′ ⊆ V such that V′ is an independent set (for any e = (v, u) ∈ E, v ∉ V′ or u ∉ V′), the total cost of the vertices in V′, given by Σ_{v∈V′} cv, is bounded by L, and the total profit of V′, Σ_{v∈V′} pv, is maximized. BWIS is a generalization of the classical maximum independent set (IS) and maximum weight independent set (WIS) problems.

1.2 Contribution

We prove (in Theorem 1) a general result demonstrating the power of Lagrangian relaxation in solving constrained maximization problems with arbitrary objective functions. We use this result to develop a unified approach for solving subset selection problems with linear constraint. Specifically, given a problem in this class and some small ε ∈ (0, 1), we show that if there exists a ρ-approximation algorithm for the Lagrangian relaxation of the problem, for some ρ ∈ (0, 1), then our technique achieves a ratio of ρ/(ρ+1) − ε to the optimal. It can be shown that there is a subset selection problem Γ such that if there exists a ρ-approximation algorithm for the Lagrangian relaxation of Γ, for some ρ ∈ (0, 1), then there is an input I for which finding the solutions SOL1 and SOL2 (for the Lagrangian relaxation) and combining the solutions yields at most a ratio of ρ/(ρ+1) to the optimal (see in [19]). This shows the tightness of our bound, within an additive of ε. The number of calls to the ρ-approximation algorithm, used by our algorithms, is linear in the input size and in log(1/ε) for inputs with cardinality constraint (i.e., where w(s) = 1 for all s ∈ U), and polynomial in the input size and in log(1/ε) for inputs with arbitrary linear constraint (i.e., arbitrary weights w(s) ≥ 0).

We apply the technique to obtain efficient approximations for natural variants of some classic subset selection problems. In particular, for the budgeted variants of the real-time scheduling problem we obtain (in Section 4.1) a bound of (1/3 − ε) for BRS and (1/4 − ε) for BCRS. For budgeted GAP we give (in Section 4.2) an approximation ratio of (1 − e⁻¹)/(2 − e⁻¹) − ε. For BWIS we show (in Section 4.3) how an approximation algorithm A for WIS can be used to obtain an approximation algorithm for BWIS with the same asymptotic approximation ratio. More specifically, let A be a polynomial time algorithm that finds in a graph G an independent set whose profit is at least f(n) of the optimal, where (i) f(n) = o(1) and (ii) log(f(n)) is polynomial in the input size.¹ Our technique yields an approximation algorithm which runs in polynomial time and achieves an approximation ratio of g(n) = Θ(f(n)). Moreover, lim_{n→∞} g(n)/f(n) = 1. Since BWIS generalizes WIS, this implies that the two problems are essentially equivalent in terms of hardness of approximation.

Our technique can be applied iteratively to obtain a (ρ/(1 + dρ) − ε)-approximation algorithm for subset selection problems with d linear constraints, when there exists a ρ-approximation algorithm for the non-constrained version of the problem, for some ρ ∈ (0, 1).

¹ These two requirements hold for most approximation algorithms for the problem.


It is important to note that the above results, which apply to maximization problems with linear constraints, do not exploit the result in Theorem 1 in its full generality. We believe that the theorem will find more uses, e.g., in deriving approximation algorithms for subset selection problems with non-linear constraints.

1.3 Related Work

Most of the approximation techniques based on Lagrangian relaxation are tailored to handle specific optimization problems. In solving the k-median problem through a relation to facility location, Jain and Vazirani developed in [16] a general framework for using Lagrangian relaxation to derive approximation algorithms (see also [11]). The framework, which is based on a primal-dual approach, initially finds two approximate solutions SOL1, SOL2 for the Lagrangian relaxations Π(λ1), Π(λ2) of a problem Π, for carefully selected values of λ1, λ2; a convex combination of these solutions yields a (fractional) solution which uses the budget L. This solution is then rounded to obtain an integral solution that is a good approximation for the original problem. Our approximation technique (in Section 2) differs from the technique of [16] in two ways. First, it does not require rounding a fractional solution: in fact, we do not attempt to combine the solutions SOL1, SOL2, but rather examine each separately and compare the two feasible solutions which can be easily derived from SOL1, SOL2, using an efficient transformation of the non-feasible solution, SOL2, to a feasible one. Secondly, the framework of [16] crucially depends on a primal-dual interpretation of the approximation algorithm for the relaxed problem, which is not required here. Könemann et al. considered in [17] a technique for solving general partial cover problems. The technique builds on the framework of [16]; namely, an instance of a problem in this class is solved by initially finding the two solutions SOL1, SOL2 and generating a solution SOL which combines these two solutions. For a comprehensive survey of other work see, e.g., [20].²

There has been some earlier work on using Lagrangian relaxation to solve subset selection problems. The paper [21] considered a subclass of the class of subset selection problems that we study here. Using the framework of [16], the paper claims to obtain an approximation ratio of ρ − ε for any problem in this subclass,³ given a ρ-approximation algorithm for the Lagrangian relaxation of the problem (which satisfies certain properties). Unfortunately, this approximation ratio was shown to be incorrect [22]. Recently, Berger et al. considered in [5] the budgeted matching problem and the budgeted matroid intersection problem. The paper gives the first polynomial time approximation schemes for these problems. The schemes, which are based on Lagrangian relaxation, merge the two obtained solutions using some strong combinatorial properties of the problems.

² For conditions under which Lagrangian relaxation can be used to solve discrete/continuous optimization problems see, e.g., [23].
³ This subclass includes the real-time scheduling problem.


The non-constrained variants of the subset selection problems that we study here are well studied. For known results on real-time scheduling and related problems see, e.g., [2,7,3,4]. Surveys of known results for the generalized assignment problem are given, e.g., in [6,8,9,10]. Numerous approximation algorithms have been proposed and analyzed for the maximum (weight) independent set problem. Alon et al. [1] showed that IS cannot be approximated within factor n^(−ε) in polynomial time, where n = |V| and ε > 0 is some constant, unless P = NP. The best known approximation ratio of Ω(log²n / n) for WIS on general graphs is due to Halldórsson [14]. A survey of other known results for IS and WIS can be found, e.g., in [13,15].

To the best of our knowledge, approximation algorithms for the budgeted variants of the above problems are given here for the first time. Due to space constraints, some of the proofs are omitted. The detailed results appear in [19].

2 Lagrangian Relaxation Technique

Given a universe U, let f : U → N be some objective function, and let w : U → R+ be a non-negative weight function. Consider the problem Π of maximizing f subject to a budget constraint L for w, as given in (1), and the Lagrangian relaxation of Π, as given in (2). We assume that the value of an optimal solution s* for Π satisfies f(s*) ≥ 1. For some ε′ > 0, suppose that

λ2 ≤ λ1 ≤ λ2 + ε′.     (3)

The heart of our approximation technique is the next result.

Theorem 1. For any ε > 0 and λ1, λ2 that satisfy (3) with ε′ = ε/L, let s1 = SOL1 and s2 = SOL2 be ρ-approximate solutions for Π(λ1), Π(λ2), such that w(s1) ≤ L ≤ w(s2). Then for any α ∈ [1 − ρ, 1], at least one of the following holds:
1. f(s1) ≥ αρ f(s*);
2. f(s2) ≥ (1 − α − ε) f(s*) · w(s2)/L.

Proof. Let Li = w(si), i = 1, 2, and L* = w(s*). From (2) we have that

f(si) − ρf(s*) ≥ λi(Li − ρL*).     (4)

Assume that, for some α ∈ [1 − ρ, 1], it holds that f(s1) < αρ f(s*). Then

(α − 1)ρ f(s*) > f(s1) − ρf(s*) ≥ λ1(L1 − ρL*) ≥ −ρλ1 L* ≥ −ρλ1 L.

The second inequality follows from (4), the third inequality from the fact that λ1 L1 ≥ 0, and the last inequality holds due to the fact that L* ≤ L. Using (3), we have

(1 − α)f(s*)/L < λ1 ≤ λ2 + ε′.     (5)


Since ε′ = ε/L, we get that

f(s2) ≥ λ2(L2 − L*) + ρf(s*) > ((1 − α)f(s*)/L − ε′)(L2 − L) + ρf(s*)
      ≥ (1 − α)(L2/L) f(s*) − ε′L2 ≥ (1 − α − ε′L)(L2/L) f(s*) = (1 − α − ε)(L2/L) f(s*).

The first inequality follows from (4), by taking i = 2, and the second inequality is due to (5) and the fact that L* ≤ L. The third inequality holds since ρ ≥ 1 − α, and the last inequality follows from the fact that f(s*) ≥ 1. □

Theorem 1 asserts that at least one of the solutions s1, s2 is good for solving our original problem, Π. If s1 is a good solution then we have an αρ-approximation for Π; otherwise we need to find a way to convert s2 to a solution s′ such that w(s′) ≤ L and f(s′) is a good approximation for Π. Such conversions are presented in Section 3 for a class of subset selection problems with linear constraints. Next, we show how to find two solutions which satisfy the conditions of Theorem 1.

2.1 Finding the Solutions s1, s2

Suppose that we have an algorithm A which finds a ρ-approximation for Π(λ), for any λ ≥ 0. Given an input I for Π, denote the solution which A returns for Π(λ) by A(λ), and assume that it is sufficient to consider Π(λ) for λ ∈ (0, λmax), where λmax = λmax(I) and w(A(λmax)) ≤ L. Note that if w(A(0)) ≤ L then A(0) is a ρ-approximation for Π; otherwise, there exist λ1, λ2 ∈ (0, λmax) such that λ1, λ2 and s1 = A(λ1), s2 = A(λ2) satisfy (3) and the conditions of Theorem 1, and λ1, λ2 can be easily found using binary search. Each iteration of the binary search requires a single execution of A and reduces the size of the search range by half. Therefore, after R = ⌈log(λmax) + log(L) + log(1/ε)⌉ iterations, we have two solutions which satisfy the conditions of the theorem.

Theorem 2. Given an algorithm A which outputs a ρ-approximation for Π(λ), and λmax such that w(A(λmax)) ≤ L, a ρ-approximate solution or two solutions s1, s2 which satisfy the conditions of Theorem 1 can be found by using binary search. This requires ⌈log(λmax) + log(L) + log(1/ε)⌉ executions of A.

We note that when A is a randomized approximation algorithm whose expected performance ratio is ρ, a simple binary search may not output solutions that satisfy the conditions of Theorem 1. In this case, we repeat the executions of A for the same input and select the solution of maximal value. For some preselected values β > 0 and δ > 0, we can guarantee that the probability that any of the used solutions is not a (ρ − β)-approximation is bounded by δ. Thus, with an appropriate selection of the values of β and δ, we get a result similar to the result in Theorem 1. We discuss this case in detail in the full version of the paper.
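To make the search concrete, here is a minimal sketch of the binary search behind Theorem 2; the black-box ρ-approximation A(λ) for Π(λ) and the weight oracle w are assumptions for illustration, not notation from the paper.

```python
# A minimal sketch of the binary search of Theorem 2, assuming a black-box
# rho-approximation A(lmbda) for Pi(lambda) and a weight oracle w.

def find_solutions(A, w, L, lambda_max, eps):
    """Return (s, None) for a rho-approximate feasible solution, or a pair
    (s1, s2) with w(s1) <= L <= w(s2) whose multipliers differ by <= eps/L."""
    if w(A(0.0)) <= L:                 # A(0) already respects the budget
        return A(0.0), None
    lo, hi = 0.0, lambda_max           # invariant: w(A(lo)) > L >= w(A(hi))
    while hi - lo > eps / L:           # about log(lambda_max * L / eps) rounds
        mid = (lo + hi) / 2.0
        if w(A(mid)) > L:
            lo = mid
        else:
            hi = mid
    return A(hi), A(lo)                # s1 = A(lambda_1), s2 = A(lambda_2)
```

Here λ1 = hi and λ2 = lo satisfy (3) with ε′ = ε/L, and w(s1) ≤ L ≤ w(s2), matching the requirements of Theorem 1.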

3 Approximation Algorithms for Subset Selection Problems

In this section we develop an approximation technique for subset selection problems. We start with some definitions and notation. Given a universe U, let X ⊆ 2^U be a domain, and f : X → N a set function. For a subset S ⊆ U, let w(S) = ∑_{s∈S} ws, where ws ≥ 0 is the weight of the element s ∈ U.

Definition 1. The problem

Γ : max_{S∈X} f(S)  subject to:  w(S) ≤ L     (6)

is a subset selection problem with a linear constraint if X is a lower ideal, namely, if S ∈ X and S′ ⊆ S then S′ ∈ X, and f is a linear non-decreasing set function⁴ with f(∅) = 0.

Note that subset selection problems with linear constraints are in the form of (1), and the Lagrangian relaxation of any problem Γ in this class is Γ(λ) = max_{S∈X} f(S) − λw(S); therefore, the results of Section 2 hold. Thus, for example, BGAP can be formulated as the following subset selection problem with a linear constraint. The universe U consists of all pairs (i, j) of item 1 ≤ i ≤ n and bin 1 ≤ j ≤ m. The domain X consists of all the subsets S of U such that each item appears at most once (i.e., for any item 1 ≤ i ≤ n, |{(i′, j′) ∈ S : i′ = i}| ≤ 1), and the collection of items that appear with a bin j, i.e., {i : (i, j) ∈ S}, defines a feasible assignment of items to bin j. It is easy to see that X is indeed a lower ideal. The function f is f(S) = ∑_{(i,j)∈S} fi,j, where fi,j is the profit from the assignment of item i to bin j, and w(S) = ∑_{(i,j)∈S} wi,j, where wi,j is the packing cost of item i when assigned to bin j. The Lagrangian relaxation of BGAP is then

max_{S∈X} f(S) − λw(S) = max_{S∈X} ∑_{(i,j)∈S} (fi,j − λwi,j).

The latter can be interpreted as the following instance of GAP: if fi,j − λwi,j ≥ 0, then set fi,j − λwi,j to be the profit from assigning item i to bin j; otherwise, make item i infeasible for bin j (set the size of item i to be greater than the capacity of bin j).

We now show how the Lagrangian relaxation technique described in Section 2 can be applied to subset selection problems. Given a problem Γ in this class, suppose that A is a ρ-approximation algorithm for Γ(λ), for some ρ ∈ (0, 1]. To find λ1, λ2 and SOL1, SOL2, the binary search of Section 2.1 can be applied over the range [0, pmax], where

pmax = max_{s∈U} f({s})     (7)

is the maximum profit of any element in the universe U.
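As an illustration of this reduction, the following sketch builds the GAP instance for a given multiplier λ; the dictionary-based encoding (profits f, packing costs w, sizes and capacities keyed by item/bin pairs) is an assumption for illustration.

```python
# A sketch of the GAP instance used for the Lagrangian relaxation of BGAP
# with multiplier lam; the dict-based encoding is an illustrative assumption.

def gap_instance(f, w, size, capacity, lam):
    """f[(i, j)]: profit, w[(i, j)]: packing cost, size[(i, j)]: item size."""
    profit, new_size = {}, dict(size)
    for (i, j), fij in f.items():
        shifted = fij - lam * w[(i, j)]
        if shifted >= 0:
            profit[(i, j)] = shifted
        else:
            profit[(i, j)] = 0
            new_size[(i, j)] = capacity[j] + 1   # item i infeasible for bin j
    return profit, new_size
```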

⁴ For simplicity, we assume throughout the discussion that f(·) is a linear function; however, all of the results in this section hold also for the more general case where f : 2^S → N is a non-decreasing submodular set function, for any S ∈ X.


To obtain the solutions S1, S2 which correspond to λ1, λ2, the number of calls to A in the binary search is bounded by O(log(L·pmax/ε)). Given the solutions S1, S2 satisfying the conditions of Theorem 1, consider the case where, for some α ∈ [1 − ρ, 1], property 2 (in the theorem) holds. Denote the value of an optimal solution for Γ by O. Given a solution S2 such that

f(S2) ≥ (1 − α − ε) · (w(S2)/L) · O,     (8)

our goal is to find a solution S′ such that w(S′) ≤ L (i.e., S′ is valid for Γ) and f(S′) is an approximation for O. We show below how S′ can be obtained from S2. We first consider (in Section 3.1) instances with unit weights. We then describe (in Section 3.2) a scheme for general weights. Finally, we give (in Section 3.3) a scheme which yields an improved approximation ratio for general instances, by applying enumeration.

3.1 Unit Weights

Consider first the special case where ws = 1 for any s ∈ U (i.e., w(S) = |S|; we refer to (6) in this case as a cardinality constraint). Suppose that we have solutions S1, S2 which satisfy the conditions of Theorem 1; then, by taking α = 1/(1+ρ), we get that either f(S1) ≥ (ρ/(1+ρ) − ε)O, or f(S2) ≥ (ρ/(1+ρ) − ε)(w(S2)/L)O. If the former holds then we have a (ρ/(1+ρ) − ε)-approximation for the optimum; otherwise, f(S2) ≥ (ρ/(1+ρ) − ε)(w(S2)/L)O. To obtain S′, select the L elements in S2 with the highest profits.⁵ It follows from (8) that f(S′) ≥ (1 − α − ε) · O = (ρ/(1+ρ) − ε)O. Combining the above with the result of Theorem 2, we get the following.
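In a sketch, the conversion is a one-liner; the callable profit, returning the singleton profit f({s}) of an element, is an assumption for illustration.

```python
# Keep only the L most profitable elements of S2 (unit weights), as above.
def truncate_to_budget(S2, profit, L):
    return set(sorted(S2, key=profit, reverse=True)[:int(L)])
```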

Theorem 3. Given a subset selection problem Γ with unit weights, an algorithm A which yields a ρ-approximation for Γ(λ), and λmax such that w(A(λmax)) ≤ L, a (ρ/(ρ+1) − ε)-approximation for Γ can be derived by using A and selecting among S1, S′ the set with the highest profit. The number of calls to A is O(log(L·pmax/ε)), where pmax is given in (7).

3.2 Arbitrary Weights

For general element weights, we may assume w.l.o.g. that ws ≤ L for any s ∈ U. We partition S2 into a collection of up to 2w(S2)/L disjoint sets T1, T2, . . . such that w(Ti) ≤ L for all i ≥ 1. A simple way to obtain such sets is by adding the elements of S2 in arbitrary order to Ti as long as we do not exceed the budget L. A slightly more efficient implementation has a running time that is linear in the size of S2 (details omitted).⁵

⁵ When f is a submodular function, iteratively select the element s ∈ S2 which maximizes f(T ∪ {s}), where T is the subset of elements chosen in the previous iterations.
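A sketch of the simple partition just described, under the stated assumption that every single weight is at most L (all names are illustrative):

```python
# First-fit partition of S2 into disjoint groups T_1, T_2, ... with
# w(T_i) <= L, assuming each individual weight w[s] <= L.

def partition_by_budget(S2, w, L):
    groups, current, load = [], [], 0
    for s in S2:                        # arbitrary order suffices
        if load + w[s] > L:             # s would overflow: close the group
            groups.append(current)
            current, load = [], 0
        current.append(s)
        load += w[s]
    if current:
        groups.append(current)
    return groups   # any two consecutive groups together weigh more than L,
                    # so there are at most about 2*w(S2)/L groups
```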


Lemma 1. Suppose that S2 satisfies (8) for some α ∈ [1 − ρ, 1]. Then there exists i ≥ 1 such that f(Ti) ≥ ((1 − α − ε)/2) · O.

Proof. Clearly, f(T1) + ... + f(TN) = f(S2), where N ≤ 2w(S2)/L is the number of disjoint sets. By the pigeonhole principle there exists 1 ≤ i ≤ N such that

f(Ti) ≥ f(S2)/N ≥ L·f(S2)/(2w(S2)) ≥ ((1 − α − ε)/2) · O.  □

Assuming we have solutions S1, S2 which satisfy the conditions of Theorem 1, by taking α = 1/(1+2ρ) we get that either f(S1) ≥ (ρ/(1+2ρ) − ε)O, or f(S2) ≥ (2ρ/(1+2ρ) − ε)(w(S2)/L)O; in the latter case S2 can be converted to S′ (by setting S′ = Ti for the Ti which maximizes f(Ti)) such that f(S′) ≥ (ρ/(1+2ρ) − ε)O. That is, we get a (ρ/(1+2ρ) − ε)-approximation for Γ. Combining the above with the result of Theorem 2, we get the following.

Theorem 4. Given a subset selection problem with a linear constraint Γ, an algorithm A which yields a ρ-approximation for Γ(λ), and λmax such that w(A(λmax)) ≤ L, a (ρ/(2ρ+1) − ε)-approximation for Γ can be obtained using A. The number of calls to A is O(log(L·pmax/ε)), where pmax is given in (7).

3.3 Improving the Bounds via Enumeration

In this section we present an algorithm which uses enumeration to obtain a new problem, to which we apply our Lagrangian relaxation technique. This enables us to improve the approximation ratio of Section 3.2 to match the bound obtained for unit weight inputs (in Section 3.1).⁶ For some k ≥ 1, our algorithm initially 'guesses' a subset T of (at most) k elements with the highest profits in some optimal solution. Then, an approximate solution is obtained by adding elements in U whose values are bounded by f(T)/|T|.

Given a subset T ⊆ U, we define ΓT, which can be viewed as the sub-problem that 'remains' from Γ once we select T to be the initial solution. Thus, we refer to ΓT below as the residual problem with respect to T. Let

XT = { S | S ∩ T = ∅, S ∪ T ∈ X, and ∀s ∈ S : f({s}) ≤ f(T)/|T| }.     (9)

Consider the residual problem ΓT and its Lagrangian relaxation ΓT(λ):

ΓT:     maximize f(S)  subject to:  S ∈ XT,  w(S) ≤ L − w(T)
ΓT(λ):  maximize f(S) − λw(S)  subject to:  S ∈ XT

In all of our examples, the residual problem ΓT is a smaller instance of the problem Γ, and therefore its Lagrangian relaxation is an instance of the Lagrangian relaxation of the original problem. Assume that we have an approximation algorithm A which, given λ and a pre-selected set T ⊆ U of at most k elements, for some constant k > 1, returns a ρ-approximation for ΓT(λ) in polynomial time (if there is a feasible solution for ΓT). Consider the following algorithm, in which we take k = 2:

1. For any T ⊆ U such that |T| ≤ k, find solutions S1, S2 (for ΓT(λ1), ΓT(λ2), respectively) satisfying the conditions of Theorem 1 with respect to the problem ΓT. Evaluate the following solutions:
   (a) T ∪ S1.
   (b) Let S′ = ∅, and add elements to S′ in the following manner: find an element x ∈ S2\S′ which maximizes the ratio f({x})/wx. If w(S′ ∪ {x}) ≤ L − w(T), then add x to S′ and repeat the process; otherwise return S′ ∪ T as a solution.
2. Return the best of the solutions found in Step 1.

⁶ The running time when applying enumeration depends on the size of the universe (which may be super-polynomial in the input size; we elaborate on that in Section 4.1).
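The following sketch transcribes the enumeration algorithm for k = 2; the helper solve_pair(T), which returns the Theorem 1 solutions S1, S2 for the residual problem ΓT, and the set oracles f and w are assumptions for illustration.

```python
# A high-level sketch of the enumeration algorithm with k = 2. solve_pair(T)
# (the Theorem 1 solutions for Gamma_T) and the set oracles f, w are assumed.
from itertools import combinations

def enumerate_and_select(U, f, w, L, solve_pair):
    best, best_val = set(), 0
    for k in (1, 2):
        for T in map(set, combinations(U, k)):
            if w(T) > L:
                continue
            S1, S2 = solve_pair(T)
            candidates = [T | S1]
            # Step 1(b): repeatedly take the remaining element of S2 with the
            # largest density f({x})/w_x; stop at the first one that overflows.
            S_prime, load = set(), 0.0
            for x in sorted(S2, key=lambda x: f({x}) / w({x}), reverse=True):
                if load + w({x}) > L - w(T):
                    break
                S_prime.add(x)
                load += w({x})
            candidates.append(T | S_prime)
            for C in candidates:
                if f(C) > best_val:
                    best, best_val = C, f(C)
    return best
```

Sorting by density and stopping at the first rejected element is equivalent to the repeated-maximum description of Step 1(b) above.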

Let O = f(S*) be the value of an optimal solution for Γ, where S* = {x1, . . . , xh}. Order the elements in S* such that f({x1}) ≥ f({x2}) ≥ . . . ≥ f({xh}).

Lemma 2. Let Ti = {x1, . . . , xi}, for some 1 < i ≤ h. Then for any j > i, f({xj}) ≤ f(Ti)/i.

In analyzing our algorithm, we consider the iteration in which T = Tk. Then S*\Tk is an optimal solution for ΓTk (since S*\Tk ∈ XTk, as in (9)); thus, the optimal value for ΓTk is at least f(S*\Tk) = f(S*) − f(Tk).

Lemma 3. Let S′ be the set generated from S2 by the process in Step 1(b) of the algorithm. Then f(S′) ≥ f(S2)·(L − w(T))/w(S2) − f(T)/|T|.

Proof. Note that the process cannot terminate with S′ = S2, since w(S2) > L − w(T). Consider the first element x that maximized the ratio f({x})/wx but was not added to S′, since w(S′ ∪ {x}) > L − w(T). By the linearity of f, it is clear that

(i) f(S′ ∪ {x})/w(S′ ∪ {x}) ≥ f({x})/wx, and
(ii) for any y ∈ S2\(S′ ∪ {x}), f({y})/wy ≤ f({x})/wx.

Thus, we get that for any y ∈ S2\(S′ ∪ {x}), f({y}) ≤ wy · f(S′ ∪ {x})/w(S′ ∪ {x}), and

f(S2) = f(S′ ∪ {x}) + ∑_{y∈S2\(S′∪{x})} f({y}) ≤ f(S′ ∪ {x}) · w(S2)/w(S′ ∪ {x}).

By the linearity of f, we get that f(S′) + f({x}) = f(S′ ∪ {x}) ≥ f(S2)·(L − w(T))/w(S2). Since x ∈ S2 ∈ XT, we get f({x}) ≤ f(T)/|T|. Hence f(S′) ≥ f(S2)·(L − w(T))/w(S2) − f(T)/|T|. □


Consider the iteration of Step 1 in the above algorithm in which T = T2 (assuming there are at least two elements in the optimal solution; else T = T1), and the values of the solutions found in this iteration. By Theorem 1, taking α = 1/(1+ρ), one of the following holds:

1. f(S1) ≥ (ρ/(1+ρ))·[f(S*) − f(T)];
2. f(S2) ≥ (1 − α − ε)·[f(S*) − f(T)]·w(S2)/(L − w(T)).

If 1. holds then we get f(S1 ∪ T) ≥ f(T) + (ρ/(1+ρ) − ε)[f(S*) − f(T)] ≥ (ρ/(1+ρ) − ε)f(S*); else we have that f(S2) ≥ (ρ/(1+ρ) − ε)[f(S*) − f(T)]·w(S2)/(L − w(T)), and by Lemma 3,

f(S′) ≥ f(S2)·(L − w(T))/w(S2) − f(T)/|T| ≥ (ρ/(1+ρ) − ε)[f(S*) − f(T)] − f(T)/|T|.

Hence, we have

f(S′ ∪ T) = f(S′) + f(T) ≥ f(T) + (ρ/(1+ρ) − ε)[f(S*) − f(T)] − f(T)/|T|
          = (1 − 1/k)f(T) + (ρ/(1+ρ) − ε)[f(S*) − f(T)] ≥ (ρ/(1+ρ) − ε)f(S*).

The last inequality follows from choosing k = 2, and the fact that 1/2 ≥ ρ/(1+ρ) − ε.

Theorem 5. The algorithm outputs a (ρ/(1+ρ) − ε)-approximation for Γ. The number of calls to algorithm A is O((log(pmax) + log(L) + log(1/ε))·n²), where n = |U| is the size of the universe of elements for the problem Γ.

Submodular objective functions: In the more general case, where f is a submodular function, we need to redefine the objective function for ΓT to be fT(S′) = f(S′ ∪ T) − f(T), and the condition f({s}) ≤ f(T)/|T| should be modified to fT({s}) ≤ f(T)/|T|. In Step 1(b) of the algorithm, the element x to be chosen in each stage is the x ∈ S2\S′ which maximizes the ratio (fT(S′ ∪ {x}) − fT(S′))/wx.

4 Applications

In this section we show how the technique of Section 3 can be applied to obtain approximation algorithms for several classic subset selection problems with a linear constraint.

4.1 Budgeted Real Time Scheduling

The budgeted real-time scheduling problem can be interpreted as the following subset selection problem with a linear constraint. The universe U consists of all instances associated with the activities {A1, . . . , Am}. The domain X is the set of all feasible schedules; for any S ∈ X, f(S) is the profit from the instances in S, and w(S) is the total cost of the instances in S (note that each instance is associated with a specific time interval). The Lagrangian relaxation of this problem is the classic interval scheduling problem discussed in [2]: the paper gives a 1/2-approximation algorithm whose running time is O(n log n), where n is the total number of instances in the input. Clearly, pmax (as defined in (7)) can be used as λmax. By Theorem 2, we can find two solutions S1, S2 which satisfy the conditions of Theorem 1 in O(n log(n) log(L·pmax/ε)) steps. Then, a straightforward implementation of the technique of Section 3.1 yields a (1/3 − ε)-approximation algorithm whose running time is O(n log(n) log(L·pmax/ε)) for inputs where all instances have unit cost. The same approximation ratio can be obtained in O(n³ · log(n) log(L·pmax/ε)) steps when the instances may have arbitrary costs, using Theorem 5 (note that the Lagrangian relaxation of the residual problem with respect to a subset of elements T is also an instance of the interval scheduling problem).

Consider now the continuous case, where each instance of an activity Ai, 1 ≤ i ≤ m, is given by a time window. One way to interpret BCRS as a subset selection problem is by setting the universe to be all pairs of an instance and a time interval in which it can be scheduled. The size of the resulting universe is unbounded; a more careful consideration of all possible start times of any instance yields a universe of exponential size. The Lagrangian relaxation of this problem is known as single machine scheduling with release times and deadlines, for which a (1/2 − ε)-approximation algorithm is given in [2]. Thus, we can apply our technique for finding two solutions S1, S2 for which Theorem 1 holds. However, the running time of the algorithm in Theorem 5 may be exponential in the input size (since the number of enumeration steps depends on the size of the universe, which may be exponentially large). Thus, we derive an approximation algorithm using the technique of Section 3.2. We summarize in the next result.

Theorem 6. There is a polynomial time algorithm that yields an approximation ratio of (1/3 − ε) for BRS and a ratio of (1/4 − ε) for BCRS.

Our results also hold for other budgeted variants of problems that appear in [2].

4.2 The Budgeted Generalized Assignment Problem

Consider the interpretation of BGAP as a subset selection problem, as given in Section 3. The Lagrangian relaxation of BGAP (and also of the deduced residual problems) is an instance of GAP, for which the paper [10] gives a (1 − e^(−1) − ε)-approximation algorithm. We can take in Theorem 2 λmax = pmax, where pmax is defined by (7), and the two solutions S1, S2 that satisfy the conditions of Theorem 1 can be found in polynomial time. Applying the techniques of Sections 3.1 and 3.3, we get the next result.

Theorem 7. There is a polynomial time algorithm that yields an approximation ratio of (1 − e^(−1))/(2 − e^(−1)) − ε ≈ 0.387 − ε for BGAP.


A slightly better approximation ratio can be obtained by using an algorithm of [9]. More generally, our result holds also for any constrained variant of the separable assignment problem (SAP) that can be solved using a technique of [10].

4.3 Budgeted Maximum Weight Independent Set

BWIS can be interpreted as the following subset selection problem with a linear constraint. The universe U is the set of all vertices in the graph, i.e., U = V; the domain X consists of all subsets V′ of V such that V′ is an independent set in the given graph G. The objective function f is f(V′) = ∑_{v∈V′} pv, the weight function is w(V′) = ∑_{v∈V′} cv, and the weight bound is L. The Lagrangian relaxation of BWIS is an instance of the classic WIS problem (vertices with negative profits in the relaxation are deleted, along with their edges).

Let |V| = n. Then by Theorem 5, given an approximation algorithm A for WIS with approximation ratio f(n), the technique of Section 3.3 yields an approximation algorithm AI for BWIS whose approximation ratio is f(n)/(1 + f(n)) − ε. The running time of AI is polynomial in the input size and in log(1/ε). If log(1/f(n)) is polynomial in the input size, take ε = f(n)/n; then the value log(1/ε) = log(1/f(n)) + log(n) is polynomial in the input size, and thus the algorithm remains polynomial. For this selection of ε, we have the following result.

Theorem 8. Given an f(n)-approximation algorithm for WIS, where f(n) = o(1), for any L ≥ 1 there exists a polynomial time algorithm that outputs a g(n)-approximation for any instance of BWIS with the budget L, where g(n) = Θ(f(n)) and lim_{n→∞} g(n)/f(n) = 1.

This means that the approximation ratios of A and AI are asymptotically the same. Thus, for example, using the algorithm of [14], our technique achieves an Ω(log²n / n)-approximation for BWIS. Note that the above result holds for any constant number of linear constraints added to an input for WIS, by repeatedly applying our Lagrangian relaxation technique.

References

1. Alon, N., Feige, U., Wigderson, A., Zuckerman, D.: Derandomized graph products. Computational Complexity 5(1), 60–75 (1995)
2. Bar-Noy, A., Bar-Yehuda, R., Freund, A., Naor, J., Schieber, B.: A unified approach to approximating resource allocation and scheduling. Journal of the ACM, 1–23 (2000)
3. Bar-Noy, A., Guha, S., Naor, J., Schieber, B.: Approximating the throughput of multiple machines in real-time scheduling. SIAM J. Comput. 31(2), 331–352 (2001)
4. Bar-Yehuda, R., Beder, M., Cohen, Y., Rawitz, D.: Resource allocation in bounded degree trees. Algorithmica (to appear)


5. Berger, A., Bonifaci, V., Grandoni, F., Schäfer, G.: Budgeted matching and budgeted matroid intersection via the gasoline puzzle. In: Lodi, A., Panconesi, A., Rinaldi, G. (eds.) IPCO 2008. LNCS, vol. 5035, pp. 273–287. Springer, Heidelberg (2008)
6. Chekuri, C., Khanna, S.: A PTAS for the multiple knapsack problem. SIAM J. Comput. 35(3), 713–728 (2006)
7. Chuzhoy, J., Ostrovsky, R., Rabani, Y.: Approximation algorithms for the job interval selection problem and related scheduling problems. Math. Oper. Res. 31(4), 730–738 (2006)
8. Cohen, R., Katzir, L., Raz, D.: An efficient approximation for the generalized assignment problem. Inf. Process. Lett. 100(4), 162–166 (2006)
9. Feige, U., Vondrák, J.: Approximation algorithms for allocation problems: Improving the factor of 1 − 1/e. In: FOCS, pp. 667–676 (2006)
10. Fleischer, L., Goemans, M.X., Mirrokni, V.S., Sviridenko, M.: Tight approximation algorithms for maximum general assignment problems. In: SODA, pp. 611–620 (2006)
11. Garg, N.: A 3-approximation for the minimum tree spanning k vertices. In: FOCS (1996)
12. Goemans, M.X., Ravi, R.: The constrained minimum spanning tree problem. In: Karlsson, R., Lingas, A. (eds.) SWAT 1996. LNCS, vol. 1097. Springer, Heidelberg (1996)
13. Halldórsson, M.M.: Approximations of independent sets in graphs. In: Jansen, K., Rolim, J.D.P. (eds.) APPROX 1998. LNCS, vol. 1444. Springer, Heidelberg (1998)
14. Halldórsson, M.M.: Approximations of weighted independent set and hereditary subset problems. J. Graph Algorithms Appl. 4(1) (2000)
15. Halldórsson, M.M.: Approximations of weighted independent set and hereditary subset problems. Journal of Graph Algorithms and Applications, 1–16 (2000)
16. Jain, K., Vazirani, V.V.: Approximation algorithms for metric facility location and k-median problems using the primal-dual schema and Lagrangian relaxation. Journal of the ACM, 274–296 (2001)
17. Könemann, J., Parekh, O., Segev, D.: A unified approach to approximating partial covering problems. In: Azar, Y., Erlebach, T. (eds.) ESA 2006. LNCS, vol. 4168, pp. 468–479. Springer, Heidelberg (2006)
18. Könemann, J., Ravi, R.: A matter of degree: Improved approximation algorithms for degree-bounded minimum spanning trees. SIAM J. Comput. 31(6), 1783–1793 (2002)
19. Kulik, A., Shachnai, H.: On Lagrangian relaxation and subset selection problems. Full version, http://www.cs.technion.ac.il/~hadas/PUB/KS_lagrange.pdf
20. Mestre, J.: Primal-Dual Algorithms for Combinatorial Optimization Problems. PhD thesis, CS Dept., Univ. of Maryland (2007)
21. Naor, J., Shachnai, H., Tamir, T.: Real-time scheduling with a budget. Algorithmica, 343–364 (2007)
22. Schäfer, G.: Private communication (2007)
23. Wolsey, L.A., Nemhauser, G.L.: Integer and Combinatorial Optimization. Wiley Interscience, Hoboken (1999)

Approximation Algorithms for Prize-Collecting Network Design Problems with General Connectivity Requirements

Chandrashekhar Nagarajan¹⋆, Yogeshwer Sharma²⋆, and David P. Williamson¹⋆

¹ School of OR&IE, Cornell University, Ithaca, NY 14853
[email protected], [email protected]
² Department of Computer Science, Cornell University, Ithaca, NY 14853
[email protected]

Abstract. In this paper, we introduce the study of prize-collecting network design problems having general connectivity requirements. Prior work considered only 0-1 or very limited connectivity requirements. We introduce general connectivity requirements in the prize-collecting generalized Steiner tree framework of Hajiaghayi and Jain [9], and consider penalty functions linear in the violation of the connectivity requirements. Using Jain's iterated rounding algorithm [11] as a black box, and ideas from Goemans [7] and Levi, Lodi, and Sviridenko [14], we give a 2.54-factor approximation algorithm for the problem. We also generalize the 0-1 requirements of the PCF problem introduced by Sharma, Swamy, and Williamson [15] to include general connectivity requirements. Here we assume that the monotone submodular penalty function of Sharma et al. is generalized to a multiset function that can be decomposed into functions of the same form as that of Sharma et al. Using ideas from Goemans and Bertsimas [6], we give an (α log K)-approximation algorithm for the resulting problem, where K is the maximum connectivity requirement and α = 2.54.

1 Introduction

Over the past two decades, there has been a significant amount of work in the study of approximation algorithms for finding low-cost networks with specific connectivity requirements; see, for example, [6, 1, 8, 16, 11]. Kortsarz and Nutov [13] give a survey of some of this work. In prize-collecting network design problems, connectivity requirements become "soft" constraints; we may drop them if other considerations (such as cost) become more important. This is usually expressed through penalties on the connectivity requirements: we may drop a connectivity requirement if we are willing to instead pay the penalty.¹

⋆ Supported in part by NSF CCF-0514628.
¹ One may well wonder why problems with penalties are called 'prize-collecting'. The answer is an historical accident. Balas [2] introduced the prize-collecting traveling salesman problem, which had prizes that were collected by the salesman as he visited various nodes, and penalties for unvisited nodes. Bienstock et al. [3] dropped the prizes but kept the penalties and the name. Most subsequent work has addressed the Bienstock et al. variant of the problem.



One of the first problems studied from this perspective was the prize-collecting Steiner tree problem (PCST). This problem is a variant of the Steiner tree problem in which we are given an undirected graph with nonnegative edge costs, a root vertex r, and a nonnegative penalty πi for each vertex i ∈ V. The goal is to find a tree T connected to the root that minimizes the cost of the tree plus the sum of the penalties of all vertices not spanned by the tree. The problem models that of a network provider deciding how to expand a network so as to maximize its profit [12]; here the penalty of a vertex represents the potential revenue that can be captured if the vertex is connected to the network. The objective function is that of maximizing the sum of the revenues generated by the vertices connected to the network minus the cost of the network. This objective gives the same optimal solution as that of the prize-collecting Steiner tree problem, but from an approximability standpoint the two problems are not the same; Feigenbaum, Papadimitriou, and Shenker [5] have shown that the profit maximization problem cannot be approximated to within any factor. Bienstock, Goemans, Simchi-Levi, and Williamson [3] gave the first approximation algorithms for the PCST. Goemans and Williamson later gave a primal-dual 2-approximation algorithm [8].

Only recently have generalizations of PCST been considered. Hajiaghayi and Jain [9] consider the prize-collecting version of the generalized Steiner tree problem (abbreviated to PCGST for "prize-collecting generalized Steiner tree"). In the PCGST problem, we are given an undirected graph with nonnegative costs on the edges and a set of pairs of vertices si-ti with penalties πi for each i for not connecting that particular pair. The goal is to find a subset F of edges so as to minimize the cost of the selected edges plus the penalties of the pairs not connected in the subgraph (V, F). Hajiaghayi and Jain give a primal-dual 3-approximation algorithm and an LP-rounding 2.54-approximation algorithm. Hayrapetyan, Swamy, and Tardos [10] extend the PCST framework in a different direction, by considering the case when penalties are more expressive than just a simple sum of penalties of disconnected vertices. In their model, penalties are modelled by an arbitrary monotone submodular function of the set of disconnected vertices. They are able to extend the Goemans-Williamson primal-dual 2-approximation algorithm to this problem. Sharma, Swamy, and Williamson [15] introduce prize-collecting forest problems (PCF for short), which generalize connectivity requirements still further and make penalties more expressive; this work generalizes both the results of Hajiaghayi and Jain and of Hayrapetyan et al. In the PCF problem, the connectivity requirements are specified by an arbitrary function f : 2^V → {0, 1} which assigns a connectivity requirement to each subset of vertices, and the penalty is specified by a submodular function π : 2^(2^V) → ℜ+ on collections of subsets of vertices. The goal is to find a subset F of edges such that the cost of the selected edges F plus the penalty function value on the collection of all violated subsets is minimized, where a subset S of vertices is said to be violated if f(S) = 1 but δ(S) ∩ F = ∅; δ(S) is the set of all edges with exactly one end point in S. Sharma et al. [15] give a primal-dual 3-approximation algorithm and an LP-rounding 2.54-approximation algorithm for the PCF problem when the penalty function obeys certain properties.

All of the work outlined above considers only problems where the network created is a tree or a forest. However, many fundamental network design questions involve more general connectivity requirements. In real-world networks, a typical client might not just like to connect to the network, but might want to connect via a few (different) paths. There could be several reasons for this, including needing higher bandwidth than a single connection can provide, or needing redundant connections in case of edge failures. For instance, in the survivable network design problem (called SNDP for short), the input is the same as that for the generalized Steiner tree problem, except that now we are also given a connectivity requirement ri for each pair si-ti, and we need at least ri edge-disjoint paths from si to ti in the solution network. Jain [11] introduces the technique of iterated rounding and gives a 2-approximation algorithm for this problem. His technique extends to network design problems in which for every cut δ(S) in the network we must select at least f(S) edges, where f is a weakly supermodular function.

In this paper, we initiate the investigation of prize-collecting network design problems with general connectivity requirements. There has been some previous work investigating prize-collecting network design problems with connectivity requirements greater than 1 (see below for discussion), but to the best of our knowledge, there has been no previous work on prize-collecting problems with general connectivity requirements. In this investigation, we consider the prize-collecting generalized Steiner tree problem (PCGST) and the prize-collecting forest problem (PCF) and introduce general connectivity versions of these problems. We also design and analyze approximation algorithms for these problems.

As mentioned above, there has been some previous work on prize-collecting network design problems with connectivity requirements greater than 1. Based on a previous problem of Balas [2], Bienstock et al. [3] introduced the prize-collecting travelling salesman problem (PCTSP). In the PCTSP, we are given an undirected edge-weighted graph G = (V, E, c) (edge weights satisfy the triangle inequality), a root r ∈ V, and penalties πi for all i ∈ V. The goal is to find a tour which includes the root r, but excludes some (possibly empty) set of vertices, such that the sum of the cost of the tour and the aggregate penalty of the excluded vertices is minimized. Goemans and Williamson [8] give a 2-approximation for this problem. Chimani, Kandyba, and Mutzel [4] consider the 2-root-connected prize-collecting Steiner network problem. In this problem, each node v ∈ V has a connectivity requirement rv ∈ {0, 1, 2}, which indicates how many node-disjoint paths it requires to the root. There is also a penalty πv for not having rv node-disjoint paths from v to the root. The goal is then to select a subset E′ of edges such that the cost of E′ plus the sum of penalties of nodes whose connectivity requirement is not satisfied is minimized. Chimani, Kandyba, and Mutzel give an integer linear programming approach based on directed cuts to solve this problem to optimality.


The issue of how to define penalties immediately arises when considering prize-collecting problems with general connectivity. If we need to select f(S) edges from δ(S), how much penalty should be charged if there are only f(S) − 1? There are two obvious variants: (1) satisfaction of a requirement is all-or-nothing (as in Chimani, Kandyba, and Mutzel [4]); (2) satisfaction of requirements is gradual, in the sense that each violation carries an additional penalty. In this paper, we model the gradual version of the penalty with the following restriction: an earlier violation of the connectivity requirement of a set S costs at most as much as later violations; that is, if the connectivity requirement is K, then reducing connectivity from K to K − 1 carries at most as much additional penalty as reducing connectivity from K − 1 to K − 2, which in turn carries at most as much additional penalty as reducing it from K − 2 to K − 3, and so on. Designing algorithms for the all-or-nothing version of the penalty is an important open problem.

We first consider the prize-collecting version of the survivable network design problem (called PCSNDP for short) with linear penalties. The input to this problem is the same as for the SNDP, except that now there is a penalty πi associated with each pair si-ti. The goal is to select a subset E′ of edges so as to minimize the cost of the edges selected plus the sum over all pairs of the product of πi and the number of times the pair's requirement is violated; the penalty is linear in this sense, as each additional violation costs the same. We use Jain's algorithm for the survivable network design problem as a black box, and use some variations of the rounding techniques of Levi, Lodi, and Sviridenko [14] and of Goemans [7], to give a 2.54-approximation algorithm for this case.

Then we consider the PCF problem with integral connectivity requirements (called PCF-Z for short). The connectivity requirement function is any function f : 2^V → Z≥0. The penalty function is a function π : (K+1)^(2^V) → R≥0, where K is the maximum value of the connectivity function f. The penalty function π is assumed to be decomposable in a certain form, which reflects the fact that the multiset function π(·) satisfies a variant of the submodularity property. Each decomposed function of π is assumed to satisfy the conditions mentioned in the earlier work of [15]; in particular, it is a monotone submodular function. Without going into the details of the decomposition, it is worth pointing out that the decomposition is motivated by the fact that users have decreasing marginal utility for increased connectivity. We show an example of an SNDP problem with penalties convex in the number of missing connections that is expressible in our framework. Borrowing ideas from Goemans and Bertsimas [6], we give an (α · log K)-approximation algorithm for this problem, where α ≈ 2.54 is the approximation factor of the LP-rounding algorithm in [15] and K is the maximum connectivity value. For this result, we allow ourselves to purchase as many copies of an edge as needed; in the previous result for PCSNDP we can restrict the number of copies of each edge in the solution.

The rest of the paper is organized as follows. In Section 2, we introduce the prize-collecting variant of the survivable network design problem and give a 2.54-approximation algorithm. The general connectivity version of the prize-collecting forest problem is introduced in Section 3, where we also give an algorithm for the problem. The algorithm is analyzed in Section 4. Section 5 concludes with open problems and future work.

2 The Prize-Collecting Survivable Network Design Problem

2.1 The Problem Definition

In the PCSNDP, we are given an edge-weighted graph G = (V, E, c : E → R≥0), a set I of k source-sink pairs si-ti for i ∈ I, a requirement ri for each pair, and a penalty πi for each i for violating each connectivity requirement for that pair. The goal is to find a subset E′ of edges such that the cost of the edges in E′ plus the penalties of all disconnections (described below) is minimized. If pair i has qi edge-disjoint paths in E′, then the disconnection penalty of pair i is πi times max(0, ri − qi). The total disconnection penalty is the sum of disconnection penalties over all pairs, that is, ∑_{i∈I} πi · max(0, ri − qi). There is also a requirement that each edge e ∈ E can be used at most ae ∈ Z≥0 times. The linear programming relaxation of the problem is as follows. The xe variables represent the decision of including edges, and the zi variables represent the decision to pay penalties for pairs. We use S ⊙ i as a binary predicate which is true if |S ∩ {si, ti}| = 1.

Min ∑_{e∈E} ce xe + ∑_{i∈I} πi zi  subject to     (PCSNDP)
∑_{e∈δ(S)} xe + zi ≥ ri   ∀i, S : S ⊙ i;
0 ≤ xe ≤ ae   ∀e ∈ E;
zi ≥ 0   ∀i ∈ I.

2.2 Idea of the Algorithm

We first solve the LP using the ellipsoid method. For that, we need a separation oracle for the inequalities. A simple max-flow computation for each pair in I tells us whether the current solution is feasible for the linear program or not. Let (x*, z*) be the optimal solution to the linear program. Then we "round" the z* part of the solution. Let us fix a real number α ∈ [0, 1] and define the solution z̄ as follows, where {r} denotes the fractional part of the number r: z̄i = ⌊zi*⌋ if {zi*} ≤ α, and z̄i = ⌈zi*⌉ otherwise. In this process, we increase the penalty of the solution if we round the zi variable up, and decrease it otherwise.

Then we solve a modified problem without prize-collecting constraints. The new problem has the same graph and pairs as its input, but the connectivity requirements are adjusted to reflect the values of the z̄i variables. Let us define new requirements ri′ = ri − z̄i. We then solve the SNDP using Jain's iterated rounding technique [11]. The LP for the modified problem is given below.

Min ∑_{e∈E} ce xe  subject to     (SNDP)
∑_{e∈δ(S)} xe ≥ ri′ (= ri − z̄i)   for all i, S : S ⊙ i;
0 ≤ xe ≤ ae   for all e ∈ E.
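The max-flow separation oracle mentioned at the start of this subsection can be sketched as follows; the use of networkx is an illustrative choice, and any max-flow routine would do.

```python
# A sketch of the separation oracle for (PCSNDP): the constraints for pair i
# are all satisfied iff the s_i-t_i max flow under capacities x_e is at
# least r_i - z_i. networkx is an illustrative dependency.
import networkx as nx

def find_violated_pair(x, pairs, r, z):
    G = nx.DiGraph()
    for (u, v), xe in x.items():        # undirected edge -> two arcs
        G.add_edge(u, v, capacity=xe)
        G.add_edge(v, u, capacity=xe)
    for i, (s, t) in enumerate(pairs):
        flow = nx.maximum_flow_value(G, s, t) if s in G and t in G else 0.0
        if flow < r[i] - z[i] - 1e-9:   # min-cut value below the requirement
            return i                     # the constraint for pair i is violated
    return None                          # the current (x, z) is feasible
```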

Let the solution we get from Jain's algorithm be called x̄. We return (x̄, z̄) as the final solution, which is a feasible solution to the original problem.
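A sketch of the threshold-rounding step (the array names are illustrative assumptions):

```python
# Round each z_i* at threshold alpha and hand the reduced requirements r_i'
# to a black-box SNDP solver such as Jain's algorithm.
import math

def round_and_reduce(z_star, r, alpha):
    z_bar, r_prime = [], []
    for zi, ri in zip(z_star, r):
        zb = math.floor(zi) if zi - math.floor(zi) <= alpha else math.ceil(zi)
        z_bar.append(zb)
        r_prime.append(max(ri - zb, 0))  # residual connectivity requirement
    return z_bar, r_prime
```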


We are now ready to prove that the algorithm presented above is a 3-approximation algorithm. Towards the end of this section, we will also point out how we can change the algorithm to make it a 2.54-approximation algorithm.

2.3 Bounding the Edge Costs

First, we focus on the edge costs. Consider the solution x̄ which is the output of Jain's algorithm. By the properties of Jain's LP-rounding 2-approximation algorithm, we know that ∑_{e∈E} ce x̄e ≤ 2 · OPT(SNDP), where OPT(SNDP) is the cost of the optimal solution to the LP (SNDP). Our goal is to prove that the solution (1/(1−α))x* costs at least OPT(SNDP), by proving that the solution (1/(1−α))x* is feasible for the linear program (SNDP). This will prove the following:

∑_{e∈E} ce x̄e ≤ 2 · OPT(SNDP) ≤ (2/(1−α)) · ∑_{e∈E} ce xe*.

Although we would like to show that (1/(1−α))x* is a feasible solution to the modified connectivity requirements ri′, this might potentially violate the constraint on the number of edges used. Instead, we define the solution x** as xe** = min{(1/(1−α))xe*, ae}. Here is the idea: if the connectivity requirements ri were to be reduced by exactly zi*, then the scaled solution (1/(1−α))x* would actually be feasible for the modified connectivity requirements. But the requirements actually go down by only ⌊zi*⌋, and we need to make sure that in such cases the scaled solution (which is also truncated at ae) is indeed feasible.

Lemma 1. x** is a feasible solution for the modified connectivity requirements r′.

Proof. We prove that x** satisfies all constraints in the modified LP (SNDP) above. Let us fix an i and S : S ⊙ i, and assume that zi has been rounded down (otherwise, the solution is trivially feasible). We have x*(δ(S)) ≥ ri − ⌊zi*⌋ − α, and we need to prove that x**(δ(S)) ≥ ri − ⌊zi*⌋. The fact that x*(δ(S)) ≥ ri − ⌊zi*⌋ − α can be restated as x*(δ(S)) ≥ (ri − ⌊zi*⌋ − 1) + (1 − α). The following claim, based on an idea from Levi, Lodi, and Sviridenko [14], will help us get the required bound.

Claim. Let ∑_{e∈E} ye ≥ n + (1 − α), and suppose there is a non-negative integer ne for each edge e such that ye ≤ ne. Then ∑_{e∈E} min{ye/(1−α), ne} ≥ n + 1.

Proof. To prove the inequality, we define new y′ and n′; proving the inequality for the new values will prove it for the original values too. Let

ye′ = ye − ⌊ye⌋;   ne′ = ne − ⌊ye⌋;   n′ = n − ∑_{e∈E} ⌊ye⌋.

Now we have that ∑_{e∈E} ye′ ≥ n′ + (1 − α), and we need to prove ∑_{e∈E} min{ye′/(1−α), ne′} ≥ n′ + 1. Note that proving this suffices to prove the claim. We will prove something stronger: ∑_{e∈E} min{ye′/(1−α), 1} ≥ n′ + 1. Let F = {e ∈ E : ye′ ≥ 1 − α}. If |F| ≥ n′ + 1, then we are done (each edge in F contributes at least a unit to the sum). Else, |F| ≤ n′. Then the sum on the left-hand side can be written as the following, proving the claim:

∑_{e∈E} min{ye′/(1−α), 1} = ∑_{e∈F} min{ye′/(1−α), 1} + ∑_{e∉F} min{ye′/(1−α), 1}
  = |F| + (1/(1−α)) ∑_{e∉F} ye′ ≥ |F| + (1/(1−α)) (∑_{e∈E} ye′ − |F|)
  ≥ |F| + (1/(1−α)) (n′ + (1 − α) − |F|)
  ≥ |F| + (n′ − |F|) + (1−α)/(1−α) ≥ n′ + 1.

For a given S, we apply the claim by letting ye = xe* for all e ∈ δ(S), ne = ae, and n = ri − ⌊zi*⌋ − 1. Then the claim shows that ∑_{e∈δ(S)} min{xe*/(1−α), ae} ≥ (ri − ⌊zi*⌋ − 1) + 1, or that x**(δ(S)) ≥ ri − ⌊zi*⌋. Hence the constraint in the modified linear program corresponding to (i, S : S ⊙ i) is satisfied. Also note that xe** ≤ ae is satisfied by the definition of x**. This finishes the proof of the lemma. □
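The claim can also be sanity-checked numerically; the following randomized test is purely illustrative and is not part of the proof.

```python
# Randomized check of the Claim: whenever sum(y) >= n + (1 - alpha) with
# y_e <= n_e for integer caps n_e, we must have
# sum(min(y_e / (1 - alpha), n_e)) >= n + 1.
import math, random

random.seed(0)
for _ in range(1000):
    alpha = random.uniform(0.01, 0.99)
    caps = [random.randint(1, 4) for _ in range(6)]
    y = [random.uniform(0, c) for c in caps]
    n = math.floor(sum(y) - (1 - alpha))   # largest n satisfying the premise
    if n >= 0:
        lhs = sum(min(ye / (1 - alpha), ne) for ye, ne in zip(y, caps))
        assert lhs >= n + 1 - 1e-9
```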

2.4 Penalty of the Solution and Its Total Cost

Now we can also bound the cost of the solution contributed by the penalties. Note that when we round the zi* variables up, we scale them up by a factor of at most 1/α. This proves that ∑_{i∈I} πi z̄i ≤ (1/α) ∑_{i∈I} πi zi*. Hence the total cost of the solution is

∑_{e∈E} ce x̄e + ∑_{i∈I} πi z̄i ≤ (2/(1−α)) ∑_{e∈E} ce xe* + (1/α) ∑_{i∈I} πi zi*.

Taking α = 1/3 gives a 3-approximation. The algorithm above can be improved to a 2.54-approximation by choosing α uniformly at random from [0, β], where β = 1 − e^(−1/2). This is a standard technique introduced by Goemans [7]; see the paper of Hajiaghayi and Jain [9] or Sharma, Swamy, and Williamson [15] for details.

3 The Prize-Collecting Forest Problem with General Connectivity Requirements (PCF-Z)

3.1 Definition and Formulation of the Problem

In the discussion below, we denote a family of subsets of V by uppercase script letters (like S) and multisets of subsets of V by uppercase script letters with a bar over them (like S̄). The set of all families of subsets of V is denoted by 2^(2^V), and the set of all multisets of subsets of V by N^(2^V). Also, the set of all multisets with multiplicity of any subset at most K is denoted by (K+1)^(2^V) or [0 . . . K]^(2^V). For a multiset S̄ and a subset S, we denote by n_S̄(S) the number of copies of S that are contained in S̄. For two (multi)sets S̄ and T̄, S̄ + T̄ (called multiset addition) is defined such that n_{S̄+T̄}(S) = n_S̄(S) + n_T̄(S), for all S.

PCF-Z is a generalization of the prize-collecting forest problem (called PCF) in Sharma, Swamy, and Williamson (SSW) [15]. In the PCF problem, the input is an edge-weighted graph G = (V, E, c : E → R≥0) with a connectivity requirement function f : 2^V → {0, 1} (each subset S requires us to select f(S) edges from the cut δ(S)). A submodular penalty function π : 2^(2^V) → R≥0 on the family of all subsets is also given, which is used to determine the penalty of the solution. The goal is to find a minimum-cost subgraph G′ = (V, E′) of G such that the sum of the edge costs in E′ plus the penalty function value on the family of violated subsets is minimized. The LP relaxation of the integer programming formulation of the problem is given below.

Min ∑_{e∈E} c(e)x(e) + ∑_{S∈2^(2^V)} π(S)z(S)  subject to     (PCF-P)
∑_{e∈δ(S)} x(e) + ∑_{S : S∈S} z(S) ≥ f(S)   ∀S ⊆ V;
x(e), z(S) ≥ 0   ∀e ∈ E, S ∈ 2^(2^V).

SSW give a primal-dual 3-approximation algorithm and an LP-rounding 2.54-approximation algorithm for this problem. To achieve their result, they needed a few restrictions on the penalty functions; in particular, the penalty function is assumed to be monotone and submodular. Please refer to [15] for details.

We generalize the PCF to the PCF-Z problem, in which general connectivity requirements are allowed. In the PCF-Z problem, the input is the same as the PCF problem, save the following two changes: (1) the connectivity requirement function f is defined from 2^V to Z≥0; (2) the penalty function π(·) is defined on multisets of subsets of V, since now a subset can be violated multiple times. According to the problem definition, each subset S in 2^V has a requirement of f(S) edges to cross it in a solution. We call a set S violated in the subgraph E′ if this requirement is not satisfied for the set S; that is, |δ(S) ∩ E′| < f(S). Our objective is to find a network E′ such that we pay for the cost of the edges in the network E′ and pay penalties for the sets S for which there are fewer than f(S) edges in the cut δ(S) in the solution E′.

Here we model the penalty function as π̄(S̄), where S̄ denotes the multiset of sets for which the requirement is violated, i.e., the number of edges selected from the cut is less than the associated requirement for that set. If the subgraph chosen is E′, then the multiplicity of a set S in S̄ is f(S) − |δ(S) ∩ E′|, i.e., the number of times the set is violated. So we define the violated multiset of a network E′ as S̄(E′) = {S : max(f(S) − |δ(S) ∩ E′|, 0) times}. Let K = max_S f(S), the maximum connectivity requirement of any subset. Without loss of generality, we assume that f(S) = K for all S, because if f(S) were less than K for some S then we could set f(S) = K for all subsets and modify the penalty function π̄ into a π̄′ in such a way that the problem is equivalent to the original one. This can be achieved by defining, for any S̄ ∈ [0 . . . K]^(2^V), π̄′(S̄) := π̄(T̄), where n_T̄(S) = max{n_S̄(S) − (K − f(S)), 0} for all S ⊆ V.


This problem can be formulated as an integer program, whose LP relaxation is shown below:

Min ∑_{e∈E} c(e)x(e) + ∑_{S̄∈N^(2^V)} π̄(S̄)z(S̄)  subject to     (PCF-Z-P)
∑_{e∈δ(S)} x(e) + ∑_{S̄ : S∈S̄} z(S̄) · n_S̄(S) ≥ f(S)   ∀S ⊂ V
x(e) ≥ 0   ∀e ∈ E
0 ≤ ∑_{S̄} z(S̄) ≤ 1.

In the linear program above, the last constraint is valid for an integer optimum solution, since only one multiset needs to be set to 1.

3.2 Properties of the Penalty Function π̄(S̄)

In this section, we lay out and motivate the assumptions on the penalty function.

1. The penalty function is decomposable. We assume that the penalty function π̄ can be decomposed into π1, . . . , πK : 2^(2^V) → R≥0 such that π̄(S̄) = π1(S1) + π2(S2) + . . . + πK(SK), where each of π1, π2, . . . , πK satisfies the penalty function properties in [15], and split_K(S̄) = (S1, S2, . . . , SK). Here, Sr is the family of all subsets in S̄ which occur at least K − r + 1 times; that is, Sr = {S : n_S̄(S) ≥ K − r + 1}. To refer to Sr, we use the notation split_K(S̄, r). Note that changing the penalty function as mentioned in the previous section (so that we can assume f(S) = K for all S) does not affect whether π is decomposable in this fashion.
2. The penalty function is monotonic. We assume π1(S) ≥ π2(S) ≥ . . . ≥ πK(S) for all S. Intuitively, π1 charges penalties for subsets that are violated (at least) K times, π2 charges penalties for subsets that are violated at least K − 1 times, and so on. We assume that the K-th violation costs at least as much as the (K − 1)-st violation, so π1(S) ≥ π2(S) (and similarly for the others).
3. The penalty function is cross-submodular. We assume that for i < j, πi(Si) + πj(Sj) ≥ πi(Si ∩ Sj) + πj(Si ∪ Sj). This means that it is cheaper to have a larger set with a larger-indexed π function and a smaller set with a smaller-indexed π function.

These restrictions on π̄(·) might seem restrictive, but we show below that an important variant of the prize-collecting survivable network design problem can be modelled in our framework.
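For concreteness, here is a sketch of the split_K operator of property 1, with multisets encoded as Counter objects over (hashable) subsets; this encoding is an illustrative assumption.

```python
# split_K from property 1: S_r collects the subsets that occur at least
# K - r + 1 times in the multiset S-bar.
from collections import Counter

def split_K(S_bar: Counter, K: int):
    return [ {S for S, mult in S_bar.items() if mult >= K - r + 1}
             for r in range(1, K + 1) ]
```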

A concrete problem. The following problem can be cast in the PCF-Z model considered above. Let us consider an instance of the PCSNDP problem in which each pair of vertices has a connectivity requirement of K. The profit function is defined as profit(i) = √(iK) for i = 0, 1, . . . , K, which reflects how much profit the pair derives by getting i connections. Note that profit(·) is a concave function. The loss(·) is defined as the negative of the profit, but translated by K to make it a positive function. Thus, loss(i) = K − √((K − i)K). The loss function shows how much loss a pair suffers if i of the K requirements are not satisfied. The penalty of the solution is defined to be the aggregate loss of all pairs. Note that this imposes the natural condition that the difference in penalty for violating the very last unit of connectivity (loss(1) − loss(0) = K − √((K − 1)K)) is much less than that for violating the very first (loss(K) − loss(K − 1) = √K). This problem cannot be modelled in the PCSNDP framework mentioned in the first part of the paper, because each disconnection carries a different penalty. But this problem can indeed be modelled in the PCF-Z framework of this section (proof omitted). In fact, there is nothing special about the square-root function above: any concave profit function (and hence convex loss function) can be modelled in our framework.
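A small numeric illustration of this example (K = 16 is an arbitrary choice): the marginal penalties loss(i) − loss(i − 1) indeed increase with i, which is exactly the gradual-penalty restriction of PCF-Z.

```python
# profit(i) = sqrt(i*K) is concave, so the marginal penalties
# loss(i) - loss(i-1) are non-decreasing in i.
import math

K = 16
profit = lambda i: math.sqrt(i * K)
loss = lambda i: K - profit(K - i)       # i = number of missing connections

marginals = [loss(i) - loss(i - 1) for i in range(1, K + 1)]
assert all(a <= b + 1e-12 for a, b in zip(marginals, marginals[1:]))
print(round(marginals[0], 3), marginals[-1])   # about 0.508 vs sqrt(K) = 4.0
```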

3.3 Algorithm

In this section, we present the algorithm for the PCF-Z problem defined in the last few sections. The algorithm uses the algorithm for PCF from SSW [15]. The call to the PCF algorithm is denoted by PCF(G(V, E, c), f, π), where c is the cost function on edges, f is the connectivity requirement on subsets, and π is the penalty function.

1. Decompose π̄ = π1 + π2 + · · · + πK, and run PCF(G(V, E, c), 1, πk) for k = 1, 2, . . . , K to obtain forests F1, F2, . . . , FK and violated families S1, S2, . . . , SK.
2. Construct a network E′ = F1 + F2 + · · · + FK (multiset addition) and output E′ as the set of selected edges. Output S̄ = S1 + S2 + · · · + SK as the multiset of violated subsets, on which the penalty is paid.

Fig. 1. PCF-Z algorithm for (G(V, E, c), K, π̄). K: maximum connectivity requirement.
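A direct transcription of the scheme in Figure 1, assuming the PCF algorithm of [15] and the decomposition π1, . . . , πK as black boxes; the Counter-based multiset encoding is an illustrative assumption.

```python
# Decompose-and-run sketch of the Fig. 1 algorithm.
from collections import Counter

def pcf_z(G, pis, PCF):
    """pis: the K penalty functions decomposing pi-bar; PCF(G, f, pi)
    returns (forest, violated_family) for 0-1 connectivity requirements."""
    edges, violated = Counter(), Counter()
    for pi_r in pis:
        F_r, S_r = PCF(G, 1, pi_r)      # run PCF with requirement 1
        edges.update(F_r)                # multiset addition of the forests
        violated.update(S_r)             # multiset addition of the families
    return edges, violated
```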

4 Analysis for PCF-Z Algorithm

In this section, we prove that the solution found by the algorithm in Figure 1 is a good approximation to the minimum cost network for PCF-Z. Let S̄ be the multiset of violated subsets in F1 + F2 + · · · + FK, that is, S̄ = S1 + S2 + · · · + SK. It follows from property 3 of the penalty function that

π̄(S̄) = π̄(S1 + S2 + · · · + SK) ≤ π1(S1) + π2(S2) + · · · + πK(SK).    (1)

4.1 The Performance Guarantee

We are now ready to prove the performance guarantee for the algorithm. We will need to consider three linear programs. The first one is the linear program in [15] for the 0-1 connectivity requirements and with penalty function πr; here 𝒮 ranges over families of subsets of V, i.e., 𝒮 ∈ 2^{2^V}. Note there are K such linear programs, one for each r = 1, 2, ..., K.

Min  ∑_{e∈E} c(e)x(e) + ∑_{𝒮∈2^{2^V}} πr(𝒮)z(𝒮)    (LP1(r))
subject to
  ∑_{e∈δ(S)} x(e) + ∑_{𝒮: S∈𝒮} z(𝒮) ≥ 1    ∀S ⊂ V
  x(e), z(𝒮) ≥ 0    ∀e ∈ E, 𝒮 ∈ 2^{2^V}.

We defer discussion of the second LP for a moment. The third LP is the LP for the original problem PCF-Z with each connectivity requirement equal to K. Recall that we argued previously that this is equivalent to the original problem by modifying the penalty function.

Min  ∑_{e∈E} c(e)x(e) + ∑_{S̄∈[0...K]^{2^V}} π̄(S̄)z(S̄)    (LP3)
subject to
  ∑_{e∈δ(S)} x(e) + ∑_{S̄: S∈S̄} z(S̄) · n_S̄(S) ≥ K    ∀S ⊂ V
  x(e), z(S̄) ≥ 0    ∀e ∈ E, S̄ ∈ [0...K]^{2^V}
  0 ≤ ∑_{S̄∈[0...K]^{2^V}} z(S̄) ≤ 1.

Let OPT_{LP1(r)} and (x*_{LP1(r)}, z*_{LP1(r)}) be the optimal value and the optimal solution for the linear program LP1(r), and OPT_{LP3} and (x*_{LP3}, z*_{LP3}) the optimal value and the optimal solution for the linear program LP3. We have the following inequalities, which prove the performance guarantee of α · ln(K), given the α-approximation algorithm for PCF from [15].

cost(F1 + F2 + · · · + FK) = ∑_{e∈F1+F2+···+FK} c(e) + π̄(S̄)
  ≤ ∑_{e∈F1} c(e) + · · · + ∑_{e∈FK} c(e) + π1(S1) + · · · + πK(SK)    (From (1))
  = ∑_{r=1}^{K} ( ∑_{e∈Fr} c(e) + πr(Sr) )
  ≤ α · ∑_{r=1}^{K} ( ∑_{e∈E} c(e) · x*_{LP1(r)}(e) + ∑_{𝒮∈2^{2^V}} πr(𝒮) · z*_{LP1(r)}(𝒮) )    (From [15])
  ≤ α · ∑_{r=1}^{K} OPT_{LP3}/r ≤ α · OPT_{LP3} · ln(K).    (From (2))

We need to prove the following theorem to finish the proof.

Theorem 1. For all r = 1, 2, ..., K,

∑_{e} c(e) · x*_{LP1(r)}(e) + ∑_{𝒮} πr(𝒮) · z*_{LP1(r)}(𝒮) ≤ OPT_{LP3}/r.    (2)

4.2 Proof of Theorem 1

To prove this theorem we use a new linear program LP2(r), which requires a new truncated penalty function π̄^r that restricts the original penalty function π̄ to


the first r components of its decomposition (see property 1 of the penalty function). In other words, for S̄ ∈ [0...r]^{2^V}, the new penalty function is defined as follows. Let (S1, S2, ..., Sr) = split_r(S̄), where split_r(·) is defined in the discussion of property 1 of the penalty function. Then π̄^r(S̄) = π1(S1) + π2(S2) + · · · + πr(Sr). The linear program LP2(r) is the following; note that there is one linear program for each value of r.

Min  ∑_{e∈E} c(e)x(e) + ∑_{S̄∈[0...r]^{2^V}} π̄^r(S̄)z(S̄)    (LP2(r))
subject to
  ∑_{e∈δ(S)} x(e) + ∑_{S̄: S∈S̄} z(S̄) · n_S̄(S) ≥ r    ∀S ⊆ V
  x(e), z(S̄) ≥ 0    ∀e ∈ E, S̄ ∈ [0...r]^{2^V}
  0 ≤ ∑_{S̄∈[0...r]^{2^V}} z(S̄) ≤ 1.

Here is the road map of the proof. We first relate the optimum values of LP2(r) and LP3, and then those of LP1(r) and LP2(r). Combining them will finish the proof.

Lemma 2. OPT_{LP2(r)} ≤ OPT_{LP3}.

Lemma 3. OPT_{LP1(r)} ≤ OPT_{LP2(r)}/r.

Proof. We construct a feasible solution (x_{LP1(r)}, z_{LP1(r)}) for LP1(r) from the optimal solution (x*_{LP2(r)}, z*_{LP2(r)}) of LP2(r) which costs no more than OPT_{LP2(r)}/r.

The idea is to split the z*_{LP2(r)}(S̄) value into r equal parts and give one part each to the r families in split_r(S̄) = (S1, S2, ..., Sr). Note that a family might end up getting z-contribution from many multisets, or even more than once from the same multiset. More formally, we define (x_{LP1(r)}, z_{LP1(r)}) as follows:

x_{LP1(r)}(e) = x*_{LP2(r)}(e)/r,    and
z_{LP1(r)}(𝒮) = ∑_{S̄∈[0...r]^{2^V}: 𝒮∈split_r(S̄)} ( z*_{LP2(r)}(S̄)/r ) · n_{split_r(S̄)}(𝒮).

Here split_r(S̄) is as defined in property 1 of the penalty function, and n_{split_r(S̄)}(𝒮) is the number of times family 𝒮 occurs in (the ordered set) split_r(S̄).

Feasibility of constructed solution. We first prove that the solution (x_{LP1(r)}, z_{LP1(r)}) constructed above is feasible for LP1(r). The idea behind the proof is the following: we divide z*_{LP2(r)}(S̄) into r parts and distribute it equally to the r resulting families of split_r(S̄). In the original solution, S̄ was contributing n_S̄(S) · z*_{LP2(r)}(S̄) to the constraint of S, but in the new solution, n_S̄(S) different families of subsets are contributing z*_{LP2(r)}(S̄)/r to the constraint of S. Since the contribution to S from edges is also divided by r, the total contribution just gets divided by r. This proves feasibility for an arbitrary subset S. A formal proof is omitted for space reasons.

Bounding the objective function. The main idea in proving the bound on the objective function is that the z-value for a particular S̄ in the solution of LP2(r) is divided equally among r families (in split_r(S̄)), whose penalties are evaluated by the functions π1(·), π2(·), ..., πr(·). Since πr(·) is the least among them, if we evaluate all penalties at πr(·), the cost only gets lower. More formally, the objective function of the solution (x_{LP1(r)}, z_{LP1(r)}) for LP1(r) can be bounded in terms of the objective function value of the optimal solution (x*_{LP2(r)}, z*_{LP2(r)}) of LP2(r) as follows:

∑_{e∈E} c(e)x*_{LP2(r)}(e) + ∑_{S̄∈[0...r]^{2^V}} π̄^r(S̄) · z*_{LP2(r)}(S̄)
  = r · [ ∑_{e∈E} c(e) · (x*_{LP2(r)}(e)/r) + ∑_{S̄∈[0...r]^{2^V}} (π1(S1) + · · · + πr(Sr)) · (z*_{LP2(r)}(S̄)/r) ]
      (here we denote (S1, S2, ..., Sr) = split_r(S̄))
  ≥ r · [ ∑_{e∈E} c(e) · (x*_{LP2(r)}(e)/r) + ∑_{S̄∈[0...r]^{2^V}} (πr(S1) + · · · + πr(Sr)) · (z*_{LP2(r)}(S̄)/r) ]
  = r · [ ∑_{e∈E} c(e) · (x*_{LP2(r)}(e)/r) + ∑_{S̄} ∑_{𝒮: 𝒮∈split_r(S̄)} πr(𝒮) · n_{split_r(S̄)}(𝒮) · (z*_{LP2(r)}(S̄)/r) ]
  = r · [ ∑_{e∈E} c(e) · (x*_{LP2(r)}(e)/r) + ∑_{𝒮} πr(𝒮) · ∑_{S̄: 𝒮∈split_r(S̄)} n_{split_r(S̄)}(𝒮) · (z*_{LP2(r)}(S̄)/r) ]    (changing the order of summation)
  = r · ( ∑_{e∈E} c(e)x_{LP1(r)}(e) + ∑_{𝒮} πr(𝒮)z_{LP1(r)}(𝒮) ).

This shows that OPT_{LP1(r)} ≤ cost_{LP1(r)}(x_{LP1(r)}, z_{LP1(r)}) ≤ OPT_{LP2(r)}/r, proving the lemma.

5 Conclusions

One of the most important open problems is to design algorithms for the all-or-nothing version of penalty functions: penalty functions which charge the full penalty even if the connectivity requirement is only slightly violated. Other open problems include the following.


– Can we generalize the form of penalties, as in the case of the prize-collecting forest problem with general connectivity requirements? For example, penalties could be a submodular multiset function of the set of disconnected pairs.
– We assume that the penalty function is decomposable into simpler functions which satisfy economy-of-scale conditions; it would be nice to generalize this to submodular functions.
– Our algorithm for the prize-collecting forest problem may use each edge many times (without bound). Can it be modified so that each edge is used at most a bounded number of times, where the bound is a function of that edge?

References

[1] Agrawal, A., Klein, P.N., Ravi, R.: When trees collide: An approximation algorithm for the generalized Steiner problem on networks. SIAM J. Comput. 24(3), 440–456 (1995)
[2] Balas, E.: The prize collecting traveling salesman problem. Networks 19, 621–636 (1989)
[3] Bienstock, D., Goemans, M.X., Simchi-Levi, D., Williamson, D.P.: A note on the prize collecting traveling salesman problem. Math. Programming 59, 413–420 (1993)
[4] Chimani, M., Kandyba, M., Mutzel, P.: A new ILP formulation for 2-root-connected prize-collecting Steiner networks. In: Arge, L., Hoffmann, M., Welzl, E. (eds.) ESA 2007. LNCS, vol. 4698, pp. 681–692. Springer, Heidelberg (2007)
[5] Feigenbaum, J., Papadimitriou, C.H., Shenker, S.: Sharing the cost of multicast transmissions. Journal of Computer and System Sciences 63, 21–41 (2001)
[6] Goemans, M.X., Bertsimas, D.: Survivable networks, linear programming relaxations and the parsimonious property. Math. Program. 60, 145–166 (1993)
[7] Goemans, M.: Personal communication (1998)
[8] Goemans, M.X., Williamson, D.P.: A general approximation technique for constrained forest problems. SIAM J. Comput. 24(2), 296–317 (1995)
[9] Hajiaghayi, M.T., Jain, K.: The prize-collecting generalized Steiner tree problem via a new approach of primal-dual schema. In: SODA, pp. 631–640 (2006)
[10] Hayrapetyan, A., Swamy, C., Tardos, É.: Network design for information networks. In: SODA, pp. 933–942 (2005)
[11] Jain, K.: A factor 2 approximation algorithm for the generalized Steiner network problem. Combinatorica 21(1), 39–60 (2001)
[12] Johnson, D.S., Minkoff, M., Phillips, S.: The prize collecting Steiner tree problem: theory and practice. In: SODA, pp. 760–769 (2000)
[13] Kortsarz, G., Nutov, Z.: Approximating minimum cost connectivity problems. In: Gonzales, T. (ed.) Handbook of Approximation Algorithms and Metaheuristics. CRC Press, Boca Raton (2006)
[14] Levi, R., Lodi, A., Sviridenko, M.I.: Approximation algorithms for the multi-item capacitated lot-sizing problem via flow-cover inequalities. In: Fischetti, M., Williamson, D.P. (eds.) IPCO 2007. LNCS, vol. 4513, pp. 454–468. Springer, Heidelberg (2007)
[15] Sharma, Y., Swamy, C., Williamson, D.P.: Approximation algorithms for prize collecting forest problems with submodular penalty functions. In: SODA, pp. 1275–1284 (2007)
[16] Williamson, D.P., Goemans, M.X., Mihail, M., Vazirani, V.V.: A primal-dual approximation algorithm for generalized Steiner network problems. Combinatorica 15(3), 435–454 (1995)

Caching Content under Digital Rights Management

Leah Epstein¹, Amos Fiat², and Meital Levy²,⋆

¹ Department of Mathematics, University of Haifa, 31905 Haifa, Israel
[email protected]
² School of Computer Science, Tel-Aviv University, Israel
{fiat,levymeit}@post.tau.ac.il

⋆ The research has been supported by the Eshkol Fellowship funded by the Israeli Ministry of Science.

Abstract. Digital rights management systems seek to control the use of proprietary material (e.g., copyrighted movies). The use of DRM introduces a new set of caching issues not previously studied. Technically, these problems include elements of ski rental algorithms as well as paging, and their generalizations, online “capital investment” and generalized caching (the Landlord algorithm). The introduction of DRM restrictions does not impact the competitive ratio by more than a constant factor.

1 Introduction

Digital rights management is a byproduct of the vast economies involved in copyrighted content. Enormous effort has been made to devise such schemes, e.g., the Advanced Access Content System (AACS), produced by a consortium that includes Disney, Intel, Microsoft, Matsushita (Panasonic), Warner Brothers, IBM, Toshiba and Sony, or the Protected Media Path technology that forms a major component of the Windows Vista operating system. Yet other systems in current or future deployment include Apple's DRM system FairPlay, BD+ (an addition to AACS for Blu-ray discs), MMC (Mandatory Managed Copy), HDCP (High-bandwidth Digital Content Protection), and others. According to Richard Stallman, the purpose of systems such as AACS is to restrict the use of HDTV recordings and software so that they cannot be used except as these companies permit. Renato Iannella, Chief Scientist of IPR Systems, says: “The second-generation of DRM covers the description, identification, trading, protection, monitoring and tracking of all forms of rights usages over both tangible and intangible assets including management of rights holders relationships.” The use restrictions within various DRM packages include the following (a very partial list of possible restrictions supported or intended):

– Limited download ability. The right to download a copy of the content is limited to some pre-determined number of times. Moreover, the price of the content may depend on the number of times download is allowed. In particular one sees the following:


• Some price p for a single download.
• Some other price q ≫ p for unlimited downloads.
– Limited use ability. The content may be downloaded for a pre-determined number of uses or a pre-determined length of time (e.g., NetFlix).

DRM-like restrictions on use introduce an entirely new set of problems. The cryptographic/security aspects of such systems have been studied at great length, with hundreds of citations on CiteSeer, not to mention the vast literature outside the CS academic community. Some background on DRM can be found in [8,3,9]. Here, we believe that we are initiating the study of an entirely new set of problems related to DRM: that of caching content that is DRM “protected”. To the best of our knowledge, the issues dealt with in this paper are entirely novel and not previously studied. DRM-related work has been security/crypto oriented or has dealt with system architecture. Caching has been studied in the context of paging and web caching, but not within the framework of DRM content. Motivated by DRM issues and caching algorithms, the problems we consider are a combination of web caching with elements of the ski rental problem and generalizations thereof, such as the factory manufacturing problem. The quintessential web caching algorithm is the Landlord algorithm [12], one of the few algorithms to have made the jump from theory (competitive analysis of algorithms) to practical implementation and real use. We remark that caching of copyrighted material may seem strange; specifically, why would one discard something of value? However, the ever growing storage requirements (e.g., HDTV) and the ever growing storage availability pull in conflicting directions, so caching remains an issue. In particular, we consider the following setting:

– Cache memory is of size k.
– Content items, which can be seen, e.g., as movies, are labeled 1, 2, ..., n.
– Every item i has a size si > 0. At all times, ∑_j sj ≤ k, where the sum ranges over the items in the cache.
– Item i can be purchased under any of a set of alternative “packages”, Pi = (pi[1], pi[2], ..., pi[zi]).
– A single package for item i is of the form pi[j] = (ci[j], ℓi[j], ti[j]), where
  • ci[j] is the initial cost of purchasing the package pi[j].
  • ℓi[j] is the number of allowable downloads of item i.
  • ti[j] is the cost per download. This cost is complementary to the initial package cost ci[j] and is valid for up to a maximum of ℓi[j] downloads. After this maximum is reached, another package has to be purchased.

Note that setting ℓi[j] = ∞ and ti[j] = 0 means that one pays ci[j] for unlimited free downloads of item i, and setting ℓi[j] = 1 means that one pays ci[j] + ti[j] for a single download.
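For concreteness, the package model just described can be captured by a small record type; this sketch (the names are ours) is only meant to fix the data involved.

from dataclasses import dataclass

@dataclass(frozen=True)
class Package:
    c: float          # initial cost c_i[j] of purchasing the package
    l: float          # allowable downloads l_i[j] (may be float('inf'))
    t: float          # per-download cost t_i[j]

unlimited = Package(c=10.0, l=float('inf'), t=0.0)   # pay c once, download freely
single = Package(c=1.0, l=1, t=0.5)                  # pay c + t per single download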


Just to place the issues in context: the problem is what to do when the cache runs out of space. Obviously, it is better to discard a large and cheap item that is infrequently used than a small and expensive item that is accessed very frequently. So far, so good. If all packages were single-download packages (ℓi[j] = 1), then there would be no point in having more than one package per item (choose the one with minimal ci[j] + ti[j]), and the problem would become considerably simpler.

1.1 Our Results

The main result of this paper is that online caching of DRM-controlled items is not much more difficult than the “standard” paging problem: paging has a competitive ratio of k, and online DRM caching has a competitive ratio of O(k).¹ In Section 4 we discuss additional results for more restricted models. In particular, these results imply a lower bound of 2k on the competitive ratio of the DRM problem.

1.2 Online DRM Overview and the Structure of This Paper

Any online DRM algorithm can be viewed as consisting of two separate components:

1. The purchase strategy: which packages should the algorithm purchase, and when.
2. The eviction strategy: when a page is brought into the cache, which page (or pages) need be evicted (if any).

The purchase strategy is strongly related to the ski rental problem and generalizations thereof. In particular, [1] considers online strategies involving the tradeoff between initial capital costs and production costs, when demands are unknown. One can view the package purchase issue for the DRM algorithm as a capital investment problem with amortized production costs. The DRM file eviction strategy is based upon Young's “Landlord algorithm”. Landlord deals with a caching scenario where every file has some arbitrary cost and size. To do DRM caching, we simulate a generalization of the Landlord algorithm, called “diminishing cost Landlord” (DCLL; Section 2). Diminishing cost Landlord generalizes the Landlord setting in that file costs may decrease over time. Events may be either file access requests or cost update events. The DRM caching algorithm simulates a DCLL algorithm and follows its eviction strategy. When a package is purchased, cost update events are injected into the DCLL event sequence. Also, DRM file access events are copied into the DCLL event sequence.

¹ The exact constant depends on the model, but all variants are O(k) competitive. Also, similarly to paging, if the cache size of the online algorithm is (1 + c) times larger than that of the adversary, c > 0, then the competitive ratio drops to O(1).


The DRM caching algorithm's eviction strategy follows the cache contents of the simulated DCLL algorithm. The only missing “detail” in the description above is how to generate the DCLL cost update events. With the appropriate cost update events, the two settings (DRM and DCLL) become essentially equivalent with respect to the competitive ratio. Given this, it then suffices to prove an O(k) competitive ratio for the DCLL algorithm. For clarity of exposition, we first consider a simpler special case of the general DRM problem, and later argue that these simplifications do not impact the competitive ratio.

1.3 Other Related Work

Young [12] gives a deterministic algorithm called Landlord which generalizes paging algorithms such as LRU, FIFO and the Balance algorithm [10,5,11]. The Landlord algorithm has a competitive ratio of k/(k − h + 1), where k is the size of the cache of the Landlord algorithm and h is the size of the cache of the optimal offline algorithm. In a very recent paper, Bansal et al. [2] give the first polylogarithmic competitive randomized algorithm for the problem. The problem of caching with expiration times was studied by Kimbrel [7] and Gopalan et al. [6]. Kimbrel obtained deterministic algorithms for the general case in which pages have expiration times, varying sizes as well as varying costs, with the same competitive ratio as the Landlord algorithm. Our problem is also related to the problem of “capital investment” for production [1].

2 Diminishing Cost Landlord

The Diminishing Cost Landlord algorithm deals with the general case of standard caching where each file has an arbitrary cost and an arbitrary size, similarly to the Landlord algorithm. Specifically, a file g has a cost cost(g) and a size sg > 0. Moreover, the algorithm can handle a special type of request called a “decrease cost” request. In a “decrease cost(g, x)” request, the cost of the file g drops to x for future requests for the file (x must be no larger than the previous cost of this file). Like the Landlord algorithm, the diminishing cost Landlord algorithm maintains a variable credit(g). Credit is given to each file when it is requested; initially credit(g) is set to cost(g). On a fault on g, cached files with no credit are evicted to make room for g in the cache. The credit of a file is decreased proportionally to its size. On a decrease cost event, if the credit of the file exceeds the new cost, the credit is reduced to the new cost. We show that the competitive ratio for this generalized case is equal to the competitive ratio of the Landlord algorithm.

Lemma 1. The competitive ratio of Diminishing Cost Landlord is k/(k − h + 1). Here, k is the cache size for the DCLL algorithm, and h is the cache size of the adversary.


Algorithm 1. DCLL - Diminishing Cost Landlord
1: Upon the next event in the sequence:
2: if event is “Decrease cost (g, x)” then
3:   cost(g) ← x
4:   if credit(g) > cost(g) then
5:     credit(g) ← cost(g)
6:   end if
7: else % event must be of the form “Request for file g”
8:   if g is not in the cache then
9:     repeat
10:      for each file f in the cache do
11:        decrease credit(f) by Δ · size(f), where Δ = min_{f∈cache} credit(f)/size(f)
12:      end for
13:      Evict from the cache any subset of the files such that credit(f) = 0
14:    until there is room for g in the cache
15:    Bring g into the cache
16:    Set credit(g) ← cost(g)
17:  end if
18: end if
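A compact Python rendering of Algorithm 1 may be helpful. This is an illustrative sketch, assuming every file fits in the cache on its own and evicting all files whose credit reaches zero (the algorithm permits evicting any such subset); the class interface is our own, and the caller is assumed to announce cost(g) via decrease_cost before the first request for g.

class DCLL:
    def __init__(self, k):
        self.k = k                       # cache capacity
        self.cache = {}                  # file -> [size, credit]
        self.cost = {}                   # current cost of each file

    def decrease_cost(self, g, x):       # "decrease cost (g, x)" event
        self.cost[g] = x
        if g in self.cache:
            self.cache[g][1] = min(self.cache[g][1], x)

    def request(self, g, size_g):        # "request for file g" event
        if g in self.cache:
            return                       # hit: nothing to do
        while sum(s for s, _ in self.cache.values()) + size_g > self.k:
            delta = min(cr / s for s, cr in self.cache.values())
            for f in list(self.cache):
                s, cr = self.cache[f]
                cr -= delta * s          # decrease credit proportionally to size
                if cr <= 1e-12:
                    del self.cache[f]    # evict zero-credit files
                else:
                    self.cache[f] = [s, cr]
        self.cache[g] = [size_g, self.cost[g]]   # bring g in; credit <- cost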

Proof. The proof closely emulates the proof for the original Landlord algorithm [12]. All we need to do is consider the effect of cost updates (which cannot occur in the original problem, and thus Landlord does not handle them). We use the same potential function as used in [12]:

Φ = (h − 1) × ∑_{f∈DCLL cache} credit(f) + k × ∑_{f∈OPT cache} [cost(f) − credit(f)]    (1)

OPT above is some optimal algorithm. Since credit(f) = 0 for every f ∉ DCLL cache, we have that Φ = 0 initially. After every step of OPT and of DCLL, Φ ≥ 0. We say that an algorithm retrieves a file if the file was not in the cache and was read into the cache. We assume lazy algorithms (which do not retrieve a file before necessary). Following [12], we argue a competitive ratio of k/(k − h + 1), where k is the size of the DCLL cache and h is the size of the OPT cache, as follows:

– If OPT retrieves a file of cost c, Φ increases by at most kc.
– If DCLL retrieves a file of cost c, Φ decreases by at least (k − h + 1)c.
– At all other times Φ does not increase.

In other words, we show that for every change of state of DCLL and OPT, in reaction to any event (file request or cost update), the following is maintained:

(k − h + 1) cost(DCLL) + ΔΦ ≤ k · cost(OPT).    (2)


The notation ΔΦ denotes the change in the potential function Φ. We consider all events that may affect the potential function and analyze the change to Φ of each such action in isolation. The new element, not considered in [12], is that of cost updates. Consider an event that decreases the cost of file g from y to x < y; we now study how this impacts Φ. Let Δcost(g) = y − x and let Δcredit(g) be the difference between the original credit and the (possibly) reduced credit of file g. There are four cases to consider:

g ∉ DCLL cache and g ∉ OPT cache: Φ does not change.
g ∈ DCLL cache and g ∉ OPT cache: If credit(g) decreases, then the potential function decreases by (h − 1)Δcredit(g).
g ∉ DCLL cache and g ∈ OPT cache: Since g ∉ DCLL, credit(g) = 0 and does not change. The only change in the potential function is due to the reduced cost: Φ decreases by kΔcost(g).
g ∈ DCLL cache and g ∈ OPT cache: The first summation decreases by (h − 1)Δcredit(g). The right-hand summation decreases by k[Δcost(g) − Δcredit(g)] ≥ 0.

To conclude the proof of (2) we need to consider the effect of file evictions and file retrievals. This analysis is taken in its entirety from [12], and we give it only for completeness.

Lemma 2. Equation (2) holds in the remaining cases (that do not involve reduction of cost) [12].

Proof. Using the following case analysis:

OPT evicts a file f: Since credit(f) ≤ cost(f), Φ cannot increase.
OPT retrieves a file g: Since credit(g) ≥ 0, we find that Φ can increase by at most k cost(g).
OPT gets a hit: Φ does not change.
DCLL decreases credit(f) for all f ∈ DCLL: Since the decrease of a given credit(f) is Δ · size(f), the net decrease in Φ is Δ times (h − 1)size(DCLL) − k size(OPT ∩ DCLL), where size(X) denotes ∑_{f∈X} size(f). When this step occurs, we can assume that the requested file g has already been retrieved by OPT but is not in DCLL. Thus, size(OPT ∩ DCLL) ≤ h − size(g). Further, there is no room for g in DCLL, so that size(DCLL) ≥ k − size(g) + 1 (recall that sizes are assumed to be integers). Thus the decrease in the potential function is at least Δ times (h − 1)(k − size(g) + 1) − k(h − size(g)). Since size(g) ≥ 1 and k ≥ h, this is at least (h − 1)(k − 1 + 1) − k(h − 1) = 0.
DCLL evicts a file f: Since credit(f) = 0, Φ does not change.
DCLL retrieves a file g and sets credit(g) to cost(g): Since g was not previously in the cache (and thus credit(g) was zero), and because we can assume that g ∈ OPT, Φ decreases by k cost(g) − (h − 1)cost(g) = (k − h + 1)cost(g).
DCLL resets credit(g) to a value between its current value and cost(g): We can assume g ∈ OPT. If credit(g) changes, it can only increase. In this case, since (h − 1) < k, Φ decreases.

3 The Online DRM Algorithm

The online DRM algorithm combines two components, a purchasing strategy and an eviction strategy, which can be run and explained separately. The DRM caching algorithm simulates a diminishing cost Landlord algorithm, DCLL. The DRM algorithm does an on-the-fly transformation of its own request sequence, σDRM, into a sequence of events for the DCLL algorithm, σDCLL, by injecting file cost updates into the DCLL event sequence. The eviction strategy of the DRM algorithm is identical to the eviction strategy of the DCLL algorithm on σDCLL. The purchasing strategy, and the associated cost updates injected into the DCLL sequence, ensure that

1. The cost to the DRM algorithm is at most O(1) times the cost to the DCLL algorithm, see Lemma 4.
2. The cost to the DCLL adversary is no more than O(1) times the cost to the DRM adversary, see Lemma 5.

Lemma 5 is technically more challenging. Given that the DCLL algorithm is O(k) competitive with respect to the DCLL adversary, this gives an O(k) competitive algorithm for DRM.

3.1 Simplifications

We simplify the discussion by making the following assumptions; one can remove these assumptions and increase the competitive ratio by no more than a constant factor (see Section 3.4):

– There is no per-download cost per item, i.e., all packages for item i are of the form (ci[j], ℓi[j], 0).
– All ci[j] costs are powers of 2.
– Package offerings are Pareto-optimal, i.e., if ci[j] ≥ ci[j′], j ≠ j′, then ci[j]/ℓi[j] ≤ ci[j′]/ℓi[j′].
– The p-th package for item i is priced at ci[p] = 2^{m−1+p} for all p ≥ 1, where 2^m = ci[1] is the cost of the cheapest package for item i.

3.2 The Online DRM Purchasing Strategy

Given the simplifications above, the purchasing strategy is simple. On the first request to an item, purchase the cheapest possible package for that item. When all allowable downloads are used up, purchase the next more expensive package. For any item i, the cost of this purchase strategy is no more than twice the cost of any purchase strategy that performs the same number of downloads. The algorithm initially purchases the cheapest package for item i, p[1] = (c[1], ℓ[1], 0). Subsequently, whenever all downloads of the current package are used up, the algorithm purchases the next package in the list.
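A minimal sketch of this purchasing strategy under the simplifications of Section 3.1 follows; the package list format is ours, and the example list mirrors Table 1 below with the omitted index 5 filled in by two copies of package 4.

def purchase_cost(num_downloads, packages):
    # packages: (cost, downloads) pairs sorted from cheapest upward;
    # buy the cheapest package first, then the next one whenever the
    # current downloads are used up.
    total, covered, j = 0, 0, 0
    while covered < num_downloads:       # assumes the list is long enough
        c, l = packages[j]
        total += c
        covered += l
        j += 1
    return total

pkgs = [(1, 10), (2, 40), (4, 200), (8, 600), (16, 1200), (32, 3200)]
print(purchase_cost(55, pkgs))           # buys packages of cost 1, 2, 4 -> 7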


The algorithm purchases the t-th package in the sequence only when at least ℓ[1] + ℓ[2] + · · · + ℓ[t − 1] + 1 downloads are required. Due to Pareto-optimality, c[t]/ℓ[t] ≤ c[t − 1]/ℓ[t − 1] ≤ c[t − 2]/ℓ[t − 2] ≤ · · · ≤ c[1]/ℓ[1]. Using one package for each 1 ≤ j < t does not give sufficiently many downloads. Thus, either the adversary purchases package t at a cost of c[t], or the adversary purchases at least two packages of some index 1 ≤ j < t. Now, for two packages of index j, the adversary can substitute one package of index j + 1, at the same price and with at least the same number of downloads. Thus, we can assume that the adversary has no more than one of each of the packages 1 < j ≤ t, and we know that one of each package up to t − 1, inclusive, is insufficient. Thus, the adversary must have purchased at least one package of index ≥ t and therefore has cost ≥ c[t].

Now, the sum of costs for the DRM algorithm is ∑_{j=1}^{t} c[j] = ∑_{j=1}^{t} 2^{m−1+j} < 2c[t]. No other deterministic algorithm can do better (since this is a generalization of ski rental [4]). To summarize this section, we have:

Lemma 3. The online DRM purchasing strategy above is 2-competitive with respect to any algorithm that performs the same number of downloads for every page.

3.3 Injecting Cost Updates into the DCLL Sequence

The DRM algorithm does an on-the-fly transformation of its own request sequence, σDRM, into a sequence of events for the DCLL algorithm, σDCLL. A natural approach is to inject a DCLL cost for item i of ci[j]/ℓi[j] when the DRM algorithm purchases package pi[j]. The cost to DCLL for item i decreases over time, since the ratio of price to the number of downloads never increases. However, these costs do not induce the property that the purchase costs for the DRM algorithm are no more than O(1) times the DCLL costs. Specifically, the cost to DCLL may be infinitesimally small if the first package allows a great many downloads but the true sequence of requests is short. To correct this problem, we deal with the first package separately. The cost to DCLL of the first access to item i is 2ci[1]/ℓi[1] plus the full price of the cheapest package for item i (ci[1]). The cost to DCLL of all subsequent accesses to item i is 2ci[j]/ℓi[j], where pi[j] is the most recent package purchased for item i. This implies that when the DRM algorithm purchases package pi[1], this cost is also paid by the DCLL algorithm. When the DRM algorithm purchases package pi[j], j > 1, this cost (ci[j]) has been shared amongst the previous ℓi[j − 1] accesses to item i in the DCLL sequence. The following example (see Table 1) illustrates the costs for the DCLL sequence. We omit indices 5 and 7; such Pareto-optimal packages can be constructed on the fly, simply by using two packages of index 4 or two packages of index 6. Consider a sequence of requests for item i. The true costs to the DRM algorithm on successive downloads are as follows; the DRM algorithm is only charged when a new package is purchased.

Table 1.
Package #(j)                          1      2      4      8      6       8
Package cost (ci[j])                  1      2      4      8      32      128
#downloads (ℓi[j]) of the package     10     40     200    600    3200    128000
Package price/#downloads              1/10   1/20   1/50   1/75   1/100   1/1000

1, 0, 0, ... | 2, 0, 0, ... | 4, 0, 0, ... | 8, 0, 0, ... | 8, 0, 0, ... | 8, 0, 0, ... | 32, 0, 0, ... | 32, 0, 0, ... |

The sequence of costs for the DCLL algorithm is as follows. Cost update events are injected by the DRM algorithm to achieve these decreasing costs for item i.

12/10, 2/10, 2/10, ..., 2/10 | 2/20, 2/20, 2/20, ..., 2/20 | 2/50, 2/50, 2/50, ..., 2/50 |
2/75, 2/75, 2/75, ..., 2/75 | 2/75, 2/75, 2/75, ..., 2/75 | 2/75, 2/75, 2/75, ..., 2/75 |
2/100, 2/100, 2/100, ..., 2/100 | 2/100, 2/100, 2/100, ..., 2/100 |    (3)
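The per-access cost sequence (3) can be generated mechanically from the package list; the following sketch (our own rendering, using exact fractions) reproduces its first entries under the same assumptions.

from fractions import Fraction

def dcll_costs(packages, num_requests):
    # First access costs c[1] + 2c[1]/l[1]; each later access costs
    # 2c[j]/l[j] for the currently active package j.
    costs, j, left = [], 0, packages[0][1]
    for t in range(num_requests):
        if t > 0 and left == 0:          # current package used up
            j += 1
            left = packages[j][1]
        c, l = packages[j]
        first = Fraction(c) if t == 0 else 0
        costs.append(first + Fraction(2 * c, l))
        left -= 1
    return costs

pkgs = [(1, 10), (2, 40), (4, 200)]
print(dcll_costs(pkgs, 12)[:3])          # 12/10, 2/10, 2/10, matching (3)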

To formalize the discussion above, the DRM algorithm maintains two variables: z, the index of the current package in use, and cost(g), the cost that DRM determines DCLL will pay to retrieve file g (this cost changes over time). Discard from the DRM sequence all requests to items in the DRM cache, and let σDRM be the resulting nemesis sequence; the DRM algorithm ignores all other requests anyway, and this only reduces the cost to OPT. Then, let σDCLL be the resulting DCLL sequence; this is also a nemesis sequence for DCLL, in that no request is for a file in the cache.

1. The cost to the DRM algorithm is no more than O(1) times the cost to the DCLL algorithm.
2. The cost to the DCLL adversary is no more than O(1) times the cost to the DRM adversary.

Theorem 1. The competitive ratio of the DRM algorithm is 6k.

Lemma 4. The cost to the DRM algorithm on σDRM is no more than the cost to the DCLL algorithm on σDCLL.

Proof. At any point in time the accumulated cost to DRM on σDRM is at most the accumulated cost of DCLL on σDCLL. This follows from the construction of the DCLL cost sequence.

Lemma 5. The cost to the optimal DCLL adversary on σDCLL is no more than 6 times the cost to the DRM adversary on σDRM.


Algorithm 2. DRM
1: When item g is requested:
2: if g is not in the cache then
3:   if g has never been previously requested then
4:     set z = 1
5:     purchase package pg[1]
6:     set cost(g) ← cg[1] + 2cg[1]/ℓg[1]
7:   end if
8:   if there are no downloads left for the current package then % need to buy a new package
9:     set z = z + 1
10:    purchase package pg[z]
11:    set cost(g) ← 2cg[z]/ℓg[z]
12:  end if
13:  Inject an “update cost” event for item g with a price of cost(g) to the DCLL algorithm.
14:  Inject a request for file g to the DCLL algorithm. Copy the DCLL eviction strategy.
15: end if
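The coupling between purchases and injected cost updates can be sketched as follows; the state layout and the DCLL interface (decrease_cost, request) are illustrative assumptions of ours, not notation from the paper.

def drm_request(g, size_g, state, packages, dcll):
    # state: item -> [package index z, downloads left];
    # packages: item -> list of (c, l) pairs under the simplified model.
    if g in dcll.cache:
        return 0.0                       # cache hit: no download, no charge
    paid = 0.0
    if g not in state:                   # first request ever: buy p_g[1]
        c, l = packages[g][0]
        state[g] = [0, l]
        paid += c
        cost_g = c + 2 * c / l           # first-access DCLL cost
    else:
        if state[g][1] == 0:             # no downloads left: buy p_g[z+1]
            state[g][0] += 1
            c, l = packages[g][state[g][0]]
            state[g][1] = l
            paid += c
        c, l = packages[g][state[g][0]]
        cost_g = 2 * c / l               # subsequent-access DCLL cost
    state[g][1] -= 1                     # one download consumed
    dcll.decrease_cost(g, cost_g)        # inject the "update cost" event
    dcll.request(g, size_g)              # inject the request; copy evictions
    return paid                          # purchase cost charged to DRM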

Proof. Consider an optimal offline algorithm, A, for the DRM problem. The costs associated with such an algorithm depend only on the number of faults this algorithm has on each of the different items. Once this is set, the best package or collection of packages can be determined. The algorithm A for the DRM problem induces an algorithm B for the DCLL setting. The costs of a fault for B depend on the behavior of the online DRM algorithm. We seek to bound the price that B pays on the DCLL sequence with the DCLL costs in terms of the price that A pays on the DRM sequence.

Consider one specific item i, and let xi be the number of times that A faults on i. We can increase the cost for B on faults for i by assuming that all of B's faults occur at maximal cost in the DCLL sequence of cost updates. Thus, we have that

cost_B(σDCLL) ≤ ∑_i ∑_{t=1}^{xi} cost_{σDCLL}(t, i),

where cost_{σDCLL}(t, i) is the cost of the t-th request to file i in the DCLL sequence (determined by the online DRM algorithm via cost updates).

Now, ∑_i ∑_{t=1}^{xi} cost_{σDCLL}(t, i) is also the cost to the online DCLL algorithm on the prefix of σDCLL with xi requests to file i. Fix one file i; the cost to the online DCLL algorithm associated with file i in such a prefix is no more than three times the cost to the online DRM algorithm, associated with item i, on a prefix of the DRM nemesis sequence including xi requests to item i. This follows as the costs paid by the online DCLL algorithm on the DCLL prefix cannot exceed the costs paid by the online DRM

algorithm on the appropriate prefix by more than the cost of the next package to be purchased by the online DRM algorithm. We also know that the online DRM algorithm on a prefix of the DRM sequence does not pay more than twice the cost of any algorithm with the same number of downloads per item (Lemma 3). Let cost_DRM(σDRM, i) be the cost for the online DRM algorithm to purchase packages for item i on a prefix of the nemesis sequence σDRM with xi requests to item i. Likewise, define cost_A(σDRM, i) to be the cost of A on the same sequence. In total, we get that

cost_B(σDCLL) ≤ ∑_i ∑_{t=1}^{xi} cost_{σDCLL}(t, i)
  ≤ 3 ∑_i cost_DRM(σDRM, i)
  ≤ 6 ∑_i cost_A(σDRM, i)
  = 6 cost_A(σDRM).

The last equality follows as the cost to A depends only on the number of downloads of the various items and not on the specific sequence.

3.4 DRM – Removing the Simplifying Assumptions

Theorem 2. The DRM algorithm is Θ(k) competitive, even without the simplifying assumptions of Section 3.1.

Proof. We first consider the case where packages may also have a price per download. For this analysis we do not change the DRM algorithm. We introduce a preliminary transformation that changes the packages so that they have only an initial cost and no per-download cost; we lose a factor of 4 in the competitive ratio. For every j, the package pi[j] = (ci[j], ℓi[j], ti[j]) is modified as follows:

pi[j] = (ci[j], ℓi[j], 0)           if ci[j]/ti[j] > ℓi[j];
pi[j] = (ci[j], ci[j]/ti[j], 0)     if ci[j]/ti[j] ≤ ℓi[j].

In the first case above, the per-download cost is small enough that, over all downloads, the total cost is dominated by the initial package cost. In this case, we can ignore the per-download cost and only lose a factor of 2. In the second case, the per-download cost is large, so that if we pay it over all permissible downloads, this cost dominates the initial purchase price of the package. In this setting, we can simply purchase the package again when the sum of per-download costs reaches the initial cost, again losing a factor of 2. With either transformation, the optimal DRM algorithm only does better. The assumption on costs being powers of two is entirely standard and only increases the competitive ratio by a factor of two. Similarly, the assumption that


we have a package at every such power can be achieved by concatenating two smaller packages of cost 2^j into one more expensive package of cost 2^{j+1}, with the same average cost per download. Finally, the Pareto-optimality assumption is satisfied simply by ignoring packages that are dominated by other packages.
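A one-function sketch of the transformation at the beginning of this proof follows (ours; the guard for t = 0 avoids a division by zero that the formal statement does not need to worry about):

def drop_per_download_cost(c, l, t):
    # Replace package (c, l, t) by a package with no per-download cost.
    if t == 0 or c / t > l:
        return (c, l, 0)                 # total per-download cost t*l < c
    return (c, c / t, 0)                 # re-buy once per-download costs hit c

print(drop_per_download_cost(10, 5, 0.1))    # (10, 5, 0): first case
print(drop_per_download_cost(10, 100, 1))    # (10, 10.0, 0): second case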

4 Other Results

The DRM problem generalizes one of the variants of a problem which we call paging with a buying option. In this problem, we assume a cache of size k. All pages have a uniform size of 1 and a uniform download cost of 1. However, it is possible to buy a page for a cost of M, where M is an integer such that M > 1. A page fault on a bought page still requires downloading the page into the cache, but there is no additional charge. For this model we can prove tight bounds of 2k on the competitive ratio. Our algorithm combines methods from rent-or-buy problems with methods from standard paging (such as FIFO). Clearly, this implies a lower bound of 2k for the DRM problem. We consider two additional models, where bypassing is allowed. In the first model, a bought page does not need to be inserted into the cache. In this case we think of purchasing a page as using different means of storage for it, rather than inserting it into the cache each time it is requested. For this model we can show tight bounds of 2k + 1 on the competitive ratio. In the second model, we allow bypassing even on pages which were not bought. If a page which was not bought is requested while it is absent from the cache, the page can be either loaded into the cache for a cost of 1, or read without loading it into the cache, for the same cost. Bought pages do not need to be inserted into the cache, and incur no additional cost after they are purchased. For this model we can show tight bounds of 2k + 2 on the competitive ratio. We also consider the case where M is relatively small (M ≤ k), for which we show tight bounds of M for all three variants.

References

1. Azar, Y., Bartal, Y., Feuerstein, E., Fiat, A., Leonardi, S., Rosen, A.: On capital investment. Algorithmica 25(1), 22–36 (1999)
2. Bansal, N., Buchbinder, N., Naor, S.: Randomized competitive algorithms for generalized caching. In: The 40th Annual ACM Symposium on Theory of Computing (STOC 2008) (2008)
3. Becker, E., Buhse, W., Günnewig, D., Rump, N. (eds.): Digital Rights Management - Technological, Economic, Legal and Political Aspects. LNCS, vol. 2770. Springer, Heidelberg (2003)
4. Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis. Cambridge University Press, Cambridge (1998)
5. Chrobak, M., Karloff, H.J., Payne, T.H., Vishwanathan, S.: New results on server problems. SIAM Journal of Discrete Math. 4(2), 172–181 (1991)
6. Gopalan, P., Karloff, H., Mehta, A., Mihail, M., Vishnoi, N.: Caching with expiration times for internet applications. Internet Mathematics 2(2), 165–184 (2005)
7. Kimbrel, T.: Online paging and file caching with expiration times. Theor. Comput. Sci. 268(1), 119–131 (2001)
8. Ku, W., Chi, C.-H.: Survey on the technological aspects of digital rights management. In: Zhang, K., Zheng, Y. (eds.) ISC 2004. LNCS, vol. 3225, pp. 391–403. Springer, Heidelberg (2004)
9. Liu, Q., Safavi-Naini, R., Sheppard, N.P.: Digital rights management for content distribution. In: ACSW Frontiers 2003: Proceedings of the Australasian Information Security Workshop, Darlinghurst, Australia, pp. 49–58. Australian Computer Society, Inc. (2003)
10. Sleator, D.D., Tarjan, R.E.: Amortized efficiency of list update and paging rules. Commun. ACM 28(2), 202–208 (1985)
11. Young, N.E.: The k-server dual and loose competitiveness for paging. Algorithmica 11(6), 525–541 (1994)
12. Young, N.E.: On-line file caching. Algorithmica 33(3), 371–383 (2002)

Reoptimization of Weighted Graph and Covering Problems

Davide Bilò, Peter Widmayer, and Anna Zych

Institut für Theoretische Informatik, ETH Zürich, Switzerland
{dbilo,peter.widmayer,anna.zych}@inf.ethz.ch

Abstract. Given an instance of an optimization problem together with a good solution of that instance, reoptimization analyzes how the solution changes when the instance is locally modified. We investigate reoptimization of the following problems: Maximum Weighted Independent Set, Maximum Weighted Clique, Minimum Weighted Dominating Set, Minimum Weighted Set Cover and Minimum Weighted Vertex Cover. The local modifications we consider are the addition or removal of a constant number of edges of the graph, or of elements of the covering sets in the case of the Set Cover problem. We present the following results: 1. We provide a PTAS for reoptimization of the unweighted versions of the aforementioned problems when the input solution is optimal. 2. We provide two general techniques for analyzing the approximation ratio of weighted reoptimization problems. 3. We apply our techniques to reoptimization of the considered optimization problems and obtain tight approximation ratios in all cases.

1 Introduction

Classical optimization theory focuses on finding a good quality solution for an instance of a problem, where very little is known about the instance. In reality this is not necessarily the case. In practice, a problem instance can arise from a small modification of a previous problem instance. Thus, there might be a need to recompute the solution given some prior knowledge of a solution for a similar instance. As an example, imagine that an optimal timetable (for some objective function, under some constraints) for a given railway network is known, and that now a railway station is closed down. It is intuitively obvious that we should profit somehow from the old timetable when we try to find a new timetable. These considerations lead to the concept of reoptimization. A reoptimization problem can be built on top of any optimization problem. The goal is to benefit from the old solution when solving the modified instance. Initial results show that NP-hard optimization problems vary from the reoptimization point of view. Some are easier to reoptimize than others, and for some the prior knowledge does not help at all. In principle, however, a reoptimization version remains NP-hard, although it often becomes easier to approximate. This motivates investigating reoptimization as a useful classification tool for NP-hard problems.


The idea of reoptimization was introduced in 2003 [1] and has received much attention since then. Reoptimization of the Traveling Salesman Problem [2,5] and reoptimization of the Steiner Tree Problem [8,4] have been extensively studied; for the results we refer the interested reader to the cited papers. The hardness of reoptimization in general has also been studied [6]. Motivated by these results, we focus in this paper on investigating classical NP-hard problems and their best-known variations in the light of reoptimization. The underlying optimization problems we consider are mostly node-weighted graph problems, i.e., Maximum Weighted Independent Set, Maximum Weighted Clique, Minimum Weighted Dominating Set and Minimum Weighted Vertex Cover. We also consider the Minimum Weighted Set Cover problem. For more information about these problems we refer to the literature [7,9,14,11,13,12,3,10]. We analyze local modifications of adding or removing a constant number of edges to the input graph, or, in the case of the Minimum Weighted Set Cover problem, adding or removing a constant number of elements to the input sets. We provide a PTAS for reoptimization of the unweighted version of each aforementioned problem under the condition that an optimal solution of the unmodified instance is given. We develop general techniques for analyzing approximation ratios of weighted reoptimization problems. We apply these techniques to reoptimization of Maximum Weighted Independent Set and obtain tight approximation bounds. We provide these and other bounds in Table 1. Due to the lack of space, we prove the bounds for problems other than reoptimization of Maximum Weighted Independent Set in the full version of this paper (to be found at http://www.inf.ethz.ch/personal/zycha). The remainder of the paper is organized as follows. We formally define reoptimization in Section 2, where we also provide preliminary results and the basic notation used throughout the paper. We include in this section two general techniques for analyzing approximation ratios of reoptimization problems. In Section 3 we analyze in detail reoptimization of Maximum Weighted Independent Set. We conclude the paper with Section 4, which gives an overview of our results and points out why they are interesting.

2 Preliminaries

For an optimization problem P, let IO and IN be valid input instances, where IN = LM(IO) is obtained by applying to IO a local modification LM. By OptO and OptN we denote optimal solutions of IO and IN respectively, whereas SolO and SolN are arbitrary feasible solutions. The reoptimization problem ReoptP(LM) corresponding to P is defined as follows. An instance of ReoptP(LM) is a triple (IO, IN, SolO), where IO, IN and SolO are as described above. The solutions of ReoptP(LM) on (IO, IN, SolO) are the solutions of P on IN; thus the cost of a solution for ReoptP(LM) is the cost of the corresponding solution for the optimization problem P. We begin with a simple but important lemma.


Lemma 1. For any NP-hard problem for which any instance is reachable from an easy instance by a polynomial number of local modifications, the reoptimization version is NP-hard.

Proof. For the formal proof we refer to the general hardness results for reoptimization problems [6]. Here we give a sketch of the proof. Assume we want to find an optimum of an instance I of a problem P. We start from an instance I0 ∈ P for which we can easily find a solution Sol0 in polynomial time. Then, we construct a sequence of instances, where instance Ii+1 is a locally modified Ii, that is, Ii+1 = LM(Ii). The last element of this sequence is I. Then, by consecutive applications of the reoptimization algorithm on (Ii, Ii+1, Soli) we finally obtain an optimal solution for I. ⊓⊔

Since the instances of ReoptP(LM) provide more information, ReoptP(LM) is at most as difficult as P. The next lemma provides approximability results for some reoptimization problems.

Lemma 2. If the instances of ReoptP(LM) are of the form (IO, IN, OptO), solutions are sets, the cost of a solution is the number of its elements, and we can compute from OptO in polynomial time a feasible solution SolN such that ||SolN| − |OptN|| ≤ C for some constant C, then ReoptP(LM) admits a PTAS.

Algorithm 1. PTAS for ReoptP(LM)
Input: IO, IN, OptO, constant ǫ > 0
1: Obtain SolN
2: For each set from the solution space that contains at most C/ǫ elements, verify if it is a feasible solution
3: Let SolB be the best of the identified feasible solutions
Output: SolN := the better of SolN and SolB
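A sketch of Algorithm 1 for a maximization problem follows; the callbacks is_feasible and cost stand in for the underlying problem P on the new instance and are assumptions of this illustration.

from itertools import combinations

def reopt_ptas(universe, is_feasible, cost, sol_from_old, C, eps):
    best = set(sol_from_old)                     # Sol_N derived from Opt_O
    for r in range(int(C / eps) + 1):            # sets with <= C/eps elements
        for cand in combinations(universe, r):
            cand = set(cand)
            if is_feasible(cand) and cost(cand) > cost(best):
                best = cand                      # track Sol_B
    return best                                  # better of Sol_N and Sol_B

There are O(n^{C/ǫ}) candidate sets, so for fixed C and ǫ the enumeration runs in polynomial time, which is what the lemma requires.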

Proof. Let P be a maximization problem (for minimization problems the proof is analogous). Thus, |SolN| ≤ |OptN|. Note that if |OptN| ≤ C/ǫ for ǫ > 0, then Algorithm 1 returns an optimum. Otherwise, ||SolN| − |OptN|| ≤ C implies

|SolN| ≥ |OptN| · (1 − C/|OptN|) ≥ (1 − ǫ)|OptN|.  ⊓⊔

To proceed with the analysis further in the paper, we introduce a notion of approximation for reoptimization algorithms. Let A(I) be the output of algorithm A on input instance I.

Definition 1. Let P be a reoptimization (maximization) problem and f : R+ ∪ {0} → R+ ∪ {0} be a function. We define a parameterized family of polynomial time approximation algorithms for P as follows:

A_P(ρ, f(ρ)) := {A | A is polynomial time and c(SolO) ≥ ρ · c(OptO) ⇒ c(A(IO, IN, SolO)) ≥ f(ρ) · c(OptN)}


When it is clear from the context to which problem A refers, we omit the subscript. According to the above definition, an algorithm in class A(ρ, f(ρ)) takes as input any feasible solution SolO, and it guarantees that, given a good old solution, i.e., c(SolO) ≥ ρ · c(OptO), it returns a good solution SolN for the modified instance, meaning that c(SolN) ≥ f(ρ) · c(OptN). Below we provide two general techniques to deal with the approximation of reoptimization problems. We use these techniques further in the paper.

The general idea behind proving inapproximability results for reoptimization problems is as follows. Let RE = ReoptP(LM) be a reoptimization version of an optimization problem P, and let P be inapproximable within ratio r unless P = NP or some other highly unlikely condition holds. We want to prove that RE is not approximable within factor r′. Given A ∈ A(ρ, f(ρ)) for some function f, we construct a polynomial time algorithm B (see Algorithm 2) approximating P with a ratio g(f(ρ)) that we can estimate. To complete the proof, we have to prove the following property: f(ρ) ≥ r′ =⇒ g(f(ρ)) ≥ r.

Algorithm 2. B: approximation algorithm derived from RE
Input: instance I ∈ P
1: Build (IO, IN, SolO) ∈ RE s.t. SolO is a ρ-approximation
2: SolN := A(IO, IN, SolO)
3: Obtain a feasible solution Sol for P from SolN
Output: Sol

The general technique we use to obtain positive approximation results is the following. Since the modification is local, it alters only a part of the old instance IO of constant size. Therefore, in polynomial time we can guess the part of an optimal solution OptN for instance IN restricted to the modified part. Then, by the suboptimality property, we reduce the problem instance and use the additional information we have (SolO) to solve the problem there.

Basic notation. We denote an optimal solution of an optimization problem P on an input instance I as OptP(I). Given a simple graph G, we denote its set of vertices by V(G) and its set of edges by E(G). For a graph G without edges e ∈ E′ or with extra edges e ∈ E′, we simply write G − E′ or G + E′ respectively. We denote the set of endpoints of a set of edges E by Endpoints(E). A subgraph of a graph G induced by the nodes in S ⊂ V(G) is denoted by G[S]. A node-weighted graph (or simply weighted graph) is a triple G = (V, E, c), where the weight (cost) function c is given on the set of nodes: c : V → R+ ∪ {0}. For graph problems, the instances are graphs, thus we replace IO by GO and IN by GN. The solutions are typically sets. For unweighted problems the cost of a solution is its size: c(Sol) = |Sol|. For their weighted analogues the cost of a solution is the sum of the costs of its elements: c(Sol) = ∑_{v∈Sol} c(v). In the remaining part of the paper, we apply the techniques described above to obtain tight approximation bounds for the reoptimization of the problem defined as follows.


Definition 2. Maximum Weighted Independent Set (WMIS for short)
Instance: Graph G = (V, E, c).
Solution: An independent set of vertices in G, i.e., a subset V′ ⊆ V such that no two vertices in V′ are joined by an edge in E.
Objective: Maximize the cost of the independent set V′.

The same techniques can be applied to the following problems, where the input instance is a weighted graph G = (V, E, c) unless specified otherwise:

– Maximum Weighted Clique, where the solution is a clique in G, i.e., a subset V′ ⊆ V such that every two vertices in V′ are joined by an edge in E, and the objective is to maximize the cost of the clique V′,
– Minimum Weighted Dominating Set, where the solution is a dominating set for G, i.e., a subset V′ ⊆ V such that for all u ∈ V \ V′ there is a v ∈ V′ for which (u, v) ∈ E, and the objective is to minimize the cost of the dominating set V′,
– Minimum Weighted Vertex Cover, where the solution is a vertex cover for G, i.e., a subset V′ ⊆ V such that, for each edge (u, v) ∈ E, at least one of u and v belongs to V′, and the objective is to minimize the cost of the vertex cover V′,
– Minimum Weighted Set Cover, where the instance is a collection 𝒰 of weighted subsets of a finite set U, the solution is a set cover for 𝒰, i.e., a subset 𝒰′ ⊆ 𝒰 such that every element of U belongs to at least one member of 𝒰′, and the objective is to minimize the cost of the set cover 𝒰′.

Let k be a constant positive integer throughout the remainder of this paper. Let Ek be a set of k edges between nodes of V(GO). We consider local modifications of adding/removing Ek to/from E(GO), where Ek is implicitly given in the input. We symbolically denote these modifications Ek+ and Ek− respectively. Therefore, for each considered optimization problem P we have two reoptimization problems: ReoptP(Ek+) and ReoptP(Ek−). Both take as instances (GO, GN, SolO); the goal is to find an optimal solution in GN = GO + Ek or GN = GO − Ek respectively. In the case of the Set Cover problem, the local modifications we consider are inserting or removing k elements of U (not necessarily different) to/from k arbitrary sets of 𝒰 (also not necessarily different). Due to the limited space, we prove the approximation bounds for the problems other than WMIS in the full version of the paper (to be found at http://www.inf.ethz.ch/personal/zycha). We conclude this section with two corollaries concerning all the problems listed above.

Corollary 1. For all optimization problems P listed above and corresponding LM specified as above, the reoptimization problems ReoptP(LM) are NP-hard.

Proof. This follows from Lemma 1 and from the fact that the aforementioned graph problems are solvable in polynomial time for cliques (edge removal) and empty graphs (edge addition), and the set cover problem is polynomial time solvable for 𝒰 being a set of singletons (addition of elements) or for 𝒰 containing U as the cheapest set (removal of elements). ⊓⊔


Corollary 2. If the SolO given in the input reoptimization instance is optimal, then Algorithm 1 provides a PTAS for ReoptP(LM) for all the unweighted versions of the problems P listed above, with LM the addition or removal of k edges (or of k elements in some sets of 𝒰 in the case of the Set Cover problem).

Proof. The corollary is a trivial observation, given that in all the listed reoptimization problems the solution OptN differs from OptO by a constant number of vertices (linear in k). If OptO is feasible, we set SolN = OptO; otherwise SolN is OptO with a constant number of vertices added or removed. ⊓⊔

3 Maximum Weighted Independent Set

In this section we analyze reoptimization of the Weighted Maximum Independent Set (WMIS) problem. A special case of WMIS is MIS, where the weights of all nodes are set to 1. MIS has been proven approximable within O(|V|/(log |V|)²) [7] and not approximable within |V|^{1−ǫ} [14] for any ǫ > 0 unless P = NP. Thus both WMIS and MIS are not approximable within a constant factor unless P = NP. Interestingly, we provide tight constant approximation ratios for both reoptimization variants of WMIS.

3.1 Reoptimization of WMIS with Edges Addition

In this section we consider reoptimization of Weighted Maximum Independent Set where the modification is the addition of k edges: ReoptWMIS(Ek+). Special cases of this problem are the additions of 2, 3 and 9 edges; the corresponding tight approximation ratios are ρ/2, ρ/3 and ρ/4 respectively, where ρ ∈ [0, 1] is the quality ratio of the unmodified solution. In general, we provide for this problem a tight approximation ratio of f(ρ) = ρ/l, where l is the maximal positive integer such that k edges can fill a clique on l nodes: (l choose 2) ≤ k < (l+1 choose 2).

First, we provide an algorithm in A(ρ, ρ/l) for any 0 ≤ ρ ≤ 1. The algorithm AlgWMISkE+ we propose works in the following way. It takes as input (GO, GO + Ek, SolO) and determines the set S, independent in GO, which is the intersection of SolO and the set of endpoints of edges in Ek. Lemma 3 shows that S can be partitioned into l sets IS1, ..., ISl independent in GN. This partition can be found in a greedy manner, since any minimal partition into independent sets has at most l elements. To obtain a solution feasible in GN, S is replaced in SolO by the ISi of maximum cost. Since only a part of SolO that is independent in GN remains in the returned solution, this solution must be an independent set. Theorem 1 states that this algorithm guarantees a ρ/l approximation ratio provided c(SolO) ≥ ρ · c(OptO).

Lemma 3. Let G′ be a simple graph with V(G′) = Endpoints(Ek) and E(G′) = Ek. The nodes of G′ can be partitioned into l independent sets.

Proof. For the sake of contradiction, assume that some minimal partition has at least l + 1 independent sets. Then k is at least ½ l(l + 1), as in any minimal partition there is an edge linking any two different independent sets. Thus k ≥ (l+1 choose 2), which contradicts the choice of l. ⊓⊔

Algorithm 3. AlgWMISkE+
Input: GO, GO + Ek, SolO
1: Let S = {v1, v2, ..., vm} be the endpoints of Ek contained in SolO
2: Find a partition of S into l sets IS1, ..., ISl independent in GN
3: Let ISmax ∈ {IS1, ..., ISl} be such that c(ISmax) = max_{i=1..l} c(ISi)
Output: SolN := (SolO \ S) ∪ ISmax

Theorem 1. ∀0≤ρ≤1 , AlgWMISkE+ ∈ A(ρ, ρl ). Proof. The solution SolN returned by AlgWMISkE+ satisfies 1 1 c(SolN ) = c(SolO ) − c(S) + c(ISmax ) ≥ c(SolO ) − c(S) + c(S) ≥ c(SolO ) l l From c(OptO ) ≥ c(OptN ) and c(SolO ) ≥ ρc(OptO ) we get c(Sol′ ) ≥ ρl c(OptN ). Below we prove, that the approximation bound of

ρ l

is tight.

Theorem 2. ∀ǫ>0 ∀0≤ρ≤1 , A(ρ, ρl + ǫ) = ∅ unless P = NP. Proof. We prove the negative approximation result, using the technique described in the previous section. Let ρ, ǫ be fixed and r′ = ρl + ǫ. According to our technique, we prove that if ReoptWMIS (Ek +) is approximable within ratio r′ , then MIS is approximable within ratio r. This gives a contradiction for any constant r [9,14], therefore let r = ρl . Let G be an instance of MIS. Consider the instance of ReoptWMIS (Ek +) shown in Figure 1. Weighted graph GO is constructed from G by connecting each node in G to each node in an independent in GO set of nodes {v1 , . . . , vl }. Additionally, we add a set {vl+1 , . . . , vl+k−( ) } of separate vertices in GO . The cost is assigned 2 as follows: l

– c(x) := 1 for any x ∈ V (G) – c(vi )i=1..l := ρl |OptMIS (G)| – c(vi )i>l := 0 Let Ek be the set of edges between each pair of vertices from {v1 , . . . , vl } plus the edges between v1 and nodes in {vl+1 , . . . , vl+k−( ) }. Thus, after addition of Ek to l

2

GO nodes in {v1 . . . vl } become a clique. Solution SolO := {v1 , . . . , vl } is feasible in GO , moreover c(SolO ) = ρ|OptMIS (G)| and thus SolO is a ρ-approximation of OptO , because maximum independent set in G is an optimal solution in GO : OptO = OptMIS (G). Also in GN = GO + Ek we have OptN = OptMIS (G). Thus, algorithm A ∈ A(ρ, f (ρ)) applied to (GO , GN , SolO ) returns f (ρ)-approximation SolN of OptN . Solution Sol is obtained by removing v-nodes from SolN if there are any. To conclude, we need f (ρ) ≥ ρl + ǫ ⇒ g(f (ρ)) ≥ r. Assume f (ρ) > ρl . Then

208

D. Bil` o, P. Widmayer, and A. Zych c(vi ) =

ρ |OptMIS (G)|, ∀i l

= 1, . . . , l v3

G

v2

vl

0 vl+1

v1 0 vl+2

0 v

  k− 2l

Fig. 1. NP-hardness of approximating ReoptWMIS (Ek +)

c(SolN ) > ρl c(OptN ) ≥ ρl |OptMIS (G)|. Solution SolN does not contain any vi where i ≤ l, because taking such node excludes taking any significant other node and gives c(SolN ) = ρl |OptMIS (G)|. Therefore c(Sol) = c(SolN ) > ρl |OptMIS (G)| and the approximation ratio of algorithm B is g(f (ρ)) > ρl . The technical detail we still have to deal with is that we do not know the value of |OptMIS (G)|, and we use it when constructing GO . This is easy to overcome by constructing n = |V (G)| instances instead of one. For each m ∈ {1..n} we construct an instance described above with c(vi )i≤l = ρl m. Algorithm B applies A to every such instance and chooses the best result to be SolN . This does not affect analysis, since there must be m ∈ {1..n} for which |OptMIS (G)| = m. Therefore in the remainder of the paper we do not mention this technical detail anymore and focus the analysis on the important instance. For the formal proof and algorithm description we refer to appendix. ⊓ ⊔ 3.2

Reoptimization of WMIS with Edges Removal

In this section we consider reoptimization of WMIS problem with the local modification of removing at most k edges: ReoptWMIS (Ek −). The special cases of this problem are removals of 2, 3 and 9 edges. The corresponding tight approximation ratios are 1+2 2 , 2+3 3 and 3+4 4 respectively, where ρ ∈ [0, 1] is again the quality ρ

ρ

ρ

ratio of the unmodified solution. In general, we provide for this problem a tight ρl approximation ratio of ρ(l−1)+l , where l is again the constant positive integer l+1 l such that 2 ≤ k < 2 . ρl ). The algorithm First we provide an algorithm AlgWMISkE- ∈ A(ρ, ρ(l−1)+l takes as an input (GO , GO −Ek , SolO ) and sets S to be the set of endpoints of Ek . It returns the best of two solutions: SolO , which is still an independent set after removing some edges, or OptWMIS (GN [S]), which is the maximum independent set in the subgraph of GN induced by S. Since the size of S is constant (at most 2k), OptWMIS (GN [S]) can be found using exhaustive search method. Theorem 3 provides approximation ratio of this algorithm. Theorem 3. ∀0≤ρ≤1 , AlgWMISkE- ∈ A(ρ,

ρl ρ(l−1)+l ).

Reoptimization of Weighted Graph and Covering Problems

209

Algorithm 4. AlgWMISkEInput: GO , GN = GO − Ek , SolO 1: Let S = {v1 , v2 , . . . , vm } be the endpoints of Ek 2: SolN := OptWMIS (GN [S]) 3: If c(SolN ) < c(SolO ) then SolN := SolO Output: SolN

Proof. Now we analyze AlgWMISkE- (Algorithm 4). Assume that for a fixed ρ we have c(SolO ) ≥ ρc(OptO ). Let S ′ = OptN ∩ S, where S is the set of endpoints of Ek as defined in AlgWMISkE-. By Lemma 3, S ′ can be partitioned into l sets independent in GO . Let I = {IS1 , . . . , ISl } be such a partition. Solution l OptN \ { j=1,j =i ISj } is feasible in GO for any choice of i, thus ⎞ ⎛

c(ISj )}⎠ c(SolO ) ≥ ρc(OptO ) ≥ ρ ⎝c(OptN ) − min { {i=1..l}

j=1...l,j =i

ρ(l−1) Let α = ρ(l−1)+l . If min{i=1...l} { j=1...l,j =i c(ISj )} ≤ αc(OptN ), then SolO is  ρ(1 − α)-approximation. Otherwise min{i=1...l} { j=1...l,j =i c(ISj )} > αc(OptN ) implies l l



c(ISj ) = c(ISj ) ≥ l · αc(OptN ) (l − 1)



j=1

i=1 j=1...l,j =i

and therefore c(OptWMIS (GN [S])) ≥ c(S ′ ) =

l

c(ISj ) ≥

j=1

lα c(OptN ) l−1

Plugging in the value of α gives the desired approximation ratio for both cases. ⊓ ⊔ Below we prove, that the approximation bound of

ρl ρ(l−1)+l

is tight.

ρl Theorem 4. ∀ǫ>0 ∀0 0, ρ ≤ 1 be fixed and r′ = ρ(l−1)+l + ǫ. According to our technique, we prove that if ReoptWMIS (Ek −) is approximable within ratio r′ , then MIS is approximable within ratio r. This gives a contradicρl ). Let G be an instance tion for any constant r [9,14], therefore let r = ǫ(1 + l−ρ of MIS. Consider the instance of ReoptWMIS (Ek −) shown in Figure 2. Weighted graph GO contains l − 1 copies of G and additional nodes z, v1 , . . . , vl+k−( ) . Nodes {v1 , . . . , vl } form an l-clique in GO , and node v1 is connected 2 with all the nodes vi where i > l. Node z is connected to all nodes in GO . The cost function is given as follows: l

210

D. Bil` o, P. Widmayer, and A. Zych l − 1 copies of G G

G

G

v3 z l−1 c(z) = ρ l ρ−1 |OptMIS (G)| l−1 |OptMIS (G)|, ∀i = 1, . . . , l c(vi ) = ρ ρ−1

v2

vl

0 vl+1

v1 0 vl+2

v

0

  k− 2l

Fig. 2. NP-hardness of approximating of ReoptWMIS (Ek −)

– – – –

c(x) := 1 for each x in some copy of G l−1 |OptMIS (G)| c(vi )i=1...l := ρ l−ρ c(vi )i>l := 0 l−1 c(z) := ρl l−ρ |OptMIS (G)|

Ek is the set of edges interconnecting clique {v1 . . . vl } plus the edges between v1 and {vl+1 . . . vl+k−( ) }. Thus, after removing Ek , set {v1 , . . . , vl } is an inl

2

dependent set in GN . Solution SolO := {z} is trivially an independent set in GO . An optimal solution in GO is an independent set containing l − 1 copies of OptMIS (G) and one of the nodes in {v1 , . . . , vl }. Such solution OptO satisfies c(OptO ) = (l−1)|OptMIS (G)|+ρ

l(l − 1) l−1 |OptMIS (G)| = |OptMIS (G)| ≥ c(z). l−ρ l−ρ

Thus solution SolO is a ρ-approximation of OptO . An optimal solution in GN = GO − Ek contains l − 1 copies of OptMIS (G) and the whole set {v1 , . . . , vl }, therefore c(OptN ) = (l − 1)|OptMIS (G)| + lc(v1 ) = (l − 1)(1 +

ρl )|OptMIS (G)|. l−ρ

After applying an algorithm from A(ρ, f (ρ)) to (GO , GN , SolO ) solution Sol is obtained from SolN by removing v-vertices: Sol′ := SolN \ {v1 , . . . , vl+k−( ) } and l

2



returning the subset of Sol with highest cost belonging to one of the copies of G. Now let us assume f (ρ) ≥ r′ , what implies c(SolN ) ≥ (

ρl + ǫ)c(OptN ). ρ(l − 1) + l

Reoptimization of Weighted Graph and Covering Problems

Solution SolN can not contain z, because c(SolN ) > Since c(SolN ) ≥ (l − 1)

ρl ρ(l−1)+l c(OptN )

211

= c(z).

ρl ρl |OptMIS (G)| + ǫ(l − 1)(1 + )|OptMIS (G)|, l−ρ l−ρ

there must hold c(Sol′ ) = c(SolN \ {v1 . . . vl+k−( ) }) = c(SolN ) − c({v1 . . . vl+k−( ) }) 2 2 l

l

≥ ǫ(l − 1)(1 + Thus, c(Sol) ≥ ǫ(1 +

ρl l−ρ )|OptMIS (G)|

ρl )|OptMIS (G)|. l−ρ which implies g(f (ρ)) ≥ r.

⊓ ⊔

Corollary 3. Reoptimization of Maximum Weighted Clique (WMC) admits the ρl following tight approximation bounds: ρl for ReoptWMC (Ek −) and ρ(l−1)+l for ReoptWMC (Ek +). Proof. This follows from obvious L-reductions between following problems: – WMIS and WMC – ReoptWMIS (Ek +) and ReoptWMC (Ek −) – ReoptWMIS (Ek −) and ReoptWMC (Ek +) These reductions are based on the fact, that a maximum independent set in G is a maximum clique in the complement graph G. ⊓ ⊔

4

Conclusions

We have investigated classical weighted NP-hard problems in the light of reoptimization. For the graph problems, we have considered the local modification of adding/removing k edges to/from the input graph, and for the set cover problem adding/removing k elements of the universe set to/from the covering sets. We have presented general techniques for analyzing approximation of reoptimization problems. We have applied our techniques to concrete problems and obtained tight approximation bounds in all the cases we have considered. We have also provided a PTAS for the corresponding unweighted versions under the condition that an optimal solution of unmodified problem instance is given in input. Table 1 summarizes the achieved approximability results. We have obtained a variety of interesting results. In fact, while for some reoptimization problems the knowledge of a good (even optimal) solution of the unmodified instance does not add any new information when we consider the modified instance, for some other reoptimization problems we can really benefit of such an additional knowledge. Moreover, while for some problems (see WMIS and WMC) the quality of the new solution depends on the quality of the old one as well as on k, for the other problems, only the quality of the old solution affects the quality of the new one. Interestingly, for the WMVC problem, where

212

D. Bil` o, P. Widmayer, and A. Zych

Table 1. Table provides the obtained approximation ratios. WMIS stands for Maximum Weighted Independent Set, WMC stands for Maximum Weighted Clique, WMDS stands for Minimum Weighted Dominating Set, WMSC stands for Minimum Weighted Set Cover, and WMVC stands for Minimum Weighted Vertex Cover.    The value ρ de. All the bounds notes the quality of the solution, while l is such that 2l ≤ k < l+1 2 are tight under the condition that P=NP (for WMVC the condition is that there is no polynomial time algorithm for the unweighted version of the problem which guarantees a constant approximation ratio better than 2). WMIS addition of k elements removal of k elements

WMC

ρ ρl l ρ(l−1)+l ρl ρ ρ(l−1)+l l

WMDS WMSC O(log |V (G)|) O(log |V (G)|) 1+ρ 1+ρ

3 2

WMVC 1 + ρ2 if ρ = 1, 2 if ρ > 1

Fig. 3. Plots of the tight approximation ratios achieved for reoptimization of WMIS under the local modification of adding (left side) and removing (right side) k edges, respectively

the local modification is the removal of k edges, we can benefit of the solution of the unmodified instance only if it is optimal, otherwise the problem is as difficult as the underlying optimization problem for unweighted graphs. In Figure 3 we present two plots. They illustrate the dependency of the approximation ratio of respectively ReoptWMIS (Ek +) and ReoptWMIS (Ek −) on the quality ρ of the old solution and the size k of the modification. They show clearly, how removing edges is simpler then adding them.

References 1. Archetti, C., Bertazzi, L., Speranza, M.G.: Reoptimizing the traveling salesman problem. Networks 42(3), 154–159 (2003) 2. Ausiello, G., Escoffier, B., Monnot, J., Paschos, V.T.: Reoptimization of minimum and maximum traveling salesman’s tours. In: Arge, L., Freivalds, R. (eds.) SWAT 2006. LNCS, vol. 4059, pp. 196–207. Springer, Heidelberg (2006)

Reoptimization of Weighted Graph and Covering Problems

213

3. Bar-Yehuda, R., Even, S.: A local-ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics 25, 27–46 (1985) 4. Bil` o, D., B¨ ockenhauer, H.-J., Hromkoviˇc, J., Kr´ aloviˇc, R., M¨ omke, T., Widmayer, P., Zych, A.: Reoptimization of steiner trees. In: Gudmundsson, J. (ed.) SWAT 2008. LNCS, vol. 5124, pp. 258–269. Springer, Heidelberg (2008) 5. B¨ ockenhauer, H.-J., Forlizzi, L., Hromkoviˇc, J., Kneis, J., Kupke, J., Proietti, G., Widmayer, P.: Reusing optimal TSP solutions for locally modified input instances (extended abstract). In: Fourth IFIP International Conference on Theoretical Computer Science—TCS 2006. IFIP Int. Fed. Inf. Process, vol. 209, pp. 251–270. Springer, New York (2006) 6. B¨ ockenhauer, H.-J., Hromkovic, J., M¨ omke, T., Widmayer, P.: On the hardness of reoptimization. In: Geffert, V., Karhum¨ aki, J., Bertoni, A., Preneel, B., N´avrat, P., Bielikov´ a, M. (eds.) SOFSEM 2008. LNCS, vol. 4910, pp. 50–65. Springer, Heidelberg (2008) 7. Boppana, R., Halld´ orsson, M.M.: Approximating maximum independent sets by excluding subgraphs. BIT 32(2), 180–196 (1992) 8. Escoffier, B., Milanic, M., Paschos, V.T.: Simple and fast reoptimizations for the Steiner tree problem. Technical Report 2007-01, DIMACS (2007) 9. Hastad, J.: Clique is hard to approximate within n1−ǫ . In: Foundations of Computer Science -FOCS 1996, pp. 627–636 (1996) 10. Hastad, J.: Some optimal inapproximability results. In: STOC 1997: Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pp. 1–10. ACM, New York (1997) 11. Johnson, D.S.: Approximation algorithms for combinatorial problems. In: STOC 1973: Proceedings of the fifth annual ACM symposium on Theory of computing, pp. 38–49. ACM, New York (1973) 12. Monien, B., Speckenmeyer, E.: Ramsey numbers and an approximation algorithm for the vertex cover problem. Acta Inf. 22(1), 115–123 (1985) 13. Raz, R., Safra, S.: A sub-constant error-probability low-degree test, and a subconstant error-probability pcp characterization of np. In: STOC 1997: Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pp. 475–484. ACM, New York (1997) 14. Zuckerman, D.: Linear degree extractors and the inapproximability of max clique and chromatic number. In: STOC 2006: Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, pp. 681–690. ACM, New York (2006)

Smoothing Imprecise 1.5D Terrains⋆ Chris Gray1 , Maarten L¨ offler2 , and Rodrigo I. Silveira2 1

Department of Computer Science, TU Braunschweig, Germany [email protected] 2 Dept. Computer Science, Utrecht University, The Netherlands {loffler,rodrigo}@cs.uu.nl

Abstract. We study optimization problems in an imprecision model for polyhedral terrains. An imprecise terrain is given by a triangulated point set where the height component of the vertices is specified by an interval of possible values. We restrict ourselves to 1.5-dimensional terrains: an imprecise terrain is given by an x-monotone polyline, and the y-coordinate of each vertex is not fixed but constrained to a given interval. Motivated by applications in terrain analysis, in this paper we present two linear-time approximation algorithms, for minimizing the largest turning angle and for maximizing the smallest one. In addition, we also provide linear time exact algorithms for minimizing and maximizing the sum of the turning angles.

1

Introduction

Terrain modeling is a central task in geographical information systems (GIS). Terrain models can be used in many ways, for example for visualization or analysis purposes to compute features like watersheds or visibility regions [4]. One common way to represent a terrain is by means of a triangulated irregular network (TIN): a planar triangulation with additional height information on the vertices. This defines a bivariate and continuous function, defining a surface that is often called a 2.5-dimensional (or 2.5D) terrain. The height information used to construct TINs is often collected by airplanes flying over the terrain and sampling the distance to the ground, for example using radar or laser altimetry techniques, or it is sometimes obtained by optically scanning contour maps and then fitting an approximating surface. These methods often return a height interval rather than a fixed value, or produce heights with some known error bound. For example, in high-resolution terrains distributed by the United States Geological Survey, it is not unusual to have vertical errors of up to 15 meters [12]. However, algorithms in computational geometry often assume that the height values are precise. ⋆

This research was partially supported by the Netherlands Organisation for Scientific Research (NWO) through the project GOGO and project no. 639.023.301. A preliminary version of this paper was presented at the 24th European Workshop on Computational Geometry (EuroCG 2008), under the title “Smoothing imprecise 1-dimensional terrains”.

E. Bampis and M. Skutella (Eds.): WAOA 2008, LNCS 5426, pp. 214–226, 2009. c Springer-Verlag Berlin Heidelberg 2009 

Smoothing Imprecise 1.5D Terrains

(a)

215

(b)

Fig. 1. (a) An imprecise terrain. (b) A possible realization of the real terrain.

In order to deal with this imprecision in terrains, one can use a more involved model that takes imprecision into account. Gray and Evans [5] propose a model where an interval of possible heights is associated with every vertex of the triangulation. Fig. 1 shows an example of an imprecise terrain, and a possible realization of the real terrain. Kholondyrev and Evans [8] also study this model. Silveira and Van Oostrum [11] also allow moving vertices of a TIN up and down to remove local minima, but do not assume bounded intervals. In this paper we use the same model as in [5,8]: each vertex has a height interval. The imprecision model creates some freedom in the terrain: any choice of a height for each vertex, as long as it is within its height interval, leads to a realization of the imprecise terrain. This leads naturally to the problem of finding one that is best (i) according to the characteristics of the real terrain being modeled, and (ii) depending on the way the model will be used. As an example, consider a terrain model of an area that is known to have a smooth topography, like the dunes of some desert or hills in a relatively flat area. Then based on the information known about the real terrain, trying to find a realization that is smooth becomes important. On the other hand, terrain models are often used for terrain analysis, such as water run-off simulation. When simulating the way water flows through a terrain, artificial pits create artifacts where water accumulates, affecting the simulation. Therefore minimizing the number of pits is another criterion of interest. There are several interesting applications for smoothing terrains. A first one, already mentioned, is because some additional information about the topography of the terrain may be known. Other applications include visualization (smooth shapes usually look better), compression and noise reduction. Smoothing of terrains has been previously studied in the context of grid terrains [12,7], where techniques from image processing can be applied, but not, to our knowledge, for imprecise TINs. In a TIN, a smooth terrain implies that the spatial angles between triangle normals are small. Measures on spatial angles are known to be important for several GIS applications [2,3,13]. In our context, we can try to find a height value for each vertex, restricted by the intervals, such that the resulting terrain minimizes some function of the spatial angles that models the notion of smoothness, like the largest spatial angle or the sum of all spatial angles. We study several measures related to spatial angles, but only for 1.5D terrains. Even though the 2.5D version is clearly more interesting, a simpler model is easier

216

C. Gray, M. L¨ offler, and R.I. Silveira

to handle and gives insight into the difficulties of 2.5D terrains. As will become clear, this restricted model is still challenging enough and the results can serve as building blocks for the 2.5D version. Some comments on this are given in the conclusions. Moreover, 1.5D terrains are interesting in their own right, and have been recently studied in relation to guarding problems [1,9]. Our main result, presented in Sect. 3, is an approximation algorithm for minimizing the largest turning angle in the terrain. A similar algorithm can be used for the opposite problem of maximizing the smallest turning angle, leading to the worst case realization of the terrain, and is discussed in Sect. 4. Section 5 deals briefly with exact algorithms for minimizing and maximizing the sum of the turning angles. All our algorithms run in linear time. Due to space limitations, most results presented here are only sketched on a global or intuitive level; the interested reader can find more details in the full version of this paper [6].

2

Preliminaries

In this section we introduce imprecise 1.5D terrains more formally, together with a number of definitions and concepts that are used throughout the remainder of the paper. A 1.5D terrain is an x-monotone polyline with n vertices. An imprecise 1.5D terrain is a 1.5D terrain with a y-interval at each vertex rather than a fixed ycoordinate. A realization of a terrain is given by a sequence of n y-coordinates, one for each interval. Each y-coordinate must be within its corresponding interval. In the discussion of the algorithms below, we sometimes treat realizations of 1.5D terrains directly as x-monotone polylines. We sometimes refer, with some abuse of notation, to the vertices and edges of the realization. In this context, we call a vertex left-turning when this polyline, seen from left to right, turns to the left (upwards). Similarly, we call the vertex right-turning if it turns to the right (downwards), and straight if it does not change direction. We call an edge left-turning if both its endpoints are left-turning vertices, right-turning if both its endpoints are right-turning, or a saddle edge if one of its endpoints is left-turning and the other is right-turning. In a realization of a 1.5D terrain, we say that a vertex is external if it lies on one of the endpoints of its imprecision interval, and internal otherwise. The external vertices of a realization partition it into a sequence of chains: each chain starts at an external vertex, then goes through a (possibly empty) sequence of internal vertices, and finally ends at an external vertex again (see Fig. 2(a)). The leftmost and rightmost chains might be only half-chains if the leftmost or rightmost vertices of the terrain are not external. Chains can be further subdivided into subchains: a subchain consists of only left-turning or right-turning vertices (Fig. 2(b)). Subchains are connected by saddle edges into chains. Chains are connected by shared vertices into the final terrain.

Smoothing Imprecise 1.5D Terrains

C1

C2 (a)

C3

C4

C1

C2 (b)

C3

217

C4

Fig. 2. (a) Division of a realization of terrain into chains {C1 , · · · , C4 } by external vertices (white circles). (b) Division of chains into subchains. Consecutive subchains in the same chain share a saddle edge.

3

Minimizing the Largest Turning Angle

In this section, we consider the problem of minimizing the largest angle in the terrain. Computing the optimal terrain under this measure exactly is difficult, since it involves computing the inverse of a non-algebraic function that cannot be computed under the real-RAM model commonly used in computational geometry. Therefore we present an algorithm to compute an approximate solution. In fact, we present an algorithm for a somewhat more general problem: we minimize the complete sorted vector of turning angles in lexicographical order. For a given terrain T , we denote this vector by V (T ). Figure 3 shows an example of a path that minimizes the angle vector V (T ). It can be shown that the solution is unique, unless it consists of all vertices on a straight line. A solution T is said to be an ε-approximation of the optimal solution T ∗ if its vector of angles V is at most ε larger at every position, that is, if Vi (T ) ≤ Vi (T ∗ ) + ε. Our solution incorporates approximation in different stages; for this purpose we define a smaller value ε′ = ε/4. We also define k = ⌈ 2επ′ ⌉; the algorithm relies on dividing the possible directions of edges in the terrain into k sectors, and on subdividing the terrain into independent pieces of length O(k). Finally, we define δ = ε′ /3k. This is the factor to which we must approximate certain portions of the problem that appear later. We start by defining subproblems. In a subproblem, the objective is to smooth only a certain portion of the imprecise terrain. Our algorithm solves many small

Fig. 3. Optimal terrain for min max angle

218

C. Gray, M. L¨ offler, and R.I. Silveira

subproblems, and tries to combine them into a global solution. We define a subproblem S(pi , di , pj , dj ) on two fixed points pi on the ith interval, and pj on the jth interval, and two sectors di and dj , which bound the possible directions in which the solution may leave pi and enter pj . Within this, we want to compute the locally optimal terrain T ∗ (pi , di , pj , dj ) that leaves pi in a direction from di , enters pj in a direction from dj , and otherwise optimizes the sorted vector of angles. In the subproblems we consider, pi and pj are always external vertices, except for possibly p1 and pn . Furthermore, di and dj are restricted to be one out of k possible sectors. If T ∗ (pi , di , pj , dj ) has no external vertices other than pi and pj , we call it free. In this case we also call the subproblem S(pi , di , pj , dj ) free. We call a subproblem δ-free if it is free and any δ-approximation of the optimal solution is also free. We show that there is a good approximation of the optimal solution T ∗ that can be split into short δ-free subproblems, and provide an algorithm to efficiently compute it. 3.1

Properties of Optimal Terrains

First, we go over some properties of the shape of optimal terrains. Such terrains tend to have many consecutive small turns in the same direction, resulting in a smooth polyline. Let T ∗ be an optimal terrain for some subproblem. Let C ∗ be some chain of T ∗ . We establish a number of lemmata that should be reasonably intuitive, the formal proofs of which can be found in the full version [6]. Lemma 1. C ∗ has at most two subchains. If a chain has more subchains, we could make a shortcut somewhere. Because of this lemma, every optimal chain has at most one saddle edge. Lemma 2. All vertices of C ∗ have the same turning angle, except possibly for one of the vertices of the saddle edge, which can have a smaller angle. If we have any other realization C of the same part of the imprecise terrain, then we can morph C to C ∗ : we can continuously deform the terrain such that the vector of turning angles V (C) only decreases (lexicographically). Lemma 3. Any chain C can be morphed into the optimal chain C ∗ . This implies that there cannot be any locally optimal terrains that are not globally optimal. Because of this, we have the following important observation, which states that the imprecision intervals do not matter for internal vertices. Lemma 4. Given fixed directions for the first and last edges of a chain C ∗ , if we replace the imprecision intervals of the internal vertices of C ∗ by vertical lines (that is, if we allow any y-coordinate for each vertex), then C ∗ is still the optimal chain. Finally, we observe the following property, related to approximation. It states that any long chain can be replaced by a number of shorter chains, without damaging the vector of turning angles too much. Recall that k = ⌈ 2π ε ⌉. Figure 4 gives the intuition behind the proof.

Smoothing Imprecise 1.5D Terrains

(a)

219

(b)

Fig. 4. (a) The optimal solution. (b) An ε-approximation. The middle part can be moved down as far as we want, since the turning angles at the saddles can never become worse than ε.

Lemma 5. Suppose C ∗ consists of more than 3k internal vertices. Then there exists a sequence of j chains C1 , C2 , . . . , Cj such that each Ci has at most 2k vertices, C1 starts at the same external vertex and in the same direction as C ∗ and Cj ends at the same external vertex and in the same direction as C ∗ such that the vector of angles V (C1 ∪ C2 ∪ . . . ∪ Cj ) is at most ε worse than V (C ∗ ). 3.2

Solving the Subproblems

We are now ready to give an algorithm to solve the subproblems. If a subproblem is free, its optimal solution consists of only one chain, and by Lemma 1 this has at most two subchains. Because of the algebraic difficulties described in the beginning of this section, we cannot solve a subproblem optimally. However, we can approximate it arbitrarily well. We present a δ-approximation algorithm for free subproblems that runs in O(m log 1δ ) time, where m is the number of vertices in the subproblem. The solution relies on a binary search on the worst angle, and on the following decision algorithm. Given a value θ, we ask the decision question: is there a solution terrain T for this subproblem that uses only angles of θ or less? To decide this, we try to construct a solution that consists of a curve from pi and a curve from pj with consecutive angles of θ, and a straight line segment to connect them. We can choose whether the terrain starting from pi turns left or right, and the same for pj , resulting in four different combinations. We test them all. For such a combination, we still have some freedom in which direction (from the sector di ) we leave pi exactly. This results in a subinterval of the (i + 1)th vertical line. Here we make a, say, right turn of θ. Then we reach the (i + 2)th vertical line at a different subinterval, and so on. Figure 5 shows two examples of the same

220

C. Gray, M. L¨ offler, and R.I. Silveira

θ

θ pj

di pi

dj

pj

di pi

dj

θ (a)

θ (b)

Fig. 5. (a) This value of θ is not feasible. (b) This value of θ is feasible. The dotted line shows the tangent between the two inner curves that realizes a terrain with worst angle θ.

subproblem for different values of θ. We call the regions between the curves that begin at extreme directions horns. To test feasibility, consider the inner curves of both horns. If they intersect, θ is not feasible. If not, we compute their inner tangent. The two curves have two bitangents; we use the one with tangent points closest to pi and pj , see Figure 5. If this tangent stays within both horns, θ is feasible. If not, it is not feasible (we test feasibility for all combinations of left-turning and right-turning paths— and if any is feasible, θ is feasible). The reason is that the tangent touches the curves at vertices, thus it directly gives a correct solution. On the other hand, if there exists a solution with worst angle θ, then we find it, since there is always a solution with this shape. Define the shortest θ-terrain as the shortest terrain with worst turning angle θ. Then this terrain has turning angles of exactly θ at its ends, and a straight part in the middle. Using this decision routine, a simple binary search on θ yields the following result: Lemma 6. Given a free subproblem S(pi , di , pj , dj ), we can approximate the locally optimal solution T ∗ (pi , di , pj , dj ) of this problem within a factor δ in O((j − i) log 1δ ) time.

It is important to note that even though the subproblem we are solving (and therefore its optimal solution) is free, this does not guarantee that the approximate solution we compute stays within the imprecision intervals. In other words, the returned solution might be invalid. However, our algorithm is guaranteed to produce a valid solution for a δ-free problem.

3.3

Main Algorithm

Now we are ready to describe the algorithm for the original problem. The algorithm itself is very simple; the correctness analysis is more involved. Each subproblem S(pi , di , pj , dj ) is defined on two points and two sectors. Consider all subproblems where j − i ≤ 3k. There are O(k 3 n) such problems,

Smoothing Imprecise 1.5D Terrains

221

since there are n possible choices for i, 3k possible choices for j, 2 possible choices for both pi and pj (each can be either at the top or at the bottom of its interval), and k possible choices for both di and dj . By Theorem 6, each subproblem can be approximated within an error of δ in O(k log δ1 ) time, because j − i ≤ 3k. We approximate all subproblems of size at most 3k within a factor of δ = O(ε2 ), so we spend O(nk 4 log k) time in total. Next, we discard any solutions that do not respect the imprecision intervals (we establish that those subproblems are not δ-free). Then we invoke dynamic programming to compute the best concatenation of subproblems, processing them from left to right and computing for each position (i, pi , di ) the best solution so far by minimizing over all possible placements of the previous point. We claim that the resulting terrain is an ε-approximation for the original problem; the analysis is done in the next subsection. Note that the optimal terrain does not need to have external first and last vertices. Thus if j < 3k, it is possible that there is no previous external interval. To solve these special subproblems, where i = 1 or j = n, we exploit a property about the beginning and end of an optimal terrain. In particular, we prove in the full version that if the optimal terrain does not start with an external vertex, then all the vertices between the first vertex and the first external vertex lie on a straight line. A symmetric argument holds for the last vertex. Using this property, the algorithm solves the subproblems from the beginning (internal) interval to vj , with ending direction sector dj , by checking if there exists a line from vj that hits the first interval and stays within all intermediate intervals. If that is not the case, the subproblem can be discarded because we know there must be some other subproblem that contains the line in an optimal solution. Checking if such a line exists can be done in linear time. 3.4

Correctness Analysis

Let T ∗ be the optimal terrain. We argue the existence of several other terrains, each of which is a close approximation of T ∗ and has certain properties. Eventually, we establish that one of the terrains in the class that we encounter in the algorithm is also a close approximation of T ∗ . By Lemma 5, we know that there exists a different terrain S that approximates T ∗ within ε′ such that all chains of S are at most 3k long. Let S ∗ be the optimal terrain among all terrains for which all chains are at most 3k long. Clearly, S ∗ also approximates T ∗ within ε′ . The terrain S ∗ is partitioned into short chains. For each external vertex in S ∗ , we divide its circle of directions into 2k sectors, and we locate its two outgoing edges in this sector set. We fix these sectors. Observe that any terrain C that respects these sectors is within an error of ε′ from S ∗ at these vertices. Therefore, if we take the terrain apart into a sequence of independent subproblems, each restricted by two vertices and sectors, and we solve each subproblem optimally, this results in a terrain C ∗ that is an ε′ -approximation of S ∗ . These optimal subsolutions may again have external vertices at places where S ∗ had none. In this case, we again fix these vertices and their sectors, and subdivide the terrain

222

C. Gray, M. L¨ offler, and R.I. Silveira

further. Each vertex gets fixed at most once, and in the other steps can only get a better turning angle, so the terrain remains an ε′ -approximation of S ∗ . Eventually we reach a terrain D with the property that D is a concatenation of smaller subproblems, each of length at most 3k such that each subproblem, defined on a pair of vertices and a pair of directions, is locally optimal and free. Furthermore, D is within ε′ from S ∗ . Let D∗ be the optimal terrain with these properties. Then also D∗ is within ε′ from S ∗ , and therefore within 2ε′ from T ∗ . Next, we show that there exists a terrain E that is a concatenation of δ-free subproblems, that approximates D∗ within 2ε′ . Our algorithm encounters and approximately solves all δ-free subproblems, so it computes the optimal terrain E ∗ of this form. E ∗ is better than E, so it also approximates D∗ within 2ε′ , and therefore T ∗ within ε. Let S(pi , di , pj , dj ) be a free subproblem, and let R∗ be its optimal solution. We define the error of a vertex in a terrain as the difference between its turning angle and the corresponding turning angle in T ∗ . We define the error vector e(pi , di , pj , dj ) as the sorted vector of errors in R∗ . Lemma 7. If a free subproblem S(pi , di , pj , dj ) has an error e, then it is either δ-free, or there exists a sequence of smaller subproblems S(pi0 , di0 , pi1 , di1 ), S(pi1 , di1 , pi2 , di2 ), . . . , S(pih −1 , dih −1 , pih , dih ), where i0 = i and ih = j, that are all free and have errors of at most e + δ, and such that the turning angles of their external vertices have an error of at most e + ε′ . This lemma implies that we can actually split a subproblem into δ-free subproblems, not just into free problems, at the cost of a slightly worse error vector. Corollary 1. If a free subproblem S(pi , di , pj , dj ) of length m has an error e, it is either δ-free, or there exists a sequence of smaller subproblems S(pi0 , di0 , pi1 , di1 ), S(pi1 , di1 , pi2 , di2 ), . . . , S(pih −1 , dih −1 , pih , dih ), where i0 = i and ih = j, that are all δ-free and have errors of e + mδ, and such that the turning angles of their external vertices have an error of at most e + (m − 1)δ + ε′ . Now, consider D∗ . We construct the terrain E by replacing each free subproblem in D∗ by a sequence of δ-free subproblems, according to Corollary 1. The errors ε′ , this in E may have gotten at most 3kδ + ε′ worse than in D∗ . If we set δ = 3k is exactly what we need. Theorem 1. Given an imprecise 1.5D terrain, a realization of the terrain that minimizes the vector of turning angles lexicographically can be computed approximately within an error of ε in O( εn4 log 1ε ) time.

4

Maximizing the Smallest Turning Angle

In this section, we discuss a different variant of the problem: instead of minimizing the largest angle, we maximize the smallest angle. This results in a terrain that is “as rough as possible” given the imprecision constraints. An example is shown in Fig. 6. While this is not something one would want to do in practice, it does give some insight into the worst case for the terrain at hand.

Smoothing Imprecise 1.5D Terrains

223

Fig. 6. Optimal terrain for max min angle.

As the example shows, the optimal solution can use internal vertices, because moving such a vertex in either direction decreases one of the neighboring vertices. In fact, there could be multiple consecutive such internal vertices, that all keep each other in balance. Therefore, as in Sect. 3, we cannot solve the problem exactly, but we can solve it approximately in roughly the same way. Here we just list the differences; we give a more detailed description in the full version. Because of the nature of the problem, there are some important differences. When maximizing the smallest angle, an optimal terrain will try to zig-zag as much as possible, as opposed to the optimal terrain in the previous section, which tries to avoid zig-zagging. This actually makes the problem easier: whenever the terrain is able to zig-zag, it will use external vertices, which results in many very short chains without internal vertices. However, there are situations where zig-zagging is not possible, and the optimal terrain must use internal vertices to balance the angles, as is shown in Fig. 7. Such chains with internal vertices can still have at most two subchains, and still have the same turning-angle almost everywhere. However, such a situation is only locally optimal: if we would move the vertices a small distance, the solution would get worse, but if we would move them a large distance, the solution would get better again. This means that Lemma 4 does not hold here. However, we can still compute (δ-approximations of) the locally-optimal chains. Some parts of the algorithm become simpler. We do not need to split long chains into shorter chains anymore, since any chain that is longer than 2k must have turning angles of less than ε. But if this is the case in the optimal

Fig. 7. A chain. Moving any vertex down makes its own angle smaller, moving it up makes its neighbors’ angles smaller. Note however, that if we would be able to move a vertex very far down (or up), all angles involved would get larger.

224

C. Gray, M. L¨ offler, and R.I. Silveira

(a)

(b)

Fig. 8. (a) A terrain minimizing the sum of its turning angles. (b) A terrain maximizing the sum of its turning angles.

solution, since we are maximizing the smallest angle, any terrain would be an ε-approximation. Also, the leftmost and rightmost vertices are always external here, which simplifies the situation. Other than these small differences, the same algorithm works. We can still solve the subproblems in O(m log δ1 ) time, although the actual conditions become a bit different (a vertex of the saddle edge should now have a turning angle that is larger than the rest, rather than smaller). The dynamic programming based on the subproblems goes through unchanged. Theorem 2. Given an imprecise 1.5D terrain, a realization of the terrain that maximizes the vector of turning angles lexicographically can be computed approximately within an error of ε in O( εn4 log 1ε ) time.

5

Minimizing and Maximizing the Total Turning Angle

Another measure of smoothness or roughness that we may wish to optimize is the sum of the turning angles of a terrain. Figure 8 shows the optimal terrain in these cases for the same example that was used in the previous sections. We show that the terrain with the minimum sum of turning angles is related to the shortest path that respects the intervals. We use this observation to design a linear-time algorithm that computes a terrain that minimizes this sum. We also observe that the terrain that maximizes the total turning angle has the property that all the vertices are external. This property allows us to design a linear-time algorithm to find such a terrain. We begin by looking at the terrain that minimizes the total turning angle. The first observation that we can make is that the total turning angle of a left-turning (or right-turning) path between two vertices depends only on the starting and ending directions, and is the turning angle between those directions. In other words, the location of the vertices in a left-turning chain does not matter as long as the entire chain remains left-turning. This implies that there are many terrains that minimize the total turning angle. Moreover, we observe that if the leftmost and rightmost intervals were single points, the shortest path between those points through the corridor defined by

Smoothing Imprecise 1.5D Terrains

225

the upper and lower endpoints of the intervals would be an optimal solution. Our algorithm, the details of which are omitted, is to find this shortest path and then fix the ends of the terrain by making them as straight as possible in a relatively simple procedure. We prove in the full version: Theorem 3. Given an imprecise 1.5D terrain, a realization of the terrain that minimizes the total turning angle can be computed in linear time. The observation that all the vertices of a path that maximize the total turning angle must be external also leads us to a simple algorithm. We can use dynamic programming to find the best terrain from left to right. We show in the fullversion: Theorem 4. Given an imprecise 1.5D terrain, a realization of the terrain that maximizes the total turning angle can be computed in linear time.

6

Discussion and Future Challenges

We studied several measures to compute the smoothest or roughest possible 1.5D terrain, when height information is imprecise. We presented approximation algorithms that run in linear time for optimizing the worst turning angle in the terrain (either smallest or largest). They find a terrain where the worst turning angle is at most ε away from the one in the optimal terrain, for any ε > 0. We highlight that the depth of the algorithms lies in the correctness analysis, and not in the algorithms themselves, which are relatively simple and should be easy to implement. As a supplement to these results, we also studied algorithms for optimizing the total turning angle. We sketched two exact algorithms that also run in linear time. These algorithms should also be fairly simple to implement. Clearly, the major open problem suggested by this work is to smooth 2.5D terrains, which are encountered more often in practice. Such terrains pose challenges on two different levels. On the modeling level, it is unclear how to define the problem correctly—it is difficult to define a smooth terrain in a way that ensures that all the features are smooth. For example, even if all of the solid angles between faces of the terrain are small, a peak can be created at the intersection of three or more faces that is quite sharp. Even when this would be clear, though, designing efficient algorithms is still challenging. For example, fitting a smooth terrain through a few known fixed points, when the remaining points have no bounded intervals, is already a non-trivial task. With the algorithms presented in this paper, we show that the 1.5-dimensional case is already more challenging than it looks at first; one cannot expect the 2.5-dimensional case to be any easier. At the same time, we hope that our solutions will provide the necessary insight required to solve these problems eventually. A tool recently introduced in the analysis of terrains is the realistic terrain model proposed by Moet et al. [10]. In this model, restrictions are placed on four properties of the triangles of a terrain. These four properties are the minimum angle of each triangle, the ratio of the size of the largest triangle to the size of the

226

C. Gray, M. L¨ offler, and R.I. Silveira

smallest triangle, the aspect ratio of the bounding rectangle of the terrain, and the steepness of each triangle. In all cases, the properties are restricted to be a constant. Most of these restrictions have to do with the underlying triangulation of the terrain. However, the restriction that the steepness of any triangle in the terrain is bounded deals directly with the heights of the vertices of the terrain. We wonder whether the first three restrictions make the last any easier to satisfy in an uncertain terrain. That is, given an imprecise terrain whose triangulation is “realistic”, can an algorithm set the heights of the vertices in such a way that the steepness of the steepest triangle is minimized? This question, among many others, is interesting to study in the context of imprecise terrains.

References 1. Ben-Moshe, B., Katz, M.J., Mitchell, J.S.B.: A constant-factor approximation algorithm for optimal 1.5d terrain guarding. SIAM Journal on Computing 36(6), 1631–1647 (2007) 2. Dyn, N., Levin, D., Rippa, S.: Data dependent triangulations for piecewise linear interpolation. IMA Journal of Numerical Analysis 10, 137–154 (1990) 3. Feciskanin, R.: Optimization of triangular irregular networks for modeling of geometrical structure of georelief. In: Proc. International Cartographic Conference 2007 (2007) 4. de Floriani, L., Magillo, P., Puppo, E.: Applications of computational geometry in Geographic Information Systems. In: Sack, J., Urrutia, J. (eds.) Handbook of Computational Geometry, pp. 333–388. Elsevier, Amsterdam (1997) 5. Gray, C., Evans, W.: Optimistic shortest paths on uncertain terrains. In: Proc. 16th Canadian Conf. Comput. Geom., pp. 68–71 (2004) 6. Gray, C., L¨ offler, M., Silveira, R.I.: Smoothing imprecise 1.5D terrains. Tech. Rep. UU-CS-2008-036, Department of Information and Computing Sciences, Utrecht University (2008) 7. Hofer, M., Sapiro, G., Wallner, J.: Fair polyline networks for constrained smoothing of digital terrain elevation data. IEEE Trans. Geosc. Remote Sensing 44, 2983–2990 (2006) 8. Kholondyrev, Y., Evans, W.: Optimistic and pessimistic shortest paths on uncertain terrains. In: Proc. 19th Canadian Conf. Comput. Geom., pp. 197–200 (2007) 9. King, J.: A 4-approximation algorithm for guarding 1.5-dimensional terrains. In: Correa, J.R., Hevia, A., Kiwi, M. (eds.) LATIN 2006. LNCS, vol. 3887, pp. 629–640. Springer, Heidelberg (2006) 10. Moet, E., van Kreveld, M., van der Stappen, A.F.: On realistic terrains. In: Proc. 22nd Annu. ACM Sympos. Comput. Geom., pp. 177–186 (2006) 11. Silveira, R.I., van Oostrum, R.: Flooding countries and destroying dams. In: Dehne, F., Sack, J.-R., Zeh, N. (eds.) WADS 2007. LNCS, vol. 4619, pp. 227–238. Springer, Heidelberg (2007) 12. Tasdizen, T., Whitaker, R.T.: Feature preserving variational smoothing of terrain data. In: Proc. 2nd International IEEE Workshop on Variational, Geometric and Level Set Methods in Computer Vision (2003) 13. Wang, K., Lo, C.P., Brook, G.A., Arabnia, H.R.: Comparison of existing triangulation methods for regularly and irregularly spaced height fields. International Journal of Geographical Information Science 15(8), 743–762 (2001)

Local PTAS for Dominating and Connected Dominating Set in Location Aware Unit Disk Graphs Andreas Wiese1,⋆,⋆⋆ and Evangelos Kranakis2,⋆⋆⋆ 2

1 Technische Universität Berlin, Institut für Mathematik, Germany School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, Canada K1S 5B6

Abstract. We present local 1+ǫ approximation algorithms for the minimum dominating and the connected dominating set problems in location aware Unit Disk Graphs (UDGs). Our algorithms are local in the sense that the status of a vertex v in the output (i.e. whether or not v is part of the set to be computed) depends only on the vertices which are a constant number of edges (hops) away from v. This constant is independent of the size of the network. In our graph model we assume that each vertex knows its geographic coordinates in the plane (location aware nodes). Our algorithms give the best approximation ratios known for this setting. Moreover, the processing time that each vertex needs to determine whether or not it is part of the computed set is bounded by a polynomial in the number of vertices which are a constant number of hops away from it. We employ a new method for constructing the connected dominating set and we give the first analysis of trade-offs between approximation ratio and locality distance.

1

Introduction

Locality is a particularly important issue in wireless ad hoc networks since in such networks the wireless devices do not have knowledge about the entire network and it is often not practical and in many cases even impossible to explore the whole network completely. Especially in the case of dynamically changing networks the attempt to examine the entire system would require too much time. Therefore we are interested in local algorithms where the status of a node v (whether or not v is in the dominating set, connected dominating set etc.) depends only on the nodes that are at most a constant number of edges (hops) ⋆

⋆⋆

⋆⋆⋆

Research conducted while the authors were visiting the School of Computing Science at Simon Fraser University, Vancouver. Research supported by a scholarship from DAAD (German Academic Exchange Service). Research supported in part by NSERC (Natural Science and Engineering Research Council of Canada). Research supported in part by MITACS (Mathematics of Information Technology and Complex Systems).

E. Bampis and M. Skutella (Eds.): WAOA 2008, LNCS 5426, pp. 227–240, 2009. c Springer-Verlag Berlin Heidelberg 2009 

228

A. Wiese and E. Kranakis

away from v. This ensures that, when computing a solution in a network, messages do not propagate uncontrollably far. It is also advantageous for disaster recovery since in this context one does not need to recompute the entire solution but only the parts that have been affected by the incident. Also in dynamic networks (especially, but not limited to the case of ad hoc wireless networks) we want that local changes in the graph only affect the solution locally so that we do not have to recompute the entire solution if only small local changes occur. Maintaining Topology Control (i.e., network stability, power conservation, interference reduction etc.) is an important issue in ad hoc networks. In order to handle this, vertices are often organized in clusters. In each cluster, one vertex takes the role of a clusterhead. All other vertices in the cluster are assigned to this special vertex. Hence, the clusterheads form a dominating set. They are responsible for the communication of the members of their cluster with nodes in other clusters. For being able to send messages from one cluster to another one wants the clusterheads to form a connected graph which results in a connected dominating set. For efficiency reasons we want the dominating and connected dominating sets to be as small as possible. We model the wireless network as a Unit Disk Graph (UDG) consisting of nodes with identical transmission range in which each node knows its geographic coordinates but does not have any other information that distinguishes it from the other nodes. This models the case of identical wireless devices that have a constant transmission range and know their geographic position e.g. from a GPS receiver or from virtual coordinates assigned by another source. In wireless and sensor networks each device has a limited transmission range and whether communication between two nodes is possible depends (among other properties) on their Euclidean distance. With the advent of GPS devices our assumption regarding positional knowledge in each node seems to be relevant. 1.1

Related Work

For general graphs G = (V, E) with n nodes computing minimum dominating and minimum connected dominating sets is NP-hard. Dominating set cannot be approximated in polynomial time with a ratio better than c · log |V | for some c > 0 unless every problem in NP can be solved deterministically in O(npoly log n ) [17]. But it can be approximated within a bound of O(log n) [8]. For the restricted case of unit disk graphs the situation is a bit different. Dominating set and connected dominating set remain NP-hard [4]. However, there are constant ratio approximations known [1,15] and polynomial time approximation schemes. The first polynomial time approximation scheme (PTAS) was proposed by Hunt III et al. [7] and needs the geometric embedding of the graph as part of the input. Later, Nieberg and Hurink [16] found a PTAS that computes an approximation for the dominating set without the geometric embedding of the graph. This is an advantage since the recognition of a unit disk graph (given a graph determining if it is a unit disk graph or not) is NP-hard [2] and therefore finding an embedding for a given unit disk graph is NP-hard as well. Even finding an approximation for an embedding of a unit disk graph is

Local PTAS for Dominating and Connected Dominating Set

229

NP-hard [10]. For the connected dominating set problem Cheng et al. proposed a global PTAS [3] that needs the embedding of the graph as part of the input. Gfeller and Vicari presented a distributed (but not local) approximation scheme for connected dominating set in growth-bounded graphs [6] (this includes unit disk graphs) which does not rely on the geometric embedding of the graph. When looking for local algorithms, Kuhn et al. [13] proposed local approximation schemes for maximum independent set and dominating set for growthbounded-graphs. This class of graphs includes UDGs. However, in their definition of locality the status of a vertex v depends on the vertices up to O(log∗ n) hops from v. This bound depends on the size of the graph and is therefore not constant. In the graph model they use the fact that each vertex has a unique ID to distinguish itself from the other vertices. This is the graph model that was mostly discussed in the literature in terms of algorithms and lower bounds, see e.g. [12,14]. However, in this paper we assume a different model in which every vertex knows its coordinates in the plane and can make use of them for computing a dominating or a connected dominating set. One paper in which our model was discussed is [11]. The paper shows how coordinate information can be used to reduce the locality distances of distributed algorithms for packing problems. In particular, together with [9] it can be used to find an algorithm for finding a maximal independent set with a constant locality distance. This technique could be applied to dominating set and connected dominating set algorithms presented in [13,6]. However, this does not immediately give information about the tradeoffs between approximation ratios and locality distances. Another paper using the location aware model is [5]. There, Czyzowicz et al. present a factor 5 approximation for dominating set and a 7.453 + ǫ approximation for connected dominating set. 1.2

1.2 Main Result

Our main results are a local 1 + ǫ approximation algorithm for dominating set and a local 1 + ǫ approximation algorithm for connected dominating set. For constructing the connected dominating set we use a new approach that differs significantly from the one used in [6]. Namely, we extend the technique used in [16] and combine it with a method presented in [5]. In terms of the approximation ratio, our algorithms achieve the same performance bounds that are possible for global polynomial time algorithms, assuming P ≠ NP. For each problem we give upper bounds for the respective locality distance. Thus, we obtain trade-offs between approximation ratios and locality distances.

The remainder of the paper is organized as follows: in Section 2 we present our local 1 + ǫ approximation algorithm for minimum dominating set. In Section 3 we present our local 1 + ǫ approximation algorithm for the minimum connected dominating set problem. Finally, in Section 4 we summarize our algorithms and discuss which parameters would be worthwhile to improve and what remains open.


Due to space restrictions, for the full proofs of our theorems we refer to our technical report [18].

2 Local 1 + ǫ Approximation Algorithm for Dominating Set

In this section we present a local 1 + ǫ approximation algorithm for minimum dominating set in unit disk graphs, prove its correctness, and give an upper bound for its locality distance. We show that the latter depends only on ǫ. First we introduce some definitions and preliminaries, including the concept of 2-separated collections, which enables us to guarantee a 1 + ǫ approximation factor. This concept was introduced in [16] in order to obtain a PTAS. Then we present a tiling of the plane into hexagons with certain properties. It is essentially the same tiling as the one given in [5] and allows us to obtain a local algorithm. Finally, we present our algorithm and prove its correctness.

2.1 Preliminaries

An undirected graph G = (V, E) is a unit disk graph if there is an embedding in the plane for G such that two vertices u and v are connected by an edge if and only if the Euclidean distance between them is at most 1. The graph G we consider for all our algorithms is a connected unit disk graph. A set D ⊆ V dominates a vertex v if v ∈ D or there is a vertex d ∈ D and an edge {v, d}. The set D is called a dominating set for G if it dominates every vertex in G. A set D is called a minimum dominating set for G if it is a dominating set and for all other dominating sets D′ ⊆ V it holds that |D| ≤ |D′|.

For two vertices u and v let d(u, v) be the hop-distance between u and v, that is, the number of edges on a shortest path between these two vertices. The hop-distance is not necessarily the geometric distance between two vertices. Denote by N^r(v) = {u ∈ V | d(u, v) ≤ r} the r-neighborhood of a vertex v. For ease of notation we set N^0(v) := {v}, N^1(v) := N(v), and for a set V′ ⊆ V we define N(V′) = ⋃_{v′∈V′} N(v′). Note that v ∈ N(v). We define the diameter of a set of vertices V′ ⊆ V as diam(V′) := max_{u,v∈V′} d(u, v).

Denote by the locality distance (or, for short, the locality) of an algorithm the minimum α such that the status of a vertex v (e.g., whether or not v is in a dominating or connected dominating set) depends only on the vertices in N^α(v). For all algorithms presented in this paper we prove that α depends only on the desired approximation factor for the respective problem. Now we introduce the concept of a 2-separated collection, which will give us a lower bound for the optimal dominating set.

Definition 1. Let H be a set and let the sets Sh with h ∈ H be subsets of V. Let k be an integer. The sets Sh are called a 2k-separated collection if for any two vertices s ∈ Sh and s′ ∈ Sh′ with h ≠ h′ it holds that d(s, s′) > 2k.


Let D : P(V) → P(V) be an operation returning a dominating set of minimum cardinality for the given subset of vertices. Note that the set D(V′) may contain vertices from V which are not in V′.
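For intuition, the two primitives used throughout this section — the r-neighborhood N^r(v) and the operator D(·) — can be sketched in Python as follows. This is an illustrative sketch rather than the paper's implementation; the function names are ours, and the brute-force search in min_dom_set is feasible only because the algorithm applies D to neighborhoods of bounded size.

    from collections import deque
    from itertools import combinations

    def ball(adj, v, r):
        # N^r(v): all vertices within hop-distance r of v, computed by BFS.
        # adj maps each vertex to the set of its neighbors.
        dist = {v: 0}
        q = deque([v])
        while q:
            u = q.popleft()
            if dist[u] < r:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        q.append(w)
        return set(dist)

    def min_dom_set(adj, W):
        # D(W): a minimum-cardinality set of vertices dominating every
        # vertex of W. As noted above, the dominators may lie outside W,
        # so the candidates are all vertices in or adjacent to W.
        W = set(W)
        if not W:
            return set()
        candidates = list(set().union(*(ball(adj, w, 1) for w in W)))
        for k in range(1, len(candidates) + 1):
            for D in combinations(candidates, k):
                if all(w in D or any(u in adj[w] for u in D) for w in W):
                    return set(D)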

2.2 Tiling of the Plane

In the algorithm, we will construct a 2k-separated collection for k = 1. In order to do this locally we need a tiling of the plane. We split the plane into hexagons. Each hexagon has a class number. The tiling has the following properties:

– Each vertex is in exactly one hexagon.
– Every two vertices in a hexagon are connected by an edge.
– Each hexagon has a class number.
– The distance between two vertices in different hexagons with the same class number is at least a certain constant.
– The number of hexagonal classes is bounded by a constant.

We achieve these properties as follows: First we define the constant c to be the smallest even integer such that (2c + 1)² < (√(1 + ǫ))^c. We consider a tiling of the plane with tiles. Each tile itself is tiled by hexagons of diameter one that are assigned different class numbers (see Figure 1 (a) and (c)). Denote by H the set of all hexagons containing vertices of G (only these hexagons are relevant for us) and by b the number of hexagons in one tile. Ambiguities caused by vertices at the border of hexagons are resolved as shown in Figure 1 (b): the right borders excluding the upper and lower apexes belong to a hexagon, the rest of the border does not. We assume that the tiling starts with the coordinates (0,0) being in the center of a tile of class 1. We choose the number of hexagons per tile in such a way that two hexagons of the same class have a Euclidean distance of at least 2c + 1. Note that this implies that two vertices in different hexagons of the same class number are at least 2c + 1 hops away from each other. We need this property later in order to allow parallel computation in hexagons of the same class number. We will show that we need at most 12c² + 18c + 7 hexagons per tile to ensure this, i.e., 12c² + 18c + 7 ≤ b. Let class(h) be the class number of a hexagon h.
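As a concrete illustration, the constant c can be computed numerically from ǫ; a minimal sketch (the function name is ours):

    def smallest_c(eps):
        # Smallest even integer c with (2c + 1)^2 < (sqrt(1 + eps))^c,
        # as defined above; terminates since the right side grows
        # exponentially in c while the left side grows only polynomially.
        c = 2
        while (2 * c + 1) ** 2 >= (1 + eps) ** (c / 2):
            c += 2
        return c

Smaller values of ǫ force a larger c, which in turn drives the locality bound derived in Theorem 1 below.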

2.3 The Algorithm

Before giving the formal algorithm we present the main concepts and provide an intuitive description of the algorithm. For some hexagons h we construct a set Th that contains all vertices in h and the vertices in a certain surrounding area. These sets Th are disjoint and partition the set of all vertices, i.e., V = ⋃_h Th. Also, the sets Th have certain properties that ensure the desired approximation ratio of 1 + ǫ. We call vertices contained in a set Th covered. The construction of the sets Th is done by iterating over the class numbers of the hexagons. In the first iteration we cover hexagons of class 1 by computing sets Th for all hexagons h of class 1. After the ith iteration all hexagons of class i have already been


Fig. 1. (a) A tile divided into 12 hexagons. Having 12 hexagons in one tile achieves a minimum Euclidean distance between two hexagons of the same class of 2. (b) One hexagon of the tiling. The bold lines indicate the parts of its border that belong to this hexagon. (c) Several tiles glued together.

covered. We proceed to cover all hexagons of class i + 1 whose vertices have not been completely covered so far by computing sets Th for those hexagons. We stop when all vertices in all hexagons have been covered. Moreover, the number of iterations does not exceed the total number of classes. Finally, we compute for all sets Th the minimum dominating set D(Th). We output D := ⋃_h D(Th).

Now we present the algorithm in detail. Fix ǫ > 0 and let b be the number of hexagonal classes. For all h ∈ H we initialize the set Th := ∅. If all vertices of a hexagon have been covered, call this hexagon covered. For i = 1, ..., b do the following: Consider a hexagon h ∈ H of class i which is not yet covered. Define the vertex vh which is closest to the center of h and which is not covered yet to be the coordinator vertex of h. Ambiguities are resolved by choosing the vertex with the smallest x-coordinate among vertices with the least distance to the center of h. Denote by C(i) all vertices which were covered in previous iterations, in which hexagons of classes i′ < i were considered. Compute for even r ∈ {0, 2, 4, ..., c} the r-neighborhoods N^r(vh) and compute the minimum dominating sets D(N^r(vh)). We determine the smallest value of r with r ≤ c − 2 such that

|D(N^{r+2}(vh) \ C(i))| ≤ (1 + ǫ) · |D(N^r(vh) \ C(i))|    (1)

holds and denote it by r̄. Later we will prove that there is at least one value of r with r ≤ c − 2 such that Inequality 1 does indeed hold (see Lemma 1). Now mark all vertices in Th := N^{r̄+2}(vh) \ C(i) as covered. In the sequel we


use the notation Sh := N^{r̄}(vh) \ C(i), and we prove in Lemma 3 that the sets Sh (for various hexagons h) form a 2-separated collection. We assign all vertices in D(Th) to the dominating set. We do this procedure for all hexagons of class i which are not covered yet. Since two vertices in different hexagons of the same class number are at least 2c + 1 hops away from each other, the order in which hexagons of the same class number are processed does not matter. We output D := ⋃_{h∈H} D(Th).

Algorithm 1: Local algorithm for finding a dominating set in a unit disk graph
// Algorithm is executed independently by each node v
// We assume that the computation of the sets Th for each h has been done already
dominator := false;
Find all vertices in N(v);
forall v′ ∈ N(v) do
    find the hexagon h′ such that v′ ∈ Th′;
    compute D(Th′);
    if v ∈ D(Th′) then dominator := true
end
if dominator = true then Become part of the dominating set D
else Do not become part of D
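Algorithm 1 assumes the sets Th are already known. For intuition, the global covering phase that produces them can be sketched in Python as follows; this reuses the ball and min_dom_set helpers sketched in Section 2.1, and the tiling inputs (hex_vertices, hexagons_by_class, center_order) are assumptions of the sketch rather than part of the paper's interface:

    def cover_hexagons(adj, hex_vertices, hexagons_by_class, center_order, c, eps):
        # hex_vertices: hexagon -> set of vertices lying in it
        # hexagons_by_class: class number -> list of hexagons of that class
        # center_order: hexagon -> its vertices sorted by distance to the
        #               hexagon center (ties by smallest x-coordinate)
        covered = set()   # C(i): vertices covered in earlier iterations
        T = {}            # the sets T_h
        for i in sorted(hexagons_by_class):
            for h in hexagons_by_class[i]:
                if not hex_vertices[h] - covered:
                    continue                  # hexagon already covered
                v = next(u for u in center_order[h] if u not in covered)
                for r in range(0, c - 1, 2):  # r = 0, 2, ..., c - 2
                    S = ball(adj, v, r) - covered
                    Th = ball(adj, v, r + 2) - covered
                    if len(min_dom_set(adj, Th)) <= (1 + eps) * len(min_dom_set(adj, S)):
                        T[h] = Th
                        covered |= Th
                        break
        return T

The dominating set is then the union of min_dom_set(adj, T[h]) over all hexagons h, matching D := ⋃_h D(Th).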

The previous discussion is presented in Algorithm 1. We prove the correctness of Algorithm 1, its approximation factor, its locality and its processing time in Theorem 1.

Theorem 1. Let G be a unit disk graph and let ǫ > 0. Algorithm 1 has the following properties:
1. The computed set D is a dominating set for G.
2. Let DOPT be an optimal dominating set. It holds that |D| ≤ (1 + ǫ) · |DOPT|.
3. Whether or not a vertex v is in D depends only on the vertices at most O(1/ǫ⁶) hops away from v, i.e., Algorithm 1 is local.
4. The processing time for a vertex v is bounded by a polynomial in the number of vertices at most O(1/ǫ⁶) hops away from v.

We will prove the four parts of this theorem in four steps. In each step we first give some lemmas which are required to understand the proof of the theorem.

Correctness. We want to prove that the set D is indeed a dominating set for G. As mentioned above, we first prove that it is sufficient to examine values of r with r ≤ c − 2 while computing the sets D(N^r(v)), employing an argument used in [16].


Lemma 1. Let v be a coordinator vertex of a hexagon of class i. While computing its neighborhood N^r(v), the values of r that need to be considered to find a value r̄ such that

|D(N^{r̄+2}(v) \ C(i))| ≤ (1 + ǫ) · |D(N^{r̄}(v) \ C(i))|    (2)

are bounded by c − 2.

Proof. Assume on the contrary that Inequality 2 is false for all r ∈ {0, 2, ..., c − 2}, i.e., for these values of r it holds that

|D(N^{r+2}(v) \ C(i))| > (1 + ǫ) · |D(N^r(v) \ C(i))|

By Corollary 3 in [16] the number of vertices in a minimum dominating set for a neighborhood N^c(v) is bounded from above by (2c + 1)² (this holds because the size of a maximal independent set for this neighborhood is bounded by (2c + 1)² and a maximal independent set is always a dominating set). It holds that |D(N^0(v))| = |D({v})| = 1. Altogether we have that

(2c + 1)² ≥ |D(N^c(v) \ C(i))|
          > (1 + ǫ) · |D(N^{c−2}(v) \ C(i))|
          > (1 + ǫ)² · |D(N^{c−4}(v) \ C(i))|
          > ...
          > (1 + ǫ)^{c/2} · |D(N^0(v) \ C(i))|
          ≥ (√(1 + ǫ))^c

c √ 1 + ǫ which is a From the definition of c we know that (2c + 1)2 < contradiction. Therefore, at least for one value of r ∈ {0, 2, ..., c − 2} it holds    that D N r+2 (v)  ≤ (1 + ǫ) · |D (N r (v))|.

Lemma 2. The sets Th cover all vertices of the graph.

Proof. Assume on the contrary that there is a vertex v which is not covered by any Th, h ∈ H. Let h be the hexagon to which v belongs and let i be its class. At some point in the algorithm, the hexagons of class i were considered. At that time there were vertices in h which were not covered yet (at least v). Hence, the coordinator vertex of h must have marked a set Th as covered. However, as the hexagons have a diameter of 1 (and r̄ + 2 ≥ 1), it follows that v is contained in Th and therefore covered by Th, which is a contradiction.

Proof (of part 1 of Theorem 1). Let v ∈ V be a vertex. By Lemma 2, v is covered by a set Th (note that h is not necessarily the hexagon that contains v). Then the set D(Th) dominates v. As Algorithm 1 outputs D = ⋃_h D(Th), it holds that D(Th) ⊆ D and therefore D dominates v.


Approximation Ratio. We prove that the size of D is only a factor 1 + ǫ greater than the size of a minimum dominating set.

Lemma 3. The subsets Sh, h ∈ H form a 2-separated collection.

Proof. The claim follows from Lemma 3 in [16]. In [16] the vertices in the centers of the constructed sets Sh were chosen arbitrarily. In our case they were chosen with a certain method (taking the head vertices of hexagons). The proof in [16] applies to our algorithm as well.

Using these sets Sh we can obtain a lower bound for the size of an optimal dominating set.

Lemma 4. For a 2-separated collection of sets Sh with h ∈ H we have

|D(V)| ≥ Σ_{h∈H} |D(Sh)|

Proof. In Section 3 in [16] the concept of 2-separated collections is introduced and this lemma is proved. Here we give only the main idea: for all sets Sh it holds that |D(V) ∩ N(Sh)| ≥ |D(Sh)|, and the inequality is then obtained by taking the sum over all sets Sh.

Lemma 5. Let S = ⋃_{h∈H} Sh be a 2-separated collection in G and let Th, h ∈ H be subsets of V with Sh ⊂ Th for all h ∈ H. If there exists an ǫ > 0 such that

|D(Th)| ≤ (1 + ǫ) · |D(Sh)|

holds for all h ∈ H and if ⋃_{h∈H} D(Th) forms a dominating set in G, then the set ⋃_{h∈H} D(Th) is a (1 + ǫ)-approximation of a minimum dominating set in G.

Proof. This is proved as Corollary 1 in [16].

Proof (of part 2 of Theorem 1). From the construction we can see that for every pair Sh, Th it holds that |D(Th)| ≤ (1 + ǫ) · |D(Sh)|. It follows that the conditions of Lemma 5 are satisfied.

Locality. Now we want to prove that Algorithm 1 is local (part 3 of Theorem 1). We prove that whether or not a vertex v belongs to the computed set D depends only on the vertices at most a constant α hops away from v. This constant depends only on ǫ. We give an upper bound for α in terms of ǫ. We first give a technical lemma; due to space constraints we refer to our technical report [18] for the proof.

Lemma 6. Let v be a vertex. Whether or not v is in D depends only on the vertices which are at most 2 + 2c · (b − 1) + c hops away from v.


Proof (of part 3 of Theorem 1). We want to show that whether or not a vertex v is in D depends only on the vertices at most O(1/ǫ⁶) hops away from v. With some basic calculations we can show that c ∈ O(1/ǫ²). Denote by α(ǫ) the locality distance of Algorithm 1 when run with a performance guarantee of 1 + ǫ. From Lemma 6 we know that we need to explore the vertices at most 2 + 2c · (b − 1) + c hops away from v. From the definition of b and the above lemmas we get

α(ǫ) ≤ 2 + 2c · (b − 1) + c
     ≤ 2 + 2c · (12c² + 18c + 7 − 1) + c
     = 24c³ + 36c² + 13c + 2
     ∈ O(1/ǫ⁶)
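Numerically, the bound can be evaluated directly; a one-line sketch reusing smallest_c from Section 2.2 (names ours):

    def locality_bound(eps):
        # alpha(eps) <= 24c^3 + 36c^2 + 13c + 2, with c = smallest_c(eps)
        c = smallest_c(eps)
        return 24 * c**3 + 36 * c**2 + 13 * c + 2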

Processing Time. The processing time is the time that a single vertex needs in order to compute whether or not it is part of the dominating set. As our algorithm is local, it is not very meaningful to quantify the processing time in terms of the total number of vertices in G. Instead we measure it with respect to the number of vertices which are at most α hops away from a vertex v, since these are all the vertices that v needs to explore when computing its status. We denote this number by nα(v) (i.e., nα(v) = |N^α(v)|). We show that the processing time is bounded by a polynomial in nα(v).

Proof (of part 4 of Theorem 1). In the algorithm, minimum dominating sets for the sets N^r(v′) with r ∈ {0, 2, ..., c} and v′ ∈ N^α(v) must be computed. First we show that this can be done in polynomial time. By Corollary 3 in [16] the number of vertices in a minimum dominating set for a neighborhood N^r(v′) is bounded by (2r + 1)². Therefore, the computation of such a set can be done in O(nα(v)^{(2c+1)²}), e.g., by enumeration. For each vertex v′ ∈ N^α(v) we might have to compute minimum dominating sets D(N^r(v′)) for each r ∈ {0, 2, ..., c}. As this dominates the processing time of the algorithm, we find that it is in O(nα(v) · nα(v)^{(2c+1)²} · c) and therefore bounded by nα(v)^{O(1/ǫ⁴)}.

3 Local 1 + ǫ Approximation for Connected Dominating Set

In Section 2 we presented a local 1 + ǫ approximation algorithm for Dominating Set. In this section we will generalize its methodology and design a local 1 + ǫ approximation algorithm for Connected Dominating Set.

3.1 The Algorithm

Let G = (V, E) be a connected unit disk graph. First we give an intuitive description of our 1 + ǫ approximation algorithm. Then we will present it in detail.


Main Concept. In the algorithm we first compute a dominating set D for G with some properties that achieve a certain performance ratio compared to an optimal connected dominating set. For establishing a lower bound for the performance ratio of D we use a 2k-separated collection. Similarly to the computation of D in Algorithm 1, we will compute disjoint sets Th for some hexagons h such that each vertex is contained in exactly one set Th. We will ensure that each connected component in D has at least k vertices (we will define the constant k later). Then we will connect these components by adding bridges of one or two vertices between connected components.

(3)


Denote by r̄ the smallest value of r such that Inequality 3 holds. Similarly to Lemma 1, one can show that there is always a value of r that satisfies this. Mark all vertices in Th := N^{r̄+2k}(vh) \ C(i) as covered and define Sh := N^{r̄}(vh) \ C(i). In the analysis we prove that the sets Sh form a 2k-separated collection for G. Do this for all hexagons of class i that are not covered yet. As two vertices in different hexagons of the same class number are at least 2c + 1 hops away from each other, the order in which the hexagons are processed does not matter. We define D := ⋃_h Dk(Th).

Now we want to connect all connected components in D to a connected dominating set CD. We consider a tiling of the plane that ensures a minimum Euclidean distance of d + 2 between two different hexagons of the same class (this implies a minimum hop-distance of d + 2 between two vertices in different hexagons of the same class). Denote by b the number of different hexagonal classes needed for such a tiling. We will prove that b ≤ 3d² + 15d + 19. Denote by H the set of all hexagons that contain vertices of G (only these hexagons are relevant for us). In each hexagon h we declare the vertex that is closest to the center of h to be the head vertex v̄h (note that this is not the same definition as for the coordinator vertex of a hexagon: the coordinator vertex might not be the vertex closest to the center, since that vertex might have been covered in previous iterations). Ambiguities here are resolved by choosing the vertex with the lowest x-coordinate among the vertices with the same distance to the center.

Our algorithm runs in b rounds, one for each class number. Denote by Di the computed set after the ith round and define D0 := D. In each round i, 1 ≤ i ≤ b, the head vertex v̄h of each class i hexagon h computes N := D_{i−1} ∩ N^d(v̄h). If there are two connected components in N which can be connected by one vertex v ∈ h, we assign this vertex v to the computed set. If there are two connected components in N which can be connected by two adjacent vertices v ∈ h and v′ ∈ h′ with h′ ∈ H, we assign these vertices v and v′ to the computed set (note that v′ is not necessarily in h). Perform both operations until they are no longer applicable. As two vertices in different hexagons with the same class number are at least d + 2 hops away from each other, the order in which the hexagons of one class number are considered does not matter. This results in a set CD := Db. We output CD. We refer to the above as Algorithm 2.
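For concreteness, the constants d, k, ǭ and c of this section can be computed from ǫ as in the sketch below. The defining inequalities follow our reading of the definitions above, so treat the exact conditions as assumptions:

    def cds_constants(eps):
        cbrt = (1 + eps) ** (1 / 3)    # (1 + eps)^(1/3)
        d = 4                          # smallest integer with (d-1)/(d-3) <= cbrt
        while (d - 1) / (d - 3) > cbrt:
            d += 1
        k = 1                          # smallest integer with 1 + 2/k <= cbrt
        while 1 + 2 / k > cbrt:
            k += 1
        eps_bar = cbrt - 1
        c = 2 * k                      # smallest c = 0 (mod 2k) with
        while (2 * c + 1) ** 2 >= (1 + eps_bar) ** (c / (2 * k)):
            c += 2 * k                 # (2c+1)^2 < (1 + eps_bar)^(c/(2k))
        return d, k, eps_bar, c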

3.2 Proof of Correctness

We prove the correctness of Algorithm 2, its approximation factor, its locality and its processing time in Theorem 2.

Theorem 2. Let G = (V, E) be a unit disk graph and let ǫ > 0. Algorithm 2 has the following properties:
1. The computed set CD is a connected dominating set for G.
2. Let CDOPT be an optimal connected dominating set. It holds that |CD| ≤ (1 + ǫ) · |CDOPT|.
3. Whether or not a vertex v is in CD depends only on the vertices which are at most O(1/ǫ⁹) hops away from v, i.e., Algorithm 2 is local.


4. The processing time for a vertex v is bounded by a polynomial in the number of vertices which are at most O(1/ǫ⁹) hops away from v.

Due to space constraints we refer to our technical report [18] for the proof.

4 Conclusion

In this paper we designed local approximation algorithms for dominating and connected dominating set in the setting of location aware nodes. We presented a local 1 + ǫ algorithm for dominating set and a local 1 + ǫ algorithm for connected dominating set. Our approximation ratios are optimal since no local algorithm based on location aware nodes can compute an optimal solution for every given graph (note that this does not depend on whether P = NP or P ≠ NP). If P ≠ NP then, for the problems which we addressed, our local algorithms achieve exactly the same approximation ratios as global polynomial time algorithms. We estimated the locality distances of our algorithms and proved the first bounds in terms of trade-offs between approximation ratios and locality distances.

It is evident that further improvements would be desirable. E.g., we mention without proof that, using the method employed in our proof of Theorem 2, it is possible to obtain a 3 + ǫ approximation algorithm for constructing a connected dominating set but with an improved locality distance of O(1/ǫ⁶). An interesting question to study would be how much the locality distances of the algorithms can be improved for a given 1 + ǫ approximation factor. Also of interest are lower bounds on the possible approximation ratio for a given locality distance. From these we could conclude how far we are from the best possible local approximation for a given locality distance. Another problem would be to generalize our techniques to other graph classes like quasi-UDGs, which are a more realistic model for real wireless networks than unit disk graphs.

References

1. Alzoubi, K., Wan, P., Frieder, O.: Message-optimal connected dominating sets in mobile ad hoc networks. In: MobiHoc 2002: Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking & Computing, pp. 157–164. ACM Press, New York (2002)
2. Breu, H., Kirkpatrick, D.G.: Unit disk graph recognition is NP-hard. Computational Geometry: Theory and Applications 9(1-2), 3–24 (1998)
3. Cheng, X., Huang, X., Li, D., Wu, W., Du, D.-Z.: A polynomial-time approximation scheme for the minimum-connected dominating set in ad hoc wireless networks. Networks 42(4), 202–208 (2003)
4. Clark, B.N., Colbourn, C.J., Johnson, D.S.: Unit disk graphs. Discrete Math. 86(1-3), 165–177 (1990)
5. Czyzowicz, J., Dobrev, S., Fevens, T., González-Aguilar, H., Kranakis, E., Opatrny, J., Urrutia, J.: Local algorithms for dominating and connected dominating sets of unit disk graphs with location aware nodes. In: Laber, E.S., Bornstein, C., Nogueira, L.T., Faria, L. (eds.) LATIN 2008. LNCS, vol. 4957, pp. 158–169. Springer, Heidelberg (2008)


6. Gfeller, B., Vicari, E.: A faster distributed approximation scheme for the connected dominating set problem for growth-bounded graphs. In: Kranakis, E., Opatrny, J. (eds.) ADHOC-NOW 2007. LNCS, vol. 4686, pp. 59–73. Springer, Heidelberg (2007)
7. Hunt III, H.B., Marathe, M.V., Radhakrishnan, V., Ravi, S.S., Rosenkrantz, D.J., Stearns, R.E.: NC-approximation schemes for NP- and PSPACE-hard problems for geometric graphs. J. Algorithms 26(2), 238–274 (1998)
8. Johnson, D.S.: Approximation algorithms for combinatorial problems. In: Proc. 5th Ann. ACM Symp. on Theory of Computing, NY, pp. 38–49. ACM Press, New York (1973); also in J. Comput. Syst. Sci. 9(3), 256–278 (1974)
9. Kuhn, F., Moscibroda, T., Nieberg, T., Wattenhofer, R.: Fast deterministic distributed maximal independent set computation on growth-bounded graphs. In: Fraigniaud, P. (ed.) DISC 2005. LNCS, vol. 3724, pp. 273–287. Springer, Heidelberg (2005)
10. Kuhn, F., Moscibroda, T., Wattenhofer, R.: Unit disk graph approximation. In: DIALM-POMC 2004: Proceedings of the 2004 Joint Workshop on Foundations of Mobile Computing, pp. 17–23. ACM Press, New York (2004)
11. Kuhn, F., Moscibroda, T., Wattenhofer, R.: On the locality of bounded growth. In: Proceedings of the 24th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, pp. 60–68 (2005)
12. Kuhn, F., Moscibroda, T., Wattenhofer, R.: The price of being near-sighted. In: SODA 2006: Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 980–989. ACM Press, New York (2006)
13. Kuhn, F., Nieberg, T., Moscibroda, T., Wattenhofer, R.: Local approximation schemes for ad hoc and sensor networks. In: DIALM-POMC 2005: Proceedings of the 2005 Joint Workshop on Foundations of Mobile Computing, pp. 97–103. ACM Press, New York (2005)
14. Linial, N.: Locality in distributed graph algorithms. SIAM J. Comput. 21(1), 193–201 (1992)
15. Marathe, M.V., Breu, H., Hunt III, H.B., Ravi, S.S., Rosenkrantz, D.J.: Simple heuristics for unit disk graphs. Networks 25(1), 59–68 (1995)
16. Nieberg, T., Hurink, J.L.: A PTAS for the minimum dominating set problem in unit disk graphs. In: Erlebach, T., Persiano, G. (eds.) WAOA 2005. LNCS, vol. 3879, pp. 296–306. Springer, Heidelberg (2006)
17. Raz, R., Safra, S.: A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In: Proceedings of the 29th Annual ACM Symposium on the Theory of Computing (STOC 1997), New York, May 1997, pp. 475–484. Association for Computing Machinery (1997)
18. Wiese, A., Kranakis, E.: Local PTAS for dominating and connected dominating set in location aware unit disk graphs. Technical Report TR-07-17, Carleton University, School of Computer Science, Ottawa (December 2007)

Dynamic Offline Conflict-Free Coloring for Unit Disks

Joseph Wun-Tat Chan¹, Francis Y.L. Chin²,⋆, Xiangyu Hong³, and Hing Fung Ting²,⋆⋆

¹ Department of Computer Science, King's College London, UK
[email protected]
² Department of Computer Science, The University of Hong Kong, Hong Kong
{chin,hfting}@cs.hku.hk
³ Department of Computer Science and Engineering, Fudan University, China
[email protected]

⋆ The research is partly supported by a Hong Kong RGC grant HKU7113/07E.
⋆⋆ The research is partly supported by a Hong Kong RGC grant HKU7171/08E.

Abstract. A conflict-free coloring for a given set of disks is a coloring of the disks such that for any point p on the plane there is a disk, among the disks covering p, whose color differs from that of all other disks covering p. In the dynamic offline setting, a sequence of disks is given; we have to color the disks one-by-one according to the order of the sequence and maintain the conflict-free property at all times for the disks that are colored. This paper focuses on unit disks, i.e., disks with radius one. We give an algorithm that colors a sequence of n unit disks in the dynamic offline setting using O(log n) colors. The algorithm is asymptotically optimal because Ω(log n) colors are necessary to color some set of n unit disks for any value of n [9].

1 Introduction

The conflict-free coloring (CF-coloring) problem was introduced by Even et al. [9] for modeling the frequency allocation problem that arises in wireless communication. In the frequency allocation problem, servers (base stations) and clients are connected by radio links. To establish a communication, a client scans the available frequencies in search of a base station with good reception. To avoid interference, the client needs to choose a specific frequency such that only one of the reachable base stations is assigned that frequency. A naive solution is to assign each base station a distinct frequency. Since the spectrum is limited and costly, the target is to minimize the total number of frequencies assigned to the base stations such that for any client, among the frequencies assigned to all its reachable base stations, there is always a frequency assigned to exactly one of these stations. In the CF-coloring problem, the geometric regions covered by the base stations are called ranges, and the goal is to find a coloring of these ranges (i.e., one color


for each range) such that for every position that is covered by some subset of ranges, we can always find a range in the subset with a unique color. A formal and more general definition of CF-coloring is given as follows.

Definition 1 (Range Space). A range space is defined by a pair (X, R) where X is a set of vertices, and R is a family of subsets of X. The subsets in R are called ranges.

Definition 2 (CF-coloring). Let (X, R) be a range space. A coloring χ : R → N, which maps R to the set of natural numbers N, is conflict-free if for every vertex x ∈ X, there is an R ∈ R such that x ∈ R and χ(R) ≠ χ(R′) for all other ranges R′ containing x.

In the literature, the frequency allocation problem was also modeled as a dual of the CF-coloring problem [9]. The dual problem treats the base stations as vertices and ranges as the subsets of base stations within some geometric regions. The goal of the dual problem is to find a conflict-free coloring of the vertices such that for any range, there is always a vertex among the vertices in the range with a unique color.

Definition 3 (Dual CF-Coloring). Let (X, R) be a range space. A coloring ψ : X → N is conflict-free if for every range R ∈ R, there exists a vertex x ∈ R such that ψ(x) ≠ ψ(x′) for all other x′ ∈ R.

Note that the dual CF-coloring problem can be seen as a generalization of the traditional graph coloring problem. We can consider a range space as a hypergraph. When every range in the range space contains exactly two vertices, the range space is equivalent to a graph, where a dual CF-coloring of the range space is also a vertex coloring of the graph.

The problem of CF-colorings was first studied by Even et al. [9] and the PhD work of Smorodinsky [13]. One of the results in [9] was about CF-coloring for n disks (of arbitrary size) on the plane. The range space (X, R) of this problem is defined with X equal to the set of all points on the plane, and each range in R is a subset of X covered by one of the n disks. They showed that the range space, and equivalently the n disks, can be colored with O(log n) colors. They also showed that Ω(log n) colors are necessary even to color some set of n unit disks for any value of n. Alon et al. [2] showed that if every disk intersects at most k others, all disks can be colored with O(log³ k) colors, no matter how many disks there are.

The dual CF-coloring problem with respect to disks was studied in [9]. The range space (X, R) is defined by a given set X of n vertices on the plane. A range R ⊆ X is in R if and only if there exists a disk that covers only the vertices in R. Even et al. [9] showed that a dual CF-coloring (on the n vertices) can be constructed using O(log n) colors. A matching lower bound was proved by Pach and Tóth [12]: Ω(log n) colors are required for CF-coloring every set of n vertices on the plane with respect to disks. The CF-coloring for rectangles and dual CF-coloring problems with respect to rectangles have also been studied in several recent papers [1, 8, 9, 10, 14].
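To make Definitions 2 and 3 concrete, here are hedged Python checkers (ranges are modeled as frozensets of vertices; the function names are ours):

    def is_cf_coloring(points, ranges, chi):
        # Definition 2: chi maps each range to a color. Every covered vertex x
        # must lie in some range whose color is unique among all ranges
        # containing x.
        for x in points:
            covering = [r for r in ranges if x in r]
            if covering:
                colors = [chi[r] for r in covering]
                if not any(colors.count(col) == 1 for col in colors):
                    return False
        return True

    def is_dual_cf_coloring(ranges, psi):
        # Definition 3: psi maps vertices to colors. Every range must contain
        # a vertex whose color is unique within the range.
        for r in ranges:
            colors = [psi[x] for x in r]
            if colors and not any(colors.count(col) == 1 for col in colors):
                return False
        return True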


The dual CF-coloring problem has been studied in the online model, where a sequence of vertices is presented one-by-one and each vertex is colored without knowledge of the vertices not yet presented. The assigned color of a vertex cannot be changed later, and the coloring has to guarantee that at any time, the set of colored vertices satisfies the conflict-free property. Most of the previous works focused on coloring of n vertices on a line, where the ranges consist of all intervals of the vertices. It was shown in [9] that the static version of this problem can be solved using 1 + ⌊log₂ n⌋ colors and this is also the best that we can do. For the online case, Chen et al. [6] gave an algorithm that uses O(log² n) colors. In between the static and the online models, Bar-Noy et al. [4] studied two semi-online models, namely the dynamic offline model, where the entire sequence of the vertices is given but the vertices have to be colored one-by-one according to the order of the sequence and the colors cannot be changed later, and the online absolute position model, where the vertices are presented in the online fashion but the positions of all vertices on the line are known, i.e., for every vertex we know how many other vertices are on the left and right of this vertex. Bar-Noy et al. showed that O(log n) colors are sufficient to construct a dual CF-coloring in these two models. Randomized algorithms have been proposed for the online dual CF-coloring for intervals [6], unit disks [5, 7], and hypergraphs (i.e., general range spaces) [3].

Our Contribution. This paper focuses on the dynamic offline version of the CF-coloring problem for unit disks (i.e., disks with radius one)¹. Let d1, d2, . . . , dn be a sequence of n unit disks on the plane, which is given at the beginning. The required CF-coloring χ of these n disks satisfies the following requirement:

Property 1. For any 1 ≤ i ≤ n, the coloring χ restricted to the subset of disks Di = {d1, d2, . . . , di} is conflict-free, i.e., for any point x covered by such disks, there is one d ∈ Di such that d covers x and for all other d′ ∈ Di that cover x, χ(d) ≠ χ(d′).

To solve the above problem, we define and solve a dual CF-coloring problem with respect to unit disks in the dynamic offline setting, that is, to find a coloring ψ for a sequence of n vertices c1, c2, . . . , cn satisfying the following requirement.

Property 2. For any 1 ≤ i ≤ n, the coloring ψ restricted to the subset of vertices Ci = {c1, c2, . . . , ci} is conflict-free, i.e., for any unit disk covering a nonempty subset of vertices of Ci, there is one c ∈ Ci such that the unit disk covers c and for all other c′ ∈ Ci that are covered by the unit disk, ψ(c) ≠ ψ(c′).

We show that the primal and dual problems defined above are equivalent in the sense that they can be reduced to each other. The dual problem (in the dynamic offline setting) is then solved by constructing an equivalent but static dual CF-coloring problem. We show that the ranges of this static problem have a nice property so that we can find a dual CF-coloring with O(log n) colors.

¹ It is equivalent to saying disks with fixed radius for the problem.


This dual CF-coloring gives immediately, in the dynamic offline setting, a dual CF-coloring for n vertices where ranges are defined by unit disks, as well as a solution to the corresponding primal problem of coloring unit disks, using O(log n) colors.

Our result is interesting theoretically. As mentioned above, the static version of the CF-coloring problem for n disks requires Ω(log n) colors even when the disks are restricted to unit disks. In fact, the conventional technique for solving such problems is to reduce the problem to the coloring problem on simple graphs. However, for unit disks, this technique might not work, as some range spaces can only be represented by hypergraphs with hyperedges having more than 2 elements. On the other hand, Even et al. [9] gave an example showing that for the dynamic offline case, n colors are necessary for coloring n disks with arbitrary radius. This paper shows that in the dynamic offline case coloring unit disks can be done much more efficiently than coloring disks of arbitrary size: O(log n) colors suffice. Our algorithm has practical applications. First, base stations could have the same power, and thus the areas covered by them are congruent disks. Second, the base stations are built according to some order, and before all of them are built, we still need a conflict-free frequency assignment to those stations that have been built.

The paper is organized as follows. In Section 2, we give a high-level algorithm that finds a dual CF-coloring for any given range space. We also show a modification of the algorithm, which becomes crucial in Section 4 in bounding the number of colors used. In Section 3, we show how the dynamic offline problem of CF-coloring unit disks can be transformed to an equivalent but static dual CF-coloring of a range space. In Section 4, we first outline how the static problem obtained in Section 3 can be decomposed into sub-problems, and explain why we can focus on the sub-problems only. We proceed to show that the sub-problem can be solved using O(log n) colors, and hence the static dual CF-coloring problem defined in Section 3, as well as the one for unit disks, can be solved using O(log n) colors.

2 An Algorithm to Find a Dual CF-Coloring

In this section, we describe a high-level algorithm of Even et al. [9] that uses the independent set approach for constructing a dual CF-coloring for any range space (X, R). The following notation and definition are used in the algorithm. For any subset S ⊆ X and any 1 ≤ i ≤ n, define

R_S = {R ∩ S | R ∈ R}    (1)

to be the set of ranges in R with all elements not in S removed. For any I ⊆ X, we say that I is an independent set of (X, R) if for any range R ∈ R with |R| ≥ 2, we have R ⊈ I.

Algorithm CF-color constructs a dual CF-coloring for (X, R) by assigning the same color to the vertices in an independent set of (X, R) and repeating the process, using new colors, for the range space induced by the uncolored vertices


until all vertices are colored. It is not difficult to see that the resulting coloring is conflict-free.

Theorem 1 (Even et al. [9]). The coloring χ of X constructed by Algorithm CF-Color is a dual CF-coloring on (X, R).

Algorithm 1: CF-color
Require: range space (X, R)
Ensure: a CF-coloring χ on X
1. i ← 1;
2. X1 ← X;
3. while Xi ≠ ∅ do
4.   find an independent set Ii of (Xi, R_Xi);
5.   χ(x) = i for all x ∈ Ii; /* i.e., assign color i to all x in Ii */
6.   Xi+1 ← Xi − Ii;
7.   i ← i + 1;
8. end while
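A hedged Python rendering of Algorithm CF-color is given below; the independent-set subroutine is a parameter, and the greedy placeholder is ours, not the method whose size guarantee is analyzed later in the paper:

    def cf_color(X, ranges, find_independent_set):
        # Repeatedly color an independent set of the range space induced on
        # the uncolored vertices with a fresh color (steps 1-8 above).
        chi = {}
        remaining = set(X)
        color = 1
        while remaining:
            induced = [r & remaining for r in ranges if r & remaining]
            I = find_independent_set(remaining, induced)
            for x in I:
                chi[x] = color
            remaining -= I
            color += 1
        return chi

    def greedy_independent_set(vertices, induced_ranges):
        # Placeholder: keep a vertex only if no range of size >= 2 becomes
        # entirely contained in the chosen set (the definition of independence).
        I = set()
        for v in vertices:
            I.add(v)
            if any(len(r) >= 2 and r <= I for r in induced_ranges):
                I.discard(v)
        return I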

To bound the number of colors used to O(log n), it suffices to make sure that in each iteration the size of the independent set is a constant fraction of the number of vertices in the range space, so that O(log n) iterations, as well as colors, are sufficient. The rest of the paper is devoted to applying this idea to the problem of CF-coloring for unit disks in the dynamic offline setting.

2.1 Modified Algorithm Using Min-Range Subset

Define the min-range subset of a set R of ranges to be

min(R) = {R ∈ R | |R| ≥ 2 and ∄ R′ ∈ R such that |R′| ≥ 2 and R′ ⊊ R},

i.e., the set of non-trivial ranges in R that are minimal by set inclusion. The following lemma makes sure that to find an independent set of a range space we can focus on the min-range subset.

Lemma 1. Given a range space (X, R), an independent set of the range space (X, min(R)) is also an independent set of (X, R).

Proof. Consider an independent set I for (X, min(R)). Suppose to the contrary that I is not independent for (X, R). Then there is a range R ∈ R with |R| ≥ 2 such that R ⊆ I. Since min(R) is the min-range subset of R, we have either R ∈ min(R) or there is another range R′ ∈ min(R) with R′ ⊂ R. In both cases, there is a range in min(R), R or R′, which is of size at least 2 and is a subset of or equal to I. This contradicts the assumption that I is an independent set of (X, min(R)).

We modify Algorithm CF-Color in Step 4 as follows:

If min(R_Xi) ≠ ∅, find an independent set Ii of (Xi, min(R_Xi)); else Ii ← Xi.

246

J.W.-T. Chan et al.

There is a case where every range in R Xi is a subset of Xi with single element. We can take Xi as the independent set. By Lemma 1, the modified algorithm works as the original one. The reason that we use min-range subset will become clear in Section 4.1 when we show that the independent set thus obtained consists of at least a constant fraction of vertices of Xi . This helps to bound the number of colors used in the coloring.

3 Dynamic Offline CF-Coloring for Unit Disks

In this section, we show that the dynamic offline CF-coloring problem for n unit disks can be transformed to a static dual CF-coloring problem on some special range space. Then, in the rest of the paper, we show how to solve this dual problem using O(log n) colors. We first explain how to transform the static CF-coloring problem for n unit disks to its dual problem as follows. Let D = {d1, d2, . . . , dn} be a set of n unit disks. For each 1 ≤ i ≤ n, let ci be the center of the unit disk di. Define the range space (C, R) where C = {c1, c2, . . . , cn} and

R = {R ⊆ C | ∃ a unit disk (not necessarily from D) that covers exactly R}.

A more general form of the following lemma has been shown in [9]: the static CF-coloring for D can be reduced to the static dual CF-coloring on (C, R).

Lemma 2 (Even et al. [9]). Consider a set of unit disks D and the range space (C, R) defined above. Let ψ : C → N be a dual CF-coloring on (C, R). Then the coloring χ : D → N where χ(di) = ψ(ci) for 1 ≤ i ≤ n is a CF-coloring for D.

The following lemma extends Lemma 2 to the dynamic offline setting.

Lemma 3. Consider a sequence of unit disks D and the corresponding sequence of centers C. Let ψ : C → N be a dual CF-coloring of C satisfying the dynamic offline Property 2 (in the Introduction). Then the coloring χ : D → N where χ(di) = ψ(ci) for 1 ≤ i ≤ n is a CF-coloring for D satisfying the dynamic offline Property 1 (in the Introduction).

Proof. For any 1 ≤ i ≤ n, let Ci = {c1, c2, . . . , ci} and R_Ci = {R ⊆ Ci | ∃ a unit disk that covers R but no other centers in Ci}. The lemma is true because, according to Property 2, ψ is a dual CF-coloring of (Ci, R_Ci) for any 1 ≤ i ≤ n. Then, by Lemma 2, χ is a CF-coloring for Di for any 1 ≤ i ≤ n, and hence χ is a CF-coloring for D satisfying Property 1.

To reduce the problem from the dynamic offline setting to a static setting, we need some definitions. For any 1 ≤ i ≤ n, let Ci = {c1, c2, . . . , ci} and define

R_Ci = {R ⊆ Ci | ∃ a unit disk that covers R but no other centers in Ci},

and RC = ⋃_{1≤i≤n} R_Ci. The following lemma was suggested by Bar-Noy et al. [4] (in a more general form) to reduce the problem from the dynamic offline setting to the static setting.
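For intuition only, RC can be approximated by probing a finite sample of candidate unit disks instead of quantifying over all of them; this sketch and its parameter names are ours:

    from math import dist

    def ranges_RC(centers, probe_centers):
        # centers: the sequence c_1, ..., c_n as (x, y) pairs.
        # probe_centers: a finite sample of candidate unit-disk centers.
        # For each prefix C_i and each probe disk, the prefix centers lying
        # in the probe disk form one range of (an approximation of) RC.
        RC = set()
        for i in range(1, len(centers) + 1):
            prefix = centers[:i]
            for p in probe_centers:
                R = frozenset(c for c in prefix if dist(p, c) <= 1.0)
                if R:
                    RC.add(R)
        return RC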


Lemma 4 (Bar-Noy et al. [4]). Consider a sequence of centers C and the range space (C, RC ) defined above. Let ψ : C → N be a dual CF-coloring on (C, RC ). Then ψ is also a dual CF-coloring for C that satisfies the dynamic offline Property 2.

4 Dual CF-Coloring of (C, RC) with O(log n) Colors

In this section, we show how to find a dual CF-coloring of (C, RC), defined by a sequence of unit disks in Section 3, using the modified Algorithm CF-Color of Section 2 with O(log n) colors. There are mainly two ideas that lead us to the bound.

1. The plane can be partitioned into unit hexagons (with each side of length one). Note that the hexagons can be divided into seven groups such that no two hexagons in the same group intersect with the same unit disk. See Figure 1. The set of centers C is partitioned into disjoint subsets such that each subset consists of the centers of C within the same unit hexagon. For each subset F thus defined, we have a range space (F, RC_F) which we can color independently, provided that we use seven sets of different colors, one set for each group of hexagons and the corresponding subsets of centers.


Fig. 1. The plane is partitioned into hexagons

2. We show (in Section 4.1) that a dual CF-coloring of (F, RC_F) can be obtained by using the modified Algorithm CF-Color with O(log n) colors, where F ⊆ C is a subset of centers within the same unit hexagon.

Combining the two ideas, we can find the dual CF-coloring for (C, RC) with O(log n) colors, as well as a dynamic offline CF-coloring for a sequence of n unit disks. The theorem below follows.

Theorem 2. We can construct a dynamic offline CF-coloring for a sequence of n unit disks using O(log n) colors.


4.1 Dual CF-Coloring in a Unit Hexagon

In this section, we assume that the centers in C = {c1, c2, . . . , cn} are all within a unit hexagon. We show how (C, RC) can be colored by the modified Algorithm CF-Color using O(log n) colors. The inputs for the algorithm, X and R, are C and RC, respectively. Recall that in each iteration of the algorithm, say the i-th iteration, we need to find an independent set Ii of (Xi, min(R_Xi)) and remove the centers in Ii from Xi before we continue with the next iteration. The key property for us to prove the O(log n) bound is that Ii consists of a constant fraction of Xi. Therefore, the number of iterations, as well as the number of colors used, is at most O(log |X1|), i.e., O(log |C|) or O(log n). To prove the key property, we prove the following claims. Given any subset Y ⊆ C, let G be the range space (Y, min(RC_Y)).

1. G is indeed a simple graph, i.e., each of its ranges consists of two centers (Lemma 5), and
2. G consists of O(n) edges (Lemma 9).

With the second claim, we can guarantee that G has an independent set of size Ω(n), and an independent set of size Ω(n) can be found in polynomial time [11]. Thus, the following theorem follows.

Theorem 3. We can construct a dynamic offline dual CF-coloring with respect to unit disks for a sequence of n vertices (or points) all within a unit hexagon using O(log n) colors.

The remainder of the section focuses on proving Lemmas 5 and 9. First, we show that for any Y ⊆ C, (Y, min(RC_Y)) corresponds to a simple graph².

Lemma 5. For any Y ⊆ C, all R ∈ min(RC_Y) have |R| = 2.

Proof. Suppose that R = {ci1, ci2, . . . , cik} ∈ min(RC_Y) where i1 < i2 < · · · < ik. We have |R| = k. There is a unit disk d that covers all centers in R but no other centers cj for j ≤ ik. It follows that d also covers only ci1 and ci2 at the stage when center ci2 is presented. Therefore, we have {ci1, ci2} ∈ min(RC_Y). As the ranges are min-ranges, k must be 2; otherwise {ci1, ci2} is a proper subset of R, which is a contradiction.

To analyze the graph structure of G, which corresponds to (C, min(RC)) (or equivalently (Y, min(RC_Y)), as Y can be C), we consider the geometric properties of the unit disks D = {d1, d2, . . . , dn} where di is centered at ci. Let A_D denote the partition of the plane by the unit disks in D. A face is defined to be a partition in A_D, i.e., a face is a maximal contiguous region covered by the same set of unit disks in D. A_Di is defined similarly to A_D but corresponds to the partition by Di = {d1, d2, . . . , di}. See Figure 4 for an example. We say that a face is lv-k if the face is covered by k unit disks.
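Concerning the independent-set step invoked above: since G is a simple graph with at most 11n − 1 edges (Lemma 9), even a simple minimum-degree greedy rule yields an independent set of size Ω(n). Hochbaum's algorithm [11] is the one actually cited; the sketch below is ours and illustrative only:

    def large_independent_set(adj):
        # Repeatedly pick a surviving vertex of minimum current degree and
        # delete it together with its neighbors. For graphs with O(n) edges,
        # a standard averaging argument gives an independent set of size
        # Omega(n).
        alive = set(adj)
        I = set()
        while alive:
            v = min(alive, key=lambda u: sum(1 for w in adj[u] if w in alive))
            I.add(v)
            alive.discard(v)
            alive -= adj[v]
        return I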

² For the static dual CF-coloring (of a set of points) with respect to disks [9], this simple graph is in fact the Delaunay graph induced by the set of points. However, in the dynamic offline case, the graph may not be the Delaunay graph.


We first give two properties, Lemmas 6 and 7, of A_D with respect to a particular unit disk.

Lemma 6. Let D be a set of unit disks whose centers are within a unit hexagon. For any d ∈ D, there are at most two lv-1 faces in A_D covered by d.

Proof. Any three disks whose centers are in a unit hexagon must all intersect in at least one point (or a face). Figures 2 and 3 show the two different cases. For Case 1, let p1, p2 and p3 be the three intersection points on the outer boundary of the union of the three disks. It can be proved that any unit disk is too small to inscribe all three points p1, p2 and p3 so as to form three lv-1 faces of a unit disk. For Case 2, again there is no way for the new unit disk to create three lv-1 faces.


Fig. 2. Three unit disks intersect with each other: Case 1

Fig. 3. Three unit disks intersect with each other: Case 2

The following lemma gives a bound on the number of a special kind of lv-2 faces of a unit disk, defined as follows. For a unit disk d ∈ D, a p-lv-2 face of d in A_D is defined to be a maximal intersecting region of d and a lv-1 face in A_{D−{d}} such that the intersecting region is strictly smaller than the lv-1 face. (Note that p-lv-2 faces are defined with respect to particular unit disks. For example, in Figure 4, f1 is a lv-2 face of both d1 and d2, but f1 is a p-lv-2 face of d2 and not a p-lv-2 face of d1.)

Lemma 7. Let D be a set of unit disks whose centers are within a unit hexagon. For any d ∈ D, d has at most nine p-lv-2 faces in A_D, and each of these faces corresponds to d intersecting a distinct unit disk in D.

Proof. We prove by contradiction. Assume that d has ten or more p-lv-2 faces. Each p-lv-2 face shares part of its boundary with d, because by the definition of a p-lv-2 face the face must be next to some lv-1 face of a disk other than d. For each of the ten p-lv-2 faces, select a point in the face which is on the boundary of d, called an f-point. Consider five of the f-points in alternate positions. See Figure 5. Out of these five f-points, there are at least three of them with the following property.


Fig. 4. An example of lv-1 and lv-2 faces

For an f-point p, the part of the boundary of d defined by the f-points on its left, pℓ, and right, pr, occupies at most 1/3 of the whole circumference of d. Effectively, every point on the boundary of d from pℓ to pr through p has distance at most 1 from either pℓ or pr.

Let p1, p2 and p3 be the three f-points satisfying the property. Let c1, c2 and c3 be the centers of the three disks in D that intersect d to form the p-lv-2 faces defined by p1, p2 and p3, respectively. By the property, c1, c2 and c3 must all be outside d (because the distance between pℓ and pr is short), and all f-points are outside the disks centered at c1, c2 and c3, except p1, p2 and p3, respectively. See Figure 5. It follows that there is no region in the plane covered by d and all the disks centered at c1, c2, c3. In other words, c1, c2, c3 and the center of d are not in the same unit disk or unit hexagon, which contradicts the assumption.

Recall that our task is to bound the number of edges in G, where G is the simple graph corresponding to (C, min(RC)). The following lemma relates the edges in G to the lv-2 faces in A_Dj for some j.

Lemma 8. There is an edge (ci, cj) in G for i < j if and only if there is a lv-2 face covered by unit disks di and dj in A_Dj.

Proof. If (ci, cj) is in G, then {ci, cj} ∈ min(R_Cj) ⊆ R_Cj. There is a unit disk d covering ci and cj but no other ck for k ≤ j. It follows that the center of d is at distance more than one from every other such ck. Thus there is some region covered only by di and dj in A_Dj, which is a lv-2 face. Similarly, we can prove the other direction.

We are now ready to bound the number of edges in G.

Lemma 9. There are at most 11n − 1 edges in G, where n is the number of centers in C.

Proof. For 1 ≤ k ≤ n, let deg⁻(ck) denote the number of edges (ck, ci) in G for i < k. By Lemma 8, deg⁻(ck) is equal to the number of lv-2 faces of dk


Fig. 5. Left: Ten f-points on the boundary of a disk, five highlighted in alternate positions. Right: The three chosen f-points p1, p2 and p3 and the corresponding disks centered at c1, c2 and c3 that cover them.

in A_Dk. Let pcover(k) be the number of p-lv-2 faces of dk in A_Dk. Define an f-lv-2 face of dk to be a lv-2 face of dk that is not a p-lv-2 face of dk. Let fcover(k) = deg⁻(ck) − pcover(k) denote the number of f-lv-2 faces of dk. Let create(k) be the number of lv-1 faces of dk in A_Dk. Since each f-lv-2 face of dk must correspond to a distinct lv-1 face of some di in A_Dk−1, i.e., i < k, overall we have Σ_{1≤k≤n} create(k) ≥ Σ_{1≤k≤n} fcover(k). Together with Lemma 7, the number of edges in G is equal to

Σ_{1≤k≤n} deg⁻(ck) = Σ_{1≤k≤n} (pcover(k) + fcover(k))
                   ≤ Σ_{1≤k≤n} (9 + fcover(k))
                   ≤ 9n + Σ_{1≤k≤n} create(k)
                   ≤ 9n + 2n − 1    {since d1 has only one lv-1 face in A_D1}
                   ≤ 11n − 1

5 Remarks and Open Problems

Although the paper focuses on minimizing the number of colors used in constructing the CF-coloring, we would like to mention the time complexity of our approach. In Algorithm CF-color, the running time mainly depends on the time to find an independent set in a simple graph times the number of iterations, which turns out to be O(log n) if the size of the independent set is a constant fraction of that of the vertex set. Hochbaum [11] gave an algorithm that finds an independent set (of a graph having a linear number of edges) of size Ω(n) in O(n^{3/2}) time. As a result, our algorithm runs in O(n^{3/2} log n) time.


In the dynamic offline model, we are given the whole sequence of disks at the beginning, so the corresponding range space is known and fixed. That is why the general framework of the independent set method works. However, in the online (or even the online absolute position) setting, the range space is not known in advance, and thus the general framework cannot be applied. An interesting open problem is to develop a deterministic algorithm that solves the online version of the problem with a polylogarithmic number of colors.

References
1. Ajwani, D., Elbassioni, K.M., Govindarajan, S., Ray, S.: Conflict-free coloring for rectangle ranges using O(n^{.382}) colors. In: The 19th Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 181–187 (2007)
2. Alon, N., Smorodinsky, S.: Conflict-free colorings of shallow discs. In: The 22nd ACM Symposium on Computational Geometry (SoCG), pp. 41–43 (2006)
3. Bar-Noy, A., Cheilaris, P., Olonetsky, S., Smorodinsky, S.: Online conflict-free colorings for hypergraphs. In: Arge, L., Cachin, C., Jurdziński, T., Tarlecki, A. (eds.) ICALP 2007. LNCS, vol. 4596, pp. 219–230. Springer, Heidelberg (2007)
4. Bar-Noy, A., Cheilaris, P., Smorodinsky, S.: Conflict-free coloring for intervals: from offline to online. In: The 18th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 128–137 (2006)
5. Chen, K.: How to play a coloring game against a color-blind adversary. In: The 22nd ACM Symposium on Computational Geometry (SoCG), pp. 44–51 (2006)
6. Chen, K., Fiat, A., Kaplan, H., Levy, M., Matoušek, J., Mossel, E., Pach, J., Sharir, M., Smorodinsky, S., Wagner, U., Welzl, E.: Online conflict-free coloring for intervals. SIAM J. Comput. 36(5), 1342–1359 (2007)
7. Chen, K., Kaplan, H., Sharir, M.: Online conflict-free coloring for halfplanes, congruent disks, and axis-parallel rectangles. ACM Transactions on Algorithms (in press)
8. Elbassioni, K.M., Mustafa, N.H.: Conflict-free colorings of rectangles ranges. In: Durand, B., Thomas, W. (eds.) STACS 2006. LNCS, vol. 3884, pp. 254–263. Springer, Heidelberg (2006)
9. Even, G., Lotker, Z., Ron, D., Smorodinsky, S.: Conflict-free colorings of simple geometric regions with applications to frequency assignment in cellular networks. SIAM J. Comput. 33(1), 94–136 (2003)
10. Har-Peled, S., Smorodinsky, S.: Conflict-free coloring of points and simple regions in the plane. Discrete & Computational Geometry 34(1), 47–70 (2005)
11. Hochbaum, D.S.: Efficient bounds for the stable set, vertex cover and set packing problems. Discrete Applied Mathematics 6(3), 243–254 (1983)
12. Pach, J., Tóth, G.: Conflict-free colorings. In: Aronov, B., Basu, S., Pach, J., Sharir, M. (eds.) Discrete and Computational Geometry – The Goodman-Pollack Festschrift. Springer, Heidelberg (2003)
13. Smorodinsky, S.: Combinatorial Problems in Computational Geometry. PhD thesis, Tel-Aviv University (2003)
14. Smorodinsky, S.: On the chromatic number of some geometric hypergraphs. In: The 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 316–323 (2006)

Experimental Analysis of Scheduling Algorithms for Aggregated Links

Wojciech Jawor, Marek Chrobak⋆, and Mart Molle

Department of Computer Science, University of California, Riverside, CA 92521

⋆ Work supported by NSF grants OISE-0340752, CCR-0208856 and CCF-0729071.

Abstract. We consider networks with aggregated links, that is, networks in which physical link segments between two interconnected devices are grouped into a single logical link. Network devices supporting such aggregated links must implement a distribution algorithm responsible for choosing the transmission link for any given packet. Traditionally, in order to maintain packet ordering within conversations, all distribution algorithms transmitted packets from a given conversation on a single link. This approach is attractive for its simplicity, but in some conditions it tends to under-utilize the capacity of the link. On the other hand, due to variations in packet lengths, sending packets from the same conversation simultaneously along different physical links requires careful scheduling to ensure that the packets arrive in the correct order. Recently, Jawor et al. [6] formulated this packet scheduling problem as an online optimization problem and proposed an algorithm called Block. The focus of the work in [6] was purely theoretical: to achieve an optimal competitive ratio. In this paper we study this problem in an experimental setting. To this end, we develop a refined version of Algorithm Block, and we show, through experimental analysis, that it indeed significantly reduces the maximum amount of time packets spend in buffers before transmission, compared to the standard methods based on hashing.

1 Introduction

Link aggregation is a method of grouping two or more physical links between two network devices in such a way that a client can treat the group as a single virtual link. A collaborative use of multiple links, apart from increased bandwidth, improves resiliency to failures. In the event of a link failure, traffic may be redistributed from the broken link to the remaining links in the group. This way the connection and data flow between the interconnected devices are maintained, and the loss only reduces the available capacity. Another benefit is that traffic load may be balanced across different links. These advantages, combined with the low cost of the technology, have made it very popular: in a survey [3] of 38 major ISPs, only two (smaller) ISPs did not have parallel links between nodes. However, the technology also introduces new challenges. Since packets transmitted between two interconnected devices may be serviced concurrently by


several physical links (possibly of varying bandwidth), the order in which the data packets arrive at the receiver (i.e., the order in which the last bits of packets arrive) may be different from the order in which they originally arrived at the sender. The reordering is an important issue for systems with aggregated links, as it may have a direct bearing on the performance of transmission control protocols, which routinely treat certain instances of out-of-order packet delivery as a congestion signal [4]. This influences the performance of the whole system. In fact, Bennett et al. [2] argue that the parallelism in Internet components and links is one of the main reasons for packet reordering, contrary to a common belief that reordering is caused by "pathological" behavior (i.e., by incorrect or malfunctioning network components).

Network traffic. The traffic entering the network consists of disjoint conversations, where a conversation is the traffic between a distinguishable source-destination pair. (In the literature the term flow is often used; unfortunately it conflicts with the scheduling terminology.) In the network, packets are sent from one node to the next, along the designated route. Each node is equipped with a scheduler which receives packets from conversations that traverse the node, and for each packet it chooses its transmission time and an output link. We assume that all physical links have the same speed. Even in this scenario, however, packet reordering will occur, due to the variation in packet lengths and the fact that the ordering of the packets at the destination is determined by the arrival times of their last bits. For example, suppose a conversation contains two packets of lengths p1 and p2 that arrive at the sender in this order. If p1 > p2, and if we send both packets simultaneously along different physical links, the second packet will arrive at the destination p1 − p2 time units (bits) earlier than the first one. Thus the sender needs to delay sending the second packet by at least p1 − p2 time units. Note that this does not necessarily mean that the link is idle during this time, for it may be used to transmit packets from other conversations.

Online algorithms. In many practical scenarios algorithms do not have access to the entire input instance at the beginning of the computation. Instead, the input is revealed over time, and the algorithm must react to the new information without knowledge of the future. Such algorithms are called online, as opposed to offline algorithms, which have complete knowledge of the input instance at the beginning of the computation. Online algorithms are evaluated using competitive analysis, introduced by Sleator and Tarjan [7]. For an online algorithm A, let A(I) denote the schedule produced by A on an instance I. For a minimization problem, an algorithm A is defined to be r-competitive if |A(I)| ≤ r · |OPT(I)|, where OPT(I) is an optimum (offline) schedule of I, and |·| denotes the value of the objective function of a schedule. The smallest such value r is called the competitive ratio of A.

Past work. The IEEE 802.3ad standard [5] defines an implementation of aggregated links between devices in Local Area Networks. The standard does not specify the packet scheduling algorithm, but it does assume that the algorithm


neither reorders nor duplicates the packets. In practice, the requirement of maintaining packet ordering is met by ensuring that all packets that compose a given conversation are transmitted on a single physical link. The assignment of conversations to links is achieved by using a hash function. This approach has several drawbacks: First, it does not fully utilize the capacity of an aggregated link if the number of conversations is smaller than the number of links. Second, such an algorithm does not provide load balancing, i.e., if traffic increases beyond a single channel's bandwidth, it is not distributed among additional links. (For illustration, suppose we have two physical links and three conversations. Then in the hashing scheme, one link's load will be double that of the other.) Third, it is hard, or even impossible, to design a hash function that would distribute the traffic well in all situations. And finally, in some (common) configurations the link aggregation algorithm (which is part of the Link Layer) must violate the layered architecture of network protocols by accessing higher-layer information in order to compute useful hash functions.

A different approach to the problem has recently been proposed by Jawor et al. [6]. They formulate the problem as an online multiprocessor job scheduling problem with the objective to minimize the maximum flow time. The processors represent physical links and jobs represent packets. To model the packet ordering preservation property, the jobs are required to complete in the first-released-first-completed order (FRFC, for short). Algorithm Block developed in [6] (which we describe in detail later on) produces schedules in which packets composing a single conversation may be sent on several different links in the aggregation group. The algorithm makes sure that the packets arrive at the receiver in the correct order by taking packet lengths into account to determine their transmission times. In fact, the algorithm does not distinguish between different conversations, but treats the whole input sequence as one large conversation in which the packet ordering is induced by the ordering of packets in the original conversations. The main result in [6] is that Block is O(√(n/m))-competitive for the maximum flow time objective function, where n is the number of packets and m the number of links in the aggregation group. They also show that this bound is asymptotically optimal.

Our results. The focus of the work in [6] was purely theoretical, namely to achieve an optimal (asymptotic) competitive ratio. In contrast, in this paper we study this problem in an experimental setting. We retain the fundamental idea of Block and, in order to achieve satisfactory experimental performance, we enhance it with several heuristics. Roughly, Block works by scheduling incoming packets in disjoint blocks, often delaying packet transmissions to wait for the start of the next block, even though there may be non-utilized links. This does not affect the asymptotic competitive ratio, but is likely to adversely affect performance in practice. Our heuristics are designed to join these blocks more efficiently, to optimize link utilization while preserving correct packet ordering. We present a series of experiments on real network traces in which we compare the two algorithms and an algorithm based on a hashing function. Our experiments show that our algorithm significantly reduces the maximum amount


of time packets spend in buffers before transmission, compared to the standard methods based on hashing. We also discuss implementation issues; in particular, we propose a hardware implementation of our method.

2 FRFC Scheduling

To model the situation described in the introduction as a scheduling problem, we associate a job with each network packet, and a machine with each physical link. The goal is to optimize link utilization, under the constraint that packets complete their arrivals at the receiver in the order of their arrivals at the sender. Using scheduling terminology, we state this problem as follows: We are given n jobs (packets) organized in disjoint chains (conversations), with each job j specified by a pair (rj, pj), where rj is a positive release (arrival) time and pj is the processing time, or length, of the job. We assume that minj rj = 0. In addition, the jobs are ordered so that if a job j precedes j′ (we simply write j < j′) then rj ≤ rj′. The ordering of job indices within a chain represents the ordering of packets in a conversation. We assume that at the device where scheduling takes place the packets arrive in the correct order, which justifies the ordering of the release times.

A schedule A is a function which, for each job j, specifies the job's starting time S^A_j and a machine executing j. The completion time of job j is C^A_j = S^A_j + pj. The objective is to construct a schedule A which satisfies the following constraints: (1) S^A_j ≥ rj, (2) C^A_j − S^A_j = pj, (3) if jobs i and j are scheduled on the same machine then C^A_i ≤ S^A_j or S^A_i ≥ C^A_j, and (4) for any two jobs j < j′ from the same chain, we have C^A_j ≤ C^A_{j′}. Whenever condition (4) is satisfied we say that the jobs are scheduled in FRFC order.

Given the above constraints, we want to optimize the machine utilization. Let F^A_j = C^A_j − rj denote the flow time of a job j in schedule A. We aim at constructing schedules which minimize the maximum flow time F^A_max = maxj F^A_j of A. The use of this function is motivated by Quality-of-Service applications in which, in order to provide delay guarantees, an upper bound on the time each job spends in the system must be given. This helps avoid undesirable starvation issues. In the process, we will also construct schedules A to minimize the maximum completion time, or makespan, C^A_max = maxj C^A_j. By F*_max(I) we denote the (offline) optimum flow time of I, and by C*_max(I) its optimal makespan. (Note that these optima may be realized by different schedules.)
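For concreteness, here is a minimal Python checker for constraints (1)-(4); it is a sketch for clarity, not part of the original paper. Following the algorithms in this paper, it treats all jobs as a single chain, so condition (4) is enforced over the whole sequence.

    # A minimal feasibility checker for the FRFC scheduling model above.
    def is_feasible_frfc(jobs, schedule):
        """jobs: list of (r_j, p_j); schedule: list of (S_j, machine)."""
        # (1) no job starts before its release; (2) C_j = S_j + p_j is implicit
        if any(s < r for (r, _), (s, _) in zip(jobs, schedule)):
            return False
        # (3) jobs on the same machine must not overlap
        per_machine = {}
        for (_, p), (s, mach) in zip(jobs, schedule):
            per_machine.setdefault(mach, []).append((s, s + p))
        for intervals in per_machine.values():
            intervals.sort()
            if any(c1 > s2 for (_, c1), (s2, _) in zip(intervals, intervals[1:])):
                return False
        # (4) FRFC: completion times respect the job order
        comp = [s + p for (_, p), (s, _) in zip(jobs, schedule)]
        return all(c1 <= c2 for c1, c2 in zip(comp, comp[1:]))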

3 Algorithm Block

Algorithm Block [6] schedules jobs in blocks by repeatedly using an offline scheduling procedure as a black box. We describe this scheduling procedure first. It is designed to minimize the makespan of the schedule. To simplify its description, we will assume that rj = 0 for all jobs j, i.e., all jobs are available at time 0. In this special case minimizing maximum completion time


    /* initialize */
    for i ← 1 to m do Ti ← 0
    /* create auxiliary schedule X */
    for j ← n downto 1 do
        l ← argmax{Ti : 1 ≤ i ≤ m}
        schedule j on machine l at time S^X_j ← Tl − pj
        Tl ← S^X_j
    /* construct the final schedule A */
    ξ ← minj S^X_j
    for j ← 1 to n do S^A_j ← S^X_j − ξ

Fig. 1. Algorithm COpt

of all jobs is equivalent to minimizing the maximum flow time of the schedule. Also, recall that, although all jobs have equal release times, the order in which they are required to complete in the final schedule is fixed. The algorithm is shown in Figure 1. It receives on input a set of jobs indexed 1, 2, . . . , n, and outputs an FRFC schedule of these jobs. It first computes an auxiliary schedule X and then shifts it to meet the release times.

Theorem 1. (Jawor et al. [6]) Algorithm COpt computes a schedule that minimizes maxj C^A_j.

As explained earlier, COpt is an essential component of the online algorithm Block whose objective is to minimize the maximum flow time of a schedule.

Algorithm Block: The algorithm proceeds in phases numbered 1, 2, 3, . . ., where phase i starts at time βi. First, let β1 = 0. Consider phase i, and let Qi be the set of jobs pending at time βi. We apply algorithm COpt to schedule all jobs in Qi and shift the resulting schedule forward by βi. Suppose that the last job from Qi completes at time βi + δi. Then βi+1 ≥ βi + δi is the first time when there is at least one pending job. (If no more jobs arrive, the computation completes.)
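The following Python sketch is a direct transcription of COpt (Fig. 1) together with the phase loop of Block; it is an illustration under the stated model (equal-speed machines, jobs given as (release, length) pairs sorted by release time), not the authors' implementation.

    def copt(p, m):
        """COpt (Fig. 1): p[j] are job lengths in the required completion
        order; returns (start, machine) per job, earliest start at time 0."""
        n = len(p)
        T = [0.0] * m                  # current front of each machine (<= 0)
        S, mach = [0.0] * n, [0] * n
        for j in range(n - 1, -1, -1):             # build X backwards from time 0
            l = max(range(m), key=lambda i: T[i])  # machine with the latest front
            S[j] = T[l] - p[j]
            mach[j] = l
            T[l] = S[j]
        xi = min(T)                                # xi = min_j S[j]
        return [(S[j] - xi, mach[j]) for j in range(n)]

    def block(jobs, m):
        """Block: schedule pending jobs in disjoint blocks using copt."""
        out, i, n, t = [], 0, len(jobs), 0.0
        while i < n:
            t = max(t, jobs[i][0])                 # beta_i: a job is pending
            k = i
            while k < n and jobs[k][0] <= t:       # Q_i: jobs pending at beta_i
                k += 1
            sched = copt([p for _, p in jobs[i:k]], m)
            end = t
            for (s, l), (_, p) in zip(sched, jobs[i:k]):
                out.append((s + t, l))             # shift the block forward by beta_i
                end = max(end, s + t + p)
            t, i = end, k                          # beta_{i+1} >= beta_i + delta_i
        return out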

4 Algorithm JBlock

In this section we introduce Algorithm JBlock, which is an improved version of Block. The new algorithm performs better in practice thanks to simple heuristics. There are two places where Block could be improved. First, the algorithm does not start a new block unless all jobs from previous block have been completed, even though some machines may be idle and new jobs may be pending. Second, during the process of building a new block the algorithm shifts jobs to the right, all by the same amount. Since the jobs in a block have, in general, different release times, such uniform shifting may unnecessarily increase the maximum flow time of this block.


Consequently, in JBlock, the jobs in a given block may be shifted by different amounts, and as a result, unlike in Block, machines may complete their execution at different times. This will affect the definition of a block: a new block is formed as soon as one machine becomes idle. It also complicates the process of joining blocks, as now two adjacent blocks have "jagged" boundaries. This provides an opportunity for speedup, since the jobs that start earliest in the new block can be assigned to the machines that become idle earlier in the old block. But in doing this we also need to be careful not to violate the FRFC job ordering.

Joining Blocks. We first aim at improving the procedure of joining blocks in the output schedule. More precisely, we would like to solve the following problem: Given times λi for each machine i = 1, 2, ..., m and a set Q of jobs pending at time mini λi, compute an FRFC schedule A such that the following conditions are satisfied: (c1) if job j ∈ Q is scheduled on machine i, then S^A_j ≥ λi; (c2) C^A_j ≥ maxi λi for all j ∈ Q; and (c3) A has minimum completion time. Intuitively, each λi represents the last completion time of jobs from the previous block scheduled on machine i. Condition (c2) is needed to preserve the FRFC ordering. Without loss of generality, we can assume that λ1 ≥ λ2 ≥ ... ≥ λm, in which case condition (c2) reduces to C^A_j ≥ λ1 for all j ∈ Q.

Since in Block each block is constructed using Algorithm COpt, we concentrate on modifying this algorithm first. Let the load of a machine be the sum of processing times of the jobs scheduled on that machine. Throughout this section we assume that in the schedule returned by COpt the load of machine i is not greater than the load of machine i′, for any two machines i < i′. (Otherwise jobs can be appropriately reassigned.) Algorithm JCOpt, shown in Figure 2, solves the problem stated above. The main idea is simple: first compute the assignment of jobs to machines using the ordering of the values λi and Algorithm COpt, and then adjust the starting times to satisfy conditions (c1) and (c2). In the assignment procedure we use the fact that λ1 ≥ λ2 ≥ . . . ≥ λm and that machine loads increase with machine indices.

Theorem 2. Algorithm JCOpt computes an FRFC schedule which satisfies conditions (c1), (c2), and (c3).

    /* Step 1. Initialize */
    V ← schedule computed by COpt on Q
    for i ← 1 to m do τi ← min{S^V_j : j scheduled on machine i}
    /* Step 2. Shift jobs */
    Δ ← max(λ1 − C^V_1, maxi (λi − τi))
    for j ← 1 to n do S^A_j ← S^V_j + Δ

Fig. 2. Algorithm JCOpt
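A corresponding Python sketch of JCOpt follows, again as an illustration rather than the authors' code. The helper copt_by_load reindexes the machines of copt (from the sketch above) so that loads are non-decreasing, which is the convention JCOpt assumes; lam must be given in non-increasing order.

    def copt_by_load(p, m):
        """copt with machines renumbered so loads are non-decreasing."""
        sched = copt(p, m)
        load = [0.0] * m
        for (s, mach), pj in zip(sched, p):
            load[mach] += pj
        rank = {old: new for new, old in
                enumerate(sorted(range(m), key=lambda i: load[i]))}
        return [(s, rank[mach]) for (s, mach) in sched]

    def jcopt(p, lam):
        """JCOpt (Fig. 2): lam[0] >= ... >= lam[m-1] are machine ready times,
        so the least-loaded machine is paired with the largest lambda."""
        m = len(lam)
        sched = copt_by_load(p, m)
        tau = [float("inf")] * m           # earliest start per machine in V
        for s, mach in sched:
            tau[mach] = min(tau[mach], s)
        c1 = sched[0][0] + p[0]            # completion time of job 1 in V
        delta = max(lam[0] - c1,
                    max(lam[i] - tau[i] for i in range(m)))
        return [(s + delta, mach) for (s, mach) in sched]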


Fig. 3. The idea of JCOpt

Proof. As usual, number the jobs 1, 2, ..., n. The following claim was proven in [6]:

(∗) Let O be any feasible schedule and let A be the schedule computed by COpt. Then C^O_max − C^O_j ≥ C^A_max − C^A_j for all j = 1, 2, . . . , n.

We first observe that A is an FRFC schedule. This follows from the fact that V is an FRFC schedule and S^A_j = S^V_j + ∆ for all j. Fix a job j. Let i′ denote the machine on which j is executed in V and A. By the definition of JCOpt we have S^A_j = S^V_j + ∆ ≥ S^V_j + maxi(λi − τi) ≥ S^V_j + λ_{i′} − τ_{i′} ≥ λ_{i′}, proving that (c1) holds. We also have C^A_j = C^V_j + ∆ ≥ C^V_j + λ1 − C^V_1 ≥ λ1. This proves that (c2) holds.

It remains to prove that A has minimum makespan among all FRFC schedules satisfying conditions (c1) and (c2). Let O be any schedule which satisfies these conditions. We prove that C^A_max ≤ C^O_max.

Suppose that ∆ = λ1 − C^V_1. Then C^A_1 = λ1. By (∗) we have C^A_1 ≥ C^O_1 + C^A_max − C^O_max ≥ λ1 + C^A_max − C^O_max = C^A_1 + C^A_max − C^O_max, where the last inequality follows from the fact that O satisfies (c2) and the equality from the case condition. This proves that C^A_max ≤ C^O_max.

Now suppose that ∆ = maxi(λi − τi). Choose k such that ∆ = λk − τk. Let j_i denote the first job started on machine i in A and let J = {j_i : i ≥ k}. By the definition of JCOpt and the choice of k we have S^A_{j_k} = λk. Moreover, S^A_{j_l} ≤ S^A_{j_k} for l ≥ k. We distinguish two cases.

Suppose that all jobs in J are scheduled on different machines in O. Then at least one of these jobs must be scheduled on a machine i such that λi ≥ λk. Let j_x be such a job. We have C^O_max ≥ λi + C^O_max − S^O_{j_x} ≥ λk + C^A_max − S^A_{j_x} ≥ C^A_max, since C^O_max − S^O_{j_x} ≥ C^A_max − S^A_{j_x} by (∗).

Now suppose that there are two jobs j_x, j_y ∈ J such that j_x < j_y and both are scheduled on the same machine in O. Since O satisfies (c2) we have S^O_{j_y} ≥ C^O_{j_x} ≥ λ1. So C^O_max ≥ λ1 + C^O_max − S^O_{j_y} ≥ λk + C^A_max − S^A_{j_y} ≥ λk + C^A_max − S^A_{j_k} = C^A_max, since C^O_max − S^O_{j_y} ≥ C^A_max − S^A_{j_y} by (∗). The last inequality holds since, as observed earlier, S^A_{j_y} ≤ S^A_{j_k} for y ≥ k.

Local Reduction of Flow Times. In this section we describe the second heuristic, which improves the performance of Block. We wish to eliminate the uniform

     1: /* Step 1: Create auxiliary schedule X */
     2: for i ← 1 to m do Ti ← 0
     3: for j ← n downto 1 do
     4:     l ← argmax{Ti : 1 ≤ i ≤ m}
     5:     schedule j on machine l at time S^X_j ← Tl − pj
     6:     Tl ← S^X_j
     7: reassign jobs in X to obtain |Ti| ≥ |Ti′| for i ≥ i′
     8: /* Step 2: Shift jobs */
     9: ξ ← mini Ti
    10: for i ← 1 to m do αi ← λi
    11: κ ← λ1
    12: for j ← 1 to n do
    13:     l ← index of machine which executes j
    14:     S^A_j ← max(S^X_j − ξ, κ − pj, αl)
    15:     κ ← αl ← S^A_j + pj

Fig. 4. JCOpt2, a modified version of JCOpt

shifting of jobs at the end of JCOpt. The improvement is based on the fact that not all jobs in a given block have equal release times, and thus, minimizing maximum flow time is not equivalent to minimizing maximum completion time. We now modify Step 2 of JCOpt and prove that the schedule produced by the modified algorithm also satisfies conditions (c1), (c2) and (c3). In addition we introduce several other modifications that will be useful during the implementation of the algorithm. The modified version of JCOpt, which we call JCOpt2, is shown in Figure 4. Let us briefly explain the modifications. In lines 1-6 we compute the auxiliary schedule exactly as in COpt. In line 7 we reassign jobs to satisfy the requirement that for any two machines i < i′ the load of i is not greater than the load of i′ . This reassignment can be easily obtained by sorting |T1 |, |T2 |, . . . , |Tm |, since |Ti | is equal to the load of machine i. (In the actual implementation at this point we only need to compute a permutation σ of {1, . . . , m} that specifies the ordering of machine loads. We do not need to physically modify the assignments of jobs to machines yet, as this can be done in the loop starting on line 12.) In line 9, we compute the amount by which all jobs must be shifted in order to guarantee the feasibility of the schedule. Originally we needed to compute min SjX (i.e., a minimum over n elements). However, here we use the fact that all jobs are available at time 0, and the fact that Ti is equal to the minimum starting time of all jobs on machine i, which reduces the computation to a minimum over m elements. In lines 12-15 we shift the jobs. This loop contains the computations previously done in two separate loops. The following theorem states that the above modifications do not increase the maximum completion time of the jobs in the schedule.
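A Python sketch of JCOpt2 along the lines of Fig. 4, reusing copt_by_load from the previous sketch, is given below; again an illustration, not the paper's code. Note that copt already returns start times shifted by −ξ, so line 9 of Fig. 4 is implicit here.

    def jcopt2(p, lam):
        """JCOpt2 (Fig. 4): lam[0] >= ... >= lam[m-1]."""
        m, n = len(lam), len(p)
        sched = copt_by_load(p, m)   # lines 1-7: auxiliary schedule with
                                     # machines ordered by load; starts are
                                     # already shifted by -xi (line 9)
        alpha = list(lam)            # line 10: alpha_i <- lambda_i
        kappa = lam[0]               # line 11: kappa <- lambda_1
        out = []
        for j in range(n):           # lines 12-15
            s, l = sched[j]
            start = max(s, kappa - p[j], alpha[l])
            out.append((start, l))
            kappa = alpha[l] = start + p[j]
        return out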


Theorem 3. Let A be the schedule produced by JCOpt on an instance Q and let A′ be a schedule produced by JCOpt2. Then A′ satisfies conditions (c1) and (c2), and C^{A′}_max = C^A_max.

Proof. Fix a job j. Let i′ be the machine executing j in A′. Observe that αi ≥ λi for all i = 1, 2, . . . , m during the execution of the loop in lines 12-15, therefore S^{A′}_j ≥ α_{i′} ≥ λ_{i′}, and condition (c1) is satisfied. Similarly, C^{A′}_j ≥ κ ≥ λ1 for all j, hence condition (c2) is satisfied as well. Moreover, since at the beginning of each iteration for which j > 1 we have κ = C^{A′}_{j−1}, A′ is an FRFC schedule.

It remains to prove that C^A_max ≥ C^{A′}_max; the reverse inequality follows from Theorem 2, since A′ satisfies (c1) and (c2). To this end we show that

    ∆ ≥ S^{A′}_j − (S^X_j + ξ)    (1)

by induction on j. This suffices since S^A_j = S^X_j + ξ + ∆.

Suppose first that j = 1. If S^{A′}_1 = S^X_1 + ξ then (1) holds since ∆ ≥ 0. If S^{A′}_1 = κ − p1 = λ1 − p1 then S^{A′}_1 − S^V_1 ≤ λ1 − C^V_1 ≤ ∆. Finally, if S^{A′}_1 = λ_{i′}, then S^{A′}_1 − S^V_1 ≤ λ_{i′} − τ_{i′} ≤ ∆, where i′ is the machine executing job 1 in V.

Now, let j > 1 and assume that the inequality holds for all j′ < j. If S^{A′}_j = S^X_j + ξ then (1) clearly holds. If S^{A′}_j = κ − pj then, since κ = C^{A′}_{j−1}, we have C^{A′}_j = C^{A′}_{j−1}. Therefore (1) holds by the inductive assumption. Finally, let i′ be the machine executing j in V, and consider the case when S^{A′}_j = α_{i′}. From the algorithm, α_{i′} is equal to the last completion time on machine i′. Therefore job j is scheduled back-to-back with some other job j′′ < j, so (1) holds by induction.

We have completed describing the two heuristics. We now present our new online algorithm.

Algorithm JBlock: The algorithm proceeds in phases numbered 1, 2, 3, . . ., where phase i starts at time βi. First, let β1 = 0 and let λ1 = λ2 = . . . = λm = 0. Consider phase i, and let Qi be the set of jobs pending at time βi. For each machine l, let λl be the last completion time on machine l or the current time, whichever is larger. We apply algorithm JCOpt2 to schedule all jobs in Qi with these λ values. Let βi + δi be the first time when one of the machines completes all jobs assigned to it. Then βi+1 ≥ βi + δi is the first time when there is at least one pending job. (If no more jobs arrive, the computation completes.)

Example. Let us illustrate Algorithm JBlock on an example. Figure 5 shows a 3-machine schedule of the jobs 1 = (0, 1, 1), 2 = (0, 1/2, 1), 3 = (1/4, 2, 1), 4 = (1/4, 1, 1), 5 = (1/4, 1/2, 1), 6 = (1/2, 1, 1), 7 = (4, 1, 1). The first and third blocks are shown in light gray; the second and fourth blocks are shown in dark gray. Initially λ1 = λ2 = . . . = λm = 0, therefore the first block is identical to one constructed by Block. The next time when there are available jobs and a machine is idle is time 1/4. The pending jobs are 3, 4, and 5, λ1 = λ2 = 1, and λ3 = 1/4. We build the second block using algorithm JCOpt2. It first assigns jobs 3, 4, 5, each to a different machine. Since the machine executing 3 is the most loaded, job 3 is assigned to machine 3. Machines 1 and 2 execute the remaining jobs. Finally the jobs are shifted: job 3 to time 1/4, job 4 to time 5/4, and job 5 to time 7/4. Blocks containing 6 and 7 are constructed similarly.
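Putting the pieces together, the following rough simulation of the JBlock phase loop reuses jcopt2 from the sketch above; machine identities are tracked only through their λ-rank within a phase, which suffices for computing start and flow times. It is a sketch for illustration, not the implementation evaluated in Section 5.

    def jblock(jobs, m):
        """jobs: list of (release, length) pairs sorted by release time."""
        starts = []
        finish = [0.0] * m                 # last completion time per machine
        i, n, t = 0, len(jobs), 0.0
        while i < n:
            t = max(t, jobs[i][0])         # phase begins once a job is pending
            k = i
            while k < n and jobs[k][0] <= t:
                k += 1                     # Q_i: jobs pending at time t
            lam = sorted((max(f, t) for f in finish), reverse=True)
            sched = jcopt2([p for _, p in jobs[i:k]], lam)
            finish = list(lam)
            for (s, l), (_, p) in zip(sched, jobs[i:k]):
                starts.append((s, l))
                finish[l] = max(finish[l], s + p)
            t, i = min(finish), k          # next phase: first idle machine
        return starts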


Fig. 5. Illustration of algorithm JBlock

Fig. 6. Implementation of JBlock

Implementation of JBlock. We now propose a hardware implementation of Algorithm JBlock. In our implementation, frames processed in two consecutive phases are stored in two buffers, called the primary and the secondary buffer (depending on the role they currently play). The actual computation is performed on frames stored in the primary buffer, whereas the incoming frames (those to be processed in the next phase) are stored in the secondary buffer. Once the computation proceeds to the next phase, the secondary and primary buffers are switched.

Figure 6 presents the components used during the computation (the secondary buffer is not shown). The primary buffer (as well as the secondary buffer) is a stack (a FILO queue). When JCOpt2 creates the auxiliary schedule (lines 1-7) it processes frames from the last one to the first one, i.e., exactly in the same order in which the frames leave the stack. For each frame i the algorithm computes the starting time (in the auxiliary schedule), denoted Si, and a machine Mi which executes it. During this computation the algorithm uses Array 1 to hold the variables T1, T2, . . . , Tm. After all frames are processed and stored in Stack 2, the algorithm uses Arrays 1 and 2 to compute the mapping σ (line 7), which is stored in Array 3 and used when frames are assigned to appropriate output buffers. Observe that during this computation (the loop in lines 12-15) frames must be processed in the order of their arrivals, i.e., exactly in the same order in which they leave Stack 2. This


part of the computation also uses the variables α1, α2, . . . , αm, which are initially set to λ1, λ2, . . . , λm, respectively. Since Array 2 already contains these values, it may be used to hold the variables α1, α2, . . . , αm during this part of the computation. This is beneficial since, by the definition of the algorithm, at every iteration of the loop starting on line 12, variable αi is equal to the last completion time of the jobs scheduled on machine i. Therefore, when the next phase is executed, say at time t, each variable λi can simply be set to max(t, αi).

5 Experiments

We now describe our experiments and discuss the results. We implemented four algorithms: Hash (using a hashing function), Block and JBlock, and an offline algorithm, FOpt, to compute the optimal schedule. Next, we tested the algorithms on real network traces.

Data and configuration. The simulations use trace data from the MAWI traffic archive [1] collected between January and March 2006. All traces were measured during the hours 2:00pm–2:15pm. Each trace consists of one entry per packet, which includes the packet length (in bytes) and a time stamp for the moment when the end of the packet left the router port. Therefore, the length of packet n divided by the link speed (100 Mbps in this case) gives us its service time, req_n, and the difference between time stamps n and n − 1 is int_n, the time available for transmitting packet n. Since the trace is collected after the packets have been serialized for transmission over the link, there should never be any queueing if we "replay" the trace feeding a link with the same capacity. Therefore, as a first step, we drop all packets from the trace for which req_n > 1.2·int_n, because they must represent measurement errors. In the second step, we superimpose a total of 12 separate traces from the archive to create a synthetic trace file with more traffic and non-synchronized arrivals between different traces. The resulting synthetic trace is a good approximation of the output queue at a router port that forwards traffic arriving on 12 different input ports.

A set of experiments is repeated three times: first on a superposition of traces collected every Wednesday, 1/11-3/29; second on a superposition of traces collected every Thursday, 1/12-3/30; third on a superposition of traces collected every Friday, 1/13-3/31. The total number of packets was 97,319,767, 98,423,380, and 96,781,700, respectively, for the three sets of traces.

In each set of experiments we vary the number of links in the LAG and their bandwidth. We perform two experiments on LAGs containing 4 links, one for 100Mbps links and one for 200Mbps links, and two experiments on LAGs containing 8 links, one for 50Mbps links and one for 100Mbps links. The bandwidths and the number of links were chosen so that there are two pairs of experiments in which the total bandwidth of the LAG is the same (namely 400Mbps and 800Mbps). This allows us to compare the impact of the degree of parallelism of the LAG among experiments while keeping the total bandwidth constant.

We identified conversations based on the source and destination IP addresses and port numbers. Each conversation received a unique identification number,


and was identified by this number throughout the computation. The identification is done before the traces are superimposed, which means that in the final experiment we distinguish between conversations that originated in two different traces, even if their source and destination addresses and port numbers match.

Algorithms. We implemented the following four algorithms: Algorithm Hash, which distributes the packets among the links using a hashing function, Algorithm Block, Algorithm JBlock, and Algorithm FOpt, which computes the optimal offline solution (for all conversations combined into one). Among the above algorithms only Hash exploits the partitioning of the input trace into individual conversations. That algorithm determines the destination link in the LAG based on conversation parameters (source and destination addresses and/or port numbers). The hashing functions used in practice assume that the conversation parameters are distributed in the parameter space uniformly at random and, therefore, the outgoing link will also be chosen uniformly at random if the hashing function is designed properly. This assumption may be invalid, as it depends on the topology of the network. However, since our goal was to determine the efficiency of the overall approach, rather than of a specific hash function, we assign each conversation to a link chosen uniformly at random.

In the case of algorithms Block and JBlock it is possible that two different packets (transmitted on parallel links) arrive at the destination at the very same time. Since we have no guarantees on how the hardware at the destination treats such packets, we have modified both algorithms to include idle times between the completion times of such packets. We have chosen the length of this idle time to be the time required to transmit 1 byte on a single link.

Methodology. In each experiment we compare the maximum flow times of packets scheduled using Hash, Block, JBlock, and the offline optimum. Since computing the offline optimum for more than one conversation is NP-hard [6], in FOpt we, once again, treat all packets as one large conversation with the ordering of packets induced by the original conversations. (For this reason it is possible that the algorithm using a hashing function outperforms the offline optimum.) We measure the maximum flow time in intervals of 1 sec for each of the algorithms, i.e., at the beginning of each 1 sec interval we report the maximum flow time of packets that have been transmitted in the previous interval. In the case of FOpt, packets that arrive in each interval are treated as a separate input instance. We decided to examine the performance of the algorithms on short intervals, instead of the whole 15-minute interval, because measuring the maximum flow time over the entire instance would provide little information on the local behavior of the algorithms.

Results and discussion. The results of the experiments on Wednesday traces are presented in Fig. 7. The graph was obtained from the raw data by sorting the time quanta according to the increasing value of the offline optimum achieved in these quanta, using these values as the x-coordinate, and plotting the results of the other algorithms on the y-coordinate. Both axes are in logarithmic scale. (Additional graphs will appear in the full version of the paper.)


Fig. 7. Relative performance of the scheduling algorithms on Wednesday traces

In all experiments, both Block and JBlock significantly outperform the hashing algorithm, reducing the maximum flow time by a factor of about 3. However, we need to emphasize that this does not imply an overall 3-fold transmission speedup; it only shows the reduction of the flow time of the slowest packet. Recall that we impose an additional restriction on Block and JBlock (as well as on the offline optimum), namely that they maintain the global ordering of packets. In contrast, Hash maintains packet ordering only within conversations. The experiments show that in spite of this additional constraint, both algorithms outperform Hash, regardless of the number of links in the aggregation group and their bandwidth. Since Block and JBlock do not use information on the assignment of packets to conversations, they do not need to violate the layered architecture of network protocols (unlike Hash).

Here are a few more observations that can be made based on the graphs in Figure 7: (a) The hashing algorithm seems insensitive to the particular traffic patterns in different "quanta", because the main body of each clump is horizontal (except for outliers at the top and bottom); (b) Block and JBlock show better correlation with traffic patterns (although generally less than the optimum algorithm); (c) the difference between Block and JBlock is surprisingly small, except in the case of 4 links at 200 Mbps, where JBlock is visibly superior. The improvements of JBlock in the case of 4 links at 200 Mbps appear to be due


to the fact that in this configuration both algorithms are likely to spend most time at block boundaries, where JBlock is superior to Block. (It is well known in queueing theory that one faster server with the combined capacity of two slow servers will finish its workload faster, because it can devote its total effort to one customer if necessary.) The 4 links at 200 Mbps configuration will have shorter busy periods and more frequent transitions from one block schedule to the next compared to the configuration of 8 links at 100 Mbps, so improving the utilization of the block transitions will be more important.

References
1. MAWI Working Group Traffic Archive, http://tracer.csl.sony.co.jp/mawi/
2. Bennett, J.C.R., Partridge, C., Shectman, N.: Packet reordering is not pathological network behavior. IEEE/ACM Transactions on Networking 7(6), 789–798 (1999)
3. Gareiss, R.: Is the internet in trouble? Data Communications Magazine (September 1997)
4. Gharai, L., Perkins, C., Lehman, T.: Packet reordering, high speed networks and transport protocol performance. In: Proceedings of the International Conference On Computer Communications and Networks, pp. 73–78 (2004)
5. IEEE Computer Society: Part 3: Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications. In: IEEE Std 802.3. Standard for Information technology. Telecommunications and information exchange between systems. Local and metropolitan area networks. Specific requirements. The IEEE, Inc. (2002)
6. Jawor, W., Chrobak, M., Dürr, C.: Competitive analysis of scheduling algorithms for aggregated links. Algorithmica 51, 367–386 (2008)
7. Sleator, D., Tarjan, R.E.: Amortized efficiency of list update and paging rules. Commun. ACM 28, 202–208 (1985)

A (2 − c·(log n)/n) Approximation Algorithm for the Minimum Maximal Matching Problem

Zvi Gotthilf, Moshe Lewenstein, and Elad Rainshmidt

Department of Computer Science, Bar-Ilan University, Ramat Gan 52900, Israel
{gotthiz,moshe,rainshe}@cs.biu.ac.il

Abstract. We consider the problem of finding a maximal matching of minimum size, given an unweighted general graph. This problem is well studied and is known to be NP-hard even for some restricted classes of graphs. Moreover, in the case of general graphs, it is NP-hard to approximate the Minimum Maximal Matching (MMM for short) within any constant factor smaller than 7/6. The current best known approximation algorithm is the straightforward algorithm which yields an approximation ratio of 2. We propose the first nontrivial algorithm, which yields an approximation ratio of 2 − c·(log n)/n, for an arbitrary positive constant c. Our algorithm is based on the local search technique and utilizes an approximate solution of the Minimum Weighted Maximal Matching problem in order to achieve the desired approximation ratio.

1 Introduction

1.1 Background

Given an undirected graph G(V, E), a matching M is said to be maximal if no other matching in G contains it. In the Minimum Maximal Matching (MMM) problem, we are asked to find a maximal matching of minimum cardinality. A closely related problem to MMM is the Minimum Edge Dominating Set problem (Min-EDS, for short), where we are asked to find a set of edges (of minimum cardinality) that dominates all the other edges in the graph. MMM and Min-EDS are known to be equivalent problems [13,9]. Thus any approximation algorithm for the MMM problem yielding an approximation ratio of δ can be easily transformed into an approximation algorithm for the Min-EDS problem with the same performance guarantee.

Finding an arbitrary maximal matching M provides a 2-approximation for the MMM problem, since each edge in the optimal solution can cover at most two edges of M. Intuitively, it seems that finding the Minimum Maximal Matching over an unweighted graph is much easier than finding it in the case of a weighted input graph. However, for both problems the current best approximation algorithms yield the same performance guarantee of 2.

1.2 Related Results

Both the Min-EDS and MMM problems were already referred to in the classical work of Garey and Johnson [7] on NP-completeness. Due to their equivalence, we present the related results of both problems. Yannakakis and Gavril [13] show that the Min-EDS problem is NP-complete even on planar graphs or bipartite graphs of maximum degree 3. Moreover, Horton and Kilakos [10] proved NP-completeness of Min-EDS in the cases of line graphs, planar bipartite graphs, perfect claw-free graphs, total graphs and planar cubic graphs.

On the other hand, Yannakakis and Gavril [13] gave a polynomial time algorithm for MMM in trees. Furthermore, polynomial time algorithms were presented for the cases of bipartite permutation graphs, cotriangulated graphs [12] and claw-free chordal graphs [10]. Moreover, the problem admits polynomial time approximation schemes (PTAS) for λ-precision unit disk graphs [11] and planar graphs [1].

Regarding the approximability of Min-EDS and MMM, Carr et al. [4] gave a 2 1/10-approximation algorithm for the weighted variant of Min-EDS. This result was later improved to 2 by Fujito et al. [6]. Moreover, Cardinal et al. [5] gave an expression of the approximation ratio that is strictly less than two for the cases where the input graph has n vertices and at least ε·(n choose 2) edges. Another interesting result is due to Chlebík and Chlebíková [3], who proved that it is NP-hard to approximate Min-EDS (and hence also MMM) within any factor better than 7/6.

There are many optimization problems for which designing a 2-approximation algorithm is trivial, but obtaining a (2 − ε)-approximation (where ε is a positive constant) is extremely hard. Among those problems is the classical Minimum Vertex Cover problem. Improved approximations for Min-VC were suggested in [8,2].

1.3 Our Contribution

In this paper, we focus on finding an improved approximation algorithm for the MMM problem, given an unweighted general graph. We give the first nontrivial approximation algorithm, based on the local search technique, which achieves an approximation ratio of (2 − c·(log n)/n), where c is an arbitrary positive constant and n is the number of vertices of the graph. Starting from an initial maximal matching, our algorithm tries to decrease its size. As long as the size of the current matching is at least 2|OPT| − c log n (where |OPT| is the size of an optimal solution), we can decrease its size by at least one. Hence, we finally obtain a maximal matching of size at most 2|OPT| − c log n. Our algorithm uses a novel technique of exploiting an approximate solution of the Minimum Weighted Maximal Matching problem, over an auxiliary graph, G′, that satisfies a specific property. Throughout this paper we use c as an arbitrary positive value.

2 Preliminaries

In this section, we define the minimum maximal matching problem, its approximation version, and several notations that will be used during our analysis. Given a graph G(V, E), a matching M is said to be maximal if no other matching in G contains it. In MMM, our goal is to find a maximal matching of minimum cardinality. Given an input graph, let OPT be an optimal solution for the MMM problem. Our goal is to find a maximal matching MA such that the ratio between |MA| and |OPT| is minimal. The approximation ratio of the algorithm A is the largest ratio between |MA| and |OPT| over every possible input graph. Throughout this paper, we use M as a maximal matching and we denote by OPT an optimal solution of the MMM problem. Given a maximal matching M, we say that an edge e ∈ M is an M-edge. Similarly, we say that a vertex covered by M is an M-vertex, and we denote by VM the group of all M-vertices. We symmetrically define OPT-edge, OPT-vertex and VOPT.
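As a small illustration of these definitions, the following Python sketch (not part of the original paper) checks whether a given edge set M is a maximal matching of G; here G is given as an adjacency-set dictionary and M as a collection of 2-element frozensets.

    # A helper matching the definitions above: M is a matching in G, and
    # it is maximal iff no edge of G has both endpoints uncovered by M.
    def is_maximal_matching(adj, M):
        covered = set()
        for e in M:
            u, v = tuple(e)
            if v not in adj[u]:                  # edge must exist in G
                return False
            if u in covered or v in covered:     # edges must be disjoint
                return False
            covered |= {u, v}
        return all(u in covered or v in covered
                   for u in adj for v in adj[u])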

3 Algorithm

In this section we describe our approximation algorithm for the MMM problem. Our algorithm receives an arbitrary maximal matching as input and tries to decrease its size using the local search technique, while keeping its maximality property. For a given graph G(V, E), let M be an arbitrary maximal matching and denote by OPT a minimum maximal matching. Let us define the bipartite graph GOPT,M as follows: every vertex of GOPT,M represents a vertex in VM ∪ VOPT and every edge corresponds to an edge e ∈ OPT ∪ M. If e is both an OPT-edge and an M-edge we include two edges between its corresponding vertices. Note that GOPT,M is a multigraph. Observe that the degree of each vertex is at most two and, hence, each connected component of GOPT,M is a simple path or a cycle. See Figure 1 for an example of GOPT,M. Notice that the dashed edges represent an optimal solution while the solid edges represent the current matching, M. The bold vertices in the figure represent the "good" vertices that we define now.

Fig. 1. An example GOPT,M graph


Definition 1. Let v1, v2, v3, v4 be the vertices of an alternating path of length three, with an OPT-edge in the middle (over GOPT,M). We say that v2 and v3 are good vertices and we denote by g the group of all good vertices.

The main goal of the algorithm is to bound the number of such length-three paths, i.e., to bound the size of g. For example, given a maximal matching M such that |g| = 0, we can conclude that |M| ≤ (3/2)|OPT|, since every possible component of GOPT,M can be one of the following:

– A simple path ⇒ a ratio of 2:3 between OPT-edges and M-edges in the worst case.
– A cycle ⇒ an equal number of OPT-edges and M-edges.

On the other hand, if most of the components are simple paths of length three with an OPT-edge in the middle (i.e., |g| ≈ |M|), then |M| ≈ 2|OPT|.

Our algorithm consists of three stages. First we find a subset of VM that contains good vertices only; then we search for an approximated minimum weighted maximal matching, M′, over an auxiliary graph, G′. Finally, during the third stage, we find a maximal matching, M′′, over G(V, E), that contains the edges of M′. Our proof of correctness is based on the following property: the number of non-matched VM vertices in M′′ is more than twice as large as the number of M′′ edges that have one endpoint in V\VM.

The rest of our paper is organized as follows. First we show that during the first stage of our algorithm we find a large enough subset of M-vertices that contains good vertices only. Next, we describe the relatively complex construction of G′ and we prove several important properties regarding the approximated minimum weighted maximal matching that we find over it. At the end, we present our main theorem in which we prove that as long as |M| > 2|OPT| − c log n, our algorithm never fails to find a maximal matching, M′′, such that |M′′| < |M|.

Algorithm 1: MinMax(G(V, E), M)

    while true do
        Divide VM into distinct groups, using the Select procedure.
        for every subset Sj of every distinct group do
            Construct an auxiliary graph G′.
            Find an approximated minimum weighted maximal matching, M′, over G′.
            Find an arbitrary maximal matching, M′′, over G, that contains the edges of M′.
            if |M′′| < |M| then M ← M′′ and exit for loop.
        if for every M′′, |M′′| ≥ |M| then exit while loop and output M.
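The following control-flow sketch in Python mirrors Algorithm 1; select_groups, subsets, build_auxiliary_graph, approx_min_weight_maximal_matching and extend_to_maximal_matching are hypothetical placeholders for the subroutines described in Sections 3.1-3.3, so this is an outline rather than a runnable transcription of the paper's algorithm.

    # An outline only; every helper below is a hypothetical stand-in for a
    # subroutine described later in this section.
    def minmax(G, M):
        improved = True
        while improved:
            improved = False
            for group in select_groups(G, M):          # the Select procedure
                for S in subsets(group):               # every subset S_j
                    G_aux = build_auxiliary_graph(G, M, S)
                    M1 = approx_min_weight_maximal_matching(G_aux)  # 2-approx, as in [6]
                    M2 = extend_to_maximal_matching(G, M1)
                    if len(M2) < len(M):
                        M, improved = M2, True         # matching shrank; restart
                        break
                if improved:
                    break
        return M

Note that enumerating every subset of a group of size 17c log n takes 2^{17c log n} = n^{O(c)} time, so the overall search remains polynomial.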

3.1 Finding 8c log n Good Vertices

In this subsection, we show that during the first stage of the algorithm we find a subset of VM (of size ≥ 8c log n) that contains good vertices only. Later on,


during our analysis, the reason why we need at least 8c log n vertices will become clear. The following lemma provides a lower bound on the number of good vertices.

Lemma 1. If |M| > 2|OPT| − c log n, then |g| ≥ |M| − 3c log n.

Proof: Note that |g| is exactly twice the number of length-three paths (over GOPT,M) with an OPT-edge in the middle. Now, let us assume that |g| < |M| − 3c log n, and we prove by contradiction that |M| ≤ 2|OPT| − c log n. It follows that there are at least 3c log n M-edges that are not part of the above length-three paths. Thus, the number of corresponding OPT-edges in GOPT,M is at least (2/3)(3c log n). Therefore, |OPT| must be at least (1/2)(|M| − 3c log n) + (2/3)(3c log n), and 2|OPT| ≥ |M| + c log n, a contradiction. ⊓⊔

Throughout this paper we assume that |M| > 2|OPT| − c log n, hence we conclude that |g| ≥ |M| − 3c log n. At this point, we describe a polynomial time method for finding a subset of VM that contains at least 8c log n vertices and consists of good vertices only (see the Select procedure in the appendix for details). We divide VM into distinct groups, each of size 17c log n, while the last group may contain fewer vertices (for simplicity we assume every group contains exactly 17c log n vertices). We also demand that each of the above distinct groups does not contain two endpoints of a single M-edge. For every distinct group we check every possible subset of its vertices. In the following lemma we prove that there must be at least one subset (of size ≥ 8c log n) that contains good vertices only.

Lemma 2. Our method finds at least one subset of size ≥ 8c log n that contains good vertices only.

Proof: Let us assume that every such subset contains at least one vertex that is not good. Then every distinct group of size 17c log n must contain more than 9c log n such vertices. Therefore, VM contains more than (9/17)·2|M| vertices that are not good. However, according to Lemma 1 there are at most |M| + 3c log n such vertices in VM. Thus, as long as |M| > 51c log n our method must find an appropriate subset. On the other hand, if |M| ≤ 51c log n, we can easily search for good vertices over every possible subset of VM. ⊓⊔

From now on, we assume that during this first stage, we found an appropriate subset of good vertices.

3.2 G′ Construction

In this subsection, we describe the construction of the auxiliary graph G′ , which is based on: (i) The current maximal matching, M . (ii) The original graph G(V, E). (iii) The subset of VM vertices that we found during the first stage.


The motivation behind our construction is to try to match the subset of good vertices from the first stage with a corresponding set of good vertices (which is unknown to the algorithm). Finding such a matching will decrease the size of g, and hence it will decrease the size of the current maximal matching, M. Before delving into the details regarding the construction of G′, we define the following notations. Given an M-edge e = (vi, vj), we write vi = M(vj) and vj = M(vi). We also use this notation over a set of vertices (e.g., for V′ ⊆ VM, M(V′) is the set of vertices matched to V′ vertices in M). We define the following groups of vertices: (i) V′ - the subset of VM that we found during the first stage. (ii) D′ - a set of dummy vertices; every di ∈ D′ corresponds to vi ∈ V′. (iii) U ⊆ VM \ (V′ ∪ M(V′)) - a vertex vi ∈ U iff for every vj ∈ V′ there are no edges, in E, between M(vi) and M(vj). The reason for defining the group U is to identify a potential group of good vertices that might be matched to V′ vertices over the auxiliary graph G′.

We now describe G′; an example appears in the appendix. Let V ∪ D′ be the group of vertices of G′, and let the set of edges, E′, be the union of the following groups of edges:

1. Edges between every vertex vi ∈ V′ and its dummy vertex, di. Each of these edges (|V′| edges in total) receives a weight of ∞.
2. Internal edges in V′, according to G(V, E). Each of these edges receives a weight of 1.
3. Edges between V′ and U, according to G(V, E). These edges receive a weight of 1.
4. The group of all M-edges whose endpoints are both not in V′. These edges receive a weight of 1.
5. Edges between M(V′) ∪ M(U) vertices and V\VM vertices, according to G(V, E). These edges receive a weight of n^2.

Note that U ∩ M(U) may not be empty. In addition, there are no internal edges in M(V′): since we assume that V′ contains good vertices only, such an internal edge would contradict the maximality of OPT. Altogether, we denote the above auxiliary weighted graph by G′(V ∪ D′, E′). An example figure of the above construction can be found in the appendix (Figure 2): first we present the input graph, and then we show the construction of G′(V ∪ D′, E′) with the appropriate edge weights (unweighted edges have weight 1).
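As a concrete illustration of rules 1-5, here is a Python sketch of the construction (not the paper's code): adj is the adjacency-set representation of G, M maps each M-vertex to its partner, Vp plays the role of V′, vertices are assumed comparable (e.g., integers), and INF and W stand in for the weights ∞ and n^2.

    # A sketch of the G' construction; returns a weighted edge list, with
    # dummy vertices represented as ('d', v).
    def build_G_prime(adj, M, Vp):
        n = len(adj)
        INF, W = float("inf"), n * n
        VM = set(M)
        MVp = {M[v] for v in Vp}
        # U: M-vertices outside V' and M(V') whose partner has no edge
        # to any partner of a V' vertex (definition (iii) above)
        U = {v for v in VM - set(Vp) - MVp
             if all(M[w] not in adj[M[v]] for w in Vp)}
        MU = {M[v] for v in U}
        E = []
        E += [((v, ('d', v)), INF) for v in Vp]                      # rule 1
        E += [((u, v), 1) for u in Vp for v in adj[u]
              if v in Vp and u < v]                                  # rule 2
        E += [((u, v), 1) for u in Vp for v in adj[u] if v in U]     # rule 3
        E += [((v, M[v]), 1) for v in VM
              if v < M[v] and v not in Vp and M[v] not in Vp]        # rule 4
        E += [((u, v), W) for u in (MVp | MU) for v in adj[u]
              if v not in VM]                                        # rule 5
        return E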

3.3 Finding an Approximate Minimum Weighted Maximal Matching

The next stage in our algorithm is finding an approximated minimum weighted maximal matching, M ′ , over G′ (V ∪D′ , E ′ ), using the 2-approximation algorithm presented by Fujito et al. [6]. At the end of this subsection we present useful


properties regarding M′. Those will be used during our analysis, while proving the guaranteed approximation ratio of our algorithm. Before presenting the M′ properties, we need to bound the size of an important group of vertices, VOPT\VM.

Lemma 3. Given a maximal matching M and an edge e = (vi, vj) ∉ M, such that vi and vj are good vertices. If M ∪ e \ {(vi, M(vi)) ∪ (vj, M(vj))} is not a maximal matching then there must be an edge between either M(vi) or M(vj) and a vertex in VOPT\VM.

Proof: Since M is a maximal matching, the only M-vertices that might contradict the maximality of M ∪ e \ {(vi, M(vi)) ∪ (vj, M(vj))} are M(vi) and M(vj). By the definition of the good vertices, an edge between M(vi) and M(vj) would contradict the maximality of OPT. Therefore, there must be an edge, e′, between some vk ∉ VM and M(vi) or M(vj). If vk ∉ VOPT\VM then both endpoints of e′ are outside VOPT, which contradicts the maximality of OPT. ⊓⊔

The following lemma and corollary provide an upper bound on the size of VOPT\VM.

Lemma 4. |M| ≤ 2|OPT| − |VOPT\VM|.

Proof: Recall that if every component of GOPT,M is a simple path of length three with an OPT-edge in the middle, then |M| = 2|OPT|. Moreover, every vi ∈ VOPT\VM must be the first or the last vertex of a simple path over GOPT,M. Thus, along those paths the number of OPT-edges must be greater than or equal to the number of M-edges. ⊓⊔

The following corollary follows from Lemma 4.

Corollary 1. If |M| > 2|OPT| − c log n, then |VOPT\VM| < c log n.

Recall that we assumed that |M| > 2|OPT| − c log n and that V′ contains good vertices only. We now present important properties regarding M′, where for simplicity we assume that |V′| = 8c log n.

Lemma 5. The optimal minimum weighted maximal matching over G′(V ∪ D′, E′) must satisfy the following properties: (i) It does not contain an edge with weight ∞. (ii) It contains less than c log n edges with weight n^2.

Proof: (i) Since the V′ vertices are all good and due to the construction of G′, each vi ∈ V′ can be matched to its OPT-neighbor. (ii) According to Lemma 3 and Corollary 1, we can conclude that both M(V′) and M(U) can have up to |VOPT\VM| (< c log n) edges to V\VM vertices (each of those edges has weight n^2). ⊓⊔

Since we use a 2-approximation algorithm for finding the minimum weighted maximal matching, M′, and according to the above lemma, the following observation follows immediately.


Observation 1. M′ must satisfy the following properties: (i) it does not contain an edge of weight ∞; (ii) it contains fewer than 2c log n edges of weight n².

3.4 Performance Analysis

In this subsection we prove that, as long as |M| > 2|OPT| − c log n, Algorithm 1 never fails to find a maximal matching M′′ such that |M′′| < |M|. During the last stage of our algorithm we search for a maximal matching M′′ over G(V, E) that contains the edges of M′ (without their weights). In our proof we show that the number of M′′-edges between VM and V \ VM is less than half the number of non-matched VM vertices in M′′. The following observations provide structural properties of M′.

Observation 2. Let vi ∈ VM and vj ∈ V \ VM be non-matched vertices in M′. If there is an edge between vi and vj, then vi ∈ U \ M(U).

Proof: According to Observation 1, V′ must be matched in M′. Furthermore, if vi ∈ VM \ (V′ ∪ M(V′) ∪ U ∪ M(U)), it must be matched (to its original M(vi)) in M′, by the construction of G′. Thus, it is sufficient to prove that if such an edge (vi, vj) exists, then vi ∉ M(V′) ∪ M(U). Since every edge between M(V′) ∪ M(U) and V \ VM already exists in G′, and due to the maximality of M′, we can conclude that vi ∈ U \ M(U). ⊓⊔

Observation 3. Let vi ∈ VM and vj ∈ M(V′) be non-matched vertices in M′. If there is an edge between vi and vj, then vi ∈ U \ M(U).

Proof: By arguments similar to those of the above observation, vi ∈ VM \ (M(V′) ∪ U ∪ M(U)) must be matched in M′. Thus, it is sufficient to prove that if there is an edge (vi, vj), then vi ∉ M(V′) ∪ M(U). According to the definition of U, there are no edges between M(U) and M(V′). Moreover, as already mentioned, there are no edges inside M(V′). Therefore, vi ∈ U \ M(U). ⊓⊔

Let x be the number of M(V′) vertices that are matched in M′, and let y be the number of edges between M(V′) and U \ M(U) in M′′. Note that: (i) x ≤ 2c log n; (ii) 8c log n − x is the number of non-matched M(V′) vertices in M′.

Observation 4. Fewer than 2c log n − x vertices of U \ M(U) are not matched in M′.

Proof: According to Observation 1, there are fewer than 2c log n edges between VM and V \ VM in M′. Moreover, by the definition of x there are fewer than 2c log n − x edges between M(U) and V \ VM in M′. Since a vertex vi ∈ U \ M(U) may be non-matched in M′ iff there is an edge between M(vi) and V \ VM, our proof is done. ⊓⊔

In the following lemma we present two properties that lead directly to the proof of our algorithm's correctness.


Lemma 6. M′′ satisfies the following properties: (i) the number of non-matched VM vertices in M′′ is at least 8c log n − x − y; (ii) M′′ contains fewer than 4c log n − x − y edges between VM and V \ VM.

Proof: (i) Note that, according to the construction of G′ and the maximality of M′: (a) M′ does not contain edges between M(V′) and VM; (b) M′′ does not contain an edge between M(V′) and V \ VM. Since |M(V′)| = 8c log n and by the definition of x, there are exactly 8c log n − x non-matched M(V′) vertices in M′. Therefore, by the definition of y and according to Observation 3, we are done with the first property.
(ii) According to Observation 1, M′ contains fewer than 2c log n edges between VM and V \ VM. Moreover, according to Observations 4 and 2, M′′ contains fewer than 2c log n − x additional edges between VM and V \ VM. Thus, we can conclude from Observation 3 and the definition of y that M′′ contains fewer than 2c log n − x − y edges between U \ M(U) and V \ VM. Altogether, M′′ contains fewer than 2c log n + 2c log n − x − y = 4c log n − x − y edges between VM and V \ VM. ⊓⊔

The following theorem follows directly from Lemma 6.

Theorem 1. As long as |M| > 2|OPT| − c log n, our algorithm never fails to find a maximal matching M′′ such that |M′′| ≤ |M| − 1. Thus, our algorithm yields an approximation ratio of (2|OPT| − c log n)/|OPT| ≤ 2 − (c log n)/n.

See Figure 3 in the appendix, where we present an example of the matching M′ over G′(V ∪ D′, E′) and of the construction of M′′.
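For completeness, here is the one-line computation behind the ratio stated in Theorem 1; it only uses the bound at termination, |M| ≤ 2|OPT| − c log n, together with the trivial bound |OPT| ≤ n:

```latex
\frac{|M|}{|OPT|} \;\le\; \frac{2\,|OPT| - c\log n}{|OPT|}
  \;=\; 2 - \frac{c\log n}{|OPT|}
  \;\le\; 2 - \frac{c\log n}{n}.
```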

4 Open Questions

A central question is whether one can improve the above approximation ratio. Moreover, one may ask whether similar techniques can be used to improve the approximation ratios of related problems. Another interesting question concerns the approximability of the weighted version of the MMM problem.

References
1. Baker, B.S.: Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM 41, 153–180 (1994)
2. Bar-Yehuda, R., Even, S.: A local-ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics 25, 27–46 (1985)
3. Chlebík, M., Chlebíková, J.: Approximation hardness of edge dominating set problems. Journal of Combinatorial Optimization 11(3), 279–290 (2006)
4. Carr, R., Fujito, T., Konjevod, G., Parekh, O.: A 2 1/10-approximation algorithm for a generalization of the weighted edge-dominating set problem. Journal of Combinatorial Optimization 5(3), 317–326 (2001)


5. Cardinal, J., Langerman, S., Levy, E.: Improved approximation bounds for edge dominating set in dense graphs. In: Erlebach, T., Kaklamanis, C. (eds.) WAOA 2006. LNCS, vol. 4368, pp. 108–120. Springer, Heidelberg (2007)
6. Fujito, T., Nagamochi, H.: A 2-approximation algorithm for the minimum weight edge dominating set problem. Discrete Applied Mathematics 118, 199–207 (2002)
7. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Co., New York (1979)
8. Halperin, E.: Improved approximation algorithms for the vertex cover problem in graphs and hypergraphs. SIAM Journal on Computing 31(5), 1608–1623 (2002)
9. Harary, F.: Graph Theory. Addison-Wesley, Reading (1969)
10. Horton, J.D., Kilakos, K.: Minimum edge dominating sets. SIAM Journal on Discrete Mathematics 6(3), 375–387 (1993)
11. Hunt III, H.B., Marathe, M.V., Radhakrishnan, V., Ravi, S.S., Rosenkrantz, D.J., Stearns, R.E.: NC-approximation schemes for NP- and PSPACE-hard problems for geometric graphs. Journal of Algorithms 26(2), 238–274 (1998)
12. Srinivasan, A., Madhukar, K., Nagavamsi, P., Rangan, C.P., Chang, M.S.: Edge domination on bipartite permutation graphs and cotriangulated graphs. Information Processing Letters 56(3), 165–171 (1995)
13. Yannakakis, M., Gavril, F.: Edge dominating sets in graphs. SIAM Journal on Applied Mathematics 38, 364–372 (1980)

Appendix

Select Procedure

Procedure Select(M)
1. if |M| ≤ 51c log n then
2.    denote its vertices as a single group and exit.
3. else
4.    divide VM arbitrarily into 2|M|/(17c log n) distinct groups, each of size 17c log n, such that no group contains two vertices of the same M-edge.
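One simple way to realize the division in line 4 is sketched below. This is our own assumption, not necessarily the authors' scheme: listing one endpoint of every M-edge first and all mates afterwards guarantees that any block of consecutive vertices shorter than |M| contains at most one endpoint of each M-edge.

```python
import math

def select(M, c, n):
    """Sketch of Procedure Select; M is a list of M-edges (u, v)."""
    group_size = int(17 * c * math.log(n))
    if len(M) <= 51 * c * math.log(n):
        return [[u for e in M for u in e]]          # a single group
    order = [u for u, _ in M] + [v for _, v in M]   # u_1..u_m, then their mates
    # Endpoints of the same M-edge sit exactly |M| positions apart, and in
    # this branch |M| > 51 c log n > group_size, so no chunk holds both.
    return [order[i:i + group_size]
            for i in range(0, len(order), group_size)]
```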


Fig. 2. G′ construction. Stage 1 shows the input graph, with the V′ vertices and the M-edges marked; Stage 2 shows the auxiliary graph G′(V ∪ D′, E′), with the U and D′ vertices marked and edge weights ∞, n², and 1 (edges drawn without a weight have weight 1).

Fig. 3. M′′ construction. Stage 3 shows the matching M′ over G′, with the U \ M(U) vertices, the VOPT \ VM vertices, and the M′-edges marked; Stage 4 shows the resulting matching M′′ (M′′-edges).

On the Maximum Edge Coloring Problem (Extended Abstract)

Giorgio Lucarelli¹,⋆, Ioannis Milis¹, and Vangelis Th. Paschos²,⋆⋆

¹ Dept. of Informatics, Athens University of Economics and Business, Greece {gluc,milis}@aueb.gr
² LAMSADE, CNRS UMR 7024 and Université Paris-Dauphine, France [email protected]

Abstract. We study the following generalization of the classical edge coloring problem: given a weighted graph, find a partition of its edges into matchings (colors), each one of weight equal to the maximum weight of its edges, so that the total weight of the partition is minimized. We present new approximation algorithms for several variants of the problem with respect to the class of the underlying graph. In particular, we deal with variants which either are known to be NP-hard (general and bipartite graphs), or are proven to be NP-hard in this paper (complete graphs with bi-valued edge weights), or whose complexity question still remains open (trees).

1 Introduction

In the classical edge coloring problem we ask for the minimum number of colors required in order to assign different colors to adjacent edges of a graph G = (V, E). Equivalently, we ask for a partition S = {M1, M2, . . . , Ms} of the edge set of G into matchings (color classes) such that s is minimized. This minimum number of matchings (colors) is known as the chromatic index of the graph and is denoted by χ′(G). In several applications, the following generalization of the classical edge coloring problem arises: a positive integer weight is associated with each edge of G, and we now ask for a partition S = {M1, M2, . . . , Ms} of the edges of G into matchings (colors), each one of weight wi = max{w(e) | e ∈ Mi}, such that their total weight W = ∑_{i=1}^{s} wi is minimized. As the weight wi of each matching is defined to be the maximum weight of the edges colored i, we refer to this problem as the Maximum Edge Coloring (MEC) problem.

⋆ This work has been funded by the project PENED 2003. The project is cofinanced 75% of public expenditure through the EC–European Social Fund, 25% of public expenditure through the Ministry of Development–General Secretariat of Research and Technology of Greece, and through the private sector, under measure 8.3 of the Operational Programme "Competitiveness" in the 3rd Community Support Programme.
⋆⋆ Part of this work has been carried out while the author was with the Department of Informatics of Athens University of Economics and Business.
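To make the objective defined above concrete, here is a small self-contained sketch of ours (all names hypothetical) that evaluates the MEC cost of a given partition and checks that every color class is indeed a matching.

```python
def mec_weight(partition, weight):
    """Total MEC cost of a partition of the edge set into matchings:
    each color class pays the maximum weight of its edges.
    `partition` is a list of lists of edges (u, v); `weight` maps each
    edge to its positive integer weight."""
    for matching in partition:                 # sanity: classes are matchings
        seen = set()
        for u, v in matching:
            assert u not in seen and v not in seen, "adjacent edges share a color"
            seen.update((u, v))
    return sum(max(weight[e] for e in m) for m in partition)

# Example: a path a-b-c with w(ab)=3, w(bc)=1 needs two colors,
# so the optimal MEC value is 3 + 1 = 4.
w = {("a", "b"): 3, ("b", "c"): 1}
print(mec_weight([[("a", "b")], [("b", "c")]], w))   # -> 4
```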


The most common application of the MEC problem arises in the domain of communication systems and, especially, in single-hop systems. In such systems, messages are to be transmitted directly from senders to receivers through direct connections established by an underlying switching network. A node of such a system cannot participate in more than one transmission at a time, while the transmission of messages between several pairs of nodes can take place simultaneously. The scheduler of such a system establishes successive configurations of the switching network, each one routing a non-conflicting subset of the messages from senders to receivers. Given the transmission time of each message, the transmission time of each configuration equals that of the longest message it transmits. The aim is to find a sequence of configurations such that all the messages are transmitted and the total transmission time is minimized. It is easy to see that this situation corresponds directly to the MEC problem.

In practical applications there exists a non-negligible setup delay for establishing each configuration (matching). The presence of such a delay, say d, in an instance of the MEC problem can be easily handled: by adding d to the weight of every edge of G, the weight of each matching in S is also increased by d, incorporating its set-up delay. A natural idea for decreasing the weight of a solution to such a problem is to allow preemption, i.e., to interrupt the transmission of a (set of) message(s) in a configuration and complete it later. However, in this preemptive MEC problem the presence of the set-up delay d plays a crucial role in the problem's complexity [11,4,1].

The generalization of the classical vertex coloring problem analogous to the MEC problem, called Maximum (Vertex) Coloring (MVC), has also been studied in the literature in recent years [2,9,7,5,8,19,18]. In the MVC problem we ask for a partition of the vertices of G into independent sets (colors), each one of weight equal to the maximum weight of its vertices, so that the total weight of the partition is minimized. Like the classical edge and vertex coloring problems, the MEC problem on a general graph G is equivalent to the MVC problem on the line graph, L(G), of G. However, this is not true for every special graph class, since most such classes are not closed under the line graph transformation (e.g., complete graphs, trees and bipartite graphs).

Related Work. It is known that the MEC problem is strongly NP-hard even for (i) complete balanced bipartite graphs [20], (ii) bipartite graphs of maximum degree three and edge weights w(e) ∈ {1, 2, 3} [11,13], (iii) cubic bipartite graphs [7], and (iv) cubic planar bipartite graphs with edge weights w(e) ∈ {1, 2, 3} [5]. Moreover, in conjunction with results (iii) and (iv) above, it has been shown that the MEC problem on k-regular bipartite graphs cannot be approximated within a ratio less than 2^k/(2^k − 1), which for k = 3 becomes 8/7 [7]. This inapproximability result has been improved to 7/6 for cubic planar bipartite graphs [5].

Concerning the approximability of the MEC problem, a natural greedy 2-approximation algorithm has been proposed by Kesselman and Kogan [13] for general graphs. A (2Δ − 1)/3-approximation algorithm for bipartite graphs of maximum degree Δ has been presented in [7], which gives an approximation ratio of 5/3 for Δ = 3. Especially for bipartite graphs of Δ = 3, an algorithm that attains the 7/6 inapproximability bound has been presented in [5].


Table 1. Known approximation ratios for bipartite graphs in [8] and [16] vs. those in this paper

 Δ    [8]    [16]   This paper
 3    1.42   1.17   1.42
 4    1.61   1.32   1.54
 5    1.75   1.45   1.62
 6    1.86   1.56   1.68
 7    1.95   1.65   1.72
 8    >2     1.74   1.76
 9    >2     1.81   1.78
10    >2     1.87   1.80
11    >2     1.93   1.82
12    >2     1.98   1.84
13    >2     >2     1.85
20    >2     >2     1.90
50    >2     >2     1.96

For general bipartite graphs of Δ ≤ 12, algorithms achieving approximation ratios ρ < 2 have also been presented. In fact, an algorithm presented in [8] achieves such a ratio for 4 ≤ Δ ≤ 7, while another one presented in [16] achieves the best known ratios for maximum degrees 4 ≤ Δ ≤ 12 (see the 2nd and 3rd columns of Table 1). However, for bipartite graphs of Δ > 12 the best known ratio is achieved by the 2-approximation algorithm in [13] for general graphs. On the other hand, the MEC problem is known to be polynomial for a few very special cases, including complete balanced bipartite graphs with edge weights w(e) ∈ {1, 2} [20], general bipartite graphs with edge weights w(e) ∈ {1, 2} [7], chains [8] (in fact, this algorithm can also be applied to graphs of Δ = 2), stars of chains, and bounded degree trees [16]. It is interesting that the complexity of the MEC problem on trees remains open.

Our results and organization of the paper. In this paper we further explore the complexity and approximability of the MEC problem with respect to the class of the underlying graph. In particular, we present new approximation results for several variants of the problem, exploiting the general idea of producing more than one solution for the problem and choosing the best of them. The next section starts with our notation and a remark on the known greedy 2-approximation algorithm [13]. Then, combining this remark with a simple idea, we present a first algorithm for general and bipartite graphs. For bipartite graphs, this algorithm achieves better approximation ratios than the algorithms in [8] (for Δ ≥ 4) and [16] (for Δ ≥ 9), whose ratios, in addition, tend asymptotically to 2 as Δ increases.


In Section 3 we present a new algorithm for the MEC problem on bipartite graphs which, like the algorithms in [8] and [16], produces Δ different solutions and chooses the best of them. Our algorithm derives the best known ratios for bipartite graphs for any Δ ≥ 9, which always remain strictly smaller than 2 (see the 4th column of Table 1). Section 4 deals with the MEC problem on trees. An exact algorithm of complexity O(|E|^{2Δ+O(1)}) has been proposed for this case in [16]. In that section we present a generic algorithm for trees depending on a parameter k, which determines both the complexity of the algorithm and the quality of the solution found. In fact, the complexity of the algorithm is O(|E|^{k+O(1)}), and it produces an optimal solution if k = 2Δ − 1, an e/(e − 1)-approximate solution if k = Δ, and a ρ-approximate solution, with ρ < 2, if 2 ≤ k ≤ Δ. Finally, in Section 5 we prove that the MEC problem is NP-complete even on complete graphs with bi-valued edge weights, and we give an asymptotic 4/3-approximation algorithm for general graphs of arbitrarily large Δ and bi-valued edge weights.

2 Notation and Preliminaries

We consider the MEC problem on a weighted graph G = (V, E). By dG(v), v ∈ V (or simply d(v)), we denote the degree of vertex v, and by Δ(G) (or simply Δ) the maximum degree of G. We also consider the edges of G sorted in non-increasing order of their weights, with e1 denoting the heaviest edge of G, that is, w(e1) ≥ w(e2) ≥ . . . ≥ w(em). By S∗ = {M1∗, M2∗, . . . , M∗_{s∗}} we denote an optimal solution to the MEC problem, of weight OPT = w1∗ + w2∗ + . . . + w∗_{s∗}.

We call a solution S = {M1, M2, . . . , Ms} to the MEC problem nice if w1 ≥ w2 ≥ . . . ≥ ws and each matching Mi is maximal in the subgraph induced by the edges E \ ⋃_{j=1}^{i−1} Mj. In the following we consider any (suboptimal or optimal) solution to the MEC problem to be nice. This is due to the next proposition (see also [16]).

Proposition 1. Any solution to the MEC problem can be transformed into a nice one without increasing its total weight. For the number of matchings, s, in such a solution it holds that Δ ≤ s ≤ 2Δ − 1.

The most interesting and general result for the MEC problem is due to Kesselman and Kogan [13], who proposed the following greedy algorithm:

Algorithm 1
1. Sort the edges of G in non-increasing order of their weights;
2. Using this order:
   - Insert each edge into the first matching that fits;
   - If such a matching does not exist, then compute a new matching;
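A direct transcription of Algorithm 1 into code might look as follows; this is a sketch under the assumption that the graph is available as a list of weighted edges, and `greedy_mec` is our hypothetical name for it.

```python
def greedy_mec(edges, weight):
    """Sketch of Algorithm 1: scan edges by non-increasing weight and put
    each edge into the first color class (matching) it fits; open a new
    class when none fits.  Returns the list of matchings."""
    matchings, covered = [], []          # covered[i] = vertices used by matching i
    for u, v in sorted(edges, key=lambda e: -weight[e]):
        for i, used in enumerate(covered):
            if u not in used and v not in used:
                matchings[i].append((u, v))
                used.update((u, v))
                break
        else:                            # no existing matching fits
            matchings.append([(u, v)])
            covered.append({u, v})
    return matchings
```

Each edge is tried against the existing color classes in the order they were opened, exactly as in Line 2 above; ties among equal weights may be broken arbitrarily.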


It is proved in [13] that Algorithm 1 obtains a solution, S, of total weight W ≤ 2OPT, and a (2 − 1/Δ) tightness example is also presented there. We prove here that the approximation ratio of this algorithm matches exactly its lower bound on the given tightness example. In fact, the solution S is, by its construction, a nice one. Using the bound on s from Proposition 1, the bound on W can be slightly improved, as in the next lemma.

Lemma 1. The total weight of the solution obtained by Algorithm 1 is W ≤ 2∑_{i=1}^{Δ} wi∗ − w1∗ ≤ 2OPT − w1∗.

Proof. (Sketch) Let e be the first edge inserted into matching Mi, i.e., wi = w(e). Let Ei be the set of edges preceding e in the order of the algorithm plus edge e itself, let Gi be the graph induced by those edges, and let Δi be the maximum degree of Gi. The optimal solution for the MEC problem on the graph Gi contains i∗ ≥ Δi matchings, each one of weight at least wi, that is, wi ≤ w∗_{i∗}. By Proposition 1, the number of matchings constructed by Algorithm 1 for the graph Gi is i ≤ 2Δi − 1 ≤ 2i∗ − 1, that is, i∗ ≥ ⌈(i + 1)/2⌉. Hence, wi ≤ w∗_{i∗} ≤ w∗_{⌈(i+1)/2⌉}. Summing up the above bounds for all wi's, 1 ≤ i ≤ s ≤ 2Δ − 1, we obtain W ≤ ∑_{i=1}^{2Δ−1} w∗_{⌈(i+1)/2⌉} = w1∗ + 2(∑_{i=2}^{Δ} wi∗) = 2∑_{i=1}^{Δ} wi∗ − w1∗ ≤ 2OPT − w1∗. ⊓⊔

From the first inequality of Lemma 1 we have

W/OPT ≤ (2∑_{i=1}^{Δ} wi∗ − w1∗) / (∑_{i=1}^{s∗} wi∗) ≤ (2∑_{i=1}^{Δ} wi∗ − w1∗) / (∑_{i=1}^{Δ} wi∗) = 2 − w1∗/(∑_{i=1}^{Δ} wi∗) ≤ 2 − w1∗/(Δ·w1∗) = 2 − 1/Δ,

and hence the approximation ratio of Algorithm 1 is 2 − 1/Δ.

It is well known that the chromatic index of any graph is either Δ or Δ + 1 [21], but deciding between these two values is NP-hard even for cubic graphs [12]. On the other hand, the chromatic index of a bipartite graph is Δ [14]. As in the following we deal only with edge colorings of graphs, the terms k-coloring and k-colorable graph always refer to an edge coloring. It is well known that a (Δ + 1)-coloring of a general graph, or a Δ-coloring of a bipartite graph, can be found in polynomial time.

Obviously, such an edge coloring algorithm applied to a weighted graph leads to a solution for the MEC problem that is feasible but not necessarily optimal. If, in addition, the edge weights in an instance of the MEC problem are very close to each other, then such an algorithm will obtain a solution very close to optimal. In fact, this is the case for the tightness example presented in [13] for Algorithm 1. Thus, a natural idea is to combine such an edge coloring algorithm with Algorithm 1, as follows (for the case of bipartite graphs).

Algorithm 2
1. Run Algorithm 1;
2. Find a solution by a Δ-coloring of the input graph;
3. Select the best solution found;
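A sketch of Algorithm 2, reusing the `greedy_mec` sketch above; the Δ-edge-coloring of the bipartite input is assumed to be provided by a helper (`delta_edge_coloring`, a hypothetical name), which exists in polynomial time by König's theorem as noted above.

```python
def algorithm2(edges, weight, delta_edge_coloring):
    """Sketch of Algorithm 2: keep the cheaper of the greedy solution and
    the solution read off a proper Delta-edge-coloring of the bipartite
    input.  `delta_edge_coloring(edges)` is assumed to return a list of
    matchings (one per color)."""
    candidates = [greedy_mec(edges, weight),       # Line 1: Algorithm 1
                  delta_edge_coloring(edges)]      # Line 2: Delta-coloring
    mec_cost = lambda sol: sum(max(weight[e] for e in m) for m in sol)
    return min(candidates, key=mec_cost)           # Line 3: best of the two
```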

Theorem 1. Algorithm 2 is a tight (2 − 2/(Δ + 1))-approximation one for the MEC problem on bipartite graphs.

Proof. (Sketch) By Lemma 1, the solution computed in Line 1 of the algorithm has weight W ≤ 2OPT − w1∗.


Fig. 1. (a) An instance of the MEC problem where Δ = 3 and C ≫ ǫ. (b) An optimal solution of weight 2C + ǫ. (c) The solution built by Algorithm 1, of weight 3C. (d) A solution obtained by a Δ-coloring, of weight 3C.

The solution built in Line 2 consists of Δ matchings, each one of weight at most w1∗ = w(e1), and is therefore of total weight W ≤ Δ·w1∗. Multiplying the second inequality by 1/Δ and adding the two, we obtain (1 + 1/Δ)·W ≤ 2OPT, that is, W ≤ (2Δ/(Δ + 1))·OPT = (2 − 2/(Δ + 1))·OPT.

For the tightness of this ratio, consider the instance of the MEC problem shown in Figure 1, where an optimal solution as well as the two solutions computed by the algorithm are also shown. The ratio achieved by the algorithm for this instance is 3C/(2C + ǫ) ≃ 3/2 = 2 − 2/(Δ + 1).

Note that 2 − 2/(Δ + 1) < 2 − 1/Δ for any Δ ≥ 2, and thus Algorithm 2 outperforms Algorithm 1. More interestingly, Algorithm 2 outperforms the algorithm proposed in [8] for bipartite graphs of any Δ ≥ 4, as well as the algorithm proposed in [16] for bipartite graphs of any Δ ≥ 9.

Algorithm 2 can also be extended to general graphs by creating in Line 2 a (Δ + 1)-coloring of the input graph. The approximation ratio achieved in this case becomes 2 − 2/(Δ + 2), which is better than 2 − 1/Δ for any Δ ≥ 3. By modifying the counterexample for bipartite graphs, we can prove that this ratio is also tight. Thus, the next theorem follows.

Theorem 2. Algorithm 2 achieves a tight (2 − 2/(Δ + 2))-approximation ratio for general graphs.

3 Bipartite Graphs

A general idea for obtaining an approximation algorithm for the MEC problem with ratio less than two is to produce more than one solution for the problem and to choose the best of them. Algorithm 2 above produces two solutions, while for the case of bipartite graphs with Δ = 3, three solutions were enough to derive a 7/6 ratio [5]. The algorithms proposed in [8] and [16] are generalizations of this idea, producing Δ different solutions. In this section we present a new algorithm for the MEC problem on bipartite graphs. It also produces Δ different solutions and chooses the best of them; it beats the best known ratios for bipartite graphs for any Δ ≥ 9, and it is the first one of this kind yielding approximation ratios that tend asymptotically to 2 as Δ increases.


In our algorithm we repeatedly split the given bipartite graph G, of maximum degree Δ, first into two and then into three edge-induced subgraphs. To describe this partition as well as our algorithm, let us introduce some additional notation. Recall that we consider the edges of G sorted in non-increasing order of their weights, i.e., w(e1) ≥ w(e2) ≥ . . . ≥ w(em). For this order of edges we denote by Gj,k, j ≤ k, the subgraph of G induced by the edges ej, ej+1, . . . , ek, and by Δj,k the maximum degree of Gj,k. By convention, we define Gj+1,j to be an empty graph. We denote by jq the maximum index such that Δ1,jq = q. It is clear that j1 < j2 < . . . < jΔ = m.

In general, for each j = 1, 2, . . . , j2 our algorithm examines a partition of the graph G into two edge-induced subgraphs: the graph G1,j, of Δ1,j ≤ 2, induced by the j heaviest edges of G, and the graph Gj+1,m, induced by the m − j lightest edges of G. For each of these partitions the algorithm computes a solution to the MEC problem on graph G. Moreover, for each pair (j, k), j = 1, 2, . . . , j2, k = j + 1, . . . , m, of indices, our algorithm examines a partition of the graph G into three edge-induced subgraphs: the graph G1,j, of Δ1,j ≤ 2, induced by the j heaviest edges of G, the graph Gj+1,k, induced by the next k − j edges of G, and the graph Gk+1,m, induced by the m − k lightest edges of G. We call such a partition of G a partition (j, k). For each of these partitions the algorithm checks for the existence of a certain set of edges in the graph Gj+1,k and, if it exists, computes a solution to the MEC problem on graph G. The algorithm computes one more solution by finding a Δ-coloring of the original graph G, and returns the best among all the solutions found.

Algorithm 3
1. Find a solution S^0_{1,m} by a Δ-coloring of G;
2. For j = 1, 2, . . . , j2 do
3.    Find an optimal solution S^1_{1,j} for G1,j;
4.    Find a solution S^1_{j+1,m} by a Δ-coloring of Gj+1,m;
5.    Concatenate S^1_{1,j} and S^1_{j+1,m};
6.    For k = j + 1 to m do
7.       Find an optimal solution S^2_{1,j} for G1,j;
8.       If there is a set of edges E′ in Gj+1,k saturating every vertex of Gj+1,k with degree Δ1,k such that E′ fits in S^2_{1,j} then
9.          Find a solution S^2_{j+1,k} by a (Δ1,k − 1)-coloring of Gj+1,k − E′;
10.         Find a solution S^2_{k+1,m} by a Δ-coloring of Gk+1,m;
11.         Concatenate S^2_{1,j}, S^2_{j+1,k} and S^2_{k+1,m};
12. Return the best solution found in Lines 1, 5 and 11;

The following lemma shows that the check in Line 8 of Algorithm 3 can be done in polynomial time.

Lemma 2. It is polynomial to determine whether there exists a set of edges E′ in Gj+1,k saturating all vertices of degree Δ1,k in Gj+1,k that fits the solution S^2_{1,j}.


Proof. (Sketch) For a partition (j, k) of G, let d1,j(u) and dj+1,k(u) be the degrees of vertex u in the subgraphs G1,j and Gj+1,k, respectively. Consider the subgraph H of Gj+1,k induced by its vertices of degree d1,j(u) ≤ Δ1,j − 1. Note that, by construction, each edge in H fits in a matching of the solution S^2_{1,j}. Let A be the subset of vertices of H of degree dj+1,k(u) = Δ1,k, i.e., the set of vertices which we want to saturate, and let B be the subset of vertices in A of degree dH(u) = 1. For each vertex u ∈ B we can clearly insert its single edge (u, v) into E′. Let H′ be the subgraph of H induced by its vertices except those in B, and let A′ ⊆ A be the subset of vertices of A that are not saturated by the edges already in E′. It is now enough to find a matching in H′ that saturates each vertex in A′; adding the edges of this matching to E′, we get a set that saturates each vertex in A.

Determining whether such a matching exists can be done in polynomial time as follows. Consider the graph Q = (X, F) constructed by adding to H′ an additional vertex, if the number of vertices in H′ is odd, and all the missing edges between the vertices of X − A′ (i.e., the vertices X − A′ induce a clique in Q). If there exists a perfect matching in Q, then there exists a matching in H′ saturating all vertices in A′, since no edges adjacent to A′ have been added in Q. Conversely, if there exists a matching M in H′ saturating all vertices in A′, then there exists a perfect matching in Q, consisting of the edges of M plus the edges of a perfect matching in the complete subgraph of Q induced by its vertices that are not saturated by M. Therefore, in order to determine whether there exists such a matching in H′, it is enough to check whether there exists a perfect matching in Q. It is well known that this can be done in polynomial time (see, for example, [17]).
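The gadget in this proof is easy to express with an off-the-shelf matching routine; the sketch below is ours, using networkx and its maximum-cardinality matching to test Q for a perfect matching.

```python
import networkx as nx

def has_matching_saturating(Hp, Ap):
    """Sketch of the gadget in the proof of Lemma 2: Hp is the graph H'
    (an nx.Graph) and Ap the vertex subset A'; decide whether H' has a
    matching saturating every vertex of A'.  Build Q by padding H' to an
    even number of vertices and completing X - A' into a clique, then
    test Q for a perfect matching."""
    Q = Hp.copy()
    if Q.number_of_nodes() % 2 == 1:
        Q.add_node("_pad")                       # extra vertex if |X| is odd
    outside = [x for x in Q.nodes if x not in Ap]
    for i, x in enumerate(outside):              # clique on X - A'
        for y in outside[i + 1:]:
            Q.add_edge(x, y)
    mate = nx.max_weight_matching(Q, maxcardinality=True)
    return 2 * len(mate) == Q.number_of_nodes()  # perfect matching in Q?
```

With unit weights, `max_weight_matching` with `maxcardinality=True` is simply a maximum-cardinality matching, so the final test asks exactly whether Q has a perfect matching.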

Theorem 3. Algorithm 3 is a (2Δ³/(Δ³ + Δ² + Δ − 1))-approximation one for the MEC problem on bipartite graphs.

Proof. (Sketch) The solution obtained by a Δ-coloring of the input graph, computed in Line 1 of the algorithm, is of weight W ≤ S^0_{1,m} ≤ Δ·w1∗, since w1∗ is the weight of the heaviest edge of the graph.

In Lines 3–5, consider the solutions obtained in the iterations where w(ej+1) = wz∗, for z = 2, 3. In both cases it holds that Δ1,j ≤ 2. An optimal solution is computed for G1,j, of weight S^1_{1,j} ≤ ∑_{i=1}^{z−1} wi∗, since the edges of G1,j are a subset of the edges that appear in the z − 1 heaviest matchings of the optimal solution. Moreover, a Δ-coloring is built for Gj+1,m, of weight S^1_{j+1,m} ≤ Δ·wz∗, since ej+1 is the heaviest edge of this subgraph. Therefore, W ≤ ∑_{i=1}^{z−1} wi∗ + Δ·wz∗, for z = 2, 3.

In Lines 7–11, consider the solutions obtained in the iterations (j, k) where w(ej+1) = w3∗ and w(ek+1) = wz∗, for 4 ≤ z ≤ Δ. In these iterations the set of edges E′ exists, since in the optimal solution the edges of G1,k belong to at most Δ1,k ≤ z − 1 matchings. The edges of E′ are lighter than the edges of G1,j, and thus it is possible to add them to S^2_{1,j} without increasing its weight. Thus, using the same arguments as for the weight of S^1_{1,j}, it holds that S^2_{1,j} ≤ w1∗ + w2∗. The heaviest edges in Gj+1,k − E′ and Gk+1,m are of weight w3∗ and wz∗, respectively. Hence, we have that S^2_{j+1,k} ≤ (Δ1,k − 1)·w3∗ ≤ (z − 2)·w3∗ and S^2_{k+1,m} ≤ Δ·wz∗. Therefore, W ≤ w1∗ + w2∗ + (z − 2)·w3∗ + Δ·wz∗, for 4 ≤ z ≤ Δ.


In this way we have Δ different bounds on W. Multiplying each one of these inequalities by an appropriate factor and adding them, we get W/OPT ≤ 2Δ³/(Δ³ + Δ² + Δ − 1).

The complexity of Algorithm 3 is dominated by the check in Line 8, which by Lemma 2 can be done in polynomial time. This check runs for (|E| choose 2) = O(|E|²) different combinations of weights. The approximation ratios achieved by Algorithm 3, as Δ increases, are given in the 4th column of Table 1.

4 Trees

The complexity of the MEC problem on trees still remains open, while an exact algorithm of complexity O(|E|^{2Δ+O(1)}) is known [16]. In this section we present a generic algorithm which, for a given number k, searches exhaustively for the weights of k matchings of an optimal solution. The complexity of our algorithm is O(|E|^{k+O(1)}), and within this time it produces an optimal solution if k = 2Δ − 1, an e/(e − 1)-approximate solution if k = Δ, and a ρ-approximate solution, with ρ < 2, if 2 ≤ k < Δ.

Our algorithm is based on the fact that the following List Edge-Coloring problem can be solved in polynomial time on trees [6], while it is NP-complete for bipartite graphs even for Δ = 3 [15].

List Edge-Coloring
Instance: A graph G = (V, E), a set of colors C = {C1, C2, . . . , Ck} and, for each e ∈ E, a list of authorized colors L(e).
Question: Is there a feasible edge coloring of G, that is, a coloring such that each edge e is assigned a color from its list L(e) and adjacent edges are assigned different colors?

The first part of our algorithm searches exhaustively for the weights of the z, 1 ≤ z ≤ k − 1, heaviest matchings of the optimal solution, w1∗ ≥ w2∗ ≥ . . . ≥ wz∗. Then, for each z, the graph is partitioned into two subgraphs, induced by the edges of weights w(e) > wz∗ and w(e) ≤ wz∗, respectively. By a transformation to the List Edge-Coloring problem we obtain a solution for the whole tree consisting of an optimal solution for the first subgraph and a Δ-coloring solution for the second one.

In the second part, the algorithm searches exhaustively for the weight wz∗ of the z-th, k ≤ z ≤ Δ, matching of the optimal solution. Then, the graph G is partitioned into three subgraphs, induced by the edges of weights w(e) > w∗_{k−1}, w∗_{k−1} ≥ w(e) > wz∗, and wz∗ ≥ w(e), respectively. By a transformation to the List Edge-Coloring problem we obtain a solution for the whole tree consisting of an optimal solution for the first subgraph, a (z − k + 1)-coloring solution for the second, and a Δ-coloring for the third one.
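As an illustration, the instance built in the first part (for one exhaustive guess of the z heaviest matching weights) could be assembled as follows. This is our own sketch; `tree_list_edge_coloring` would stand for the assumed polynomial-time list edge-coloring solver for trees [6].

```python
def list_coloring_instance(edges, weight, guesses, delta):
    """Build colors and per-edge lists for one guess w*_1 >= ... >= w*_z.
    An edge heavier than w*_z may only take a color C_i, i < z, whose
    guessed class weight can pay for it; every other edge may take any
    of the z + delta - 1 colors."""
    z = len(guesses)
    all_colors = list(range(1, z + delta))        # C_1 .. C_{z+delta-1}
    lists = {}
    for e in edges:
        if weight[e] > guesses[-1]:               # w(e) > w*_z: heavy edge
            lists[e] = [i for i in range(1, z) if weight[e] <= guesses[i - 1]]
        else:
            lists[e] = all_colors
    return all_colors, lists   # feed to tree_list_edge_coloring(...)
```

The guesses themselves would be enumerated over non-increasing tuples of z edge weights, which is where the O(|E|^{k−1}) factor in the running time below comes from.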


Algorithm 4
1. Exhaustively search for the weights of the k − 1 heaviest matchings of the optimal solution, w1∗ ≥ w2∗ ≥ . . . ≥ w∗_{k−1};
2. For z = 1, 2, . . . , k − 1
3.    Build the input for the List Edge-Coloring algorithm:
      - Set of colors {C1, C2, . . . , Cz, . . . , C_{z+Δ−1}};
      - If w(e) > wz∗ then L(e) = {Ci : w(e) ≤ wi∗, 1 ≤ i ≤ z − 1};
      - If w(e) ≤ wz∗ then L(e) = {C1, C2, . . . , C_{z+Δ−1}};
4.    Run the algorithm for the List Edge-Coloring problem;
5. For z = k, k + 1, . . . , Δ
6.    Exhaustively search for the weight of the z-th matching of the optimal solution, wz∗;
7.    Build the input for the List Edge-Coloring algorithm:
      - Set of colors {C1, C2, . . . , C_{k−1}, . . . , Cz, . . . , C_{z+Δ−1}};
      - If w(e) > w∗_{k−1} then L(e) = {Ci : w(e) ≤ wi∗, 1 ≤ i ≤ k − 2};
      - If wz∗ < w(e) ≤ w∗_{k−1} then L(e) = {C1, C2, . . . , C_{z−1}};
      - If w(e) ≤ wz∗ then L(e) = {C1, C2, . . . , C_{z+Δ−1}};
8.    Run the algorithm for the List Edge-Coloring problem;
9. Return the best solution found;

Lemma 3. Algorithm 4 computes a solution for the MEC problem of weight
W ≤ w1∗ + w2∗ + . . . + w∗_{z−1} + Δ·wz∗, if 1 ≤ z ≤ k − 1, and
W ≤ w1∗ + w2∗ + . . . + w∗_{k−2} + (z − k + 1)·w∗_{k−1} + Δ·wz∗, if k ≤ z ≤ Δ.

Proof. (Sketch) For the first part of the bound, consider the solution computed at the z-th iteration of Line 4 of the algorithm. By the construction of the instance of the List Edge-Coloring problem in Line 3, its solution is also a solution for the MEC problem with z + Δ − 1 matchings: matchings of weights w1∗, w2∗, . . . , w∗_{z−1} plus Δ matchings of weight wz∗. Observe that, for each z, 1 ≤ z ≤ k − 1, the List Edge-Coloring algorithm always finds a feasible solution, because (i) the optimal solution for the MEC problem contains a feasible coloring for the edges of weight greater than wz∗, and (ii) there exists a Δ-coloring for the remaining edges, since the graph is a tree. Therefore, in the z-th iteration of Line 4, the algorithm returns a solution of weight W ≤ w1∗ + w2∗ + . . . + w∗_{z−1} + Δ·wz∗. For the second part of the bound, similar arguments apply to the z-th iteration of Line 8 of the algorithm.

The complexity of Algorithm 4 is exponential in k. In Line 1, the exhaustive search for the weights of the k − 1 heaviest matchings of the optimal solution examines (|E| choose (k − 1)) = O(|E|^{k−1}) combinations of weights. Furthermore, in Line 6, the weight of one more matching is exhaustively chosen in O(|E|) time. For each one of these combinations, an algorithm of complexity O(|E|·Δ^{3.5}) for the List Edge-Coloring problem is called. Thus, the complexity of Algorithm 4 is O(|E|^{k+1}·Δ^{3.5}), that is, O(|E|^{k+O(1)}), since Δ is O(|E|).

Theorem 4. Algorithm 4 achieves an e/(e − 1) ≃ 1.582 approximation ratio for the MEC problem within O(|E|^{Δ+O(1)}) time.


Proof. (Sketch) For k = Δ, the second part of the algorithm (Lines 5–8) runs exactly once, for z = Δ. Thus, Δ − 1 inequalities are obtained from the first part of Lemma 3 and one inequality from its second part. Multiplying the z-th inequality by Δ^{z−1}·(Δ − 1)^{Δ−z}, 1 ≤ z ≤ Δ, and adding them, we get

W/OPT ≤ Δ^Δ / (∑_{k=1}^{Δ} Δ^{k−1}·(Δ − 1)^{Δ−k}) = Δ^Δ / (((Δ − 1)^Δ/Δ) · ∑_{k=1}^{Δ} (Δ/(Δ − 1))^k).

Using the formulae ∑_{k=1}^{Δ} x^k = (x^{Δ+1} − x)/(x − 1) and e ≃ (x/(x − 1))^{x−1}, it follows that

W/OPT ≤ e·(Δ/(Δ − 1)) / (e·(Δ/(Δ − 1)) − 1),

which tends to e/(e − 1) ≃ 1.582 as Δ increases. ⊓⊔


Δ · ⌊ |V2 | ⌋. It is easy to see that an overfull graph is (Δ + 1) − colorable.

On the Maximum Edge Coloring Problem

291

MEC problem on bipartite graphs with edge weights w(e) ∈ {1, t} is polynomially solvable [7]. In what follows, we present an approximation algorithm for general graphs with two different edge weights. Assume that the edges of the graph Kn = (V, E) have weights either 1 or t, where t ≥ 2. Let G1 = (V, E1 ), of maximum degree ∆1 , and Gt = (V, Et ), of maximum degree ∆t , be the graphs induced by the edges of Kn with weights 1 and t, respectively. Algorithm 5 1. Find a solution 2. Find a solution a solution by a 3. Return the best

by a (∆ + 1)-coloring of Kn ; by a (∆1 + 1)-coloring of G1 , (∆t + 1)-coloring of Gt and concatenate them; of the two solutions found;

Theorem 6. Algorithm 5 achieves an asymptotic 43 -approximation ratio for the MEC problem on general graphs of arbitrarily large ∆ and edge weights w(e) ∈ {1, t}. Proof. (Sketch) An optimal solution contains at least ∆(Kn ) = n − 1 matchings and at least ∆t of them are of weight equal to t. Therefore, a lower bound to the total weight of an optimal solution is OP T ≥ ∆t · t + (∆ − ∆t ). By Vizing’s theorem any graph has a (∆ + 1)-coloring. Using such colorings the algorithm computes in Line 1 a solution of total weight W ≤ (∆+1)·t, and in Line 2 a solution of total weight W ≤ (∆t +1)·t+(∆1 +1)·1 ≤ (∆t +1)·t+(∆+1). Δ2 +2Δ −Δ t Multiplying the first inequality with t(Δ+1)t 2 , the second one with Δ−Δ Δ+1 and adding them, we get

Δ2 +Δ2t −Δ·Δt +Δt (Δ+1)2

· W ≤ ∆t · t + (∆ − ∆t ) ≤ OP T , that

2 is ≤ (Δ−Δt(Δ+1) )2 +Δt (Δ+1) . This ratio is 4(Δ+1) 4Δ+4 4 W therefore OP T ≤ (Δ+1)+2(Δ−1) = 3Δ−1 = 3

W OP T

maximized when ∆t = +

Δ−1 2 ,

and

16 9Δ−3 .

References 1. Afrati, F.N., Aslanidis, T., Bampis, E., Milis, I.: Scheduling in switching networks with set-up delays. Journal of Combinatorial Optimization 9, 49–57 (2005) 2. Brucker, P., Gladky, A., Hoogeveen, H., Koyalyov, M., Potts, C., Tautenham, T., van de Velde, S.: Scheduling a batching machine. Journal of Scheduling 1, 31–54 (1998) 3. Chetwynd, A.G., Hilton, A.J.W.: Regular graphs of high degree are 1-factorizable. In: Proceedings of the London Mathematical Society, vol. 50, pp. 193–206 (1985) 4. Crescenzi, P., Deng, X., Papadimitriou, C.H.: On approximating a scheduling problem. Journal of Combinatorial Optimization 5, 287–297 (2001) 5. de Werra, D., Demange, M., Escoffier, B., Monnot, J., Paschos, V.T.: Weighted coloring on planar, bipartite and split graphs: Complexity and improved approximation. In: Fleischer, R., Trippen, G. (eds.) ISAAC 2004. LNCS, vol. 3341, pp. 896–907. Springer, Heidelberg (2004) 6. de Werra, D., Hoffman, A.J., Mahadev, N.V.R., Peled, U.N.: Restrictions and preassignments in preemptive open shop scheduling. Discrete Applied Mathematics 68, 169–188 (1996)

292

G. Lucarelli, I. Milis, and V.Th. Paschos

7. Demange, M., de Werra, D., Monnot, J., Paschos, V.T.: Weighted node coloring: When stable sets are expensive. In: Kuˇcera, L. (ed.) WG 2002. LNCS, vol. 2573, pp. 114–125. Springer, Heidelberg (2002) 8. Escoffier, B., Monnot, J., Paschos, V.T.: Weighted coloring: further complexity and approximability results. Information Processing Letters 97, 98–103 (2006) 9. Finke, G., Jost, V., Queyranne, M., Seb˝ o, A.: Batch processing with interval graph compatibilities between tasks. Technical report, Cahiers du laboratoire Leibniz (2004), http://www-leibniz.imag.fr/NEWLEIBNIZ/LesCahiers/index.xhtml 10. Fiorini, S., Wilson, R.J.: Edge-Colourings of Graphs. Pitman, London (1977) 11. Gopal, I.S., Wong, C.: Minimizing the number of switchings in a SS/TDMA system. IEEE Transactions On Communications 33, 497–501 (1985) 12. Holyer, I.: The NP-completeness of edge-coloring. SIAM Journal on Computing 10, 718–720 (1981) 13. Kesselman, A., Kogan, K.: Nonpreemptive scheduling of optical switches. IEEE Transactions on Communications 55, 1212–1219 (2007) ¨ 14. K¨ onig, D.: Uber graphen und ihre anwendung auf determinantentheorie und mengenlehre. Mathematische Annalen 77, 453–465 (1916) 15. Kubale, M.: Some results concerning the complexity of restricted colorings of graphs. Discrete Applied Mathematics 36, 35–46 (1992) 16. Lucarelli, G., Milis, I., Paschos, V.T.: On a generalized graph coloring/batch scheduling problem. In: 3rd Multidisciplinary International Conference on Scheduling: Theory and Applications (MISTA), pp. 353–360 (2007)  17. Micali, S., Vazirani, V.V.: An O( |V ||E|) algorithm for finding maximum matching in general graphs. In: 21st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 17–27 (1980) 18. Pemmaraju, S.V., Raman, R.: Approximation algorithms for the max-coloring problem. In: 32nd International Colloquium on Automata, Languages and Programming (ICALP), pp. 1064–1075 (2005) 19. Pemmaraju, S.V., Raman, R., Varadarajan, K.R.: Buffer minimization using maxcoloring. In: 15th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 562–571 (2004) 20. Rendl, F.: On the complexity of decomposing matrices arising in satellite communication. Operations Research Letters 4, 5–8 (1985) 21. Vizing, V.G.: On an estimate of the chromatic class of a p-graph. Diskret. Analiz. 3, 25–30 (1964)

Author Index

Amini, Omid

29

Bar-Noy, Amotz 147 Bienkowski, Marcin 92 Bil` o, Davide 201 Bir´ o, P´eter 15 Chan, Joseph Wun-Tat 241 Chin, Francis Y.L. 241 Chrobak, Marek 92, 253 Ebenlendr, Tom´ aˇs 43 Epstein, Leah 188 Fiat, Amos 188 Fung, Stanley P.Y.

53

Gairing, Martin 119 Gotthilf, Zvi 267 Gourv`es, Laurent 78 Gray, Chris 214 Harks, Tobias 133 Hong, Xiangyu 241 Jawor, Wojciech 253 Je˙z, L  ukasz 92 Johnson, Matthew P. 147 K¨ onemann, Jochen 1 Kranakis, Evangelos 227 Krumke, Sven O. 105 Kulik, Ariel 160 Levy, Meital 188 Lewenstein, Moshe 267 Liu, Ou 147 L¨ offler, Maarten 214 Lucarelli, Giorgio 279

Manlove, David F. 15 Milis, Ioannis 279 Mittal, Shubham 15 Molle, Mart 253 Monnot, J´erˆ ome 78 Nagarajan, Chandrashekhar Parekh, Ojas 1 Paschos, Vangelis Th. 279 Pascual, Fanny 78 Peleg, David 29 P´erennes, St´ephane 29 Poon, Chung Keung 53 Pritchard, David 1 Rainshmidt, Elad

267

Sau, Ignasi 29 Saurabh, Saket 29 Sgall, Jiˇr´ı 43 Shachnai, Hadas 160 Sharma, Yogeshwer 174 Silveira, Rodrigo I. 214 Sitters, Ren´e A. 67 Thielen, Clemens Ting, Hing Fung

105 241

Widmayer, Peter 201 Wiese, Andreas 227 Williamson, David P. 174 Zheng, Feifeng 53 Zych, Anna 201

174