Sports Leagues Scheduling: Models, Combinatorial Properties, and Optimization Algorithms (Lecture Notes in Economics and Mathematical Systems, 603) 9783540755173, 3540755179


Lecture Notes in Economics and Mathematical Systems Founding Editors: M. Beckmann H.P. Künzi Managing Editors: Prof. Dr. G. Fandel Fachbereich Wirtschaftswissenschaften Fernuniversität Hagen Feithstr. 140/AVZ II, 58084 Hagen, Germany Prof. Dr. W. Trockel Institut für Mathematische Wirtschaftsforschung (IMW) Universität Bielefeld Universitätsstr. 25, 33615 Bielefeld, Germany Editorial Board: A. Basile, A. Drexl, H. Dawid, K. Inderfurth, W. Kürsten

603

Dirk Briskorn

Sports Leagues Scheduling Models, Combinatorial Properties, and Optimization Algorithms

123

Dirk Briskorn Department of Production and Logistics University of Kiel Olshausenstrasse 40 24098 Kiel Germany [email protected]

ISBN 978-3-540-75517-3

e-ISBN 978-3-540-75518-0

DOI 10.1007/978-3-540-75518-0 Lecture Notes in Economics and Mathematical Systems ISSN 0075-8442 Library of Congress Control Number: 2007938899 © 2008 Springer-Verlag Berlin Heidelberg This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Production: LE-TEX Jelonek, Schmidt & Vöckler GbR, Leipzig Cover design: WMX Design GmbH, Heidelberg Printed on acid-free paper 987654321 springer.com

Basic research is like shooting an arrow into the air and, where it lands, painting a target. Homer Adkins

Preface

This book is the result of my research on sports leagues scheduling at the Christian-Albrechts-University of Kiel. This research has been done during my employment as a research associate at the Chair for Production and Logistics. The challenging research topic as well as the friendly environment made these years enjoyable and pleasant. First of all, I wish to express my gratitude to my thesis advisor, Professor Dr. Andreas Drexl. He has established a very inspiring and motivating research environment, and he always took the time for discussions and helpful advice. Moreover, he refereed this work. Furthermore, I am very grateful to Professor Dr. Sönke Albers for co-refereeing this thesis. Additionally, I would like to thank my colleagues in Kiel for many helpful discussions, comments, and suggestions. I am especially grateful to Dr. Andrei Horbach, Marcel Büther, and Dr. Yury Nikulin. Stefan Wende was a big help regarding the technological background of my work. Ethel Fritz, Jens Heckmann, and Jürgen Lux were also always helpful with the most various things. Besides, I thank Professor Frits Spieksma, Juniorprofessorin Dr. Sigrid Knust, and Dr. Thomas Bartsch for stimulating discussions and advice. Finally, I would like to thank my family and, last but not least, Eva for supporting me and bearing with me.

Kiel, December 2007

Dirk Briskorn

Contents

1 Introduction ............................................. 1
  1.1 Motivation ........................................... 1
  1.2 Related Work ......................................... 2
  1.3 Basic Notation ....................................... 3
  1.4 Outline .............................................. 4

2 Basic Problems ........................................... 5
  2.1 Single Round Robin Tournament ........................ 6
  2.2 Double Round Robin Tournament ........................ 12
  2.3 r Round Robin Tournament ............................. 16
  2.4 Decomposition Schemes ................................ 18
      2.4.1 First–Schedule–Then–Break ...................... 19
      2.4.2 First–Break–Then–Schedule ...................... 22

3 Real World Problems ...................................... 29
  3.1 Externally Given Constraints ......................... 30
      3.1.1 Forbidden Matches .............................. 30
      3.1.2 Regions' Capacity .............................. 31
      3.1.3 Highly Attended Matches ........................ 32
  3.2 Fairness Constraints ................................. 33
      3.2.1 Breaks ......................................... 33
      3.2.2 Opponents' Strengths ........................... 34
      3.2.3 Teams' Preferences ............................. 40
  3.3 Computational Study .................................. 42
      3.3.1 Generating Problem Instances ................... 42
      3.3.2 Computational Results .......................... 43
  3.4 Summary .............................................. 57

4 Combinatorial Properties of Strength Groups .............. 59
  4.1 Factorizations ....................................... 60
      4.1.1 Ordered 1-Factorization of Kk,k ................ 60
      4.1.2 Ordered 1-Factorizations of Kk ................. 60
      4.1.3 Ordered Symmetric 2-Factorization of 2K2k+1 .... 64
  4.2 Group-Balanced Single Round Robin Tournaments ........ 67
  4.3 Group-Changing Single Round Robin Tournaments ........ 70
  4.4 Complexity ........................................... 75
  4.5 Summary .............................................. 77

5 Home-Away-Pattern Based Branching Schemes ................ 79
  5.1 Motivation ........................................... 79
  5.2 General Home-Away-Pattern Sets ....................... 80
      5.2.1 Achieving Feasible Home-Away-Pattern Sets ...... 81
      5.2.2 Choice of Branching Candidates ................. 83
      5.2.3 Node Order Strategy ............................ 85
  5.3 Minimum Number of Breaks ............................. 86
      5.3.1 Achieving Feasible Home-Away-Pattern Sets ...... 87
      5.3.2 Choice of Branching Candidates ................. 95
      5.3.3 Node Order Strategy ............................ 96
  5.4 Computational Results ................................ 98
      5.4.1 General Home-Away-Pattern Sets ................. 98
      5.4.2 Minimum Number of Breaks ....................... 99
  5.5 Summary .............................................. 101

6 Branch–and–Price Algorithm ............................... 103
  6.1 Motivation and Basic Idea ............................ 103
  6.2 Reformulation ........................................ 104
      6.2.1 Set Partitioning Master Problem ................ 104
      6.2.2 Matching Subproblem ............................ 108
  6.3 Branching Scheme ..................................... 111
      6.3.1 Branching Strategy ............................. 111
      6.3.2 Node Order Strategy ............................ 117
  6.4 Column Generation .................................... 119
      6.4.1 Pricing ........................................ 119
      6.4.2 Column Management .............................. 122
      6.4.3 Lower Bounds ................................... 125
  6.5 Upper Bounds ......................................... 133
  6.6 Computational Results ................................ 137
  6.7 Summary .............................................. 142

7 Conclusions and Outlook .................................. 145

References ................................................. 147
Index ...................................................... 155
List of Definitions ........................................ 157
List of Models ............................................. 159
List of Figures ............................................ 161
List of Tables ............................................. 163

1 Introduction

1.1 Motivation

Sports league scheduling is an essential activity arising in the context of organizing sports events. Sports events are of great importance as far as economic aspects are considered. Often, a sports club is a major employer and taxpayer. Thus, private persons as well as public agencies depend on particular sports events as well as on regular sports league seasons. In this research the focus is on regular sports seasons. A sports league schedule (SLS) determines the date and the venue of a match between two opponents. Scheduling must be done by a central agent since each club has specific interests affecting the others. Therefore, a sports league can be seen as a "supplier" of a season. The "customers" are fans watching matches in the stadium and TV channels broadcasting them either live or later as a summary. Furthermore, sports clubs make money by offering catering and by selling merchandise such as caps, shirts, scarves, and so on. While the latter highly depends on the current success of a specific club, the former can be supported by constructing appealing SLSs. Of course, along with attractiveness there are many attributes with respect to the schedules' structure, security aspects, resources, and infrastructure to be considered. There are a few standard construction schemes for SLSs, but all of them fail once real world constraints are taken into account. It is no surprise that schedules even for professional sports leagues are constructed manually due to the lack of adequate planning tools, as reported in Bartsch [5]. Recent research activities have led to several promising approaches, see for example Bartsch et al. [6] and Nemhauser and Trick [68]. However,


due to the tremendous number of possible SLSs and the sheer difficulty of finding even one, nearly all existing approaches inspect only a rather small part of the solution space. Hence, many potential SLSs are cut off. Another popular approach is to search for only one feasible solution, neglecting better ones. Thus, there is a great need for efficient scheduling approaches. In this field, tackling the whole solution space might be an especially challenging task. Additionally, heuristics finding good solutions in an acceptable amount of CPU time are of practical interest.

1.2 Related Work

There is a vast body of literature concerning sports league scheduling and related topics. In the following we present an overview. There are several common modelling ideas. For example, there is a popular analogy between SLSs and edge colorings of complete graphs. Consequently, many articles dealing with graph-based models can be found, see for example de Werra [19, 20, 21, 22, 23], de Werra et al. [24], and Drexl and Knust [30]. Brucker and Knust [13] and Drexl and Knust [30] deal with sports league scheduling problems formulated as multi-mode resource constrained project scheduling problems. Note that an edge coloring of a complete graph Kn with n − 1 colors is equivalent to a 1-factorization of the same graph, which in turn is closely related to a latin square of size n. A couple of papers study these analogies from a design theory point of view, e.g., Rosa and Wallis [71], Gelling and Odeh [43], Easton et al. [34], and Mendelsohn and Rosa [61]. Several articles concern particular integer programming (IP) formulations, e.g., Bartsch et al. [6], Della Croce and Oliveri [25], and Schreuder [76, 77].

In real world sports leagues, schedules with different structures appear. Probably the most popular form is the round robin tournament (RRT). In particular, RRTs attract attention as single RRTs (see Bartsch et al. [6], Trick [82], and Easton et al. [33]) and as double RRTs. Furthermore, research has been done concerning divisions, e.g., in de Werra [22], and multiple venues not related to the teams, e.g., in de Werra et al. [24]. Most researchers focus on constructing a feasible SLS. Nevertheless, there are approaches evaluating different SLSs and trying to find the one having the best evaluation. Among the most popular goals are minimizing the number of breaks, e.g., in Elf et al. [36], Fronček and Meszka [41], and Miyashiro and Matsui [63], minimizing travel costs, see Anagnostopoulos et al. [2] and Easton et al. [32], and minimizing carry-overs, see Russell [73]. Many different kinds of sports are considered, such as baseball in Russell and Leung [74], basketball in Nemhauser and Trick [68], ice hockey in Fleurent and Ferland [38], soccer in Bartsch et al. [6], de Werra [23] and Schreuder [76], and tennis in Della Croce et al. [26]. Extensive overviews of the literature on sports leagues scheduling in the context of operations research are provided by Knust [51] and Rasmussen and Trick [70]. Furthermore, literature can be found on competing strategies for teams, see Machol et al. [58], Gerchak [44] and Ladany and Machol [53], and on the motivation for tackling strategy decisions using operations research methods in Mottley [66] and Schutz [79]. Finally, we refer the reader to some articles concerning the economic importance of professional sports for cities or regions in order to emphasize the practical relevance of generating attractive SLSs. Cairns et al. [16], Jeanrenaud [48] as well as Leeds and von Allmen [55] provide extensive surveys taking into account, for example, demand via paid attendances and broadcasts. Furthermore, the relation between sports and economic development is outlined in Baade [3] and Burgan and Mules [14], for example.

1.3 Basic Notation

A sports league is composed of a set T of n teams, n even, competing against each other. A competition involves exactly one team i playing against exactly one other team j and is called a match in the remainder of this work. A match is carried out in exactly one period p out of the set P. A match takes place at one of the two opponents' stadiums. Therefore, we can identify a match by a triple (i, j, p). Here, p is the period in which teams i and j compete at i's home. As far as not stated otherwise all indices are "1-based". An SLS in general is a timetable determining the time and the venue where a specific match is carried out. Throughout this work, we consider leagues with an even number of teams obeying an RRT structure. The number of teams being even is no restriction of generality since we can add a dummy team if n is odd and, therefore, unconditionally obtain n being even. The matches of RRTs are grouped in such a fashion that each team plays exactly once per period. The collection of all matches carried out in a period is called a matchday (MD). Hence, an MD consists of n/2 matches. There are several different RRT structures, but they all have in common that each team i plays against each other team j exactly r times with r ∈ N>0. Then, each SLS contains r(n − 1) MDs and, consequently, |P| = r(n − 1).
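The notation above can be restated as a tiny data sketch. The following Python fragment (Python is used for illustration here and in later sketches; the book itself works with IP models and CPLEX) only repeats the definitions: a match is a triple, and an r-RRT with n teams has r(n − 1) matchdays of n/2 matches each.

```python
from typing import NamedTuple

class Match(NamedTuple):
    """A match (i, j, p): team `home` hosts team `away` in period p."""
    home: int
    away: int
    period: int

def league_dimensions(n: int, r: int) -> tuple:
    """Return (number of matchdays, matches per matchday) for an r-RRT with
    n teams; n is assumed to be even (add a dummy team otherwise)."""
    assert n % 2 == 0
    return r * (n - 1), n // 2

# Example: a double round robin tournament with 18 teams has
# 34 matchdays of 9 matches each.
print(league_dimensions(18, 2))   # -> (34, 9)
```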

1.4 Outline

The work is organized as follows. Chapter 2 focuses on basic problems in sports league scheduling. Different structures for RRTs are presented and corresponding optimization problems are introduced. Furthermore, we provide proofs of complexity. In chapter 3 real world requirements are examined. Moreover, we represent them by means of IP model formulations and provide a computational study. We give detailed insights into the consideration of strength groups from a combinatorial point of view in chapter 4. In chapter 5 branching schemes based on a well known decomposition scheme are developed. Furthermore, we provide computational results obtained when employing these branching schemes. Chapter 6 provides a highly flexible exact algorithm that can easily be adapted for solving all variations of the problem from chapter 3. Finally, chapter 7 gives conclusions and an outlook on future research. All computational results are obtained using a 3.8 GHz Pentium 4 machine with 3 GB of RAM. We employ Ilog Cplex 9.0 with default settings (if not mentioned otherwise) as the standard solver. Run times are given in seconds.

2 Basic Problems

The chapter at hand concerns basic problems solely derived from SLSs' structural requirements. To this end, we neglect most real world requests in order to focus on the complexity induced by the RRT structure itself. Furthermore, we inspect subproblems resulting from popular decomposition schemes. Correspondingly, we define cost minimization problems and give proofs or conjectures for their complexity, respectively. First, we present the well known planar three index assignment problem (PTIAP) which will serve to prove the scheduling problems' complexity.

Definition 2.1. Given are three sets A, B, C with |A| = |B| = |C| = m, m ∈ N, m even, as well as costs da,b,c for each triple (a, b, c) ∈ A × B × C. Feasible solutions to the PTIAP consist of m² triples such that each pair in (A × B) ∪ (A × C) ∪ (B × C) is contained exactly once. The PTIAP is to find a solution having the minimum sum of the chosen triples' costs.

We give here a formulation of PTIAP as an integer program according to, e.g., Spieksma [80] using m³ binary variables and 3m² constraints. In this formulation ya,b,c equals 1 if triple (a, b, c) is chosen, 0 otherwise. The objective function (2.1) represents the goal of cost minimization. Equations (2.2), (2.3), and (2.4) force each pair to be contained in exactly one chosen triple. In the following we give the decision version of PTIAP which will be referred to as PTIAP-DEC.


Model 2.1: PTIAP-IP

$$\min \sum_{a \in A} \sum_{b \in B} \sum_{c \in C} d_{a,b,c}\, y_{a,b,c} \qquad (2.1)$$

s.t.

$$\sum_{c \in C} y_{a,b,c} = 1 \qquad \forall\, a \in A,\ b \in B \qquad (2.2)$$

$$\sum_{b \in B} y_{a,b,c} = 1 \qquad \forall\, a \in A,\ c \in C \qquad (2.3)$$

$$\sum_{a \in A} y_{a,b,c} = 1 \qquad \forall\, b \in B,\ c \in C \qquad (2.4)$$

$$y_{a,b,c} \in \{0,1\} \qquad \forall\, a \in A,\ b \in B,\ c \in C \qquad (2.5)$$

Definition 2.2. PTIAP-DEC is defined by input and question:
Input: Three m-sets A, B, C, and a set D ⊆ A × B × C.
Question: Does there exist a subset of D containing m² triples such that each pair (a, b) ∈ (A × B), (a, c) ∈ (A × C), and (b, c) ∈ (B × C) is contained exactly once in those triples?

In Frieze [40] PTIAP-DEC is proven to be NP-complete, implying that PTIAP is NP-hard. The complexity of several sports league scheduling problems will be shown by reduction from PTIAP-DEC hereafter.
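As an illustration of definition 2.1, the following sketch checks whether a chosen set of triples is a feasible PTIAP solution. The function name and data layout are our own and not taken from the book.

```python
def is_feasible_ptiap(triples, A, B, C):
    """Definition 2.1: the chosen m^2 triples must cover every pair in
    A x B, A x C and B x C exactly once."""
    def covered_exactly_once(pairs, X, Y):
        return sorted(pairs) == sorted((x, y) for x in X for y in Y)

    return (len(triples) == len(A) * len(B)
            and covered_exactly_once([(a, b) for a, b, c in triples], A, B)
            and covered_exactly_once([(a, c) for a, b, c in triples], A, C)
            and covered_exactly_once([(b, c) for a, b, c in triples], B, C))

# A trivial feasible example with m = 2 and A = B = C = {0, 1}.
triples = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_feasible_ptiap(triples, [0, 1], [0, 1], [0, 1]))   # True
```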

2.1 Single Round Robin Tournament

Single RRTs obey the general structural requests outlined in section 1.3. Additionally, r is specified to be equal to 1, which means that each team meets each other team exactly once. The tournament contains n − 1 MDs; see table 2.1 for an instance with n = 6. Here i-j denotes team i playing at home against team j.

Table 2.1. Single RRT for n = 6

period    1    2    3    4    5
match 1  1-2  5-6  3-4  4-5  5-1
match 2  5-3  1-4  2-5  3-1  4-2
match 3  4-6  2-3  1-6  2-6  3-6

Next, we define the single RRT problem.


Definition 2.3. Given a set T, |T| = n, and a set P, |P| = n − 1, each triple (i, j, p) ∈ T × T × P, i ≠ j, represents a match of team i against team j at i's home in period p. Costs ci,j,p are given for each match. A feasible solution to the single RRT problem corresponds to a set of n(n−1)/2 triples such that (i) for each pair (i, j) ∈ T × T, i < j, exactly one triple of form (i, j, p) or (j, i, p) with p ∈ P is chosen and such that (ii) for each pair (i, p) ∈ T × P exactly one triple of form (i, j, p) or (j, i, p) with j ∈ T \ {i} is chosen. The problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Condition (i) implies that each pair of teams meets exactly once while condition (ii) ensures that each team plays exactly once per period, resulting in n − 1 periods. These requests as well as the goal of cost minimization can be represented as an IP model employing n(n − 1)² binary variables and 3n(n−1)/2 constraints.

Model 2.2: SRRTP-IP

$$\min \sum_{i \in T} \sum_{j \in T \setminus \{i\}} \sum_{p \in P} c_{i,j,p}\, x_{i,j,p} \qquad (2.6)$$

s.t.

$$\sum_{p \in P} (x_{i,j,p} + x_{j,i,p}) = 1 \qquad \forall\, i, j \in T,\ i < j \qquad (2.7)$$

$$\sum_{j \in T \setminus \{i\}} (x_{i,j,p} + x_{j,i,p}) = 1 \qquad \forall\, i \in T,\ p \in P \qquad (2.8)$$

$$x_{i,j,p} \in \{0,1\} \qquad \forall\, i, j \in T,\ i \neq j,\ p \in P \qquad (2.9)$$

Binary variable xi,j,p is equal to 1 if match (i, j, p) is carried out, and 0 otherwise. Constraints (2.7) and (2.8) correspond to (i) and (ii), respectively, while (2.6) represents the goal of cost minimization. Cost ci,j,p of a specific match can be seen in a rather abstract way here. For example SRRTP-IP can serve as subproblem of a sports scheduling problem taking into account the real world constraints neglected in SRRTP-IP. Then, ci,j,p might cover, among other components, dual variables. However, there are several further applications of SRRTP-IP having practical relevance:


• Teams usually have preferences for playing at home in certain periods, a fact which can easily be expressed through ci,j,p. Let pri,p ∈ R be team i's preference to play at home (pri,p > 0) or to play away (pri,p < 0), respectively, in period p. A preference pri,p is stronger than a preference pri′,p′ if |pri,p| > |pri′,p′|. Then, costs can be defined as ci,j,p = −pri,p + prj,p, for example. Here, cost ci,j,p represents the neglected preferences of i and j in p decreased by the fulfilled ones if (i, j, p) is carried out. Hence, the objective of SRRTP-IP is to maximize the difference between fulfilled and neglected preferences. (A small sketch following definition 2.4 below illustrates this cost definition together with model 2.2.)

• Since a major objective of the organizers of a tournament is to maximize attendance, we can represent the economic value of the estimated attendance by ci,j,p. Let estimated attendances eai,j,p be given for each match (i, j, p). Then, we can define the costs of SRRTP-IP as ci,j,p = −eai,j,p and obtain the objective to maximize the tournament's total attendance. Equivalently, ci,j,p can be defined as the number of seats remaining empty in the stadium of i if (i, j, p) is carried out.

• Often, a stadium is owned by some public agency and teams have to pay a fee for each match taking place in that particular stadium. This fee might depend on the season, the day of the week, and the time when the match takes place, as well as on competing events. We can represent it by ci,j,p and obtain the objective to minimize the sum of fees to be paid.

• A special case of SRRTP-IP arises when the costs are restricted to {0, 1}. Then ci,j,p = 1 denotes that team i cannot play team j at team i's home venue in period p, whereas ci,j,p = 0 denotes that this is possible. A reason for a match (i, j, p) being impossible might be restricted availability of team i's stadium. What we are interested in is to determine whether a feasible schedule, that is, a zero-cost schedule, exists or not.

The complexity of the single RRT problem has been independently stated in Briskorn et al. [12] and Easton [31]. The proof in Briskorn et al. [12] is reproduced below for the sake of completeness. In analogy to PTIAP-DEC we first define SRRTP-DEC as the decision version of the single RRT problem.

Definition 2.4. SRRTP-DEC is defined by input and question:
Input: An instance of the single RRT problem having ci,j,p ∈ {0, 1} for each i, j ∈ T, i ≠ j, p ∈ P.
Question: Does there exist a solution having cost equal to 0?
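To make model 2.2 concrete, here is a minimal sketch that builds SRRTP-IP with the open-source PuLP modeller and derives the costs from home preferences as in the first bullet above. The solver choice (PuLP/CBC rather than the CPLEX setup used in this book) and the function name are assumptions for illustration only.

```python
import pulp

def solve_srrtp(n, pref):
    """Single RRT problem (model 2.2) with costs c[i,j,p] = -pref[i][p] + pref[j][p],
    where pref[i][p] > 0 means team i prefers a home match in period p."""
    T, P = range(1, n + 1), range(1, n)
    idx = [(i, j, p) for i in T for j in T if i != j for p in P]
    x = pulp.LpVariable.dicts("x", idx, cat=pulp.LpBinary)

    prob = pulp.LpProblem("SRRTP_IP", pulp.LpMinimize)
    prob += pulp.lpSum((-pref[i][p] + pref[j][p]) * x[i, j, p]
                       for (i, j, p) in idx)                              # (2.6)
    for i in T:
        for j in T:
            if i < j:                                                     # (2.7): pair meets once
                prob += pulp.lpSum(x[i, j, p] + x[j, i, p] for p in P) == 1
    for i in T:
        for p in P:                                                       # (2.8): one match per period
            prob += pulp.lpSum(x[i, j, p] + x[j, i, p] for j in T if j != i) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j, p) for (i, j, p) in idx if pulp.value(x[i, j, p]) > 0.5]
```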


Theorem 2.1. The single RRT problem is NP-hard.

Proof. We prove theorem 2.1 by presenting a reduction from PTIAP-DEC to the single RRT problem. PTIAP-DEC is proven to be NP-complete in Frieze [40]. First, we reduce PTIAP-DEC to SRRTP-DEC. We assume, without loss of generality, that m is even. Given an instance of PTIAP-DEC, we now build the instance of SRRTP-DEC as follows. There are 2m teams, so we have |T| = n = 2m (and of course |P| = 2m − 1). Further, we set

$$c_{i,m+j,p} = \begin{cases} 0 & \forall\, (i,j,p) \in D,\\ 1 & \forall\, (i,j,p) \notin D, \end{cases}$$

and

$$c_{i,j,p} = \begin{cases}
1 & \text{if } i, j, p \in \{1, \dots, m\},\ i \neq j,\\
1 & \text{if } i, j \in \{m+1, \dots, 2m\},\ i \neq j,\ p \in \{1, \dots, m\},\\
1 & \text{if } i \in \{m+1, \dots, 2m\},\ j, p \in \{1, \dots, m\},\\
1 & \text{if } i \in \{1, \dots, m\},\ j \in \{m+1, \dots, 2m\},\ p \in \{m+1, \dots, 2m-1\},\\
0 & \text{if } i, j \in \{1, \dots, m\},\ i \neq j,\ p \in \{m+1, \dots, 2m-1\},\\
0 & \text{if } i, j \in \{m+1, \dots, 2m\},\ i \neq j,\ p \in \{m+1, \dots, 2m-1\},\\
1 & \text{if } i \in \{m+1, \dots, 2m\},\ j \in \{1, \dots, m\},\ p \in \{m+1, \dots, 2m-1\}.
\end{cases}$$

This completes the description of the instance of SRRTP-DEC. A yes-answer to the PTIAP-DEC instance corresponds to a yes-answer to SRRTP-DEC. First, the triples (a, b, c) which constitute the solution of PTIAP-DEC give rise to the following partial solution of SRRTP-DEC: team i = a plays team j = m + b in period p = c at team i's home venue. Since in this way we use only triples from D, we have ensured that each match between a team i with i ∈ {1, . . . , m} and a team j with j ∈ {m + 1, . . . , 2m} is scheduled with zero cost. Second, to schedule the remaining matches, let us first deal with the matches between teams i and j with i, j ∈ {1, . . . , m}, i ≠ j. Observe that we must assign these matches to periods m + 1, . . . , 2m − 1 in order to have a zero-cost solution. Assigning these matches to m − 1 periods can be seen as edge-coloring a complete graph (recall that an edge-coloring of a graph is a coloring of the edges such that adjacent edges have different colors). Indeed, the graph that results when there is a vertex for each of the first m teams, and an edge for each match to be played, is complete.


It is well known (see Mendelsohn and Rosa [61]) that, in case m is even – as we assumed – (m − 1) colors suffice to edge color Km. The resulting coloring gives us a feasible assignment of matches to periods (edges with the same color correspond to matches played in the same period). In this way each period receives m/2 matches, each with zero cost. By using the same procedure for teams i and j with i, j ∈ {m + 1, . . . , 2m}, i ≠ j, we find an assignment of the corresponding matches to periods m + 1, . . . , 2m − 1. Hence, we have found a feasible solution to the single RRT problem having total cost equal to zero and, therefore, the answer to SRRTP-DEC is yes. Next, if a zero-cost solution to the single RRT problem exists (which means a yes-answer to SRRTP-DEC), it is not difficult to show that PTIAP admits a zero-cost solution as well. Indeed, let us focus on the matches between teams i and j with i ∈ {1, . . . , m} and j ∈ {m + 1, . . . , 2m}. From the construction it is clear that the existence of a zero-cost solution implies that team j never plays at its home venue against team i since this costs 1. Hence, the assignment of matches of team i against team j to periods p, p = 1, . . . , m (which must exist since we assumed that a zero-cost solution to the single RRT problem exists), gives us the solution to PTIAP and, hence, the answer to PTIAP-DEC is yes. Secondly, we reduce SRRTP-DEC to the single RRT problem. This is straightforward, since we can give an answer to SRRTP-DEC by solving an instance of the single RRT problem having costs ci,j,p ∈ {0, 1} corresponding to the costs of SRRTP-DEC. This instance is a special case of the general single RRT problem. ⊓⊔

Easton [31] proves NP-completeness of a quite similar problem. The problem, namely SRRTP, is to complete a partial single RRT. Easton [31] shows NP-completeness even if all but three periods are scheduled and each team has no more than three unscheduled matches. This problem can easily be reduced to SRRTP-DEC. The idea is to represent a match (i, j, p) which is scheduled in the problem in Easton [31] by forbidding all matches (i′, j, p), i′ ≠ i, and (i, j′, p), j′ ≠ j, in SRRTP-DEC. Although PTIAP-DEC is not needed for proving NP-completeness of the single RRT problem, the close relation of both problems' structures is interesting. We make further use of it in section 2.4.

Although the single RRT problem is NP-hard, feasible solutions can be found in polynomial time. Well known construction schemes from graph theory have been adapted. An ordered 1-factorization of the complete graph Kn corresponds to a single RRT; see figure 2.1 for an example.


Fig. 2.1. 1-Factorization of K4

The graph's vertices represent teams while an edge e represents a match between the two teams e is incident to. Edges in the same 1-factor represent matches of the same MD. In figure 2.1 the edges sketched as dotted, dashed, and solid lines form one MD each. Construction schemes for 1-factorizations of complete graphs are known and can be employed in order to generate single RRTs, see Bartsch [5]. Furthermore, heuristics have been developed which randomly construct 1-factorizations within seconds, see Dinitz and Stinson [28] and Hilton and Johnson [46] for example. However, these techniques only cover a quite small part of the solution space and, hence, do not suffice to find optimal or even good solutions.

Another result from design theory gives an idea of the difficulties arising while tackling the solution space of single RRT problems. Two 1-factorizations f1 and f2 of Kn are said to be isomorphic if there is a mapping φ : V → V such that φ(f1) = f2, see Dinitz et al. [29] and Ihrig [47]. Obviously, φ(f1) has the same structure as f1. The number f^c_n of classes of non-isomorphic 1-factorizations of Kn, corresponding to single RRTs having different structures, is rarely known: f^c_2 = f^c_4 = f^c_6 = 1 and f^c_8 = 6. Gelling and Odeh [43] state f^c_10 = 396, and no less than 20 years later Dinitz et al. [29] report the number for K12 to be f^c_12 = 526,915,620. Today f^c_n is known neither in general nor for n > 12. The number of distinct 1-factorizations f^d_n is 252,282,619,805,368,320 for n = 12 and is estimated in Dinitz et al. [29] to rise to 1.52 × 10^63 for n = 18. Further results can be found in Lindner [56]. So, the number of feasible solutions of the single RRT problem is very large and seems to be an excellent example of combinatorial explosion. For problem sizes larger than n = 12 we do not even know the number of solutions among which we are searching for the best one.
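The canonical construction scheme mentioned above (the "circle" or polygon method) is easy to state in code. The sketch below is one standard variant with a simple alternating home/away assignment; it is an illustration, not the specific scheme of Bartsch [5] or Schreuder [76].

```python
def circle_method_srrt(n):
    """Return a single RRT for n teams (n even) as a list of matchdays,
    each a list of (home, away) pairs, via the classic circle method."""
    assert n % 2 == 0
    rotating = list(range(1, n))            # teams 1..n-1 rotate, team n is fixed
    matchdays = []
    for p in range(n - 1):
        day = [(rotating[0], n) if p % 2 == 0 else (n, rotating[0])]
        for k in range(1, n // 2):
            i, j = rotating[k], rotating[-k]
            # crude venue alternation; each rotation pairs every two teams once overall
            day.append((i, j) if (k + p) % 2 == 0 else (j, i))
        matchdays.append(day)
        rotating = rotating[-1:] + rotating[:-1]   # rotate by one position
    return matchdays

for p, md in enumerate(circle_method_srrt(6), start=1):
    print(p, md)
```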


2.2 Double Round Robin Tournament

Double RRTs are special cases of RRTs and, therefore, have structures according to the requirements presented in section 1.3. Here, r is set to 2, which means that each team meets each other team exactly twice. The two matches are to be carried out at different venues. Consequently, the tournament consists of 2(n − 1) MDs. There are different kinds of double RRTs. We distinguish between mirrored double RRTs, double RRTs based on rounds, and (general) double RRTs below. A mirrored double RRT is a double RRT hosting a match between teams i and j, i, j ∈ T, i ≠ j, in period p, p ∈ P, at i's home if and only if a match between teams i and j at j's home takes place in period ((p + n − 1) mod (2n − 2)), that is, the tournament is divided into two single RRTs being complementary to each other; see table 2.2 for an example with n = 6.

Table 2.2. Mirrored double RRT for n = 6

period   1.1  1.2  1.3  1.4  1.5  2.1  2.2  2.3  2.4  2.5
match 1  1-2  5-6  3-4  4-5  5-1  2-1  6-5  4-3  5-4  1-5
match 2  5-3  1-4  2-5  3-1  4-2  3-5  4-1  5-2  1-3  2-4
match 3  4-6  2-3  1-6  2-6  3-6  6-4  3-2  6-1  6-2  6-3

Most real world sports leagues are scheduled using the mirrored double RRT scheme. Because of its equivalence to single RRTs we will focus on these in chapter 3 where real world requirements are considered. Similar to the concept in section 2.1 we define the mirrored double RRT problem.

Definition 2.5. Given a set T, |T| = n′, two sets P1 and P2, |P1| = |P2| = n′ − 1, and costs ci,j,p for each (i, j, p) ∈ T × T × (P1 ∪ P2), i ≠ j, a feasible solution to the mirrored double RRT problem corresponds to a set of n′(n′ − 1) triples such that (i) for each pair (i, j) ∈ T × T, i < j, exactly one triple of form (i, j, p1) or (j, i, p1) with p1 ∈ P1 is chosen, (ii) for each pair (i, p1) ∈ T × P1 exactly one triple of form (i, j, p1) or (j, i, p1) with j ∈ T \ {i} is chosen, and (iii) for each chosen triple (i, j, p) ∈ T × T × P1, i ≠ j, the triple (j, i, p) ∈ T × T × P2 is chosen. The goal of the mirrored double RRT problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Conditions (i) and (ii) ensure a single RRT in P1 and condition (iii) lets the matches in P2 be complementary to the ones in P1. Obviously,


the mirrored double RRT problem can be solved by solving a single RRT problem where the costs ci,j,p are defined as follows:

$$c_{i,j,p} = c_{i,j,p_1(p)} + c_{j,i,p_2(p)} \qquad \forall\, i, j \in T,\ i \neq j,\ p \in P \qquad (2.10)$$

Note that p_r(p) denotes the period of P_r corresponding to index p of P in a single RRT.

Theorem 2.2. The mirrored double RRT problem is NP-hard.

Proof. We reduce the single RRT problem to the mirrored double RRT problem. The single RRT problem is known to be NP-hard due to theorem 2.1. Given an instance of the single RRT problem we construct an instance of the mirrored double RRT problem by choosing n′ = n and setting the costs c′i,j,p as follows:

$$c'_{i,j,p} = \begin{cases} c_{i,j,p} & \forall\, i, j \in T,\ i \neq j,\ p \in P_1,\\ 0 & \forall\, i, j \in T,\ i \neq j,\ p \in P_2. \end{cases}$$

Let t_d be an optimal solution to the mirrored double RRT problem. Then, the single RRT t_s arranged in P1 of t_d is an optimal solution to the single RRT problem. Suppose there is a single RRT t̄_s having lower cost than t_s. Then, we can construct a mirrored double RRT t̄_d having lower cost than t_d: set the matches in P1 according to t̄_s and let the matches in P2 be complementary to those in P1. Since t̄_s has lower cost than t_s and matches in P2 do not affect the tournament's overall cost, t̄_d has lower cost than t_d. ⊓⊔

The concept of round-based double RRTs is a generalization of mirrored double RRTs. Here, the two rounds need not be complementary. The UEFA champions league is a real world example of round-based double RRTs. Table 2.3 illustrates an example with n = 6 teams.

Table 2.3. Round-based double RRT for n = 6

period   1.1  1.2  1.3  1.4  1.5  2.1  2.2  2.3  2.4  2.5
match 1  1-2  5-6  3-4  4-5  5-1  3-2  6-1  5-4  6-5  6-2
match 2  5-3  1-4  2-5  3-1  4-2  6-4  5-2  6-3  2-4  3-5
match 3  4-6  2-3  1-6  2-6  3-6  1-5  4-3  2-1  1-3  4-1


We define the corresponding round-based double RRT problem in the following.

Definition 2.6. Given a set T, |T| = n′, two sets P1 and P2, |P1| = |P2| = n′ − 1, and costs ci,j,p for each (i, j, p) ∈ T × T × (P1 ∪ P2), i ≠ j, a feasible solution to the round-based double RRT problem corresponds to a set of n′(n′ − 1) triples such that (i) for each pair (i, j) ∈ T × T, i < j, exactly one triple of form (i, j, p1) or (j, i, p1) with p1 ∈ P1 is chosen and (ii) exactly one triple of form (i, j, p) with p ∈ (P1 ∪ P2) is chosen, and such that (iii) for each pair (i, p) ∈ T × (P1 ∪ P2) exactly one triple of form (i, j, p) or (j, i, p) with j ∈ T \ {i} is chosen. The goal of the round-based double RRT problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Conditions (i) and (ii) imply that each team plays against each other team exactly once in each round (once at home and once away). Condition (iii) assures that each team plays exactly once in each period. We represent the round-based double RRT problem as an IP model employing 2n(n − 1)² variables and 7n(n−1)/2 constraints.

Model 2.3: RBDRRTP-IP

$$\min \sum_{i \in T} \sum_{j \in T \setminus \{i\}} \sum_{p \in (P_1 \cup P_2)} c_{i,j,p}\, x_{i,j,p} \qquad (2.11)$$

s.t.

$$\sum_{p_1 \in P_1} (x_{i,j,p_1} + x_{j,i,p_1}) = 1 \qquad \forall\, i, j \in T,\ i < j \qquad (2.12)$$

$$\sum_{p \in (P_1 \cup P_2)} x_{i,j,p} = 1 \qquad \forall\, i, j \in T,\ i \neq j \qquad (2.13)$$

$$\sum_{j \in T \setminus \{i\}} (x_{i,j,p} + x_{j,i,p}) = 1 \qquad \forall\, i \in T,\ p \in (P_1 \cup P_2) \qquad (2.14)$$

$$x_{i,j,p} \in \{0,1\} \qquad \forall\, i, j \in T,\ i \neq j,\ p \in (P_1 \cup P_2) \qquad (2.15)$$

Equations (2.12) and (2.14) force the arranged matches to form a single RRT in P1 corresponding to conditions (i) and (iii). Constraint (2.13) ensures that each pair of teams i and j compete twice at different venues representing condition (ii). Hence, taking into account equation (2.14) there is another single RRT formed in P2 .


Theorem 2.3. The round-based double RRT problem is NP-hard.

Proof. The reduction of the single RRT problem to the round-based double RRT problem with n′ = n can be done exactly as the reduction of the single RRT problem to the mirrored double RRT problem. ⊓⊔

A (general) double RRT has no additional requirements compared to those specified in section 1.3 with r set to 2; see table 2.4 for an example.

Table 2.4. Double RRT for n = 6

period    1    2    3    4    5    6    7    8    9   10
match 1  5-4  1-2  5-6  3-4  6-5  4-5  5-1  3-2  6-1  6-2
match 2  6-3  5-3  1-4  2-5  2-4  3-1  4-2  6-4  5-2  3-5
match 3  2-1  4-6  2-3  1-6  1-3  2-6  3-6  1-5  4-3  4-1

We define the corresponding double RRT problem below.

Definition 2.7. Given a set T, |T| = n′, a set P, |P| = 2(n′ − 1), and costs c′i,j,p for each (i, j, p) ∈ T × T × P, i ≠ j, a feasible solution to the double RRT problem corresponds to a set of n′(n′ − 1) triples such that (i) for each pair (i, j) ∈ T × T, i ≠ j, exactly one triple of the form (i, j, p) with p ∈ P is chosen and such that (ii) for each pair (i, p) ∈ T × P exactly one triple of form (i, j, p) or (j, i, p) with j ∈ T \ {i} is chosen. The double RRT problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

We represent the double RRT problem as an IP model using 2n(n − 1)² variables and 2n(n − 1) constraints, see (2.16) to (2.19). Constraints (2.17) and (2.18) directly correspond to (i) and (ii), respectively. (2.16) states the goal of minimizing the arranged matches' costs.

Theorem 2.4. The double RRT problem is NP-hard.

Proof. We reduce the single RRT problem to the double RRT problem. The single RRT problem is known to be NP-hard due to theorem 2.1. Given an instance of the single RRT problem we construct an instance of the double RRT problem with n′ = n as follows. Let f be an arbitrarily ordered 1-factorization of Kn, built using the method presented in Schreuder [76] for example. Remember that there always is a 1-factorization of Kn if n is even.


Model 2.4: DRRTP-IP

$$\min \sum_{i \in T} \sum_{j \in T \setminus \{i\}} \sum_{p \in P} c'_{i,j,p}\, x_{i,j,p} \qquad (2.16)$$

s.t.

$$\sum_{p \in P} x_{i,j,p} = 1 \qquad \forall\, i, j \in T,\ i \neq j \qquad (2.17)$$

$$\sum_{j \in T \setminus \{i\}} (x_{i,j,p} + x_{j,i,p}) = 1 \qquad \forall\, i \in T,\ p \in P \qquad (2.18)$$

$$x_{i,j,p} \in \{0,1\} \qquad \forall\, i, j \in T,\ i \neq j,\ p \in P \qquad (2.19)$$

We define the costs c′i,j,p of the double RRT problem as follows:

$$c'_{i,j,p} = \begin{cases}
c_{i,j,p} & \forall\, i, j \in T,\ i \neq j,\ p \in \{1, \dots, n-1\},\\
0 & \forall\, (i, j, n-1+p) \text{ with } (i, j),\ i < j, \text{ being in 1-factor } p \text{ of } f,\\
0 & \forall\, (j, i, n-1+p) \text{ with } (i, j),\ i < j, \text{ being in 1-factor } p \text{ of } f,\\
M & \text{otherwise,}
\end{cases}$$

with $M = \sum_{i \in T} \sum_{j \in T \setminus \{i\}} \sum_{p \in P} c_{i,j,p}$.

Let t_d be an optimal solution to the double RRT problem. Then, the single RRT t_s arranged in periods {1, . . . , n − 1} of t_d is an optimal solution to the single RRT problem. Suppose there is a single RRT t̄_s having lower cost than t_s. Then, we can construct a double RRT t̄_d having lower cost than t_d: set the matches in periods {1, . . . , n − 1} according to t̄_s. Additionally, let the matches in periods {n, . . . , 2n − 2} correspond to f. Then, the matches in periods {n, . . . , 2n − 2} have an overall cost of zero which, obviously, is the minimum possible value. Clearly, t̄_d is a (general) double RRT and, furthermore, t̄_d has lower cost than t_d since t̄_s has lower cost than t_s. ⊓⊔

2.3 r Round Robin Tournament

An obvious generalization of the RRTs presented so far is to let r ∈ N, in particular r > 2. We take into account the same interrelations between rounds as outlined in section 2.2. In the following we describe the resulting RRTs and give corresponding cost minimization problems, all of which are NP-hard. We refrain from giving proofs of complexity and IP models, respectively, since they are straightforward from those in section 2.2.

A mirrored r-RRT is an r-RRT hosting a match between teams i ∈ T and j ∈ T, i ≠ j, at i's home in period p ∈ {1, . . . , (r − 1)(n − 1)} if and only if a match between teams i and j at j's home takes place in period p + n − 1. Hence, the tournament is divided into r single RRTs where single RRT r′, r′ ∈ {1, . . . , r − 1}, is complementary to single RRT r′ + 1. We define the corresponding cost minimization problem hereafter.

Definition 2.8. Given a set T with |T| = n′, sets P_r′ with |P_r′| = n′ − 1 for each r′ ∈ {1, . . . , r}, and costs ci,j,p for each (i, j, p) ∈ T × T × ∪_{r′=1}^{r} P_r′, i ≠ j, a feasible solution to the mirrored r-RRT problem corresponds to a set of r n′(n′ − 1)/2 triples such that (i) for each pair (i, j) ∈ T × T, i < j, exactly one triple of form (i, j, p1) or (j, i, p1) with p1 ∈ P1 is chosen, (ii) for each pair (i, p1) ∈ T × P1 exactly one triple of form (i, j, p1) or (j, i, p1) with j ∈ T \ {i} is chosen, and (iii) for each chosen triple (i, j, p) ∈ T × T × P_r′, i ≠ j, r′ ∈ {1, . . . , r − 1}, the triple (j, i, p) ∈ T × T × P_{r′+1} is chosen. The goal of the mirrored r-RRT problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Conditions (i) and (ii) ensure that a single RRT takes place in the first round. Condition (iii) induces each round to be complementary to the previous and the following one. Obviously, mirrored r-RRT problems can be solved by solving single RRT problems. To this end, we set the costs of a single RRT problem according to a generalization of equation (2.10):

$$c_{i,j,p} = \sum_{r'=1}^{\lceil r/2 \rceil} c_{i,j,p_{2r'-1}(p)} + \sum_{r'=1}^{\lfloor r/2 \rfloor} c_{j,i,p_{2r'}(p)} \qquad \forall\, i, j \in T,\ i \neq j,\ p \in P \qquad (2.20)$$

A round-based r-RRT is an r-RRT according to the characteristics introduced in section 1.3. Its periods can be partitioned into r rounds where each round is a single RRT. We define the corresponding cost minimization problem below.

Definition 2.9. Given a set T with |T| = n′, sets P_r′ with |P_r′| = n′ − 1 for each r′ ∈ {1, . . . , r}, and costs ci,j,p for each (i, j, p) ∈ T × T × ∪_{r′=1}^{r} P_r′, i ≠ j, a feasible solution to the round-based r-RRT problem corresponds to a set of r n′(n′ − 1)/2 triples such that (i) for each r′ ∈ {1, . . . , r} the chosen triples (i, j, p) with p ∈ P_r′ form a single RRT and such that (ii) for each pair (i, j) ∈ T × T, i ≠ j, at least ⌊r/2⌋ triples (i, j, p) with p ∈ ∪_{r′=1}^{r} P_r′ are chosen. The goal of the round-based r-RRT problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Condition (i) is stated straightforwardly. Condition (ii) limits the difference between the number of matches between teams i ∈ T and j ∈ T, i ≠ j, at i's home and the number of matches between teams i and j at j's home to be no more than 1. A (general) r-RRT is fully specified by the characteristics given in section 1.3. In the following, the cost minimization problem is given.

Definition 2.10. Given a set T with |T| = n′, a set P with |P| = r(n′ − 1), and costs c′i,j,p for each (i, j, p) ∈ T × T × P, i ≠ j, a feasible solution to the r-RRT problem corresponds to a set of r n′(n′ − 1)/2 triples such that (i) for each pair (i, j) ∈ T × T, i ≠ j, at least ⌊r/2⌋ triples (i, j, p) with p ∈ P are chosen, such that (ii) for each pair (i, j) ∈ T × T, i < j, exactly r triples (i, j, p) or (j, i, p) with p ∈ P are chosen, and such that (iii) for each pair (i, p) ∈ T × P exactly one triple of form (i, j, p) or (j, i, p) with j ∈ T \ {i} is chosen. The goal of the r-RRT problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Condition (i) limits the difference between the number of matches between teams i ∈ T and j ∈ T, i ≠ j, at i's home and the number of matches between teams i and j at j's home to be no more than 1. Condition (ii) implies that each pair of teams meets exactly r times. Condition (iii) assures that each team has exactly one opponent per period. Note that we can drop (ii) if r is even, since then (ii) is implied by (i).
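The three conditions of definition 2.10 are easy to verify for a given set of triples. The following sketch is our own helper (with teams and periods assumed to be 1-based integers as in section 1.3) and checks them directly.

```python
from collections import Counter

def is_feasible_r_rrt(triples, n, r):
    """Check conditions (i)-(iii) of the (general) r-RRT problem for triples
    (i, j, p) meaning team i hosts team j in period p; P = {1, ..., r(n-1)}."""
    teams, periods = range(1, n + 1), range(1, r * (n - 1) + 1)
    at_home = Counter((i, j) for i, j, p in triples)
    meetings = Counter(frozenset((i, j)) for i, j, p in triples)
    busy = Counter()
    for i, j, p in triples:
        busy[i, p] += 1
        busy[j, p] += 1
    cond_i = all(at_home[i, j] >= r // 2 for i in teams for j in teams if i != j)
    cond_ii = all(meetings[frozenset((i, j))] == r
                  for i in teams for j in teams if i < j)
    cond_iii = all(busy[i, p] == 1 for i in teams for p in periods)
    return cond_i and cond_ii and cond_iii
```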

2.4 Decomposition Schemes

Due to the complexity of the RRT problems outlined in sections 2.1 to 2.3, two decomposition schemes are used frequently. Both separate the decision about the period a match takes place in from the decision about the venue it is carried out at.

• First–Schedule–Then–Break: First, each pair of teams is fixed to compete in a specific period. Based on this timetable each match's venue is determined, see Trick [81] for example.


• First–Break–Then–Schedule: First, the matches' venues are decided. Afterwards, the matches are assigned to periods, see Nemhauser and Trick [68] for an example.

The optimization problems considered below refer to single RRTs. However, the adaptation to the other basic problems is straightforward.

2.4.1 First–Schedule–Then–Break

An opponent schedule, as defined in Post and Woeginger [69], is a timetable which determines for each pair (i, p), i ∈ T, p ∈ P, the opponent of team i in period p. See table 2.5 for an example corresponding to the single RRT in table 2.1; a small sketch following the table shows how such a schedule can be derived from a set of matches. Team i's opponent in period p is specified in the row corresponding to i and the column corresponding to p.

Table 2.5. Opponent Schedule for n = 6

team \ period   1   2   3   4   5
1               2   4   6   3   5
2               1   3   5   6   4
3               5   2   4   1   6
4               6   1   3   5   2
5               3   6   2   4   1
6               4   5   1   2   3
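The sketch announced above converts a set of matches (i, j, p) into an opponent schedule like table 2.5; it is a small helper of our own, not code from the book.

```python
def opponent_schedule(matches, teams, periods):
    """opp[i][p] = opponent of team i in period p, derived from triples (i, j, p)."""
    opp = {i: {} for i in teams}
    for i, j, p in matches:
        opp[i][p] = j
        opp[j][p] = i
    return opp

# Example: the first matchday of table 2.1 consists of 1-2, 5-3 and 4-6.
opp = opponent_schedule([(1, 2, 1), (5, 3, 1), (4, 6, 1)], range(1, 7), [1])
print([opp[i][1] for i in range(1, 7)])   # -> [2, 1, 5, 6, 3, 4]
```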

In the following, we formally define the problem of finding a minimum cost opponent schedule.

Definition 2.11. Given a set T, |T| = n, a set P, |P| = n − 1, and costs ci,j,p for each (i, j, p) ∈ T × T × P, i < j, a feasible solution to the opponent-schedule problem corresponds to a set of n(n − 1)/2 triples such that (i) for each subset of teams (i, j) ∈ T × T, i < j, exactly one triple (i, j, p) with p ∈ P is chosen and such that (ii) for each pair (i, p) ∈ T × P exactly one triple of form (i, j, p), j ∈ T, i < j, or (j, i, p), j ∈ T, j < i, is chosen. The goal of the opponent-schedule problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

Condition (i) ensures that each team meets each other team while condition (ii) forces each team to compete exactly once per period.


These requests as well as the cost minimization goal can be represented as an IP model employing n(n − 1)²/2 binary variables and 3n(n−1)/2 constraints, see (2.21) to (2.24).

Model 2.5: Opponent-IP

$$\min \sum_{i \in T} \sum_{j \in T:\, j > i} \sum_{p \in P} c_{i,j,p}\, x_{i,j,p} \qquad (2.21)$$

For n = 18 run times may not be representative since we could only test very few instances. However, three out of three instances with PπS = 0.7 and n = 18 could be solved to optimality. The fraction of instances for which the solution process terminated (either finding a feasible solution or detecting infeasibility) decreases with increasing PπS, which may be an indicator of larger run time requirements. Furthermore, considering stadium availability seems to lead to lower run times than general forbidden matches do. Although instances considering stadium availability can be considered special cases of general forbidden matches, run times for 12 < n < 18 and PπS = Pπ are significantly lower.

HAP–set–based single RRT Problem

Scheduling a single RRT to minimum cost with respect to a given HAP set can be seen as a special case of taking care of stadium availability with PπS = 0.5. However, due to the instance generation scheme, infeasibility of an instance with a given HAP set is less probable than infeasibility of an instance with PπS = 0.7 (see above) and, therefore, presumably less probable than infeasibility of an instance with PπS = 0.5. This is clearly confirmed in table 3.8: only one out of 70 randomly generated instances is infeasible while 20 out of 63 instances generated with PπS = 0.7 are infeasible according to table 3.7. Run times are significantly lower than those for instances considering stadium availability given in table 3.7 and, consequently, than those for instances incorporating general forbidden matches and for the single RRT problem. Since this section is the one considering the restrictions

Table 3.8. Comp. Results for HAP–set–based Single RRT Problem

n    i./s.f./n.s.   r.t.
6    10/10/0        0.01
8    10/9/1         0.01
10   10/10/0        0.18
12   10/10/0        0.62
14   10/10/0        3.28
16   10/10/0        72.73
18   10/10/0        5987.32

proved to be the most run-time reducing in the course of the previous sections, we illustrate the run time behavior in figure 3.2.

Fig. 3.2. Run time behavior for the HAP–set–based single RRT problem (ln(t) plotted against n)

In spite of the smaller run times we, again, observe a clearly exponential run time behavior which corresponds to the proof given in section 2.4.2.
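The exponential trend claimed here can be quantified with a quick log-linear fit of the run times from table 3.8. This is an illustrative calculation of ours, not part of the original study.

```python
import numpy as np

n = np.array([6, 8, 10, 12, 14, 16, 18])
t = np.array([0.01, 0.01, 0.18, 0.62, 3.28, 72.73, 5987.32])   # run times from table 3.8

slope, intercept = np.polyfit(n, np.log(t), 1)   # least-squares fit of ln(t) ~ a*n + b
print(f"ln(t) grows by about {slope:.2f} per additional team")
```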


Highly Attended Matches

When considering the limitation of the number of highly attended matches per period we expect the same opposing effects as outlined before: reduction of the solution space versus difficulty of finding feasible solutions. We created instances with probability Pa = 0.2 for a match to be attractive. We vary amax depending on the instances' size n. Note that for n = 10 and Pa = 0.2 there is one attractive match per period on average. Therefore, we increase amax from 1 for n = 8 to 2 for n = 10. (A small sketch after table 3.9 shows how the amax restriction can be checked for a given schedule.)

Table 3.9. Comp. Results for Highly Attended Matches

n    Pa    amax   i./s.f./n.s.   r.t.
6    0.2   1      10/10/0        0.01
8    0.2   1      10/10/0        0.09
10   0.2   2      10/10/0        0.29
12   0.2   2      10/10/0        2.78
14   0.2   2      10/10/0        96.90
16   0.2   2      10/10/0        16925.70
18   0.2   2      3/0/0          —
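The sketch announced above checks the amax restriction for a given schedule; the helper below is our own illustration, and its names and data layout are assumptions.

```python
from collections import Counter

def periods_violating_amax(matches, attractive_pairs, a_max):
    """Return the periods that host more than a_max attractive matches, given
    matches as triples (i, j, p) and attractive pairings as unordered pairs."""
    count = Counter(p for i, j, p in matches
                    if frozenset((i, j)) in attractive_pairs)
    return sorted(p for p, c in count.items() if c > a_max)
```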

Obviously, amax is differently restrictive for instances having 6 and 8 teams and for those having 10, 12, 14, 16, and 18 teams, respectively. This leads to a distortion of run times. However, run times in table 3.9 are comparable to those of the single RRT problem in table 3.5. We clearly observe larger run times than for the single RRT problem when considering highly attended matches.

Regions' Capacity

As outlined in section 3.1.2 we can think of a huge number of constellations taking regions into account. For a given instance size n there can be various numbers of regions, various sizes of regions, and various assignments of teams to regions (which can influence run times due to different costs assigned to different teams). We exemplarily create three classes R2, R3, and R4 of instances. In each class we have a given number |R| = 2, |R| = 3, and |R| = 4 of regions, respectively. The concept is to let regions have sizes as close to each other as possible without being identical for all regions. In R2 there are two disjoint regions of size n/2 − 1 and n/2 + 1, respectively. In R3 the regions are disjoint and have minimum size ⌊n/3⌋; n mod 3 of the regions have size ⌊n/3⌋ + 1. In R4 there are four disjoint regions having minimum size ⌊n/4⌋; n mod 4 of the regions have size ⌊n/4⌋ + 1. The number of matches in a specific region R′ ⊂ T is limited by C_{R′} = ⌊|R′|/2⌋ + 1. Since C_{R′} ≥ ⌊|R′|/2⌋ must hold (see section 3.1.2), the limitation for each region is quite restrictive. (A small sketch after table 3.10 computes the region sizes and capacities used here.)

Table 3.10. Comp. Results for Regions

|R| = 2
n    |R′|    C_{R′}   i./s.f.   r.t.
6    2/4     2/3      10/10     0.01
8    3/5     2/3      10/10     0.09
10   4/6     3/4      10/10     0.40
12   5/7     3/4      10/10     3.65
14   6/8     4/5      10/10     81.35
16   7/9     4/5      10/10     4080.78
18   8/10    5/6      3/0       —

|R| = 3
n    |R′|     C_{R′}    i./s.f.   r.t.
6    2/2/2    2/2/2     10/10     0.01
8    2/3/3    2/2/2     10/10     0.08
10   3/3/4    2/2/3     10/10     0.83
12   4/4/4    3/3/3     10/10     4.82
14   4/5/5    3/3/3     10/10     345.28
16   5/5/6    3/3/4     10/5      32663.66
18   6/6/6    4/4/4     3/0       —

|R| = 4
n    |R′|       C_{R′}     i./s.f.   r.t.
6    1/1/2/2    1/1/2/2    10/10     0.02
8    2/2/2/2    2/2/2/2    10/10     0.03
10   2/2/3/3    2/2/2/2    10/10     0.83
12   3/3/3/3    2/2/2/2    10/10     16.21
14   3/3/4/4    2/2/3/3    10/10     546.06
16   4/4/4/4    3/3/3/3    10/10     20775.20
18   4/4/5/5    3/3/3/3    3/0       —
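The sketch announced before table 3.10 computes the region sizes and capacities of the test classes; the construction reproduces the values of table 3.10, but the code itself is our own illustration.

```python
def region_sizes(n, num_regions):
    """Sizes of the disjoint regions in the classes R2, R3, R4: as equal as
    possible, except that R2 deliberately uses n/2 - 1 and n/2 + 1."""
    if num_regions == 2:
        return [n // 2 - 1, n // 2 + 1]
    base, extra = divmod(n, num_regions)
    return [base + 1] * extra + [base] * (num_regions - extra)

def region_capacity(size):
    """Per-period limit on matches inside a region: C_R' = floor(|R'|/2) + 1."""
    return size // 2 + 1

print(region_sizes(16, 3), [region_capacity(s) for s in region_sizes(16, 3)])
# -> [6, 5, 5] [4, 3, 3]   (cf. the |R| = 3, n = 16 row of table 3.10)
```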

Results are given in table 3.10. Each single class of problem instances proved to be feasible. Almost all instances could be solved to optimality; no instance having 18 teams could be solved to optimality. However, for those instances not solved to optimality feasible solutions were found. With respect to the run times we can conclude that a larger number of regions mostly means higher run times. We observe exceptions from this rule for instances having 16 teams and 3 and 4 regions, respectively. A possible explanation for this effect is that the limitation of the number of matches in a specific region is much more restrictive in those instances


having 3 regions. For example, for instances having 16 teams and 3 regions the number of matches per region is restricted to about 0.62 times the number of teams in a region on average. This rate amounts to 0.75 for instances with 16 teams and 4 regions and is, therefore, less restrictive. Furthermore, this effect elucidates why only half of the instances having 16 teams and 3 regions could be solved to optimality. The basic problem can be seen as a problem having |R| = 1, R = T, and C_R = n/2 + 1 according to the concept outlined above. Then, it fits our observation that run times are lower if the number of regions is smaller.

Breaks

We study problems incorporating requirements considering breaks as introduced in section 3.2.1. In detail, we solve problems requiring a minimum number of breaks (allowing and forbidding breaks in the second period) and problems having exactly one break per team. Run times are given in table 3.11.

Table 3.11. Comp. Results for Breaks

n    min no   min no, not 2nd   ex. one b.   ex. one b., not 2nd
6    0.23     0.21              0.25         —
8    26.79    26.63             38.44        74.12
10   —        —                 —            —

Clearly, allowing or forbidding breaks in the second period has no impact on run times when a minimum number of breaks is required. For n = 10 Cplex aborted the solution process due to lack of memory after about 15 hours of run time for each single instance. In most cases not even a single feasible solution was found. This, first of all, shows the enormous increase in run time for solving these problems to optimality. Second, it gives an idea of the difficulty of finding even one feasible solution when cost oriented branching is employed instead of the standard generation scheme mentioned in section 3.2.1. This observation motivates the development of a branching scheme specialized in single RRTs having the minimum number of breaks in chapter 5.

Run times for instances requiring exactly one break per team are even higher. Again, we cannot solve instances with more than n = 8 teams. Here, allowing or forbidding breaks in the second period does influence run times. The average run time when forbidding breaks in the second period is about twice as high as when allowing them. This coincides with the computational effort needed to find feasible solutions at all. The run time to find the first feasible solution when forbidding breaks in the second period is about three times as high as when we allow them.

Opponents' Strengths

In section 3.2.2 four variants to consider a team's opponents' strengths in order to establish fairness are proposed. We tested instances having up to n = 18 teams for all variants. The number of strength groups is set to 2 and n/2, respectively. For "changing" and "balanced" we additionally set the number of violations escmax and esbmax, respectively, to 0. Considering "equally unchanging" and "equally unbalanced" we additionally set the prescribed number of violations per team to 1. Therefore, each team must violate the opponent strength constraint exactly once.

Table 3.12. Comp. Results for Changing Opponents' Strength Groups

10/0/10 — 10/10/0 0.02 10/0/10 — 10/10/0 0.18 10/0/10 — 10/10/0 10.33 10/0/10 —

n 2

r.t.

10/0/10 — 10/10/0 0.34 10/10/0 4.59 10/10/0 233.64 10/10/0 11559.40 3/0/0 — — —

In tables 3.12 to 3.15 results for all classes of instances are provided. Referring to the question of instances' feasibility posed in section 3.2.2, we can identify problem classes with changing opponents' strengths that are infeasible due to the values of n and |S|, see table 3.12. For n ≤ 18, n mod 4 = 2, and |S| = 2 we observe infeasibility. We conjecture this to be true for n > 18. Furthermore, n = 6 and |S| = 3 leads to infeasibility as well. Run times for feasible instances having |S| = 2 are significantly lower than for the single RRT problem of corresponding size n. On the contrary, run times are significantly higher for |S| = n/2 in comparison to both the single RRT problem and instances with changing opponents' strengths and |S| = 2. Consequently, instances having 16 teams and more cannot be solved to optimality.

Table 3.13. Comp. Results for Balanced Opponents' Strengths
      |S| = 2                   |S| = n/2
n     i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.
6     10/0/10        —          10/0/10        —
8     10/10/0        0.03       10/10/0        0.03
10    10/0/10        —          10/0/10        —
12    10/10/0        0.61       10/10/0        0.50
14    10/0/10        —          10/0/10        —
16    10/10/0        10.09      10/10/0        9.26
18    10/0/10        —          10/0/10        —

Inspecting table 3.13 we again identify problem classes that are infeasible. Note that the balanced opponents' strengths structure is a special case of the changing opponents' strengths structure. Therefore, it is straightforward that n and |S| are infeasible according to balanced opponents' strengths if n and |S| are infeasible according to changing opponents' strengths. However, we identify some instance classes being feasible according to changing opponents' strengths but infeasible according to balanced opponents' strengths: for n ≤ 18, n mod 4 = 2, and |S| = n/2 we observe infeasibility. Again, we conjecture this to be valid for n > 18. Run times for both |S| = 2 and |S| = n/2 are significantly lower than for the single RRT problem. Run times for |S| = n/2 are slightly lower than those for |S| = 2, which is in contrast to the relation observed for changing opponents' strengths. Note that instances with |S| = 2 and changing opponents' strengths are balanced as well. Hence, the results for |S| = 2 in tables 3.12 and 3.13 differ only slightly and only due to cost structure and model formulation, respectively.

Run times for equally unchanging opponents' strengths are given in table 3.14. Here we observe remarkably higher run times compared with changing opponents' strengths given in table 3.12. The reason for this might be the larger number of binary variables necessary to represent the equally unchanging opponents' strengths requirement. Furthermore, integer variables not restricted to binary values are incorporated.

Table 3.14. Comp. Results for Equally Unchanging Opponents' Strengths
      |S| = 2                   |S| = n/2
n     i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.
6     10/0/10        —          10/0/10        —
8     10/10/0        0.76       10/10/0        8.26
10    10/0/10        —          10/10/0        3116.31
12    10/10/0        195.01     3/0/0          —
14    3/0/0          —          —              —

As far as table 3.14 provides insights into this topic, exactly the same classes of problem instances seem to be infeasible as for changing opponents' strengths. Finally, run times for equally unbalanced opponents' strengths are given in table 3.15.

Table 3.15. Comp. Results for Equally Unbalanced Opponents' Strengths
      |S| = 2                   |S| = n/2
n     i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.
6     10/0/10        —          10/0/10        —
8     10/10/0        0.63       10/10/0        2.40
10    10/0/10        —          10/0/10        —
12    10/10/0        639.20     10/10/0        17290.30
14    3/0/0          —          3/0/0          —

Run times are clearly higher than for balanced opponents' strengths and for equally unchanging opponents' strengths given in tables 3.13 and 3.14, respectively. Again, the results give only a slight idea of the infeasible instance classes. However, as was the case for changing and equally unchanging opponents' strengths, there seems to be no difference in feasibility between balanced and equally unbalanced opponents' strengths.

Opponents' Strengths and Breaks

We report computational results for a combination of strength group requirements and break requirements as introduced in sections 3.2.1 and 3.2.2. In contrast to the other results corresponding to IP modelling techniques in the chapter at hand, we combine two requirements here because they have a special meaning in the context of the branch-and-price (B&P) approach developed in chapter 6. Exemplarily, we postulate the minimum number of breaks allowing breaks in the second period and changing opponents' strengths considering 2 and n/2 strength groups, respectively.

Table 3.16. Comp. Results for Breaks and Changing Opponents' Strengths
      |S| = 2                   |S| = n/2
n     i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.
8     10/10/0        3.71       10/10/0        83.81
10    —              —          3/0/0          —
12    3/0/0          —          —              —

Clearly, instances having 6 teams, or having 10 teams and 2 strength groups, have no solution at all, see table 3.16. Furthermore, no instance having more than 8 teams could be solved to optimality; the solution process was aborted due to lack of memory instead. Run times for 8 teams and n/2 = 4 strength groups are substantially larger than the run times for both the minimum number of breaks in table 3.11 and changing strength groups in table 3.12. In analogy, considering 2 strength groups for 8 teams (and the minimum number of breaks), run times are larger than for the problem considering only changing strength groups. However, they are clearly smaller than for the problem considering only the minimum number of breaks. This observation backs up the one in section 3.3.2 stating that balanced opponents' strengths (changing opponents' strengths with |S| = 2 means balanced opponents' strengths) reduce run times.

Teams' Preferences

We consider six classes of instances considering teams' preferences as introduced in section 3.2.3: teams specify 1-2, 1-3, and 2-4 preferences, and exactly or at least 1, 1, and 2 preferences, respectively, have to be granted. Obviously, one can think of more preferences if the number of teams (and, therefore, the number of periods) is larger. For the sake of comparability we refrain from doing so. Results are given in table 3.17.

Table 3.17. Comp. Results for Teams' Preferences
      1-2, exactly 1            1-3, exactly 1            2-4, exactly 2
n     i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.
6     10/10/0        0.01       10/10/0        0.02       10/10/0        0.02
8     10/10/0        0.09       10/10/0        0.14       10/10/0        0.20
10    10/10/0        0.50       10/10/0        0.86       10/10/0        1.20
12    10/10/0        3.14       10/10/0        4.01       10/10/0        5.84
14    10/10/0        66.05      10/10/0        157.36     10/10/0        359.98
16    10/10/0        6192.64    10/10/0        12223.80   10/10/0        21002.60
18    3/1/0          156683.00  3/0/0          —          3/0/0          —

      1-2, at least 1           1-3, at least 1           2-4, at least 2
n     i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.       i./s.f./n.s.   r.t.
6     10/10/0        0.01       10/10/0        0.02       10/10/0        0.01
8     10/10/0        0.05       10/10/0        0.06       10/10/0        0.09
10    10/10/0        0.39       10/10/0        0.48       10/10/0        0.66
12    10/10/0        2.81       10/10/0        3.05       10/10/0        2.90
14    10/10/0        90.45      10/10/0        67.22      10/10/0        53.91
16    10/10/0        3596.75    10/10/0        2841.99    10/10/0        13205.00
18    3/0/0          —          3/0/0          —          3/0/0          —

For each single problem class and size we can conclude that run times are significantly higher than for the single RRT problem of identical size. Furthermore, run times for each problem class considering a given exact number of preferences to be granted are higher than for the corresponding class considering a given minimum number of preferences to be granted. This effect might result from the difficulty of finding feasible solutions: obviously, the set of solutions having an exact number p of granted preferences per team is a subset of the set of solutions having p as a minimum number of granted preferences per team. As we can see, increasing the number of preferences and increasing the number of preferences to be granted leads to higher run times if an exact number of preferences to be fulfilled is given. Again, the reason for this is probably the difficulty of finding feasible solutions: increasing the number of given preferences implies a rising number of preferences which must be neglected. This effect does not come into play if we consider a minimum number of preferences to be fulfilled. Instead, by increasing the number of preferences there is more freedom to choose the preferences to be fulfilled. Therefore, there is a tendency that run times are lower if 1 to 3 preferences are given in comparison with 1 to 2 preferences.

3.4 Summary

In this chapter we pick up several prominent real world requirements in the context of RRT scheduling. Furthermore, we substantiate requirements related to fairness which so far have mostly been proposed in abstract terms in the literature. We formally define the requirements by means of IP modelling techniques. Moreover, we study the run time behavior resulting from the optimization models using Cplex. We observe exponential run time behavior for nearly all variations of the single RRT problem. Therefore, run times become exorbitant as soon as problem sizes grow to values relevant for real world problems. We detect a single exception to this rule: if we consider balanced opponents' strength groups, run times remain manageable. Hence, these variants might serve as a basis for real world problems.

4 Combinatorial Properties of Strength Groups

In this chapter we provide insights into combinatorial aspects concerning strength groups in RRTs as presented in Briskorn [7]. Note that indices are "0-based" throughout this chapter. We refer to section 3.2.2 for a motivation of strength groups and basic definitions.

Definition 4.1. A single RRT where no team plays against teams of the same strength group in two consecutive periods is called group-changing.

Definition 4.2. A single RRT where no team plays more than once against teams of the same strength group within |S| consecutive periods is called group-balanced.

Group-changing single RRTs and group-balanced single RRTs correspond to the concepts presented in section 3.2.2. An interesting question is how n and |S| can be chosen such that a group-changing single RRT or a group-balanced single RRT exists. We empirically prove specific values of n and |S| to be infeasible in section 3.3.2. Moreover, we conjecture classes of values to be infeasible in general. These conjectures are proven to be true in the chapter at hand. Furthermore, in section 4.4 we prove the complexity of cost minimization problems which are in line with the problems considered in chapter 2 and additionally consider strength groups.

For the sake of convenience we introduce some short notations. A specific strength group is referred to as S_k ∈ S with k ∈ {0, ..., |S| − 1}. We denote the index of the strength group of team i ∈ T by S(i), i.e. i ∈ S_{S(i)}. The opponent of team i ∈ T in period p ∈ P is denoted by o_{i,p} ∈ T.
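As an illustration of definitions 4.1 and 4.2, the following sketch (in Python, which is not otherwise used in this book) checks both properties for a single RRT given by its opponent function; the data layout (a dictionary opponent[(i, p)] giving the opponent and a list group[i] of strength group indices) is an assumption made purely for this example.

    def is_group_changing(opponent, group, n):
        # definition 4.1: no team meets the same strength group in two consecutive periods
        for i in range(n):
            for p in range(n - 2):
                if group[opponent[i, p]] == group[opponent[i, p + 1]]:
                    return False
        return True

    def is_group_balanced(opponent, group, n, s):
        # definition 4.2: no team meets the same strength group more than once
        # within any window of |S| = s consecutive periods
        for i in range(n):
            for start in range(n - s):
                window = [group[opponent[i, p]] for p in range(start, start + s)]
                if len(window) != len(set(window)):
                    return False
        return True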


4.1 Factorizations

First, we focus on graph theoretical aspects. An r-factor of a graph G = (V, E) is a set of edges E' ⊆ E such that each node i ∈ V is incident to exactly r edges in E'. An r-factorization of G is a partition of its edges into r-factors. For details we refer the reader to Wallis [85], for example. A near-1-factor of G is a set of edges E' ⊆ E such that each node but one is incident to exactly one e ∈ E'; this one node is incident to no e ∈ E'. A near-1-factorization of G is a partition of its edges into near-1-factors. An ordered r-factorization (near-1-factorization) is an r-factorization (near-1-factorization) where the r-factors (near-1-factors) are ordered.

4.1.1 Ordered 1-Factorization of K_{k,k}

The complete balanced bipartite graph K_{k,k}, k ∈ N, is well known to have an ordered 1-factorization F^bip, as proposed for example in de Werra [19]. Let the color classes (see Schrijver [78]) of K_{k,k} be defined by V_0 := {i | i ∈ {0, ..., k − 1}} and V_1 := {i | i ∈ {k, ..., 2k − 1}}. Then F^bip = (F^bip_0, ..., F^bip_{k−1}), where

F^bip_l = {[m, k + (m + l) mod k] | m ∈ {0, ..., k − 1}}   for all l ∈ {0, ..., k − 1}.

Here, [i, j], i, j ∈ V, denotes the edge incident to i and j. Note that the differences i − j and j − i are not equal to 1 in F^bip_0 unless k = 1. An example with k = 4 is given in figure 4.1.
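A minimal sketch of this construction; the function name and the verification for k = 4 are illustrative additions, the edge sets themselves follow the formula for F^bip_l above.

    def bipartite_one_factorization(k):
        # F^bip_l = { [m, k + (m + l) mod k] : m = 0, ..., k-1 } for l = 0, ..., k-1
        return [[(m, k + (m + l) % k) for m in range(k)] for l in range(k)]

    # quick check for k = 4 (the situation of figure 4.1): the factors are pairwise
    # disjoint, cover all 16 cross edges, and each factor matches every node exactly once
    factors = bipartite_one_factorization(4)
    edges = [e for f in factors for e in f]
    assert len(edges) == len(set(edges)) == 16
    for f in factors:
        assert sorted(v for e in f for v in e) == list(range(8))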

We emphasize that no edge [m, k + n] with m, n ∈ {0, ..., k/2 − 1} and no edge [m, k + n] with m, n ∈ {k/2, ..., k − 1} is contained in F^bip_{k/2} if k is even.

4.1.2 Ordered 1-Factorizations of K_k

It is well known that there is an ordered 1-factorization consisting of k − 1 1-factors of each K_k, k even. The canonical 1-factorization is defined in the following (all indices being taken modulo k − 1):


Fig. 4.1. 1-Factorization of K4,4 (panels F^bip_0 to F^bip_3)

F^c = (F^c_0, ..., F^c_{k−2}), where

F^c_l = {[l, k − 1]} ∪ {[l − m, l + m] | m ∈ {1, ..., k/2 − 1}}   for all l ∈ {0, ..., k − 2}.

If not stated otherwise we refer to F^c as the 1-factorization of K_k in the remainder. Note that we can force F^c_{k/2−1} to exclusively contain edges of the form [i, i + 1], i even, by a simple mapping σ : V → V. An illustration of the canonical 1-factorization of K_6 is given in figure 4.2.

Fig. 4.2. Canonical 1-Factorization of K6 (panels F^c_0 to F^c_4)
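A short sketch of the canonical 1-factorization F^c; the assertions merely re-check for K_6 what figure 4.2 shows, and the helper name is an assumption for illustration.

    def canonical_one_factorization(k):
        # F^c_l = { [l, k-1] } together with { [(l-m) mod (k-1), (l+m) mod (k-1)] : m = 1, ..., k/2 - 1 }
        assert k % 2 == 0
        return [[(l, k - 1)] + [((l - m) % (k - 1), (l + m) % (k - 1))
                                for m in range(1, k // 2)]
                for l in range(k - 1)]

    # K_6: 5 factors, 15 distinct edges in total, every node matched exactly once per factor
    fs = canonical_one_factorization(6)
    assert len({frozenset(e) for f in fs for e in f}) == 15
    for f in fs:
        assert sorted(v for e in f for v in e) == list(range(6))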

If k is odd we can construct a near-1-factorization nF^c consisting of k near-1-factors by simply letting the node matched with node k in F^c of K_{k+1} be unmatched in each F^c_l, l ∈ {0, ..., k − 1}:

nF^c = (nF^c_0, ..., nF^c_{k−1}), where

nF^c_l = {[l − m, l + m] | m ∈ {1, ..., ⌈k/2⌉ − 1}}   for all l ∈ {0, ..., k − 1}.

Since F^c and nF^c are a so-called starter-induced 1-factorization and near-1-factorization, respectively, each number in {1, ..., 2k − 2} can be found as a difference i − j or j − i of an edge [i, j] in each 1-factor in F^c and nF^c. Next, we introduce 1-factorizations and near-1-factorizations for K_k, k ∈ N, k > 3, where not each of those numbers is contained in each 1-factor.

The binary 1-factorization as proposed in de Werra [19] can be constructed for K_{2k} if k is even. Let V_0 := {i | i ∈ {0, ..., k − 1}} and V_1 := {i | i ∈ {k, ..., 2k − 1}} be a partition of V. Then, 1-factor F^{b,e}_l is set to F^bip_l as introduced in section 4.1.1 for each l ∈ {0, ..., k − 1}. Hence, each edge between V_0 and V_1 is contained in 1-factors F^{b,e}_0 to F^{b,e}_{k−1}. Additionally, 1-factors F^{b,e}_k to F^{b,e}_{2k−2} are constructed as 1-factorizations within V_0 and V_1, respectively. Then, the 1-factorization F^{b,e} is defined as follows:

F^{b,e} = (F^{b,e}_0, ..., F^{b,e}_{2k−2}), where

F^{b,e}_l = {[m, k + (m + l) mod k] | m ∈ {0, ..., k − 1}}   for all l ∈ {0, ..., k − 1},

F^{b,e}_l = {[l, k − 1]} ∪ {[l + k, 2k − 1]} ∪ {[(l − k − m) mod k, (l − k + m) mod k] | m ∈ {0, ..., k − 1}} ∪ {[k + (l − k − m) mod k, k + (l − k + m) mod k] | m ∈ {0, ..., k − 1}}   for all l ∈ {k, ..., 2k − 2}.

Since F^{b,e}_0 is based on F^bip_0, none of the differences i − j or j − i is equal to 1 if k > 1. Figure 4.3 represents the binary 1-factorization of K_8.

We extend the binary 1-factorization of K_{2k} to the case where k is odd. 1-factors F^{b,o}_0 to F^{b,o}_{k−2} are defined as 1-factors F^bip_0 to F^bip_{k−2} according to V_0 and V_1. 1-factors F^{b,o}_{k−1} to F^{b,o}_{2k−2} are composed of near-1-factors within V_0 and V_1, respectively. Note that one node of both V_0 and V_1 is unmatched; these nodes are matched with each other and form the edges between V_0 and V_1 missing from F^{b,o}_0 to F^{b,o}_{k−2}.


Fig. 4.3. Binary 1-Factorization of K8 (panels F^{b,e}_0 to F^{b,e}_6)

F^{b,o} = (F^{b,o}_0, ..., F^{b,o}_{2k−2}), where

F^{b,o}_l = {[m, k + (m + l) mod k] | m ∈ {0, ..., k − 1}}   for all l ∈ {0, ..., k − 2},

F^{b,o}_l = {[l − k + 1, k + (l − k) mod k]} ∪ {[(l − k + 1 − m) mod k, (l − k + 1 + m) mod k] | m ∈ {0, ..., k − 1}} ∪ {[k + (l − k − m) mod k, k + (l − k + m) mod k] | m ∈ {0, ..., k − 1}}   for all l ∈ {k − 1, ..., 2k − 2}.

Theorem 4.1. F^{b,o} is a 1-factorization of K_{2k}, k odd.

Proof. We show that F^{b,o}_l is a 1-factor for each l ∈ {0, ..., 2k − 2} and that F^{b,o} is a partition of the edges of K_{2k}. Obviously, F^{b,o}_l is a 1-factor for l ∈ {0, ..., k − 2} since it is defined by a 1-factor according to F^bip considering a partition of V. F^{b,o}_l, l ∈ {k − 1, ..., 2k − 2}, is defined by near-1-factors of both V_0 and V_1.


Therefore, all nodes but l − k + 1 and k + (l − k) mod (k − 1) are matched implicitly in F^{b,o}_l; these two nodes are matched explicitly. Each edge between V_0 and V_1 but [m, (m − 1) mod (k − 1)], m ∈ {0, ..., k − 1}, is contained exactly once in the 1-factors F^{b,o}_l, l ∈ {0, ..., k − 2}. The edges [m, (m − 1) mod (k − 1)], m ∈ {0, ..., k − 1}, are added to both near-1-factors of V_0 and V_1 in F^{b,o}_l, l ∈ {k − 1, ..., 2k − 2}. Edges within V_0 and V_1, respectively, are contained exactly once in F^{b,o}_l, l ∈ {k − 1, ..., 2k − 2}, by definition of near-1-factors.

Again, F^{b,o}_0 is based on F^bip_0 and, therefore, none of the differences i − j or j − i is equal to 1 if k > 1. Figure 4.4 illustrates the binary 1-factorization of K_{2k} with k = 5. We can construct near-1-factorizations according to F^{b,e} and F^{b,o} of K_{4k−1} and K_{4k+1} by simply adding a dummy node, constructing the corresponding 1-factorization of K_{4k} and K_{4k+2}, and considering each node matched with the dummy node as unmatched. Again, no difference in the first near-1-factor is equal to 1 if and only if k ∈ N, k > 1.

4.1.3 Ordered Symmetric 2-Factorization of 2K_{2k+1}

A 2-factor of a graph G = (V, E) is a set of edges E' ⊆ E such that each node i ∈ V is incident to exactly two edges e, e' ∈ E', e ≠ e'. A 2-factorization of G is a partition of its edges into 2-factors (see Franek and Rosa [39] for details). An ordered 2-factorization is a 2-factorization having its 2-factors ordered. The complete multi-graph 2K_n is a graph on |V| = n nodes having exactly two edges incident with each pair of nodes. K_n, n odd, is known to have a 2-factorization as outlined in Burling and Heinrich [15]. Hence, 2K_n, n odd, has one as well. An oriented 2-factorization is a 2-factorization where each edge e ∈ E is given an orientation.

Definition 4.3. A symmetric 2-factorization of 2K_k, k odd, is an oriented 2-factorization where edges corresponding to the same pair of nodes are given opposite orientations.

We construct a symmetric 2-factorization 2F of 2K_k, k odd, as follows:


Fig. 4.4. Binary 1-Factorization of K10 (panels F^{b,o}_0 to F^{b,o}_8)

2F = (2F_0, ..., 2F_{k−2}), where

2F_l = {[m, (m + 1 + l) mod k]_0 | m ∈ {0, ..., k − 1}}   for all l ∈ {0, ..., (k − 1)/2 − 1},

2F_{l+(k−1)/2} = {[(m + 1 + l) mod k, m]_1 | m ∈ {0, ..., k − 1}}   for all l ∈ {0, ..., (k − 1)/2 − 1}.


Here, [i, j]_k, i, j ∈ V, k ∈ {0, 1}, identifies the edge between nodes i and j which has index k and is oriented i → j.

Theorem 4.2. 2F is a symmetric 2-factorization of 2K_k, k odd.

Proof. We show that 2F_0, ..., 2F_{(k−1)/2 − 1} forms a 2-factorization of G' := (V, {[i, j]_0 | i, j ∈ V, i < j}). Obviously, each node has degree equal to two in 2-factor 2F_l unless

(m + 1 + l) mod k = (m − l − 1) mod k  ⇔  l = k/2 − 1

holds, which is impossible since k is odd and l is integer. For each pair i, j ∈ V, i < j, either [i, j]_0 is contained in 2F_{j−i−1} if j − i ≤ (k − 1)/2, or [j, i]_0 is contained in 2F_{k−1−(j−i)} if j − i > (k − 1)/2. Consequently, 2F_{(k−1)/2}, ..., 2F_{k−2} forms a 2-factorization of G'' := (V, {[i, j]_1 | i, j ∈ V, i < j}). Obviously, both edges incident to a pair i, j ∈ V have opposite orientations by definition.

Note that each pair i, j ∈ V being matched in 2F_0 has difference |i − j| = 1, as can be observed in figure 4.5. Furthermore, note that each node i is incident to exactly one ingoing edge [j, i]_k, j ∈ V, k ∈ {0, 1}, and to exactly one outgoing edge [i, j]_k, j ∈ V, k ∈ {0, 1}, in each 2-factor 2F_l, l ∈ {0, ..., k − 2}. Hence, each 2-factor consists of oriented circles.

Fig. 4.5. Symmetric 2-factorization of 2K5 (panels 2F_0 to 2F_3)
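A sketch of the symmetric 2-factorization 2F defined above; the encoding of an oriented edge as a triple (tail, head, copy) is an assumption for illustration, and the checks reproduce for 2K_5 the properties noted in the proof of theorem 4.2.

    def symmetric_two_factorization(k):
        # 2F_l uses the copy-0 edges oriented m -> (m+1+l) mod k,
        # 2F_{l+(k-1)/2} the copy-1 edges with the opposite orientation
        assert k % 2 == 1
        half = (k - 1) // 2
        copy0 = [[(m, (m + 1 + l) % k, 0) for m in range(k)] for l in range(half)]
        copy1 = [[((m + 1 + l) % k, m, 1) for m in range(k)] for l in range(half)]
        return copy0 + copy1

    # checks for 2K_5: every node has in- and out-degree one in each 2-factor, and the
    # two copies of every edge carry opposite orientations (definition 4.3)
    fs = symmetric_two_factorization(5)
    for f in fs:
        assert sorted(t for t, h, c in f) == sorted(h for t, h, c in f) == list(range(5))
    oriented = {(t, h) for f in fs for t, h, c in f}
    assert all((h, t) in oriented for t, h in oriented)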


4.2 Group-Balanced Single Round Robin Tournaments

In this section we provide several characteristics of group-balanced single RRTs. Additionally, we give a necessary and sufficient condition on n and |S| such that a corresponding group-balanced single RRT exists.

Remark 4.1. The set of group-balanced single RRTs having n teams and |S| strength groups is a subset of the set of group-changing single RRTs having n teams and |S| strength groups. Hence, given n and |S| such that there is no corresponding group-changing single RRT, there is no corresponding group-balanced single RRT either.

Theorem 4.3. In a group-balanced single RRT the difference of the two periods p_j and p_{j'} where team i plays against teams j and j' with S(j) = S(j'), respectively, is |p_j − p_{j'}| = k|S| with k ∈ {0, ..., n/|S| − 1}.

Proof. Suppose team i plays against teams j and j' with S(j) = S(j') in periods p_j and p_{j'} with p_j < p_{j'} and p_{j'} − p_j = k|S| + l with k ∈ {0, ..., n/|S| − 1} and l ∈ {1, ..., |S| − 1}. Then one of the following two cases holds.

I  There are fewer than (p_{j'} − p_j)/|S| − 1 matches of team i against teams of S_{S(j)} in periods p_j + 1, ..., p_{j'} − 1. Then there is at least one pair (p, p') with p, p' ∈ {p_j, ..., p_{j'}}, p < p', such that team i plays against teams in S_{S(j)} in p and p', team i does not play against any team in S_{S(j)} in any period p'' with p'' ∈ {p + 1, ..., p' − 1}, and p' − p > |S|. Hence, team i plays more than once against teams of at least one strength group S_k, k ≠ S(j), in periods p + 1, ..., p + |S|.

II There are more than (p_{j'} − p_j)/|S| − 1 matches of team i against teams of S_{S(j)} in periods p_j + 1, ..., p_{j'} − 1. Then there is at least one pair (p, p') with p, p' ∈ {p_j, ..., p_{j'}}, p < p', such that team i plays against teams in S_{S(j)} in p and p' and p' − p < |S|.

In both cases the single RRT is not group-balanced.

II There are more than j|S| − 1 matches of team i against teams of SS(j) in periods pj + 1, . . . , pj − 1. Then, there is at least one pair   (p, p) with p, p ∈ pj , . . . , pj , p < p such that team i plays against teams in S(j) in p and p and p − p < |S|. In both cases the single RRT is not group-balanced.

 

Theorem 4.4. In a group-balanced single RRT each match of team i against a team j with S(i) = S(j) is carried out in a period p = k|S| − 1, k ∈ {1, ..., n/|S| − 1}.


Proof. According to theorem 4.3 the first period p containing a match between team i and another team of strength group S_{S(i)} determines the set of periods containing all matches between team i and teams of strength group S_{S(i)}. If p ≠ |S| − 1 then one of the following two cases holds.

I  If p ∈ {|S|, ..., n − 2} then team i plays twice against a team of at least one strength group S_k, k ≠ S(i), in periods 0, ..., |S| − 1.

II If p ∈ {0, ..., |S| − 2} then team i plays twice against a team of at least one strength group S_k, k ≠ S(i), in periods n − |S| − 1, ..., n − 2.

In both cases the single RRT is not group-balanced.

Theorem 4.5. There is no group-balanced single RRT where n/|S| is odd.

Proof. According to theorem 4.4, in each period p with p = k|S| − 1, k ∈ {1, ..., n/|S| − 1}, only matches between teams i and j with S(i) = S(j) are carried out. If the number of teams n/|S| in a strength group S_k is odd then no more than n/|S| − 1 teams of S_k can play in those periods.

Theorem 4.6. In a group-balanced single RRT S(o_{i,p}) = S(o_{j,p}) holds for each period p and for each pair of teams (i, j) with S(i) = S(j).

Proof. Assume there are teams i and j, S(i) = S(j), and a period p such that S(o_{i,p}) ≠ S(o_{j,p}). Then, exactly one team of S_{S(o_{i,p})} plays against team j in a period p' with max{0, p − |S| + 1} ≤ p' ≤ p + max{0, |S| − p − 1} and p' ≠ p, according to theorem 4.3. Obviously, |p − p'| < |S| and, hence, |p − p''| ≠ k|S|, k ∈ N, holds for each period p'' where j plays against a team of S_{S(o_{i,p})}, according to theorem 4.3. Then, team o_{i,p} plays against i and j in two periods having a distance not equal to k|S| for any k ∈ N, which is infeasible since S(i) = S(j).

Definition 4.4. A pairing of strength groups is a mapping σ : S → S such that σ(σ(S_k)) = S_k for each k ∈ {0, ..., |S| − 1}.

Theorem 4.7. There is no group-balanced single RRT where |S| is odd.

Proof. According to theorem 4.6 there is a pairing of strength groups σ_p in each period p such that for two strength groups S_k, S_l with σ_p(S_k) = S_l each team in S_k plays against a team in S_l in p. According to theorem 4.4, σ_p(S_k) ≠ S_k in each period p ≠ k|S| − 1, k ∈ {1, ..., n/|S| − 1}. Then, no such σ_p exists if |S| is odd.


In order to construct a single RRT we have to arrange matches between each pair of teams and, therefore, pairings of strength groups such that each strength group is paired with each other strength group. This can be represented as a 1-factorization of the complete graph K_{|S|} where nodes correspond to strength groups and a 1-factor corresponds to a pairing. Two strength groups S_k, S_l, k ≠ l, have to be paired exactly as many times as needed to let each team of S_k play against each team of S_l. This number is known to be n/|S| from the cardinality of a 1-factorization of the complete bipartite graph K_{n/|S|, n/|S|} introduced in section 4.1.1. Furthermore, σ_p(S_l) = S_l for each p = k|S| − 1, k ∈ {1, ..., n/|S| − 1}, l ∈ {0, ..., |S| − 1} (since only matches between teams of identical strength groups can be carried out in these periods). Accordingly, the construction scheme proposed in the following has two stages. In the first stage a schedule is constructed which prescribes the teams of a specific strength group S_k to play against teams of another strength group S_l, l ≠ k, or against teams of the same strength group S_k, respectively. The result, namely a strength group schedule, is exemplarily represented in table 4.1.

Table 4.1. Strength group schedule for n = 16, |S| = 4
k\p  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14
0    3  2  1  0  3  2  1  0  3  2  1  0  3  2  1
1    2  3  0  1  2  3  0  1  2  3  0  1  2  3  0
2    1  0  3  2  1  0  3  2  1  0  3  2  1  0  3
3    0  1  2  3  0  1  2  3  0  1  2  3  0  1  2

The rows correspond to strength groups S_0 to S_3. For each strength group S_k, k ∈ {0, ..., 3}, the strength group S_l, l ∈ {0, ..., 3}, being paired with S_k in period p is given in the row corresponding to S_k and the column corresponding to p. In the second stage we arrange matches between teams according to a given strength group schedule and to the 1-factorizations introduced in section 4.1.

Theorem 4.8. A group-balanced single RRT can be arranged if and only if |S| is even and n/|S| is even.

Proof. We show that a group-balanced single RRT can be arranged if |S| is even and if n/|S| is even. Then, considering theorems 4.5 and 4.7, theorem 4.8 follows.


We first construct the pairing σ_p of strength groups for each period p. According to theorem 4.4, σ_p(S_l) = S_l for each p = k|S| − 1, k ∈ {1, ..., n/|S| − 1}, l ∈ {0, ..., |S| − 1}. Additionally, we arrange pairings of strength groups according to a 1-factorization of K_{|S|} in periods 0 to |S| − 2. This is possible if and only if |S| is even. Naturally, the 1-factorization structure means each strength group is paired with each other strength group exactly once and each strength group is contained in each pairing σ_p, p ∈ {0, ..., |S| − 2}, exactly once. Next, we set σ_{p+k|S|} = σ_p for each p ∈ {0, ..., |S| − 2} and k ∈ {1, ..., n/|S| − 1}. Hence, we obtain exactly n/|S| pairings containing a specific pair S_k, S_l, k ≠ l, of strength groups.

For each pair of strength groups S_k, S_l, k ≠ l, we arrange all matches between teams of S_k and S_l in a way representable as a 1-factorization of K_{n/|S|, n/|S|} as shown in section 4.1.1. Next, we arrange all group internal matches in the periods p with p = k|S| − 1, k ∈ {1, ..., n/|S| − 1}, by arranging a 1-factorization of the complete graph K_{n/|S|} corresponding to each strength group S_k where nodes represent the teams of S_k. This is possible if and only if n/|S| is even (see section 4.1.2 for details).

The result is a group-balanced single RRT since:

• Each strength group is contained in each σ_p, p ∈ {0, ..., n − 2}. Due to the 1-factorization structure according to the teams of one strength group and the teams of two paired strength groups, respectively, each team plays exactly once per period.

• Each pair of teams meets exactly once due to the 1-factorization structure according to the teams of one strength group and the teams of two paired strength groups, respectively.

• No team plays more than once against teams of the same strength group within |S| consecutive periods since identical pairings have a distance of k|S|, k ∈ {0, ..., n/|S| − 1}, periods by construction.

Hence, theorem 4.8 holds.
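The two-stage construction from this proof can be written down compactly. The following sketch is an illustration only: the team labelling (the teams of S_k are k·(n/|S|), ..., (k+1)·(n/|S|) − 1) and all function names are assumptions, not notation from the text.

    def group_balanced_rrt(n, s):
        # requires |S| = s even and m = n/s even; team i belongs to strength group i // m
        m = n // s
        assert s % 2 == 0 and m % 2 == 0 and n == s * m

        def canonical(k):        # 1-factorization of K_k, k even (section 4.1.2)
            return [[(l, k - 1)] + [((l - x) % (k - 1), (l + x) % (k - 1))
                                    for x in range(1, k // 2)] for l in range(k - 1)]

        pairings = canonical(s)                 # sigma_0, ..., sigma_{s-2}
        schedule = [[] for _ in range(n - 1)]
        met = {}                                # counts how often two groups have been paired
        for p in range(n - 1):
            if (p + 1) % s == 0:                # p = k*s - 1: group internal matches
                r = (p + 1) // s - 1
                for g in range(s):
                    schedule[p] += [(g * m + a, g * m + b) for a, b in canonical(m)[r]]
            else:                               # cross-group period, repeating pairing sigma_{p mod s}
                for g, h in pairings[p % s]:
                    r = met.get((g, h), 0)
                    met[(g, h)] = r + 1
                    # r-th 1-factor of K_{m,m} between the team sets of S_g and S_h
                    schedule[p] += [(g * m + a, h * m + (a + r) % m) for a in range(m)]
        return schedule

For n = 16 and s = 4 the pairings per period coincide with table 4.1; repeating the 1-factorization of K_{|S|} with period |S| is exactly what yields the group-balanced structure asserted in the three bullet points above.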

 

4.3 Group-Changing Single Round Robin Tournaments

In this section we discuss the cases of n and |S| where no group-balanced single RRT exists and give construction schemes for group-changing single RRTs. We adopt the basic idea of pairings of strength groups from section 4.2. However, we have to extend this concept in order to allow fairness according to changing strength groups for more cases than those covered in section 4.2.

Remark 4.2. If |S| is even and n/|S| is even, a group-changing single RRT can be arranged by the construction proposed for group-balanced single RRTs.

Remark 4.3. If |S| = 2 a group-changing single RRT is group-balanced as well and, therefore, no group-changing single RRT with |S| = 2 exists if n2 is odd. Lemma 4.1. If |S| = 4k + 3, k ∈ N+ , a group-changing single RRT can be arranged. Proof. We construct a group-changing single RRT given n and |S| = 4k + 3, k ∈ N+ . First, we construct a binary near-1-factorization according to F b,e as introduced in section 4.1.2 on the complete graph K|S| . We interpret each near-1-factor as a pairing σ of strength groups (see definition 4.4). Additionally, we define σ(Sk ) = Sk if the node not pairing resulting matched in the near-1-factor corresponds to Sk . The   b,e n −1 . from Fl is assigned to period p = k|S| − 1 − l, k ∈ 1, . . . , |S| Thus, periods 0 to n − |S| − 1 are assigned to pairings containing each n − 1 times. pair (Sk , Sl ), k, l ∈ {0, . . . , |S| − 1}, exactly |S| We arrange matches in periods 0 to n − |S| − 1 according to the pairing σp assigned to p, i.e. a match of i against j can not be carried out in period p if S(i) is not paired with S(j) in p. For each pair of strength groups Sk , Sl we arrange matches according to the 1-factorization of n n n as given in section 4.1.1. Since we can arrange only K |S| , |S| |S| − 1 out for each pair of strength groups. 1-factors we leave F bip n 2|S|

n −1 Furthermore, each strength group is paired with itself exactly |S| times. Hence, we can arrange all matches between teams of the same n (see section 4.1.2). strength group according to a 1-factorization of K |S| n n of K |S| corFinally, all matches contained in 1-factors F bip n , |S| 2|S|

responding to each pair of strength groups have to be arranged in periods n − |S| to n − 2. We construct a symmetric 2-factorization of 2K|S| according to section 4.1.3 and assign factor 2Fp to period n − |S| + p, p ∈ {0, . . . , |S| − 2}. If [i, j]k is contained in 2-factor 2Fp , n teams p ∈ {0, . . . , |S| − 2}, we arrange matches between the first 2|S|


n n of Si and the last 2|S| teams of Sj in period n − |S| + p. Note that |S| is even since |S| is odd while n is even. Now, we have a single RRT with changing opponents’ strengths:

• Each team plays exactly once per period. For each period p, p ∈ {0, . . . , n − |S| − 1}, this is obvious due to the 1-factor structure within pairs of strength groups. In each remaining period p, p ∈ {n − |S|, . . . , n − 2}, each team plays exactly once since each 2factor is composed of oriented circles (see section 4.1.3). Hence, for each strength group the outgoing arc covers the first half of teams while the ingoing arc covers the second half. • Each pair of teams meets exactly once. Obviously, no pair of teams plays twice in periods p, p ∈ {0, . . . , n − |S| − 1}, due to the 1factorization structure between each pair of strength groups and bewithin single strength groups, respectively. The 1-factors F bip n 2|S|

tween each pair of strength groups missing from periods 0 to n−|S|−1 are exactly covered by both arcs between a pair of strength groups in 2K|S| . • No team plays against teams of the same strength group in two consecutive periods. In each time window [p, p + |S| − 1], p ∈ {0, . . . , n − 2|S|}, each team plays exactly once against each strength group due to repeating sequence of pairings. In periods p, p ∈ {n − |S|, . . . , n − 2}, each strength group is paired twice with each other strength group. However, corresponding arcs have opposite orientation and, hence, the set of teams involved in both pairs are disjoint. Therefore, in periods p, p ∈ {n − |S|, . . . , n − 2} each team plays exactly once against each other strength group and, hence, there is no violation of changing opponents’ strengths in periods p and p + 1, p ∈ {0, . . . , n − |S| − 2} and p ∈ {n − |S|, . . . , n − 2}, respectively. Note that 1-factor F0b,e chosen for p = n−|S|−1 does not contain any pair (Si , S(i+1) mod |S| ) (see section 4.1.2) if |S| > 3. 2F0 chosen for period p = n − |S| exclusively contains pairs of this form. Therefore, no team can play against the same strength group in periods n − |S| − 1 and n − |S|. Hence, lemma 4.1 holds.

 

Lemma 4.2. If |S| = 4k + 1, k ∈ N+, a group-changing single RRT can be arranged.

Proof. The proof is analogous to the proof of lemma 4.1. The only difference is employing the binary 1-factorization F^{b,o} of K_{2k}, k odd, given in section 4.1.2 instead of the binary 1-factorization F^{b,e} in order to establish the pairings for periods 0 to n − |S| − 1. Each conclusion follows as above.

Theorem 4.9. According to lemmas 4.1 and 4.2 a group-changing single RRT can be arranged if |S| > 3 and odd.

Theorem 4.10. If n/|S| is odd and |S| > 2 a group-changing single RRT can be arranged.

n odd we construct a group-changing Proof. Given n and |S| > 2 with |S| single RRT. First, we construct an ordered 1-factorization of K|S| and associate 1-factors with pairings such that we obtain pairing σ |S| −1 2   having Sk paired with Sk+1 for each k ∈ 2l | l ∈ 0, . . . , |S| − 1 . 2   n −2 , Then we assign σp to periods p + k|S| + 1 for k ∈ 0, . . . , |S| p ∈ {0, . . . , |S| − 2}. ! n − 1 |S| + 1 for each Additionally, we assign σp to period p + |S|  !  n  +  − 2 and we assign σ to period p − 1 |S| p ∈ 0, . . . , |S| p 2 |S|   for each p ∈ |S| 2 , . . . , |S| − 2 . Hence, each pair of strength groups n times. We can except those contained in σ |S| −1 is arranged exactly |S| 2

construct 1-factorization F bip according to section 4.1.1 for the teams contained in each of those pairs of strength groups. Pairing σ |S| −1 is contained exactly

n |S|

2

− 1 times. Therefore, we can arrange all 1-factors

bip . Consequently, all matches between pairs of strength of F bip but F |S| 2

−1

n + k, groups are arranged except between teams i and j, i = S(i) |S|   n n n j = (S(i) + 1) |S| + (k − 1) mod |S| , S(i) even, k ∈ 0, . . . , |S| − 1 .   n − 1 , we To the remaining periods p, p = k|S|, k ∈ 0, . . . , |S| assign the pairing having each strength group paired with itself. Since n n |S| is odd we can construct a near-1-factorization with |S| near-1-factors for the set of teams in each strength group according to section 4.1.2. Naturally, in each strength group each team is not contained in a near1-factor exactly once. Team i ∈ Sk , k even,  not contained  in the nearn 1-factor assigned to period p = l|S|, l ∈ 0, . . . , |S| − 1 is arranged to play against team j ∈ Sk+1 not contained in the near-1-factor assigned to period p. Formally, we arrange 1-factors Flb,o , l ∈ {k − 1, . . . , 2k − 2}, according to section 4.1.2 for each pair (Sm , Sm+1 ), m even. Here, Sm


and Sm+1 correspond to V0 := {0, . . . , k − 1} and V1 := {k, . . . , 2k − 1} of K2k , k odd, respectively. This results in a group-changing single RRT: • Each team plays   exactly once per period. For period p, p = k|S|, k ∈ n 0, . . . , |S| − 1 , this is obvious due to the 1-factor structure within   n −1 , pairs of strength groups. In periods p, p = k|S|, k ∈ 0, . . . , |S| each team but one per strength group plays exactly once due to the near-1-factor structure within each strength group. Team i, S(i) even, not playing against an other team of SS(i) in period p, p = k|S|,   n − 1 , plays against the team j ∈ SS(i)+1 not playing k ∈ 0, . . . , |S| n odd) against an other team of SS(i)+1 . Since |S| is even (n even, |S| each of these teams plays exactly once. • Each pair of teams meets exactly once. This is obvious for matches between teams i and j, S(i) < S(j), (S(i) odd ∨ S(i) + 1 = S(j)), of different strength groups due to the 1-factorization structure between each pair of strength groups. Furthermore, matches between teams i and j, S(i) = S(j), are carried out exactly once due to the near-1-factorization structure within strength groups. Matches of teams i and j (S(i) even ∧S(i)+ 1 = S(j))are composed of Fkbip , |S| 2

k =

− 1, in periods p, p ∈

|S| 2

n + l|S| | l ∈ 0, . . . , |S| −2

bip F |S|

, and

arranged between pairs of unmatched nodes in near-1-factors    n −1 . in periods p, p ∈ l|S| | l ∈ 0, . . . , |S| • No team plays against teams of the same strength group in two consecutive periods. This is obvious for pairs Sk , Sl , k < l, (k odd ∨ k + 1 = l) of strength groups since those are arranged in periods having distance no less than |S| − 1 by construction. |S| − 1 > 1 for |S| > 2. Matches within pairs Sk , Sl , k < l (k even ∧ k + 1 = l), of strength groups are arranged in periods p ∈ P in with    |S| n in + l|S| | l ∈ 0, . . . , −2 ∪ P := 2 |S|    n −1 l|S| | l ∈ 0, . . . , |S|    2n |S| | l ∈ 0, . . . , −2 . = l 2 |S| 2

−1

Obviously, pairwise distance is no less than 2 if |S| > 2.


75

 

Remark 4.4. By enumeration: If n = 6 and |S| = 3 no group-changing single RRT exists but if n ∈ {12, 18} and |S| = 3 a group-changing single RRT can be arranged. We conjecture that a group-changing single RRT exists for |S| = 3 and each n = 6(k + 1), k ∈ N+ .

4.4 Complexity

As outlined in section 2.1 there are several applications for associating cost c_{i,j,p} with each match of team i at home against team j in period p. For the sake of convenience we repeat the definition of the single RRT problem given in section 2.1.

Definition 2.3. Given a set T, |T| = n, and a set P, |P| = n − 1, each triple (i, j, p) ∈ T × T × P, i ≠ j, represents a match of team i against team j at i's home in period p. Costs c_{i,j,p} are given for each match as well. A feasible solution to the single RRT problem corresponds to a set of n(n−1)/2 triples such that (i) for each pair (i, j) ∈ T × T, i < j, exactly one triple of the form (i, j, p) or (j, i, p) with p ∈ P is chosen and such that (ii) for each pair (i, p) ∈ T × P exactly one triple of the form (i, j, p) or (j, i, p) with j ∈ T \ {i} is chosen. The problem is to find a feasible solution having the minimum sum of the chosen triples' cost.

The single RRT problem has been proven to be NP-hard independently in Briskorn et al. [12] and in Easton [31]. We introduce two minimum cost problems corresponding to the strength group requirements as introduced in section 3.2.2 and definitions 4.1 and 4.2.

Definition 4.5. Given a set T, |T| = n even, of teams, a set of periods P, |P| = |T| − 1, a number |S| of strength groups, and cost c_{i,j,p} associated with each match of team i ∈ T at home against team j ∈ T, j ≠ i, in period p ∈ P, the group-balanced single RRT problem is to find the group-balanced single RRT having the minimum sum of arranged matches' cost.

Definition 4.6. Given a set T, |T| = n even, of teams, a set of periods P, |P| = |T| − 1, a number |S| of strength groups, and cost c_{i,j,p} associated with each match of team i ∈ T at home against team j ∈ T, j ≠ i, in period p ∈ P, the group-changing single RRT problem is to find the group-changing single RRT having the minimum sum of arranged matches' cost.

Theorem 4.11. The group-balanced single RRT problem is NP-hard even if |S| is fixed.

We prove theorem 4.11 by reduction from the single RRT problem.

Proof. Given a single RRT problem by a set of teams T', |T'| = n', a set of periods P', |P'| = n' − 1, and cost c'_{i',j',p'}, i', j' ∈ T', i' ≠ j', p' ∈ P', we construct a group-balanced single RRT problem with n teams and |S| strength groups as follows. Let n = n'|S|. We follow the idea of pairings of strength groups in each period given in section 4.2. We set cost

c_{i,j,p} = M            if S(i) = S(j) = 0 and p ≠ k|S| − 1 for all k ∈ {1, ..., n/|S| − 1},
c_{i,j,p} = c'_{i,j,p'}  if S(i) = S(j) = 0 and p = (p' + 1)|S| − 1, p' ∈ P',
c_{i,j,p} = 0            otherwise,

with M = Σ_{i'∈T'} Σ_{j'∈T', j'≠i'} Σ_{p'∈P'} c'_{i',j',p'}.

Obviously, a group-balanced single RRT having cost less than M is provided by the construction scheme given in the proof of theorem 4.8. Each solution having cost less than M provides a single RRT s of the teams of S_0 in the periods p = k|S| − 1, k ∈ {1, ..., n/|S| − 1}. Next, it can easily be seen by contradiction that s is optimal for the original minimum cost single RRT problem: if there were a single RRT s' having less cost than s according to the single RRT problem, we could exchange s for s' in the group-balanced single RRT. Trivially, this would lead to a group-balanced single RRT having less cost.


on the number of strength groups |S|). Note that S0 is matched with n itself in period |S| 2 − 1 + k|S| for each k ∈ 0, . . . , |S| − 2. We set cost    M if S(i) = S(j) = 0, p = |S|  2 − 1 + p |S|,    n   −2 , p ∈ 0, . . . , |S|    ci,j,p = ci,j,p if S(i) = S(j) = 0, p = |S| 2 − 1 + p |S|,     n  −2 , p ∈ 0, . . . , |S|     0 else,    with M = i ∈T  j  ∈T  ,j  =i p ∈P  ci ,j  ,p . Obviously, a group-changing single RRT having cost less than M is provided by the construction scheme given in the proof of lemmas 4.1 and 4.2, respectively. Each solution having cost less than M provides |S|  a single  RRT s ofteams of S0 in periods p = 2 − 1 + p |S| with

n −2 . p ∈ 0, . . . , |S| Obviously, s is optimal for the original single RRT problem.

 

Lemma 4.4. The group-changing single RRT problem is NP-hard even if |S| is even and fixed. Again, we give a reduction from the single RRT problem. The idea is quite the same as for theorem 4.11. Proof. Since the number of teams n of the single RRT problem is restricted to even values and |S| is given even we can reduce the single RRT problem exactly as in the proof of theorem 4.11.   Theorem 4.12. According to lemmas 4.3 and 4.4 the group-changing single RRT problem is NP-hard even if |S| = 3 is fixed.

4.5 Summary

In this chapter we pick up a common idea to achieve fairness among teams competing in a single RRT. Although strength groups have already been proposed in several works, there is no answer to the question for which values of n and |S| fair schedules can be constructed. We investigate two degrees of fairness: group-changing single RRTs and group-balanced single RRTs.

We prove a necessary and sufficient condition on n and |S| to allow a group-balanced single RRT. Furthermore, we show how to decide for almost all cases whether a group-changing single RRT is possible or not. The remaining cases are n = 6k, k ∈ N, and |S| = 3, and we strongly conjecture a group-changing single RRT to be possible if and only if k > 1.

5 Home-Away-Pattern Based Branching Schemes

5.1 Motivation

This chapter focuses on branching schemes in line with decomposition schemes following the first-break-then-schedule idea (see section 2.4.2 for details). We consider the LP relaxation of the problem at hand, solve it to optimality, and branch on the venue of team i in period p. Accordingly, we construct a branching tree where a path from the root node to a leaf either can be pruned or represents a full HAP set.

Branching on a subset of variables is a well known generalization of {0, 1}-branching on binary variables. Here, we choose a subset Q_{i,p} of variables such that the venue of team i in p is fixed by forcing Σ_{x∈Q_{i,p}} x = 1 or Σ_{x∈Q_{i,p}} x = 0, respectively. Hence, Q_{i,p} := {x_{i,j,p} | j ∈ T \ {i}}. Deciding the venue of a specific team in a specific period is part of almost all variants of RRTs presented in chapter 2. Furthermore, solutions to the LP relaxations of most problems presented in chapters 2 and 3 do not imply a consistent decision about venues (which means there probably is an i ∈ T and a p ∈ P such that Σ_{x∈Q_{i,p}} x ∉ {0, 1}). Therefore, the branching idea is applicable as well as reasonable for almost all RRT problems.

Let Q̄_{i,p} := {x_{j,i,p} | j ∈ T \ {i}}. Then, Q_{i,p} + Q̄_{i,p} = 1 due to constraint (2.8) and, hence, Q_{i,p} = 1 ⇔ Q̄_{i,p} = 0. Consequently, we implement each branching step by fixing the subsets of variables Q_{i,p} and Q̄_{i,p}, respectively, to 0.

In section 5.2 we treat the general case where no restriction concerning venues is given. Section 5.3 considers the case where we require the minimum number of breaks. In both cases defining a HAP set by branching cannot guarantee binary solutions to the problem's LP relaxation. Table 5.1 represents a fractional solution together with the unique corresponding HAP set (having the minimum number of breaks). Each match in periods 1 and 2 has variable value 0.5, for example, while the matches in period 3 have variable value 1. However, we refrain from developing a full branching scheme here in order to emphasize the power of the branching idea as a guiding scheme for the first levels of a branching tree. Consequently, if a node providing a full HAP set cannot be pruned we solve the corresponding IP problem using the standard solver Cplex.

Table 5.1. Fractional Solution Without Candidate for HAP set branching

Fractional solution:
Period  Matches (variable value)
1       1-2 (0.5), 2-1 (0.5), 3-4 (0.5), 4-3 (0.5)
2       1-4 (0.5), 4-1 (0.5), 2-3 (0.5), 3-2 (0.5)
3       2-4 (1), 1-3 (1)

Corresponding HAP set:
Team    Period 1  Period 2  Period 3
1       0         1         0
2       1         0         0
3       0         1         1
4       1         0         1
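For illustration, the variable subsets Q_{i,p} and Q̄_{i,p} used for branching can be written down as follows; this is a sketch, and the tuple encoding of the variables x_{i,j,p} is an assumption.

    def Q(i, p, teams):
        # Q_{i,p}: variables x_{i,j,p}, i.e. matches of team i at home in period p
        return {(i, j, p) for j in teams if j != i}

    def Q_bar(i, p, teams):
        # Q-bar_{i,p}: variables x_{j,i,p}, i.e. matches of team i away in period p
        return {(j, i, p) for j in teams if j != i}

    # one branching step on the venue of team i in period p: in the "home" child every
    # variable in Q_bar(i, p, teams) is fixed to 0, in the "away" child every variable in Q(i, p, teams)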

Consider an IP problem corresponding to a HAP set. In section 2.4.2 we conjecture the corresponding LP relaxation not to be able to provide a larger number of matches than the IP problem does. If this conjecture holds, the LP relaxation corresponding to an infeasible HAP set is proven infeasible and, consequently, the node is pruned before the IP problem is solved. Then, no IP problem corresponding to an infeasible HAP set has to be solved.

5.2 General Home-Away-Pattern Sets

First, we would like to emphasize the importance of recognizing infeasible HAP sets in order to keep the computational effort low. Clearly, infeasible HAP sets correspond to infeasible nodes and subtrees, respectively, of the branching tree. Several necessary conditions for HAP sets to be feasible are considered by our branching scheme; this is outlined in section 5.2.1. In the following we specify several approaches by proposing alternative strategies to choose the next branching candidate as well as node order strategies in sections 5.2.2 and 5.2.3.


5.2.1 Achieving Feasible Home-Away-Pattern Sets

As outlined in section 2.4.2, HAP sets cannot easily be proven either feasible or infeasible; there is no simple characterization of feasible HAP sets. Therefore, avoiding infeasible HAP sets is the challenging part here. We show below that our branching scheme constructs HAP sets fulfilling two necessary conditions. We define the partial HAP as a generalization of the HAP.

Definition 5.1. A partial HAP of team i is a string containing 0 at slot p if i plays at home in period p, containing 1 at slot p if i plays away in p, and containing ∗ at slot p if the venue of i in p is not decided.

Consequently, a partial HAP set is a set of n partial HAPs being assigned to teams. Each node in our branching tree represents a partial HAP set. The partial HAP set corresponding to the root node exclusively contains asterisks. Following a path from the root node to a leaf we replace a ∗ either by 0 or by 1 in each step. A partial HAP set is called feasible in the remainder if we can obtain a feasible HAP set by replacing all asterisks. We adopt two necessary conditions for HAP sets to be feasible given in section 2.4.2 in order to consider partial HAP sets.

(i) Partial HAPs of two teams must be different or contain at least one ∗.
(ii) Each column of a partial HAP set must not contain more than n/2 zeros and must not contain more than n/2 ones.

Theorem 5.1. When we branch on candidates Q_{i,p} ∈ {0, 1} we cannot create a partial HAP set violating condition (i) or condition (ii).

Proof. The proof is done by contradiction.

(i) Suppose we construct two identical HAPs corresponding to teams i and j having no ∗. Then, those two HAPs differed in exactly one slot p in the current node's father. Hence, in each feasible solution according to the current node's father these two teams play against each other in period p. Therefore, neither Q_{i,p} nor Q_{j,p} has been a branching candidate.

(ii) Suppose (w.l.o.g.) that we construct a column p having n/2 + 1 zeros. Then, this column has exactly n/2 zeros in the current node's father. Therefore, each team having no zero in column p must play away in each feasible solution. Therefore, there was no branching candidate Q_{i,p}, i ∈ T.

Hence, both conditions cannot be violated.

 


Moreover, we adapt a necessary condition for HAP sets to be feasible developed in Miyashiro et al. [64] to partial HAP sets. Let c_0(T', p), c_1(T', p), and c_*(T', p) be the number of zeros, ones, and asterisks, respectively, in column p in the lines corresponding to the teams in T' ⊆ T. Miyashiro et al. [64] propose the following condition for (non-partial) HAP sets:

Σ_{p∈P} min{c_0(T', p), c_1(T', p)} − |T'|(|T'| − 1)/2 ≥ 0   for all T' ⊆ T.   (2.25)

The first term sums up the number of matches between teams in T' being possible according to the HAPs (see Miyashiro et al. [64] for details). At least |T'|(|T'| − 1)/2 matches must be possible in order to construct a single RRT. In order to consider partial HAP sets we propose the following modified condition:

Σ_{p∈P} min{ ⌊|T'|/2⌋, min{c_0(T', p), c_1(T', p)} + c_*(T', p) } − |T'|(|T'| − 1)/2 ≥ 0   for all T' ⊆ T.   (5.1)

Note that (5.1) is identical to (2.25) if c_*(T', p) = 0 for each p ∈ P. If c_*(T', p) > 0 we can choose the undecided venues of teams such that we maximize the number of possible matches between teams of T' in p. Therefore, if w.l.o.g. c_0(T', p) < c_1(T', p), we first consider min{|c_0(T', p) − c_1(T', p)|, c_*(T', p)} teams having undecided venues to play at home in p. If c_*(T', p) ≤ |c_0(T', p) − c_1(T', p)| there are no teams having undecided venues left, and the number of possible matches between teams of T' in p is restricted to min{c_0(T', p), c_1(T', p)} + c_*(T', p) ≤ ⌊|T'|/2⌋. If c_*(T', p) > |c_0(T', p) − c_1(T', p)|, following the idea described above we obtain identical numbers of teams playing at home and away, respectively. The remaining teams having undecided venues are chosen to play at home or away such that the overall numbers of teams playing at home and away, respectively, differ by no more than 1. Then, ⌊|T'|/2⌋ matches between teams of T' are possible in p. Formally, we obtain this effect by the outer minimization in (5.1).
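A small sketch evaluating the left hand side of (5.1) for a given subset T' of teams; the string encoding of a partial HAP (one character per period, following definition 5.1) is an assumption for illustration.

    from math import comb

    def lhs_5_1(partial_haps, T_sub):
        # left hand side of (5.1); partial_haps[i] is a string over {'0', '1', '*'}
        # with one slot per period (0 = home, 1 = away, * = undecided)
        n_periods = len(next(iter(partial_haps.values())))
        total = 0
        for p in range(n_periods):
            column = [partial_haps[i][p] for i in T_sub]
            c0, c1, c_star = column.count('0'), column.count('1'), column.count('*')
            total += min(len(T_sub) // 2, min(c0, c1) + c_star)
        return total - comb(len(T_sub), 2)

    # condition (i) is the special case |T'| = 2: two identical, asterisk-free partial HAPs
    # yield lhs_5_1(...) == -1, i.e. the two teams can never be scheduled against each other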


Theorem 5.2. When we branch on candidates Q_{i,p} ∈ {0, 1} we cannot create a partial HAP set violating (5.1).

Proof. First, we show that the left hand side of (5.1) can be lowered by no more than 1 if we replace a single asterisk. Let (w.l.o.g.) team i's entry in slot p be changed from ∗ to 0.

• If c_0(T', p) < c_1(T', p) with T' ⊆ T, i ∈ T', before the replacement, the left hand side of (5.1) does not change since the inner minimization term is increased by 1 and c_*(T', p) is decreased by 1.

• If c_0(T', p) ≥ c_1(T', p) and c_*(T', p) > c_0(T', p) − c_1(T', p) before the replacement, the left hand side of (5.1) does not change due to the outer minimization.

• If c_0(T', p) ≥ c_1(T', p) and c_*(T', p) ≤ c_0(T', p) − c_1(T', p) before the replacement, the left hand side of (5.1) decreases by 1 since the inner minimization does not change and c_*(T', p) is decreased by 1. Consequently, the outer minimization is decreased by one since c_1(T', p) + c_*(T', p) ≤ ⌊|T'|/2⌋.

Now, suppose we create a partial HAP set having

Σ_{p∈P} min{ ⌊|T'|/2⌋, min{c_0(T', p), c_1(T', p)} + c_*(T', p) } = |T'|(|T'| − 1)/2 − 1

with T' ⊆ T by changing the entry for team i ∈ T' in period p from ∗ to 0 (in accordance with the reasoning above). Then, according to the partial HAP set corresponding to the current node's father,

Σ_{p∈P} min{ ⌊|T'|/2⌋, min{c_0(T', p), c_1(T', p)} + c_*(T', p) } = |T'|(|T'| − 1)/2.

Furthermore, the third case described above is given since it is the only one resulting in a decreasing left hand side of (5.1). Consequently, in each feasible solution each team having ∗ or 1 in period p plays away. Therefore, Q_{i,p} is no branching candidate according to the father node's optimal solution.

Note that condition (i) is a special case of (5.1) with |T'| = 2. We conclude that the two necessary conditions (ii) and (5.1) for partial HAP sets to be feasible cannot be violated following our idea of branching on candidates Q_{i,p} ∈ {0, 1}.

5.2.2 Choice of Branching Candidates

Given an optimal solution to the LP relaxation of an RRT problem introduced in chapters 2 and 3, each tuple (i, p) ∈ T × P with Σ_{j∈T\{i}} x_{i,j,p} ∉ {0, 1} is a candidate for the next branching step.


Random Choice: Starting with a randomly chosen tuple (i, p) we sequentially search all Q_{i,p} for fractional values and choose the first branching candidate found. Hence, we interpret neither the LP relaxation's optimal solution nor the cost structure. The main advantage of this strategy is its low run time consumption.

Least Fractional: We define inf_{i,p} := min{ Σ_{x∈Q_{i,p}} x, Σ_{x∈Q̄_{i,p}} x } as a measure of the infeasibility of (i, p) in the current LP problem's optimal solution. Among all branching candidates (i, p) we choose the one having the lowest inf_{i,p}, which means Q_{i,p} is closest to binary. The idea is that forcing candidates having small infeasibility to feasibility causes only slight modifications to the LP relaxation's optimal solution.

Most Fractional: Among all branching candidates (i, p) we choose the one having the highest inf_{i,p}, which means Q_{i,p} is closest to 0.5. As outlined in Achterberg et al. [1], the idea is to force feasibility first for those tuples where the LP solution implies the least tendency whether to round Q_{i,p} to zero or to one.

Pseudo-Cost: We compute pseudo-costs c^h_{i,p} = Σ_{j∈T\{i}} c_{i,j,p} / (n − 1) and c^a_{i,p} = Σ_{j∈T\{i}} c_{j,i,p} / (n − 1) representing the average matches' cost of team i in period p at home and away, respectively. Among all branching candidates (i, p) we choose the one having the lowest min{c^h_{i,p}, c^a_{i,p}}. This strategy aims at fixing those candidates first which propose low cost matches. We calculate c^h_{i,p} and c^a_{i,p} a single time before the branch-and-bound (B&B) procedure starts and, hence, the computational effort for considering pseudo-costs is identical to the one for least fractional and most fractional, respectively.

Regret: We define the regret r_{i,p} as the cost of not choosing the venue having the lower pseudo-cost for candidate (i, p): r_{i,p} = |c^a_{i,p} − c^h_{i,p}|. Again, r_{i,p} can be calculated beforehand and, therefore, does not increase the computational effort.

Pseudo-Cost Revisited: If we choose candidate (i, p) and fix the corresponding venue, the set of possible matches of each team j ≠ i in period p is affected. A match of team i against team j at the venue ruled out for i cannot be carried out in period p. Therefore, it is reasonable to adjust c^h_{i,p} and c^a_{i,p} according to the venues fixed on the path from the root node to the current node k. The adjusted pseudo-costs are denoted by c^{k,h}_{i,p} and c^{k,a}_{i,p} below. Let V_k denote the path from the root node to the current node. If team i is fixed to play at home or away in period p according to any branching step in V_k we denote this by Q_{i,p} ∈ V_k or Q̄_{i,p} ∈ V_k, respectively. Formally, the modification of the pseudo-costs can then be stated as follows.

5.2 General Home-Away-Pattern Sets

ck,h i,p ck,a i,p

85

 → ci,j,p (n − 1)chi,p − Q ∈− j,p Vk $ = − →$$ $ n − 1 − $ Qj,p | Qj,p ∈ Vk $  → ci,j,p (n − 1)cai,p − Q ∈− j,p Vk $ = − →$$ $ n − 1 − $ Qj,p | Qj,p ∈ Vk $

Obviously, we can redefine regret depending on the current node k by k,a k,a k,h k employing ck,h i,p and ci,p : ri,p = |ci,p − ci,p |. Then, rules “Pseudo-Cost” and “Regret” can be applied to modified pseudo-costs. 5.2.3 Node Order Strategy When working off the set of nodes of the branching tree we have to decide which node to explore next. We implement two well known node order strategies: • Depth First Search: The node which has been created last is explored first. This strategy minimizes the memory requirements. • Breadth First Search: We define a fitness for each node. The node having best fitness is explored first. We employ the lower bound value obtained from the father’s LP problem as fitness. This strategy requires more memory than depth first search does but mostly leads to shorter run times. For both of these strategies the question arises in which order nodes having the same father are explored. Here, the decision is made depending on the choice of branching candidates according to section 5.2.2. Random Choice: Since candidate Qi,p has been chosen randomly we choose the order of corresponding child nodes randomly as well. Least Fractional, Most Fractional: Given a chosen branching candidate (i, p) we explore the node corresponding to i playing at home in p first if x∈Qi,p x ≥ 0.5. Otherwise we explore the node corresponding to i playing away in p first. Pseudo-Cost (revisited), Regret: Given a chosen branching candidate (i, p) one of both child nodes corresponds to the venue of i in p having the lower (revisited) pseudo-cost. This node is explored first.

86

5 HAP Based Branching Scheme

5.3 Minimum Number of Breaks Again, the basic idea is to branch on Qi,p . In opposite to general HAP sets as considered in section 5.2 the venues of team i in two periods are not independent here. Although there is no direct dependency of venues in consecutive periods the occurrence of identical venues in consecutive periods (namely a break) is restricted to an overall number of n − 2 for all teams, see sections 2.4 and 3.2.1. If team i has no break we say it has a break in the first period which is justified by the fact that the first and the last entry of the corresponding HAP are identical, then. In a RRT having the minimum number of breaks each team has exactly one break (see Miyashiro et al. [64]) and, therefore, we can specify each team’s HAP by venue and period of its unique break. Consequently, we branch on venue and period of a specific team’s break, here. Branching candidates are teams whose break is not fully specified by the optimal solution to the current node’s LP problem. We implement a specific break by fixing match variables to 0 according to the break’s venue and period. If we branch on team i to have a home-break in period p then we can fix to zero half the match variables corresponding to i as follows. Qi,p = 0 ∀p ∈ P, ((p < p ∧ p − p even) ∨ (p > p ∧ p − p odd)) Qi,p = 0 ∀p ∈ P, ((p < p ∧ p − p odd) ∨ (p ≥ p ∧ p − p even)) Consequently, fixing variables according to an away-break for team i in period p is done the other way round. Qi,p = 0 ∀p ∈ P, ((p < p ∧ p − p even) ∨ (p > p ∧ p − p odd)) Qi,p = 0 ∀p ∈ P, ((p < p ∧ p − p odd) ∨ (p ≥ p ∧ p − p even)) While we can represent team i having a break in p by fixing match variables (as seen above) we can not represent i not having a break in p by fixing match variables. Therefore, we propose a branching strategy where each subproblem is represented by a fixed break. Consequently, given a chosen branching candidate i ∈ T we create a child node for each single break (defined by venue and period) which can be assigned to i. Considering that each team has exactly one break in a feasible solution we obtain a partition of solution space corresponding to the current node into solution spaces corresponding to its child nodes. Obviously, as lined out in Briskorn and Drexl [11] this means a number of up to 2(n − 1) child nodes.

5.3 Minimum Number of Breaks

87

5.3.1 Achieving Feasible Home-Away-Pattern Sets Again, it is of great importance to recognize infeasible HAP sets and, moreover, avoid the construction of corresponding nodes in order to save run time. Several necessary conditions for HAP sets to be feasible are considered in section 5.2.1. Below we propose several strategies to calculate the set of possible breaks (each defined by period and venue) for a given branching candidate i ∈ T . Since we construct a child node for each break being considered possible the challenging part is to keep this set as small as possible. On the other hand, neglecting possible breaks results into inadmissible reduction of solution space and, therefore, must be avoided. No Restrictions: Here, we simply create a child node for each period and venue without consideration of breaks already fixed for other teams. Therefore, each node has 2(n − 1) children if it is not pruned. No Break Twice: The set of possible breaks can be reduced by taking into account the breaks already assigned to teams on the path from the root node to the current node. Two teams having identical breaks leads to both teams having identical HAPs. As lined out in section 2.4.2 an infeasible HAP set follows. Therefore, no break is assigned to more than one team. Consequently, the set of possible breaks can be reduced by all breaks being already assigned to a team on the path from the root node to the current node. Break Sequences: It is known from, e.g., Miyashiro et al. [64] that there are either no or two breaks in each period. Hence, we have to choose n2 − 1 periods (additional to the first period) where breaks occur and assign teams to both breaks corresponding to one of these periods in order to construct a HAP set. We refer to these periods as break periods in the remainder. Given a HAP set a specific assignment of each HAP to a team does not influence the HAP set’s feasibility. The HAP set is fully specified by a set of n2 break periods as far as feasibility is concerned. The set of break periods in ascending order is referred to as break sequence in the remainder. In de Werra [19] break sequences corresponding to the special class of canonical 1-factorizations are studied. − → Again, let k and Vk denote the current node of the branching tree and the path from the root node to the current node. Additionally, let − → br → and P− nbp,− → be the number of breaks fixed in period p on Vk and the Vk Vk − → set of periods where at least one break has been fixed in on Vk . Then, we can apply the following rules I to III as outlined in Briskorn and Drexl [11] in order to decide which breaks must be considered possible.

88

5 HAP Based Branching Scheme

I

A home-break (away-break) in the first period is possible if and only if no home-break (away-break) has been set in the first period on − → Vk . → = 1 we can set a home-break (away-break) in period p if II If nbp,− Vk and only if the existing one is an away-break (home-break). → = 0 if and III We can set a break in period p ∈ {2, . . . , n − 1}, nbp,− Vk $ $ $ br $ n only if $P− → \ {1}$ < 2 − 1. Vk

Rule I states that a break in the first period is possible if this specific − → break has not been set on Vk . Rule II takes care of the fact that in each period either two or no breaks are set. Hence, if exactly one break has − → been arranged in period p on Vk the complementary break is possible in p. Rule III decides whether a break can occur in a period where no − → break is arranged on Vk yet. Since there can be no more than n2 periods having breaks (including the first period) a break in a period having no break so far is possible if less than n2 − 1 periods (excluding the first period) have breaks already. Clearly, rules I and II cover “No Break Twice”. No Three Consecutive Breaks: As shown in Briskorn and Drexl [11] we can further restrict the set of possible breaks by incorporating a necessary condition from Miyashiro et al. [64]: Due to restriction   (2.7) for each subset T  ⊂ T there must be exactly |T |(|T2 |−1) matches between teams of T  , see (2.25). Theorem 5.3. A break sequence containing three (circular) consecutive periods leads to an infeasible subtree. Proof. Let p be the first of three consecutive periods having breaks. In each period having a break there is a home-break and an away-break. We combine three HAPs having breaks in p, p + 1, and p + 2 such that the break’s venue in p is equal to the break’s venue in p + 2 and different from the break’s venue in p+1. The three teams corresponding to these three HAPs cannot play against each other in period p ∈ {1, . . . , p − 1} ∪ {p + 2, . . . , n − 1}. There can be exactly one match among these teams in periods p and p + 1. Therefore, there is a subset   of teams which can play only |T |(|T2 |−1) − 1 times against each other and which, consequently, violates (2.25).   Table 5.2 provides an example of HAPs combined as done in the proof. According to theorem 5.3 we modify rule III of “Break Sequences” to rule III’.

5.3 Minimum Number of Breaks

89

Table 5.2. Example for 3 HAPs with too few matches Period

1

...

p−1

p

HAP 1 HAP 2 HAP 3

... ... ...

... ... ...

0 0 0

0 1 1

p+1 p+2 p+3 1 1 0

0 0 0

1 1 1

...

n−1

... ... ...

... ... ...

→ = 0 III’ We can set a break in period p ∈ {2, . . . , n − 1}, nbp,− Vk if a break is possible in ! period p according to III !and → = 0 ∨ nb − → = 0 ∧ nb − → = 0 ∨ nb − → =0 ∧ nbp−2,− Vk p−1,Vk p−1,Vk p+1,Vk !! → = 0 ∨ nb − → =0 . nbp+1,− V p+2,V k

k

Rule III’ checks whether a sequence of three consecutive break periods would be arranged if a break is fixed in period p. Note that we can further slightly strengthen III’ as III”.1 to III”.5 by taking into account that there must be two breaks in the first period in the final HAP set − → no matter whether they are already set on Vk or not. → = 0 if a III”.1 We can set a break in period p ∈ {4, . . . , n − 4}, nbp,− Vk break is possible in period p according to III’. → = 0 III”.2 We can set a break in period p = 3 if nb3,− Vk and if a break is possible in period !! 3 according to III and ! → = 0 ∧ nb − → = 0 ∨ nb − → =0 . nb2,− Vk 4,Vk 5,Vk → = 0 III”.3 We can set a break in period p = 2 if nb2,− Vk and if a break is possible ! in period 2 according to III and − → − → nbn−1,V = 0 ∧ nb3,V = 0 . k k → = 0 III”.4 We can set a break in period p = n − 1 if nbn−1,− Vk and if a break is possible! in period n − 1 according to III and → = 0 ∧ nb − → =0 . nbn−2,− Vk 2,Vk → = 0 III”.5 We can set a break in period p = n − 2 if nbn−2,− Vk and if a break!is possible in period n − 2 according to III and !! − → − → − → nbn−1,V = 0 ∧ nbn−3,V = 0 ∨ nbn−4,V = 0 . k

k

k

Rule III”.1 directly corresponds to III’ for p ∈ {4, . . . , n − 4}. Special cases are periods 3, 2, n − 1, and n − 2 in rules III”.2 to III”.5 being strengthened in comparison to III’. Here, the first period is not checked for breaks because in a complete HAP set with the minimum number of breaks there are exactly two breaks in it.

90

5 HAP Based Branching Scheme

Feasible Sequence: In the following we generalize the

basic idea of “No Three Consecutive Breaks”. Choosing l ∈ 1, . . . , n2 break periods means fixing 2l HAPs. In order to check (2.25) we have to take care of 22l − 2l − 1 subsets of HAPs. Note that |T  | = 2 and |T  | = 3 are checked inherently by “No Break Twice” and “No Three Consecutive Breaks”, respectively. Here, we propose an efficient way to check all T  having exactly l HAPs and, therefore, 2ll subsets of HAPs. In order to establish a common notation we first introduce parts of the one in Miyashiro et al. [64]. We sort the given 2l HAPs as follows: We arrange two blocks of l HAPs each. The first (second) one consists of HAPs having 0 (1) in the last slot. Both blocks are ordered by ascending HAPs’ break periods. An example with 2l = 6 HAPs is shown on the left hand of table 5.3. Table 5.3. Example for ordered HAP set and equivalent representation period 1

2

3

4

5

period 1

2

3

4

5

HAP HAP HAP HAP HAP HAP

1 0 0 0 1 1

0 0 1 1 1 0

1 1 1 0 0 0

0 0 0 1 1 1

HAP HAP HAP HAP HAP HAP

1 0 0 0 1 1

1 1 0 0 0 1

1 1 1 0 0 0

1 1 1 0 0 0

1 2 3 4 5 6

0 1 1 1 0 0

1 2 3 4 5 6

1 0 0 0 1 1

The right hand side of table 5.3 provides an equivalent representation of the ordered (partial) HAP set as proposed in Miyashiro et al. [64]. The construction of this representation is done as follows. • For HAPs having 0 in the last slot set all entries ahead of the break period to 0. Set the entry in the break period and all entries behind to 1. • For HAPs having 1 in the last slot set all entries ahead of the break period to 1. Set the entry in the break period and all entries behind to 0. Note that min{c0 (T  , p), c1 (T  , p)} is equivalent in corresponding columns p for each ordered set of HAPs T  and its equivalent representation, see Miyashiro et al. [64]. As special cases of T  Miyashiro et al. [64] introduce cyclically consecutive sets of HAPs and narrow sets of HAPs. A set of HAPs T  is

5.3 Minimum Number of Breaks

91

narrow if and only if there is at least one period where all HAPs’ entries are identical. Miyashiro et al. [64] show that given a HAP set for each subset of HAPs T  which is not narrow there is a narrow subset of HAPs T  such that (2.25) is at least as tight for T  as it is for T  . Consequently, checking (2.25) is restricted to narrow subsets of a HAP set. Let T l be the set of subsets of HAPs having exactly l HAPs and  let T l ⊂ T l be the set of narrow subsets of HAPs having exactly one HAP for each break period. 

Theorem 5.4. For each T  ∈ T l there is a T  ∈ T l such that (2.25) is at least as tight for T  as it is for T  . 



Proof. Given an arbitrary T  ∈ T l \ T l we construct T  ∈ T l such that (2.25) is at least as tight for T  as it is for T  . Circulate columns of T  such that there is no break in the first period. Consider the equivalent representation of T  as shown on the right hand side of table 5.3. Then, c0 (T  , 1) = c1 (T  , n − 1) and c1 (T  , 1) = c0 (T  , n − 1). If l is even there is at least one break period p such that c0 (T  , p − 1) − 1 = c0 (T  , p) = 2l or c0 (T  , p − 1) + 1 = c0 (T  , p) = 2l . If l is odd there is at least one break period p such that min {c0 (T  , p − 1), c1 (T  , p − 1)} =

min {c0 (T  , p), c1 (T  , p)} = 2l . Now, we construct T  as follows: • If l is even set 2l ones and 2l zeros in period p. Additionally, set 2l + 1 ones and 2l − 1 zeros in period p − 1 by copying the pattern of p % &

and exchanging one 0 by 1. If l is odd set 2l zeros and 2l ones in

% & p. Additionally, set 2l ones and 2l zeros in p − 1 by copying the pattern of p and exchanging one 0 by 1. • Going backward from p − 1 set a break in break period p for an arbitrary HAP i having 0 in p (then, i’s entry in each period p ∈ {1, . . . , p − 1} is 1). Thus, the number of zeros is decreased by 1 at the predecessor of each break period. Proceed until the first period is reached or there is no 0 left in any HAP. If there is no 0 left in break period p proceed by setting a break for an arbitrary HAP j not having a break yet in each break period (then, j’s entry in each period p ∈ {1, . . . , p − 1} is 0). Thus, the number of zeros is increased by 1 at the predecessor of each break period p ∈ {1, . . . , p − 1}. • Going forward from p set a break in break period p for an arbitrary HAP i having 1 in p − 1 (then, i’s entry in each period p ∈ {1, . . . , p − 1} is 0). Thus, the number of zeros is increased by 1 in each break period. Proceed until the last period is reached or

92

5 HAP Based Branching Scheme

there is no 1 left in any HAP. If there is no 1 left in break period p proceed by setting a break for an arbitrary HAP j not having a break yet in each break period (then, j’s entry in each period p ∈ {1, . . . , p − 1} is 1). Thus, the number of zeros is decreased by 1 in each break period p ∈ {p , . . . , n − 1}. 

First, we show that T  ∈ T l . Obviously, T  ∈ T l . Let op be the ordinal of p within the break sequence (after circulating). If op is greater or equal c0 (T  , p) then there is at least one period p ∈ {1, . . . , p − 1} having no 0. If op is lower or equal c0 (T  , p) then l − op is greater or equal than c1 (T  , p). Therefore, there is at least one period p ∈  {p + 1, . . . , n − 1} having no 1. Thus, T  ∈ T l . Second, we show that (2.25) is at least as tight for T  as it is for T  . Obviously, due to construction



min c0 (T  , p), c1 (T  , p) = min c0 (T  , p), c1 (T  , p) . Note that, depending on T  the minimization term of (2.25) is increased by 1, is decreased by 1, or is not changed at all at each break period. The latter case appears at break period p if two complementary HAPs with breaks in p are contained in T  . T  does not contain any complementary HAPs by construction and, therefore, the minimization term of (2.25) is increased by 1 or is decreased by 1, respectively, at each break period. Hence,



min c0 (T  , p + k), c1 (T  , p + k) ≥ min c0 (T  , p + k), c1 (T  , p + k)     n−2 n−2 , . . . , −1 ∪ 1, . . . , ∀k ∈ − 2 2 (indices taken modulo n − 1) since going backward and forward from p, respectively, the number of zeros or ones is strictly lowered to zero   in T  at each break period. In order to illustrate the construction as done in the proof above we provide tables 5.4 to 5.6. A HAP set (and, therefore, a subset of HAPs) T  is given on the left hand side of table 5.4. The HAPs’ break periods differ from each other  and, moreover, T  is not narrow. Therefore, T  ∈ T l \ T l with l = 4. First, we circulate periods until we have no break in the first period. This results into the set of HAPs shown on the right hand side of table 5.4 and does not affect the sum in term (2.25). The left hand side of table 5.5 shows the equivalent representation of T  . We choose break period p = 2 since c0 (T  , 2− 1)+ 1 = c0 (T  , 2) = 2l .

5.3 Minimum Number of Breaks

93

Table 5.4. T  before (left) and after (right) circulating Period 1 2 3 4 5 6 7

Period 1 2 3 4 5 6 7

HAP HAP HAP HAP

HAP HAP HAP HAP

1 2 3 4

0 0 0 1

0 1 1 0

10 01 01 11

10 10 01 01

1 1 0 0

Min(0,1) 1 2 2 1 2 2 2

1 2 3 4

1 1 0 0

00 01 01 10

1 0 0 1

01 11 10 10

0 0 1 1

Min(0,1) 2 1 2 2 1 2 2

Table 5.5. Equivalent representation of T  (left) and T  (right) Period 7 1 2 3 4 5 6

Period 7 1 2 3 4 5 6

HAP HAP HAP HAP

HAP HAP HAP HAP

1 2 3 4

0 0 1 1

0 0 0 1

11 00 00 11

11 01 00 00

1 1 0 0

Min(0,1) 2 1 2 2 1 2 2

1 2 3 4

1 1 1 1

11 10 00 11

1 0 0 1

00 00 00 10

0 0 0 0

Min(0,1) 0 1 2 2 1 0 0

Then, we construct T  by reducing the number of zeros at each break period going backward from p to the first period and reducing the number of ones at each break period going forward from p to the last period. The result is shown on the right hand side of table 5.5. Note that min {c0 (T  , 2), c1 (T  , 2)} = min {c0 (T  , 2), c1 (T  , 2)}. Since the minimization term’s value in (2.25) is strictly reduced at each break period going backward and forward from p = 2 in T  min {c0 (T  , p), c1 (T  , p)} ≥ min {c0 (T  , p), c1 (T  , p)} for each p ∈ P . Table 5.6. T  before (left) and after (right) recirculating Period 7 1 2 3 4 5 6

Period 7 1 2 3 4 5 6

HAP HAP HAP HAP

HAP HAP HAP HAP

1 2 3 4

1 1 1 1

0 0 1 0

10 01 01 10

01 01 01 11

0 0 0 0

Min(0,1) 0 1 2 2 1 0 0

1 2 3 4

0 0 1 0

1 0 0 1

00 10 10 01

10 10 10 10

1 1 1 1

Min(0,1) 1 2 2 1 0 0 0

94

5 HAP Based Branching Scheme

On the left hand side of table 5.6 we illustrate the interpretation of the equivalent representation of T  as a subset of HAPs. On the right hand side the set of HAPs is recirculated which finishes the construction of T  . Thus, given a set of l break periods we can check condition (2.25) for  each T  ∈ T l by checking condition (2.25) for each T  ∈ T l according  'l l+k l to theorem 5.4. Note that |T l | = 2ll = (2l)! k=1 k > 2 if l > 1 l!·l! =  l and |T | = 2l which makes this reduction extremely useful.  Since each T ∈ T l is narrow there exists a period pT such that from pT to the following break period all HAPs of T have identical entries.  Therefore, T ∈ T l can be specified by pT and the singleton entry  in these periods. We can further reduce T l by eliminating subsets of HAPs being complementary to each other: condition (2.25) is equally   tight for two subsets of HAPs T1 ∈ T l and T2 ∈ T l with pT1 = pT2 and different entries in period pT1 as shown in Miyashiro et al. [64]. Therefore, we check only the l subsets of HAPs having entry zero in periods where all entries are identical. As shown in Miyashiro et al. [65] each check can be done in linear time. The test above is applied in addition to “No Three Consecutive Breaks”. We apply it only for l ≥ 4 since “No Three Consecutive Breaks” covers l < 4. Feasible Subsequences: Note that T l ⊂ 2T . Hence, checking condition (2.25) for each T  ∈ T l as proposed by “Feasible Sequence” can not ensure condition (2.25) for each T  ⊆ T . For an example consider the partial HAP set h illustrated in table 5.7. Table 5.7. Infeasible subsequence period 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 HAP HAP HAP HAP

1 2 3 4

010 010 010 010

10 10 10 10

01 10 10 10

01 11 10 10

0 0 0 1

1 1 1 1

0 0 0 0

1 1 1 1

0 0 0 0

1 1 1 1

Checking break sequence {6, 9, 10, 11} according to “Feasible Sequence” does not identify h to be infeasible. Clearly, break sequence {9, 10, 11} would be identified as infeasible but is never checked according to “Feasible Sequence” if p = 11 is the break period being added last (in the course of our branching scheme). In order to fix this flaw we propose to check subsequences of break sequences, as well.

5.3 Minimum Number of Breaks

95

According to Miyashiro et al. [64] not each subset of HAPs must be checked if the minimum number of breaks is required. We have to check only those subsets having no more than n2 HAPs and exclusively containing cyclically consecutive HAPs. We aim at implicitly checking all these subsets of HAPs by checking subsequences. We need to check only cyclically consecutive subsequences of break periods since subsequences not being cyclically consecutive correspond to sets of HAPs not being cyclically consecutive. Note that there exist exactly two narrow and cyclically consecutive sets of HAPs corresponding to a given cyclically consecutive subsequence of break periods. These two sets of HAPs are complementary to each other and, hence, we have to check only one set of HAPs for each subsequence. According to our branching scheme we add break periods step by step. Therefore, we can restrict checking subsequences to subsequences incorporating the new break period in each step. Subsequences not containing the new break period have been checked before. Again, checking a specific (sub-)sequence of break periods is done according to Miyashiro et al. [65] in linear time. 5.3.2 Choice of Branching Candidates Since the branching scheme prescribes to create child nodes corresponding to each possible break of a specific team branching candidates correspond to teams (namely: branching teams), here. We propose several strategies to choose the branching  team below. First, we define a frac = |1 − tional break value bri,p j∈T \{i} (xi,j,p−1 + xi,j,p ) | where xi,j,0 means xi,j,n−1 for each team i ∈ T and period p ∈ P according to the current node’s optimal solution. Random Choice: Choosing the branching team randomly minimizes computational effort. Here, we implement a (pseudo-)random selection by choosing the team having index dk + 1 where dk is the depth of the current node k. = Largest Fractional Break: We choose team i      with largest fractional break arg maxi maxp bri,p | bri,p < 1 value as branching team. The motivation for this strategy is as follows. First, the current node’s optimal solution is cut by fixing i ’s break. Second, i having this specific break is likely to enable low cost tournaments. Most Infeasible 1: Among all teams we choose the one having the most infeasible constellation of break values. The common idea is to enforce feasibility for those teams first where least tendency is given

96

5 HAP Based Branching Scheme

in which period to set their break, see Achterberg et al. [1]. Note that not each team must have at least one break (including those in the first period) in a feasible solution to the LP relaxation. Accordingly, afeasible solution to the LP relaxation might provide a team i with  the structure of p∈P bri,p > 1. Since this effect severely contradicts      a feasible IP solution we choose team i = arg maxi br as i,p p∈P   branching team if p∈P bri ,p > 1. Otherwise, we choose the branching team according to “Largest Fractional Break”.    } . Most Infeasible 2: We choose team i = arg mini maxp {bri,p The basic idea is the same as for “Most Infeasible 1” but infeasibility is measured differently. There can be two reasons for the maximum fractional break value being small. First, p∈P bri ,p is small (in particular     p∈P bri ,p < 1). Second, p∈P bri ,p = 1 but fractional break values bri ,p are larger than 0 for many periods p. Both effects contradict the structure of a feasible IP solution. 5.3.3 Node Order Strategy Just as described in section 5.2.3 we consider “Depth First Search” and “Breadth First Search” as node order strategies. We observe that “Breadth First Search” is significantly more efficient due to less nodes being explored for instances with less than 10 teams. For larger instances our list of nodes grows that large that administrative effort uses up this advantage. Therefore, we propose a compromise between a short node list guaranteed by “Depth First Search” and a small number of nodes being explored guaranteed by “Breadth First Search” based on the father node’s optimal LP solution. First, we obtain vcur as normalized current node’s solution value dividing it by the lower bound. Second, we calculate the fraction xbin of match variables having binary values in the current node’s optimal solution. Then, we sort the node list in ascending order of vcur − w · xbin where w ∈ R. The idea is to explore a node earlier if its father’s optimal solution provides many binary variables. In preliminary tests w = 1.5 proofed to be a good choice as far as running times are concerned. According to the strategies proposed above all children of a single node are considered identical. Therefore, we additionally have to decide in which order nodes having the same father are explored. We propose several strategies hereafter.

5.3 Minimum Number of Breaks

97

Pseudo Cost: We calculate pseudo cost chi,p and cai,p representing average cost of matches being possible if i is branched to have a homebreak or away-break, respectively, in period p. 

chi,p =

1 (n − 1)2 1 (n − 1)2

 p−1 p−1

2 2      cj,i,p−2p + ci,j,p+1−2p  +  j∈T \{i}



  

p =1 n−p  2

p =1

ci,j,p−2+2p +

p =1

j∈T \{i}

n−p  2

  cj,i,p−1+2p (5.2)

p =1



cai,p =

1 (n − 1)2 1 (n − 1)2

 p−1 p−1

2 2      ci,j,p−2p + cj,i,p+1−2p  +  j∈T \{i}



   j∈T \{i}

p =1 n−p  2

p =1

p =1

cj,i,p−2+2p +

n−p  2

  ci,j,p−1+2p (5.3)

p =1

Child nodes are explored in ascending order of pseudo cost chi,p and corresponding to breaks at home and away, respectively, of branching team i in period p. Since chi,p and cai,p are static we calculate them once before the B&B procedure starts and, therefore, computational effort is small. Pseudo Cost Revisited: If we branch on team i to have a specific break pseudo costs according to other teams may be altered in the resulting branching subtree. Suppose, for example, team i is branched to have a home-break in period 2 and j is chosen as branching team in the corresponding subtree. In pseudo-cost caj,4 cost cj,i,2 and ci,j,3 should not be considered (in contrast to calculation in (5.3)) since these matches are not possible due to the fixed break of i. More generally, suppose team i is branched to have a break in period p. Additionally, a break for branching team j in period p is considered in the subtree. We eliminate

cai,p

• cost according to a match between teams i and j in period p ∈ {min{p, p }, . . . , max{p, p } − 1} from pseudo cost according to j and period p if either i’s and j’s break venues are identical and |p − p | is odd or the break venues differ and |p − p | is even,

98

5 HAP Based Branching Scheme

• cost according to a match between teams i and j in period p ∈ {1, . . . , min{p, p } − 1} ∪ {max{p, p }, . . . , n − 1} from pseudo cost according to j and period p if either i’s and j’s break venues are identical and |p − p | is even or the break venues differ and |p − p | is odd. Then, child nodes are explored in ascending order of revisited pseudo cost. Largest Fractional Break First: We first explore the child node corresponding to the branching team i’s break (defined by period and  according venue) which implies the largest fractional break value bri,p to the father node’s optimal solution. The remaining child nodes are explored in ascending order of “Pseudo Cost” or “Pseudo Cost Revisited”, respectively. The idea is here that a large fractional break value might indicate a profitable break due to the LP problem’s objective of cost minimization.

5.4 Computational Results In this section we line out efficiency of the branching schemes proposed in sections 5.2 and 5.3. For both we present the configuration proofed to be the most efficient one. Beside run times we line out both schemes’ efficiency as far as avoiding infeasible HAP sets is concerned. 5.4.1 General Home-Away-Pattern Sets Results are based on the formulation of the single RRT problem presented in section 2.1, namely SRRTP-IP. Among the strategies to choose the branching candidate “Most Fractional” turned out to be the most efficient one. Additionally, we employ “Breadth First Search” as node order strategy. In table 5.8 we line out the average number of LP problems solved (LP), the average number of LP problems being infeasible due to the HAP set (i.LP), the average number of IP problems solved since branching on each team’s venue in each period does not provide integer solutions (IP), average run times of our approach (r.t.), and average run times needed by Cplex (r.t. Cplex) as given before in table 3.5. Run times indicate that our branching scheme can not compete with Cplex when applied to the single RRT problem. However, according to theorems 5.1 and 5.2 the choice of branching candidates takes implicitly care of several necessary conditions for HAP

5.4 Computational Results

99

Table 5.8. Comp. Results for Branching on General HAP Sets n

LP i.LP IP

6 3.4 8 15.7 170.3 10 12 4124.9 14 136627.1

0 0 0 0 0

r.t. r.t. Cplex

0 0.01 0 0.03 0 0.70 0 54.02 0 4602.29

0.01 0.05 0.37 2.03 34.43

sets to be feasible. This is impressively confirmed by our test runs. Each single (partial) HAP set corresponds to a feasible LP problem. Note that this does not necessarily imply feasibility of each (partial) HAP set. Moreover, not a single IP problem is solved since each branching path was pruned beforehand. Hence, although our branching scheme can not guarantee integer solutions probability is high that it suffices to reach optimality. Considering these aspects it seems advisable to employ it in the approach developed in chapter 6. 5.4.2 Minimum Number of Breaks Here, results are based on the model formulation given in section 3.2.1. First, we evaluate our strategies to avoid infeasible HAP sets. We choose branching candidates according to “Random Choice”. Furthermore, we employ depth first search and insert child nodes in ascending order of the corresponding breaks’ periods. Since the resulting branching scheme is static and has no cost orientation, differences in the number of nodes being explored as well as run times are exclusively caused by decisions whether a node corresponding to a specific break is constructed or not. Strategies “No Restrictions”, “No Break Twice”, “Break Sequences”, and “No Three Consecutive Breaks” are carried out for 8 teams since the run times grow too high for instances with more teams as long as no efficient strategy is employed. Results are given in relation to results obtained using “No Restrictions”. Evaluation of strategy “Feasible Sequence” as described in section 5.3.1 and a special case, namely “Four Breaks in Five Periods”, is done solving instances having 10 teams. Here, results are given in relation to results obtained using “No Three Consecutive Breaks” for n = 10. Each strategy is employed in addition to the previous ones except “Four Breaks in Five Periods” replacing “Feasible Sequence”.

100

5 HAP Based Branching Scheme

We focus on the decreasing number of LP problems to be solved (LP red.) and infeasible LP problems (i.LP red.), respectively. Furthermore, the reduction of average run times (r.t. red.) is given in table 5.9. Table 5.9. Comp. Results for Branching Strategies Strategy No Restrictions (n = 8) No Break Twice (n = 8) Break Sequences (n = 8) No Three Consecutive Breaks (n = 8) No Three Consecutive Breaks (n = 10) Feasible Sequence (n = 10) Four Breaks in Five Periods (n = 10)

LP red. i.LP red. r.t. red. — 26.1% 49.4%

— 48.7% 87.3%

— 6.7% 22.9%

72.7%

100.0%

54.6%

— 30.0%

— 99.5%

— 28.9%

33.0%

100.0%

32.1%

As we can see the number of nodes being explored is strictly decreased by each strategy. Employing “No Break Twice” and “Break Sequences” reduces the number of LP problems by 50%. More remarkably, the number of infeasible LP problems is reduced by more than 85%. Note that both strategies avoid constructing infeasible (partial) HAPs which would immediately result into an infeasible LP problem. “No Three Consecutive Breaks” takes a step further by avoiding infeasible (partial) HAPs not corresponding to an infeasible LP problem but corresponding to an infeasible subtree. Here, we omit solving these LP problems and, possibly, further child nodes. Applying “No Three Consecutive Breaks” we do not obtain a single infeasible LP problem. The overall number of LP problems and run times are reduced by more than 70% and more than 50%, respectively. Considering 10 teams nearly each infeasible LP problem is avoided if “Feasible Sequence” is applied. Here, we recognize a special case for 10 teams. Four break periods in a sequence of five periods (without three consecutive break periods) is not infeasible in general but for n = 10. The fifth break period can not be feasibly chosen according to “Feasible Sequence”. In order to fine-tune our approach we consider strategy “Four Breaks in Five Periods” and observe that no infeasible LP problem remains to be solved. Both the overall number of LP problems and run times are reduced by more than 30%. Note that “Feasible Subse-

5.5 Summary

101

quences” is not evaluated, here, since this strategy takes effect only for problems with more than 10 teams. However, tests for larger problems are not possible due to run times. For the sake of completeness we line out that the average number of IP problems which are solved for each problem instance is 0.9 for n = 8 and 4.7 for n = 10. Among strategies to choose the branching candidate “Most Infeasible 1” turned out to be the most efficient. Additionally, we employ “Breadth First Search” considering the number of binary variables and “Pseudo Cost Revisited” as node order strategies. Out of the strategies evaluated above “No Three Consecutive Breaks” combined with “Four Breaks in Five Periods” is applied here. Run times are given in table 5.10. Table 5.10. Comp. Results for Minimum Number of Breaks n

B&B Cplex

6 0.15 0.23 8 12.62 26.79 10 7841.14 —

Clearly, our B&B approach outperforms Cplex for n < 10. Cplex runs out of memory after about 12 hours for n = 10. Note that Cplex was employed using “Depth First Search” to overcome lack of memory in Briskorn and Drexl [10, 11]. Not even a feasible solution is found within 6 days of running time, then. Accordingly, proofing optimality in 2.2 hours on average clearly indicates superiority of our approach.

5.5 Summary We propose a B&B approach in order to find minimum cost RRT schedules. The basic idea originates from a decomposition scheme fixing each team’s venue in each period, first, and arranging matches, afterwards. We distinguish between the general case and RRT schedules with the minimum number of breaks. Results are twofold for the general case. The branching scheme suffices to obtain optimal solutions and to avoid infeasible subtrees. However, run times are significant larger than those needed by Cplex. For the minimum number of breaks our branching scheme clearly outperforms Cplex. An outstanding difference between both cases might be

102

5 HAP Based Branching Scheme

the fact that we are able to prune whole subtrees for the minimum number of breaks while we can prune only single nodes in the general case. Since both branching schemes can not guarantee integer solutions we have to handle the case where no branching candidate is given for a fractional solution. We propose to solve the corresponding IP problem using Cplex. Since solving IP problems is significantly more time consuming than solving LP problems we emphasize that for all test instances only a very small number of IP problems had to be solved. In fact, no IP problem was solved in the general case. Our branching schemes are likely to obtain HAP sets that are feasible. Consequently, most probably only IP problems corresponding to feasible HAP sets have to be solved.

6 Branch–and–Price Algorithm

6.1 Motivation and Basic Idea In this chapter we aim at handling problems observed in chapters 2 and 3. When we tackle RRT problems with standard B&B methods we suffer from four main obstacles so far: • The problem size is determined by more than n(n − 1)2 variables constraints and allows exact solution only and more than 3n(n−1) 2 for rather small instances. • Solutions to LP relaxations in general are highly fractional and provide poor lower bounds. • When setting variables to integer values in a B&B framework we often experience fractional values coming up for other variables due to model inherent symmetry. • Cost oriented node order strategies are difficult to implement since fixing variables has intractable consequences for other variables due to the compact structure of time constrained SLS problems. Consequently, we propose a reformulation for SRRT-IP introduced in section 2.1 and further constraints presented in chapter 3. We expect less symmetry and fewer constraints when employing more meaningful columns. Moreover, we aim at tightening the provided lower bound. This step is motivated by, besides others, Mehrotra and Trick [60] who faced similar problems by solving the problem to color the vertices of a graph with the minimum number of colors. Since the number of variables of the reformulation is exponential (in n) we introduce a column generation (CG) model to be employed in a branch-and-price framework in order to enforce integrality.

104

6 Branch–and–Price

6.2 Reformulation Everything below is based on SRRT-IP and its extensions. However, adaptation to other basic problems introduced in section 2.2 is straightforward. We define a MD as a set of matches where each team i ∈ T plays exactly once. Columns and master’s variables, respectively, correspond to MDs being assigned to a specific period, scheduled MDs namely. The set of all scheduled MDs is denoted by P M . We say a pair (i, j), i, j ∈ T , i = j, of teams is contained in m ∈ P M (denoted by (i, j) ∈ m) if m comprises a match between teams i and j. Additionally, an ordered pair − → − → (i, j), i, j ∈ T , i = j, is contained in m ∈ P M (denoted by (i, j) ∈ m) if m comprises a match of team i at home against team j. Cardinality of P M is given in (6.1). |P M | = (n − 1)

n * n ! n2 n! = (n − 1) i > . n 2 n 2!

(6.1)

i= 2 +1

We denote the period of m ∈ P M by p(m). Cost cm of scheduled MD m ∈ P M is equal to the sum of the costs of all ordered pairings contained in m if carried out in period p(m) as outlined in (6.2).  ci,j,p(m) ∀m ∈ P M (6.2) cm = −−→ (i,j)∈m

In the following we present a master problem and the corresponding pricing problem. We use several constraints given in chapters 2 and 3. Thereby, we distinguish between constraints relating to a single MD only and constraints involving interrelations between several MDs. The first class of constraints is used in the pricing problem while the second class is considered in the master problem. 6.2.1 Set Partitioning Master Problem We propose an integer programming model equivalent to SRRT-IP employing binary variables ym , m ∈ P M , being equal to 1 if and only if scheduled MD m is chosen for a single RRT. Objective function (6.3) represents the goal of finding the minimum cost tournament. Constraint (6.4) forces each pair of teams to meet exactly once and it is equivalent to (2.7). Equality (6.5) assures exactly one MD being arranged in each period of the tournament and it corresponds to (2.8) since each team i ∈ T participates in each scheduled MD m ∈ P M exactly once.

6.2 Reformulation Model 6.1: CG Master – SRRT  cm y m min

105

(6.3)

m∈P M



s.t.

ym

=

1

∀ i, j ∈ T, i < j

(6.4)

ym

=

1

∀p∈P

(6.5)



{0, 1} ∀ m ∈ P M

m∈P M,(i,j)∈m



m∈P M,p(m)=p

ym

(6.6)

As mentioned above the number of variables is exponential in the number of teams. Therefore, it is reasonable to think of the LP relaxation to CG Master – SRRT as a restricted master problem in a CG process initialized with a subset of columns. Iteratively, columns are generated according to the current restricted master’s optimal solution in order to reach the master’s optimal solution. Consequently, we consider the LP relaxation to CG Master – SRRT in the following. As shown in de Carvalho [18] the CG approach can be accelerated if the domain of dual variables is restricted by restricting solution space of the master problem. Obviously, either (6.4) or (6.5) can be relaxed to “no less than” constraints and “no more than” constraints, respectively. Solving the resulting problem will still lead to solutions fulfilling both restrictions with equality. Relaxing both constraints to “no more than” constraints is not possible even if cm is not greater than zero for each m ∈ P M . Rosa and Wallis [71] proof that there are premature sets of 1-factors. This means that it is possible to choose a set of 1-factors having no edge in common but not being part of any 1-factorization. Therefore, the goal of cost minimization can not guarantee constraints (6.4) and (6.5) being obeyed with equality. Analogously, relaxing both restrictions to “no less than” constraints is not possible either. Instead, relaxing one of them to a “no more than” constraint while relaxing the other one to be a “no less than” constraint provides feasible solutions to CG Master – SRRT. We evaluated all variants and conclude that the relaxation having both constraints set to “no more than” constraints leads to shortest solution times but can not guarantee a feasible solution for CG Master – SRRT. In order to increase the probability for feasible solutions we

106

6 Branch–and–Price

transform all ci,j,p to have values not greater than zero by subtracting the maximum cost value as outlined in (6.7). Obviously, this has no effect on the optimal solution’s structure.  ci,j,p = ci,j,p −

max

i ,j  ∈T,i =j  ,p ∈P

 {ci ,j  ,p }

∀i, j ∈ T, i = j, p ∈ P (6.7)

Among all feasible relaxations the variant having constraint (6.4) relaxed to “no more than” constraint and (6.5) kept unchanged provides lowest run times. Consequently, in the remainder we will restrict ourselves to these two variants. The LP relaxation of CG Master – SRRT provides a lower bound to the original problem which is not lower than the one provided by the LP relaxation. However, it might be larger and, therefore, more useful. Clearly, each solution to the LP relaxation of CG Master – SRRT is a solution to the LP relaxation of SRRT-IP. On the other hand, there are solutions to the LP relaxation of SRRT-IP being not feasible to the LP relaxation of CG Master – SRRT. As pointed out in Trick [82] the lower bound given by the LP relaxation of SRRT-IP can be strengthened by adding odd set constraints as shown in (6.8).  

(xi,j,p + xj,i,p)



∀ p ∈ P, T  ⊂ T, |T  | odd (6.8)

1

i∈T  j∈T \T 

Theorem 6.1. Solutions to the LP relaxation of CG Master – SRRT fulfill all odd set constraints (6.8). Proof. A solution to the LP relaxation of CG Master – SRRT corresponds to a convex combination of scheduled MDs in P Mp = {m | m ∈ P M, p(m) = p} for each period p. Trivially, each scheduled MD m ∈ P M fulfills (6.8). Let o(T  , p) be the value of matches between teams in T  and teams in T \ T  in period p. 





m∈P M,p(m)=p

i∈T 

j∈T \T  ,(i,j)∈m

o(T  , p) =



=

ym



m∈P M,p(m)=p

=1



i∈T  j∈T \T  ,(i,j)∈m

m∈P M,p(m)=p





ym

ym

1

6.2 Reformulation

107

  Thus, we obtain a lower bound being not lower than the one of SRRTIP after adding each odd set constraint. Beside the constraints assuring a single RRT we use two of those restrictions introduced in chapter 3 considering more than one MD: minimum number of breaks and changing opponents’ strengths. In order to consider breaks we employ binary variables bri,p as introduced in chapter 3. Constraints (6.9), (6.10), (6.11), and (6.12) are equivalent to (3.5), (3.6), (3.7), and (3.8).

Model 6.2: CG Master – Minimum Number of Breaks   ym − bri,p ≤ 1 ∀ i ∈ T, p ∈ P ≥2

(6.9)

→ j∈T,j=i m∈P M,(− i,j)∈m,



p(m)∈{p−1,p}



→ j∈T,j=i m∈P M,(− j,i)∈m, p(m)∈{p−1,p}

ym − bri,p

 

∀ i ∈ T, p ∈ P ≥2



1

bri,p



n−2

bri,p

∈ {0, 1} ∀ i ∈ T, p ∈ P ≥2

(6.10)

(6.11)

i∈T p∈P ≥2

(6.12)

In order to consider changing opponents’ strengths we employ binary variables esci,p known from chapter 3. Recall that S denotes the set of strength groups and the number of violations of the changing opponents’ strengths postulation is limited by escmax . Constraints (6.13), (6.14), and (6.15) correspond to (3.13), (3.14), and (3.15). We use integer variable esvio in order to allow solutions having more than escmax violations of changing opponents’ strengths restriction for specific team i ∈ T . The necessityfor this  is outlined  in section 6.4.2 in detail. We associate cost Mvio = i∈T j∈T \{i} p∈P ci,j,p with esvio to punish violations and obtain objective function (6.3a):  cm ym + Mvio esvio (6.3a) min m∈P M

Further concepts from chapter 3 considering relations between more than one MD can be easily handled analogously. However, we choose the two above since they are among the most prominent ones.

108

6 Branch–and–Price

Model 6.3: CG Master – Changing Strength Groups   ym − esci,p ≤ 1 ∀ i ∈ T, S  ∈ S, p ∈ P ≥2 j∈S  ,j=i m∈P M,(i,j)∈m,p−1≤p(m)≤p



(6.13) esci,p

− esvio



escmax ∀i

∈T

esci,p



{0, 1} ∀i ∈ T, p ∈ P ≥2

(6.14)

p∈P ≥2

(6.15) esvio



N0

(6.16)

6.2.2 Matching Subproblem According to the well known concept of CG developed in Gilmore and Gomory [45] the subproblem is to find the column, i.e. MD assigned to a specific period, having the lowest reduced cost. Reduced cost cm of m ∈ P M is based on original cost cm and dual variables provided by the optimal solution to the current restricted master problem as given in table 6.1. Dual variable βp is restricted to non-positive values if (6.5) is formulated as “no more than” constraint and is not restricted otherwise. Table 6.1. Dual Variables of CG Master eq.

dual variables

(6.4) (6.5) (6.9) (6.10) (6.11) (6.13) (6.14)

αi,j ≤ 0 ∀ i, j ∈ T, i < j βp ∈ R, βp ≤ 0 ∀ p ∈ P γi,p ≤ 0 ∀ i ∈ T, p ∈ P ≥2 δi,p ≤ 0 ∀ i ∈ T, p ∈ P ≥2 ≤0 ζi,S  ,p ≤ 0 ∀ i ∈ T, S  ∈ S, p ∈ P ≥2 ηi ≤ 0 ∀i ∈ T

Reduced cost cm of MD m ∈ P M is defined according to terms (6.17) to (6.19).

6.2 Reformulation



cm = cm −

αi,j − βp(m)

(6.17)

(i,j)∈m,i 0 k k might hold if solely infeasible MDs form period p of the current restricted master’s solution. If so, minm∈P M inf ,p(m)=p {cm } = 0 holds k since at least one infeasible MD is part of the current solution.   Theorem 6.3. Suppose an optimal solution to the current restricted master problem having value zcur is given. Then, a lower bound of the optimal solution’s value to the master problem (considering breaks and changing opponents’ strengths) is given by

zcur +



 min

p∈P

 

min m∈P Mf ,p(m)=p

 {cm } , 0 +

(γi,p + δi,p − ) bri,p +

i∈T p∈P ≥2

 

esci,p

i∈T p∈P ≥2



ζi,S  ,p −

S  ∈S







ηi 

i∈T

 esci,p − esvio  .

p∈P ≥2

Proof. Subtracting the right hand side of (6.5) and the left hand side of (6.5), respectively, multiplied with corresponding dual variables βp from objective function (6.3) leads to equation (6.43).  m∈P Mk

cm ym −

 p∈P

βp =

 m∈P Mk

cm ym −

 p∈P

βp



ym (6.43)

m∈P Mk ,p(m)=p

Doing so for (6.4), (6.9), (6.10), (6.11), (6.13), and (6.14), as well, leads to (6.44). Note that all dual variables used in this second step are not positive.

6.4 Column Generation





cm ym −

m∈P Mk

  



cm ym −

m∈P Mk

p∈P

 

αi,j





i∈T

ym −  

→ j∈T,j=i m∈P M  ,(− i,j)∈m,p−1≤p(m)≤p

 ym − bri,p  − (6.44)

k



  ym − bri,p  −

k

bri,p  −

i∈T S  ∈S p∈P ≥2

i∈T

ηi ≥

ym −

→ j∈T,j=i m∈P M  ,(− j,i)∈m,p−1≤p(m)≤p

 

ηi 





i∈T p∈P



p∈P ≥2



m∈P Mk ,p(m)=p

  δi,p 

i∈T p∈P ≥2



βp

  γi,p 

 





ζi,S  ,p − escmax



i∈T p∈P ≥2



i∈T

127

γi,p −

i∈T p∈P ≥2

  S  ∈S

 

m∈P Mk ,(i,j)∈m

i∈T j∈T,j>i

 

αi,j −

i∈T j∈T,j>i

δi,p − (n − 2)  −

p∈P ≥2

i∈T

βp −

p∈P

 

 p∈P ≥2

 ζi,S  ,p 





j∈S  ,j=i m∈P Mk ,(i,j)∈m,p−1≤p(m)≤p

 ym − esci,p  −



esci,p − esvio 

 The terms being subtracted from m∈P M cm ym on the left hand side of (6.44) sum up to the dual solution’s value and, therefore, to the current optimal solution’s value of the restricted master due to duality theory. Furthermore, we extract scheduled MDs’ reduced cost on the right hand side and obtain (6.45).

128

6 Branch–and–Price



cm ym − zcur ≥

m∈P Mk







ym cm −

m∈P Mk

αi,j − βp(m) −

(i,j)∈m,j>i





   γi,p(m) + δj,p(m) +γi,p(m)+1 + δj,p(m)+1  − + ,.+ ,.

− → (i,j)∈m

if p(m)≤|P |−1

if p(m)≥2





   ζi,S(j),p + ζj,S(i),p +ζi,S(j),p+1 + ζj,S(i),p+1  + + ,.+ ,.

− → (i,j)∈m

 

(6.45)

if p(m)≤|P |−1

if p(m)≥2

(γi,p + δi,p − ) bri,p +

i∈T p∈P ≥2

  i∈T S  ∈S p∈P ≥2

ζi,S  ,p esci,p −



 ηi 





esci,p − esvio 

p∈P ≥2

i∈T

We identify reduced cost of scheduled MD m ∈ P Mk (see (6.17), (6.18), and (6.19)) in lines 2, 3, and 4 of (6.45). With  m∈P Mk ,p(m)=p ym = 1 for each p ∈ P and minm ∈P Mk ,p(m )=p {cm } ≤ cm for all m ∈ P Mk with p(m) = p we transform inequality (6.45) into (6.46). 

cm ym ≥ zcur +

m∈P Mk

 

 p∈P

min

m∈P Mk ,p(m)=p

(γi,p + δi,p − ) bri,p +

i∈T p∈P ≥2

  i∈T p∈P ≥2

{cm }+

esci,p

 S  ∈S

ζi,S  ,p −

 i∈T

(6.46) 

ηi 



 esci,p − esvio 

p∈P ≥2

Employing lemma 6.1 in (6.46) we obtain (6.47).



cm ym ≥ zcur +

m∈P Mk

  i∈T



 min

1

min

m∈P Mkf ,p(m)=p

p∈P

esci,p

i∈T p∈P ≥2



ζi,S  ,p −

S  ∈S



(6.47) 

ηi 

i∈T

129

cm , 0 +

(γi,p + δi,p − ) bri,p +

p∈P ≥2

 

6.4 Column Generation



 esci,p − esvio 

p∈P ≥2

 

Term (6.47) directly implies theorem 6.3.

Our proof is restricted to the case where constraint (6.5) is formulated as equation. Adaption to constraint (6.5) being “no more than” constraint is straightforward. First, equation (6.43) is replaced by (6.43’) since βp ≤ 0 ∀p ∈ P .  m∈P Mk

cm ym −

 p∈P

βp ≥

 m∈P Mk

cm ym −

 p∈P

βp



ym (6.43’)

m∈P Mk ,p(m)=p

Then, relation (6.45) is obtained analogously and can be transformed to (6.46) since cm ≤ 0 for each m ∈ P Mk . Hence, the lower bound stated in theorem 6.3 holds no matter which formulation of the master problem is chosen. Therefore, we can obtain lower bounds while solving the master problem with “no more than” constraints even if this formulation does not provide an optimal solution to the master problem (see section 6.2.1 for details). Another restriction of the proof is that minimum number of breaks and changing opponents’ strengths are considered. Note that, again, adaption to other cases is simple and, consequently, we can state four lower bounds depending on which of both types of constraints are considered, see table 6.4. Note, that definition of cm depends on the structure of the master’s problem as well (see sections 6.2.1 and 6.2.2 for details). A fairly convenient property of this lower bound is that nearly no additional computational effort is to be made: Dual variables are known from the current optimal solution to the restricted master problem and the minimum reduced cost of scheduled MDs’ is computed in order to find pleasant columns for each period, anyway. Moreover, this lower bound implies a customization to the current node k of the search tree. Branching constraints are incorporated in the matching subproblem restricting solution space of M Dp .

130

6 Branch–and–Price

Table 6.4. Lower bounds for CG master problem according to reduced cost additional constraints

lower bound

   zcur + p∈P min minm∈P M f ,p(m)=p cm , 0 k    min no of breaks zcur + p∈P min minm∈P M f ,p(m)=p cm , 0 + k   ≥2 (γi,p + δi,p − ) bri,p i∈T p∈P   opponents’ strengths zcur + p∈P min minm∈P M f ,p(m)=p cm , 0 +    k c i∈T p∈P ≥2 esi,p S  ∈S ζi,S  ,p!−   c ≥2 esi,p − esvio i∈T ηi  p∈P   min no of breaks, zcur + p∈P min minm∈P M f ,p(m)=p cm , 0 + k   (γ + δ − ) bri,p + opponents’ strengths i∈T p∈P ≥2 i,pc i,p i∈T p∈P ≥2 esi,p S  ∈S ζi,S  ,p!−   c i∈T ηi p∈P ≥2 esi,p − esvio none

Lower Bound Based on Feasible Dual Solution Farley [37] proposes a lower bound for the optimal solution’s value to a general master problem. Its computation requires a minimum cost MD problem having a non-linear objective function to be solved in our case. Therefore, with respect to the computational effort we refuse to employ this method. Instead, we develop a lower bound based on the same idea. Our lower bound can be computed via solving a linear minimum cost MD problem (lined out to be solvable in polynomial time for most variants in section 6.4.1) at the cost of being less tight than the one proposed in Farley [37]. First, we give the dual problem CGD to the LP relaxation of the master’s formulation considering a minimum number of breaks and changing opponents’ strengths. The dual problem to the restricted master problem differs from CGD by (6.49) not being stated for each m ∈ P Mk but for a subset P Mk ⊂ P Mk of scheduled MDs currently under consideration. Theorem 6.4. Suppose an optimal solution to the current restricted master value zcur is given. Let  minproblem having  {cm } f m∈P M ,p(m)=p k for each p ∈ P . If lbp = max max {−cm +cm } , 1 f m∈P M ,p(m)=p k  maxm∈P M f ,p(m)=p {−cm + cm } < 0 and − maxp∈P {lbp } · i∈T ηi ≤ k

6.4 Column Generation Model 6.10: CGD  max

αi,j +

i,j∈T,j>i

 



βp +

p∈P

 



(γi,p + δi,p ) +  +

i∈T p∈P ≥2

ζi,S  ,p + escmax

i∈T S  ∈S p∈P ≥2

s.t.

131



ηi

(6.48)

i∈T

αi,j + βp(m) +

i,j∈T,i