Open Problems in Optimization and Data Analysis
Table of Contents:
Preface......Page 5
Citation......Page 12
Contents......Page 14
Contributors......Page 16
1 Introduction......Page 19
2 Some Open and Challenging Problems......Page 21
3 Concluding Remarks......Page 24
References......Page 25
1 Introduction......Page 27
2 Bharathi–Kempe–Salek Conjecture......Page 28
3 Approximation of Influence Spread......Page 29
4 Active Friending......Page 32
5 Rumor Blocking and Source Detection......Page 34
References......Page 37
1 Introduction......Page 41
2.1 M-Estimates with a Bounded ρ-Function......Page 43
2.4 Asymptotic Normality......Page 44
2.5 Equivariance......Page 45
3 Robust Local Estimation, Multivariate Least Trimmed Absolute Deviation Estimation......Page 46
3.1 Least Trimmed Absolute Deviation Estimator, LTAD......Page 47
3.2 Mixed Integer Programming Formulation......Page 48
3.4 Solution of LP Relaxation......Page 49
3.6 Conclusions and Open Problems......Page 50
4 Optimization Techniques for Robust Multivariate Location and Scatter Estimation......Page 51
4.1.1 Least Trimmed Euclidean Distance Estimator......Page 52
4.2 Detection of Scatter Outliers......Page 53
4.4 Conclusions and Open Problems......Page 55
5 Robust Regression with Penalized Trimmed Squares (PTS)......Page 56
5.1.2 Huber-Type Estimator as a Convex Quadratic Program......Page 57
5.2 Least Trimmed Squares......Page 58
5.3 Penalized Trimmed Squares......Page 59
5.3.1 Quadratic Mixed Integer Programming for PTS......Page 60
5.3.2 Fast-PTS......Page 61
5.4.1 ε-Insensitive Penalized Trimmed Squares......Page 62
5.4.3 Conclusions and Open Problems......Page 63
References......Page 64
Optimal Location Problems for Electric Vehicles Charging Stations: Models and Challenges......Page 66
1 Introduction......Page 67
2 Demand Forecasting......Page 68
3.1 Point Demand Location Models......Page 70
3.2 Flow Covering Models......Page 71
4 Challenges......Page 73
References......Page 75
1 Demand Selection Problems......Page 78
1.1 EOQ with Market Choice......Page 79
1.1.1 EOQ Model with Infinite Production Rate......Page 80
1.2 Selective Newsvendor......Page 81
1.3 Requirements Planning with Demand Selection......Page 83
1.3.1 ELSP with Order Selection......Page 84
1.3.2 ELSP with Market Choice......Page 85
1.4.1 Selective Newsvendor with Correlated Customer Demands......Page 86
1.4.3 Selective Newsvendor with Non-normal Demands......Page 87
2 Supplier Selection Problems......Page 88
2.1 Supplier Selection with Deterministic Demand......Page 89
2.2 Supplier Selection with Uncertain Demand......Page 91
2.3 Supplier Selection with Unreliable Suppliers......Page 92
2.4.1 Integrated Economic Lot-Sizing and Supplier Selection......Page 94
2.4.3 All-or-Nothing Supplier Reliability......Page 95
3 Combined Supplier and Demand Selection Problems......Page 96
References......Page 98
Open Problems in Green Supply Chain Modeling and Optimization with Carbon Emission Targets......Page 100
2 Open Problems: Formulation and Optimization Issues......Page 101
References......Page 106
1 Introduction......Page 108
2 Capacitated Vehicle Routing Problem......Page 109
3.1 Open Vehicle Routing Problem......Page 115
3.2 Vehicle Routing Problem with Time Windows......Page 116
4.1 One Commodity Pickup and Delivery Problem......Page 118
4.2 Vehicle Routing Problem with Simultaneous Pickup and Delivery......Page 119
5.1 Split Delivery Vehicle Routing Problem......Page 121
5.5 Multi-Vehicle Covering Tour Problem......Page 122
5.9 Periodic Vehicle Routing Problem......Page 123
5.12 Green Vehicle Routing Problem......Page 124
5.14 School Bus Routing and Scheduling Problem......Page 126
5.16 Team Orienteering Problem......Page 127
6.1 Vehicle Routing Problem with Stochastic Demands......Page 128
6.2 Vehicle Routing Problem with Stochastic Demands and Customers......Page 130
6.4 Vehicle Routing Problem with Fuzzy Demands......Page 131
7.1 The Chinese Postman Problem......Page 132
7.3 The Capacitated Arc Routing Problem......Page 133
8.1 Location Routing Problem......Page 134
8.2 Location Routing Problem with Stochastic Demands......Page 136
8.3 Inventory Routing Problem......Page 138
8.5 Ship Routing Problem......Page 139
References......Page 140
1 Introduction......Page 145
2 The New Mathematical Models......Page 148
3 Computational Results......Page 153
4 Extensive Results with Some Statistical Analyses......Page 158
References......Page 163
1 Introduction......Page 166
2.1 A Mathematical Model......Page 168
3.1 Mathematical Model......Page 172
4.1 A Mathematical Model......Page 174
4.2 Some Open Problems......Page 178
5.1 Mathematical Model......Page 179
5.2 Existing Methods and Challenges......Page 182
References......Page 183
1 Introduction......Page 186
2 Generalizations and Open Problems......Page 188
References......Page 194
1 Introduction......Page 197
2 Main Application Areas......Page 198
3.1 Impact of DG on Rigidity......Page 200
4.1 DGP in Given Dimensions......Page 201
4.4 Isometric Embeddings......Page 202
4.5 Matrix Completion......Page 203
5.1 Combinatorial Characterization of Rigidity......Page 204
5.1.1 Rigidity of Frameworks......Page 205
5.1.2 Infinitesimal Rigidity......Page 206
5.1.3 Generic Properties......Page 208
5.1.5 Combinatorial Characterization of Rigidity in the Plane......Page 209
5.1.6 Combinatorial Characterization of Rigidity in Space......Page 210
5.1.8 Relevance......Page 211
5.2.1 Complexity in the TM Model......Page 212
5.2.2 Complexity of Graph Rigidity......Page 214
5.2.3 NP-Hardness of the DGP......Page 215
5.2.5 EDMs and PSD Matrices......Page 217
5.2.6 Relevance......Page 218
5.3.1 Either Finite or Uncountable......Page 219
5.3.2 Loop Flexes......Page 220
5.3.3 Solution Sets of Protein Backbones......Page 221
5.3.4 Clifford Algebra......Page 224
5.3.5 Phase Transitions Between Flexibility and Rigidity......Page 225
5.3.6 Relevance......Page 226
5.4.1 Determining Nanostructures from Spectra......Page 227
5.4.2 Protein Shape from NOESY Data......Page 229
6 Conclusion......Page 230
References......Page 231
1.1 Problem Definition......Page 238
1.1.3 Optimization Problem of the Inscribed Ball in Polyhedral Set......Page 239
2 Problem Formulation......Page 240
3.1 Some Notations......Page 241
3.2 A Few Preliminary Results......Page 242
3.3 Expression of r₁ via r₂......Page 243
3.4 Existence of Global Maximum of r₁² + r₂²......Page 245
3.5 Green Candidate and Red Candidate......Page 246
4 Maximum of r₁² + r₂²: Type 1, PT Case......Page 248
4.1 Max Q(cot β − y cot β − tan α, y): Searching on the Green Line of the Domain......Page 249
4.2 Max Q(1, y): Searching on the Red Line of the Domain......Page 251
5 Maximum of r₁² + r₂²: Type 2, QR Case......Page 253
6 Maximum of r₁² + r₂²......Page 254
7.1 The Cutting Line Goes Through All Three Sides of the Triangle......Page 255
7.2 Malfatti n=2 Problem as Corollary......Page 257
Appendix......Page 259
References......Page 260
1 Introduction......Page 262
2.1 Initial Solution......Page 264
2.2.1 K-Means Heuristic......Page 266
2.2.2 H-Means Heuristic......Page 267
2.2.3 J-Means Heuristic......Page 268
2.3.2 Nested VND......Page 269
3 General Variable Neighborhood Search......Page 271
4.1 Instances......Page 272
4.2 Parameters......Page 273
4.4 Medium and Large Instances......Page 274
5 Conclusions and Open Problems......Page 275
Appendix 1: Small Instances......Page 276
Appendix 2: Medium and Large Instances......Page 281
References......Page 282
1 Introduction......Page 284
2 Algorithm Portfolios......Page 286
3 Open Problems in Designing Algorithm Portfolios......Page 287
3.1 Selection of Constituent Algorithms......Page 288
3.2 Allocation of Computation Budget......Page 290
3.3 Interaction of Constituent Algorithms......Page 293
3.4 The Role of Parallelism......Page 294
4 Conclusions......Page 295
References......Page 296
1 Introduction......Page 298
2 Quasi-Integrality and the Integral Simplex Method......Page 300
3 All-Integer Pivots......Page 304
4 Cycling for All-Integer Pivots......Page 312
5 Some Open Questions......Page 314
References......Page 315
1 Introduction......Page 317
1.1 About Jacques F. Benders......Page 318
2 Benders Decomposition Method......Page 319
3.1 Problem 1: What Is the Theoretical Proof of the Effectiveness of Acceleration Methods of Benders Decomposition Algorithm?......Page 322
3.3 Problem 3: Is There an Optimal Way to Decompose a Given Problem?......Page 323
3.4 Problem 4: Fluctuation of the Upper Bound (in a Minimization Problem)......Page 324
3.5 Problem 5: Does Producing More Optimality than Feasibility Cuts Lead to Faster Convergence of Benders Algorithm?......Page 326
3.6 Problem 6: In Multi-Cut Generation Strategies, What Is the Optimal Number of Cuts to Be Generated in Each Iteration So that the Master Problem Is Not Overloaded?......Page 327
References......Page 328
An Example of Nondecomposition in Data Fitting by Piecewise Monotonic Divided Differences of Order Higher Than Two......Page 330
1 Introduction......Page 331
2 The Example......Page 333
References......Page 340
Springer Optimization and Its Applications  141

Panos M. Pardalos Athanasios Migdalas Editors

Open Problems in Optimization and Data Analysis

Springer Optimization and Its Applications, Volume 141

Managing Editor: Panos M. Pardalos, University of Florida
Editor, Combinatorial Optimization: Ding-Zhu Du, University of Texas at Dallas
Advisory Board: J. Birge, University of Chicago; S. Butenko, Texas A&M University; F. Giannessi, University of Pisa; S. Rebennack, Karlsruhe Institute of Technology; T. Terlaky, Lehigh University; Y. Ye, Stanford University

Aims and Scope

Optimization has been expanding in all directions at an astonishing rate during the last few decades. New algorithmic and theoretical techniques have been developed, the diffusion into other disciplines has proceeded at a rapid pace, and our knowledge of all aspects of the field has grown even more profound. At the same time, one of the most striking trends in optimization is the constantly increasing emphasis on the interdisciplinary nature of the field. Optimization has been a basic tool in all areas of applied mathematics, engineering, medicine, economics, and other sciences. The series Springer Optimization and Its Applications publishes undergraduate and graduate textbooks, monographs, and state-of-the-art expository works that focus on algorithms for solving optimization problems and also study applications involving such problems. Some of the topics covered include nonlinear optimization (convex and nonconvex), network flow problems, stochastic optimization, optimal control, discrete optimization, multi-objective programming, description of software packages, approximation techniques, and heuristic approaches.

More information about this series at http://www.springer.com/series/7393

Panos M. Pardalos • Athanasios Migdalas Editors

Open Problems in Optimization and Data Analysis


Editors

Panos M. Pardalos
Industrial and Systems Engineering Department, University of Florida, Center For Applied Optimization, Gainesville, FL, USA

Athanasios Migdalas
Industrial Logistics, ETS Institute, Luleå University of Technology, Norrbotten, Sweden
Aristotle University of Thessaloniki, Department of Civil Engineering, Thessaloniki, Central Macedonia, Greece

ISSN 1931-6828    ISSN 1931-6836 (electronic)
Springer Optimization and Its Applications
ISBN 978-3-319-99141-2    ISBN 978-3-319-99142-9 (eBook)
https://doi.org/10.1007/978-3-319-99142-9
Library of Congress Control Number: 2018962966

© Springer Nature Switzerland AG 2018

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The Deucalion Summer Institute for Advanced Studies in Optimization, Mathematics, and Data Sciences was established in 2015 by Distinguished Professor Panos M. Pardalos at Drossato, his birthplace, in the mountainous region of Argithea in Thessaly, Central Greece.

The name of the institute is based on an Ancient Greek myth of creation. Deucalion (ΔΕΥΚΑΛΙΩΝ in Greek), son of Prometheus (ΠΡΟΜΗΘΕΥΣ), and his wife Pyrrha (ΠΥΡΡΑ) were, according to Greek mythology, the only survivors of the deluge sent by Zeus (ΖΕΥΣ) upon the people of the earth. They survived, following Zeus' order, on the top of Mount Athos in the Chalkidiki peninsula in Macedonia, Northern Greece, or on the top of Mount Othrys in Thessaly, Central Greece, or on a top close to the oracle of Dodona in Epirus, Northwestern Greece. These are obviously local variations of the myth that have survived in the Ancient Greek bibliography, although other versions may have existed. One such complete text of the myth, cited in the beginning of the current book, survived in the work Bibliotheca (ΒΙΒΛΙΟΘΗΚΗ) of Apollodorus (born ca. 180 BC). He adopted the Athenian version of the myth, in which Deucalion and Pyrrha landed on top of Mount Parnassus, northwest of Attica and close to the oracle of Delphi. In naming the institute, the Thessalian version of the myth was adopted. Deucalion and Pyrrha re-populated the earth by throwing stones over their shoulders. Pyrrha's stones became women, and Deucalion's stones became men. They gave birth to many children, of whom Hellen (ΕΛΛΗΝ, i.e., "Greek"), their firstborn, became the forefather of all Greeks ("Hellenes").

The focus of the Deucalion Summer Institute (DSI) is to organize summer schools that concentrate on certain aspects of recent mathematical developments and data science. In such schools, each day is dedicated to discussions regarding one specific topic. The idea is to encourage thinking "out of the box" to generate new ideas, initiate research collaboration, and identify new research directions. The summer schools are inspired by and organized very much in the spirit of the "peripatetic school/lyceum" of Aristotle. The term "peripatetic" is a transliteration of the Greek word περιπατητικός (peripatêtikos), which means "of walking." Thus, although topic presentations are necessarily done in parlour, observations, comments, analyses, further developments, and future collaborations are discussed while taking a walk, a "peripatos" (περίπατος), in the beautiful and dramatic mountainous surroundings. There are no registration fees for these arrangements; however, participation is limited and only by invitation. Moreover, each participant is responsible for his/her own expenses.

The current book consists of material presented at or prepared for such a DSI peripatetic summer school with the theme "Challenges and Open Problems in Optimization and Data Science." Challenges and open problems as well as applications are motivating powers that drive research, thinking, and innovation and lead to new results. The purpose of this book is to survey some recent developments in the mentioned area and provide the reader with challenges and open problems that arise in several topics of the subject area. Our objective is relatively moderate, namely, to highlight challenges and open problems of diverse importance on selected topics of rather broad interest, and to intrigue and motivate further research and cooperation. The book is comprised of an introductory note on open problems by the editors and 16 contributed chapters which cover topics, surveys, challenges, and open problems covering aspects in:

• Data science
• Mathematical modeling in logistics and traffic planning
• Bi-level programming and game theory
• Geometry
• Optimization algorithms

The first issue is addressed by the first two chapters. The chapter "Social Influence-Based Optimization Problems" by Chao Li, Jing Yuan, and Ding-Zhu Du considers information diffusion and sharing through social networking websites such as Facebook and Twitter and the social influence resulting from such online social networks, which facilitate connections between people based on shared interests, values, and membership in particular groups. There are many optimization problems stemming from the study of social influence, and this chapter reviews a few of them, presenting a short survey of the bibliography, and discusses open problems related to them. The second chapter, "New Statistical Robust Estimators, Open Problems" by George Zioutas, Chris Chatzinakos, and Athanasios Migdalas, is concerned with the problem of detecting outliers in data sets and developing methods that are robust against such outliers. The authors emphasize high-breakdown estimators, which can deal with heavily contaminated data sets, give an overview of recent popular robust methods, and present a new approach based on operational research techniques. Open problems on the new robust procedures, asking for improved robustness and efficiency of the proposed estimators, are presented. The third chapter, "Optimal Location Problems for Electric Vehicles Charging Stations: Models and Challenges" by A. Karakitsiou, A. Migdalas, and P. M. Pardalos, is a transitional paper as it is concerned both with data sets for demand forecasting and with mathematical modeling of optimization problems connected to the relatively recent adoption of electric vehicles (EVs). Since high traffic volumes, congestion, noise and air pollution, consumption of nonrenewable resources, and greenhouse emissions pose significant challenges to sustainability, EVs have consequently come into focus for governments and enterprises. However, despite governmental intervention and support and a host of positive market conditions, the adoption rates of EVs have fallen short of initial goals. This chapter reviews reasons for the slow adoption of EVs, provides some insights into the recent developments of optimization problems formulated in order to support and promote EV adoption, and discusses several challenges that should be addressed. The next five chapters are concerned with modeling issues in logistics, traffic planning, and processor scheduling. Thus, in the fourth chapter, "Supply and Demand Selection Problems in Supply Chain Planning," Roshanak Mohammadivojdan and Joseph Geunes discuss a class of optimization problems arising in supply chain planning in which a profit-maximizing producer may select from among a set of suppliers and/or customers. They discuss existing models in this category of supply chain selection problems and limitations of these models and present open problems and corresponding opportunities for further research in the subject area. The fifth chapter, "Open Problems in Green Supply Chain Modeling and Optimization with Carbon Emission Targets" by Konstantina Skouri, Angelo Sifaleras, and Ioannis Konstantaras, is concerned with the green supply chain. As more industries try to adopt green logistics in their overall strategy under the pressure
from customers and competitors as well as the need to comply with new rules imposed by regulatory agencies, focus not only on financial costs but also on the impact on the environment and the society as a whole has become increasingly important. The carbon tax and emissions trading mechanisms are the most effective market-based choices used to lower carbon emissions. This chapter investigates how these mechanisms are incorporated into the development of inventory lot-sizing models and presents some open problems associated with the lot-sizing problem under such emissions constraints. In the sixth chapter, "Variants and Formulations of the Vehicle Routing Problem" by Yannis Marinakis, Magdalene Marinaki, and Athanasios Migdalas, a large number of formulations of the vehicle routing problem (VRP) in different situations are presented and discussed. From a practical point of view, the vehicle routing problem is one of the most important problems arising in supply chain management, and finding an optimal set of routes helps the decision-makers to reduce the cost of the supply chain, to service customers at the right time with the right quantities of the right product, and also to optimize energy consumption and minimize impact on the environment. VRPs constitute a good source of inspiration for developing new optimization algorithms and a good benchmarking platform for testing and comparing such algorithms. VRPs can be viewed as open problems in the sense that continuously new real-life formulations of these problems are added to those already existing, with increasingly more complicated and demanding objectives and constraints that need to be formulated and solved. The seventh chapter, "New MIP model for Multiprocessor Scheduling Problem with Communication Delays" by Abdessamad Ait El Cadi and Nenad Mladenović, proposes a new mixed-integer program (MIP) formulation for the task scheduling problem on a multiprocessor system, taking into account communication delays and precedence constraints. The new proposed formulation reduces both the number of variables and the number of constraints, when compared to the best mathematical programming formulations from the literature. Extensive tests are performed in order to assess the quality of this model and to discover which parameters affect its performance; in particular, the impact of the network architecture, the communication delays, and the number of tasks is investigated. Although these results are significant, some open problems still remain that need to be addressed. The eighth chapter, "On Optimization Problems in Urban Transport" by Tran Duc Quynh and Nguyen Quang Thuan, is a transitional paper as it considers modeling optimization problems in a bi-level setting, taking into consideration the interplay between different decision-makers. The chapter reviews certain important urban transport problems that are vital in developing countries. These problems are formulated as nonlinear, discrete, bi-level, and multi-objective optimization problems. Finding an efficient solution method for each problem is still a great challenge. Moreover, the reformulation of the existing mathematical models in a solvable form is also an open question. In the ninth chapter, "Some Aspects of the Stackelberg Leader/Follower Model" by L. Mallozzi, R. Messalli, S. Patrì, and A. Sacco, bi-level problems are considered
in a theoretical setting. Different aspects of the Stackelberg Leader/Follower model and generalizations of the fundamental model introduced by H. von Stackelberg are considered, and some related open questions are highlighted. The next two contributions are concerned with specific issues relating to geometry. The tenth chapter, "Open Research Areas in Distance Geometry" by Leo Liberti and Carlile Lavor, is concerned with distance geometry (DG), which is based on the inverse problem that asks to find the positions of points, in a Euclidean space of given dimension, that are compatible with a given set of distances. The authors briefly introduce the field, provide a review of application areas, emphasize the impact of DG on rigidity structures, which is a very important application in statics and construction engineering, provide a lengthy discussion of open research areas, and discuss computational complexity issues. In the 11th chapter, "A Circle Packing Problem and Its Connection to Malfatti's Problem" by D. Munkhdalai and R. Enkhbat, the authors succeed in analytically solving the geometric problem of how to split two sides of a given triangle by a line such that the total area of two inscribed circles, one on each side of the line, is maximized. They also consider the famous Malfatti's problem, posed in 1803 by Gianfrancesco Malfatti, which requires the determination of the three circular columns of marble of possibly different sizes which, when carved out of a right triangular prism, would have the largest possible total cross section. This problem is actually equivalent to finding the maximum total area of three circles which can be packed inside a right triangle of any shape without overlapping. Malfatti gave the solution as three circles (called the Malfatti circles) tangent to each other and to two sides of the triangle. However, it was shown in 1930 that the Malfatti circles were not always the best solution to the stated problem, and, even worse, in 1967, it was shown that they are never the optimal solution. In the present chapter the authors show that Malfatti's problem is a particular case of the problem they are solving. The remaining chapters are concerned with algorithmic issues in optimization. The 12th chapter, "Review of Basic Local Searches for Solving the Minimum Sum-of-Squares Clustering Problem" by T. Pereira, D. Aloise, J. Brimberg, and N. Mladenović, presents a review of the well-known K-means, H-means, and J-means heuristics, and their variants, which are used to solve the minimum sum-of-squares clustering problem. The authors develop two new local searches that combine these heuristics in a nested and sequential structure, also referred to as variable neighborhood descent. In order to show how these local searches can be implemented within a meta-heuristic framework, they are applied in the local improvement step of two variable neighborhood search (VNS) procedures. Computational experiments are carried out which suggest that this new and simple application of VNS is comparable to the state of the art. The selection and implementation of neighborhood structures are key issues to consider in solving optimization problems heuristically, and the authors raise several challenging research questions related to this topic.

In the 13th chapter, "On the Design of Metaheuristics-Based Algorithm Portfolios" by Dimitris Souravlias and Konstantinos E. Parsopoulos, the authors take the heuristic approach to optimization beyond the typical meta-heuristic field by considering the notion of "portfolio of algorithms." The reason for this is that the selection of a specific meta-heuristic algorithm for solving a given problem constitutes a difficult decision. This can be attributed to possible performance fluctuations of the meta-heuristic during its application either on a single problem or on different instances of a specific problem type. Algorithm portfolios offer an alternative where, instead of using a single solver, a number of different solvers or variants of one solver are concurrently or interchangeably used to tackle the problem at hand by sharing the available computational resources. The design of algorithm portfolios requires a number of decisions from the practitioner's side. This chapter exposes the essential open problems related to the design of algorithm portfolios, namely, the selection of constituent algorithms, resource allocation schemes, interaction among the algorithms, and parallelism issues. Recent research trends relevant to these issues are presented, offering motivation for further elaboration. The following two chapters are concerned with essentially optimizing, i.e., "exact" algorithms. The 14th chapter, "Integral Simplex Methods for the Set Partitioning Problem: Globalization and Anti-cycling" by Elina Rönnberg and Torbjörn Larsson, is concerned with the application of a simplex method for solving an integer problem, the set partitioning problem, to optimality. Indeed, the set partitioning problem has the quasi-integrality property, which means that every edge of the convex hull of the integer-feasible solutions is also an edge of the polytope of the linear programming relaxation. This property enables, in principle, the use of solution methods that find improved integer solutions through simplex pivots that preserve integrality; pivoting rules with this effect can be designed in a few different ways. Although seemingly promising, the application of these approaches involves inherent challenges. The purpose of this chapter is to lay a foundation for research on these topics. The 15th chapter, "Open problems on Benders Decomposition Algorithm" by Georgios K. D. Saharidis and Antonios Fragkogios, is concerned with the famous Benders decomposition algorithm for mixed-integer programming problems. The method is based on the idea of exploiting the structure of an optimization problem so that its solution can be obtained as the solution of several smaller subproblems. The authors review the fundamental method as proposed by Jacobus F. Benders and present several open problems related to its application. Finally, in the 16th chapter, "An Example of Nondecomposition in Data Fitting by Piecewise Monotonic Divided Differences of Order Higher than Two" by I. C. Demetriou, the author considers the problem of making the least sum of squares change to the data such that the sequence of the divided differences of order m of the fit changes sign at most σ times, for given integers m and σ and given n measurements of values of a real function of one variable that include random errors. The main difficulty in these calculations is that there are about O(n^σ) combinations of positions of sign changes, so that it is impracticable to test each one separately. Since this minimization calculation has local minima, a general
optimization algorithm can stop at a local minimum that need not be a global one. It is an open question whether there exists an efficient algorithm that can compute a global solution to this important problem for general m and σ. The author presents an example which shows that the minimization calculation when m ≥ 3 and σ ≥ 1 may not be decomposed into separate calculations on subranges of adjacent data such that the associated divided differences of order m are either nonnegative or nonpositive. Therefore, the example rules out the possibility of solving the general problem by a similar dynamic programming calculation.

Clearly, the present book can by no means claim to completely cover the challenges and open problems in the field. However, we do hope that the material presented here will sufficiently intrigue researchers and motivate further collaboration and research in the exciting subject field. The special characteristic of the book is that it acts as an open platform of expression for all its authors to present their views, understanding, and results on complex and fascinating topics.

In acknowledgment of all contributions in this book, we would like to express our special thanks to all authors who participated in this collective effort and also to all reviewers who have helped improve the quality and presentation of the chapters. Last but not least, we would like to acknowledge the support and assistance that the Springer staff has provided in the preparation of this book.

Gainesville, FL, USA    Panos M. Pardalos
Norrbotten, Sweden    Athanasios Migdalas

Citation

The myth of Deucalion as depicted in the “Charta” of Rhegas Pheraeos (Rigas Feraios), 1797. Source: http://digital.lib.auth.gr/record/127326/files/ (Aristotle University Digital Collections)


[Ancient Greek text of the myth, from Apollodorus, Bibliotheca (ΒΙΒΛΙΟΘΗΚΗ), ca. 150 BC; the polytonic Greek original is not reproduced here.]

Citation

Prometheus made the humans by using water and earth and gave them fire, secretly from Zeus ("Jupiter" in Latin), hiding it in a cane... Deucalion was a child of Prometheus. He reigned in Phthia and married Pyrrha, the daughter of Epimetheus and Pandora, the first woman the gods had made. When Zeus wanted to destroy the bronze generation, Deucalion, with the advice of Prometheus, built an ark, and after putting in the necessary provisions, he entered it with Pyrrha. Zeus, having shed a lot of rain from the sky, flooded most parts of Greece, so that all the people drowned . . . it was then that the mountains of Thessaly were separated and everything was destroyed except the Isthmus and the Peloponnese. Deucalion in the ark, having been in the sea for nine days and nights, drove ashore on Mount Parnassus and there, when the rain ceased, he sacrificed to Zeus the Savior. Zeus then, after sending Hermes to him, allowed him to ask for what he wanted. So he asked for people to be born. And Zeus told him to throw stones over his head; what was thrown by Deucalion became men, and those thrown by Pyrrha became women. That is why the peoples ("laos" in Greek) were metaphorically named by the word "laas", which means stone. Many children were born to Pyrrha and Deucalion, the firstborn being "Hellen" (i.e., "Greek"), who, some say, was fathered by Zeus.

Apollodorus, Bibliotheca (Library), ca. 150 BC

Contents

A Note on Open Problems and Challenges in Optimization Theory and Algorithms . . . . . . 1
A. Migdalas and P. M. Pardalos

Social Influence-Based Optimization Problems . . . . . . 9
Chao Li, Jing Yuan, and Ding-Zhu Du

New Statistical Robust Estimators, Open Problems . . . . . . 23
George Zioutas, Chris Chatzinakos, and Athanasios Migdalas

Optimal Location Problems for Electric Vehicles Charging Stations: Models and Challenges . . . . . . 49
A. Karakitsiou, A. Migdalas, and P. M. Pardalos

Supply and Demand Selection Problems in Supply Chain Planning . . . . . . 61
Roshanak Mohammadivojdan and Joseph Geunes

Open Problems in Green Supply Chain Modeling and Optimization with Carbon Emission Targets . . . . . . 83
Konstantina Skouri, Angelo Sifaleras, and Ioannis Konstantaras

Variants and Formulations of the Vehicle Routing Problem . . . . . . 91
Yannis Marinakis, Magdalene Marinaki, and Athanasios Migdalas

New MIP model for Multiprocessor Scheduling Problem with Communication Delays . . . . . . 129
Abdessamad Ait El Cadi, Mustapha Ratli, and Nenad Mladenović

On Optimization Problems in Urban Transport . . . . . . 151
Tran Duc Quynh and Nguyen Quang Thuan

Some Aspects of the Stackelberg Leader/Follower Model . . . . . . 171
L. Mallozzi, R. Messalli, S. Patrì, and A. Sacco

Open Research Areas in Distance Geometry . . . . . . 183
Leo Liberti and Carlile Lavor

A Circle Packing Problem and Its Connection to Malfatti's Problem . . . . . . 225
D. Munkhdalai and R. Enkhbat

Review of Basic Local Searches for Solving the Minimum Sum-of-Squares Clustering Problem . . . . . . 249
Thiago Pereira, Daniel Aloise, Jack Brimberg, and Nenad Mladenović

On the Design of Metaheuristics-Based Algorithm Portfolios . . . . . . 271
Dimitris Souravlias and Konstantinos E. Parsopoulos

Integral Simplex Methods for the Set Partitioning Problem: Globalisation and Anti-Cycling . . . . . . 285
Elina Rönnberg and Torbjörn Larsson

Open Problems on Benders Decomposition Algorithm . . . . . . 305
Georgios K. D. Saharidis and Antonios Fragkogios

An Example of Nondecomposition in Data Fitting by Piecewise Monotonic Divided Differences of Order Higher Than Two . . . . . . 319
I. C. Demetriou

Contributors

Daniel Aloise Department of Computer and Software Engineering, Polytechnique Montréal, Montreal, QC, Canada
Jack Brimberg Department of Mathematics and Computer Science, The Royal Military College of Canada, Kingston, ON, Canada
Abdessamad Ait El Cadi Université Polytechnique Hauts-De-France (UPHF)/LAMIH CNRS UMR 8201, Valenciennes Cedex 9, France
Chris Chatzinakos Bio-Technology Research Park, Richmond, VA, USA
I. C. Demetriou Division of Mathematics and Informatics, Department of Economics, National and Kapodistrian University of Athens, Athens, Greece
Ding-Zhu Du Department of Computer Science, University of Texas at Dallas, Richardson, TX, USA
R. Enkhbat National University of Mongolia, Ulaanbaatar, Mongolia
Antonios Fragkogios Department of Mechanical Engineering, Polytechnic School, University of Thessaly, Volos, Greece
Joseph Geunes Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, USA
A. Karakitsiou Technological Educational Institute of Central Macedonia, Department of Business Administration, Serres, Greece
Ioannis Konstantaras Department of Business Administration, School of Business Administration, University of Macedonia, Thessaloniki, Greece
Torbjörn Larsson Department of Mathematics, Linköping University, Linköping, Sweden
Carlile Lavor Department of Applied Mathematics (IMECC-UNICAMP), University of Campinas, Campinas, SP, Brazil


Chao Li Department of Computer Science, University of Texas at Dallas, Richardson, TX, USA
Leo Liberti CNRS LIX, Ecole Polytechnique, Palaiseau, France
L. Mallozzi Department of Mathematics and Applications, University Federico II, Naples, Italy
Magdalene Marinaki Technical University of Crete, School of Production Engineering and Management, Chania, Greece
Yannis Marinakis Technical University of Crete, School of Production Engineering and Management, Chania, Greece
R. Messalli Department of Economics and Statistics, University Federico II, Naples, Italy
Athanasios Migdalas Industrial Logistics, ETS Institute, Luleå University of Technology, Norrbotten, Sweden; Aristotle University of Thessaloniki, Department of Civil Engineering, Thessaloniki, Central Macedonia, Greece
Nenad Mladenović Emirates College of Technologies, Abu Dhabi, UAE; Mathematical Institute, SASA, Belgrade, Serbia
Roshanak Mohammadivojdan University of Florida, Department of Industrial and Systems Engineering, Gainesville, FL, USA
D. Munkhdalai National University of Mongolia, Ulaanbaatar, Mongolia
Panos M. Pardalos Industrial and Systems Engineering Department, University of Florida, Center For Applied Optimization, Gainesville, FL, USA
Konstantinos E. Parsopoulos Department of Computer Science & Engineering, University of Ioannina, Ioannina, Greece
S. Patrì Department of Methods and Models for Economics, Territory and Finance, Sapienza University, Rome, Italy
Thiago Pereira Universidade Federal do Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
Tran Duc Quynh Faculty of Information Technology, Vietnam National University of Agriculture, Hanoi, Vietnam
Elina Rönnberg Department of Mathematics, Linköping University, Linköping, Sweden
A. Sacco Department of Methods and Models for Economics, Territory and Finance, Sapienza University, Rome, Italy
Georgios K. D. Saharidis Department of Mechanical Engineering, Polytechnic School, University of Thessaly, Volos, Greece


Angelo Sifaleras Department of Applied Informatics, School of Information Sciences, University of Macedonia, Thessaloniki, Greece
Konstantina Skouri Department of Mathematics, University of Ioannina, Ioannina, Greece
Dimitris Souravlias Logistics Management Department, Helmut-Schmidt University, Hamburg, Germany
Nguyen Quang Thuan International School (VNU-IS), Vietnam National University-Hanoi (VNU), Hanoi, Vietnam
Jing Yuan Department of Computer Science, University of Texas at Dallas, Richardson, TX, USA
George Zioutas Department of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, Thessaloniki, Central Macedonia, Greece

A Note on Open Problems and Challenges in Optimization Theory and Algorithms

A. Migdalas and P. M. Pardalos

Abstract In this note, we review some open problems and challenges concerning optimization theory and algorithms.

Keywords: Combinatorial optimization · Computational complexity · Global optimization · Meta-heuristics

A. Migdalas
Industrial Logistics, ETS Institute, Luleå University of Technology, Norrbotten, Sweden
Aristotle University of Thessaloniki, Department of Civil Engineering, Thessaloniki, Central Macedonia, Greece
e-mail: [email protected]; [email protected]

P. M. Pardalos
Industrial and Systems Engineering Department, University of Florida, Center For Applied Optimization, Gainesville, FL, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_1

1 Introduction

Optimization is an important subject with a wide field of applications in decision-making problems that arise in economics, engineering, logistics and transportation, traffic planning, location and layout of facilities, telecommunications, social and biological networks, machine learning, and other fields (see, e.g., [12]). According to Werner [25], Cantor [9] observed that the first formal optimization problem in history was stated, ca. 300 BC, in Euclid's Elements of Geometry (see, e.g., [11]), Book VI; it is concerned with the identification of a point on a triangle's side such that the resulting parallelogram, obtained by drawing the parallel lines to the other two triangle sides from this point, has maximal area. Werner also cites another ancient Greek mathematician, Heron (ca. 100 BC), who solved another optimization problem in geometry, that of finding a point on a given line such that the sum of the distances from two other given points is minimal. Thus, it seems that historically, optimization has its roots in geometry; indeed, optimization still plays an important role in geometry (see, e.g., [5]) and particularly in computational geometry (see, e.g., [4, 8]).

While solution algorithms have been developed for most interesting optimization problems, and although several different solution approaches have been proposed for the same optimization problems, their performance may not always be satisfactory in practice and/or in theory. Thus, while every NP-hard problem can be solved in exponential time by exhaustive search, the question whether we can do better than trivial enumeration naturally arises. Could it ever be possible to solve NP-hard problems in quasi-polynomial time?

Modern heuristic and meta-heuristic approaches (from the Greek "heuriskein", εὑρίσκειν, to find, to discover) have become so popular that a new optimization branch has been born that devises algorithms inspired from physics or nature, with such names as simulated annealing, ant colony, or particle swarm optimization. However, their mathematical properties and convergence remain largely unaddressed, and many open problems need attention (see, e.g., [29]). Moreover, the "No Free Lunch Theorem" shows that all non-resampling algorithms perform equally, averaged over all problems; that is, no such algorithm can outperform any other under any metric over all problems (see, e.g., [18]). Similar practical and theoretical considerations have led to the development of algorithm portfolios for certain optimization problems (see, e.g., [13]), whose theoretical and computational properties remain, however, largely unexplored.

Under suitable convexity assumptions in nonlinear optimization, "exact" optimization algorithms find the global optimal solution if one exists. However, such "exact" algorithms are of limited use for global optimization problems, although they may be adopted as heuristics in "local" search techniques (see, e.g., [10]). Moreover, there are several issues here: One is the fact that there are optimization problems for which only global optimization matters [16]. Another is the complexity of determining the convexity of the problem [2, 23]. Black-box optimization, where the objective function is known only by observing different sets of input–output pairs from a computational simulation or experiment, is the reign of heuristics and meta-heuristics. So, questions can be raised both with respect to problem convexity and to the "No Free Lunch Theorem" in continuous domains [3, 24].

Computing power is essential for all optimization algorithms. But what are the limits of what humans can compute, and what are the limits of the machines? How far can emerging technologies, including quantum computers, stretch these limits, and what will be their impact on optimization algorithms? Such issues are discussed in, e.g., [7, 20] and [21].


2 Some Open and Challenging Problems

Nonlinear optimization is a great source of challenges and open problems. It is well known in this context that a global optimum can be provably attained by optimization algorithms based on local properties only under suitable convexity assumptions. But how easy is it to prove convexity? One of the seven open problems in complexity theory for numerical optimization listed by Pardalos and Vavasis in [23] is the following: Given a degree 4 polynomial in n variables, what is the complexity of determining whether this polynomial describes a convex function? It was shown by Ahmadi et al. [2] that, unless P=NP, there exists no polynomial-time, not even pseudo-polynomial-time, algorithm that can decide whether a multivariate polynomial of degree four or higher even degree is globally convex. They even show that deciding strict, strong, quasi- and pseudo-convexity of polynomials of degree four or higher even degree is strongly NP-hard, while quasi- and pseudo-convexity of odd degree polynomials can be decided in polynomial time. So, the question whether determining convexity of a general function is a "decidable problem" is open. Another important open problem related to convexity is whether, in the case of d.c. optimization, it is possible to characterize the "best" d.c. decomposition of a function into the difference of two convex functions.

The problem of minimizing a convex function f over a convex set X ⊆ ℝⁿ, where the only access to f is via a stochastic gradient oracle, which given a point x ∈ X returns a random vector g(x) such that E[g(x)] = ∇f(x), is known as the "stochastic exp-concave optimization problem" and is of great importance as it captures several fundamental problems in machine learning [1, 19]. Optimization algorithms, such as the stochastic gradient descent algorithm, are used in order to obtain a point x̂ for which f(x̂) − min_{x ∈ X} f(x) ≤ ε, for given target accuracy ε, either in expectation or with high probability. Despite the importance of this problem, current algorithms scale poorly with the dimension n of the problem. Therefore, algorithms with fast rate and which scale better with the dimension are sought. Attempts are discussed in, e.g., [1, 19].

There are several challenging problems in continuous global optimization, both theoretical and algorithmic ones, such as whether it is possible to derive general optimization conditions, whether it is possible to decide upon the feasibility in the case of large constrained problems, and how to utilize sparsity and other inherent structures in order to attack large-scale problems. However, even certain fundamental questions regarding optimality of global optimization problems may indeed be very hard to answer. Consider, for instance, the quadratic problem:

    min f(x) = cᵀx + ½ xᵀQx
    s.t. x ≥ 0,

where Q is an arbitrary n × n symmetric matrix and x ∈ ℝⁿ.
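As a concrete illustration of the stochastic gradient oracle model mentioned above, the following minimal sketch (our own construction, not taken from [1, 19]; all names and constants are ours) runs projected stochastic gradient descent on an instance of exactly the quadratic form just displayed, with Q deliberately chosen positive definite so that this particular instance is convex and a local scheme can reach the global optimum:

```python
import numpy as np

# Illustrative sketch: projected SGD under a stochastic gradient oracle for
# f(x) = c^T x + (1/2) x^T Q x over X = {x : x >= 0}. Q is made positive
# definite on purpose; for an arbitrary symmetric Q the problem is nonconvex
# and no such local scheme is guaranteed to reach the global optimum.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)          # positive definite -> convex instance
c = rng.normal(size=n)

def oracle(x):
    """Return g(x) with E[g(x)] = grad f(x) = c + Qx (zero-mean noise added)."""
    return c + Q @ x + 0.1 * rng.normal(size=n)

def project(x):
    """Euclidean projection onto the feasible set X = {x : x >= 0}."""
    return np.maximum(x, 0.0)

x, avg, T = np.ones(n), np.zeros(n), 20000
for t in range(1, T + 1):
    x = project(x - oracle(x) / (10.0 + t ** 0.5))   # diminishing step size
    avg += x
x_hat = avg / T                      # iterate averaging damps the oracle noise

f = lambda z: c @ z + 0.5 * z @ Q @ z
print("f(x_hat) =", f(x_hat))
```

The open question recalled above is not convergence as such, but how to obtain fast rates whose cost scales gracefully with the dimension n.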

4

A. Migdalas and P. M. Pardalos

The Karush–Kuhn–Tucker optimality conditions for the quadratic problem above become a so-called linear complementarity problem, LCP(Q, c), which is formulated as follows: Find x ∈ ℝⁿ, or prove that none such exists, that satisfies the system:

    Qx + c ≥ 0,
    x ≥ 0,
    xᵀ(Qx + c) = 0.
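The combinatorial structure of this system can be made tangible with a small brute-force sketch (ours, purely illustrative and usable only for tiny n): every candidate solution of LCP(Q, c) corresponds to a complementarity pattern that selects, for each index i, whether xᵢ = 0 or (Qx + c)ᵢ = 0, and one may simply try all 2ⁿ patterns.

```python
import itertools
import numpy as np

def solve_lcp(Q, c, tol=1e-9):
    """Solve LCP(Q, c) by enumerating all 2^n complementarity patterns.

    For each subset B of indices we impose (Qx + c)_i = 0 for i in B and
    x_i = 0 for i not in B, solve the resulting linear system, and accept
    the candidate if x >= 0 and Qx + c >= 0 both hold.
    """
    n = len(c)
    for pattern in itertools.product([0, 1], repeat=n):
        basic = [i for i in range(n) if pattern[i]]
        x = np.zeros(n)
        if basic:
            try:
                x[basic] = np.linalg.solve(Q[np.ix_(basic, basic)], -c[basic])
            except np.linalg.LinAlgError:
                continue                      # singular subsystem: skip pattern
        w = Q @ x + c
        if (x >= -tol).all() and (w >= -tol).all() and abs(x @ w) <= tol:
            return x                          # satisfies all three conditions
    return None                               # exhaustion: no solution exists

Q = np.array([[2.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -1.0])
print(solve_lcp(Q, c))                        # here: x = (1/3, 1/3)
```

The 2ⁿ loop is exactly the combinatorial explosion that the hardness result below formalizes: no polynomial-time shortcut is available for arbitrary symmetric Q unless P=NP.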

In 1994, it was shown by Horst et al. that the LCP(Q, c) is an NP-hard problem (see, e.g., [17]). In fact, the problem of checking local optimality for a feasible point is not that easy either. Indeed, consider the linearly constrained problem:

    min f(x)
    s.t. Ax ≥ b,
         x ≥ 0,

where f(x) is an indefinite quadratic function. The same researchers have shown that the problem of checking the strict local optimality of a feasible point x for the above problem is also NP-hard. However, even if local optimality can be proven, it may be pointless, since, as Hiriart-Urruty [16] has shown, there exist problems for which every feasible point is also a local optimizer. Two such problems, shown in [16] to possess the property that "every feasible point is a local minimizer," are the problem of minimizing the rank of a matrix,

    min f(A) = rank(A)
    s.t. A ∈ C,

where C ⊂ M_{m,n}(ℝ), the vector space of m × n real matrices, and the related problem of minimizing the so-called counting function of nonzero components,

    min c(x)
    s.t. x ∈ S,

where S ⊂ ℝⁿ and c(x) is the number of components xᵢ ≠ 0 in x. Clearly, "only global optimization matters" for such problems [16].

Meta-heuristics provide the means to decide which part of the search space should be explored next. The local exploration is typically performed by some local minimization algorithm or some other heuristic approach. Meta-heuristics have been shown to be successful in practice for a large number of important combinatorial and global optimization problems. Of particular, fruitful importance for meta-heuristic application have been the so-called black-box optimization problems. Such a problem can be stated as follows:

    min f(x)
    s.t. x ∈ X,

where X ⊂ Rn , and f : X → R is function which is known only through a set of pairs of input and output values obtained through an experiment or simulation,   that is, it is only known as a set D = (x1 , f (x1 ), (x2 , f (x2 ), . . . , (xn , f (xn ) . There are quite a few challenges and open questions in relation to these very important problems in engineering design. One important issue is with respect to the “no free lunch theorem,” which in the case of combinatorial optimization roughly states that all non-resampling optimization algorithms perform equally, averaged over all problems, and therefore no optimization algorithm can outperform any other under any metric over all problems. In [18], Joyce and Herrmann summarize the following results from the literature which emphasize different aspects of the theorem: 1. The average performance of any pair of algorithms across all possible problems is identical. 2. For all possible metrics, no search algorithm is better than another when its performance is averaged over all possible discrete functions. 3. On average, no algorithm is better than random enumeration in locating the global optimum. 4. The histogram of values seen, and thus any measures or performance based on it, is independent of the algorithm if all functions are considered equally likely. 5. No algorithm performs better than any other when their performance is averaged over all possible problems of a particular type. 6. With no prior knowledge about the function f , in a situation where any functional form is uniformly admissible, the information provided by the value of the function in some points in the domain will not say anything about the value of the function in other regions of its domain. The last statement (6) was proved by Serafino [24] for exactly the case of black-box optimization. Hence, the prior knowledge about the objective function landscape is the key for success. Since all meta-heuristics assume such a priori model, their success largely depends upon the fitness of the model geometry to the geometry of the problem under consideration [24]. Thus, although new meta-heuristics are often introduced as a panacea, the “no free lunch theorem” says otherwise and emphasizes the need of obtaining prior knowledge of the problems geometry. Understanding this type of Bayesian prior and its relation to the “no free lunch theorem” seems as important for the development of successful optimization algorithm as is prior knowledge important for successful learning in machine learning [18]. Indeed, the importance of prior information on the objective function that would permit to choose algorithms that perform better than pure blind search is emphasized in the case of continuous search domain where the necessary conditions for the “no free lunch theorem” are shown to be even stronger and far more restrictive [3].


Evaluation of heuristic and meta-heuristic algorithms raises some interesting and challenging computational problems with respect to experimental testing, the supply of good lower- and upper-bound techniques, the supply of benchmark instances with known optimal solutions, and also the derivation of techniques for automatic identification of parameter values. Concerning theory, the mathematical properties of almost all meta-heuristic algorithms remain largely unaddressed or unsatisfactorily investigated and therefore constitute challenging issues [29]. Population-based meta-heuristics can be addressed by studying the interaction of multiple Markov chains corresponding to the search formations. Theoretical development along these lines has already been initiated, but it is in early stages. The mathematical analysis concerning the rate of convergence of population-based meta-heuristics continues to constitute a challenging issue. Obtaining strategies and techniques that lead to a balanced trade-off between local intensification and global diversification is another important issue. Deriving combinations of algorithms into algorithm portfolios that adaptively fit the assumed model geometry to the real geometry of the optimized problem is of interest. Studying the approach in relation to the "no free lunch theorem" is of theoretical importance. Conditions under which algorithm portfolios can provide computational advantages over other approaches are also of interest. In [13], it is shown that a "risk-seeking" strategy can be advantageous in a portfolio setting. What other strategies could be proven advantageous, and for which kinds of problems?
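A minimal sketch of the portfolio idea raised above (ours; the toy objective, the number of solvers, and the round-robin budget split are arbitrary choices) lets several randomized solvers share one computation budget, with the portfolio reporting the best solution found by any of them:

```python
import random

def make_solver(seed):
    """A randomized 'solver': each budget unit is one random restart."""
    rng = random.Random(seed)
    def step(best_so_far):
        candidate = rng.uniform(-10.0, 10.0)
        value = (candidate - 3.0) ** 2        # toy objective, minimum at 3
        return value if best_so_far is None else min(best_so_far, value)
    return step

def portfolio(solvers, budget):
    """Round-robin the budget over the solvers; return the best value found."""
    states = [None] * len(solvers)
    for t in range(budget):
        i = t % len(solvers)                  # simplest allocation scheme
        states[i] = solvers[i](states[i])
    return min(s for s in states if s is not None)

solvers = [make_solver(seed) for seed in range(4)]
print("best value:", portfolio(solvers, budget=100))
```

More sophisticated allocation schemes, interaction mechanisms (for example, sharing incumbents between solvers), and parallel execution are exactly the open design questions raised above and in [13].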

3 Concluding Remarks

We have indicated above a few directions along which interesting challenges and open problems can be identified for further research. There are many more such challenges and open questions in several other sub-subject areas. In [15], several conjectures and open problems, including in nonlinear optimization, are presented. Concerning open problems about exact algorithms and their worst-case time bounds for NP-hard problems, the reader should consult [28]. Open problems concerning the theory of approximation algorithms for NP-hard discrete optimization problems are discussed in [27]. West [26] has gathered 38 open problems concerning both theory and optimization in the context of graph theory and combinatorics. A lengthy and well-structured list of open combinatorial optimization problems on graphs, including the important problems of broadcasting and gossiping, as well as open problems concerning complexity, is provided by Hedetniemi [14]. Computational challenges of cliques and related problems are the subject of [22]. Finally, a source of open problems with respect to current and future algorithmic development for the solution of complex optimization problems is the book by Battiti et al. [6].


References

1. Agarwal, N., Gonen, A.: Effective dimension of exp-concave optimization (2018). https://arxiv.org/abs/1805.08268
2. Ahmadi, A.A., Olshevsky, A., Parrilo, P.A., Tsitsiklis, J.N.: NP-hardness of deciding convexity of quartic polynomials and related problems. Math. Program. 137, 453–476 (2013)
3. Alabert, A., Berti, A., Caballero, R., Ferrante, M.: No-free-lunch theorem in continuum. Theor. Comput. Sci. 600, 98–106 (2015)
4. Allen-Zhu, Z., Liao, Z., Yuan, Y.: Optimization algorithms for faster computational geometry. In: Chatzigiannakis, I., Mitzenmacher, M., Rabani, Y., Sangiorgi, D. (eds.) 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), Article No. 53, pp. 53:1–53:6 (2016)
5. Andreatta, M., Bezdek, A., Boronski, J.P.: The problem of Malfatti: two centuries of debate. Math. Intell. 33, 72–76 (2011)
6. Battiti, R., Brunato, M., Mascia, F.: Reactive Search and Intelligent Optimization. Operations Research/Computer Science Interfaces Series. Springer, Berlin (2009)
7. Bennett, C.H., Landauer, R.: The fundamental physical limits of computation. Sci. Am. 253, 48–56 (1985)
8. Bezdek, K., Deza, A., Ye, Y.: Selected open problems in discrete geometry and optimization. In: Bezdek, K., et al. (eds.) Discrete Geometry and Optimization, pp. 321–336. Springer, Berlin (2013)
9. Cantor, M.: Vorlesungen über Geschichte der Mathematik, Band 1. B.G. Teubner, Leipzig (1880)
10. D'Apuzzo, M., Marino, M., Migdalas, A., Pardalos, P.M., Toraldo, G.: Parallel computing in global optimization. In: Kontoghiorghes, E.J. (ed.) Handbook of Parallel Computing and Statistics, pp. 225–258. Chapman and Hall, Boca Raton (2006)
11. Euclid's Elements of Geometry (edited, and provided with a modern English translation, by Richard Fitzpatrick). http://farside.ph.utexas.edu/Books/Euclid/Elements.pdf
12. Floudas, C.A., Pardalos, P.M. (eds.): Encyclopedia of Optimization, 2nd edn. Springer, Berlin (2009)
13. Gomes, C.P., Selman, B.: Algorithm portfolios. Artif. Intell. 126, 43–62 (2001)
14. Hedetniemi, S.: Open problems in combinatorial optimization (1998). https://people.cs.clemson.edu/~hedet/preface.html
15. Hiriart-Urruty, J.-B.: Potpourri of conjectures and open questions in nonlinear analysis and optimization. SIAM Rev. 49, 255–273 (2007)
16. Hiriart-Urruty, J.-B.: When only global optimization matters. J. Glob. Optim. 56, 761–763 (2013)
17. Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization. Nonconvex Optimization and Its Applications, vol. 3. Springer, Berlin (2000)
18. Joyce, T., Herrmann, J.M.: A review of no free lunch theorems, and their implications for metaheuristic optimisation. In: Yang, X.-S. (ed.) Nature-Inspired Algorithms and Applied Optimization. Studies in Computational Intelligence, vol. 744, pp. 27–51. Springer, Berlin (2018)
19. Koren, T.: Open problem: fast stochastic exp-concave optimization. In: JMLR: Workshop and Conference Proceedings, vol. 30, pp. 1–3 (2013)
20. Markov, I.L.: Limits on fundamental limits to computation. Nature 512, 147–154 (2014)
21. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge Series on Information and the Natural Sciences. Cambridge University Press, Cambridge (2000)
22. Pardalos, P.M., Rebennack, S.: Computational challenges with cliques, quasi-cliques and clique partitions in graphs. In: Festa, P. (ed.) Experimental Algorithms. SEA 2010. Lecture Notes in Computer Science, vol. 6049, pp. 13–22. Springer, Berlin (2010)


23. Pardalos, P.M., Vavasis, S.A.: Open questions in complexity theory for numerical optimization. Math. Program. 57, 337–339 (1992)
24. Serafino, L.: Optimizing without derivatives: what does the no free lunch theorem actually say? Not. AMS 61, 750–755 (2014)
25. Werner, J.: Optimization Theory and Applications. Vieweg Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig (1984)
26. West, D.B.: Open problems - graph theory and combinatorics (2018). https://faculty.math.illinois.edu/~west/openp/
27. Williamson, D.P., Shmoys, D.B.: The Design of Approximation Algorithms. Cambridge University Press, Cambridge (2010)
28. Woeginger, G.J.: Open problems around exact algorithms. Discrete Appl. Math. 156, 397–405 (2008)
29. Yang, X.-S.: Metaheuristic optimization: algorithm analysis and open problems. In: Pardalos, P.M., Rebennack, S. (eds.) Experimental Algorithms. SEA 2011. Lecture Notes in Computer Science, vol. 6630. Springer, Berlin (2011)

Social Influence-Based Optimization Problems Chao Li, Jing Yuan, and Ding-Zhu Du

Abstract Social influence is an important research subject in computational social networks. Many optimization problems stem from the study of social influence. In this article, we select a few of them and present a small survey of the literature and of existing open problems.

Keywords Online social networks · Computational social networks · Social influence optimization · Rumor blocking · Source detection · Information diffusion models · Heuristics

1 Introduction

Currently, social networking websites like Facebook and Twitter are in a high-growth phase, due to the rising popularity of online social networks. Online social networks (OSNs) facilitate connections between people based on shared interests, values, and membership in particular groups (i.e., friends, professional colleagues, and so on). They make information generation and sharing much easier than ever before. OSNs bring many benefits as a medium for speedy, widespread information dissemination. The most important is the role they play in providing fast and immediate access to large-scale news data and other sources of information. They also serve as a medium for collectively achieving a social goal. For instance, with the use of group and event pages in Facebook, events such as Day of Action protests reached thousands of protestors [18]. As online social networks have developed significantly in recent years as media platforms for sharing, communicating, and disseminating information and influence, much current research is devoted to understanding the properties of OSNs and utilizing them to address topics such as online influence maximization, social community detection, rumor blocking, and related topics.



These topics are all related to social influence. In fact, social influence lies at the center of the study of computational social networks, and many optimization problems stem from it. In this chapter, we select a few of them and present a small survey of the literature and of existing open problems.

2 Bharathi–Kempe–Salek Conjecture

In the study of social influence, an information diffusion model has to be set up in the first place. The two most popular and most important information diffusion models are the independent cascade (IC) model and the linear threshold (LT) model.

The IC model is a probabilistic diffusion process on a given directed graph G = (V, E). In this directed graph G, each edge (u, v) is assigned a probability p_{uv}, and each node has two states, active and inactive. The process consists of discrete steps. Initially, every node is inactive. To start, choose a subset of nodes as seeds and activate them. At each iteration, every newly active node u intends to activate some inactive out-neighbors; such an out-neighbor v accepts influence from u with probability p_{uv}. An important rule is that any node u has only one chance to influence its inactive out-neighbor v; that is, the link from u to v can be used at most once in the whole process. The diffusion process ends when no more nodes become newly active.

The LT model is also a probabilistic diffusion process on a given directed graph G = (V, E). In this directed graph G, each node has two states, active and inactive, and each edge (u, v) is assigned a weight p_{uv} such that for every node v, ∑_{u ∈ N⁻(v)} p_{uv} ≤ 1, where N⁻(v) is the set of all in-neighbors of v. The process consists of discrete steps. Initially, every node v is inactive and randomly chooses a threshold θ_v uniformly from [0, 1]. At each iteration, every inactive node v evaluates the total weight of its active in-neighbors. If this total weight is greater than or equal to its threshold θ_v, then v becomes active; otherwise, it remains inactive. The process ends when no more newly active nodes are produced.

Kempe et al. [24, 25] showed that the LT model is equivalent to a type of cascade model very close to the IC model. However, recent research showed that the two models actually differ in computational complexity on some optimization problems. To see this, let us study an example. A fundamental optimization problem in the study of computational social networks is influence maximization, stated as follows.

Optimization Problem 1 (Influence Maximization) Given a social network G, a diffusion model m, and a set of k seeds, find locations for those k seeds to maximize the expected number of influenced nodes.

This problem was first proposed by Domingos and Richardson [13, 44] and then formulated mathematically by Kempe et al. [24, 25].
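The IC process above translates almost line by line into a simulation. The following sketch is illustrative; the graph representation and names are our own assumptions.

```python
import random

def ic_spread(graph, p, seeds, rng):
    """One run of the independent cascade (IC) process.
    graph: dict mapping each node to its list of out-neighbors
    p:     dict mapping each edge (u, v) to its probability p_uv
    Returns the set of active nodes when diffusion stops."""
    active = set(seeds)
    frontier = list(seeds)              # nodes activated in the previous step
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                # u gets exactly one chance on edge (u, v): u is on the
                # frontier only once, right after its own activation
                if v not in active and rng.random() < p[(u, v)]:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return active

# toy example: a directed path a -> b -> c with uniform probability 0.5
graph = {'a': ['b'], 'b': ['c'], 'c': []}
p = {('a', 'b'): 0.5, ('b', 'c'): 0.5}
print(len(ic_spread(graph, p, {'a'}, random.Random(1))))
```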


Domingos and Richardson [13, 44] presented a fundamental algorithmic problem motivated by applications to marketing: suppose that we have estimates for the extent to which individuals influence one another in a social network, and we would like to promote a new product that we hope will be adopted by a large fraction of the network. The premise of viral marketing is that by initially targeting a few influential members of the network, say by offering them free samples of the product, we can trigger a propagation of influence by which friends will recommend the product to their friends, and many individuals will ultimately adopt it. Kempe et al. [24, 25] gave the mathematical formulation of the influence maximization problem stated above.

The influence maximization problem in either the IC model or the LT model has been proved to be NP-hard, and to admit a polynomial-time (1 − e^{−1})-approximation if the computation of the expected number of influenced nodes is treated as an oracle, i.e., if we do not count the time for this computation. Actually, given the locations of the seeds, computing the expected number of influenced nodes in the IC or LT model is #P-hard. Bharathi et al. [3] made the following conjecture.

Conjecture 2 (Bharathi et al. [3]) The influence maximization problem is NP-hard for an arborescence directed into a root.

Wang et al. [52] showed that in the LT model, influence maximization is polynomial-time solvable on such an arborescence. However, Lu et al. [36] showed that in the IC model, the Bharathi–Kempe–Salek conjecture is true. This was the first demonstration that the IC and LT models differ in the computational complexity of an optimization problem. These results give motivation to study social influence-based optimization problems in different diffusion models with different approaches.
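Because exact evaluation of the expected spread is #P-hard, implementations typically fall back on Monte Carlo estimation. A minimal sketch, reusing the ic_spread routine above (the number of simulations is an assumed tuning parameter):

```python
def expected_spread(graph, p, seeds, n_sims=1000, seed=0):
    """Monte Carlo estimate of the expected number of influenced nodes;
    the standard error decays like 1/sqrt(n_sims)."""
    rng = random.Random(seed)
    return sum(len(ic_spread(graph, p, seeds, rng))
               for _ in range(n_sims)) / n_sims
```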

3 Approximation of Influence Spread

A social network plays a critical role as a medium for the spread of information, influence, and ideas among its members. In the context of social networks, an idea or innovation can either die out quickly or make a significant impact on the population. In the research community of social networks, researchers are interested in exploring the extent to which such ideas and innovations are adopted, since this can be fundamental to understanding how the dynamics of adoption are likely to unfold within the underlying social networks. In other words, it can be important to reveal the extent to which people are likely to be affected by the decisions and behaviors of contacts in their social circles. As described in the last section, the influence maximization problem is, given a parameter k, to find a good initial k-node set to start the diffusion process in a social network, such that the expected number of active nodes at the end of the process, given that initial active set, is maximized. The two major models used to formalize the influence maximization problem are the Independent Cascade (IC) model and the Linear Threshold (LT) model.


Both models assume that all the nodes in the network have two states, either active or inactive. Each active node has a single chance to activate its inactive immediate neighbors right after the node itself is activated. The models assume that the activation process is progressive, in the sense that once a node is activated, it will never become inactive subsequently. In the LT model, activation is based on cumulative effects on nodes, which leads to activation once a certain threshold is exceeded. Kempe et al. presented as their main result that the optimal solution for influence maximization can be efficiently approximated to within a factor of (1 − 1/e − ε), which is slightly better than 63%. The algorithm that achieves this performance is a natural greedy hill-climbing strategy, and the result is proved using techniques from the theory of submodular functions [10, 26].

A large number of papers have been published with the aim of reducing the running time of influence maximization. One major direction is to propose heuristics that retain the (1 − 1/e) approximation ratio of the greedy algorithm while speeding up the computation of the objective function via approximations. Several papers approximate the influence in the sense that influence is only propagated through simple local structures. Among them, Kimura and Saito [28] proposed shortest-path-based influence cascade models and provided efficient algorithms that compute influence spread under these models. Wang et al. [51] proposed to approximate influence propagation by omitting the social network paths with low propagation probabilities and assuming that influence is propagated from seeds through a local structure, namely a maximum influence arborescence, to other nodes in the network. A threshold parameter is used to control the trade-off between computational complexity and approximation accuracy. Goyal et al. [20] proposed an alternative algorithm, SIMPATH, that computes the spread by exploring simple paths in the neighborhood and leverages two optimizations: the Vertex Cover Optimization cuts down the spread estimation calls in the first iteration, while the Look Ahead Optimization improves the efficiency of subsequent iterations; similarly to [51], a parameter is used in [20] to strike a balance between running time and the desired quality of the influence approximation. Chen et al. [8, 9] showed that computing influence spread in directed acyclic graphs (DAGs) can be done in linear time, relying on an important linear relationship between the activation probabilities of a node and its in-neighbors in DAGs. The idea is to construct a local DAG surrounding every node in the network and restrict the influence to the node to be within the local DAG structure. To select local DAGs that cover a significant portion of influence propagation, they proposed a greedy algorithm that adds nodes into the local DAG of a node in such a way that only nodes having influence higher than a threshold are retained. Leskovec et al. [32] proposed the CELF heuristic, based on a cost-effective lazy-forward evaluation of the objective function. The key ingredient of their heuristic is the following: if a node's marginal contribution to the objective function in the previous iteration of the greedy algorithm was smaller than the current best node's, then it need not be reevaluated in the current iteration, since by submodularity its contribution in the current iteration can only be lower. Goyal et al.
[19] proposed adding several additional heuristic optimizations to CELF, leading to the algorithm CELF++.


Chen et al. [7] proposed reusing previously drawn random structures and the computations on them, as well as a discounted high-degree heuristic, to produce improved influence spread. In one direction, they designed new schemes to improve the greedy algorithm proposed in [24] and combined their scheme with the CELF optimization to obtain faster greedy algorithms. In another direction, they proposed new degree-discount heuristics whose influence spreads are significantly better than those of the classic degree and centrality-based heuristics and are close to the influence spread of the greedy algorithm. Jung et al. [23] proposed setting up a recurrence relation between the influence of different nodes and linearizing it to speed up the computation of influence. They also investigated algorithms differing from the greedy addition of one node at a time. Jiang et al. [22] showed that significant speedups can be achieved by using a simulated annealing scheme. Wang et al. [50] proposed a preprocessing step that partitions the graph into communities, which can then be treated separately. Liu et al. [34] proposed a bounded linear approach for influence computation and influence maximization. Specifically, they adopt a linear and tractable approach to describe influence propagation. The unique perspective is that this linear approach assumes that the influence flowing into each node is a linear combination of the influence from its neighbors. Therefore, the influence of an arbitrary node set can be computed linearly in closed form. Recently, Borgs et al. [4] proposed a heuristic that comes with provable guarantees on running time. They introduced a preprocessing step that generates a random hypergraph sampled according to reverse reachability probabilities in the original graph; the greedy algorithm can then be run to solve the maximum coverage problem on the sampled hypergraph.

Lu et al. [35] studied the influence maximization problem in the deterministic linear threshold model. They showed that under this model there is no polynomial-time n^{1−ε}-approximation unless P = NP. This inapproximability result is derived with self-contained proofs. They also proved that in the case where a person can be activated when one of her neighbors becomes active, there is a polynomial-time e/(e − 1)-approximation, which is the best possible approximation under a reasonable assumption in complexity theory. Zhu et al. [57, 58] considered influence transitivity and limited propagation distance in their model of influence propagation. They proposed a semidefinite-programming-based algorithm that achieves an approximation ratio of 0.857 without limitation on the number of seeds, and an approximation ratio higher than 1 − 1/e if the ratio of the seeds to the total number of nodes resides in a certain range.

Some possible research directions include theoretical analysis of the approximation of influence propagation with the aforementioned heuristics. So far, the evaluation of proposed heuristics for speeding up the estimation of influence propagation has been conducted on the basis of experimental results. Researchers have been comparing the influence spread produced by a proposed heuristic algorithm to the spread generated by the greedy algorithm, and in this way they claim that those heuristics can help to compute the influence spread efficiently while remaining effective (i.e., close to the result of the greedy algorithm). More theoretical analysis of the influence approximation heuristics would give us a better idea of their accuracy in terms of the underlying fundamental metrics.


Therefore, the following problem is still open and very important.

Open Problem 3 Find a polynomial-time approximation for the expectation of influence spread with theoretically guaranteed performance, e.g., one whose use in the greedy algorithm would yield a performance ratio close to 1 − 1/e.
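To make the greedy/CELF machinery discussed above concrete, here is a sketch of lazy-forward greedy seed selection (our own illustrative code; spread is any estimator of expected influence, e.g., the Monte Carlo routine sketched in Section 2, and nodes are assumed mutually comparable for heap tie-breaking):

```python
import heapq

def celf_greedy(nodes, spread, k):
    """Greedy hill climbing with the CELF lazy-forward optimization [32].
    spread(seed_set) -> estimated expected influence of the set.
    By submodularity, a stale marginal gain stored in the heap is an
    upper bound on the true one, so most re-evaluations can be skipped."""
    seeds, current = set(), 0.0
    # heap of (negated gain, node, seed-set size when the gain was computed)
    heap = [(-spread({v}), v, 0) for v in nodes]
    heapq.heapify(heap)
    while len(seeds) < k and heap:
        neg_gain, v, stamp = heapq.heappop(heap)
        if stamp == len(seeds):       # gain is fresh: v is the best choice
            seeds.add(v)
            current = spread(seeds)
        else:                         # gain is stale: recompute and reinsert
            gain = spread(seeds | {v}) - current
            heapq.heappush(heap, (-gain, v, len(seeds)))
    return seeds
```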

4 Active Friending

To boost the development of their user bases, existing social networking services usually provide friending recommendations to their users, encouraging them to send out invitations to make more friends. Traditionally, friending recommendations follow a passive friending strategy, in which a user passively selects candidates from the recommended list to send out invitations; moreover, the recommended candidates are usually friends of friends. In contrast, the idea of active friending, where a person takes proactive actions to befriend another person, resonates with real life: in high school, a student fan may wish to befriend the captain of the school football team; a salesperson may be interested in getting acquainted with a valuable potential customer in the hope of making a sales pitch; and a junior researcher may wish to befriend the leaders of the research community in her field in order to participate in the organization and services of a conference. Yang et al. [54] studied the problem of Acceptance Probability Maximization (APM), i.e., providing friending recommendations to assist and guide a user in effectively approaching another person for active friending in online social networks. An initiator specifies a friending target and sends out invitations according to the social networking service's recommendations. Each invitation is displayed to a candidate along with the list of friends that the initiator and the candidate have in common, so as to encourage acceptance of the invitation. This step is repeated until the friending target appears in the recommendation list. The key issue is the design of the algorithms that select the recommendation candidates. The APM problem is formally defined as follows.

Optimization Problem 4 (Acceptance Probability Maximization) Given an initiator s, a friending target t, and the maximum number r_R of invitations allowed to be issued by the initiator, APM finds a set R of r_R nodes such that s can sequentially send invitations to the nodes in R in order to approach t. The objective is to maximize the probability that t accepts the friending invitation when s finally sends it to t.

The parameter r_R controls the trade-off between the expected acceptance probability of t and the anticipated effort made by s for active friending.


To tackle the APM problem, the authors proposed three algorithms [54]: the Range-based Greedy (RG) algorithm, the Selective Invitation with Tree Aggregation (SITA) algorithm, and the Selective Invitation with Tree and In-Node Aggregation (SITINA) algorithm. RG selects candidates by taking into account their acceptance probability and the remaining budget of invitations, leading to the best recommendation at each step. However, the algorithm does not achieve the optimal acceptance probability of the invitation to a target, due to the lack of coordinated friending effort. Aiming to systematically select the nodes for recommendation, SITA is designed with dynamic programming to find nodes with a coordinated friending effort that increases the acceptance probability of the target. SITA is able to obtain the optimal solution, yet has exponential time complexity. To address the efficiency issue, SITINA further refines the ideas in SITA by carefully aggregating information gathered during processing to alleviate redundant computation in later steps, and thus obtains the optimal solution for APM in polynomial time. Specifically, SITINA derives the optimal solution for APM in O(n_V · r_R²) time, where n_V is the number of nodes in the social network and r_R is the number of invitations budgeted for APM. Recently, Chen et al. [12] proved that it is NP-hard to approximate APM in a general graph to within a near-exponential factor of 2^{n^{1−ε}}.

Kim [27] studied the friend recommendation problem in which, given a source user s and a specific target user t, the goal is to find a set of nodes for the source user to connect with so as to maximize the source user's influence on the target. The author used Katz centrality to model the influence from source to target and proposed the Incremental Katz Approximation (IKA) algorithm to approximate the influence following a greedy approach (a small numerical sketch of Katz centrality is given after the open problems below). Badanidiyuru et al. [2] studied a stochastic optimization problem, namely the Adaptive Seeding problem: one seeks to select among certain accessible nodes in a network, and then to select, adaptively, among neighbors of those nodes as they become accessible, in order to maximize a global objective function. They reported a (1 − 1/e)²-approximation for the adaptive seeding problem for any monotone submodular function and proposed an algorithm based on locally adaptive policies that combine a nonadaptive global structure with local adaptive optimizations. Chen et al. [11] studied the Target Influence Maximization (TIM) problem, in which a boy progressively makes new friends in order to influence a target girl's friends, with the goal of making friends with the girl at the end. The authors proposed two approximation algorithms and showed that this problem can be solved in polynomial time in networks with no directed cycles. Possible research directions include the following:

Open Problem 5 (Time-Constrained Active Friending) It is worth exploring the impact of the delay between sending an invitation and learning the result in active friending. This is important when the user wants to befriend the target within a certain amount of time.


Open Problem 6 (Multi-Target Active Friending) When an initiator specifies multiple active friending targets, it is not efficient to configure recommendations separately for each target. One idea is to give priority to intermediate nodes that can approach multiple targets simultaneously. What is an efficient algorithm?
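As a reference point for the Katz-centrality formulation used by Kim [27] above, the following sketch computes Katz centrality by a direct linear solve on a dense adjacency matrix (parameter values are illustrative; the incremental updates of IKA are not reproduced here):

```python
import numpy as np

def katz_centrality(A, alpha=0.05, beta=1.0):
    """Katz centrality x = beta * (I - alpha * A^T)^{-1} * 1.
    It scores nodes by walks of every length, damping each step by
    alpha; convergence requires alpha < 1 / spectral_radius(A)."""
    n = A.shape[0]
    return beta * np.linalg.solve(np.eye(n) - alpha * A.T, np.ones(n))

# toy directed graph on 4 nodes
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(katz_centrality(A))
```

In a friend-recommendation setting, one would compare the target's centrality before and after adding a candidate edge from the source and pick the candidate with the largest increase.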

5 Rumor Blocking and Source Detection

As OSNs grow rapidly, risks grow with them. Computer viruses and worms can spread through the Internet, contagious diseases can spread through populations, and rumors can grow in the medium of social networks. The ease of information spread in social networks can then become a disaster with disruptive effects. It is not rare to hear about the latest Kardashian scandal or other celebrity gossip in the first second after it happens. Even worse, during the Ebola outbreak in 2014, information both accurate and inaccurate propagated faster thanks to sites like Twitter and Facebook. In Iowa, the Department of Public Health was forced to issue a statement dispelling social media rumors that Ebola had arrived in the state. Meanwhile, there was a constant stream of posts saying that Ebola can be spread through the air, water, or food, all of which are inaccurate claims [37]. In these cases, finding the hidden source or limiting the propagation of rumors in these networks is of great importance for controlling and preventing such network risks. Clearly, tools to limit the effect of misinformation and rumors are necessary in order for social networks to serve as a reliable platform for disseminating critical information.

Social network studies on rumors first focused on machine-learning-based techniques such as building classifiers, sentiment analysis, and so on [15, 33, 42, 43]. These works identify misleading rumors from a complete set of social conversations containing rumors (e.g., tweets), which must first be retrieved as a training dataset. Owing to the workload of training data collection and the relation between rumor spreading and information propagation, much research has turned to influence-based information propagation problems in the computational research field. As mentioned above, two main topics attract the most attention: (1) how to block the spread of rumors, and (2) how to find the hidden rumor source(s). Each of these topics contains many problems worth exploring.

The first problem that has been discussed is the identification of influential users in a social network. In the influence maximization problem, given a probabilistic model of information diffusion such as the Independent Cascade model, a network graph, and a budget k, the objective is to select a set S of size k for initial activation so that the expected value of f(S) (the size of the cascade created by selecting seed set S) is maximized [7, 9, 28, 29, 50, 53]. In comparison with the influence maximization problem, which studies single-cascade influence propagation (one influence diffusion in a social network), there is a series of works that focus on multiple-cascade influence diffusion in social networks.


Bharathi et al. [3] extend the Independent Cascade (IC) model to multiple-cascade influence diffusion. Borodin et al. [5] study multiple-cascade influence diffusion in several different models generated from the Linear Threshold (LT) model. In [49], Trpevski et al. propose a two-cascade influence diffusion model based on the SIS (susceptible-infected-susceptible) model. Kostka et al. [31] consider the two-cascade influence diffusion problem from a game-theoretic perspective, where each cascade tries to maximize its influence over the social network; they study it under a model more restricted than the IC and LT models. The work on the multiple-cascade influence maximization problem underlies the existing work on rumor spread control, including [6, 16, 17, 21, 30, 41], in which there are only two kinds of cascades: one is called the positive cascade, while the other is called the negative cascade. The goal of rumor blocking then becomes using the positive cascade diffusion to fight against the negative cascade diffusion (the rumor propagation). A typical formulation is as follows.

Optimization Problem 7 (Rumor Blocking) Given a social network with community structure, a rumor source, and k protectors, find k locations for the k protectors to maximize the expected number of bridge-ends that can be protected, where a bridge-end is a neighbor of the community containing the rumor source, and a bridge-end is protected if it receives information from a protector before the rumor arrives.

In [30], Kimura et al. propose a method to block a certain number of links in a network to reduce the bad effects of rumors. In the presence of a misinformation cascade, [6, 16, 17, 21, 41] aim to find a near-optimal way to disseminate good information that minimizes the devastating effects of the misinformation campaign. The authors of [6] prove the NP-hardness of this optimization problem under the generalized IC model. They also establish the submodularity of the objective function, and therefore the greedy algorithm can be used as a constant-factor approximation algorithm. He et al. [21] seek ways of making sure that most users of the social network hear the correct information before the bad information, making social networks a more trustworthy source of information. Fan et al. [16] address the Least Cost Rumor Blocking (LCRB) problem and propose a strategy to control rumor diffusion by identifying a minimal subset of individuals as initial protectors so as to minimize the number of people infected in the neighbor communities of the rumor community C_r at the end of both diffusion processes. Extending both the IC and LT models to a two-cascade information diffusion model with a time deadline, [41] study the following problem: given bad influence sources, select the least number of nodes as good influence sources to limit the propagation of bad influence in the entire network, such that after T steps the expected fraction of infected nodes is at most 1 − β. The authors propose effective greedy and heuristic algorithms and demonstrate several hardness results.
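The two-cascade dynamics underlying these formulations can be sketched by extending the IC simulation given earlier so that rumor and protector cascades compete for inactive nodes. This is our own illustration; the tie-breaking rule (protectors move first within a step) is one convention from the literature, not a fixed standard.

```python
import random

def two_cascade_ic(graph, p, rumor_seeds, protect_seeds, rng):
    """One run of a two-cascade IC-style diffusion: each inactive node
    permanently adopts whichever cascade reaches it first.  Returns the
    sets of rumor-infected and protected nodes."""
    state = {v: 'R' for v in rumor_seeds}            # 'R' = rumored
    state.update({v: 'P' for v in protect_seeds})    # 'P' = protected
    frontier = list(state)
    while frontier:
        # tie rule: within a step, protector nodes try their edges first
        frontier.sort(key=lambda u: state[u] != 'P')
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in state and rng.random() < p[(u, v)]:
                    state[v] = state[u]              # inherit u's cascade
                    newly_active.append(v)
        frontier = newly_active
    rumored = {v for v, s in state.items() if s == 'R'}
    return rumored, set(state) - rumored
```

Rumor blocking then amounts to choosing protect_seeds so as to maximize the expected number of protected bridge-ends under repeated simulations of this process.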


Many studies of the rumor source detection problem are inspired by the common issue of contagion and generally use models for viral epidemics in populations, such as the susceptible-infected-recovered (SIR) model. On this subject, research has focused on the effects of the topological properties of the network on inferring the source of a rumor in the network. Shah and Zaman [46–48] were the first to systematically study the problem of infection source detection, based on a Susceptible-Infected (SI) model. In the SI model there is a single infection source, susceptible nodes can be infected by their infected neighbors, and infected nodes never recover. Subsequently, [38, 39] considered the estimation of multiple sources instead of a single source under the SI model. Zhu and Ying [55] study the single-source estimation problem based on the Susceptible-Infected-Recovered (SIR) model, where an infected node may recover but can never be infected again; and [40] consider the same problem based on the Susceptible-Infected-Susceptible (SIS) model, where a recovered node is susceptible to being infected again. Although all the works listed above answer some fundamental questions about information source detection in large-scale networks, they all build on two basic assumptions. First, a complete snapshot of the network is given, whereas in reality such a snapshot is very expensive to obtain, owing to the hundreds of millions of nodes. Second, infection across links and recovery across nodes are both assumed homogeneous, whereas in reality most networks are heterogeneous. For example, people close to each other are more likely to share rumors. Therefore, it is important to take sparse observations and network heterogeneity into account when locating information sources. In [39, 45, 56], detecting information sources with partial observations (only a fraction of nodes, or observers, can be observed) has been investigated. In [14], the authors considered the detection rate of the rumor centrality estimator when a prior distribution of the source node is given. In [1], the authors describe a fast Monte Carlo algorithm for estimating the source of an infection on a general graph, based on partial observations and a modified representation of the susceptible-infected (SI) infection model.

Both the rumor blocking and the source detection methods introduced here are mainly derived from information and influence propagation models; therefore, let us start our discussion of open problems with influence propagation models. The two most popular influence propagation models in this research area are the independent cascade (IC) model and the linear threshold (LT) model. In both models, at a given time step, each node is either active or inactive, and each node's tendency to become active increases monotonically as more of its neighbors become active. An active node never becomes inactive again. Time unfolds deterministically in discrete steps. As time unfolds, more and more neighbors of an inactive node u become active, eventually making u active, and u's decision may in turn trigger further decisions by nodes to which u is connected. The process repeats until no new node becomes active. Extensions of these two models are widely used in many studies of rumor blocking and source detection. Besides these two, other models are also employed to express information and influence dissemination in social networks: for example, epidemic models, the voter model, Markov random field models, and percolation theory. However, in order to solve problems in rumor blocking and source detection, more suitable and specific models need to be proposed.
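For intuition about source detection, here is a compact sketch of the rumor-centrality estimator of Shah and Zaman [46–48] on a fully observed infected tree. It uses the identity R(v) = n!/∏_u t_u(v), where t_u(v) is the size of the subtree rooted at u when the tree is rooted at v, together with the standard re-rooting relation R(child) = R(parent) · t_child/(n − t_child). The code is our own illustration and works in log-space to avoid overflow.

```python
from math import lgamma, log

def rumor_source(tree):
    """Return the node maximizing rumor centrality on an undirected
    infected tree, given as an adjacency dict node -> list of neighbors."""
    n = len(tree)
    root = next(iter(tree))
    parent, order, stack = {root: None}, [], [root]
    while stack:                      # pre-order DFS (parents before children)
        u = stack.pop()
        order.append(u)
        for w in tree[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    size = {}
    for u in reversed(order):         # children before parents
        size[u] = 1 + sum(size[w] for w in tree[u] if w != parent[u])
    # log R at the initial root: log n! - sum over all u of log t_u(root)
    log_r = {root: lgamma(n + 1) - sum(log(size[u]) for u in order)}
    # re-rooting: R(child) = R(parent) * t_child / (n - t_child)
    for u in order:
        for c in tree[u]:
            if c != parent[u]:
                log_r[c] = log_r[u] + log(size[c]) - log(n - size[c])
    return max(log_r, key=log_r.get)

# a path 0-1-2-3-4: the middle node is the rumor-centrality estimate
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(rumor_source(tree))   # -> 2
```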


Open Problem 8 (Near-Linear or Faster Running Time) Real-life social graphs typically have hundreds of millions to billions of nodes and edges. For a large graph with n nodes and m edges, near-linear-time algorithms, e.g., O((m + n) polylog(n)) or O((m + n)^{1+o(1)}), are desirable [10]. Therefore, social network-related problems such as influence maximization demand scalable algorithms with near-linear running time, and this generates new challenges in developing algorithmic techniques and new complexity classes for scalable graph problems [10].

Open Problem 9 (More Complex Social Networks) Besides the challenge of developing near-linear-time algorithms, another algorithmic challenge for rumor blocking and source detection is dealing with more complex social networks, especially dynamic or heterogeneous networks, or social networks with multiple rumor sources. On the one hand, in the real world, social networks change dynamically, because users may change their relationships with others and join or leave the network easily. Future studies need to properly consider network dynamics in influence diffusion models and influence maximization algorithms to achieve more realistic and applicable results. This is a wide open area with both challenges and opportunities. On the other hand, the majority of the studies we mentioned have been confined to homogeneous networks. There may be interesting opportunities in heterogeneous networks, as mentioned before. To give another example, consider a network consisting of different types of social relationships. These different types of social ties naturally carry different influence between people. A graduate student's research interests may be mainly influenced by researchers in the same research field, while her other interests will be more influenced by close friends in daily life. Last but not least, the detection of multiple rumor sources is a promising research area that is starting to attract more and more attention.

References 1. Agaskar, A., Lu, Y.M.: A fast Monte Carlo algorithm for source localization on graphs. In: SPIE Optical Engineering+ Applications, p. 88581N. International Society for Optics and Photonics, San Diego (2013) 2. Badanidiyuru, A., Papadimitriou, C., Rubinstein, A., Seeman, L., Singer, Y.: Locally adaptive optimization: adaptive seeding for monotone submodular functions (2015). Preprint. arXiv:1507.02351 3. Bharathi, S., Kempe, D., Salek, M.: Competitive influence maximization in social networks. In: International Workshop on Web and Internet Economics, pp. 306–311. Springer, Berlin (2007) 4. Borgs, C., Brautbar, M., Chayes, J., Lucier, B.: Maximizing social influence in nearly optimal time. In: Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 946–957. SIAM, Philadelphia (2014) 5. Borodin, A., Filmus, Y., Oren, J.: Threshold models for competitive influence in social networks. In: Internet and Network Economics, pp. 539–550. Springer, Berlin (2010)


6. Budak, C., Agrawal, D., El Abbadi, A.: Limiting the spread of misinformation in social networks. In: Proceedings of the 20th International Conference on World Wide Web, pp. 665– 674. ACM, New York (2011) 7. Chen, W., Wang, Y., Yang, S.: Efficient influence maximization in social networks. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 199–208. ACM, New York (2009) 8. Chen, W., Wang, C., Wang, Y.: Scalable influence maximization for prevalent viral marketing in large-scale social networks. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1029–1038. ACM, New York (2010) 9. Chen, W., Yuan, Y., Zhang, L.: Scalable influence maximization in social networks under the linear threshold model. In: IEEE 10th International Conference on Data Mining (ICDM), 2010, pp. 88–97. IEEE, Piscataway (2010) 10. Chen, W., Lakshmanan, L.V.S., Castillo, C.: Information and influence propagation in social networks. Synth. Lectures Data Manag. 5(4), 1–177 (2013) 11. Chen, H., Xu W., Zhai, X., Bi, Y., Wang, A., Du, D.-Z.: How could a boy influence a girl? In: 10th International Conference on Mobile Ad-hoc and Sensor Networks (MSN), 2014, pp. 279–287. IEEE, Piscataway (2014) 12. Chen, W., Li, F., Lin, T., Rubinstein, A.: Combining traditional marketing and viral marketing with amphibious influence maximization. In: Proceedings of the Sixteenth ACM Conference on Economics and Computation, pp. 779–796. ACM, New York (2015) 13. Domingos, P., Richardson, M.: Mining the network value of customers. In: Proceedings of the seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 57–66. ACM, New York (2001) 14. Dong, W., Zhang, W., Tan, C.W.: Rooting out the rumor culprit from suspects. In: 2013 IEEE International Symposium on Information Theory Proceedings (ISIT), pp. 2671–2675. IEEE, Piscataway (2013) 15. Ennals, R., Byler, D., Agosta, J. M., Rosario, B.: What is disputed on the web? In: Proceedings of the 4th Workshop on Information Credibility, WICOW ’10, New York, NY, 2010, pp. 67–74. ACM, New York (2010) 16. Fan, L., Lu, Z., Wu, W., Thuraisingham, B., Ma, H., Bi, Y.: Least cost rumor blocking in social networks. In: 2013 IEEE 33rd International Conference on Distributed Computing Systems (ICDCS), pp. 540–549. IEEE, Piscataway (2013) 17. Fan, L., Wu, W., Zhai, X., Xing, K., Lee, W., Du, D.-Z.: Maximizing rumor containment in social networks with constrained time. Soc. Netw. Anal. Min. 4(1), 1–10 (2014) 18. Garrison, J., Knoll, C.: Prop. 8 opponents rally across California to protest gay-marriage ban. Los Angeles Times (2008) 19. Goyal, A., Lu, W., Lakshmanan, L.V.S.: Celf++: optimizing the greedy algorithm for influence maximization in social networks. In: Proceedings of the 20th International Conference Companion on World Wide Web, pp. 47–48. ACM, New York (2011) 20. Goyal, A., Lu, W., Lakshmanan, L.V.S.: Simpath: an efficient algorithm for influence maximization under the linear threshold model. In: 2011 IEEE 11th International Conference on Data Mining (ICDM), pp. 211–220. IEEE, Piscataway (2011) 21. He, X., Song, G., Chen, W., Jiang, Q.: Influence blocking maximization in social networks under the competitive linear threshold model. In: SDM, pp. 463–474. SIAM, New York (2012) 22. Jiang, Q., Song, G., Cong, G., Wang, Y., Si, W., Xie, K.: Simulated annealing based influence maximization in social networks. In: AAAI, vol. 11, pp. 127–132 (2011) 23. 
Jung, K., Heo, W., Chen, W.: IRIE: scalable and robust influence maximization in social networks. In: 2012 IEEE 12th International Conference on Data Mining (ICDM), pp. 918– 923. IEEE, Piscataway (2012) 24. Kempe, D., Kleinberg, J., Tardos, É.: Maximizing the spread of influence through a social network. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137–146. ACM, New York (2003)


25. Kempe, D., Kleinberg, J., Tardos, É.: Influential nodes in a diffusion model for social networks. In: Automata, Languages and Programming, pp. 1127–1138. Springer, Berlin (2005) 26. Kempe, D., Kleinberg, J., Tardos, É.: Maximizing the spread of influence through a social network. Theory Comput. 11(4), 105–147 (2015) 27. Kim, S.: Friend recommendation with a target user in social networking services. In: 2015 31st IEEE International Conference on Data Engineering Workshops (ICDEW), pp. 235–239. IEEE, Piscataway (2015) 28. Kimura, M., Saito, K.: Tractable models for information diffusion in social networks. In: Knowledge Discovery in Databases: PKDD 2006, pp. 259–271. Springer, Berlin (2006) 29. Kimura, M., Saito, K., Nakano, R.: Extracting influential nodes for information diffusion on a social network. In: AAAI, vol. 7, pp. 1371–1376 (2007) 30. Kimura, M., Saito, K., Motoda, H.: Minimizing the spread of contamination by blocking links in a network. In: AAAI, vol. 8, pp. 1175–1180 (2008) 31. Kostka, J., Oswald, Y.A., Wattenhofer, R.: Word of mouth: rumor dissemination in social networks. In: Structural Information and Communication Complexity, pp. 185–196. Springer, Berlin (2008) 32. Leskovec, J., Krause, A., Guestrin, C., Faloutsos, C., VanBriesen, J., Glance, N.: Cost-effective outbreak detection in networks. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 420–429. ACM, New York (2007) 33. Leskovec, J., Backstrom, L., Kleinberg, J.: Meme-tracking and the dynamics of the news cycle. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 497–506. ACM, New York (2009) 34. Liu, Q., Xiang, B., Chen, E., Xiong, H., Tang, F., Yu, J.X.: Influence maximization over large-scale social networks: a bounded linear approach. In: Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pp. 171–180. ACM, New York (2014) 35. Lu, Z., Zhang, W., Wu, W., Fu, B., Du, D.: Approximation and inapproximation for the influence maximization problem in social networks under deterministic linear threshold model. In: 2011 31st International Conference on Distributed Computing Systems Workshops (ICDCSW), pp. 160–165. IEEE, Piscataway (2011) 36. Lu, Z., Zhang, Z., Wu, W.: Solution of Bharathi–Kempe–Salek conjecture for influence maximization on arborescence. J. Comb. Optim. 33(2), 803–808 (2017) 37. Luckerson, V.: Fear, misinformation, and social media complicate Ebola fight. Time (2014) 38. Luo, W., Tay, W.P.: Identifying multiple infection sources in a network. In: 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pp. 1483–1489. IEEE, Piscataway (2012) 39. Luo, W., Tay, W.P.: Estimating infection sources in a network with incomplete observations. In: 2013 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 301– 304. IEEE, Piscataway (2013) 40. Luo, W., Tay, W.P.: Finding an infection source under the sis model. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2930– 2934. IEEE, Piscataway (2013) 41. Nguyen, N.P., Yan, G., Thai, M.T., Eidenbenz, S.: Containment of misinformation spread in online social networks. In: Proceedings of the 4th Annual ACM Web Science Conference, pp. 213–222. ACM, New York (2012) 42. Qazvinian, V., Rosengren, E., Radev, D.R., Mei, Q.: Rumor has it: Identifying misinformation in microblogs. 
In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1589–1599. Association for Computational Linguistics, Stroudsburg (2011) 43. Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., Menczer, F.: Truthy: mapping the spread of astroturf in microblog streams. In: Proceedings of the 20th International Conference Companion on World Wide Web, pp. 249–252. ACM, New York (2011)


44. Richardson, M., Domingos, P.: Mining knowledge-sharing sites for viral marketing. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 61–70. ACM, New York (2002) 45. Seo, E., Mohapatra, P., Abdelzaher, T.: Identifying rumors and their sources in social networks. In: SPIE defense, security, and sensing, pp. 83891I–83891I. International Society for Optics and Photonics, San Diego (2012) 46. Shah, D., Zaman, T.: Finding sources of computer viruses in networks: theory and experiment. In: Proceedings of ACM Sigmetrics, vol. 15, pp. 5249–5262 (2010) 47. Shah, D., Zaman, T.: Rumors in a network: who’s the culprit? IEEE Trans. Inf. Theory 57(8), 5163–5181 (2011) 48. Shah, D., Zaman, T.: Rumor centrality: a universal source detector. In: ACM SIGMETRICS Performance Evaluation Review, vol. 40, pp. 199–210. ACM, New York (2012) 49. Trpevski, D., Tang, W.K.S., Kocarev, L.: Model for rumor spreading over networks. Phys. Rev. E 81(5), 056102 (2010) 50. Wang, Y., Cong, G., Song, G., Xie, K.: Community-based greedy algorithm for mining topk influential nodes in mobile social networks. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1039–1048. ACM, New York (2010) 51. Wang, C., Chen, W., Wang, Y.: Scalable influence maximization for independent cascade model in large-scale social networks. Data Min. Knowl. Disc. 25(3), 545–576 (2012) 52. Wang, A., Wu, W., Cui, L.: On Bharathi–Kempe–Salek conjecture for influence maximization on arborescence. J. Comb. Optim. 31(4), 1678–1684 (2016) 53. Xu, W., Lu, Z., Wu, W., Chen, Z.: A novel approach to online social influence maximization. Soc. Netw. Anal. Min. 4(1), 1–13 (2014) 54. Yang, D.-N., Hung, H.-J., Lee, W.-C., Chen, W.: Maximizing acceptance probability for active friending in online social networks. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 713–721. ACM, New York (2013) 55. Zhu, K., Ying, L.: Information source detection in the SIR model: a sample path based approach. In: Information Theory and Applications Workshop (ITA), 2013, pp. 1–9. IEEE, New York (2013) 56. Zhu, K., Ying, L.: A robust information source estimator with sparse observations. Comput. Soc. Netw. 1(1), 1–21 (2014) 57. Zhu, Y., Wu, W., Bi, Y., Wu, L., Jiang, Y., Xu, W.: Better approximation algorithms for influence maximization in online social networks. J. Comb. Optim. 30(1), 97–108 (2015) 58. Zhu, Y., Li, D., Zhang, Z.: Minimum cost seed set for competitive social influence. InfoCom 2016 (2016)

New Statistical Robust Estimators, Open Problems George Zioutas, Chris Chatzinakos, and Athanasios Migdalas

Abstract The goal of robust statistics is to develop methods that are robust against outliers in the data. We emphasize high-breakdown estimators, which can deal with heavy contamination in the data set. We give an overview of recent popular robust methods and present our new approach using operational research techniques, such as mathematical programming. We present some open problems concerning the new robust procedures, aimed at improving the robustness and efficiency of the proposed estimators.

Keywords Detecting outliers · Robust estimators · Regression · Covariance · Mathematical programming

1 Introduction

When applying a statistical method in practice, it often occurs that some observations (outliers) deviate from the usual assumptions. However, many classical methods are sensitive to outliers. The goal of robust statistics is to develop methods that are robust against the possibility that one or several unannounced outliers may occur anywhere in the data. These methods then allow detecting outlying observations by their residuals from a robust fit.



We focus on high-breakdown methods, which can deal with a substantial fraction of outliers in the data. We give an overview of recent high-breakdown robust methods and of our new approach, which uses operational research techniques, for multivariate settings such as location estimation, covariance estimation, and multiple regression. We also discuss the open problems for the new robust procedures.

Many multivariate data sets contain outliers, i.e., data points that deviate from the usual assumptions and/or from the pattern suggested by the majority of the data. Outliers are more likely to occur in data sets with many observations and/or variables, and often they do not show up by simple visual inspection. The usual multivariate analysis techniques are based on empirical means, covariance and correlation matrices, and least squares fitting. All of these can be strongly affected by even a few outliers. The most dangerous outliers are a group of data points that pull the estimated model so far in their direction that they end up well fitted by it (the masking problem). Once this effect is understood, one sees that the following two problems are essentially equivalent:

1. Robust estimation: find a robust fit, which is similar to the fit without the outliers.
2. Outlier detection: find all the outliers that matter.

Thus, a solution to the first problem allows us to identify the outliers by their residuals from the robust fit. On the other hand, a solution to the second problem allows us to remove or downweight the outliers, yielding a reasonable fit as a robust result. Most robust research is devoted to the first problem and uses its result to answer the second. This is preferable because, from the combinatorial viewpoint, it is easier to search for the larger group of good data points than to find the bad data points.

In the robust literature, highly robust multivariate estimators are currently available, among them the LAD estimator of Tableman [13] and the minimum covariance determinant (MCD) of Rousseeuw [7, 8] for robust location and scatter estimation, which can be computed efficiently with the FASTMCD algorithm of Rousseeuw and Van Driessen [10]. Also, for robust regression there are a number of famous estimators, among them the Least Trimmed Squares (LTS) estimator of Rousseeuw [7, 8], the MM-estimator of Yohai [15], and others. In this work, we briefly present our new approach for robust estimation of location, covariance, and regression, and afterwards indicate significant open problems for future work, aiming to improve robustness and efficiency.

The rest of this chapter is structured as follows. In Section 2, we present the desirable properties of robust estimators. In Sections 3, 4, and 5, we briefly present the most famous robust estimators together with our new techniques (Chatzinakos et al. [2], Zioutas et al. [18], Zioutas and Pitsoulis [6]) for location, scatter, and regression, respectively. At the end of each of Sections 3, 4, and 5, we discuss the open problems that should be studied in order to improve the performance of the new robust procedures.


2 Desirable Properties of Robust Estimators

In this section, we present some of the desirable properties of a robust estimator. Without loss of generality, we consider the linear regression model:

y = x^T β + u    (1)

where:

1. y is the response variable,
2. x = (1, x_1, ..., x_p)^T is the vector of explanatory variables,
3. β = (β_0, β_1, ..., β_p)^T is the parameter vector,
4. u is a random error that is normally distributed, u ∼ N(0, σ).

We observe a sample (x_1, y_1), ..., (x_n, y_n), and we wish to construct an estimator for the parameter β.

2.1 M-Estimates with a Bounded ρ-Function

A class of robust estimators for the regression model is that of M-estimators. A reasonable approach to robust regression estimation, where both the x_i and the y_i may contain outliers, is to use an M-estimate β̂ defined by:

∑_{i=1}^{n} ρ(r_i(β̂)/σ̂) = min    (2)

with a bounded ρ and a high breakdown point preliminary scale σˆ . If ρ has a derivative ψ, it follows that: n  i=1

ψ

r i

σˆ

xi = 0

(3)

where ψ is redescending. Consequently, the estimating equation (3) may have multiple solutions corresponding to multiple local minima of the function on the left-hand side of (2), and generally only one of them (the good solution) corresponds to the global minimizer βˆ defined by (2). We have to choose ρ and σˆ such as to attain both a high breakdown point and a high efficiency. We now discuss the breakdown point, influence function, asymptotic normality, and efficiency of such an estimate.
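To make the computation behind (2)–(3) concrete, the following is a minimal sketch of one standard way to search for a solution of the estimating equation (3): iteratively reweighted least squares with a Tukey bisquare ψ and a MAD-based preliminary scale. The bisquare ρ, the tuning constant c = 4.685, and the MAD scale are illustrative assumptions, not prescriptions from the text; as noted above, an iteration of this kind finds a local solution, and only one local solution corresponds to the global minimizer in (2).

```python
import numpy as np

def bisquare_weights(u, c=4.685):
    # w(u) = psi(u)/u for Tukey's bisquare: psi(u) = u*(1 - (u/c)^2)^2 for |u| <= c, else 0
    w = np.zeros_like(u)
    inside = np.abs(u) <= c
    w[inside] = (1.0 - (u[inside] / c) ** 2) ** 2
    return w

def m_estimate_irls(X, y, n_iter=50, tol=1e-8):
    n, p = X.shape
    X1 = np.column_stack([np.ones(n), X])         # prepend intercept column
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]  # LS start (a robust start is safer)
    r = y - X1 @ beta
    sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD preliminary scale
    for _ in range(n_iter):
        w = bisquare_weights(r / sigma)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X1 * sw[:, None], y * sw, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
        r = y - X1 @ beta
    return beta
```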


2.2 Breakdown Point

Roughly speaking, the asymptotic breakdown point (BP) of an estimate β̂ of the parameter β is the largest amount of contamination (proportion of outliers) that the data may contain such that β̂ still gives some information about β, i.e., the contamination should not be able to drive β̂ to infinity. Although the asymptotic BP is an important theoretical concept, we focus on the finite breakdown point (FBP) of β̂, which may be more useful for a finite sample.

Put z_i = (x_i, y_i) and write the estimate as β̂(Z_n), with Z_n = {z_1, . . . , z_n}. Then, define the finite breakdown point as:

    ε* = m*/n    (4)

where m* is the maximum number of outliers for which the estimate β̂(Z_m) remains bounded. The BP for each type of estimate has to be treated separately.
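As a small numerical illustration of (4) (our own example, not from the text): a single outlier can carry the sample mean arbitrarily far (so m* = 0 and ε* = 0), while the sample median stays bounded until roughly half the points are replaced, consistent with ε* approaching 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)

for n_out in (1, 5, 9, 10):
    z = x.copy()
    z[:n_out] = 1e9                     # replace n_out points by a huge outlier
    print(n_out, np.mean(z), np.median(z))
# The mean explodes already for n_out = 1; the median stays near 0 until
# about half the sample (n_out = 10 of n = 20) is contaminated.
```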

2.3 Bounded Influence Function

The influence function (IF) of an M-estimate with known σ under the model (1) is

    IF((x_0, y_0)) = (1/b) ψ( (y_0 − x_0^T β) / σ ) V_x^{-1} x_0,   with  b = E ψ'(u/σ)    (5)

and with V_x defined by:

    V_x = E(x x^T)    (6)

It follows that the IF is unbounded. However, the IFs for the cases of monotone and of redescending ψ are rather different. If ψ is monotone, then the IF tends to infinity for any fixed x_0 as y_0 tends to infinity. If ψ is redescending, then the IF tends to infinity only when x_0 tends to infinity, which means that large residual outliers have no influence on the estimate.

2.4 Asymptotic Normality

Assume that the regression model (1) holds, that x has finite variances, and that σ̂ converges in probability to some σ. Then, it can be proved under rather general conditions that β̂ is consistent for β, in the sense that

    β̂ → β    (7)

when n → ∞; furthermore, for large n the distribution of β̂ is approximately

    D(β̂) ≈ N( β, v (X^T X)^{-1} )    (8)

where

    v = σ² E[ψ(u/σ)²] / ( E[ψ'(u/σ)] )²    (9)

Thus, the approximate covariance matrix of an M-estimate differs only by a constant factor from that of the LS estimate. Hence, the efficiency of β̂ for normal u's does not depend on X; that is,

    Eff(β̂) = σ₀² / v    (10)

where v is given by (9), with the expectations computed for u ∼ N(0, σ₀²). It is easy to see that the efficiency does not depend on σ₀, and that the estimate β̂ defined above is consistent and asymptotically normal. In order to evaluate the performance of M-estimates, it is necessary to calculate their distributions. Except for the mean and the median, there are no explicit expressions for the distribution of M-estimates in finite sample sizes, but approximations and heuristic derivations can be found.

2.5 Equivariance

The original LS estimate of a regression model (1) satisfies

    β̂_LS(X, y + Xγ) = β̂_LS(X, y) + γ   for all γ ∈ R^p
    β̂_LS(X, λy) = λ β̂_LS(X, y)   for all λ ∈ R    (11)

and, for all nonsingular p×p matrices A,

    β̂_LS(XA, y) = A^{-1} β̂_LS(X, y)    (12)

The above properties are called, respectively, regression, scale, and affine equivariance. These are desirable properties, since they allow us to know how the estimate changes under these transformations of the data. The desired properties for an optimal robust estimator should therefore be:

• High breakdown point (robustness)
• Bounded influence
• Efficiency
• Asymptotic normality
• Consistency
• Equivariance

It is well known that there is a trade-off between the breakdown point (bdp) and efficiency. Therefore, a reasonable robust estimator should attain the maximum bdp ≅ 1/2 with efficiency as high as possible.

3 Robust Local Estimation, Multivariate Least Trimmed Absolute Deviation Estimation

Location estimation is one of the most important problems in statistical theory. It is well known that classical methods based on sample averages suffer in the presence of outliers. Using the median instead of the mean can partially resolve this issue, but not completely. For the univariate case, a better approach is the Least Trimmed Absolute Deviation (LTAD) estimator, which is known to have desirable asymptotic properties such as robustness, consistency, high breakdown, and normality. We extend the LTAD estimator to the multivariate case and study numerical methods for its computation. A major issue with LTAD estimation lies in the combinatorial nature of the problem, which makes it computationally very challenging. We propose a new trimming procedure that reformulates the multivariate LTAD problem as a mixed integer linear program (MILP), which is then shown to be equivalent to a linear program (LP) under some transformations.

The focus of this work is to estimate the unknown location parameter m of a family of distributions F_m, given observational data contaminated with an unknown number of outliers. The LTAD robust estimator for univariate data was introduced in [13], where it was shown to have desirable asymptotic properties such as robustness, consistency, high breakdown, and normality. Moreover, in [13] the author also presents an algorithm to compute the LTAD efficiently, in O(n log n) time. However, these methods do not generalize to higher dimensions. In [2], the LTAD is generalized to handle multivariate data using the Euclidean norm, and the resulting combinatorial optimization problem is solved by an approximate fixed-point-like iterative procedure. Computational experiments in [2] on both real and artificial data indicate that the proposed method efficiently identifies both location and scatter outliers in varying dimensions and under a high degree of contamination. In this work, we extend the results in [2] and present a different generalization of the LTAD, based on the L1 norm. It is shown that the linear programming relaxation of the resulting mixed integer program is integral, after applying an appropriate equidistance data transformation [18]. This implies that the LTAD can be computed as a series of linear programs, which can be solved efficiently using a sub-gradient optimization approach.


3.1 Least Trimmed Absolute Deviation Estimator, LTAD

Given a sample of n univariate observations X_n = {x_1, x_2, . . . , x_n}, where x_i ∈ R, i = 1, . . . , n, we can state the well-known median location estimate as follows:

    m(X_n) = arg min_m ∑_{i=1}^{n} | x_i − m |    (13)

The mean is another example of a location estimator for a data set X, as is the least median of squares [7], which is the midpoint of the subset containing half of the observations with the smallest range. If we assume that n − h of the observations are outliers, where h > n/2, we can define a robust version of the median, called the least trimmed absolute deviations (LTAD) estimator, by the following problem:

    m(X_n, h) = arg min_{m,T} ∑_{x∈T} | x − m |,   s.t. |T| = h, T ⊆ X_n    (14)

which implies that we must find the subset T of h observations out of n with the least sum of absolute deviations from its median. In order to satisfy the high breakdown property, the value of h is set to n/2. Solving (14) by complete enumeration would require computing the median for all C(n, h) possible subsets T ⊆ X_n and choosing the one with the minimum value, which is computationally infeasible even for moderate values of n. The LTAD was introduced by Tableman [13, 14] for fixed h = n/2, where in addition to showing favorable theoretical properties the author also provided a simple procedure for its computation, based on the observation that the solution to (14) will be the median of one of the following n − h contiguous subsets: {x_(1), . . . , x_(h)}, {x_(2), . . . , x_(h+1)}, . . . , {x_(n−h), . . . , x_(n)}. Therefore, it suffices to compute the median values for the above subsets and choose the one which minimizes the sum in (14).

Consider now the multidimensional version of the LTAD defined in (14), where we have p-variate observations X_n = {x_1, . . . , x_n} with x_i ∈ R^p, i = 1, . . . , n. Moreover, without loss of generality, we can assume that the observations are rescaled. The multivariate LTAD is defined as:

    LTAD:  m(X_n, h) = arg min_{m,T} ∑_{x∈T} ‖x − m‖₁,   s.t. |T| = h, T ⊆ X_n    (15)

The LTAD problem can be approximated by an iterative algorithm similar to the procedure described in [2] for solving the related least trimmed Euclidean distances (LTED) estimator, which leads to a fast-converging heuristic. However, although this algorithm is very fast, it almost always converges to a local optimum of unknown quality. We present a different solution method for the LTAD, by approximating its natural mixed integer nonlinear programming formulation with a mixed integer linear program whose linear programming relaxation is integral. We also develop specialized efficient solution methods for the resulting linear program.
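As a concrete illustration of Tableman's contiguous-subset computation described above, the following minimal sketch solves the univariate problem (14) directly. The window scan is done naively here, so it runs in O((n − h)·h) rather than the O(n log n) bound from [13]; the helper name and the test data are ours.

```python
import numpy as np

def ltad_univariate(x, h=None):
    """Univariate LTAD (14): best contiguous window of length h in the sorted data."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if h is None:
        h = n // 2                        # high-breakdown choice h = n/2
    best_obj, best_m = np.inf, None
    for s in range(n - h + 1):            # windows {x_(s+1), ..., x_(s+h)}
        window = x[s:s + h]
        m = np.median(window)
        obj = np.sum(np.abs(window - m))
        if obj < best_obj:
            best_obj, best_m = obj, m
    return best_m

data = np.concatenate([np.random.normal(0, 1, 40), np.random.normal(20, 1, 8)])
print(ltad_univariate(data))              # close to 0 despite the cluster at 20
```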

3.2 Mixed Integer Programming Formulation

The LTAD estimate (15) can be equivalently stated as the following mixed integer nonlinear programming problem:

    MINLP-LTAD:  min_{w,m} ∑_{i=1}^{n} w_i ‖x_i − m‖₁,
                 s.t. ∑_{i=1}^{n} w_i = h,  w_i ∈ {0, 1}, i = 1, . . . , n    (16)

where the zero-one weights w = (w_1, . . . , w_n) indicate whether observation i is an outlier (w_i = 0) or a good observation (w_i = 1). For any feasible tuple (w, m) for (15), let x_(i) denote the vector x ∈ X with the ith smallest value of the one-norm of x − m, and w_(i) its corresponding weight. Observe that as m approaches zero, ∑_{i=1}^{n} w_i ‖x_i − m‖₁ approaches ∑_{i=1}^{n} ‖w_i x_i − m‖₁; thus, for small values of m, problem (16) can be approximated by the following:

    min_{w,m} ∑_{i=1}^{n} ‖w_i x_i − m‖₁
    s.t. ∑_{i=1}^{n} w_i = h,    (17)
         w_i ∈ {0, 1}, i = 1, . . . , n,

which is equivalent to the following mixed integer linear program:

    MILP-LTAD:  min_{w,m} ∑_{i=1}^{n} ∑_{j=1}^{p} d_ij    (18)
    s.t. ∑_{i=1}^{n} w_i = h,
         w_i x_ij − m_j − d_ij ≤ 0,  i = 1, . . . , n, j = 1, . . . , p,
         −w_i x_ij + m_j − d_ij ≤ 0,  i = 1, . . . , n, j = 1, . . . , p,
         w ∈ {0, 1}^n


where d_ij = |w_i x_ij − m_j|, and X = (x_ij) is the n × p observation matrix whose rows are x_i^T for i = 1, . . . , n. There are two issues with approximating (16) by (17): first, ensuring that MILP-LTAD is a good approximation of MINLP-LTAD; second, being able to solve the resulting problem (18) efficiently. We resolve the first issue by iteratively transforming the data so that the optimal m approaches zero. For the second issue, we show that the resulting mixed integer linear programming problem is equivalent to a linear programming problem under certain assumptions.

3.3 Data Transformation

Denote by LP-LTAD the linear programming relaxation of (18), in which w ∈ [0, 1]^n:

    LP-LTAD:  min_{w,m} ∑_{i=1}^{n} ∑_{j=1}^{p} d_ij    (19)
    s.t. ∑_{i=1}^{n} w_i = h,
         w_i x_ij − m_j − d_ij ≤ 0,  i = 1, . . . , n, j = 1, . . . , p,
         −w_i x_ij + m_j − d_ij ≤ 0,  i = 1, . . . , n, j = 1, . . . , p,
         0 ≤ w_i ≤ 1,  i = 1, . . . , n

Let (w*_LP, m*_LP) be the optimal solution of LP-LTAD. If w*_LP is integer, then this LP solution is also an optimal solution of (18). If, in the optimal LP solution, m*_LP is equal to zero, then (w*_LP, m*_LP) is optimal for MILP-LTAD; that is, we can solve the linear programming relaxation and use it to obtain an optimal solution for the MILP in (18). Therefore, if we can transform the data in such a way that m*_LP gets closer to zero, then we can simply solve the LP to obtain an approximate solution for the LTAD. This leads to a procedure in which the data is iteratively transformed until its median value is less than some small value ε.
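A minimal sketch of how (19) can be assembled and solved with an off-the-shelf LP solver (here scipy.optimize.linprog, a stand-in for the CPLEX solver used in Section 3.5); the dense constraint matrix makes this suitable only for small n and p, and the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def lp_ltad(X, h):
    n, p = X.shape
    nv = n + p + n * p                       # variables: w (n), m (p), d (n*p)
    c = np.concatenate([np.zeros(n + p), np.ones(n * p)])  # minimize sum of d_ij

    # Inequalities:  w_i x_ij - m_j - d_ij <= 0   and   -w_i x_ij + m_j - d_ij <= 0
    A_ub = np.zeros((2 * n * p, nv))
    for i in range(n):
        for j in range(p):
            k = i * p + j
            A_ub[2 * k, i] = X[i, j];   A_ub[2 * k, n + j] = -1.0
            A_ub[2 * k, n + p + k] = -1.0
            A_ub[2 * k + 1, i] = -X[i, j];  A_ub[2 * k + 1, n + j] = 1.0
            A_ub[2 * k + 1, n + p + k] = -1.0
    b_ub = np.zeros(2 * n * p)

    A_eq = np.zeros((1, nv));  A_eq[0, :n] = 1.0          # sum_i w_i = h
    b_eq = np.array([float(h)])

    bounds = [(0, 1)] * n + [(None, None)] * p + [(0, None)] * (n * p)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[n:n + p]                      # (w*, m*)
```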

3.4 Solution of LP Relaxation

The LP defined in (19) can be solved efficiently for relatively small (n, p). However, for large values of n and p (e.g., n = 10,000 and p = 100), the problem has a million decision variables and two million constraints. Although this is still solvable, we need an efficient solution method, since this LP must be solved multiple times in order to obtain the final solution. In what follows, we exploit the special structure of the problem to develop such a method.

3.5 Computational Experiments

In this section, the performance of the LP-LTAD estimator is compared against that of the heuristic iterative algorithm for the LTED estimator given in [2]. The solutions of the associated problems (18) and (19) were computed using the CPLEX solver. A typical procedure for empirically evaluating the efficiency of robust estimators is to apply them to a clean data set and compare their performance; the results show that the proposed robust location estimator LP-LTAD improves significantly with respect to efficiency as the coverage h decreases. To study the finite-sample robustness and efficiency of the LTAD location estimates, we performed simulations with contaminated data sets. We recorded the mean square error (MSE) as a performance criterion for comparing the robust location estimates, and we also recorded the computational time in CPU seconds for each method. From the results, we concluded that the performance of the new LP-LTAD approach is quite competitive with the LTED.

3.6 Conclusions and Open Problems

In this work, we developed numerical methods for computing the multivariate LTAD estimator based on the L1 norm, by reformulating its original mixed integer nonlinear formulation. We showed that the MINLP is equivalent to an MILP, and subsequently to an LP under some conditions on the location estimate. The new LP-LTAD formulation can also be viewed as a new trimming procedure that trims away large residuals implicitly, by shrinking the associated observations to zero. The new approach yields a robust location estimate without losing efficiency, and the numerical experiments show that the new estimate performs well even for contaminated and correlated multivariate data. However, in order to strengthen the scientific contribution of the new approach, further work is needed on the following open problems:

1. When proposing a robust estimator, its robustness must be investigated. Several concepts for assessing robustness are available in the literature, some of which were presented in Section 2; the most promising are probably the finite breakdown point and the influence function. To label the proposed estimator as robust requires proving its robustness.


2. The robust estimator described in this chapter could be an interesting extension of the set of established robust location estimators. However, some of its properties, such as efficiency, affine equivariance, and the asymptotic normality of the estimator, should be stated with mathematical rigor and proven from a theoretical perspective.
3. The benefits of the new approach should be illustrated in a thorough simulation study and in other applications, especially transportation assignment problems. For instance, consider p shops in a large area and n potential locations for warehouses. Choosing the best k locations so as to minimize the total distances from warehouses to shops is a combinatorial problem; it can be addressed with the new LTAD approach by discarding the n − k remaining potential locations.
4. To motivate the development of a tailor-made LP algorithm, a clear advantage over existing methods, as implemented in standard solvers like CPLEX or Fort-MIP, needs to be quantified. Future research will involve formulating the corresponding 0–1 integer programs and comparing the exact solutions for moderate sizes with the ones provided by the iterative procedures, as well as testing multi-start strategies for improving the quality of the solutions.

4 Optimization Techniques for Robust Multivariate Location and Scatter Estimation

Outliers in a multidimensional data set can be hard to detect, especially when the dimension p exceeds 2, because we can no longer rely on visual inspection of scatterplots. Different approaches have been proposed for outlier detection in multidimensional data. The most famous of these, the minimum covariance determinant (MCD) estimates proposed by Rousseeuw [7, 8], is based on the minimization of a robust scale of Mahalanobis distances. The MCD method is a highly robust estimator whose objective is to find the h observations whose covariance matrix has the lowest determinant. The MCD demonstrates good performance on data sets of low dimension, but it is quite computationally intensive on larger data sets, since the exact solution requires a combinatorial search that grows exponentially with the dimension.

A computationally efficient procedure for identifying outliers was proposed by Filzmoser et al. [3], which is particularly effective in high dimensions. Their algorithm, coined PCOut, utilizes simple properties of principal components to identify outliers, leading to significant computational advantages for high-dimensional data. It is suitable for use on very large data sets where the number of dimensions exceeds the number of observations.

In Chatzinakos et al. [2], we propose a new two-stage procedure for multivariate outlier detection. In the first stage, a new unbiased multivariate median estimate called the least trimmed Euclidean deviation (LTED) is computed, which is the median center of the h points from the data set for which the Euclidean distance from the center is minimum. In the second stage, we employ the idea of the minimum covariance determinant (MCD) method. Although the LTED procedure does not lead to affine equivariant estimates, it is shown to perform well even under relatively high collinearity and contamination. We provide numerical evidence indicating that the method gains in robustness and efficiency by sacrificing some affine equivariance. Moreover, since the proposed LTED median estimator does not require computing the inverse of the covariance matrix, the method can also be applied to instances where the number of dimensions p is greater than the number of observations n, which is of particular interest in some applications.

4.1 Detection of Location Outliers

Location outliers have been defined by Filzmoser et al. [3] as those points which are described by a different location parameter than the majority of the data. Given a sample of n observations, we consider a priori that h of these observations, for h > 0.5n, are the clean data with location parameter m, and that the remaining n − h are potential outliers with different location parameters. A straightforward procedure to identify the n − h location outliers would be to find those h points, out of the C(n, h) possible subsets, whose sum of distances from the location parameter m is minimum. In this section, we present the least trimmed Euclidean deviation estimators.

4.1.1 Least Trimmed Euclidean Distance Estimator

Consider now the multidimensional case where we have p-variate observations X_n = {x_1, . . . , x_n} with x_i ∈ R^p, i = 1, . . . , n. A generalization of the median is the L1-median, defined as:

    m(X_n) = arg min_m ∑_{i=1}^{n} ‖x_i − m‖    (20)

where ‖·‖ stands for the Euclidean norm. The L1-median is the point in R^p at which the sum of Euclidean distances of the data points in X_n from that point is minimum. The L1-median has a breakdown point of 0.5, which makes it attractive in terms of robustness. Furthermore, it is equivariant under orthogonal transformations. Numerical computation of (20) involves the minimization of a convex function without constraints, and several algorithms from continuous optimization, both iterative and exact, are available. Analogously with the LTAD estimator, we propose the least trimmed Euclidean distances (LTED) estimator, defined as:

    m(X_n, h) = arg min_{m,T} ∑_{x∈T} ‖x − m‖,   s.t. |T| = h, T ⊆ X_n    (21)

The solution to (21) corresponds to the subset of h observations from X_n with the smallest L1-median value. The new approach starts with an iterative algorithm that computes a local optimum for the optimization problem (21) associated with the LTED estimator: choose any subset H ⊆ X_n of h data points, compute its L1-median m(H), use it to determine the h data points from X_n whose sum of distances from m(H) is least, and repeat the process. Given that we have a finite number of noncollinear data points, convergence of the process is guaranteed. The initial H can be any subset of X_n with cardinality h; however, we make the natural choice of the h closest observations to the median of all the observations in X_n. With respect to the computational requirements of this algorithm, there are two main tasks in each iteration. First, the L1-median of the set H has to be computed, which can be done efficiently using a Newton-type method given in [4]. Then, in order to find the h data points in X_n with the smallest sum of distances from m(H), we can simply sort the distances in increasing order, a process which requires O(n log n) time. Experimentally, it has been observed that only a few iterations are needed until convergence.
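A minimal sketch of the alternating procedure just described, with the L1-median computed by Weiszfeld-type fixed-point iterations rather than the Newton-type method of [4] (an implementation convenience of ours, not the authors' algorithm):

```python
import numpy as np

def l1_median(X, n_iter=200, eps=1e-10):
    """Weiszfeld iterations for the L1-median (20)."""
    m = np.median(X, axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, eps)               # guard against division by zero
        m_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(m_new - m) < eps:
            return m_new
        m = m_new
    return m

def lted_location(X, h):
    # start from the h observations closest to the coordinatewise median
    m = np.median(X, axis=0)
    idx = np.argsort(np.linalg.norm(X - m, axis=1))[:h]
    while True:
        m = l1_median(X[idx])
        new_idx = np.argsort(np.linalg.norm(X - m, axis=1))[:h]
        if set(new_idx) == set(idx):         # subset stable -> local optimum of (21)
            return m, new_idx
        idx = new_idx
```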

4.2 Detection of Scatter Outliers

Given the observation matrix X = (x_ij), let the expected value of each column x_j be

    E(x_j) = (1/n) ∑_{i=1}^{n} x_ij,   for j = 1, . . . , p.

The sample estimates of the expected value and covariance matrix of X will be

    μ = (E(x_1), E(x_2), . . . , E(x_p))^T    (22)

and

    Σ = (1/n) ∑_{i=1}^{n} (x_i − μ)(x_i − μ)^T    (23)

respectively. If a subset H of the rows is used to compute (22) and (23), then we will write μ_H and Σ_H, respectively. Scatter outliers are defined as those


observations from X_n which violate the true correlational structure of the majority of the observations, thus influencing the sample estimate Σ. Classical multivariate methods of estimation are based on the assumption that the set X_n = {x_1, . . . , x_n} is independent and identically distributed, with each x_i having a p-variate normal N_p(μ, Σ) distribution. Just as in the univariate case, a small number of atypical observations may completely alter the sample means and covariance. Worse still, a multivariate outlier need not be an outlier in any of the coordinates considered separately.

Our proposed method for robust estimation of the multivariate location μ and scatter matrix Σ consists of two stages. In the first stage, a robust location estimate m(X_n) with the corresponding coverage subset H_LTED is computed, with inputs X_n and h = (n + p + 1)/2. This location estimate discards the observations which lie far away from the L1-median of the data set, but:

• it cannot be used to detect observations which violate the correlational structure of the data set, and
• highly correlated data, i.e., good observations, may have been removed from the initial data set as outliers.

In the case that the true data distribution follows an elliptical pattern, the coverage H_LTED is not separated from X_n by an ellipsoid, which is a necessary condition for a robust covariance matrix (Rousseeuw and Driessen [10]). This is achieved in the second stage, where we detect observations in H_LTED which violate the correlational structure of the data set, including highly correlated observations from X_n − H_LTED. A well-known distance that may be used in detecting scatter outliers is the Mahalanobis distance of each observation, defined as

    D_{μ,Σ}(x_i) = √( (x_i − μ)^T Σ^{-1} (x_i − μ) )    (24)

for given mean μ and covariance Σ. The Mahalanobis distance identifies observations which lie far away from the data cloud, giving less weight to variables with large variances or to groups of highly correlated variables. Once the location outliers have been deleted in the first stage, we have a coverage set H_LTED which is used to compute μ_LTED and Σ_LTED as defined in (22) and (23), respectively. For a data set X_n with corresponding μ, Σ, and h ∈ [1, n], the final LTED scatter estimator results from the solution of the following problem:

    arg min_T ∑_{x∈T} D_{μ,Σ}(x),   s.t. |T| = h, T ⊆ X_n    (25)

which denotes the set of h observations from X_n with the least sum of Mahalanobis distances with respect to μ and Σ.
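A minimal sketch of the second stage: using μ_LTED and Σ_LTED computed from the first-stage coverage set, the h observations solving (25) are simply those with the smallest Mahalanobis distances. This is illustrative code of ours, assuming Σ_LTED is nonsingular.

```python
import numpy as np

def lted_scatter_stage(X, cover_idx, h):
    H = X[cover_idx]
    mu = H.mean(axis=0)                               # (22) on the coverage set
    S = (H - mu).T @ (H - mu) / len(H)                # (23) on the coverage set
    diff = X - mu
    # squared Mahalanobis distances (24) for all observations, including X_n - H
    d2 = np.einsum('ij,ij->i', diff, np.linalg.solve(S, diff.T).T)
    keep = np.argsort(d2)[:h]                         # the h smallest, as in (25)
    return X[keep].mean(axis=0), np.cov(X[keep].T, bias=True), keep
```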


4.3 Computational Experiments

To verify the correctness of the LTED approach and examine its behavior, we applied it to standard benchmark data sets, and we further compared the LTED estimator with the MCD estimator on correlated data and on multivariate outliers concentrated in a group. We also replicated the computational experiment of Filzmoser et al. [3] to further test the performance of the LTED approach against the MCD and PCOut estimators. The results demonstrate that LTED performs comparably well at identifying outliers, while also producing lower false positive percentages, and that it does well both for location outliers and for scatter outliers. We also examined simulation results for high-dimensional data, with p ranging from 50 to 2000. In contrast to the previous simulation with dimension p = 10, in this case we were not able to examine the performance of the MCD estimator, since it is not computationally feasible for such high dimensions. The LTED consistently produces lower false positive and false negative percentages than PCOut, and as the number of dimensions p increases, the performance of both algorithms improves significantly. One of the advantages of the proposed LTED robust procedure is its low computational requirements: as p increases, the PCOut method has the lowest rate of increase in computational time and LTED the second lowest, while other robust estimators become computationally infeasible on the same hardware configuration for high values of p.

4.4 Conclusions and Open Problems

The LTED is a novel robust location and scatter algorithm for multivariate data, based on an unbiased generalization of the L1-median to higher dimensions that is used in conjunction with the minimum covariance determinant estimation procedure. Extensive computational experiments on both real and artificial data indicate that the proposed method successfully identifies both location and scatter outliers in varying dimensions and under a high degree of contamination, with minimal computational requirements. While the proposed LTED algorithm shows promising results, a number of open questions remain for future study:

• When proposing a robust estimator, its robustness must be investigated; the most promising concepts are probably the finite breakdown point and efficiency. To label the proposed estimator as robust requires proving its robustness. What is the breakdown point of the LTED? With h < n, we get more bias and lose even more efficiency, so the choice of h should be investigated with respect to robustness and efficiency on a more rigorous mathematical basis.


• It is of great interest to have a robust method that works well for high-dimensional data. However, there is a limitation with the LTED for data where n < p or even n ≪ p.

…then the problem is NP-hard, as the 0–1 knapsack problem becomes a special case. An asymptotically optimal heuristic for this problem is provided in [7].

1.2 Selective Newsvendor

This section discusses a generalization of the single-period inventory planning problem with stochastic demand, known as the newsvendor problem. The newsvendor seeks an order quantity that maximizes expected profit, equal to the expected revenue from sales minus ordering costs and expected inventory shortage and overage penalty costs. We assume that the newsvendor has the option of choosing from a set of stochastic demand sources in order to sell the product. Let Q denote the order quantity, as before, and suppose the newsvendor incurs an inventory procurement cost of c per unit. The cost of an unsold item at the end of the selling period is h, which may be positive (corresponding to a disposal or holding cost) or negative (in the case of a positive salvage value). If the realized demand is greater than the stock level, the newsvendor incurs a shortage cost of b per unit. In the basic newsvendor problem, a single demand stream exists and the selling price is p per unit. The demand is assumed to be a random variable Y with probability density function (pdf) f_Y(y) and cumulative distribution function (cdf) F_Y(y). Then the expected single-period profit for the newsvendor can be written as

    EP(Q) = p [ ∫_0^Q y f_Y(y) dy + ∫_Q^∞ Q f_Y(y) dy ] − cQ
            − h ∫_0^Q (Q − y) f_Y(y) dy − b ∫_Q^∞ (y − Q) f_Y(y) dy,    (7)

which can be rewritten as

    EP(Q) = (p + h) μ_y − (p + b + h) ∫_Q^∞ (y − Q) f_Y(y) dy − (c + h) Q.    (8)

This is a concave function of Q, and solving dEP(Q)/dQ = 0 (using the Leibniz integral rule) provides the following equation for determining an optimal value of Q:

    F_Y(Q*) = (p − c + b) / (p + b + h).    (9)

Let us next assume the random demand is normally distributed with mean μ_y and standard deviation σ_y (we also assume that the distribution is such that the probability of negative demand is negligible). If Z follows the standard normal distribution with cdf Φ(z) and pdf φ(z), solving Φ(z*) = (p − c + b)/(p + b + h) gives us an optimal order quantity via

    Q* = μ_y + z* σ_y.    (10)

Defining L(z) = ∫_z^∞ (u − z) φ(u) du, which is widely known as the standard normal loss function (see [15]), and using (10), we can rewrite the expected profit function as (see [6])

    EP(Q*) = r μ_y − K(z*) σ_y,    (11)

where r = p − c is the net revenue from selling an item and K(z*) is defined via the equation K(z*) = (c + h) z* + (p + b + h) L(z*).

Next, instead of having a single demand source, suppose a set of independent markets is available, and the newsvendor can choose any subset of these markets to serve. Let J be the set of n potential markets, indexed by j, where the product is sold at a price of p_j per unit in market j and a fixed cost of s_j is incurred for serving market j. Then the net revenue obtained by selling a unit in market j equals r_j = p_j − c. Assume the demand in market j is a normally distributed random variable Y_j with mean μ_j and standard deviation σ_j, independent of all other markets. Further, as before, we assume that x_j is a binary variable representing the newsvendor's decision on whether or not to enter market j, and x is the vector of decision variables in B^n. Then the total demand Y will equal the sum of a set of independent normally distributed random variables, and will also follow a normal distribution, with mean ∑_{j∈J} μ_j x_j and standard deviation √(∑_{j∈J} σ_j² x_j). Assuming that inventory is centrally pooled for all markets and that the inventory shortage cost is independent of the market (see [16]), the expected profit function can be written as

    EP(Q*, x) = ∑_{j∈J} (r_j μ_j − s_j) x_j − K(z*) √( ∑_{j∈J} σ_j² x_j ),    (12)

which is solvable in O(n log n), as it is mathematically equivalent in structure to (5).
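The following minimal sketch evaluates (12) over the n + 1 nested market sets obtained after sorting; the ranking by (r_j μ_j − s_j)/σ_j² is our reading of the ordering that makes the O(n log n) procedure work (see [6]), so treat it as an assumption, as are the scalar p, c, b, h (market-independent prices and costs) and the function name.

```python
import numpy as np
from scipy.stats import norm

def selective_newsvendor(mu, sigma, r, s, p, c, b, h):
    z = norm.ppf((p - c + b) / (p + b + h))          # critical fractile, (9)-(10)
    L = norm.pdf(z) - z * (1.0 - norm.cdf(z))        # standard normal loss L(z)
    K = (c + h) * z + (p + b + h) * L                # K(z*) as defined after (11)

    order = np.argsort(-(r * mu - s) / sigma**2)     # assumed market ranking
    best_profit, best_set = 0.0, []
    for k in range(1, len(mu) + 1):                  # evaluate each nested prefix
        S = order[:k]
        profit = np.sum(r[S] * mu[S] - s[S]) - K * np.sqrt(np.sum(sigma[S] ** 2))
        if profit > best_profit:
            best_profit, best_set = profit, list(S)
    return best_profit, best_set
```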

1.3 Requirements Planning with Demand Selection

The economic lot-sizing problem (ELSP) considers inventory planning over a finite set of time periods {1, . . . , T} with dynamic (time-varying) deterministic demands and costs. While the EOQ model considers a continuous and infinite time horizon, the ELSP models situations involving a discrete set of time periods. Production in any period requires incurring a fixed cost, while holding inventory at the end of any period results in inventory holding costs. Thus, an optimal production/ordering plan must strike a balance between the fixed order costs and the inventory holding costs, while satisfying customer demands. We will consider two important generalizations of this problem class, in which the producer has the opportunity to choose which customers or individual orders to satisfy in each period. For illustrative purposes, we will assume that production capacity is unlimited (while it is straightforward to extend our problem formulations to the case of finite capacities, this generalization tends to make the resulting problems substantially more complex in terms of worst-case solution time).

We first provide a formulation of the basic ELSP. Suppose the demand in period t (t ∈ {1, . . . , T}) equals the deterministic value d_t, and each unit sold generates a net revenue of r_t. The cost of ordering each unit in period t equals c_t, and the fixed cost associated with placing an order in period t equals s_t. Let h_t denote the inventory holding cost per unit held at the end of period t. The planner must determine the order quantity Q_t in each period t. Let I_t represent the amount of inventory at the end of period t (with I_0 = 0), while the binary variable y_t corresponds to the decision on whether or not to place an order in period t (equal to 1 if an order is placed, and 0 otherwise). Then, the following provides a mixed integer linear programming (MILP) formulation for the ELSP:

    [ELSP]  Maximize ∑_{t=1}^{T} { r_t d_t − c_t Q_t − s_t y_t − h_t I_t }    (13)
    Subject to: I_t = I_{t−1} + Q_t − d_t,  t = 1, . . . , T,    (14)
                Q_t ≤ M y_t,  t = 1, . . . , T,    (15)
                Q_t, I_t ≥ 0,  t = 1, . . . , T,    (16)
                y_t ∈ {0, 1},  t = 1, . . . , T.    (17)

The objective function (13) equals the total revenue (a constant, in this case) minus the sum of variable and fixed ordering costs and inventory holding costs. Constraint set (14) ensures conservation of inventory from period to period. In constraint set (15), M corresponds to a large positive value such that production in any period t is unlimited when y_t = 1. Solving the ELSP requires minimizing a concave function over a linear set of constraints, implying that an optimal extreme point solution exists (see, e.g., [2]). The special structural properties of extreme points for this problem lead to an ability to solve the problem efficiently. The first exact algorithm for solving this problem was introduced in 1958 by Wagner and Whitin [19], who used an equivalent shortest path problem to solve the problem in O(T²). More efficient approaches have also been developed, for example, in [18]. Within a dynamic lot-sizing context, the flexibility associated with demand selection may arise in different forms. We consider two such forms, which we call order selection and market choice. Under order selection, each period's demand consists of a set of customer orders, each of which may be accepted or rejected, regardless of the order selection decisions made in other time periods. In the market choice case, each of a set of distinct customer markets has a T-period vector of demands over the planning horizon, and the planner must determine which markets to accept and which to reject. Accepting a market implies that the producer must meet all of the market's demands throughout the T-period planning horizon.
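Before turning to the selection variants, the following is a minimal sketch of the O(T²) shortest-path recursion referenced above: f(t) is the minimum cost of satisfying demands 1..t, and an order placed in period j covers demands j..t. This is our own implementation of the classical scheme (written for clarity; the inner sums can be accumulated incrementally to reach the O(T²) bound), not the code of [19].

```python
def elsp_cost(d, s, c, hold):
    """DP for the uncapacitated ELSP; d, s, c, hold are length-T lists."""
    T = len(d)
    INF = float('inf')
    f = [0.0] + [INF] * T                  # f[t]: min cost for periods 1..t
    for t in range(1, T + 1):
        for j in range(1, t + 1):          # last order placed in period j
            qty = sum(d[j - 1:t])          # order in j covers demands j..t
            # inventory at end of period u (j <= u < t) is the demand of u+1..t
            holding = sum(hold[u - 1] * sum(d[u:t]) for u in range(j, t))
            f[t] = min(f[t], f[j - 1] + s[j - 1] + c[j - 1] * qty + holding)
    return f[T]

# Example: four periods with a fixed cost that favors batching orders.
print(elsp_cost(d=[10, 20, 0, 30], s=[50, 50, 50, 50],
                c=[1, 1, 1, 1], hold=[0.5, 0.5, 0.5, 0.5]))
```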

1.3.1 ELSP with Order Selection

The economic lot-sizing model with order selection considers the situation in which a set of orders is available in each period, from which the planner may choose. Order l in period t requests d_lt units of the product and provides a unit revenue of r_lt. The total number of orders in period t is denoted by n_t. The information on all available orders is known before the start of the first period, and the producer must determine, at the beginning of the planning horizon, which orders will be satisfied in each period. If an order is accepted, it must be fully satisfied without shortages. Upon introducing a new binary decision variable z_lt, which represents the decision on whether to accept order l in period t (for all l = 1, . . . , n_t and t = 1, . . . , T), we can formulate this problem as:

    [ELSP-OS]  Maximize ∑_{t=1}^{T} ( ∑_{l=1}^{n_t} r_lt d_lt z_lt − c_t Q_t − s_t y_t − h_t I_t )    (18)
    Subject to: I_t = I_{t−1} + Q_t − ∑_{l=1}^{n_t} d_lt z_lt,  t = 1, . . . , T,    (19)
                Q_t ≤ M y_t,  t = 1, . . . , T,    (20)
                Q_t, I_t ≥ 0,  t = 1, . . . , T,    (21)
                z_lt ∈ {0, 1},  t = 1, . . . , T, l = 1, . . . , n_t,    (22)
                y_t ∈ {0, 1},  t = 1, . . . , T.    (23)

The revenue part of the objective function depends on the orders that we accept and is no longer a constant. It can be shown (see [6]) that an equivalent reformulation of ELSP-OS exists such that, if the binary constraint on z_lt is replaced by its linear relaxation (0 ≤ z_lt ≤ 1) for all t = 1, . . . , T and l = 1, . . . , n_t, then an optimal solution exists in which all z_lt values are exactly 0 or 1. A dual-ascent algorithm for solving this problem is presented in [6], with worst-case complexity O(n_max T²), where n_max corresponds to the maximum number of available orders in a period, i.e., n_max = max_{t∈T} {n_t}.

1.3.2 ELSP with Market Choice

In this section, we discuss the ELSP with market choice. The difference from the problem considered in the previous section is that, instead of choosing orders in each period, we must decide on the markets we will serve throughout the entire planning horizon. Assume a set J of n markets is available, and each market seeks demand satisfaction. The demand in each market varies with each time period; we denote the vector of demands for market j over the time horizon by d_j = (d_j1, . . . , d_jt, . . . , d_jT). If the vector of unit revenues from selling the product in market j is r_j = (r_j1, . . . , r_jt, . . . , r_jT), then the total revenue gained from serving market j over the planning horizon is R_j = r_j^T d_j. We let the binary variable z_j denote the decision on whether to choose market j (z_j = 1 if market j is chosen, and 0 otherwise). Then the ELSP with market choice can be formulated as follows:

    [ELSP-MC]  Maximize ∑_{j=1}^{n} R_j z_j − ∑_{t=1}^{T} (s_t y_t + c_t Q_t + h_t I_t)    (24)
    Subject to: I_t = I_{t−1} + Q_t − ∑_{j=1}^{n} d_jt z_j,  t = 1, . . . , T,    (25)
                Q_t ≤ M y_t,  t = 1, . . . , T,    (26)
                Q_t, I_t ≥ 0,  t = 1, . . . , T,    (27)
                z_j ∈ {0, 1},  j = 1, . . . , n,    (28)
                y_t ∈ {0, 1},  t = 1, . . . , T.    (29)

For any fixed value of the decision vector z, the problem reduces to the basic ELSP, as the total value of the demand in each period is completely determined.

On the other hand, if instead of fixing the z_j values we were to fix the production plan variable values (i.e., order quantities, inventory levels, and binary production variables in each period), the problem is also easy to solve (see [6]). Even though the problem is easily solvable for any given market selection scenario or production plan, the ELSP-MC is hard to solve, because the combination of these two decision types increases the problem complexity substantially. In fact, this problem was shown to be strongly NP-complete in [17]. Consequently, it cannot be solved optimally in polynomial time; moreover, it was shown in [17] that no approximation algorithm can guarantee a solution within ε% of optimality (for some ε > 0) unless P = NP. A reformulation of the problem (inspired by the facility location problem structure), in which each market's revenue is treated as an opportunity cost, does permit an approximation algorithm with an objective value no more than 1.582 times the optimal solution of the reformulated problem; however, this feasible solution is not guaranteed to have the same approximate distance from the optimal solution in the original problem formulation (see [6, 9, 10]).

1.4 Open Problems in Demand Selection

In this section, we discuss some of the open problems related to the various demand selection problems we have discussed thus far.

1.4.1 Selective Newsvendor with Correlated Customer Demands

Consider the selective newsvendor (SN) problem discussed in Section 1.2, where the demand sources were normally and independently distributed random variables Y_j, j ∈ J, with mean μ_j and standard deviation σ_j. Let us assume that these random demands are no longer independent, and let Σ denote the covariance matrix of the vector [Y_1, . . . , Y_n]. If x = [x_1, . . . , x_n] is the n-dimensional binary vector of our decision variables (for choosing or not choosing markets, as defined before), then the sum of these demands, U = ∑_{j∈J} Y_j x_j, will still follow a normal distribution, but with mean x^T μ (where μ is the vector of demand means) and standard deviation √(x^T Σ x). The expected profit function in equation (12) then becomes:

    EP(Q*, x) = ∑_{j∈J} (r_j μ_j − s_j) x_j − K(z*) √(x^T Σ x).    (30)

The resulting MINLP problem can no longer be solved through the proposed algorithm for the previously discussed SN problem, and it is not clear whether any special structure might exist that would enable solving this problem in polynomial time, thus making it an interesting open problem.

1.4.2 Selective Newsvendor with Customer-specific Shortage Costs

Again, let us revisit the assumptions made in the SN problem, which we discussed in Section 1.2. In our definition of the SN problem, a critical assumption was made that the unit shortage cost (when demand exceeds the stock level Q) was independent of the market in which the shortage occurred. In fact, we assumed that all demand was ultimately satisfied through expediting at the end of the period, and that the cost to expedite a unit to a market was market independent. This expediting assumption and the corresponding market-independent expediting cost assumption have two important effects. First, these assumptions permit using r_j μ_j as the expected revenue from sales in market j. Second, these assumptions lead to an inventory cost term that equals K(z*) √(∑_{j∈J} σ_j² x_j), where K(z*) = (c + h) z* + (p + b + h) L(z*), and is completely independent of any market-specific parameters.

Suppose instead that shortages result in lost sales, for example. Then the expected revenue in market j would equal the original term r_j μ_j minus the expected lost sales in market j. This would, in turn, imply the need for a shortage cost term that depends on the market in which the lost sale occurred. When multiple markets exist, however, characterizing the expected number of lost sales in a market requires stating allocation rules for the case in which the total demand across markets exceeds the available stock. Such allocation rules are prevalent in the literature on lateral inventory transshipments among warehouses or retail stores (see [14] for a thorough review of lateral transshipments and allocation rules). To the best of our knowledge, the SN problem with lost sales and inventory allocation rules has not been considered in the literature.

1.4.3 Selective Newsvendor with Non-normal Demands

Another generalization of the SN problem would involve eliminating the normal distribution assumption on individual demands. Assume instead that market demands are simply independent random variables Y_j, j ∈ J, with mean μ_j and standard deviation σ_j. For the special case in which all price and cost parameters are market independent, the expected profit function can be written as

    EP(Q) = (p + h) ∑_{j=1}^{n} μ_j x_j − (p + b + h) E[ ( ∑_{j=1}^{n} Y_j x_j − Q )^+ ] − (c + h) Q.    (31)

Even in this simplified case, in which all price and cost parameters are market independent, the problem is formidable in general, due to the need to compute E[(∑_{j=1}^{n} Y_j x_j − Q)^+] and to characterize the distribution of ∑_{j=1}^{n} Y_j x_j.


Normality may serve as a good approximation if we know in advance that the solution will contain a sufficient number of selected markets (by the central limit theorem). Moreover, certain types of underlying distributions (e.g., Poisson) might permit mathematical characterization of the sum of independent random variables that could lead to an ability to solve the problem efficiently. For cases in which prices and/or shortage costs are market-dependent, however, or customer demands are correlated, we expect this problem to be quite difficult in general.

1.4.4 Polynomially Solvable Special Cases of the Market Selection Problem

The economic lot-sizing problem that we considered in Section 1.3 was formulated based on an assumption of either order selection or market selection. Although the ELSP with order selection is solvable in polynomial time, the market selection version is N P-complete, and the reason for this is that in the market selection framework, the demands within a market in different periods are not independent. As a result, we cannot solve the problem by breaking it down into smaller subproblems and taking advantage of dynamic programming techniques. However, a number of special cases of this problem are actually solvable in polynomial time. For example, when the demand in each market remains constant throughout all periods (i.e., dj t = dj for all t ∈ T ), or varies based on a seasonal factor multiplier (i.e., dj t = at dj for all t ∈ T ), the problem can be solved using a similar algorithm to the one used in solving the EOQ model with market choice (see [6]). Other special cases, such as specially structured market demand and infinite holding costs (discussed in [17]), can be solved in polynomial time as well. It is therefore interesting to consider whether other specially structured demands and/or costs might lead to polynomial solvability.

2 Supplier Selection Problems

The nature of optimal procurement and inventory decisions at any stage of the supply chain depends directly on the availability of qualified suppliers and the quality of the products that they are able to provide. For any producer planning to satisfy a set of future demands, it is important to understand the scope of suppliers' capabilities, constraints, and pricing terms, as these factors will affect not only the producer's costs, but also the ability to manage demand risk under uncertainty, which will in turn influence shortage costs and excess inventory costs. It is thus important to broaden our investigations to account for the availability of multiple suppliers with varying pricing terms and capacity levels. These factors apply to various types of supply chain planning problems under different kinds of demand and planning horizon assumptions.


In addition to pricing terms and supply capacity, it is also important to consider the reliability of each supplier if a disruption of supply is a possibility. Disruptions may occur due to supplier-specific internal factors such as production line or equipment breakdowns, or they could be caused externally due to, for example, global market effects and daily price volatility of raw materials, or natural disasters, such as a loss of agricultural products due to insufficient rain, or factory damage due to an earthquake, tornado, or hurricane. Supplier reliability may be considered in various forms. For example, a supplier might be disrupted completely, thus supplying nothing, or the amount a supplier periodically provides might follow a certain probability distribution. In addition, the nature of a supplier’s reliability may even depend on the amount ordered from the supplier. This section will discuss classes of supplier selection problems, some of which are well solved, and some of which remain open. In each of the problem classes we consider, we assume that a set of available suppliers exists, and that the inputs provided by these suppliers are substitutable or interchangeable. That is, each supplier has been prequalified such that the quality of the inputs they provide for production has been deemed acceptable from the producer’s point of view.

2.1 Supplier Selection with Deterministic Demand

In this section, we discuss procurement problems involving supplier selection, beginning with the simplest case of planning for a single period with known demand d. To satisfy this demand, we can choose among a set I of n prequalified suppliers. Each supplier has a capacity K_i, and the cost of purchasing from supplier i, as a function of the amount ordered Q_i, is determined by the non-negative concave function g_i(Q_i), with g_i(0) = 0. This single-period procurement planning problem with supplier selection (SP-SS) can be formulated as:

    [SP-SS]  Minimize ∑_{i∈I} g_i(Q_i)    (32)
    Subject to: ∑_{i∈I} Q_i ≥ d,    (33)
                0 ≤ Q_i ≤ K_i,  i ∈ I.    (34)

As demonstrated in [3], this problem is NP-hard, and cannot be solved in polynomial time unless P = NP. A pseudopolynomial-time algorithm and a fully polynomial-time approximation scheme are proposed for solving the problem, and some polynomially solvable special cases are identified in [3]. Let us next generalize this problem to a multi-period inventory planning setting with dynamic and deterministic demand over a T-period planning horizon, where d_t is the amount of demand in period t ∈ {1, . . . , T}. Supplier i ∈ I has a capacity limit of K_it on the amount of inventory they can provide in period t, and the cost


of purchasing Q_it units from supplier i in period t is determined by the function g_it(Q_it). The inventory planner must determine the amount to order from each supplier in every period, prior to the beginning of the planning horizon. Letting I_t denote the amount in inventory at the end of period t, the ELSP with supplier selection can be formulated as follows:

    [ELSP-SS]  Minimize ∑_{t=1}^{T} ( ∑_{i∈I} g_it(Q_it) + h_t I_t )    (35)
    Subject to: I_t = I_{t−1} + ∑_{i∈I} Q_it − d_t,  t = 1, . . . , T,    (36)
                Q_it ≤ K_it,  i ∈ I, t = 1, . . . , T,    (37)
                Q_it, I_t ≥ 0,  i ∈ I, t = 1, . . . , T.    (38)

The terms in the objective function represent the ordering costs and the inventory holding costs, respectively. For cases in which each order cost function g_it(·) takes a fixed-charge form with linear variable costs, we can introduce the fixed order cost s_it for ordering from supplier i in period t, along with a per-unit cost of c_it. We also introduce the binary variable y_it for each i ∈ I and t = 1, . . . , T (y_it = 1 if an order is placed with supplier i in period t, and y_it = 0 otherwise); the resulting problem formulation becomes

    [ELSP-SS-FC]  Minimize ∑_{t=1}^{T} ( ∑_{i∈I} (s_it y_it + c_it Q_it) + h_t I_t )    (39)
    Subject to: I_t = I_{t−1} + ∑_{i∈I} Q_it − d_t,  t = 1, . . . , T,    (40)
                Q_it ≤ K_it y_it,  i ∈ I, t = 1, . . . , T,    (41)
                Q_it, I_t ≥ 0,  i ∈ I, t = 1, . . . , T,    (42)
                y_it ∈ {0, 1},  i ∈ I, t = 1, . . . , T.    (43)

A special case of ELSP-SS-FC arises when all suppliers have unlimited capacities. This version of the model is a special case of the requirements planning with substitutions (RPS) problem studied in [1] with a single end item. This special case is considered in a biofuel supply chain setting with multiple suppliers in [13], which shows that an optimal solution exists such that Q*_it Q*_jt = 0 for all i, j ∈ I with i ≠ j and t ∈ T, and Q*_it I_{t−1} = 0 for all i ∈ I and t ∈ T (the zero-inventory ordering policy). These properties facilitate an O(nT²) algorithm for this special case in [13], where n is the number of suppliers.

R. Mohammadivojdan and J. Geunes

2.2 Supplier Selection with Uncertain Demand This section considers a single-period inventory planning problem with uncertain demand. While the newsvendor problem has been the subject of study along various dimensions (such as the selective newsvendor considered in Section 1.2), some practical settings require the consideration of a newsvendor who may order stock from multiple suppliers. We suppose that the newsvendor wishes to satisfy a stochastic demand Y with known distribution (with pdf fY (y) and cdf FY (y)), and may choose from among a set of qualified suppliers. When suppliers offer different pricing terms, the problem can become quite complex. In the most general form of the problem, we consider a set I of n potential suppliers. Assume the cost of ordering qi units from supplier i ∈ I is given by the function gi (qi ), and the newsvendor would like to minimize the total expected cost associated with ordering, overstocking, and shortages by ordering qi units from supplier i, for each i ∈I (let q = (q1 , . . . , qn ) denote the vector of these order quantities, and define Q = i∈I qi ). Assume h represents the unit overstock cost, p is the unit selling price, and b is the unit shortage cost. Then, the expected profit function is written as EP (Q, q)=(p + h)μy −hQ−(p + b + h)E[(Y − Q)+ ] −

 i∈I

gi (qi ), (44)

and, assuming supplier i ∈ I has a capacity limit of Ki , the newsvendor problem with multiple suppliers (NPMS) can be formulated as [NPMS]

Maximize

EP (Q, q)

Subject to: 0 ≤ qi ≤ Ki , i ∈ I,  i∈I qi = Q.

(45) (46) (47)

First, let us consider the simple case in which gi (qi ) = ci qi for any qi between zero and Ki and for each i ∈ I . In this case, it is possible to show that an optimal solution exists with at most one supplier receiving an order quantity strictly between its bounds (0 and Ki ). Since the cost functions are linear, and hence both convex and concave, the generalized KKT conditions are necessary and sufficient for optimality. Analysis of the KKT conditions, and using the critical fractile i +b FY (Q∗i ) = p−c p+b+h for each supplier, along with a preference ordering of suppliers arranged in nondecreasing order of ci values, results in an algorithm for solving this problem that runs in O(n log n) (for details, please see [12]). We next consider the case where each supplier provides a marginal quantity discount; that is, supplier i provides a threshold value, Bi , above which any extra units purchased will receive a discount. In other words, the cost function for ordering from supplier i takes the form:  gi (qi ) =

if 0 ≤ qi < Bi , c0i qi , c0i Bi + c1i (qi − Bi ), if Bi ≤ qi ≤ Ki ,

Supply and Demand Selection Problems in Supply Chain Planning

75

where c0i > c1i for each supplier i ∈ I . This corresponds to a piecewise linear concave function with one breakpoint, and the expected profit function will thus correspond to a sum of convex and concave functions. The generalized KKT conditions will be necessary for optimality, although they will no longer be sufficient. Defining two critical values (c0i and c1i ) for each supplier, we can partition the [0, 1] interval into 2n + 1 sub-intervals, one of which contains FY (Q∗ ), where Q∗ denotes the optimal order quantity. It is possible to show that a polynomial number of candidate KKT points exists, and that an optimal solution can be found in O(n2 Qmax ), where Qmax corresponds to an upper bound on the maximum possible demand in any period (see [12]). This can also be generalized to the case in which the quantity discount structure associated with supplier i contains multiple breakpoints 0 = Bi0 < Bi1 < · · · < Bimi < Bimi +1 = Ki . Then the cost function will be a piecewise linear function with slope cri for Bir ≤ qi ≤ Bir+1 . A similar algorithm can be created for this case as well with a time complexity that also depends on the total number of breakpoints (in addition to the number of suppliers and the maximum demand quantity). Next, let us briefly consider another common discount structure known as the allunits discount. In this case, if an order is placed with the supplier that exceeds the threshold, then the discount is applied to all of the units purchased from the supplier. The cost function for supplier i in this case becomes:  gi (qi ) =

c0i qi , if 0 ≤ qi < Bi , c1i qi , if Bi ≤ qi ≤ Ki ,

with c0i > c1i . This function is no longer concave, and we cannot rely on the KKT conditions. However, we can show that there exists an optimal solution where at most one supplier receives an order that is not equal to 0, Bi or Ki . And, if the order quantity for supplier i falls on a segment that is strictly between 0 and Bi , or is strictly between Bi and Ki , then the value of FY (Q) must equal the critical fractile associated with the slope (unit cost) associated with the corresponding segment. Using this special property, an algorithm is possible for solving this problem in O(n2 Qmax ), the same as in the marginal discount case (see [12]).

2.3 Supplier Selection with Unreliable Suppliers

In the previous sections, we discussed problems with multiple suppliers, each of which always delivered exactly what was ordered. In certain practical settings, suppliers may be subject to disruptions due to internal or environmental uncertainty, and accounting for this possibility can facilitate a more broadly applicable set of models for supplier selection. The sources of unreliability may be global and apply to all suppliers, or they may be local and supplier-specific. We


focus on individual supplier disruptions, as global factors affecting all suppliers can typically be accounted for within overall scenario probabilities. First we discuss the case where the quantity delivered by each supplier is a random fraction of the order placed by the producer. Let us consider a newsvendor problem, for instance, and assume a set I of n suppliers is available. If we order q_i units from supplier i, then this supplier will deliver R_i q_i units, where R_i is a random variable representing the reliability of supplier i, and is normally distributed with mean μ_i and variance σ_i² (such that 0 ≤ μ_i ± 3σ_i ≤ 1, in order to ensure that the probability of a value outside [0, 1] is negligible). The newsvendor must choose a subset of suppliers and the amount to order from each (q_i), in order to maximize expected profit. Let x_i denote a binary variable corresponding to our decision on whether or not to order from supplier i ∈ I. The unit purchase cost for supplier i equals c_i, and s_i denotes a corresponding fixed cost for ordering from supplier i, while K_i denotes the capacity of supplier i. As before, Y is the random variable for demand, p is the unit selling price, and b is the unit shortage cost. Letting w denote the unit salvage value of the remaining inventory at the end of the period, the expected profit function can be written as

    EP(q) = (p − w) μ_Y − Σ_{i∈I} (w − c_i) μ_i q_i − (b − w) E[(Y − Σ_{i∈I} R_i q_i)^+] − Σ_{i∈I} s_i x_i.        (48)

The newsvendor problem with multiple unreliable suppliers is discussed in [11], and can be formulated as follows:

    [NPMUS]   Maximize  EP(q)                                    (49)
              Subject to:  0 ≤ q_i ≤ K_i x_i,  i ∈ I,            (50)
                           x_i ∈ {0, 1},  i ∈ I.                 (51)

An efficient exact solution approach for the problem's linear relaxation, as well as heuristic algorithms for the original problem, are proposed in [11]. A more general approach for considering this type of reliability (as in [4]) is to define the production capability of a supplier as a function K_i(q_i, R_i), where R_i is a non-negative random variable with known pdf and cdf. If the amount delivered by supplier i is denoted by S_i, then S_i = min{q_i, K_i} is a function of both the random variable R_i and the order quantity q_i. With this definition, when K_i = ∞ the supplier is completely reliable, and when K_i(q_i, R_i) = R_i q_i the problem is similar to the previously discussed NPMUS. Under the general function K_i(q_i, R_i), which is assumed to be continuous and differentiable for each supplier, the total amount delivered by all suppliers equals S_T = Σ_{i∈I} S_i, and the expected profit function of the newsvendor becomes

    EP(q) = E[ p min{Y, S_T} + w (S_T − Y)^+ − b (Y − S_T)^+ − Σ_{i∈I} c_i S_i ],        (52)

where qi ≥ 0 for i ∈ I . This problem is considered in [4], where they use the KKT conditions to characterize optimal solution properties and develop policies for choosing suppliers based on their cost functions and reliabilities.
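As a simple computational aid, expected profit functions such as (52) can be estimated by Monte Carlo simulation for any given order vector; the following sketch assumes user-supplied samplers for the capability function K_i(q_i, R_i) and for demand, and is an illustration rather than part of the treatment in [4].

```python
def estimate_ep(q, c, p, w, b, sample_cap, sample_demand, n_samples=10000):
    """Monte Carlo estimate of the expected profit in (52) (sketch).

    q: order quantities; c: unit costs; p, w, b: price, salvage value and
    shortage cost; sample_cap(i, q_i): draws K_i(q_i, R_i); sample_demand():
    draws Y. Each replication computes the delivered amounts
    S_i = min(q_i, K_i) and the resulting single-period profit.
    """
    total = 0.0
    for _ in range(n_samples):
        S = [min(qi, sample_cap(i, qi)) for i, qi in enumerate(q)]
        ST = sum(S)                       # total amount delivered
        Y = sample_demand()
        total += (p * min(Y, ST)          # sales revenue
                  + w * max(ST - Y, 0.0)  # salvage of leftover stock
                  - b * max(Y - ST, 0.0)  # shortage penalty
                  - sum(ci * Si for ci, Si in zip(c, S)))
    return total / n_samples
```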

2.4 Open Problems in Supplier Selection

In this section, we discuss some of the open problems related to the various supplier selection problems considered thus far.

2.4.1 Integrated Economic Lot-Sizing and Supplier Selection

Problem ELSP-SS, discussed in Section 2.1, considers production planning when an input required for production may be procured from a number of interchangeable suppliers. As noted earlier, the fixed-charge version of this model with unlimited supplier capacities corresponds to a special case of the requirements planning problem with substitutions (RPS) analyzed in [1]. This special case arises in the RPS problem when only a single end product exists. It is worthwhile to note that, to the best of our knowledge, the RPS problem in [1] has not been shown to be NP-complete, and this therefore represents an open problem for exploration. The algorithm provided in [1] is polynomial in T for a fixed number of "suppliers" (or components, in the language of that paper), but is exponential in the number of suppliers. We note that this problem was shown to be NP-complete when holding costs are unrestricted in sign (and may, therefore, take negative values) in [20]; however, the same result has not been shown to hold for the case of nonnegative holding costs (nor has a polynomial-time algorithm been identified). The problem ELSP-SS-FC is perhaps worthy of further exploration. It is clear that the case in which a single supplier exists with time-varying and finite capacities corresponds to the capacitated economic lot-sizing problem, which was demonstrated to be NP-complete in [5]. However, it is not clear whether the case in which each supplier has time-invariant capacity limits might be polynomially solvable. In addition, the multiple end-item version of this problem with supplier capacity limits, which corresponds to the capacity-constrained version of the RPS problem [1], has not been explored in the literature to the best of our knowledge.

2.4.2 Newsvendor with Multiple Suppliers and Concave Procurement Costs

Consider the newsvendor problem with multiple suppliers discussed in Section 2.2. We considered linear supply cost functions as well as marginal and all-units discounts. This problem can be generalized to account for cases in which the cost function for each supplier is concave but does not fall into one of the aforementioned categories. In the special cases considered in Section 2.2, the objective function consists of a sum of convex and piecewise linear functions; as a result, the objective functions considered were all piecewise convex. The special structure and properties of the problem in these cases thus permitted efficient methods for finding an optimal solution. If we suppose, on the other hand, that each supplier's cost function is strictly concave and increasing in the order quantity, the problem becomes very different, and it is not clear whether it is possible to exploit some special structure in solving the problem under this assumption. To the best of our knowledge, this problem type has not yet been studied.

2.4.3 All-or-Nothing Supplier Reliability

Another consideration with respect to unreliable suppliers lies in accounting for cases where the supplier delivers the quantity ordered with a certain (and supplier-specific) probability 1 − π, or fails to deliver with probability π (and thus delivers nothing). Such suppliers have been referred to as all-or-nothing suppliers in the literature. Let us revisit the newsvendor problem with multiple unreliable suppliers, with the assumption that all suppliers are all-or-nothing suppliers, i.e., supplier i will be disrupted with probability π_i. Let U denote the set of all possible scenarios associated with supplier disruption states, and denote the probability of scenario u ∈ U as P_u. Further, letting b denote the unit shortage cost and h the unit overstock cost, the expected overstocking and understocking costs under scenario u can be written as G(Q_u) = h E[(Q_u − Y)^+] + b E[(Y − Q_u)^+], where Q_u denotes the quantity stocked by the newsvendor in scenario u ∈ U. The newsvendor problem with multiple all-or-nothing suppliers (NP-AON) can then be formulated as:

    [NP-AON]   Minimize  Σ_{i∈I} Σ_{u∈U_i} P_u c_i q_i + Σ_{u∈U} P_u G(Q_u)        (53)
               Subject to:  0 ≤ q_i ≤ K_i,  i ∈ I,                                 (54)
                            Σ_{i∈I} q_i R_{iu} = Q_u,  u ∈ U.                      (55)

In the above formulation Ui denotes the set of scenarios in which supplier i is not disrupted, and the parameter Riu represents the availability of supplier i in scenario u, which takes a value of 0 (disrupted) or 1 (not disrupted). As can be easily seen,


since the number of possible scenarios for the case of n suppliers is 2^n, the number of terms in the objective function grows exponentially with n. This increases the problem's complexity enormously and makes the problem difficult to formulate explicitly and solve as the number of suppliers increases. In practical cases, however, the number of potential suppliers for a production input is not likely to be so great as to create problems of enormous or unmanageable size. Thus, because problem NP-AON is a convex program, it is easily solvable by a commercial solver for problems with a reasonable number of suppliers. The model is, nevertheless, worthy of further exploration to determine whether customized and efficient solution methods might be possible, and whether valuable managerial insights can be derived from its analysis.
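To illustrate the exponential scenario structure, the following minimal sketch enumerates the 2^n disruption scenarios and their probabilities for all-or-nothing suppliers; the function name is an illustrative assumption.

```python
from itertools import product

def aon_scenarios(pi):
    """Enumerate the 2^n disruption scenarios of problem NP-AON (sketch).

    pi: list of disruption probabilities pi_i. Yields pairs (R_u, P_u),
    with R_u the availability vector (R_iu = 1 if supplier i delivers) and
    P_u the scenario probability, assuming independent disruptions.
    """
    n = len(pi)
    for R in product((0, 1), repeat=n):
        P = 1.0
        for i in range(n):
            P *= (1.0 - pi[i]) if R[i] else pi[i]
        yield R, P
```

For n suppliers this loop runs 2^n times, which is exactly why an explicit formulation of NP-AON becomes unmanageable as n grows.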

3 Combined Supplier and Demand Selection Problems

This section poses a new class of problems in which both demand selection and supplier selection are possible. For example, suppose we have a set of potential suppliers I, indexed by i, and a set of potential customers J, indexed by j. Let d_{jt} denote the demand of customer j in period t ∈ {1, …, T}, where T is the number of time periods in the planning horizon. Let K_{it} denote the capacity of supplier i in period t. We define z_{jt} as the percentage of customer j demand in period t that the producer will satisfy, while y_{it} corresponds to the amount of supply obtained from supplier i in period t. Each unit of customer j demand in period t provides a revenue of r_{jt}, while it costs g_{it}(y_{it}) to obtain y_{it} units of production input from supplier i in period t. Assume for convenience that one unit of supply from any supplier may be used to satisfy a unit of demand for any customer, and let h_t denote the cost to hold a unit in inventory at the end of period t. We can then formulate a dynamic, deterministic, multi-period demand and supplier selection problem (DSSP) as follows:

    [DSSP]   Maximize  Σ_{t=1}^{T} [ Σ_{j∈J} r_{jt} d_{jt} z_{jt} − Σ_{i∈I} g_{it}(y_{it}) − h_t I_t ]          (56)
             Subject to:  I_t = I_{t−1} + Σ_{i∈I} y_{it} − Σ_{j∈J} d_{jt} z_{jt},   t = 1, …, T,                (57)
                          y_{it} ≤ K_{it},   i ∈ I,  t = 1, …, T,                                               (58)
                          y_{it} ≥ 0,   i ∈ I,  t = 1, …, T,                                                    (59)
                          0 ≤ z_{jt} ≤ 1,   j ∈ J,  t = 1, …, T,                                                (60)

where I_t denotes the inventory held at the end of period t.

The DSSP generalizes the requirements planning with pricing (RPP) problem with piecewise linear revenue functions in [8] to permit multiple capacitated suppliers. We may consider different variants of this problem such that:

• Demand selection must be all-or-nothing, which corresponds to requiring z_{jt} ∈ {0, 1} for all j ∈ J and t = 1, …, T;


• Market selection is required, where, in addition to the above requirement, we also require z_{jt} = z_{j,t+1} for t = 1, …, T − 1, i.e., satisfying the demand of any customer j in one period requires satisfying the customer's demand in all periods;
• Each supplier has stationary capacity limits, i.e., K_{it} = K_i for t = 1, …, T, for all i ∈ I;
• Supplier capacities are unlimited;
• The g_{it} functions are concave (including the special case of a fixed-plus-linear form).

We next consider a new class of demand and supplier selection problems in a single-period setting with uncertain customer demands. This problem generalizes the selective newsvendor problem in [16] to allow for the possibility of multiple suppliers. In this problem class, market j demand is a random variable D_j that is approximated by a normal distribution with expected value μ_j and standard deviation σ_j. The decision maker wishes to determine a subset of the market demands to satisfy, where z_j = 1 if market j is selected and z_j = 0 otherwise, and each unit of demand satisfied in market j produces a revenue of r_j. The selective newsvendor orders Q units from suppliers in order to satisfy demand in the selected markets, where the total amount of selected demand equals D_z = Σ_{j∈J} D_j z_j. Assuming a cost of c per unit of stock, a salvage value per unit of leftover inventory of v, an expediting cost of e per unit of inventory shortage, and a fixed cost of s_j for entering market j, the selective newsvendor problem requires maximizing the expected profit G(Q, z), defined as

    G(Q, z) = Σ_{j∈J} (r_j μ_j − s_j) z_j − cQ + v E[(Q − D_z)^+] − e E[(D_z − Q)^+].          (61)
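Under the normality assumption, the two loss terms in (61) admit the standard closed form E[(Q − D_z)^+] = (Q − m)Φ(u) + s φ(u) with u = (Q − m)/s, where m and s are the mean and standard deviation of D_z. The following sketch uses this identity; it additionally assumes independent market demands when aggregating the variance, an assumption made only for the illustration.

```python
from math import sqrt
from scipy.stats import norm

def selective_newsvendor_profit(z, Q, r, s_cost, mu, sigma, c, v, e):
    """Evaluate G(Q, z) of (61) for a fixed market selection z (sketch).

    z: 0/1 market selections; r, s_cost, mu, sigma: per-market revenue,
    entry cost, demand mean and std dev; c, v, e: unit cost, salvage value
    and expediting cost. Markets are assumed independent, so the selected
    demand D_z is normal with the aggregated mean and variance.
    """
    m = sum(mu[j] * z[j] for j in range(len(z)))
    sd = sqrt(sum(sigma[j] ** 2 * z[j] for j in range(len(z))))
    if sd == 0:
        overage = max(Q - m, 0.0)          # degenerate case: no market selected
    else:
        u = (Q - m) / sd
        overage = (Q - m) * norm.cdf(u) + sd * norm.pdf(u)   # E[(Q - D_z)^+]
    underage = overage - (Q - m)                              # E[(D_z - Q)^+]
    revenue = sum((r[j] * mu[j] - s_cost[j]) * z[j] for j in range(len(z)))
    return revenue - c * Q + v * overage - e * underage
```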

In the presence of supplier selection decisions, we let q_i denote the quantity procured from supplier i ∈ I, and let g_i(q_i) and K_i denote the corresponding cost and supplier capacity, respectively. Then the selective newsvendor problem with multiple suppliers (SNPMS) can be formulated as follows:

    [SNPMS]   Maximize  G(q, z)                                  (62)
              Subject to:  0 ≤ q_i ≤ K_i,  i ∈ I,                (63)
                           z_j ∈ {0, 1},  j ∈ J,                 (64)

where

    G(q, z) = Σ_{j∈J} (r_j μ_j − s_j) z_j − Σ_{i∈I} g_i(q_i)
              + v E[(Σ_{i∈I} q_i − Σ_{j∈J} D_j z_j)^+] − e E[(Σ_{j∈J} D_j z_j − Σ_{i∈I} q_i)^+].          (65)


The SNPMS generalizes the SNP to permit ordering from multiple capacitated suppliers with general order cost functions (instead of linear order costs). Identifying structural properties of optimal solutions and efficient solution methods for special cases of the DSSP and the SNPMS problems identified in this subsection may serve as interesting directions for future research.

4 Chapter Summary

In this chapter, we discussed several fundamental supply chain planning problems in which multiple suppliers and/or demand sources are available. First, we discussed the availability of multiple demand markets, where the inventory planner wishes to maximize profit (or minimize cost) by choosing a subset of these demand sources to satisfy. Next, we considered cases where multiple suppliers are available, and the problem requires determining a set of suppliers to choose and the amount to order from each. In each case, we discussed several open problems that fall within the problem category. Finally, we introduced a new class of problems incorporating both demand and supplier selection decisions, and posed interesting unexplored research directions arising within this new problem class.

References

1. Balakrishnan, A., Geunes, J.: Requirements planning with substitutions: exploiting bill-of-materials flexibility in production planning. Manuf. Serv. Oper. Manage. 2(2), 166–185 (2000)
2. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, 3rd edn. Wiley, Hoboken, NJ (2006)
3. Burke, G.J., Geunes, J., Romeijn, H., Vakharia, A.: Allocating procurement to capacitated suppliers with concave quantity discounts. Oper. Res. Lett. 36(1), 103–109 (2008)
4. Dada, M., Petruzzi, N., Schwarz, L.: A newsvendor's procurement problem when suppliers are unreliable. Manuf. Serv. Oper. Manage. 9(1), 9–32 (2007)
5. Florian, M., Lenstra, J.K., Rinnooy Kan, A.H.G.: Deterministic production planning: algorithms and complexity. Manage. Sci. 26(7), 669–679 (1980)
6. Geunes, J.: Demand Flexibility in Supply Chain Planning. Springer Science and Business Media, Berlin (2010)
7. Geunes, J., Shen, Z., Romeijn, H.: Economic ordering decisions with market choice flexibility. Naval Res. Logist. 51(1), 117–136 (2004)
8. Geunes, J., Romeijn, H., Taaffe, K.: Requirements planning with pricing and order selection flexibility. Oper. Res. 54(2), 394–401 (2006)
9. Geunes, J., Levi, R., Romeijn, H., Shmoys, D.: Approximation algorithms for supply chain planning and logistics problems with market choice. Math. Program. 130(1), 85–106 (2011)
10. Levi, R., Geunes, J., Romeijn, H., Shmoys, D.: Inventory and facility location models with market selection. In: Integer Programming and Combinatorial Optimization, pp. 111–124. Springer, Berlin (2005)
11. Merzifonluoglu, Y., Feng, Y.: Newsvendor problem with multiple unreliable suppliers. Int. J. Prod. Res. 52(1), 221–242 (2014)
12. Mohammadivojdan, R., Geunes, J.: The newsvendor problem with capacitated suppliers and quantity discounts. Eur. J. Oper. Res. 271, 109–119 (2018)
13. Palak, G., Ekşioğlu, S., Geunes, J.: Analyzing the impacts of carbon regulatory mechanisms on supplier and mode selection decisions: an application to a biofuel supply chain. Int. J. Prod. Econ. 154, 198–216 (2014)
14. Paterson, C., Kiesmüller, G., Teunter, R., Glazebrook, K.: Inventory models with lateral transshipments: a review. Eur. J. Oper. Res. 210, 125–136 (2011)
15. Silver, E., Pyke, D., Peterson, R.: Inventory Management and Production Planning and Scheduling, 3rd edn. Wiley, New York (1998)
16. Taaffe, K., Geunes, J., Romeijn, H.: Target market selection and marketing effort under uncertainty: the selective newsvendor. Eur. J. Oper. Res. 189(3), 987–1003 (2008)
17. Van den Heuvel, W., Kundakcioglu, E., Geunes, J., Romeijn, H., Sharkey, T., Wagelmans, A.: Integrated market selection and production planning: complexity and solution approaches. Math. Program. 134(2), 395–424 (2011)
18. Wagelmans, A., van Hoesel, S., Kolen, A.: Economic lot sizing: an O(n log n) algorithm that runs in linear time in the Wagner–Whitin case. Oper. Res. 40(S1), S145–S156 (1992)
19. Wagner, H., Whitin, T.: Dynamic version of the economic lot sizing model. Manage. Sci. 5(1), 89–96 (1958)
20. Wu, D., Golbasi, H.: Multi-item, multi-facility supply chain planning: models, complexities, and algorithms. Comput. Optim. Appl. 28, 325–356 (2004)

Open Problems in Green Supply Chain Modeling and Optimization with Carbon Emission Targets

Konstantina Skouri, Angelo Sifaleras, and Ioannis Konstantaras

Abstract Research on pollutant emissions management is developing into an essential part of the green supply chain landscape, as more industries try to make it part of their strategy amid pressure from customers, competitors, and regulatory agencies. This has resulted in an increased tendency to focus not only on financial costs in the production process but also on its impact on society. This public impact comprises, for instance, environmental implications, such as the emission of pollutants during production. The carbon tax and emissions trading mechanisms are the most effective market-based choices used to lower carbon emissions. These mechanisms are incorporated into the development of inventory lot sizing models, the results of which could improve the effectiveness of carbon management in the supply chain. The chapter presents some open problems associated with the lot sizing problem under emissions constraints.

Keywords Green supply chain · Carbon emission optimization · Inventory lot sizing modeling

K. Skouri
Department of Mathematics, University of Ioannina, Ioannina, Greece
e-mail: [email protected]
A. Sifaleras ()
Department of Applied Informatics, School of Information Sciences, University of Macedonia, Thessaloniki, Greece
e-mail: [email protected]
I. Konstantaras
Department of Business Administration, School of Business Administration, University of Macedonia, Thessaloniki, Greece
e-mail: [email protected]
© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_6


1 Introduction

Emissions of the anthropogenic greenhouse gases (GHG) that drive climate change, and their impacts around the world, are increasing. In a recent estimate by the Committee on Climate Change [7], the cumulative emission of greenhouse gases needs to be reduced by about 60% (below 1990 levels) by the year 2030. The European Commission (EU) is looking at cost-efficient ways to make the European economy more climate-friendly. Its low-carbon economy roadmap suggests that by 2050, the EU should cut emissions to 80% below 1990 levels; to this end, emissions should be cut by 40% by 2030 and by 60% by 2040 (http://ec.europa.eu/clima/policies/strategies). Consequently, many countries endeavor to reduce these greenhouse gases, as formalized in treaties, such as the Kyoto Protocol (United Nations [17]), as well as in legislation, such as the European Union Emissions Trading System (European Commission 2010) (Helmrich et al. [12]).

When discussing emissions, most companies and people automatically start looking at electricity and other operational emission sources. However, emissions in the supply chain are a substantial part of a company's footprint, and should not be ignored just because they are outside of direct operational control. In an analysis of the emission inventories of over 4000 companies, upstream emissions were on average more than twice the operational emissions of a company (CDP Report, December 2015, https://www.cdp.net/CDPResults/committing-to-climateaction-in-the-supply-chain.pdf). In other words, the majority of companies are focused on the physical processes for reducing emissions and not on operational practices and policies, such as production, transportation, inventory, etc. (Benjaafar et al. [6]).

Given the potential impact of operational decisions on carbon emissions, there is a need for extensive model-based research that incorporates carbon emission targets. The traditional quantitative models in Operations Management, which focus on either minimizing costs or maximizing profits, have to be extended to incorporate the carbon footprint [10]. These models can then be utilized to see how carbon emissions may influence operational choices and, especially, to inform operations managers on how basic policies, such as mandatory emission caps, taxes on carbon emissions, and emission cap and trade, affect decision-making. So, starting from the existing inventory lot sizing problems under carbon emission constraints, the purpose of this chapter is to present some open problems.

2 Open Problems: Formulation and Optimization Issues

In this section, we present some open problems in lot sizing under carbon emissions constraints. Before that, we recall the basic dynamic lot sizing model given by Wagner and Whitin [18], which will be denoted as problem P1:

    min  Σ_{t=1}^{T} (K_t y_t + p_t x_t + h_t I_t)                     (1)
    s.t.  I_t = I_{t−1} + x_t − d_t,   t = 1, …, T,                    (2)
          x_t ≤ y_t Σ_{s=t}^{T} d_s,   t = 1, …, T,                    (3)
          I_0 = 0,                                                     (4)
          x_t, I_t ≥ 0,   t = 1, …, T,                                 (5)
          y_t ∈ {0, 1},   t = 1, …, T,                                 (6)

where
T : the planning horizon,
x_t : the quantity produced in period t,
I_t : the inventory at the end of period t,
y_t : a binary variable indicating a setup in period t,
d_t : the demand in period t,
K_t : the setup cost in period t,
p_t : the production cost in period t,
h_t : the holding cost in period t.

It should be noted that constraints (3) ensure that production can only take place if there is a setup in that period. Starting from this model, Absi et al. [1] proposed a multi-sourcing lot sizing problem with carbon emission constraints, which aims at limiting carbon emissions per unit of product supplied with different modes. One mode corresponds to the combination of a production facility and a transportation mode and is characterized by its economic costs and its unitary carbon emissions. This problem (hereafter called P2) is modeled by Absi et al. [1] as:

    min  Σ_{m=1}^{M} Σ_{t=1}^{T} (K_t^m y_t^m + p_t^m x_t^m) + Σ_{t=1}^{T} h_t I_t       (7)
    s.t.  I_t = I_{t−1} + Σ_{m=1}^{M} x_t^m − d_t,   t = 1, …, T,                        (8)
          x_t^m ≤ y_t^m Σ_{s=t}^{T} d_s,   t = 1, …, T,  m = 1, …, M,                    (9)
          I_0 = 0,                                                                       (10)
          x_t^m, I_t ≥ 0,   t = 1, …, T,  m = 1, …, M,                                   (11)
          y_t^m ∈ {0, 1},   t = 1, …, T,  m = 1, …, M,                                   (12)
          Σ_{m=1}^{M} Σ_{t=1}^{T} (e_t^m − E^max) x_t^m ≤ 0,                             (13)

where
m : a specific production mode (i.e., a specific supplier),
e_t^m : the carbon emission related to supplying one unit using mode m in period t,
E^max : the maximum unitary environmental impact allowed in period t.

Relation (13) states that the unitary carbon emission over the whole planning horizon cannot be larger than the maximum unitary environmental impact allowed. The above problem P2 is NP-complete, as proved by Absi et al. [1]. Based on the basic model of Absi et al. [1], the following open problems are possible:

Problem A.1 The above problem can be extended in an integrated supply chain framework in order to jointly optimize supplier and retailer costs. To this end, a kind of cooperation between suppliers could be adopted regarding their carbon emissions, for example, by transferring unused quantities of carbon units between suppliers.

Problem A.2 An inventory routing coordination scheme that takes carbon emissions from warehouse activities into consideration could be of interest. So, P2 could be extended by assuming that a fleet of vehicles transports different products from multiple suppliers to multiple retailers under emission constraints and objectives.

Problem A.3 The demand per period could be a stochastic variable (see Parsopoulos et al. [14]).

Problem A.4 For firms dealing with both manufacturing and remanufacturing activities, decisions on managing the new and remanufactured products should be made under the consideration of carbon emissions. This leads to interesting research questions such as: How will carbon emission constraints influence production and remanufacturing decisions? Can the regulations lead to carbon emission reduction and urge the manufacturer to choose lower-carbon remanufacturing technology?

For the above extensions, the derivation of theoretical results could be useful for their optimization, and heuristics or metaheuristics could be proposed or modified for their solution as well.
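For concreteness, the following is a minimal sketch of P2 as a mixed-integer program written with the open-source PuLP modeler; the builder function and data layout are illustrative assumptions, not part of the formulation in [1].

```python
import pulp

def build_p2(d, K, p, h, e, E_max):
    """Sketch of problem P2 (multi-mode lot sizing with a unitary emission cap).

    d[t]: demand; K[m][t], p[m][t], e[m][t]: setup cost, unit cost and unit
    emission of mode m in period t; h[t]: holding cost; E_max: unitary cap.
    """
    T, M = len(d), len(K)
    big = [sum(d[t:]) for t in range(T)]                   # sum_{s >= t} d_s
    prob = pulp.LpProblem("P2", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(M), range(T)), lowBound=0)
    y = pulp.LpVariable.dicts("y", (range(M), range(T)), cat="Binary")
    I = pulp.LpVariable.dicts("I", range(T), lowBound=0)
    # objective (7): setup + production + holding costs
    prob += (pulp.lpSum(K[m][t] * y[m][t] + p[m][t] * x[m][t]
                        for m in range(M) for t in range(T))
             + pulp.lpSum(h[t] * I[t] for t in range(T)))
    for t in range(T):
        inflow = pulp.lpSum(x[m][t] for m in range(M))
        prev = I[t - 1] if t > 0 else 0                    # I_0 = 0, i.e. (10)
        prob += I[t] == prev + inflow - d[t]               # balance (8)
        for m in range(M):
            prob += x[m][t] <= big[t] * y[m][t]            # setup forcing (9)
    prob += pulp.lpSum((e[m][t] - E_max) * x[m][t]
                       for m in range(M) for t in range(T)) <= 0   # cap (13)
    return prob
```

Calling `build_p2(...).solve()` then hands the model to PuLP's default MIP solver; for the NP-complete cases discussed above, this is of course only practical for moderate instance sizes.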


Recently, Hong et al. [13] proposed a modification of the problem from Absi et al. [1] by assuming emission limitations in each period and considering M = 2, where m = 1 denotes the regular mode and m = 2 the green mode. The resulting problem (hereinafter called P3) is modeled as:

    min  Σ_{m=1}^{2} Σ_{t=1}^{T} (K_t^m y_t^m + p_t^m x_t^m) + Σ_{t=1}^{T} h_t I_t       (14)
    s.t.  I_t = I_{t−1} + Σ_{m=1}^{2} x_t^m − d_t,   t = 1, …, T,                        (15)
          x_t^m ≤ y_t^m Σ_{s=t}^{T} d_s,   t = 1, …, T,  m = 1, 2,                       (16)
          I_0 = 0,                                                                       (17)
          x_t^m, I_t ≥ 0,   t = 1, …, T,  m = 1, 2,                                      (18)
          y_t^m ∈ {0, 1},   t = 1, …, T,                                                 (19)
          Σ_{m=1}^{2} e_t^m x_t^m ≤ E^max,   t = 1, …, T.                                (20)

Based on the Hong et al. [13] model, the following open problems are possible:

Problem B.1 In the above problem, the emissions constraint can be modified once the emissions from both inventory holding and production activities are taken into account.

Problem B.2 A carbon cap can be imposed under a carbon offset mechanism, and a carbon market also exists that allows the purchase of carbon units. Notice that under a carbon offset mechanism, unused carbon units cannot be sold. In this case, the above problem can be modified as:

    min  Σ_{m=1}^{2} Σ_{t=1}^{T} (K_t^m y_t^m + p_t^m x_t^m) + Σ_{t=1}^{T} h_t I_t + α Σ_{t=1}^{T} e_t^+      (21)
    s.t.  I_t = I_{t−1} + Σ_{m=1}^{2} x_t^m − d_t,   t = 1, …, T,                        (22)
          x_t^m ≤ y_t^m Σ_{s=t}^{T} d_s,   t = 1, …, T,  m = 1, 2,                       (23)
          I_0 = 0,                                                                       (24)
          x_t^m, I_t ≥ 0,   t = 1, …, T,  m = 1, 2,                                      (25)
          y_t^m ∈ {0, 1},   t = 1, …, T,                                                 (26)
          Σ_{m=1}^{2} e_t^m x_t^m ≤ E^max + e_t^+,   t = 1, …, T,                        (27)

where
e_t^+ : the quantity of carbon units to buy in period t,
α : the market price of one unit of carbon.

Akbalik and Rapine [2] extend P1 by considering the carbon emissions constraints under the cap-and-trade policy. Besides a limitation on the total carbon emissions over the entire horizon, the cap-and-trade policy allows a firm to buy and to sell carbon units. Therefore, the ensuing problem (hereinafter denoted as P4) is modeled as:

    min  Σ_{t=1}^{T} (K_t y_t + p_t x_t + h_t I_t) + α ( Σ_{t=1}^{T} e_t^+ − Σ_{t=1}^{T} e_t^− )      (28)
    s.t.  I_t = I_{t−1} + x_t − d_t,   t = 1, …, T,                                                   (29)
          x_t ≤ y_t Σ_{s=t}^{T} d_s,   t = 1, …, T,                                                   (30)
          I_0 = 0,                                                                                    (31)
          x_t, I_t ≥ 0,   t = 1, …, T,                                                                (32)
          y_t ∈ {0, 1},   t = 1, …, T,                                                                (33)
          Σ_{t=1}^{T} (K̂_t y_t + p̂_t x_t + ĥ_t I_t) ≤ C + Σ_{t=1}^{T} e_t^+ − Σ_{t=1}^{T} e_t^−,     (34)
          α Σ_{t=1}^{T} e_t^+ ≤ B,                                                                    (35)
          Σ_{t=1}^{T} e_t^+ ≥ 0,   Σ_{t=1}^{T} e_t^− ≥ 0,                                             (36)


where
e_t^− : the quantity of carbon units to sell in period t,
K̂_t : the carbon emission per setup in period t,
p̂_t : the carbon emission per produced unit in period t,
ĥ_t : the carbon emission per stored unit in period t,
C : the global carbon cap (carbon limit) over the entire horizon,
B : the available budget.

Relation (34) represents the cap-and-trade constraint, while (35) represents the budget constraint. Based on the Akbalik and Rapine [2] model, the following open problems are possible:

Problem C.1 Open problems relating to P4 could be in the same directions as for P2 (i.e., Problems A.1–A.4).

Problem C.2 In an integrated supply chain framework, cooperative activities between members of the supply chain in regard to the trade of carbon units could be considered.

Problem C.3 The consideration of the market price of one unit of carbon, α, as a decision-making variable could be another research direction.
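A corresponding sketch of P4, again using PuLP under illustrative naming assumptions, shows how the trading term of (28) and constraints (34)–(35) enter the model; since only horizon totals of the bought/sold quantities appear in those constraints, aggregate trading variables are used here.

```python
import pulp

def p4_model(d, K, p, h, Khat, phat, hhat, C, B, alpha):
    """Sketch of P4: single-mode lot sizing under the cap-and-trade policy.

    d, K, p, h: demand and economic costs; Khat, phat, hhat: emissions per
    setup, per produced unit and per stored unit; C: carbon cap; B: budget;
    alpha: market price of one carbon unit.
    """
    T = len(d)
    prob = pulp.LpProblem("P4", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", range(T), lowBound=0)
    y = pulp.LpVariable.dicts("y", range(T), cat="Binary")
    I = pulp.LpVariable.dicts("I", range(T), lowBound=0)
    ep = pulp.LpVariable("e_plus", lowBound=0)   # carbon units bought
    em = pulp.LpVariable("e_minus", lowBound=0)  # carbon units sold
    # objective (28): economic cost plus net cost of traded carbon units
    prob += (pulp.lpSum(K[t]*y[t] + p[t]*x[t] + h[t]*I[t] for t in range(T))
             + alpha * (ep - em))
    for t in range(T):
        prev = I[t - 1] if t > 0 else 0                   # I_0 = 0, i.e. (31)
        prob += I[t] == prev + x[t] - d[t]                # balance (29)
        prob += x[t] <= sum(d[t:]) * y[t]                 # setup forcing (30)
    prob += pulp.lpSum(Khat[t]*y[t] + phat[t]*x[t] + hhat[t]*I[t]
                       for t in range(T)) <= C + ep - em  # cap-and-trade (34)
    prob += alpha * ep <= B                               # budget (35)
    return prob
```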

3 Conclusions

Companies have increased efforts to manage and reduce their carbon footprint. In this direction, carbon management in the supply chain is an essential capability. In this chapter, we have presented some interesting open problems concerning modeling and optimization issues with inventory lot sizing under carbon emissions constraints. There is a growing need for facing such carbon emission reduction problems in the modern supply chain industry. Since the majority of these problems are NP-hard, further research effort will be required, using either mathematical programming approaches [3, 4, 8, 15] or metaheuristics [9, 16], and exploiting parallel computing techniques [5, 11] in order to tackle such cases.

References

1. Absi, N., Dauzère-Pérès, S., Kedad-Sidhoum, S., Penz, B., Rapine, C.: Lot sizing with carbon emission constraints. Eur. J. Oper. Res. 227(1), 55–61 (2013)
2. Akbalik, A., Rapine, C.: Single-item lot sizing problem with carbon emission under the cap-and-trade policy. In: 2014 International Conference on Control, Decision and Information Technologies (CoDIT), pp. 30–35 (2014)
3. Al Dhaheri, N., Diabat, A.: A mathematical programming approach to reducing carbon dioxide emissions in the petroleum refining industry. In: 2nd International Conference on Engineering Systems Management and Its Applications (ICESMA 2010), pp. 1–5. IEEE, Piscataway (2010)
4. Almansoori, A., Betancourt-Torcat, A.: Design of optimization model for a hydrogen supply chain under emission constraints - a case study of Germany. Energy 111, 414–429 (2016)
5. Antoniadis, N., Sifaleras, A.: A hybrid CPU-GPU parallelization scheme of variable neighborhood search for inventory optimization problems. Electron. Notes Discrete Math. 58, 47–54 (2017)
6. Benjaafar, S., Li, Y., Daskin, M.: Carbon footprint and the management of supply chains: insights from simple models. IEEE Trans. Autom. Sci. Eng. 10(1), 99–116 (2013)
7. CCC: The Fourth Carbon Budget – Reducing Emissions Through the 2020s. Committee on Climate Change, London (December 2010)
8. Cunha, J.O., Konstantaras, I., Melo, R.A., Sifaleras, A.: On multi-item economic lot-sizing with remanufacturing and uncapacitated production. Appl. Math. Model. 43, 678–686 (2017)
9. Devika, K., Jafarian, A., Nourbakhsh, V.: Designing a sustainable closed-loop supply chain network based on triple bottom line approach: a comparison of metaheuristics hybridization techniques. Eur. J. Oper. Res. 235(3), 594–615 (2014)
10. Eskandarpour, M., Dejax, P., Miemczyk, J., Péton, O.: Sustainable supply chain network design: an optimization-oriented review. Omega 54, 11–32 (2015)
11. Eskandarpour, M., Zegordi, S.H., Nikbakhsh, E.: A parallel variable neighborhood search for the multi-objective sustainable post-sales network design problem. Int. J. Prod. Econ. 145(1), 117–131 (2013)
12. Helmrich, M.J.R., Jans, R., van den Heuvel, W., Wagelmans, A.P.: The economic lot-sizing problem with an emission capacity constraint. Eur. J. Oper. Res. 241(1), 50–62 (2015)
13. Hong, Z., Chu, C., Yu, Y.: Dual-mode production planning for manufacturing with emission constraints. Eur. J. Oper. Res. 251(1), 96–106 (2016)
14. Piperagkas, G.S., Konstantaras, I., Skouri, K., Parsopoulos, K.E.: Solving the stochastic dynamic lot-sizing problem through nature-inspired heuristics. Comput. Oper. Res. 39(7), 1555–1565 (2012)
15. Shaw, K., Irfan, M., Shankar, R., Yadav, S.S.: Low carbon chance constrained supply chain network design problem: a Benders decomposition based approach. Comput. Ind. Eng. 98, 483–497 (2016)
16. Sifaleras, A., Konstantaras, I.: General variable neighborhood search for the multi-product dynamic lot sizing problem in closed-loop supply chain. Electron. Notes Discrete Math. 47, 69–76 (2015)
17. UNFCCC: Kyoto Protocol. United Nations Framework Convention on Climate Change. Tech. rep. (1997)
18. Wagner, H., Whitin, T.: Dynamic version of the economic lot size model. Manage. Sci. 5, 89–96 (1958)

Variants and Formulations of the Vehicle Routing Problem

Yannis Marinakis, Magdalene Marinaki, and Athanasios Migdalas

Abstract The vehicle routing problem is one of the most important problems in the fields of supply chain management, logistics, and combinatorial optimization, and, in general, of operational research. Interest in this problem has recently increased from both theoretical and practical aspects. In this chapter, a number of the most important variants of the vehicle routing problem are presented. For some of them, the basic formulation of the problem is also given.

Keywords Vehicle routing problem · Formulations · Variants of the vehicle routing problem

Y. Marinakis () · M. Marinaki
Technical University of Crete, School of Production Engineering and Management, Chania, Greece
e-mail: [email protected]; [email protected]
A. Migdalas
Industrial Logistics, ETS Institute, Lulea University of Technology, Norrbotten, Sweden
Aristotle University of Thessaloniki, Department of Civil Engineering, Thessaloniki, Central Macedonia, Greece
e-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_7

1 Introduction

There are a number of reasons for the large number of studies concerning the vehicle routing problem (VRP). From the practical point of view, the problem is one of the most important in supply chain management and, thus, finding the optimal set of routes helps decision makers to reduce the cost of the supply chain and to increase profit. Also, in the formulation of the problem, managers can simulate the complete network and add any constraints concerning the customers, the vehicles, and the routes, as well as the traffic conditions of the network and the energy consumption of the vehicles. Thus, someone could solve an exhaustive realistic problem and find a near-optimal set of solutions. From the theoretical point of view, a huge number of researchers deal with the solution of one or more variants of the vehicle routing problem. These variants focus on specific constraints, and the researchers try to find an algorithm that gives new best solutions on a specific set of benchmark instances in short computational time, with as few parameters as possible, in order to obtain a more general algorithm. The reason that usually a near-optimal, rather than provably optimal, set of solutions is sought is that the problem, from its origin (it was first introduced by Dantzig and Ramser in 1959 [27]), was proved to be NP-hard even in its simplest version, the capacitated vehicle routing problem, where the only constraints taken into account were the capacity of the vehicles (and, later, the maximum tour length). Thus, it is impossible in large real-life applications to find an optimal solution. A number of heuristic, metaheuristic (mainly), evolutionary, and nature-inspired approaches have been proposed for the solution of the VRP and its variants. Exact algorithms have also been proposed; they are mainly used for the simplest vehicle routing problems (those with as few constraints as possible) and for a small number of nodes. A number of books [7, 44, 47, 66, 83, 85] and survey papers [13, 14, 33, 41, 42, 46, 54, 55, 58, 81] devoted to vehicle routing problems have been published. The vehicle routing problems cannot be viewed as open problems in the sense that some of them have not been solved at all; rather, open problems exist in this field in the sense that new real-life formulations, with more complicated and demanding objective functions and constraints, are continuously proposed and need to be solved.

2 Capacitated Vehicle Routing Problem

The Vehicle Routing Problem (VRP) or the Capacitated Vehicle Routing Problem (CVRP) is often described as the problem in which vehicles based at a central depot are required to visit geographically dispersed customers in order to fulfill known customer demands [13, 14]. The problem is to construct a low-cost, feasible set of routes, one for each vehicle. A route is a sequence of locations that a vehicle must visit along with the indication of the service it provides. The vehicle must start and finish its tour at the depot. The objective is to minimize the total distance traveled [47, 83]. Let G = (V, E) be a graph where V is the vertex set and E is the arc set. The customers are indexed i = 2, …, n, and i = 1 refers to the depot. The vehicles are indexed k = 1, …, K. A customer i has a demand of q_i. The capacity of vehicle k is Q_k. If the vehicles are homogeneous, the capacity for all vehicles is equal and denoted by Q. A demand q_i and a service time st_i are associated with each customer node i. The travel cost and the travel time between customers i and j are c_ij and t_ij, respectively.


The formulations of the capacitated vehicle routing problem are divided into three different categories: vehicle flow models, commodity flow models, and set partitioning models. The vehicle flow models use integer variables associated with each arc [14, 23, 34]. They can easily be used when the cost of the solution can be expressed as the sum of the costs associated with each arc. In the commodity flow models, additional integer variables are associated with the arcs, and they express the flow of the commodities along the paths traveled by the vehicles [83]. Finally, in the set partitioning formulation [83], a collection of circuits with minimum cost is determined which serves each customer once and, possibly, satisfies additional constraints. One of the main drawbacks of the latter models is the huge number of variables. The explicit generation of all feasible constraints is normally impractical, and one has to apply a column generation approach to solve the linear programming relaxation of the set partitioning formulations. In the following, we present the most common mathematical formulations of the VRP.

• Formulation 1 by Fisher [35] and Fisher and Jaikumar [34]. Let

    x_ijk = 1, if vehicle k visits customer j immediately after customer i,        (1)
            0, otherwise,

and

    y_ik = 1, if customer i is visited by vehicle k,                               (2)
           0, otherwise.

The vehicle routing problem can then be stated as:

    min  Σ_{i,j} c_ij Σ_k x_ijk                                                    (3)
    s.t.  Σ_k y_ik = 1  (i = 2, …, n),   Σ_k y_1k = K,                             (4)
          Σ_i q_i y_ik ≤ Q_k,   k = 1, …, K,                                       (5)
          Σ_j x_ijk = Σ_j x_jik = y_ik,   i = 1, …, n,  k = 1, …, K,               (6)
          Σ_{i,j∈S} x_ijk ≤ |S| − 1,   ∀S ⊆ {2, …, n},  k = 1, …, K,               (7)
          y_ik ∈ {0, 1},   i = 1, …, n,  k = 1, …, K,                              (8)
          x_ijk ∈ {0, 1},   i, j = 1, …, n,  k = 1, …, K.                          (9)

Constraints (4) ensure that each customer is assigned to one vehicle, except for the depot, which is visited by all vehicles; constraints (5) are the capacity constraints of the vehicles; constraints (6) ensure that a vehicle which visits a customer also leaves immediately from that customer; and constraints (7) are the subtour elimination constraints inherited from the TSP.

• Formulation 2 by Christofides [23] and Christofides et al. [24]. Let all optimal feasible single routes for each vehicle in the VRP be indexed r = 1, …, r̂, the index set of customers in route r be M_r, and the cost of the route be c_r. Let N_i = {r | i ∈ M_r}. It is assumed that the routes are ordered in descending order of their load K_r = Σ_{i∈M_r} q_i, and r_k is used as the smallest value of r

such that K_r ≤ Q_k. Define r_{K+1} = r̂ + 1. Let

    y_r = 1, if route r is in the optimal VRP solution,                            (10)
          0, otherwise.

The VRP is then stated as:

    min  Σ_{r=1}^{r̂} c_r y_r                                                      (11)
    s.t.  Σ_{r∈N_i} y_r = 1,   i = 2, …, n,                                        (12)
          Σ_{r=1}^{r_{k+1}−1} y_r ≤ k,   r_k ≠ r_{k+1},  k = 1, …, K,              (13)
          Σ_{r=1}^{r̂} y_r = K,                                                    (14)
          y_r ∈ {0, 1},   r = 1, …, r̂.                                            (15)

Constraints (12) ensure that every customer is visited, and constraints (13) and (14) ensure that the K routes chosen in the solution are feasible to operate using K vehicles.

• Formulation 3 by Christofides [23] and Christofides et al. [24]. The previous two formulations are integer programming models, while the next one is based on dynamic programming. Let S_1 = {2, …, n} be the set of customers. For any S_2 ⊆ S_1, let f(k, S_2) be the minimum cost of supplying the customers in S_2 using only vehicles 1, …, k, let c(S_2) be the minimum cost of the TSP defined by the depot and the customers in S_2, and let q(S_2) = Σ_{i∈S_2} q_i. The dynamic programming recursion is initialized for k = 1 by f(1, S_2) = c(S_2) and defined for k ≥ 2 by:

    f(k, S_2) = min_{S⊂S_2} { f(k − 1, S_2 − S) + c(S) }                           (16)
    s.t.  q(S_2) − Σ_{m=1}^{k−1} Q_m ≤ q(S) ≤ Q_k,                                 (17)
          (1/(K − k)) q(S_1 − S_2) ≤ q(S) ≤ (1/k) q(S_2).                          (18)

Here, k = 2, …, K, except for the left-hand side of the last constraint, which does not apply when k = K (since K − k = 0 there). The sets S_2 ⊆ S_1 to be considered must satisfy

    q(S_1) − Σ_{m=k+1}^{K} Q_m ≤ q(S_2) ≤ Σ_{m=1}^{k} Q_m.                         (19)

• Formulation 4 by Golden [14] is stated as follows:

    min  Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{K} c_ij x_ij^k                                             (20)
    s.t.  Σ_{i=1}^{n} Σ_{k=1}^{K} x_ij^k = 1,   j = 2, …, n,                                         (21)
          Σ_{j=1}^{n} Σ_{k=1}^{K} x_ij^k = 1,   i = 2, …, n,                                         (22)
          Σ_{i=1}^{n} x_{i i_1}^k − Σ_{j=1}^{n} x_{i_1 j}^k = 0,   i_1 = 1, …, n,  k = 1, …, K,      (23)
          Σ_{i=1}^{n} q_i Σ_{j=1}^{n} x_ij^k ≤ Q_k,   k = 1, …, K,                                   (24)
          Σ_{i=1}^{n} t_i^k Σ_{j=1}^{n} x_ij^k + Σ_{i=1}^{n} Σ_{j=1}^{n} t_ij^k x_ij^k ≤ Tm_k,   k = 1, …, K,   (25)
          Σ_{j=2}^{n} x_{1j}^k ≤ 1,   k = 1, …, K,                                                   (26)
          Σ_{i=2}^{n} x_{i1}^k ≤ 1,   k = 1, …, K,                                                   (27)
          X ∈ S,                                                                                     (28)
          x_ij^k = 0 or 1   for all i, j, k,                                                         (29)

97

capacity. Let P C be the total amount of products of a vehicle the moment that it begun from the depot then yj i = P C − yij . The roles are reversed if the vehicle travels from j to i. For any route of a feasible solution, the flow variables define two directed paths, one path from 0 to n + 1, whose variables represent the vehicle load, and another path from node n + 1 to node 0, whose variables represent the residual capacity on the vehicle. Define:  1, if the arc (i, j ) is in the solution xij = (30) 0, otherwise, then, the problem may be stated as follows: min



(31)

cij xij

i,j

s.t.



(yj i − yij ) = 2qi ,

∀i ∈ V \ {0, n + 1} (32)

j ∈V



y0j = q(V \ {0, n + 1})

(33)

j ∈V \{0,n+1}



yj 0 = K · P C − q(V \ {0, n + 1})

(34)

j ∈V \{0,n+1}



y(n+1)j = K · P C

(35)

j ∈V \{0,n+1}

yj i + yij = P Cxij ,  (xj i + xij ) = 2,

∀(i, j ) ∈ E 

(36)

∀i ∈ V \ {0, n + 1} (37)

j ∈V

yij ≥ 0, xij ∈ {0, 1},

∀(i, j ) ∈ E 

(38)



(39)

∀(i, j ) ∈ E .

The flow conservation constraints (32) impose that the difference between inflow and outflow at each node i is equal to twice the demand of i. Constraints (33)–(35) impose the correct values for the commodity flow variables incident to the depot nodes. Finally, constraints (36) and (37) impose the relation between vehicle flow and commodity flow variables and the node degree, respectively.

98

Y. Marinakis et al.

3 Basic Variants of the Vehicle Routing Problem 3.1 Open Vehicle Routing Problem The Open Vehicle Routing Problem (OVRP) is the variant of the classic vehicle routing problem where the vehicles do not return to the depot after the service of the customers [76]. The main real-life application of the open vehicle routing problem concerns the case where either the company does not have vehicles at all or the vehicles owned by the company are not enough in order to use them for the distribution of the products to the customers and, thus, a number of vehicles have to be hired by the company in order to realize the distribution of the products [16]. The OVRP can be stated as follows: Let G = (V , E) be a graph where V = {j0 , j1 , j2 , · · · jn } is the vertex set (ji = j0 refers to the depot and the customers are indexed ji = j1 , · · · , jn ) and E = {(jl , jl1 ) : jl , jl1 ∈ V } is the edge set. Each customer must be assigned to exactly one of the k vehicles, and the total size of deliveries for customers assigned to each vehicle must not exceed the vehicle capacity (Qk ). If the vehicles are homogeneous, the capacity for all vehicles is equal and denoted by Q. Each vehicle has the same traveling cost L. A demand qjl and a service time stjl are associated with each customer node jl . The travel cost between customers jl and jl1 is costjl jl1 . The purpose is to construct a low-cost, feasible set of routes—one for each vehicle. A route is a sequence of locations that a vehicle must visit along with the indication of the service it provides. Each vehicle starts at the depot but it does not return to the depot. The total traveling cost of each route cannot exceed the restriction L [16]. A formulation of the problem is the following. The main difference of the formulation of a CVRP is that each vehicle departs from the depot but it never returns back to the depot. Thus, if the depot is denoted by the node 1, then: min

n  K n  

cij xijk

(40)

i=1 j =1 k=1

s.t. K n  

xijk = 1,

j = 2, · · · , n

(41)

xijk = 1,

i = 2, · · · , n

(42)

k = 1, · · · , K

(43)

i=1 k=1 K n   j =2 k=1 n  i=1

xiik 1 −

n  j =2

xik1 j = 0,

i1 = 1, · · · , n

Variants and Formulations of the Vehicle Routing Problem n  j =2 n  j =2

tik

n  i=1

qi

n 

xijk ≤ Qk ,

99

k = 1, · · · , K

(44)

tijk xijk ≤ T mk , k = 1, · · · , K

(45)

i=1

xijk +

n n   j =2 i=1

n 

k x1j ≤ 1,

k = 1, · · · , K

(46)

k xi1 = 0,

k = 1, · · · , K

(47)

j =2 n  i=2

X∈S xijk = 0 or 1,

(48) for all i, j, k

(49)

where the notation is the same with the one used in the fourth formulation of the CVRP presented previously. The objective function (40) states that the total distance is to be minimized. Equations (41) and (42) ensure that each demand node is served by exactly one vehicle. Route continuity is represented by (43), i.e., if a vehicle enters in a demand node, it must exit from that node. Equations (44) are the vehicle capacity constraints and (45) are the total elapsed route time constraints. Equations (46) and (47) guarantee that vehicle availability is not exceeded.

3.2 Vehicle Routing Problem with Time Windows

The Vehicle Routing Problem with Time Windows (VRPTW) specifies that customers must be serviced within given time windows. Vehicles may also be associated with time windows within which they are allowed to operate. In most publications concerning the vehicle routing problem with time windows, the objective is to design least-cost routes for a fleet of identical capacitated vehicles servicing geographically scattered customers within pre-specified time windows. Each customer must be serviced once by a vehicle, and the total demand of the customers serviced by a vehicle must not exceed the vehicle's capacity. Moreover, each customer must be serviced within its specified time window. If a vehicle arrives at a customer earlier than the lower bound of the customer's time window, the vehicle must wait until service is possible. The depot also has a time window, and all the vehicles must return by the closing time of the depot. The objective is to minimize, first, the number of tours or routes and, then, for the same number of routes, the total traveled distance [44, 47, 66, 78, 79, 83].

The VRPTW is defined on a network G = (V, E), where the depot is represented by the two nodes 0 and n + 1. All feasible vehicle routes correspond to paths in G that start at node 0 and end at node n + 1. A time window is also associated with nodes 0 and n + 1, i.e., {μ_0, ν_0} = {μ_{n+1}, ν_{n+1}} = {E, L}, where E and L represent the earliest possible departure from the depot and the latest possible arrival at the depot, respectively. Moreover, zero demands and service times are defined for these two nodes, that is, q_0 = q_{n+1} = st_0 = st_{n+1} = 0. In the following nonlinear mixed integer formulation, there are two types of variables [25]:

    x_ijk = 1, if vehicle k visits customer j immediately after customer i,        (50)
            0, otherwise,

and the time variables w_ik, which specify the start of service at node i when serviced by vehicle k. The problem can then be stated as follows:

    min  Σ_{i,j} Σ_k c_ij x_ijk                                                    (51)
    s.t.  Σ_{k∈K} Σ_{j∈V} x_ijk = 1,   ∀i ∈ V,                                     (52)
          Σ_{j∈V} x_0jk = 1,   ∀k ∈ K,                                             (53)
          Σ_{i∈V−{j}} x_ijk − Σ_{i∈V−{j}} x_jik = 0,   ∀j ∈ V,  k ∈ K,             (54)
          Σ_{i∈V−{n+1}} x_{i,n+1,k} = 1,   ∀k ∈ K,                                 (55)
          x_ijk (w_ik + st_i + t_ij − w_jk) ≤ 0,   ∀k ∈ K,  (i, j) ∈ E,            (56)
          μ_i Σ_{j∈V} x_ijk ≤ w_ik ≤ ν_i Σ_{j∈V} x_ijk,   ∀i ∈ V,  k ∈ K,          (57)
          E ≤ w_ik ≤ L,   ∀i ∈ {0, n + 1},  k ∈ K,                                 (58)
          Σ_{i∈N} q_i Σ_{j∈V} x_ijk ≤ Q,   ∀k ∈ K,                                 (59)
          x_ijk ≥ 0,   ∀k ∈ K,  (i, j) ∈ E,                                        (60)
          x_ijk ∈ {0, 1},   ∀k ∈ K,  (i, j) ∈ E.                                   (61)

The objective function (51) expresses the total cost. Constraints (52) restrict the assignment of each customer to exactly one vehicle route, constraints (53)–(55) characterize the flow on the path to be followed by vehicle k, while constraints (56), (58), and (59) guarantee schedule feasibility with respect to time considerations and capacity aspects, respectively. Note that, for a given k, constraints (57) force w_ik = 0 whenever customer i is not visited by vehicle k.
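The waiting rule described above (a vehicle arriving before μ_i waits until the window opens) suggests a simple feasibility check for a candidate route; the following sketch assumes travel times t[i][j], service times st[i], windows tw[i] = (μ_i, ν_i), and depot bounds E and L as in the formulation, with the depot indexed by 0.

```python
def route_time_feasible(route, t, st, tw, E, L):
    """Check time-window feasibility of one VRPTW route (sketch).

    route: customer sequence, excluding the depot nodes 0 and n + 1.
    A vehicle arriving early waits until the window opens; arriving after
    the window closes makes the route infeasible.
    """
    time, prev = E, 0                      # leave the depot at its opening time
    for i in route:
        time = max(time + t[prev][i], tw[i][0])   # travel, then wait if early
        if time > tw[i][1]:
            return False                   # window already closed
        time += st[i]                      # perform the service
        prev = i
    return time + t[prev][0] <= L          # return to depot by closing time
```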

3.3 Multi-Depot Vehicle Routing Problem

The Multi-Depot Vehicle Routing Problem (MDVRP) is the variant of the vehicle routing problem in which more than one depot is used to serve the customers. The MDVRP is one of the most challenging variants of the VRP, with a very large number of published papers according to [63]. In this problem, either each customer may be clustered and served from only one depot, or the customers may be served from any of the depots using the available fleet of vehicles. In the classic formulation of the problem proposed in [73], each vehicle route starts and ends at the same depot, each customer is serviced exactly once by a vehicle, the total demand of each route does not exceed the vehicle capacity, and the total cost of the distribution is minimized.

4 Pickup and Delivery

The pickup and delivery problem is a version of the vehicle routing problem that has gained much attention in recent years. The problem can be divided into pickup and delivery of goods and of people. Different formulations also exist: as a problem with one origin and destination, as a problem with different origins and destinations, as a problem with a single vehicle or with more than one vehicle, as a problem where all the deliveries are made first and then all the pickups, or where they are mixed, and so on [9, 26, 29, 48]. The best-known formulation of the pickup and delivery problem is the one denoted as the 1-M-1 (one-to-many-to-one) pickup and delivery problem, in which deliveries and pickups concern two distinct sets of commodities: some are shipped from the depot to the customers, and others are picked up at the customers and delivered to the depot [48].

4.1 One Commodity Pickup and Delivery Problem

The One-Commodity Pickup and Delivery Vehicle Routing Problem (1-PDVRP) can be defined as follows. A complete graph G = (V, A) is given. There is a fleet K of identical vehicles, each having capacity Q. Node 0 is the depot. Each customer i has a demand q_i, where q_i > 0 means that the customer requires a pickup and q_i < 0 that it requires a delivery. Vehicles can leave the depot either empty or with some load. Binary variables x_ijk take value 1 if arc (i, j) is used by vehicle k ∈ K and 0 otherwise, and nonnegative variables f_ij indicate the load transported on arc (i, j). The formulation of the problem is the following [9]:

    min  Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{K} c_ij x_ijk                            (62)
    s.t.  Σ_{j∈V} Σ_{k∈K} x_ijk = 1,   ∀i ∈ V \ {0},                               (63)
          Σ_{j∈V} x_ijk − Σ_{j∈V} x_jik = 0,   ∀k ∈ K,  i ∈ V,                     (64)
          0 ≤ f_ij ≤ Q Σ_{k∈K} x_ijk,   ∀i, j ∈ V,                                 (65)
          Σ_{j∈V} f_ij − Σ_{j∈V} f_ji = q_i,   ∀i ∈ V \ {0},                       (66)
          Σ_{j∈S} Σ_{i∈S} x_ijk ≤ |S| − 1,   S ⊆ V \ {0},  S ≠ ∅,  k ∈ K,          (67)
          f_ij ≥ 0,   ∀i, j ∈ V,                                                   (68)
          x_ijk = 0 or 1,   ∀k ∈ K,  i, j ∈ V,                                     (69)

where the objective function (62) is the minimization of the routing cost. Constraints (63) ensure that each customer is visited exactly once, and constraints (64) ensure that a vehicle which enters a node also leaves it, so that each vehicle finishes its tour at the depot. Constraints (65) are the capacity constraints of the vehicles. Constraints (66) are the flow conservation constraints. Finally, constraints (67) are the subtour elimination constraints.

4.2 Vehicle Routing Problem with Simultaneous Pickup and Delivery

In the Vehicle Routing Problem with Simultaneous Pickup and Delivery (VRPSPD), the customers not only require the delivery of products but also need a simultaneous pickup of products. Thus, in this problem [83], each customer i has two different quantities: the quantity d_i that represents the demand of homogeneous commodities to be delivered, and the quantity p_i that represents the quantity to be picked up from customer i. The nodes that play the role of the origin (i_O) of the vehicles and the destination (i_D) of the vehicles should be defined. It is assumed that the delivery is performed before the pickup and that the vehicle load should never be negative or larger than the vehicle capacity. Usually, the origin and destination nodes are the same (the depot), and all the other constraints are the same as the constraints of the CVRP [44, 47, 66, 83]. The problem is to construct K simple routes with minimum cost, where each route visits the depot node, each vehicle is associated with only one route, and each customer node is visited by exactly one vehicle. The current load of the vehicle along the route must be nonnegative and must never exceed the vehicle capacity Q.

A formulation of the problem is presented in the following [62]. In this formulation, x_ijk takes value 1 if the arc (i, j) is traversed by vehicle k and 0 otherwise. The flow variables y_ij and z_ij are the amounts of pickup and delivery commodities traveling on arc (i, j), respectively [9, 26, 29]:

    min  Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{K} c_ij x_ijk                            (70)
    s.t.  Σ_{j∈V} Σ_{k∈K} x_ijk = 1,   ∀i ∈ V \ {0},                               (71)
          Σ_{j∈V} x_ijk − Σ_{j∈V} x_jik = 0,   ∀k ∈ K,  i ∈ V,                     (72)
          Σ_{j∈V} x_0jk ≤ 1,   ∀k ∈ K,                                             (73)
          y_ij + z_ij ≤ Q Σ_{k∈K} x_ijk,   ∀i, j ∈ V,                              (74)
          Σ_{j∈V} y_ij − Σ_{j∈V} y_ji = p_i,   ∀i ∈ V \ {0},                       (75)
          Σ_{j∈V} z_ji − Σ_{j∈V} z_ij = d_i,   ∀i ∈ V \ {0},                       (76)
          y_ij, z_ij ≥ 0,   ∀i, j ∈ V,                                             (77)
          x_ijk = 0 or 1,   ∀k ∈ K,  i, j ∈ V,                                     (78)

where the objective function (70) is the minimization of the routing cost. Constraints (71) ensure that each customer is visited exactly once, and constraints (72) are the route-continuity constraints. Constraints (73) state that each vehicle is used at most once, while constraints (74) are the vehicle capacity constraints. Finally, constraints (75) and (76) are the flow conservation constraints for the pickup and delivery commodities, respectively.
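The capacity logic behind constraints (74)-(76) can be checked on a single route in a few lines of code. The following sketch, with illustrative data, computes the running load of a vehicle that starts loaded with all its deliveries and, at each customer, drops d_i and simultaneously picks up p_i.

```python
# A small helper illustrating the VRPSPD capacity logic: the route is
# feasible only if the running load never exceeds Q. The route, d, and
# p values below are illustrative placeholders.
def vrpspd_route_feasible(route, d, p, Q):
    load = sum(d[i] for i in route)      # initial load = all deliveries
    if load > Q:
        return False
    for i in route:
        load = load - d[i] + p[i]        # simultaneous drop and pickup
        if load > Q:
            return False
    return True

# Example: three customers, capacity 10
print(vrpspd_route_feasible([1, 2, 3], {1: 3, 2: 4, 3: 2},
                            {1: 2, 2: 5, 3: 1}, Q=10))
```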

104

Y. Marinakis et al.

4.3 Vehicle Routing Problem with Backhauls

In this problem, the vehicles are not only required to deliver goods to (linehaul) customers but also to pick up goods at (backhaul) customer locations. The customers are partitioned into two subsets: the n_1 linehaul customers, each requiring a given quantity of product to be delivered from the depot, and the n_2 backhaul customers, from each of which a given quantity of inbound product must be picked up and transported to the depot. The objective of the problem is to find a set of vehicle routes that serves all the linehaul and backhaul customers such that [19, 20, 84]: each route visits the depot node; each vehicle is associated with only one route and each customer node is visited by exactly one vehicle; for each route, the total loads associated with linehaul and backhaul customers do not exceed, separately, the vehicle capacity, and the distance of each route does not exceed the maximum distance that the associated vehicle can travel; on each route, the backhaul customers, if any, are served after all linehaul customers; and the total distance traveled by the fleet is minimized [51].
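These route-level conditions are easy to verify on a candidate route. The sketch below, on illustrative data, checks the two constraints specific to the VRPB: the precedence of linehauls over backhauls and the two separate capacity limits.

```python
# A sketch of the VRPB feasibility conditions described above: all
# linehauls must precede all backhauls, and the linehaul and backhaul
# loads each fit the capacity Q separately. Inputs are illustrative.
def vrpb_route_feasible(route, linehaul_demand, backhaul_demand, Q):
    kinds = ["L" if i in linehaul_demand else "B" for i in route]
    if "L" in kinds and "B" in kinds:
        # the first backhaul must come after the last linehaul
        if kinds.index("B") < len(kinds) - kinds[::-1].index("L"):
            return False
    line_load = sum(linehaul_demand.get(i, 0) for i in route)
    back_load = sum(backhaul_demand.get(i, 0) for i in route)
    return line_load <= Q and back_load <= Q

print(vrpb_route_feasible([1, 2, 3], {1: 4, 2: 3}, {3: 5}, Q=8))
```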

4.4 Dial-A-Ride Vehicle Routing Problem

In this problem, a vehicle picks up customers at their places of origin and takes them to their destinations, without exceeding the vehicle capacity and while respecting time window constraints at both the origins and the destinations. The objective is to find the itinerary that minimizes the total distance traveled [70]. This is one of the classic pickup and delivery problems, focusing on people rather than goods.

5 Other Variants of the Vehicle Routing Problem

5.1 Split Delivery Vehicle Routing Problem

In the classic vehicle routing problem, a customer is not allowed to be serviced by more than one vehicle, but in the Split Delivery Vehicle Routing Problem (SDVRP) a customer may be serviced by several vehicles if this reduces the overall cost [32]. Thus, the SDVRP addresses the situation where a fleet of homogeneous vehicles must serve a set of customers whose demands can take any integer value, possibly greater than the vehicle capacity [51]. A customer may need to be served more than once (multiple customer visits), contrary to what is usually assumed in vehicle routing problems. Each time a vehicle visits a customer, it collects an integer quantity. No constraint on the number of available vehicles is considered. Each vehicle starts from and returns to the depot at the end of each tour. The objective is to minimize the total distance traveled by the vehicles to serve all the customers [3].

5.2 Heterogeneous Fleet Vehicle Routing Problem

In the Heterogeneous Fleet Vehicle Routing Problem (HFVRP), there is a set of vehicles, each having a possibly different capacity and cost [6]. A set of n customers is given, each one having a demand q_i. There is also a fleet of P different vehicle types. Each vehicle type p = 1, ..., P has a capacity Q_p, a fixed cost FC_p, and a traveling cost c_{ij}^p. Many specific variants of the problem have been studied in the literature [6, 51]. The main differences between them are: the vehicle fleet may be either limited or unlimited, the fixed cost of the vehicles may be either considered or ignored, and the routing costs on the arcs may be vehicle-dependent or vehicle-independent.

5.3 Multi-Trip Vehicle Routing Problem

The Multi-Trip Vehicle Routing Problem extends the CVRP with a number of additional constraints. The most important of them is that a vehicle can make more than one trip.

5.4 Routing and Scheduling with Full Loads and Time Windows

In the problem of Routing and Scheduling with Full Loads and Time Windows, a set of demands is specified for a number of origin-destination pairs. Each demand is a full trailer, which must be loaded onto a tractor at an origin and unloaded at a destination. These stops must satisfy pre-specified time window constraints, and the goal is to design routes and schedules for the fleet of tractors. In most cases, the objective is to minimize the total transportation costs or the number of tractors used.

5.5 Multi-Vehicle Covering Tour Problem

In the Multi-Vehicle Covering Tour Problem [49], two sets of locations are given. The first set, V_1, consists of potential locations at which some vehicles may stop, and the second set, V_2, consists of locations not actually on the vehicle routes but within an acceptable distance of a vehicle route. The problem is to construct several vehicle routes through a subset of V_1, all starting and ending at the same locations, subject to some side constraints, having minimum total length, and such that every location of V_2 is within a reasonable distance of a route.

5.6 Vehicle Routing Problem with Satellite Facilities

An important aspect of the vehicle routing problem that has been largely overlooked is the use of satellite facilities to replenish vehicles during a route [8]. When possible, satellite replenishment allows the drivers to continue making deliveries until the close of their shift without necessarily returning to the central depot. This situation arises primarily in the distribution of fuel and certain retail items.

5.7 Vehicle Routing Problem with Trailers

The Vehicle Routing Problem with Trailers, introduced by Gerdessen [43], is concerned with the case where a vehicle consists of a truck and a trailer, both of which can carry goods. The use of a truck and trailer may cause problems when serving customers who are located in the center of a city or customers who have little space nearby for maneuvering the vehicle. Time and trouble could be saved if these customers were served by the trucks only.

5.8 Multiple Commodities Vehicle Routing Problem

In some vehicle routing problems, the vehicles are divided into compartments so that different commodities are stored in segregated compartments. Each customer may require specified quantities of different types of commodity [23]. The problem is then called the Multiple Commodities Vehicle Routing Problem.

5.9 Periodic Vehicle Routing Problem

One of the most interesting generalizations of the capacitated vehicle routing problem is the Periodic Vehicle Routing Problem (PVRP). In this problem, vehicle routes must be constructed over multiple days: during each day within the planning period, a fleet of capacitated vehicles travels along routes that begin and end at a single depot [36]. The objective of the PVRP is to find a set of tours for each vehicle that minimizes the total travel cost while satisfying vehicle capacity and visit requirements [2, 36]. The PVRP requires three types of decisions: first, the selection of visiting patterns for each customer; then, the assignment of the chosen day-customer combinations to tours; and, finally, the routing of the vehicles for each day of the planning horizon [51].

5.10 Rich Vehicle Routing Problems

In recent years, a number of real-life applications of the vehicle routing problem have appeared in which more than one variant is combined in a single problem, which increases the complexity of the problem significantly. These problems are usually referred to as Rich Vehicle Routing Problems (RVRP). A comprehensive taxonomy of what could be considered a rich vehicle routing problem can be found in [53]. Following the definition the authors gave in [53], a number of the variants of the VRP presented and analyzed previously could be considered RVRPs. Since these variants belong to different categories of VRPs, they are analyzed separately.

5.11 Vehicle Scheduling Problem

Vehicle Scheduling Problems can be thought of as routing problems with additional constraints imposed by time periods during which various activities may be carried out [14]. Three constraints commonly determine the complexity of vehicle scheduling problems:

• The length of time that a vehicle may be in operation before it must return to the depot for service or refueling.
• The fact that certain tasks can only be carried out by certain vehicle types.
• The presence of a number of depots where vehicles may be housed.

5.12 Green Vehicle Routing Problem

In recent years, there has been significant growth in publications on Green Vehicle Routing Problems (GVRP). In these problems, the main target is the reduction of fuel and energy consumption or the reduction of the pollution caused by CO2 emissions. A number of publications with different objectives and different constraints have appeared. In these publications, the authors do not transform the CVRP into a GVRP; instead, they start from whichever variant of the VRP fits the problem they have to solve [56, 82].


Xiao et al. [87] proposed a new formulation for the minimization of the fuel consumption of a vehicle, the Fuel Consumption Vehicle Routing Problem (FCVRP). In their formulation, they used the traveled distance and the load of the vehicle, and they added the Fuel Consumption Rate (FCR), measured in liters per km. Considering that FC is the fuel consumption, Q_0 is the weight of an empty vehicle, Q_1 is the weight of the cargo, and Q is the maximum weight of load that the vehicle can carry (the capacity of the vehicle), the FCR can be calculated using Equation (79):

FCR(Q_1) = FCR_0 + \frac{FCR^* - FCR_0}{Q} Q_1    (79)

where FCR_0 is the value of the FCR of an empty vehicle and FCR^* is the value of the FCR of a fully loaded vehicle. Also, in order to calculate the FCR from a node i to a node j, the following equation is used:

FCR_{ij} = FCR_0 + \frac{FCR^* - FCR_0}{Q} y_{ij}, \quad \forall (i, j) \in A    (80)

where y_{ij} is the weight of the load carried from node i to node j. The fuel consumption FC from a node i to a node j, measured in volume units, is calculated using Equation (81):

FC_{ij} = FCR_{ij} d_{ij} x_{ij}, \quad \forall (i, j) \in A    (81)

where x_{ij} = 1 if the arc (i, j) is on the tour, and zero otherwise. Also, the fuel cost Fcost is calculated using Equation (82):

Fcost_{ij} = c_0 FCR_{ij} d_{ij} x_{ij}, \quad \forall (i, j) \in A    (82)

where c_0 is the unit fuel cost. Considering that there are \eta customers with node 0 as the depot (V = \{0, \dots, \eta\} the set of nodes and A = \{(i, j) : i, j \in V, i \neq j\} the set of arcs), that each customer i has demand D_i, and that there are m homogeneous vehicles with finite capacity equal to Q and fixed cost equal to F, the formulation of the problem is the following:

\min \sum_{j=1}^{\eta} F x_{0j} + \sum_{i=0}^{\eta} \sum_{j=0}^{\eta} c_0 d_{ij} \Big( FCR_0 x_{ij} + \frac{FCR^* - FCR_0}{Q} y_{ij} \Big)    (83)

s.t.

\sum_{j=0}^{\eta} x_{ij} = 1, \quad i = 1, \dots, \eta    (84)

\sum_{j=0}^{\eta} x_{ij} - \sum_{j=0}^{\eta} x_{ji} = 0, \quad i = 0, \dots, \eta    (85)

\sum_{j=0, j \neq i}^{\eta} y_{ji} - \sum_{j=0, j \neq i}^{\eta} y_{ij} = D_i, \quad i = 1, \dots, \eta    (86)

y_{ij} \le Q x_{ij}, \quad i, j = 0, \dots, \eta    (87)

x_{ij} \in \{0, 1\}, \quad i, j = 0, \dots, \eta    (88)

The first part of the objective function calculates the sum of the fixed costs of the vehicles, and the second part the sum of the fuel costs of all vehicles. Constraints (84) state that each customer must be visited by exactly one vehicle. Constraints (85) state that any vehicle that arrives at a node must also leave that node. Constraints (86) track the reduction of the vehicle's cargo after it visits a customer and satisfies the customer's demand; they also prohibit illegal subtours. Constraints (87) limit the maximum load carried by the vehicle, and the integrality constraints are given in (88).
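To make the cost structure concrete, the following sketch evaluates Equations (80) and (82) along a single depot-to-depot route; all numbers are illustrative placeholders, not data from the text.

```python
# A minimal sketch evaluating the fuel cost (82) of one route under the
# linear FCR model (79)-(80). Demands D give the load profile y along
# the route; d is a dict of arc distances.
def route_fuel_cost(route, D, d, Q, FCR0, FCRstar, c0):
    load = sum(D[i] for i in route[1:-1])    # cargo leaving the depot
    cost = 0.0
    for i, j in zip(route, route[1:]):
        fcr_ij = FCR0 + (FCRstar - FCR0) / Q * load   # Eq. (80)
        cost += c0 * fcr_ij * d[i, j]                 # Eq. (82)
        load -= D.get(j, 0)                  # drop off at the next stop
    return cost

d = {(0, 1): 5.0, (1, 2): 3.0, (2, 0): 6.0}
print(route_fuel_cost([0, 1, 2, 0], {1: 2.0, 2: 1.0}, d,
                      Q=10.0, FCR0=0.2, FCRstar=0.4, c0=1.5))
```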

5.13 Emergency Logistics

A new variant of the vehicle routing problem is the Evacuation Vehicle Routing Problem (EVRP) [89]. The EVRP is the process of moving vehicles from a vehicle location to a potentially flooded area (PFA) and from the PFA to a relief center, using a number of capacitated vehicles [89]. The motivation for this problem came from flood evacuations, where it is very difficult to find the optimal evacuation route for transporting people to safer places because of the difficulty of gathering the required information about the conditions on the routes.

5.14 School Bus Routing and Scheduling Problem

In the School Bus Routing and Scheduling Problem [13, 15], there are a number of schools, each having been assigned a set of bus stops, with a given number of students assigned to each stop, and time windows for the pickup and the delivery of the students. The problem is to minimize the number of buses used or the total transportation costs while serving all the students and satisfying all the time windows.


5.15 Vehicle Routing Problem with Profits

In Vehicle Routing Problems with Profits (VRPPs), the set of customers to be served is not given in advance [4]. Two different decisions have to be taken: (1) which customers to serve, and (2) how to cluster the customers to be served into different routes (if more than one) and order the visits in each route [4]. The basic characteristic is that a profit is associated with each customer, making the customer more or less attractive. Thus, any route or set of routes, starting and ending at a given depot, can be measured both in terms of cost and in terms of profit [4]. There is a large number of applications that correspond to a vehicle routing problem with profits [4].

5.16 Team Orienteering Problem

The Team Orienteering Problem (TOP) is a variant of the vehicle routing problem with profits. In the TOP, a set of locations is given, each with a score. The goal is to determine a fixed number of routes, limited in length, that visit some locations and maximize the sum of the collected scores [86]. The objective of the TOP is to construct a certain number of paths, starting at an origin and ending at a destination, that maximize the total profit without violating pre-defined limits [77]. The TOP can be described using a complete graph G = (V, A), where V = \{1, \dots, N\} is the set of nodes and A = \{(i, j) | i, j \in V\} is the set of arcs connecting the nodes of V. Every node i in V is associated with a score s_i, the starting node 1 and the ending node N have score equal to zero, and the required traveling time t_{ij} is given for every pair (i, j). A limited number M of vehicle routes have to be formed. Each route is a feasible path with respect to a pre-specified traveling duration limit Tmax. Each path should start from the initial node 1 and end at the final node N; each node is visited at most once and belongs to exactly one path. The objective is to identify the set of M paths that maximizes the total score collected at the visited nodes. Using the following set of decision variables, the problem can be formulated as an integer program [52]:

• y_{id} = 1 if node i (i ∈ 1, ..., N) belongs to vehicle route d (d ∈ 1, ..., M), y_{id} = 0 otherwise.
• x_{ijd} = 1 if vehicle route d (d ∈ 1, ..., M) includes edge (i, j) (i, j ∈ 1, ..., N), x_{ijd} = 0 otherwise.

The problem's symmetry ensures that t_{ij} = t_{ji}, and thus only the x_{ijd} with i < j are defined. The mathematical formulation of the TOP is:

z = \max \sum_{i=2}^{N-1} \sum_{d=1}^{M} s_i y_{id}    (89)

subject to:

\sum_{j=2}^{N} \sum_{d=1}^{M} x_{1jd} = \sum_{i=1}^{N-1} \sum_{d=1}^{M} x_{iNd} = M    (90)

\sum_{i<k} x_{ikd} + \sum_{j>k} x_{kjd} = 2 y_{kd}, \quad \forall k = 2, \dots, N-1; \ \forall d = 1, \dots, M    (91)

\sum_{d=1}^{M} y_{id} \le 1, \quad \forall i = 2, \dots, N-1    (92)

\sum_{i=1}^{N-1} \sum_{j>i} t_{ij} x_{ijd} \le Tmax, \quad \forall d = 1, \dots, M    (93)

\sum_{(i,j) \in U, i<j} x_{ijd} \le |U| - 1, \quad \forall U \subset V \setminus \{1, N\}; \ 2 \le |U| \le N-2; \ \forall d = 1, \dots, M    (94)
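A simple way to get a feeling for the TOP trade-off between scores and the Tmax budget is a greedy construction. The toy sketch below, on illustrative data, repeatedly extends each route with the unvisited node of best score-per-extra-time ratio; it is only an illustration, not an algorithm from the cited references.

```python
# A toy greedy sketch for the TOP setting described above. Each route
# starts at node 1 and ends at node N; a node is added only if the
# route can still reach N within Tmax afterwards.
def greedy_top(scores, t, N, M, Tmax):
    routes = [[1] for _ in range(M)]
    used = [0.0] * M
    free = set(range(2, N))               # nodes 2..N-1 carry scores
    for d in range(M):
        while True:
            last = routes[d][-1]
            best, best_ratio = None, 0.0
            for i in free:
                extra = t[last][i] + t[i][N]   # must still reach N
                if used[d] + extra <= Tmax and scores[i] / extra > best_ratio:
                    best, best_ratio = i, scores[i] / extra
            if best is None:
                break
            used[d] += t[routes[d][-1]][best]
            routes[d].append(best)
            free.discard(best)
        used[d] += t[routes[d][-1]][N]
        routes[d].append(N)               # close the path at node N
    return routes
```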

6.1 Vehicle Routing Problem with Stochastic Demands

In the Vehicle Routing Problem with Stochastic Demands, the customers' demands are stochastic variables, and the route is operated with a preventive restocking strategy: after serving a customer, the vehicle either proceeds to the next customer or returns to the depot to restock, whichever has the smaller expected cost [88]. The expected cost of returning to the depot for preventive restocking after customer j is:

f_j^r(q) = d_{j,0} + d_{0,j+1} + \sum_{k=1}^{K} f_{j+1}(Q - k) \, p_{j+1,k}    (99)

with boundary condition:

f_n(q) = d_{n,0}, \quad q \in L_n    (100)

where s = (0, 1, ..., n) is an a priori tour, q is the vehicle's remaining load after serving customer j, f_j(q) is the expected cost from customer j onward (f_0(q) is the expected cost of the a priori tour), f_j^p(q) is the expected cost when the vehicle does not return to the depot but goes directly to the next customer, and f_j^r(q) is the expected cost when the vehicle returns to the depot for preventive restocking. It should be noted that, due to the random behavior of the customers' demands, a route failure may occur, i.e., the demand along a route may exceed the actual vehicle capacity. To prevent a route failure, a threshold value may be chosen [88]: if the residual load after serving a customer is greater than or equal to this value, it is better to move on to the next customer; otherwise, a return to the depot for preventive restocking is chosen.
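The recursion lends itself directly to backward dynamic programming. In the sketch below, the restocking cost follows Equation (99) and the boundary condition (100); the proceed cost f_p is an assumed form in the spirit of [88] (serve the next customer, with a back-and-forth depot trip on a route failure) and is not taken from this text. All inputs are illustrative.

```python
# Backward DP over an a priori tour 0,1,...,n. d is an (n+1)x(n+1)
# distance matrix with node 0 the depot; p[j][k] is the probability
# that customer j demands k units (k = 0..Q).
def expected_tour_cost(d, p, n, Q):
    f = [[0.0] * (Q + 1) for _ in range(n + 1)]
    f[n] = [d[n][0]] * (Q + 1)                     # boundary (100)
    for j in range(n - 1, 0, -1):
        for q in range(Q + 1):
            f_r = d[j][0] + d[0][j + 1] + sum(     # Eq. (99)
                p[j + 1][k] * f[j + 1][Q - k] for k in range(1, Q + 1))
            f_p = d[j][j + 1]                      # assumed proceed cost
            for k in range(Q + 1):
                if k <= q:
                    f_p += p[j + 1][k] * f[j + 1][q - k]
                else:   # route failure: extra depot round trip
                    f_p += p[j + 1][k] * (2 * d[j + 1][0]
                                          + f[j + 1][q + Q - k])
            f[j][q] = min(f_p, f_r)                # restock or proceed
    return f
```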


6.2 Vehicle Routing Problem with Stochastic Demands and Customers

In the Vehicle Routing Problem with Stochastic Demands and Customers, the customers' demands are stochastic variables and, in addition, it is not known before the route starts whether each customer is present or not, i.e., whether the customer has a demand on this specific route. This problem is more difficult than the previous one. The route is constructed as in the previous case: an a priori route containing all customers is built. Then, when the route begins, it is revealed whether a customer is present, i.e., whether its demand is different from zero, and, when the vehicle arrives at the customer, how large the demand is. As in the vehicle routing problem with stochastic demands, there are two ways to deal with a route failure: either use the preventive restocking strategy and return to the depot before the route failure occurs, or return to the depot immediately when the route failure occurs. Let s = (1, ..., n) be an a priori tour. The best-known formulation of the problem is the following [11, 38, 39]:

F(S) = \sum_{i=1}^{n} \sum_{j=i+1}^{n+1} d_{ij} \bar{p}_{ij} + \gamma_2(Q)    (101)

where

\bar{p}_{ij} = \begin{cases} p_i p_j, & j = i + 1 \\ p_i p_j \prod_{h=i+1}^{j-1} (1 - p_h), & j > i + 1 \end{cases}    (102)

and

\gamma_i(g) = \begin{cases} p_n (d_{n1} + d_{1n}) \sum_{l \,|\, \xi_{nl} > g} p_{nl}, & i = n, \ 1 \le g \le Q \\ (1 - p_i) \gamma_{i+1}(g) + p_i \Big[ \sum_{l \,|\, \xi_{il} > g} p_{il} \big( \gamma_{i+1}(Q - \xi_{il}) + d_{i1} + d_{1i} \big) + \sum_{l \,|\, \xi_{il} \le g} p_{il} \, \gamma_{i+1}(g - \xi_{il}) \Big], & i < n, \ 1 \le g \le Q \end{cases}    (103)
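Equation (102) is easy to compute directly: it is the probability that customers i and j are both present while every customer strictly between them on the a priori tour is absent. A small helper, with illustrative presence probabilities, follows.

```python
# Computes p_bar_{ij} from Eq. (102). p[h] is the presence probability
# of customer h on the a priori tour (illustrative values below).
import math

def p_bar(i, j, p):
    if j == i + 1:
        return p[i] * p[j]
    skip = math.prod(1 - p[h] for h in range(i + 1, j))
    return p[i] * p[j] * skip

p = {1: 0.9, 2: 0.5, 3: 0.8, 4: 0.7}
print(p_bar(1, 3, p))   # 0.9 * 0.8 * (1 - 0.5)
```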

8.2 Location Routing Problem with Stochastic Demands

In the Location Routing Problem with Stochastic Demands [57], facility location decisions are combined with the routing of vehicles that serve customers with stochastic demands, and the routes are operated with the preventive restocking strategy described previously. For each candidate facility i, the expected cost of returning to the depot for preventive restocking after customer j is:

f_{ij}^r(q) = d_{j,0} + d_{0,j+1} + \sum_{k=1}^{K} f_{i,j+1}(Q - k) \, p_{j+1,k}    (128)

with boundary condition f_{in}(q) = d_{n,0}, q ∈ L_n, where:

• the customers' demands ξ_j, j = 1, ..., n, are stochastic variables, independently distributed with known distributions;
• the real demand of each customer is known only when the vehicle arrives at the customer;
• the demand ξ_j, j = 1, ..., n, does not exceed the vehicle's capacity Q and follows a discrete probability distribution p_{jk} = Prob(ξ_j = k), k = 0, 1, 2, ..., K ≤ Q;
• B_i is the fixed cost of locating a facility at candidate site i;
• QB_i is the capacity of each facility i;
• q is the remaining load of the vehicle after the completion of the service at customer j;
• f_{ij}(q) is the expected cost from customer j onward (thus, if j = 0, then f_{i0}(q) denotes the expected cost of the a priori tour);
• f_{ij}^p(q) is the expected cost of the route when the vehicle does not return to the depot but goes to the next customer; and
• f_{ij}^r(q) is the expected cost when the vehicle returns to the depot for preventive restocking.

8.3 Inventory Routing Problem

The Inventory Routing Problem (IRP) is one of the core problems that have to be addressed when implementing the emerging business practice called Vendor Managed Inventory Replenishment (VMI) [17, 18, 32].


The inventory routing problem is concerned with the repeated distribution of a single product, from a single facility, to a set of n customers over a given planning horizon, possibly infinite. Customer i consumes the product at a given rate σ_i (volume per day) and has the capability to maintain a local inventory of the product up to a maximum of INV_i. The inventory at customer i is INV_{0i} at time 0. A fleet of k homogeneous vehicles, with capacity Q, is available for the distribution of the product. The objective is to minimize the average distribution costs during the planning period without causing stockouts at any of the customers. Inventory routing problems arise in the distribution of liquid products such as industrial gases or gasoline. In these problems, each customer has an inventory of the product, and the distributor must determine the timing and amount of deliveries so that the customer does not run out of product [17, 18, 32]. There are many potential models integrating inventory control and vehicle routing problems. The two most important variants of the inventory routing problem are [32]:

• A single-period model with customers having stochastic demands; here, deliveries serve to replenish inventories to levels that appropriately balance inventory carrying and shortage costs against the vehicle routing costs incurred.
• An infinite-horizon model with customers each having demands at a customer-specific, constant and deterministic rate. Here, one needs to determine (infinite horizon) replenishment policies for all customers as well as efficient vehicle routes.
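The timing logic that drives the IRP is simple to state in code: with local inventory INV_{0i} and constant consumption rate σ_i, customer i runs out after INV_{0i}/σ_i days, so a delivery must arrive no later than that. A tiny illustration, with made-up numbers, follows.

```python
# Days until customer i stocks out, given its current inventory inv0
# (volume) and consumption rate sigma (volume per day). A delivery
# must be scheduled before this deadline to avoid a stockout.
def days_until_stockout(inv0, sigma):
    return inv0 / sigma

print(days_until_stockout(inv0=60.0, sigma=12.0))   # 5.0 days
```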

8.4 Production Routing Problem

In recent years, the hybridization of the VRP with production problems has led to the field of Production Routing Problems (PRPs). These are very complicated problems, as they include, among others, decisions concerning the number of workers to be hired, the quantities of products to be produced using various strategies, the amount of inventory to be maintained, the assignment of the customers to different manufacturing plants or depots, and, finally, the design of the routes in order to satisfy the customers' demands [1].

8.5 Ship Routing Problem

Very interesting applications concerning the routing and scheduling of ships are studied within the framework of Ship Routing Problems (SRPs). A number of surveys have been conducted presenting and analyzing the most important publications in the field [21, 22, 74, 75].


9 Conclusions

In this paper, a number of variants and formulations of the vehicle routing problem were presented. The variants presented are the most cited in the literature. A huge number of other variants have been published that either use some elements of the variants presented in this paper or combine them and add some new characteristics. We did not deal with the methods used for solving these problems, due to the space limitations of the chapter. The interested reader can find in the literature either the variant that fits their problem or a suitable solution method for the problem at hand.

References 1. Adulyasak, Y., Cordeau, J.F., Jans, R.: Optimization-based adaptive large neighborhood search for the production routing problem. Transp. Sci. 48(1), 20–45 (2014) 2. Angelelli, E., Speranza, M.G.: The periodic vehicle routing problem with intermediate facilities. Eur. J. Oper. Res. 137(2), 233–247 (2002) 3. Archetti, C., Speranza, M.G., Hertz, A.: A tabu search algorithm for the split delivery vehicle routing problem. Transp. Sci. 40(1), 64–73 (2006) 4. Archetti, C., Speranza, M.G., Vigo, D.: Vehicle routing problems with profits. In: Toth, P., Vigo, D. (eds.) Vehicle Routing: Problems, Methods and Applications, 2nd edn. MOS-Siam Series on Optimization, pp. 272–297. Siam, Philadelphia (2014) 5. Assad, A.A., Golden, B.L.: Arc routing methods and applications. In: Ball, M.O., Magnanti, T.L., Momma, C.L., Nemhauser, G.L. (eds.) Network Routing, Handbooks in Operations Research and Management Science, vol. 8, pp. 375–483. Elsevier Science B. V., Amsterdam (1995) 6. Baldacci, R., Battarra, M., Vigo, D.: Routing a heterogeneous fleet of vehicles. In: Golden, B.L., Raghavan, S., Wasil, E.A. (eds.) The Vehicle Routing Problem: Latest Advances and New Challenges. Operations Research/Computer Science Interfaces Series, vol. 43, pp. 3–27. Springer, New York (2008) 7. Ball, M.O., Magnanti, T.L., Momma, C.L., Nemhauser, G.L. (eds.): Network Routing, Handbooks in Operations Research and Management Science, vol. 8. Elsevier Science B V, Amsterdam (1995) 8. Bard, J.F., Huang, L., Dror, M., Jaillet, P.: A branch and cut algorithm for the VRP with satellite facilities. IIE Trans. 30, 821–834 (1998) 9. Battarra, M., Cordeau, J.-F., Iori, M.: Pickup-and-delivery problems for goods transportation. In: Toth, P., Vigo, D. (eds.) Vehicle Routing: Problems, Methods and Applications, 2nd edn. MOS-Siam Series on Optimization, pp. 161–192. Siam, Philadelphia (2014) 10. Bektas, T., Repoussis, P.P., Tarantilis, C.D.: Dynamic vehicle routing problems. In: Toth, P., Vigo, D. (eds.) Vehicle Routing: Problems, Methods and Applications, 2nd edn. MOS-Siam Series on Optimization, pp. 299–347. Siam, Philadelphia (2014) 11. Bertsimas, D.J.: A vehicle routing problem with stochastic demands. Oper. Res. 40, 574–585 (1992) 12. Bianchi, L., Birattari, M., Manfrin, M., Mastrolilli, M., Paquete, L., Rossi-Doria, O., Schiavinotto, T.: Hybrid metaheuristics for the vehicle routing problem with stochastic demands. J. Math. Model. Alg. 5(1), 91–110 (2006)


13. Bodin, L.D., Golden, B.L.: Classification in vehicle routing and scheduling. Networks 11, 97– 108 (1981) 14. Bodin, L.D., Golden, B.L., Assad, A.A., Ball, M.: The state of the art in the routing and scheduling of vehicles and crews. Comput. Oper. Res. 10, 63–212 (1983) 15. Braca, J., Bramel, J., Posner, B., Simchi Levi, D.: A computerized approach to the New York City school bus routing problem. IIE Trans. 29(8), 693–702 (1997) 16. Brandao, J.: A tabu search algorithm for the open vehicle routing problem. Eur. J. Oper. Res. 157(3), 552–564 (2004) 17. Campbell, A., Clarke, L., Kleywegt, A., Sawelsberg, M.: The inventory routing problem. In: Crainic, T.G., Laporte, G. (eds.) Fleet Management and Logistics, pp. 95–113. Kluwer Academic Publishers, Boston (1998) 18. Campbell, A., Clarke, L., Sawelsberg, M.: Inventory routing in practice. In: Toth, P., Vigo, D. (eds.) The Vehicle Routing Problem. Monographs on Discrete Mathematics and Applications, pp. 309–330. Siam, Philadelphia (2002) 19. Caretto, C., Baker, B.: A GRASP interactive approach to the vehicle routing problem with backhauls. In: Ribeiro, C.C., Hansen, P. (eds.) Essays and Surveys on Metaheuristics, pp. 185– 199. Kluwer Academic Publishers, Norwell (2002) 20. Casco, D.O., Golden, B.L., Wasil, E.A.: Vehicle routing with backhauls: models, algorithms, and case studies. In: Golden, B.L., Assad, A.A. (eds.) Vehicle Routing: Methods and Studies, pp. 127–147. North Holland, Amsterdam (1988) 21. Christiansen, M., Fagerholt, K., Ronen, D.: Ship routing and scheduling: status and perspectives. Transp. Sci. 38(1), 1–18 (2004) 22. Christiansen, M., Fagerholt, K., Nygreen, B., Ronen, D.: Ship routing and scheduling in the new millennium. Eur. J. Oper. Res. 228, 467–483 (2013) 23. Christofides, N.: Vehicle routing. In Lawer, E.L., Lenstra, J.K., Rinnoy Kan, A.H.G., Shmoys, D.B. (eds.) The Travelling Salesman Problem: A Guided Tour of Combinatorial Optimization, pp. 431–448. Wiley, Hoboken (1985) 24. Christofides, N., Mignozzi, A., Toth, P.: The vehicle routing problem. In: Christofides, N. (ed.) Combinatorial Optimization, pp. 315–338. Wiley, Hoboken (1979) 25. Cordeau, J.F., Deasulniers, G., Desrosiers, J., Solomon, M.M., Soumis, F.: VRP with time windows. In: Toth, P., Vigo, D. (eds.) The Vehicle Routing Problem. Monographs on Discrete Mathematics and Applications, 157–193. Siam, Philadelphia (2002) 26. Cordeau, J.F., Laporte, G., Ropke, S.: Recent Models and Algorithms for One-to-One Pickup and Delivery Problems. In: Golden, B.L., Raghavan, S., Wasil, E.A. (eds.) The Vehicle Routing Problem: Latest Advances and New Challenges. Operations Research/Computer Science Interfaces Series, vol. 43, pp. 327–357. Springer, New York (2008) 27. Dantzig, G.B., Ramser, J.H.: The truck dispatching problem. Manag. Sci. 6(1), 80–91 (1959) 28. Daskin, M.: Network and Discrete Location. Models, Algorithms and Applications. Wiley, New York (1995) 29. Doerner, K.F., Salazar-Gonzalez, J.J.: Pickup-and-delivery problems for people transportation. In: Toth, P., Vigo, D. (eds.) Vehicle Routing: Problems, Methods and Applications, 2nd edn. MOS-Siam Series on Optimization, pp. 193–212. Siam, Philadelphia (2014) 30. Dror, M.: Arc Routing Theory, Solutions and Applications. Kluwer Academic Publishers, Boston (2000) 31. Eiselt, H.A., Laporte, G.: A Historical perspective on arc routing. In: Dror, M. (ed.) Arc Routing Theory, Solutions and Applications, pp. 1–16. Kluwer Academic Publishers, Boston (2000) 32. 
Federgruen, A., Simchi-Levi, D.: Analysis of vehicle routing and inventory routing problems. In: Ball, M.O., Magnanti, T.L., Monma, C.L., Nemhauser, G.L. (eds.) Network Routing. Handbooks in Operations Research and Management Science, vol. 8, pp. 297–373. Elsevier Science B. V., Amsterdam (1995) 33. Fisher, M.L.: Vehicle routing. In: Ball, M.O., Magnanti, T.L., Monma, C.L., Nemhauser, G.L. (eds.) Network Routing. Handbooks in Operations Research and Management Science, vol. 8, pp. 1–33. North Holland, Amsterdam (1995)


34. Fisher, M.L., Jaikumar, R.: A generalized assignment heuristic for vehicle routing. In: Golden, B., Bodin, L. (eds.) Proceedings of the International Workshop on Current and Future Directions in the Routing and Scheduling of Vehicles and Crews, pp. 109–124. Wiley, Hoboken (1979) 35. Fisher, M.L., Jaikumar, R., Wassenhove, L.N.V.: A mutliplier adjustment method for the generalized assignment problem. Manag. Sci. 32(9), 1095–1103 (1986) 36. Francis, P.M., Smilowitz, K.R., Tzur, M.: The period vehicle routing problem and its extensions. In: Golden, B. et al. (eds.) The Vehicle Routing Problem: Latest Advances and New Challenges, pp. 73–102. Springer, New York (2008) 37. Gendreau, M., Potvin, J.Y.: Dynamic vehicle routing and dispatching. In: Crainic, T.G., Laporte, G. (eds.) Fleet Management and Logistics, pp. 115–125. Kluwer Academic Publishers, Boston (1998) 38. Gendreau, M., Laport, G., Seguin, R.: A tabu search heuristic for the vehicle routing problem with stochastic demands and customers. Oper. Res. 44, 469–477 (1995) 39. Gendreau, M., Laport, G., Seguin, R.: An exact algorithm for the vehicle routing problem with stochastic demands and customers. Oper. Res. 29, 143–155 (1995) 40. Gendreau, M., Laport, G., Seguin, R.: Stochastic vehicle routing. Eur. J. Oper. Res. 88, 3–12 (1996) 41. Gendreau, M., Laporte, G., Potvin, J.-Y.: Vehicle routing: modern heuristics. In: Aarts, E.H.L., Lenstra, J.K. (eds.) Local Search in Combinatorial Optimization, pp. 311–336. Wiley, Chichester (1997) 42. Gendreau, M., Laporte, G., Potvin, J.-Y.: Metaheuristics for the capacitated VRP. In: Toth, P., Vigo, D. (eds.) The Vehicle Routing Problem. Monographs on Discrete Mathematics and Applications, pp. 129–154. Siam, Philadelphia (2002) 43. Gerdessen, J.H.: Vehicle routing problem with trailers. Eur. J. Oper. Res. 93, 135–147 (1996) 44. Golden, B.L., Assad, A.A.: Vehicle Routing: Methods and Studies. North Holland, Amsterdam (1988) 45. Golden, B.L., Wong, R.T.: Capacitated arc routing algorithms. Networks 11, 305–315 (1981) 46. Golden, B.L., Wasil, E.A., Kelly, J.P., Chao, I.M.: The impact of metaheuristics on solving the vehicle routing problem: algorithms, problem sets, and computational results. In: Crainic, T.G., Laporte, G. (eds.) Fleet management and logistics, pp. 33–56. Kluwer Academic Publishers, Boston (1998) 47. Golden, B.L., Raghavan, S., Wasil, E.A. (eds.): The vehicle routing problem: latest advances and new challenges. Operations Research/Computer Science Interfaces Series, vol. 43. Springer, New York (2008) 48. Gribkovskaia, I., Laporte, G.: One-to-many-to-one single vehicle pickup and delivery problems. In: Golden, B.L., Raghavan, S., Wasil, E.A. (eds.) The Vehicle Routing Problem: Latest Advances and New Challenges. Operations Research/Computer Science Interfaces Series, vol. 43, pp. 359–377. Springer, New York (2008) 49. Hachicha, M., Hodgson, M.J., Laporte, G., Semet, F.: Heuristics for the multi-vehicle covering tour problem. Comput. Oper. Res. 27, 29–42 (2000) 50. Hertz, A., Mittaz, M.: Heuristic algorithms. In: Dror, M. (ed.) Arc Routing Theory, Solutions and Applications, pp. 327–386. Kluwer Academic Publishers, Boston (2000) 51. Irnich, S., Schneider, M., Vigo, D.: Four variants of the vehicle routing problem. In: Toth, P., Vigo, D. (eds.) Vehicle Routing: Problems, Methods and Applications, 2nd edn. MOS-Siam Series on Optimization, pp. 241–271. Siam, Philadelphia (2014) 52. Ke, L., Archetti, C., Feng, Z.: Ants can solve the team orienteering problem. Comput. Ind. Eng. 
54(3), 648–665 (2008) 53. Lahyani, R., Khemakhem, M., Semet, F.: Rich vehicle routing problems: from a taxonomy to a definition. Eur. J. Oper. Res. 241, 1–14 (2015) 54. Laporte, G., Semet, F.: Classical heuristics for the capacitated VRP. In: Toth, P., Vigo, D. (eds.) The Vehicle Routing Problem. Monographs on Discrete Mathematics and Applications, pp. 109–128. Siam, Philadelphia (2002)


55. Laporte, G., Gendreau, M., Potvin, J.-Y., Semet, F.: Classical and modern heuristics for the vehicle routing problem. Int. Trans. Oper. Res. 7, 285–300 (2000) 56. Lin, C., Choy, K.L., Ho, G.T.S., Chung, S.H., Lam, H.Y.: Survey of green vehicle routing problem: past and future trends. Expert Syst. Appl. 41, 1118–1138 (2014) 57. Marinakis, Y.: An improved particle swarm optimization algorithm for the capacitated location routing problem and for the location routing problem with stochastic demands. Appl. Soft Comput. 37, 680–701 (2015) 58. Marinakis, Y., Migdalas, A.: Heuristic solutions of vehicle routing problems in supply chain management. In: Pardalos, P.M., Migdalas, A., Burkard, R. (eds.) Combinatorial and Global Optimization, pp. 205–236. World Scientific Publishing Co., Singapore (2002) 59. Marinakis, Y., Marinaki, M.: A bilevel genetic algorithm for a real life location routing problem. Int. J. Log. Res. Appl. 11(1), 49–65 (2008) 60. Migdalas, A.: Bilevel programming in traffic planning: models, methods and challenge. J. Global Optim. 7, 381–405 (1995) 61. Min, H., Jayaraman, V., Srivastava, R.: Combined location-routing problems: a synthesis and future research directions. Eur. J. Oper. Res. 108, 1–15 (1998) 62. Montane, F.A.T., Galvao, R.D.: A tabu search algorithm for the vehicle routing problem with simultaneous pick-up and delivery service. Comput. Oper. Res. 33, 595–619 (2006) 63. Montoya-Torres, J.R., Franco, J.L., Isaza, S.N., Jimenez, H.F., Herazo-Padilla, N.: A literature review on the vehicle routing problem with multiple depots. Comput. Ind. Eng. 79, 115–129 (2015) 64. Nagy, G., Salhi, S.: Location-routing: issues, models and methods. Eur. J. Oper. Res. 177, 649–672 (2007) 65. Pearn, W.L., Assad, A.A., Golden, B.L.: Transforming arc routing into node routing problems. Comput. Oper. Res. 14(4), 285–288 (1987) 66. Pereira, F.B., Tavares, J.: Bio-inspired algorithms for the vehicle routing problem. Studies in Computational Intelligence, vol. 161. Springer, Berlin (2008) 67. Pillac, V., Gendreau, M., Gueret, C., Medaglia, A.L.: A review of dynamic vehicle routing problems. Eur. J. Oper. Res. 225, 1–11 (2013) 68. Powell, W.B., Jaillet, P., Odoni, A.: Stochastic and dynamic networks and routing. In: Ball, M.O., Magnanti, T.L., Momma, C.L., Nemhauser, G.L. (eds.) Network Routing. Handbooks in Operations Research and Management Science, vol. 8, pp. 141–295. Elsevier Science B V, Amsterdam (1995) 69. Prodhon, C., Prins, C.: A survey of recent research on location-routing problems. Eur. J. Oper. Res. 238, 1–17 (2014) 70. Psaraftis, H.N.: Scheduling large-scale advance request dial-a-ride systems. Am. J. Math. Manag. Sci. 6, 327–368 (1986) 71. Psaraftis, H.N.: Dynamic vehicle routing problems. In: Golden, B.L., Assad, A.A. (eds.) Vehicle Routing: Methods and Studies, pp. 223–248. North Holland, Amsterdam (1988) 72. Psaraftis, H.N.: Dynamic vehicle routing: status and prospects. Ann. Oper. Res. 61, 143–164 (1995) 73. Renaud, J., Laporte, G., Boctor, F.F.: A Tabu search heuristic for the multidepot vehicle routing problem. Comput. Oper. Res. 23(3), 229–235 (1996) 74. Ronen, D.: Cargo ships routing and scheduling: survey of models and problems. Eur. J. Oper. Res. 12, 119–126 (1983) 75. Ronen, D.: Ships scheduling: the last decade. Eur. J. Oper. Res. 71(3), 325–333 (1993) 76. Sariklis, D., Powell, S.: A heuristic method for the open vehicle routing problem. J. Oper. Res. Soc. 51(5), 564–573 (2000) 77. 
Sevkli, A.Z., Sevilgen, F.E.: Discrete particle swarm optimization for the team orienteering problem. Turk. J. Electr. Eng. Comput. Sci. 20(2), 231–239 (2012) 78. Solomon, M.M.: Algorithms for the vehicle routing and scheduling problems with time window constraints. Oper. Res. 35(2), 254–265 (1987) 79. Solomon, M.M., Desrosiers, J.: Time window constrained routing and scheduling problems. Transp. Sci. 22(1), 1–13 (1988)


80. Stewart, W.R., Golden, B.L.: Stochastic vehicle routing: a comprehensive approach. Eur. J. Oper. Res. 14, 371–385 (1983) 81. Tarantilis, C.D.: Solving the vehicle routing problem with adaptive memory programming methodology. Comput. Oper. Res. 32(9), 2309–2327 (2005) 82. Tiwari, A., Chang, P.C.: A block recombination approach to solve green vehicle routing problem. Int. J. Prod. Econ., 164, 379–387 (2015) 83. Toth, P., Vigo, D.: The Vehicle Routing Problem. Monographs on Discrete Mathematics and Applications. Siam, Philadelphia (2002) 84. Toth, P., Vigo, D.: VRP with backhauls. In: Toth, P., Vigo, D. (eds.) The Vehicle Routing Problem. Monographs on Discrete Mathematics and Applications, pp. 195–224. Siam, Philadelphia (2002) 85. Toth, P., Vigo D.: Vehicle Routing: Problems, Methods and Applications, 2nd edn. MOS-Siam Series on Optimization. Siam, Philadelphia (2014) 86. Vansteenwegen, P., Souffriau, W., Berghe, G.V., Oudheusden, D.V.: A guided local search metaheuristic for the team orienteering problem. Eur. J. Oper. Res. 196, 118–127 (2009) 87. Xiao, Y., Zhao, Q., Kaku, I., Xu, Y.: Development of a fuel consumption optimization model for the capacitated vehicle routing problem. Comput. Oper. Res. 39(7), 1419–1431 (2012) 88. Yang, W.H., Mathur, K., Ballou, R.H.: Stochastic vehicle routing problem with restocking. Transp. Sci. 34, 99–112 (2000) 89. Yusoff, M., Ariffin, J., Mohamed, A.: A multi-valued discrete particle swarm optimization for the evacuation vehicle routing problem. In: Tan, Y. et al. (eds.) ICSI 2011, Part I. Lecture Notes in Computer Science, vol. 6728, pp. 182–193. Springer, Berlin (2011) 90. Zhang, T., Chaovalitwongse, W.A., Zhang, Y.: Scatter search for the stochastic travel-time vehicle routing problem with simultaneous pick-ups and deliveries. Comput. Oper. Res. 39, 2277–2290 (2012)

New MIP model for Multiprocessor Scheduling Problem with Communication Delays

Abdessamad Ait El Cadi, Mustapha Ratli, and Nenad Mladenović

Abstract In this chapter, we consider the Multiprocessor Scheduling Problem with Communication Delays. We propose a new Mixed Integer Program (MIP) formulation for this problem, taking into account the precedence constraints and the communication delays, which depend on the network and the tasks. The new proposed formulation reduces both the number of variables and the number of constraints compared to the best mathematical programming formulations from the literature. We summarized the mathematical formulation in a previous work; in the present chapter, we add extra results to show the quality of the new model. The aim of the extended tests is, on one side, to assess the quality of this model and, on the other side, to show which parameters affect its performance, especially the impact of the network architecture, the communication, and the number of tasks. The results are significant, but there are still some open problems to solve. Keywords Multiprocessors · Task scheduling · Communication delay · Mixed Integer Program · CPLEX

A. Ait El Cadi (✉) · M. Ratli
Université Polytechnique Hauts-De-France (UPHF)/LAMIH CNRS UMR 8201, Campus Mont-Houy, Valenciennes Cedex 9, France
e-mail: [email protected]

N. Mladenović
Emirates College of Technologies, Abu Dhabi, UAE
Mathematical Institute, SASA, Belgrade, Serbia

© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_8

1 Introduction

Problem Description The paper [2] deals with the Multiprocessors Scheduling Problem with Communication Delays (MSPCD). The MSPCD is defined by two graphs. The first graph G = (N, A) is a directed acyclic graph, where the vertex set N corresponds to n tasks and the edge set A represents the logical precedence between tasks. In the graph G, the vertex weights indicate the task processing times, while the edge weights represent the amounts of inter-task exchanged data. The second graph G' = (M, A') is a complete graph corresponding to the computing network, where the vertex set M represents m computing units (i.e., processors), and the edge set A' corresponds to the communication support between each two processors. We consider here the homogeneous case, i.e., the computing units are identical. The communication is defined by two terms, the communication access cost and the communication rate, which are represented by two matrices. The problem is to assign the tasks to processing units while respecting the precedence and communication constraints. For each task, the starting time and its assigned processing unit must be determined. There is no overlapping between tasks assigned to the same processing unit: each processing unit can execute only one task at a time. Preemption is not allowed in our case. The objective is to minimize the makespan, the maximum completion time, i.e., the end time of the last task to be processed.

To illustrate the problem statement, a small example with n = 5 tasks to be executed on m = 4 identical processors is given in Figure 1. In this figure, the precedence graph of the tasks to be scheduled is given with the corresponding weights. The precedence structure given in the graph indicates that tasks 1 and 3 have no predecessors, while tasks 4 and 5 have no successors. Each task has a duration shown as the vertex weight; for instance, task 1 has a duration of 10 time units. The edge weight indicates the amount of exchanged data. In this example, the link from task 1 to task 2 indicates that task 2 needs data of size 10 to be received from task 1 before starting its execution. The time cost to exchange this amount of data depends on the link between the processing units to which the tasks were assigned. The tables show the communication access cost and rate for 4 processing units using different communication supports (shared memory, bus, etc.). The Gantt diagram shows a feasible schedule.

Fig. 1 An example of MSPCD with n = 5 and m = 4

Application Areas This problem of task assignment arises in many areas of application. Figure 2 summarizes the main areas of its application. There also exists a huge amount of literature covering many task and parallel computer models. The basic classification is into homogeneous and heterogeneous systems. This field has received large attention from researchers in recent years, due to evolving technologies and modern applications [21], but also to the new questions that arise in manufacturing, logistics, computing systems, and so on: questions about energy saving, efficiency, reliability, and real-time limits [20]. Scheduling and load balancing problems have been studied in different areas [4], such as the job-shop [5], scheduling in parallel and distributed systems [1], the resource-constrained project problem [21], the assembly line balancing problem [32], and simulation in a distributed environment [23].

Literature Review Rayward-Smith [29] was the first to address the communication delay case in solving the Multiprocessor Scheduling Problem (MSP). He studied the case with a unit communication delay, proved that it is NP-hard, and proposed a


generalized list scheduling algorithm. Chrétienne and Picouleau [8] give a survey of the existing variants of the MSPCD, including their complexity and solution methods. The majority of works deal with meta-heuristics (see, e.g., [15, 26]). Moreover, practical scheduling problems are even harder, since they incorporate additional side constraints and/or optimize more than one objective. Hwang et al. [18] present a comparison of algorithms for the case of identical processors with communication costs. Luo et al. [24] review 20 greedy algorithms for cases without communication. The m-machine problem with communication delays has been proven to be NP-complete for arbitrary precedence relations, even for two processors [28]. Nevertheless, there are some special classes for which polynomial time algorithms may be constructed [3, 19, 28].

Fig. 2 An overview of the areas related to the MSPCD problem

Even so, prior to developing an effective heuristic, researchers usually start with a mathematical programming formulation of the scheduling problem under study. Although several mathematical programming models have been developed, they typically do not perform very well on practically motivated problem instances, due to model formulation and/or computational difficulties [31]. Unlu and Mason [31] give an overview of the different mathematical programming formulations for the case without communication. They document four different MIP formulations, based on four different types of decision variables: assignment and positional date variables [10], linear ordering variables [27], time-indexed variables [30], and network variables [6]. To our knowledge, three papers (Davidović et al. [12, 13] and Venugopalan and Sinnen [33]) consider the communication delays. In these works, the suggested standard linearization increases the size of the problem significantly.

Outlines This chapter is organized as follows. Section 2 introduces the proposed MIP model. Sections 3 and 4 present the computational results on the Davidović et al. [11] instances to show the quality of our scheduling model. We then conclude this work in Section 5.

2 The New Mathematical Models

In this section, we present the mathematical models developed by Ait El Cadi et al. [2] for the MSPCD. First, the quadratic model and its linearization are given; second, the reduced model and the associated pruning procedures are illustrated. In the rest of this chapter, we will need the following notation: Pred(i) = {j ∈ N : (j, i) ∈ A} is the set of tasks that precede task i; Succ(i) = {j ∈ N : (i, j) ∈ A} is the set of tasks that succeed task i; N+ = {i ∈ N : Succ(i) = ∅} is the set of tasks with no successors; N− = {i ∈ N : Pred(i) = ∅} is the set of tasks with no predecessors; t_i is the processing time of task i; c_{ik,jl} is the communication delay between task i on processor k and task j on processor l; and B is a very large real number.

Communications depend on the architecture of the hardware used, such as shared memory, Ethernet links, and so on. The network architecture (for example, fully connected, ring, hypercube, or star) also has an impact on communication. Generally, the communication cost is nonlinear, and the communication cost function is defined as

c_{ik,jl} = a_{kl} + d_{ij} / r_{kl},

where a_{kl} is the fixed communication cost between processing units k and l, d_{ij} is the size of the data sent from task i to task j, and r_{kl} is the communication rate between the two processors k and l. We assume that r_{kl} = r_{lk} and a_{kl} = a_{lk}.

In the literature, there are different types of decision variables used to model parallel machine scheduling problems as an MIP. Among them, we can list: assignment and positional date variables [1, 10], linear ordering variables [1, 27], time-indexed variables [30], network variables [6], and packing formulation variables [13]. Unlike Davidović et al. [12, 13] and Venugopalan and Sinnen [33], we use assignment and positional date variables, as follows:

x_{ik} = 1 if task i is assigned to processor k, and x_{ik} = 0 otherwise;

s_i denotes the starting time of task i.

We choose these variables since they provide the answer to the natural scheduling questions: who does which job and when the job is scheduled. Also, this choice reduces the number of variables used in [12] and [33] and avoids their overlapping.

The Quadratic MSPCD Model The first mathematical model presented in [2] is a quadratic model and is as follows:

\min C_{max}    (1)

s.t.

\sum_{k=1}^{m} x_{ik} = 1, \quad \forall i \in N    (2)

s_i + t_i \le C_{max}, \quad \forall i \in N    (3)

s_i + t_i + \sum_{k=1}^{m} \sum_{l=1}^{m} c_{ik,jl} x_{jl} x_{ik} \le s_j, \quad \forall j \in N, \ \forall i \in Pred(j)    (4)

s_i + t_i - s_j \le B(1 - x_{ik} x_{jk}) \ \text{or} \ s_j + t_j - s_i \le B(1 - x_{ik} x_{jk}), \quad \forall i, j \in N, \ \forall k \in M    (5)

x_{ik} \in \{0, 1\}; \ s_i \in R^+, \quad \forall i \in N, \ \forall k \in M    (6)


where the objective function (1) minimizes the makespan C_max. The set of constraints (2) guarantees that each task is assigned to one and only one processor. C_max corresponds to the total length of the schedule, which is expressed by (3). The set of precedence constraints (4) states that each task j that succeeds a task i must start after the starting time of task i plus its processing and communication times. The set of disjunctive constraints (5) asserts that two tasks assigned to the same processor must not overlap. In these constraints, we use a big value B to express that the constraints hold only for tasks assigned to the same computing unit, i.e., when x_ik = x_jk = 1. Consider the MSPCD model (1)-(6): the number of binary and continuous variables is nm + n, the number of linear constraints is 2n, and the number of nonlinear constraints is |A| + mn².

The Linear MSPCD Model There are many examples of integer linear programming models for solving scheduling problems. However, they usually do not take into account all the constraints. For example, for the assembly line balancing problem, Urban [32] builds a model that minimizes the number of used machines while respecting precedence constraints; however, this model is not suitable for the MSPCD. In parallel computing, Darte [9] suggests another MIP formulation, but without communication constraints. Chen and Lin [7] propose an MIP that minimizes the communication costs under capacity constraints. Our first model in [2] is quadratic and takes into account all the constraints; the communication/precedence and the disjunctive constraints are quadratic, but the objective function and all other constraints are linear. The communication/precedence constraints were linearized without adding any extra variables, and the disjunctive constraints were linearized by introducing n² binary variables. The proof of the linearization is given in [2]. The resulting Mixed Integer Linear Program (MILP) is:

\min C_{max}    (7)

s.t.

\sum_{k=1}^{m} x_{ik} = 1, \quad \forall i \in N    (8)

s_i + t_i \le C_{max}, \quad \forall i \in N    (9)

s_i + t_i + c_{ik,jl} (x_{jl} + x_{ik} - 1) \le s_j, \quad \forall k, l \in M, \ \forall j \in N, \ \forall i \in Pred(j)    (10)

s_i + t_i - s_j \le B(3 - x_{ik} - x_{jk} - \delta_{ij}), \quad \forall i, j \in N, \ \forall k \in M    (11)

s_j + t_j - s_i \le B(2 - x_{ik} - x_{jk} + \delta_{ij}), \quad \forall i, j \in N, \ \forall k \in M    (12)

x_{ik}, \delta_{ij} \in \{0, 1\}; \ s_i \ge 0, \quad \forall i, j \in N, \ \forall k \in M    (13)

Consider the linear MSPCD model (7)-(13): the numbers of binary variables, continuous variables, and constraints are n(m + n), n, and 2n + m²|A| + 2mn², respectively.
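As an illustration, the MILP (7)-(13) can be stated almost verbatim in a modeling library. The sketch below uses PuLP; the instance data (tasks, processing times, precedence arcs, and the communication-delay function) are illustrative placeholders, and the big-M constant B is set crudely rather than tuned.

```python
import pulp

N = [0, 1, 2]                       # tasks
M = [0, 1]                          # processors
t = {0: 3, 1: 2, 2: 4}              # processing times
A = [(0, 2), (1, 2)]                # arcs: (i, j) means i precedes j
a, r, d = 1.0, 2.0, {(0, 2): 4, (1, 2): 6}

def c(i, k, j, l):                  # communication delay c_{ik,jl}
    return 0.0 if k == l else a + d[i, j] / r

B = sum(t.values()) + 100           # crude big-M constant

prob = pulp.LpProblem("MSPCD", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (N, M), cat="Binary")
delta = pulp.LpVariable.dicts("delta", (N, N), cat="Binary")
s = pulp.LpVariable.dicts("s", N, lowBound=0)
Cmax = pulp.LpVariable("Cmax", lowBound=0)

prob += Cmax                                        # objective (7)
for i in N:
    prob += pulp.lpSum(x[i][k] for k in M) == 1     # (8)
    prob += s[i] + t[i] <= Cmax                     # (9)
for (i, j) in A:                                    # precedence (10)
    for k in M:
        for l in M:
            prob += s[i] + t[i] + c(i, k, j, l) * (x[j][l] + x[i][k] - 1) <= s[j]
for i in N:                                         # disjunctive (11)-(12)
    for j in N:
        if i != j:
            for k in M:
                prob += s[i] + t[i] - s[j] <= B * (3 - x[i][k] - x[j][k] - delta[i][j])
                prob += s[j] + t[j] - s[i] <= B * (2 - x[i][k] - x[j][k] + delta[i][j])

prob.solve()
print(pulp.value(Cmax))
```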


The Reduced Linear MSPCD Model Not all variables and constraints in the previous MILP model are mandatory. The constraints (11) and (12) are needed only if the two tasks i and j are not linked by a path in the task graph G. In other words, if there is a path from i to j or from j to i, any solution that respects the precedence constraints (4) will never allow these two tasks to overlap on the same processor. Using this property, Ait El Cadi et al. [2] proposed a preprocessing procedure to prune the MILP model. This procedure uses the sets Γ(i), defined for each task i as the set of tasks that can be reached from i by a path in the graph G or in the inverse graph G⁻¹. These sets were computed using an algorithm based on breadth-first search, as described in [22]. Therefore, the reduced linear model is as follows:

\min C_{max}    (14)

s.t.

\sum_{k=1}^{m} x_{ik} = 1, \quad \forall i \in N    (15)

s_i + t_i \le C_{max}, \quad \forall i \in N^+    (16)

s_i + t_i + c_{ik,jl} (x_{jl} + x_{ik} - 1) \le s_j, \quad \forall k, l \in M, \ \forall j \in N, \ \forall i \in Pred(j)    (17)

s_i + t_i - s_j \le B(3 - x_{ik} - x_{jk} - \delta_{ij}), \quad \forall k \in M, \ \forall i \in N, \ \forall j \in N \setminus \Gamma(i)    (18)

s_j + t_j - s_i \le B(2 - x_{ik} - x_{jk} + \delta_{ij}), \quad \forall k \in M, \ \forall i \in N, \ \forall j \in N \setminus \Gamma(i)    (19)

x_{ik}, \delta_{ij} \in \{0, 1\}; \ s_i \ge 0, \quad \forall i \in N, \ j \in N \setminus \Gamma(i), \ \forall k \in M    (20)

The procedure in Figure 3 shows how to obtain the set of necessary disjunctive constraints in the preprocessing phase. Given an undirected graph G = (N, A): (1) we compute the n-th power of G, a graph G^n = (N, A^n) in which two vertices are adjacent when their distance in G is at most n [16]; (2) we compute H, the complement of the undirected graph associated with G^n; (3) then, in H, the connected tasks are the tasks concerned by the disjunctive constraints. In the example in Figure 3, the numbers of constraints and binary variables due to the disjunctive constraints are reduced from 600 to 12 and from 100 to 2, respectively. This is a big gain, even for a graph of rather low density (D = 26.67%). Ait El Cadi et al. [2] also show that the number of disjunctive constraints can be known in advance by calculating |\overline{A^n}| from the following equation:

|\overline{A^n}| = \sum_{1 \le i, j \le n} \neg \Big[ \Big( \sum_{k=1}^{n} Adj^k \Big)(i, j) \Big] = \sum_{1 \le i, j \le n} \neg \big[ (Adj + I)^n (i, j) \big]    (21)

where Adj is the boolean adjacency matrix of the task graph and the matrix operations are boolean.
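The reachability sets Γ(i) used by this pruning can be computed exactly as described, by breadth-first search on G and on its inverse. A minimal sketch with a made-up instance follows; the pairs it returns are the only ones that need disjunctive constraints (18)-(19).

```python
# Collect, for every task i, the set of tasks connected to i by a
# directed path (in either direction); only pairs NOT so connected
# need disjunctive constraints.
from collections import deque

def reachable(adj, src):
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def disjunctive_pairs(N, arcs):
    adj, radj = {}, {}
    for (i, j) in arcs:
        adj.setdefault(i, []).append(j)
        radj.setdefault(j, []).append(i)
    gamma = {i: reachable(adj, i) | reachable(radj, i) for i in N}
    return [(i, j) for i in N for j in N if j not in gamma[i]]

# Example: a chain 0 -> 1 -> 2 plus an isolated task 3
print(disjunctive_pairs([0, 1, 2, 3], [(0, 1), (1, 2)]))
```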

Fig. 10 General regression between the number of constraints and the number of tasks, for dense graphs (D ≥ 50%) and m = 8. With communication: y = 22.104x² − 9.6126x − 488.82, R² = 0.9773; without communication: y = 164.81x − 4427.8, R² = 0.9387

Table 9 Average solving time for different architectures and number of tasks

            Nb CPU = 2        Nb CPU = 4                  Nb CPU = 8
Nb tasks    Null    Full      Null    Full     Hyper      Null    Full      Hyper     Ring
10          0.53    0.40      0.15    0.30     0.30       0.11    0.37      0.41      0.43
20          0.36    0.66      0.43    0.73     0.62       0.11    1.79      2.43      1.87
30          1.05    85.72     0.21    49.67    44.51      0.12    117.93    50.23     97.80
40          0.44    2.13      0.21    112.05   127.95     0.16    339.69    287.10    285.56
50          0.50    1.29      0.20    5.41     5.83       0.18    19.00     32.14     36.30

Table 10 The communication impact on the solving time

Nb tasks    Without communication    With communication    Ratio
10          0.26                     0.37                  1.41
20          0.28                     1.37                  4.87
30          0.41                     73.84                 178.80
40          0.24                     173.99                714.18
50          0.26                     13.74                 52.56

Communication has a strong effect on the solving time. Table 10 summarizes the effect of communication on the solving time, over the 791 solved instances. The solving time is in some cases multiplied by almost 700. These results are quite erratic and hard to analyze.


5 Conclusions

We propose a new MIP formulation for the MSPCD problem. The linearization is done without adding any extra variables. We significantly reduce the size of the model, bringing the number of variables from O(n²m²) down to O(n) for graphs with higher density. We use two reductions: (1) we reduce the size of the problem, especially the number of binary variables, by exploring the graph and analyzing its n-th power graph; (2) using original preprocessing procedures, the solution space is additionally reduced by general cuts. Our techniques work for all cases, and especially for cases with strongly connected graphs. These graphs are the target, and the main difficulty, of cases with communication delays; the literature generally qualifies them as the harder problems. We derive Equation (21), which gives the number of disjunctive constraints needed by the model. Our model outperforms the current state-of-the-art: the largest problem that can be solved in a few seconds was increased from 20 to 50 tasks, which we believe is significant. There are still some open questions regarding this model. (1) How can one characterize a task graph for which there is no need for disjunctive constraints? In other words, how can one characterize a graph whose n-th power graph is a complete graph? The answer to this question would help separate easy scheduling problems from harder ones. (2) Is there a link between some characteristics of the graph (such as degrees, density, and chromatic number) and the order of magnitude of the number of disjunctive constraints? (3) How can this size-reduction approach be generalized to other optimization problems on graphs?

Acknowledgements The authors would like to gratefully thank the IRT (Institut de Recherche Technologique) Railenium for the financial support of this research. The authors also thank the International Chair Professor N. Mladenović for his contribution to this work; this Chair position at the University of Valenciennes is cofunded by the region Nord-Pas-de-Calais and the IRT Railenium. This research is conducted within, or partially covered by, the framework of grant no. BR05236839, "Development of information technologies and systems for stimulation of personality's sustainable development as one of the bases of development of digital Kazakhstan".


On Optimization Problems in Urban Transport

Tran Duc Quynh and Nguyen Quang Thuan

Abstract This chapter reviews some urban transport problems that are vital in developing countries. These problems are formulated as optimization programs. They are usually nonlinear, discrete, bi-level, and multi-objective. Finding an efficient solution method for each problem is still a challenge. Besides, the reformulation of existing mathematical models in a solvable form is also an open question. Keywords Urban traffic planning · Transit route network design · Bus rapid transit scheduling · Traffic signal control · Bi-level optimization · Multi-objective optimization · (Meta-)heuristics · Genetic algorithms

1 Introduction

Urban transportation is crucial to the economic and social development of a country. Traffic congestion causes delays, increases the emission of hazardous gases, aggravates users' stress, and multiplies accidents. Finding solutions that limit traffic congestion is now an urgent task for researchers in operations research and applied mathematics. To that end, improving public transportation networks (e.g., designing the route network, setting frequencies, scheduling buses, etc.) is an efficient lever. A transit route network design problem (TNDP) is to configure the itinerary of bus routes and their corresponding frequencies in order to optimize costs. A TNDP is

T. D. Quynh
Faculty of Information Technology, Vietnam National University of Agriculture, Hanoi, Vietnam
e-mail: [email protected]

N. Q. Thuan
International School (VNU-IS), Vietnam National University-Hanoi (VNU), Hanoi, Vietnam
e-mail: [email protected]


modeled in terms of graphs whose nodes represent bus stops, intersections of streets, or zone centroids, and whose arcs represent connections between two nodes. For cities where a bus line network already exists, an important issue is to optimally assign a fleet of vehicles to the pre-defined routes; this is known as a frequency setting problem. TNDP and frequency setting problems can be formulated as optimization problems whose objective functions are the total travel time of all passengers, the operation costs, etc., and whose constraints are demand satisfaction, the required level of service, resource availability, the behavior of passengers, etc. The behavior of passengers is usually described via an optimization problem, so the mathematical formulation is in general bi-level and nonlinear. In the literature, existing solution methods are based on deterministic local approaches (gradient descent algorithms [6]), heuristic approaches [14, 15], and meta-heuristic approaches [16, 55].

Nowadays, many cities all over the world have developed bus rapid transit (BRT) systems, such as Bogota, Delhi, Guangzhou, and Jakarta. BRT can be defined as a corridor in which buses operate on a dedicated right-of-way, such as a busway or a bus lane reserved for buses on an arterial road or freeway. It can be understood as a rapid mode of transportation that combines the quality of rail transit and the flexibility of buses [27]. BRT may be an efficient transportation service, especially for cities in developing countries, where populations are highly transit-dependent and financial resources are limited [39]. To improve the quality of BRT or bus service, optimal scheduling must be considered. One usually schedules the frequency of each bus route, namely the headway, i.e., the time between two consecutive buses leaving the initial station. Another scheduling problem is to plan whether BRT vehicles should stop at certain stations; this is called the problem of scheduling combination for buses. Most research has focused on traditional bus scheduling [2, 8, 10, 38, 44, 47]; for BRT systems there are fewer studies. The frequency of a BRT route is studied by Bai et al. in 2007 [3] and by Liang et al. in 2011 [28]. Scheduling combination for BRT is considered in [32]. Sun et al. in 2008 investigated the frequency and the scheduling combination for a BRT route simultaneously [45]. The solution methods were usually heuristic, such as tabu search methods [3] and genetic algorithms [28, 45].

Apart from optimizing public transportation networks, traffic signal control (the determination of the green time and cycle time of traffic lights to minimize the total delay of all users) plays an important role in reducing congestion, improving safety, and protecting the environment [42]. At the beginning, researchers studied isolated junctions [52]; an urban network is then signalized by considering all its junctions independently. Some researchers considered groups of junctions, as in the green wave problem, in which the traffic light at a junction depends on the others [40, 53]. Normally, one assumes that the flow rates will not change after setting a new timing. Almond and Lott in 1968, however, showed that this assumption is no longer valid for a wide area [1]. We thus need to investigate the problem by considering the change of the associated flows resulting from a given signal timing (rerouting).

The following sections present typical models for these problems. Section 2 is reserved for a TNDP.
A frequency setting problem is described in Section 3.


Section 4 is dedicated to a BRT scheduling problem, while Section 5 is devoted to a traffic signal control problem considering rerouting. In each section, existing solution methods and challenges are discussed.

2 Transit Network Design Problems

2.1 A Mathematical Model

This section describes a mathematical model considered in [19, 35, 43]. To handle the TNDP conveniently, the transit route network is transformed into a simple, directed, weighted graph in which the arc weights represent the passengers' cost when they use these arcs to travel. The passenger cost consists of two main factors: traveling time and waiting time. The transit network in Figure 1 is modeled by the graph in Figure 2. The graph has five types of arcs and five types of nodes. A destination node represents a node where some trips end: it has no successor node and has at least one predecessor node.

Fig. 1 Example network

Fig. 2 Graph model network


In fact, at one location there are numerous passengers arriving from and departing to other locations in the city; hence each location is described by one origin node and one destination node. A stop node represents a platform of a station, where passengers alight, board, or wait for buses. An alighting node refers to the specific line where passengers alight. A boarding node refers to the specific line that passengers board. A line arc denotes a bus line connecting two stations; its weight is the time needed to travel from a stop node to its neighbors. A walking arc connects an origin to a stop node, or a stop node to a destination; its weight is the time passengers need to walk from their homes, offices, or university to a bus stop, etc. A boarding arc connects a stop node to a boarding node; its weight is the time passengers need to wait for the specific bus line. An alighting arc connects an alighting node to a stop node; these arcs describe passengers alighting at a bus stop. A stopping arc connects an alighting node to a boarding node; these arcs express passengers who do not change their bus line at a stop node. Note that in this graph model only walking arcs, line arcs, and boarding arcs have nonzero weights; the weight of the others is zero.

The model is adapted from common line problems that use hyperpaths to formulate the problem. The concept of hyperpath was first proposed by Nguyen and Pallottino [33]. In the following, we present the travel behavior of passengers and the cost of hyperpaths. Consider a hyperpath $H_p = (N_p, A_p, t_p)$ on the graph $G = (N, A)$, where $t_{ap}$ is the probability that passengers use arc $a$ of hyperpath $H_p$ (or $p$). At a stop node there are several arcs leading out on a hyperpath, and traffic is split according to $t_{ap}$. To present the model easily, we use the following notation:

– $OUT_p(i)$: set of arcs leading out of node $i$ on hyperpath $p$;
– $WA$, $WLA$, and $BA$: sets of waiting arcs, walking arcs, and boarding arcs, respectively;
– $S_p$: set of stop nodes on hyperpath $p$;
– $f_a$: frequency of the bus line on arc $a$ (note that, by the graph model presented, there is at most one bus line on an arc).

Before analyzing the users' moving strategy, we adopt the following assumptions, standard in common line problems [19]:

– Passengers arrive randomly at every stop node and always board the first arriving vehicle of their choice set.
– All transit lines are statistically independent, with exponentially distributed headways whose means equal the inverse of the line frequencies.

With the above hypotheses, for all $i \in S_p$, the probability $t_{ap}$ is calculated as

\[ t_{ap} = f_a / F_{ip}, \quad \forall a \in OUT_p(i) \cap BA, \tag{1} \]
\[ t_{ap} = 1, \quad \forall a \in OUT_p(i) \cap WA, \tag{2} \]

where

\[ F_{ip} = \sum_{a \in OUT_p(i)} f_a . \]

The hyperpath cost formula differs from the previous one [43] because the waiting time is represented as the weight of the waiting arc in the graph model. The specific formula is as follows. For a hyperpath $H_p = (A_p, N_p, t_{ap})$, denote:

– $V_p$: set of paths in $H_p$;
– $\delta_{al}$: equal to 1 if arc $a$ belongs to path $l$, and 0 otherwise;
– $L$: set of line arcs on hyperpath $H_p$.

The cost $c_a$ of using arc $a$ on hyperpath $p$ is calculated in terms of the travel time $t_a$:

\[ c_a = \begin{cases} \alpha t_a & \text{if } a \in L, \\ \beta t_a & \text{if } a \in WLA, \\ \gamma t_a & \text{if } a \in WA, \\ 0 & \text{otherwise}, \end{cases} \]

where $\alpha$, $\beta$, and $\gamma$ are coefficients depending on the real bus network. Let $\lambda_{lp}$ be the probability of choosing a particular path $l$ of $H_p$. It can be proved that

\[ \lambda_{lp} = \prod_{a \in A_p} (t_{ap})^{\delta_{al}}, \tag{3} \]
\[ \sum_{l \in V_p} \lambda_{lp} = 1. \tag{4} \]

Similarly, let $\alpha_{ap}$ be the probability that passengers traverse arc $a$ of $H_p$; it is the sum of the probabilities of the paths that include arc $a$:

\[ \alpha_{ap} = \sum_{l \in V_p} \delta_{al}\, \lambda_{lp}. \tag{5} \]

The cost of a hyperpath contains two factors: the moving cost on line arcs and the waiting cost at stop nodes. The cost $g_p$ of hyperpath $H_p$ is written as

\[ g_p = \sum_{a \in A_p} \alpha_{ap}\, c_a. \tag{6} \]
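To make Equations (1)–(6) concrete, here is a toy computation (illustrative numbers and arc names only, not data from the paper) for a stop node served by two boarding arcs; since each path here uses exactly one boarding arc and all other arcs on it have probability 1, the path probability of Equation (3) reduces to that arc's split probability:

```python
# Toy hyperpath: two boarding arcs a1, a2 leave stop node s, each followed
# by one line arc; all numbers are made up for illustration.
frequencies = {"a1": 6.0, "a2": 12.0}        # f_a on the boarding arcs
F_s = sum(frequencies.values())              # F_{ip} at stop node s

t_ap = {a: f / F_s for a, f in frequencies.items()}        # Eq. (1)

paths = {"l1": ["a1", "line1"], "l2": ["a2", "line2"]}
lam = {l: t_ap[arcs[0]] for l, arcs in paths.items()}       # Eq. (3), reduced
assert abs(sum(lam.values()) - 1.0) < 1e-9                  # Eq. (4)

# Arc traversal probabilities (Eq. (5)) and hyperpath cost (Eq. (6)).
cost = {"a1": 5.0, "a2": 2.5, "line1": 20.0, "line2": 30.0} # weighted times c_a
alpha = {a: sum(lam[l] for l, arcs in paths.items() if a in arcs)
         for a in cost}
g_p = sum(alpha[a] * cost[a] for a in cost)
print(round(g_p, 2))   # 30.0 for these illustrative numbers
```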

Over time, bus users discover the strategies that minimize their cost; the minimum-cost strategy corresponds to the shortest hyperpath. In [33], Nguyen and Pallottino proposed a passenger assignment model in which passengers are assigned to the shortest hyperpath based on a Markov chain.


This model considers two objectives: the cost of operating the bus system and the total travel time of passengers. The problem is to find the set of bus routes and frequencies that optimize the passengers' cost and the operator cost. The model can be outlined as follows.

Input:
– Topology defined by the roads, bus stops, origins, and destinations;
– Origin–destination (OD) demand matrix;
– Number of bus vehicles available;
– Maximum number of lines.

Output:
– Configuration of bus routes;
– Frequencies of bus routes.

The following notation is used to formulate the problem:
– $|L|$: number of lines;
– $\mathbf{r} = (r_1, r_2, \ldots, r_{|L|})$: set of bus lines;
– $\mathbf{f} = (f_1, f_2, \ldots, f_{|L|})$: frequency vector of the bus lines;
– $H_{rs}$: set of hyperpaths connecting $r$ to $s$;
– $g_p$: cost of hyperpath $p$;
– $\mathbf{y} = (y_p)_{p \in H_{rs}}$: equilibrium traffic flow vector on $H_{rs}$;
– $d_{rs}$: traffic demand from $r$ to $s$;
– $c_k(r_k)$ $(k \in L)$: total travel time of the $k$-th bus line from its origin to its destination;
– $C_k^{max}$: upper bound on the travel time of the $k$-th bus line;
– $NV$: available number of vehicles.

The TNDP is formulated as follows:

\[ \operatorname{Vmin} \left( \psi_1(\mathbf{r}, \mathbf{f}),\ \psi_2(\mathbf{y}, \mathbf{r}, \mathbf{f}) \right)^T \tag{7} \]
\[ \psi_1(\mathbf{r}, \mathbf{f}) = \sum_{k=1}^{|L|} f_k\, c_k(r_k)^2, \tag{8} \]
\[ \psi_2(\mathbf{y}, \mathbf{r}, \mathbf{f}) = \sum_{r,s \in OD} d_{rs}\, g_{rs}, \tag{9} \]

subject to:

\[ g_{rs} = \min_{p \in H_{rs}} g_p, \tag{10} \]
\[ c_k(r_k) \le C_k^{max} \quad \forall k \in L, \tag{11} \]
\[ \sum_{k=1}^{|L|} f_k\, c_k(r_k) \le NV. \tag{12} \]


2.2 Existing Methods and Challenges

The TNDP above is a bi-level nonlinear optimization problem in which the upper level is a bi-objective problem. In the lower level, Equation (10) states that all passengers with the same OD pair use the shortest hyperpath connecting O to D. In the upper level, the objective function $\psi_1$ is the total operator cost, and $\psi_2$ is the total cost of all passengers. In [43], Shimamoto et al. proposed a genetic-based algorithm, NSGA-II, and tested it on a small-scale instance. Nguyen et al. [35] then applied this approach to Danang city in Vietnam; the experimental results showed that the obtained bus network is much better than the one proposed by the local authorities. However, the computing time is high due to the complexity of the lower-level problem. Moreover, the solution obtained by the genetic algorithm is probably not a Pareto solution (a simple dominance check, sketched after the list below, can at least filter the candidates a heuristic returns). Some open issues are the following:

1. Finding an efficient strategy for choosing the parameters of NSGA-II (the solution quality depends heavily on these parameters).
2. Proposing an efficient algorithm for finding the shortest hyperpaths in the lower-level problem.
3. Reformulating or modeling the lower-level problem in a better form.
4. Investigating algorithms for solving the bi-objective optimization problem.
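The dominance check mentioned above can be a one-function sketch (pure Python, illustrative objective values): it keeps only the mutually non-dominated $(\psi_1, \psi_2)$ pairs among a heuristic's candidates, a cheap necessary filter even though it cannot certify true Pareto optimality:

```python
# Keep the mutually non-dominated 2-objective points (both minimized).
def non_dominated(points):
    """Drop any point strictly dominated by another candidate point."""
    keep = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1]
                        and (q[0] < p[0] or q[1] < p[1]) for q in points)
        if not dominated:
            keep.append(p)
    return keep

# (psi1, psi2) of three candidate networks; purely illustrative numbers.
candidates = [(10.0, 50.0), (12.0, 45.0), (11.0, 55.0)]
print(non_dominated(candidates))   # (11.0, 55.0) is dominated by (10.0, 50.0)
```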

3 Frequency Optimization in Public Networks

3.1 Mathematical Model

This part describes a typical frequency setting problem considered in [16]. Consider a transportation network modeled as a directed graph $G(N, A)$, where $N$ is the set of nodes and $A$ is the set of links of $G$. We assume that the set of origin–destination (OD) pairs and the demand of each OD pair are given. To state the model, the notation of Table 1 and the variables of Table 2 are used. Note that:

– Arcs between nodes in $N^s$ belong to $A^T$.
– Arcs from a node in $N^p$ to a node in $N^s$ belong to $A^B$.
– Arcs from a node in $N^s$ to a node in $N^p$ belong to $A^L$.
– $O_k$, $D_k$ are the origin and destination of OD pair $k$, respectively.
– The waiting time of a passenger at node $n$ is $1 / \sum_{a \in A_n^+} f_a$. Therefore, the total waiting time at node $n$ is $w_n = V_n / \sum_{a \in A_n^+} f_a$. Hence $V_n = w_n \sum_{a \in A_n^+} f_a$, and the flow on link $a$ is $v_a = V_n f_a / \sum_{a' \in A_n^+} f_{a'} = w_n f_a$ (the number of passengers who use link $a$ is proportional to its frequency).


Table 1 Notations

$c_a$: Travel time on link $a$
$\delta_k$: Demand of OD pair $k$
$N^p$: Set of stop nodes
$N^s$: Set of endpoints of street segments ($N = N^p \cup N^s$)
$A^T$: Set of travel arcs
$A^B$ ($A^L$): Set of boarding (alighting) arcs ($A = A^T \cup A^B \cup A^L$)
$A_n^+$ ($A_n^-$): Set of outgoing (incoming) arcs from (to) node $n$
$\Theta$: Set of frequencies
$B$: Total number of available buses
$L$: Set of lines
$f(a)$: Index in $\Theta$ of the frequency representing arc $a$
$l(a)$: Index in $L$ of the line corresponding to arc $a$

Table 2 Variables

$v_a$: Flow on link $a$
$v_a^k$: Flow on link $a$ corresponding to OD pair $k$
$f_a$: Frequency value of the line corresponding to boarding arc $a$
$y_{lf}$: Variable indicating whether frequency $\theta_f$ is set to line $l$
$V_n$: Flow on node $n$ (number of passengers waiting for a bus at node $n$)
$w_n$: Waiting time at node $n$

For a single OD pair, the user behaves so as to minimize the total of on-board travel time and waiting time:

\[
\begin{aligned}
\min_{v,w} \quad & \sum_{a \in A} c_a v_a + \sum_{n \in N^p} w_n \\
\text{s.t.} \quad & \sum_{a \in A_n^+} v_a - \sum_{a \in A_n^-} v_a = b_n \quad \forall n \in N, \\
& v_a \le f_a\, w_n \quad \forall n \in N^p,\ a \in A_n^+, \\
& v_a \ge 0 \quad \forall a \in A,
\end{aligned}
\]

where $b_n = \delta_k$ if node $n$ is the origin of OD pair $k$, $b_n = -\delta_k$ if node $n$ is the destination of OD pair $k$, and $b_n = 0$ otherwise.

In a network with a set of OD pairs and multiple bus lines, we are given a set $\Theta = \{\theta_1, \cdots, \theta_m\}$ whose elements $\theta_i$ are the possible values for the frequency of any line, and we introduce a binary variable $y_{lf}$ that takes value 1 if frequency $\theta_f$ is set to line $l$. The resulting optimization model, presented below, simultaneously expresses the planners' decisions regarding the setting of frequencies (variables $y$) and the corresponding decisions of passengers regarding flow assignment (variables $v$ and $w$) [16]:

\[
\begin{aligned}
\min_{y,v,w} \quad & \sum_{k \in K} \Big( \sum_{a \in A} c_a v_a^k + \sum_{n \in N^p} w_n^k \Big) \\
\text{s.t.} \quad & \sum_{l \in L} \sum_{f=1}^{m} \theta_f\, y_{lf} \sum_{a \in A_l} c_a = B, \\
& \sum_{f=1}^{m} y_{lf} = 1 \quad \forall l \in L, \\
& \sum_{a \in A_n^+} v_a^k - \sum_{a \in A_n^-} v_a^k = b_n^k \quad \forall n \in N,\ k \in K, \\
& v_a^k \le \theta_{f(a)}\, w_n^k \quad \forall n \in N^p,\ a \in A_n^+,\ k \in K, \\
& v_a^k \le \delta_k\, y_{l(a) f(a)} \quad \forall a \in A^B,\ k \in K, \\
& v_a^k \ge 0 \quad \forall a \in A,\ k \in K, \\
& y_{lf} \in \{0, 1\} \quad \forall l \in L,\ f \in \{1, \ldots, m\},
\end{aligned}
\]

where $A_l$ denotes the arcs of line $l$, and $b_n^k$ is defined as above for OD pair $k$.
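As a minimal sketch of the single-OD-pair linear program above (assuming the PuLP package and made-up data; the full MILP with the $y_{lf}$ variables would be built analogously), consider one stop node with two candidate boarding arcs:

```python
# Passenger assignment LP: flows v_a and waiting time w minimize in-vehicle
# time plus waiting time, with v_a <= f_a * w at the stop node.
import pulp

arcs = {"b1": 4.0, "b2": 7.0}    # c_a: travel time behind each boarding arc
freq = {"b1": 6.0, "b2": 12.0}   # f_a: line frequencies
demand = 100.0                    # delta_k for the single OD pair

prob = pulp.LpProblem("passenger_assignment", pulp.LpMinimize)
v = {a: pulp.LpVariable(f"v_{a}", lowBound=0) for a in arcs}
w = pulp.LpVariable("w_s", lowBound=0)         # waiting time at the stop node

prob += pulp.lpSum(arcs[a] * v[a] for a in arcs) + w   # objective
prob += pulp.lpSum(v.values()) == demand               # flow conservation
for a in arcs:
    prob += v[a] <= freq[a] * w                        # v_a <= f_a * w_n

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: v[a].value() for a in arcs}, w.value())
```

With these illustrative numbers the LP concentrates the flow on the faster line and accepts a longer wait, which is exactly the waiting-time/in-vehicle-time trade-off the model encodes.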

3.2 Existing Methods and Challenges

The presented model is a mixed integer linear optimization problem that can be solved by software such as CPLEX, GUROBI, or COUENNE. However, it is probably not suitable for real large-scale problems because of its computing time. Hector et al. [16] proposed a meta-heuristic method based on tabu search; this approach scales to large instances, but its solution quality has not been evaluated yet. Hence, new efficient methods for this problem need to be investigated. One perspective is to use cutting planes and good relaxations to improve the lower bound on the objective value, while continuously improving the upper bound with efficient local algorithms such as GA (genetic algorithms) or DCA (difference of convex functions algorithm).

There are two variants of the above model for cities where a new public transportation network must be designed [17]. The first is a mixed integer linear problem, and the second is a bi-level optimization problem whose lower level is linear. The authors used CPLEX to solve the first formulation; solution methods for the bi-level one are still an open question. One approach is to reformulate it as a linear problem with complementarity constraints and zero-one integer variables, to which branch-and-bound- or branch-and-cut-based methods can then be applied.

4 BRT Scheduling Problem

4.1 A Mathematical Model

According to the BRT vehicle operation form and the number of stops, scheduling is usually divided into three forms: normal scheduling, zone scheduling, and express


scheduling. In the normal scheduling form, vehicles run along the route and stop at every station from the initial stop to the end. In zone scheduling, vehicles run only on the high-traffic-volume zone, while in express scheduling, vehicles stop only at certain stations with large passenger demand. Clearly, the design and assignment of BRT vehicles to suitable scheduling forms is a very important task for transport planners. Sun et al. proposed a genetic algorithm (GA)-based method that gives the optimal headway and a suitable scheduling combination so that the total cost (including waiting time, traveling time, BRT operation cost, etc.) is minimized [45]; they assumed that a BRT vehicle is not allowed to overtake another BRT vehicle. More recently, Nguyen et al. [34] proposed a modified formulation allowing an express BRT vehicle to pass a normal one. Consider a BRT system with the following assumptions:

1. BRT vehicles are completely prioritized, i.e., they never stop because of traffic lights;
2. BRT vehicles run at constant speed, i.e., the running time between any two stations does not change;
3. The headway is fixed in the study period;
4. The passenger arrival rate is uniform and unchanged in the given period;
5. The dwell duration and the acceleration and deceleration durations are fixed.

We use the following notation:

– $i$: BRT vehicle $i$ $(i = 1, 2, \cdots, M)$, where $M$ is the total number of operating vehicles in the study period;
– $j$: stop $j$ on the BRT route $(j = 1, 2, \cdots, N)$, where $N$ is the total number of stops on the route;
– $l$: scheduling form; $l = 1$ denotes normal scheduling, $l = 2$ zone scheduling, and $l = 3$ express scheduling;
– $\delta^l_{i,j}$: binary parameter; for scheduling form $l$, it equals 1 if vehicle $i$ stops at $j$, and 0 otherwise;
– $\delta^l_{i,jk}$: binary parameter; for scheduling form $l$, it equals 1 if vehicle $i$ stops at both $j$ and $k$, and 0 otherwise;
– $T$: studied period;
– $T_0$: dwell duration at every stop;
– $c$: acceleration and deceleration duration;
– $h$: headway;
– $a_{i,j}$: arrival time of vehicle $i$ at stop $j$;
– $d_{i,j}$: departure time of vehicle $i$ at stop $j$;
– $t_j$: running time of vehicles between stops $j-1$ and $j$;
– $r_{j,k}$: arrival rate of passengers at stop $j$ who want to go to stop $k$ $(k > j)$;
– $R_j$: arrival rate at stop $j$: $R_j = \sum_{k=j+1}^{N} r_{j,k}$;
– $A_{i,j}$: number of passengers alighting at stop $j$ from vehicle $i$;
– $B_{i,j}$: number of passengers boarding vehicle $i$ at stop $j$;
– $L_{i,j}$: number of passengers on board when vehicle $i$ runs from $j$ to $j+1$;
– $I$: actual order in which vehicles leave a stop. At the initial station, the vehicles leave in the order $1, 2, \ldots, i, \ldots, M$. Because of overtaking, the order in which vehicles leave a stop $j \ne 1$ may differ from the original order; to avoid confusion, the actual position of vehicle $i$ is denoted by $I$, and the order of vehicles leaving stop $j$ can be determined from the values $d_{i,j}$;
– $h_{I,j}$: headway between vehicles $I$ and $I-1$ at stop $j$;
– $s_{I,jk}$: number of passengers wanting to go to stop $k$ who miss vehicle $I$ when it leaves stop $j$;
– $S_{I,j}$: total number of passengers missing vehicle $I$ at stop $j$;
– $W_{I,jk}$: number of passengers wanting to go from stop $j$ to stop $k$ by vehicle $I$.

Input:
– A BRT route with $N$ stations and a fleet of $M$ BRT vehicles;
– Three scheduling forms: normal, zone, and express;
– Running times between consecutive stations $t_j$;
– Dwell time $T_0$ and acceleration/deceleration time $c$;
– Matrix of passenger arrival rates $r_{jk}$;
– Studied period $T$.

Output: the headway $h$ and the assignment of each vehicle to a scheduling form so that costs are optimized.

Objective Function: the objectives are the following costs.

– The total passenger waiting time at stops:
\[ f_1 = \sum_{i=1}^{M} \sum_{j=1}^{N} \Big[ \frac{R_j\, h_{I,j}^2}{2} + S_{I-1,j}\, h_{I,j} \Big], \]
where the first term is the average waiting time for vehicle $I$ at stop $j$ over the duration $h_{I,j}$, and the second term is the waiting time for vehicle $I$ of the passengers who missed vehicle $I-1$.

– The total time of on-board passengers:
\[ f_2 = \sum_{i=1}^{M} \sum_{j=1}^{N} [L_{i,j-1} - A_{i,j}]\, \delta^l_{i,j}\, T_0 + C_2 \sum_{i=1}^{M} \sum_{j=1}^{N-1} L_{i,j} \big[ t_{j+1} + (\delta^l_{i,j} + \delta^l_{i,j+1})\, c \big], \]
where the first term is the total waiting time of on-board passengers while vehicles dwell, and the second is the travel time of on-board passengers.

– The total cost of operating the BRT vehicles:
\[ f_3 = \sum_{i=1}^{M} \sum_{j=1}^{N} \delta^l_{i,j}\, T_0 + C_3 \sum_{i=1}^{M} \sum_{j=1}^{N-1} \big[ t_{j+1} + (\delta^l_{i,j} + \delta^l_{i,j+1})\, c \big]. \]


Constraints:

(1) Time Constraints. The arrival and departure times at stop 1 are equal for every vehicle:
\[ a_{i,1} = d_{i,1} = (i-1)\, h, \quad i = 1, \ldots, M. \tag{13} \]

The arrival time of vehicle $i$ at stop $j$ equals the departure time of that vehicle at stop $j-1$ plus the running time and the acceleration/deceleration time:
\[ a_{i,j} = d_{i,j-1} + t_j + (\delta^l_{i,j-1} + \delta^l_{i,j})\, c. \tag{14} \]

At stop $j$, the departure time of vehicle $i$ is its arrival time plus the dwell time:
\[ d_{i,j} = a_{i,j} + \delta^l_{i,j}\, T_0. \tag{15} \]

The headway at stop $j$ between vehicles $I$ and $I-1$ is
\[ h_{I,j} = d_{I,j} - d_{I-1,j}. \tag{16} \]

(2) Passenger Number Constraints. The number of passengers waiting for vehicle $I$ at stop $j$ is composed of the passengers who missed vehicle $I-1$ and the newly arriving passengers:
\[ W_{I,jk} = s_{I-1,jk} + r_{j,k}\, h_{I,j}. \tag{17} \]

The number of passengers wanting to go to stop $k$ who miss vehicle $I$ at stop $j$ depends on the number of waiting passengers and on whether vehicle $I$ is planned to stop at both $j$ and $k$:
\[ s_{I,jk} = W_{I,jk} \big( 1 - \delta^l_{I,jk} \big). \tag{18} \]

The number of passengers missing vehicle $I$ at stop $j$ is the sum over all destinations $k$ after $j$:
\[ S_{I,j} = \sum_{k=j+1}^{N} s_{I,jk}. \tag{19} \]

The number of passengers alighting from vehicle $i$ at stop $j$ equals the sum of the passengers who boarded at all stops before $j$:
\[ A_{i,j} = \delta^l_{i,j} \sum_{k=1}^{j-1} W_{i,kj}\, \delta^l_{i,kj}. \tag{20} \]

The number of passengers boarding vehicle $i$ at stop $j$ equals the sum of the passengers alighting at all stops after $j$:
\[ B_{i,j} = \delta^l_{i,j} \sum_{k=j+1}^{N} W_{i,jk}\, \delta^l_{i,jk}. \tag{21} \]

The number of on-board passengers of vehicle $i$ from stop $j$ to stop $j+1$ equals the number from stop $j-1$ to $j$, plus the boarding passengers at stop $j$, minus the alighting passengers at stop $j$:
\[ L_{i,j} = L_{i,j-1} + B_{i,j} - A_{i,j}. \tag{22} \]
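A small simulation (illustrative data, not the authors' code) of the timetable recursions (13)–(16) for the normal scheduling form, where every $\delta^l_{i,j} = 1$, reads:

```python
# Timetable recursion for normal scheduling: every vehicle stops everywhere.
M, N = 3, 5                   # vehicles, stops (made-up sizes)
h, T0, c = 10.0, 0.5, 0.2     # headway, dwell time, accel/decel time
t = [0.0, 4.0, 3.0, 5.0, 4.0] # t[j]: running time from stop j-1 to j (t[0] unused)
delta = [[1] * N for _ in range(M)]   # stop decisions delta^l_{i,j}

a = [[0.0] * N for _ in range(M)]     # arrival times  a_{i,j}
d = [[0.0] * N for _ in range(M)]     # departure times d_{i,j}
for i in range(M):
    a[i][0] = d[i][0] = i * h                                            # Eq. (13)
    for j in range(1, N):
        a[i][j] = d[i][j - 1] + t[j] + (delta[i][j - 1] + delta[i][j]) * c  # Eq. (14)
        d[i][j] = a[i][j] + delta[i][j] * T0                             # Eq. (15)

# Headways at the last stop, Eq. (16); constant here because no overtaking occurs.
print([round(d[i][N - 1] - d[i - 1][N - 1], 2) for i in range(1, M)])
```

Running it prints the announced headway `[10.0, 10.0]`; mixed scheduling forms (different `delta` rows per vehicle) are exactly what lets the departure order, and hence $h_{I,j}$, change along the route.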

4.2 Some Open Problems

The scheduling of a BRT system is an optimization problem with constraints (13)–(16) and (17)–(22). In [34, 45], the objective has the form $\min\{c_1 f_1 + c_2 f_2 + c_3 f_3\}$, where $c_1$, $c_2$, and $c_3$ are the weights of the corresponding functions. The problem is then a single-objective optimization problem, but it remains difficult due to its nonlinearity and discreteness; hence, the proposed solution methods were heuristic. Some open issues are as follows:

1. The weighted combination of the three functions above is a scalarization technique for dealing with multiple objectives. However, it is not guaranteed to yield a Pareto solution, because heuristics only find local solutions; globally solving the problem is therefore necessary.
2. In the literature, the proposed solution methods are based on heuristics. This kind of algorithm is also used for solving multi-objective optimization problems, for instance NSGA-II [9]; the multi-objective BRT optimization problem should thus be investigated.
3. BRT systems need a private lane, independent of the other lanes. In developing countries, however, BRT lanes and ordinary lanes usually intersect, so existing BRT models are no longer fully suitable (the traffic signals at intersections influence the schedule of BRT vehicles). New models taking this situation into account are needed.


5 Optimizing Traffic Signals in Networks Considering Rerouting

5.1 Mathematical Model

The problem of determining optimal signal timings is usually formulated as a bi-level optimization problem. In the upper level, an often nonsmooth and nonlinear objective function optimizes measures such as total delay, pollution, or operating cost. This upper-level problem is constrained by the lower-level equilibrium problem, in which transport users alter their travel choices in order to minimize their travel costs. Such an optimization problem may have multiple optima, and an efficient method for obtaining even local optima is difficult to find [30]. Recently, a new deterministic model for this problem was proposed, in which the problem is first formulated as an optimization problem with complementarity constraints. The objective function is the total travel time of all vehicles in the network. We use the notation of Table 3 and the variables of Table 4 to present the model. The total travel time is calculated by

\[ TT = \sum_{p} t_p\, f_p = \sum_{w} d_w\, t_w. \]

The signal setting variables satisfy:

\[ C_{min} \le C \le C_{max}, \tag{23} \]
\[ 0 \le \theta_h \le C - 1, \quad \forall h, \tag{24} \]
\[ \phi_{h,r,min} \le \phi_{h,r} \le \phi_{h,r,max}. \tag{25} \]

Table 3 Parameters

$p$: Path $p = i_1^p \to i_2^p \to \cdots \to i_{n(p)}^p$
$w = (i, j)$: Pair of origin $i$ and destination $j$ (OD pair)
$P_w$: Set of paths from $i$ to $j$
$P = \cup P_w$: Set of all paths
$d_w$: Demand of origin–destination pair $w$
$a = (u, v)$: Link $a$
$\delta_{a,p}$: Parameter equal to 1 if link $a$ belongs to path $p$, 0 otherwise
$h$: Junction $h$
$S_h$: Total number of stages at junction $h$
$I_{h,r}$: Inter-green between the end of the green time for stage $r$ and the start of the next green
$\Lambda_{h,r,p}$: Parameter equal to 1 if the vehicles on path $p$ can cross junction $h$ at stage $r$
$C_{min}$: Minimum cycle time
$C_{max}$: Maximum cycle time
$\phi_{h,r,min}$: Minimum green time of stage $r$ at junction $h$
$\phi_{h,r,max}$: Maximum green time of stage $r$ at junction $h$


Table 4 Variables

$q_a$: Flow on link $a$
$t_a$: Travel time on link $a$
$t_p$: Travel time on path $p$
$f_p$: Flow on path $p$
$t_w$: Travel time for OD pair $w$
$WT_{h,p}$: Waiting time at junction $h$ associated with path $p$
$WT^0_{h,p}$: Initial waiting time at junction $h$ associated with path $p$
$z_{h,p}$: Integer variable used to calculate $WT^0_{h,p}$
$ST_{h,r}$: Starting time of stage $r$ at junction $h$
$\theta_h$: Offset of junction $h$
$C$: Common cycle time
$\phi_{h,r}$: Green time of stage $r$ at junction $h$

\[ C = \sum_{r=1}^{S_h} \phi_{h,r} + \sum_{r=1}^{S_h} I_{h,r}, \quad \forall h. \tag{26} \]
\[ q_{(u,v)} = \sum_{p} \delta_{(u,v),p}\, f_p. \tag{27} \]
\[ t_p = \sum_{k=1}^{n(p)-1} t_{(i_k^p, i_{k+1}^p)} + \sum_{k=2}^{n(p)-1} WT_{i_k^p, p}, \quad \forall p. \tag{28} \]
\[ t_{(u,v)} = t^0_{(u,v)} + \alpha_{u,v}\, q_{u,v}, \quad \forall (u,v), \tag{29} \]
where $\alpha_{u,v}$ is a constant.
\[ \sum_{p \in P_w} f_p = d_w, \quad \forall w. \tag{30} \]
\[ t_p \ge t_w, \quad \forall p \in P_w. \tag{31} \]
\[ f_p\, (t_p - t_w) = 0, \quad \forall p \in P_w. \tag{32} \]
\[ ST_{1,1} = 0. \tag{33} \]
\[ ST_{h,1} = ST_{h-1,1} + \theta_h, \quad \forall h \ge 2. \tag{34} \]
\[ ST_{h,r} = ST_{h,r-1} + \phi_{h,r-1} + I_{h,r-1}, \quad \forall h,\ \forall r \ge 1. \tag{35} \]
\[ \sum_{r} \Lambda_{i_k^p, r, p}\, ST_{i_k^p, r} - \sum_{r} \Lambda_{i_{k-1}^p, r, p}\, ST_{i_{k-1}^p, r} - t_{(i_{k-1}^p, i_k^p)} - z_{i_k^p, p}\, C = WT^0_{i_k^p, p}, \quad \forall p, k. \tag{36} \]
\[ 0 \le WT^0_{i_k^p, p} \le C, \quad \forall p,\ k = 3, \ldots, n(p)-1. \tag{37} \]
\[ WT^0_{i_2^p, p} = \frac{1}{2} \Big[ C - \sum_{r} \Lambda_{i_2^p, r, p}\, \phi_{i_2^p, r} \Big], \quad \forall p. \tag{38} \]
\[ WT_{i_k^p, p} = WT^0_{i_k^p, p} + \beta_{i_k^p, p} \sum_{p_1} \sum_{r} \Lambda_{i_k^p, r, p_1}\, f_{p_1}, \quad \forall p, \tag{39} \]
where $\beta_{i_k^p, p}$ is a constant.
\[ f_p,\ t_p,\ t_w \ge 0, \quad \forall p, w. \tag{40} \]
\[ z_{i_k^p, p} \in \mathbb{Z}, \quad \forall p, k. \tag{41} \]

The aim is to minimize the total travel time $TT$ in the network, which gives the following optimization problem:

\[ (P_1) \quad \min \Big\{ TT = \sum_{w} d_w\, t_w \Big\} \quad \text{s.t. } (23)\text{--}(41). \]

This is a mixed integer nonlinear program. By using penalty techniques, Problem $(P_1)$ is reformulated as Problem $(P_2)$:

\[ (P_2) \quad \min \Big\{ TT(\xi) = \sum_{w} d_w\, t_w + \lambda \sum_{p} \min\{f_p,\, t_p - t_w\} + \lambda \sum_{h,p} \sin^2(z_{h,p}\, \pi) \Big\} \quad \text{s.t. } (23)\text{--}(31),\ (33)\text{--}(40), \]

where $\lambda$ is a sufficiently large number, and the functions $\mu(\xi)$ and $\nu(\xi)$ are defined by

\[ \mu(\xi) = \sum_{p} \min\{f_p,\, t_p - t_w\}, \qquad \nu(\xi) = \sum_{h,p} \sin^2(z_{h,p}\, \pi). \]

Constraints (32) and (41) can be replaced by $\mu(\xi) \le 0$ and $\nu(\xi) \le 0$, respectively. Problem $(P_1)$ is then equivalent to:

\[ (P_3) \quad \begin{aligned} \min\ & \Big\{ TT(\xi) = \sum_{w} d_w\, t_w \Big\} \\ \text{s.t.}\ & (23)\text{--}(26), \\ & \xi \in \arg\min \Big\{ \sum_{h,p} \sin^2(z_{h,p}\, \pi) \ \text{ s.t. } (27)\text{--}(31),\ (33)\text{--}(40) \Big\}, \\ & \mu(\xi) \le 0. \end{aligned} \]

By using exact penalty techniques, the lower-level problem in $(P_3)$ is handled and we obtain Problem $(P_4)$:

\[ (P_4) \quad \begin{aligned} \min\ & \Big\{ TT(\xi) = \sum_{w} d_w\, t_w \Big\} \\ \text{s.t.}\ & (23)\text{--}(26), \\ & (f_p, t_p, t_w, q_{u,v}, t_{(u,v)}, z_{h,p}) \in \arg\min \Big\{ \sum_{h,p} \sin^2(z_{h,p}\, \pi) + \eta\, \mu(\xi) \ \text{ s.t. } (27)\text{--}(31),\ (33)\text{--}(40) \Big\}, \end{aligned} \]

where $\eta > 0$ is a sufficiently large number.
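A small numeric illustration (not from the chapter) of the two penalty terms used in $(P_2)$: $\mu$ vanishes exactly when the complementarity condition (32) holds, and $\nu$ vanishes exactly at integer values of $z$:

```python
import math

def mu(flows, path_times, od_time):
    # min{f_p, t_p - t_w}: zero at equilibrium since f_p >= 0 and t_p >= t_w.
    return sum(min(f, t - od_time) for f, t in zip(flows, path_times))

def nu(z_values):
    # sin^2(z*pi): zero exactly at integer z, positive otherwise.
    return sum(math.sin(z * math.pi) ** 2 for z in z_values)

# Equilibrium-like point: the used path attains t_w, the unused one has f = 0.
print(mu([30.0, 0.0], [12.0, 15.0], 12.0))   # 0.0 -> no penalty
# Non-equilibrium point: positive flow on a slower-than-minimum path.
print(mu([30.0, 5.0], [12.0, 15.0], 12.0))   # 3.0 -> penalized
print(nu([2.0, 3.0]), nu([2.5]))             # ~0.0 vs. 1.0 -> integrality penalty
```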

5.2 Existing Methods and Challenges

Recall that the problem is normally formulated as a bi-level optimization problem in which the upper-level objective is to minimize the total travel time and the lower-level problem is a traffic equilibrium problem. Many solution methods have been studied to devise an efficient technique for this problem: heuristic methods [11, 46], linearization methods [4, 21], sensitivity-based methods [12, 54], Karush–Kuhn–Tucker-based methods [50], the marginal function method [31], the cutting plane method [20], and stochastic search methods [5, 7, 13]. One notable line of research is that of Ceylan and Bell [5, 11]: they optimize signal timings while taking rerouting into account. Their solution method was heuristic, namely a genetic algorithm (GA) for the upper-level problem and the SATURN package for the lower-level one; SATURN is a simulation-assignment modeling software package [49] that produces an equilibrium solution using heuristic subroutines. In [48], a genetic algorithm for directly solving model $(P_2)$ is proposed, and a combination of GA and DCA is presented to solve model $(P_4)$. However, the obtained solution is in general only a local solution and may therefore violate the constraints of the original problem $(P_1)$. Consequently, how to solve problem $(P_3)$ globally and effectively is still an open question. One perspective is to investigate exact methods (for example, branch-and-bound or branch-and-cut strategies); the difficulty resides in finding a good relaxation of the objective function to estimate the lower bound while consecutively improving the upper bound. Another alternative is to propose new mathematical models that can be solved effectively by existing methods.

Acknowledgements This work is supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 101.01-2013.10.


References

1. Almond, J., Lott, R.S.: The Glasgow experiment: implementation and assessment. Road Research Laboratory Report 142, Road Research Laboratory, Crowthorne (1968)
2. Avishai, C.: Urban transit scheduling: framework, review and examples. J. Urban Plann. Dev. 128(4), 225–243 (2002)
3. Bai, Z.J., He, G.G., Zhao, S.Z.: Design and implementation of Tabu search algorithm for optimizing BRT vehicles dispatch. Comput. Eng. Appl. 43(23), 229–232 (2007)
4. Ben Ayed, O., Boyce, D.E., Blair, C.E. III: A general bi-level linear programming formulation of the network design problem. Transp. Res. B 22(4), 311–318 (1988)
5. Ceylan, H., Bell, M.G.H.: Traffic signal timing optimisation based on genetic algorithm approach, including drivers' routing. Transp. Res. B 38(4), 329–342 (2004)
6. Costantin, I., Florian, M.: Optimizing frequency in transit network: a nonlinear bi-level programming approach. Int. Trans. Oper. Res. 2(2), 149–164 (1995)
7. Cree, N.D., Maher, M.J., Paechter, B.: The continuous equilibrium optimal network design problem: a genetic approach. In: Bell, M.G.H. (ed.) Transportation Networks: Recent Methodological Advances, pp. 163–174. Pergamon, Oxford (1998)
8. Dai, L.G., Liu, Z.D.: Research on the multi-objective assembled optimal model of departing interval on bus dispatch. J. Transp. Syst. Eng. Inf. Technol. 7(4), 43–46 (2007)
9. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182–197 (2002)
10. Fan, Q.S., Pan, W.: Application research of genetic algorithm in intelligent transport systems scheduling of vehicle. Comput. Digit. Eng. 35(5), 34–35 (2007)
11. Fitsum, T., Agachai, S.: A genetic algorithm approach for optimizing traffic control signals considering routing. Comput. Aided Civ. Inf. Eng. 22, 31–43 (2007)
12. Friesz, T.L., Tobin, R.L., Cho, H.J., Mehta, N.J.: Sensitivity analysis based heuristic algorithms for mathematical programs with variational inequality constraints. Math. Program. 48, 265–284 (1990)
13. Friesz, T.L., Cho, H.J., Mehta, N.J., Tobin, R., Anandalingam, G.: A simulated annealing approach to the network design problem with variational inequality constraints. Transp. Sci. 26, 18–26 (1992)
14. Gao, Z., Sun, H., Shan, L.L.: A continuous equilibrium network design model and algorithm for transit systems. Transp. Res. B 38(3), 235–250 (2004)
15. Han, A.F., Wilson, N.M.: The allocation of buses in heavily utilized networks with overlapping routes. Transp. Res. B 13(3), 221–232 (1982)
16. Hector, M., Antonio, M., Maria, E.U.: Frequency optimization in public transportation systems: formulation and metaheuristic approach. Eur. J. Oper. Res. 236, 27–36 (2014)
17. Hector, M., Antonio, M., Maria, E.U.: Mathematical programming formulations for transit network design. Transp. Res. B: Methodol. 77, 17–37 (2015)
18. Hunt, P.B., Robertson, D.I., Bretherton, R.D., Winton, R.I.: SCOOT: a traffic responsive method of coordinating signals. TRRL Laboratory Report 1014, TRRL, Berkshire, England (1981)
19. Kurauchi, F., Bell, M.G.H., Schmoecker, J.-D.: Capacity constrained transit assignment with common lines. J. Math. Model. Algorithms 2, 309–327 (2003)
20. Lawphongpanich, S., Hearn, D.W.: An MPEC approach to second-best toll pricing. Math. Program. B 101(1), 33–55 (2004)
21. LeBlanc, L., Boyce, D.: A bi-level programming for exact solution of the network design problem with user-optimal flows. Transp. Res. B Methodol. 20, 259–265 (1986)
22. Le Thi, H.A.: Contribution à l'optimisation non convexe et à l'optimisation globale: théorie, algorithmes et applications. Habilitation à Diriger des Recherches, Université de Rouen (1997)
23. Le Thi, H.A., Pham Dinh, T.: A branch and bound method via d.c. optimization algorithms and ellipsoidal technique for box constrained nonconvex quadratic problems. J. Glob. Optim. 13, 171–206 (1998)
24. Le Thi, H.A., Pham Dinh, T.: A continuous approach for globally solving linearly constrained quadratic zero-one programming problem. Optimization 50(1–2), 93–120 (2001)
25. Le Thi, H.A., Pham Dinh, T.: The DC (difference of convex functions) programming and DCA revisited with DC models of real world non-convex optimization problems. Ann. Oper. Res. 133, 23–46 (2005)
26. Le Thi, H.A., Pham Dinh, T., Huynh, V.N.: Exact penalty and error bounds in DC programming. J. Glob. Optim. 52(3), 509–535 (2012)
27. Levinson, H., Zimmerman, S., Clinger, J., Rutherford, S., Smith, R.L., Cracknell, J., Soberman, R.: Bus rapid transit, volume 1: case studies in bus rapid transit. TCRP Report 90, Transportation Research Board, Washington (2003)
28. Liang, S., He, Z., Sha, Z.: Bus rapid transit scheduling optimal model based on genetic algorithm. In: 11th International Conference of Chinese Transportation Professionals (ICCTP), pp. 1296–1305 (2011)
29. Luenberger, D.G.: Linear and Nonlinear Programming, 2nd edn. Springer, Berlin (2004)
30. Luo, Z., Pang, J.S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, New York (1996)
31. Meng, Q., Yang, H., Bell, M.G.H.: An equivalent continuously differentiable model and a locally convergent algorithm for the continuous network design problem. Transp. Res. B 35(1), 83–105 (2001)
32. Miller, M.A., Yin, Y., Balvanyos, T., Avishai, C.: Framework for bus rapid transit development and deployment planning. Research report, California PATH, University of California Berkeley (2004)
33. Nguyen, S., Pallottino, S.: Equilibrium assignment for large scale transit network. Eur. J. Oper. Res. 37, 176–186 (1988)
34. Nguyen, Q.T., Phan, N.B.T.: Scheduling Problem for Bus Rapid Transit Routes. Advances in Intelligent Systems and Computing, vol. 358, pp. 69–79. Springer, Heidelberg (2015)
35. Nguyen, N.D., Nguyen, Q.T., Vu, T.H., Nguyen, T.H.: Optimizing the bus network configuration in Danang city. Adv. Ind. Appl. Math. ISBN 978-604-80-0608-2 (2014)
36. Pham Dinh, T., Le Thi, H.A.: Convex analysis approach to d.c. programming: theory, algorithms and applications. Acta Math. Vietnam. 22(1), 289–355 (1997), dedicated to Professor Hoang Tuy on the occasion of his 70th birthday
37. Pham Dinh, T., Le Thi, H.A.: DC optimization algorithms for solving the trust region subproblem. SIAM J. Optim. 8, 476–505 (1998)
38. Ren, C.X., Zhang, H., Fan, Y.Z.: Optimizing dispatching of public transit vehicles using genetic simulated annealing algorithm. J. Syst. Simul. 17(9), 2075–2077 (2005)
39. Rickert, T.: Technical and operational challenges to inclusive bus rapid transit: a guide for practitioners. World Bank, Washington (2010)
40. Robertson, D.I.: 'TRANSYT' method for area traffic control. Traffic Eng. Control 10, 276–281 (1969)
41. Schaefer, R.: Foundations of Global Genetic Optimization. Studies in Computational Intelligence, vol. 74. Springer, Berlin (2007)
42. Shepherd, S.P.: A Review of Traffic Signal Control. Monograph, University of Leeds, Institute for Transport Studies (1992)
43. Shimamoto, H., Schmöcker, J.-D., Kurauchi, F.: Optimisation of a bus network configuration and frequency considering the common lines problem. J. Transp. Technol. 2, 220–229 (2012)
44. Shrivastava, P., Dhingra, S.L.: Development of coordinated schedules using genetic algorithms. J. Transp. Eng. 128(1), 89–96 (2002)
45. Sun, C., Zhou, W., Wang, Y.: Scheduling combination and headway optimization of bus rapid transit. J. Transp. Syst. Eng. Inf. Technol. 8(5), 61–67 (2008)
46. Suwansirikul, C., Friesz, T.L., Tobin, R.L.: Equilibrium decomposed optimization: a heuristic for the continuous equilibrium network design problem. Transp. Sci. 21(4), 254–263 (1987)
47. Tong, G.: Application study of genetic algorithm on bus scheduling. Comput. Eng. 31(13), 29–31 (2005)
48. Tran, D.Q., Phan, N.B.T., Nguyen, Q.T.: A New Approach for Optimizing Traffic Signals in Networks Considering Rerouting. Advances in Intelligent Systems and Computing, vol. 359, pp. 143–154. Springer, Heidelberg (2015)
49. Van Vliet, D.: SATURN: a modern assignment model. Traffic Eng. Control 23, 578–581 (1982)
50. Verhoef, E.T.: Second-best congestion pricing in general networks: heuristic algorithms for finding second-best optimal toll levels and toll points. Transp. Res. B 36(8), 707–729 (2002)
51. Wardrop, J.G.: Some theoretical aspects of road traffic research. Proc. Inst. Civil Eng. 1(2), 325–378 (1952)
52. Webster, F.V.: Traffic Signal Settings. Road Research Technical Paper No. 39, HMSO, London (1958)
53. Wu, X., Deng, S., Du, X., Ma, J.: Green-Wave traffic theory optimization and analysis. World J. Eng. Technol. 2, 14–19 (2014)
54. Yang, H.: Sensitivity analysis for the elastic-demand network equilibrium problem with applications. Transp. Res. B 31(1), 55–70 (1997)
55. Yu, B., Yang, Z., Yao, J.: Genetic algorithm for bus frequency optimization. J. Transp. Eng. 136(6), 576–583 (2010)

Some Aspects of the Stackelberg Leader/Follower Model

L. Mallozzi, R. Messalli, S. Patrì, and A. Sacco

Abstract The paper presents different aspects of the Stackelberg Leader/Follower model. Generalizations of the model introduced by von Stackelberg (Marktform und Gleichgewicht, Julius Springer, Vienna, 1934) are discussed, and some related open questions are highlighted. Keywords Game theory · Stackelberg model · Bi-level optimization

1 Introduction

In a noncooperative game theoretical situation, a set of agents interact and choose a strategy according to a given set of rules. The decision can be taken simultaneously or sequentially. A solution concept for two-person games that involve a hierarchical structure in decision-making is the following: one of the players (called the leader) declares and announces his strategy before the other player (called the follower). The follower observes this and, in equilibrium, picks the optimal strategy as a response. Players may engage in a Stackelberg competition if one of them has some sort of advantage enabling it to move first; in other words, the leader must have commitment power. Such games are called Stackelberg games and have been

L. Mallozzi
Department of Mathematics and Applications, University Federico II, Naples, Italy
e-mail: [email protected]

R. Messalli
Department of Economics and Statistics, University Federico II, Naples, Italy
e-mail: [email protected]

S. Patrì · A. Sacco
Department of Methods and Models for Economics, Territory and Finance, Sapienza University, Rome, Italy
e-mail: [email protected]; [email protected]


introduced in the context of static duopoly problems by H. von Stackelberg, who published "Market Structure and Equilibrium" (Marktform und Gleichgewicht) in 1934 [60]. This game displays sequential moves: players choose at different stages of the game, each taking their own decision. Player 1 behaves as a leader and plays first, anticipating the reactions of the rival and taking them into account before choosing his strategy. Player 2 behaves as a follower, answering player 1 in an optimal way. In this sense, the Stackelberg game can be viewed as a dynamic game. In the rest of the chapter, we consider the static framework, where the leader and the follower play once and the structure of the game is given, with player 1 as the leader and player 2 as the follower.¹

Let $\Gamma = \langle N; X_1, X_2; f_1, f_2 \rangle$ be a two-person game, $N = \{1, 2\}$, where player 1 is the leader and both players are cost minimizers. For any $x_1 \in X_1$, let $B_2(x_1)$ be the follower's best reply to the leader's decision $x_1$. Suppose that for any $x_1 \in X_1$ the best reply is a singleton, denoted by $B_2(x_1) = \{\tilde{x}_2(x_1)\}$, so that $B_2$ is a function. In a two-person game $\Gamma$ with player 1 as leader, a strategy $\bar{x}_1 \in X_1$ is called a Stackelberg equilibrium strategy for the leader if

\[ f_1(\bar{x}_1, \tilde{x}_2(\bar{x}_1)) \le f_1(x_1, \tilde{x}_2(x_1)), \quad \forall x_1 \in X_1, \tag{S} \]

where $\tilde{x}_2(x_1)$, for any $x_1 \in X_1$, is the unique solution of the problem

\[ f_2(x_1, \tilde{x}_2(x_1)) \le f_2(x_1, x_2), \quad \forall x_2 \in X_2. \tag{P(x_1)} \]

The Stackelberg equilibrium strategy $\bar{x}_1 \in X_1$ is a solution of the upper-level problem $S$, and $\tilde{x}_2(\bar{x}_1)$ is the optimal choice for player 2, the follower. The pair $(\bar{x}_1, \tilde{x}_2(\bar{x}_1))$ is called a Stackelberg equilibrium. Sometimes $P(x_1)$ is called the lower-level problem and corresponds to the follower's optimization problem.² The game is solved by backward induction, i.e., by analyzing the game from back to front. The Leader/Follower model corresponds to bi-level optimization, a special kind of optimization in which one problem is embedded (nested) within another [3, 16, 41]. In this setting, the leader's optimization problem contains a nested optimization task that corresponds to the follower's optimization problem: the upper-level problem is commonly referred to as the leader's problem and the lower-level problem as the follower's problem. Migdalas [40] investigates the conditions under which the bi-level problem is equivalent to a bi-criteria problem, that is, under which conditions the Stackelberg equilibrium is also Pareto efficient.
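A minimal numeric illustration of this backward-induction computation, on a finite game with made-up cost matrices and a unique follower best reply:

```python
# Hypothetical cost matrices; rows index the leader's strategies,
# columns the follower's strategies, both players minimize.
C1 = [[3, 8], [5, 2]]   # f1(x1, x2) for player 1 (leader)
C2 = [[1, 4], [6, 0]]   # f2(x1, x2) for player 2 (follower)

def follower_reply(x1):
    # Lower-level problem P(x1): the follower's unique best reply.
    return min(range(len(C2[x1])), key=lambda x2: C2[x1][x2])

# Upper-level problem S: the leader minimizes along the follower's replies.
x1_bar = min(range(len(C1)), key=lambda x1: C1[x1][follower_reply(x1)])
print(x1_bar, follower_reply(x1_bar))   # Stackelberg equilibrium (0-indexed)
```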

¹ For a discussion about endogenous timing, see [18] and [1].
² Remark that a Stackelberg game can also be seen as a subgame-perfect Nash equilibrium of a two-stage game (i.e., the strategy profile that serves each player, given the strategy of the other player, and entails playing a Nash equilibrium in every subgame) with complete information (see, e.g., [1]).


Applications of the Stackelberg model can be found in the literature of many disciplines: for example, in economics [6, 15], telecommunications [49], and transportation [39]. Several references can be found in [12, 14, 41, 42, 57].

2 Generalizations and Open Problems

Different questions arise when approaching the Stackelberg Leader/Follower model. In the following, we present some questions related to this hierarchical solution concept. For each question, we briefly introduce the problem, provide the state of the art, and emphasize some directions for future research.

Problem 1 (Multiple-Follower Reaction) As happens in many cases, the lower-level problem $P(x_1)$ may have more than one solution for at least one $x_1 \in X_1$. Consider, for any $x_1 \in X_1$, the best reply $B_2(x_1)$ of the follower: it is a correspondence defined on $X_1$ with values in $X_2$, mapping each $x_1 \in X_1$ to the subset $B_2(x_1) \subseteq X_2$ of all solutions of problem $P(x_1)$. In this case, the best reply is a multivalued function, and the upper-level problem has to be formulated according to the leader's behavior. The leader has to optimize the updated cost function, but he does not know what the follower's choice in the set $B_2(x_1)$ will be. One possible approach is for the leader to suppose that the follower's choice is the best one for the leader himself, solving the following upper-level problem: find $\bar{x}_1 \in X_1$ such that

\[ \min_{x_2 \in B_2(\bar{x}_1)} f_1(\bar{x}_1, x_2) = \min_{x_1 \in X_1}\ \min_{x_2 \in B_2(x_1)} f_1(x_1, x_2), \tag{S^s} \]

where $B_2(x_1)$, for any $x_1 \in X_1$, is the set of all solutions $\tilde{x}_2$ of problem $P(x_1)$. Any solution $\bar{x}_1$ of problem $S^s$ is called a strong Stackelberg equilibrium strategy for the leader, and any pair $(\bar{x}_1, \bar{x}_2)$ with $\bar{x}_2 \in B_2(\bar{x}_1)$ is referred to as a strong Stackelberg solution of the two-person game with player 1 as the leader. The equilibrium strategy for the follower is then any strategy in the set of best replies to the one adopted by the leader: if $\bar{x}_1$ is the strong Stackelberg equilibrium strategy for the leader, any $\bar{x}_2 \in B_2(\bar{x}_1)$ is an optimal strategy for the follower. This solution concept corresponds to an optimistic leader's point of view [7, 21]. Note that if $\bar{x}_1 \in X_1$ is a strong Stackelberg strategy for the leader, any pair $(\bar{x}_1, x_2)$ with $x_2 \in B_2(\bar{x}_1)$ is a possible outcome of the game. Since $B_2(\bar{x}_1) = \arg\min_{x_2 \in X_2} f_2(\bar{x}_1, x_2)$, every choice $x_2 \in B_2(\bar{x}_1)$ is equivalent for the follower. For the leader the situation is different: if there exists $x_2^b \in B_2(\bar{x}_1)$ such that $\min_{x_2 \in B_2(\bar{x}_1)} f_1(\bar{x}_1, x_2) = f_1(\bar{x}_1, x_2^b)$, the choice $(\bar{x}_1, x_2^b)$ is the best for him.

A very common feature in applications is the so-called weak Stackelberg strategy, or security strategy. Here the leader guards against the worst that can happen when the follower chooses his decision in the set of best replies: he minimizes the worst case and solves the following upper-level problem: find $\bar{x}_1 \in X_1$ such that

\[ \max_{x_2 \in B_2(\bar{x}_1)} f_1(\bar{x}_1, x_2) = \min_{x_1 \in X_1}\ \max_{x_2 \in B_2(x_1)} f_1(x_1, x_2), \tag{S^w} \]

where $B_2(x_1)$, for any $x_1 \in X_1$, is the set of all solutions $\tilde{x}_2$ of problem $P(x_1)$. Any solution $\bar{x}_1$ of problem $S^w$ is called a weak Stackelberg equilibrium strategy for the leader, and any pair $(\bar{x}_1, \bar{x}_2)$ with $\bar{x}_2 \in B_2(\bar{x}_1)$ is referred to as a weak Stackelberg solution of the two-person game with player 1 as the leader [7, 21]. Differently from the strong solution concept, the weak one corresponds to a pessimistic leader's point of view and is also known as a generalized Stackelberg strategy or security strategy. The existence of weak Stackelberg strategies is a difficult task from the mathematical point of view, because such strategies may fail to exist even in smooth examples. An existence theorem guarantees weak Stackelberg strategies under assumptions on the structure of the best reply set [4]. Existence results for solutions, as well as for approximate solutions, can be found in [23–26] for the strong and weak Stackelberg problems under general assumptions. Existence of solutions and approximations in the context of mixed strategies, for the follower as well as for both players, are given in [31, 32]. A more general definition is the so-called intermediate Stackelberg strategy (see [33, 35]), in which the leader has some probabilistic information about the follower's choice in the optimal reaction set. It would be interesting to study suitable selections of the best reply correspondence and then to define new types of solution concepts. Another interesting approach would be to study and model the cooperativeness of the follower with respect to the leader. In [8], in the case of a linear static Stackelberg game, a degree of cooperation $\beta \in [0, 1]$ has been defined, representing the follower's level of cooperation, which can be partial when $0 < \beta < 1$. A model has been formulated in which the leader maximizes his expected return, i.e., a weighted sum of the optimistic reaction effect, weighted with $\beta$, and the pessimistic one, weighted with $1 - \beta$. The leader's optimal choice thus depends on $\beta$ and may lie between the optimistic and the pessimistic solutions. It would be interesting to extend this partial cooperation model to nonlinear Stackelberg games, and also to study the case in which the follower's cooperation level depends on the leader's choice.

Problem 2 (Inverse Stackelberg Game) In the inverse Stackelberg game, the leader does not announce a strategy $x_1$, as in the Stackelberg game, but a function $g_L(\cdot)$ that maps $x_2$ into $x_1$. Given the function $g_L$, the follower's optimal choice $x_2^*$ satisfies

\[ f_2(g_L(x_2^*), x_2^*) \le f_2(g_L(x_2), x_2), \quad \forall x_2 \in X_2. \]


The leader, before announcing the function gL, realizes how the follower will play, and he should exploit this knowledge in order to choose the best possible function gL, so that his own cost becomes as small as possible, that is:

$$g_L^*(\cdot) = \operatorname{argmin}_{g_L(\cdot)} f_1\big(g_L(x_2(g_L(\cdot))), x_2(g_L(\cdot))\big).$$

The problem is in general very difficult to solve. However, if the leader knows what he can achieve (in terms of minimal costs) and what has to be done by all players to reach this outcome, the leader may be able to persuade the other players to help him reach this goal (i.e., the value of the leader's cost function obtained if all players minimize it). If it is unknown what the leader can achieve in terms of minimal costs, finding the leader's optimal gL-strategy is generally very difficult. The problem has been studied for special classes of payoffs and applied to an electricity market problem [53]. Interesting subjects for future research are the existence of solutions to the general problem of the inverse Stackelberg type, and the definition of an inverse Stackelberg game in a more general context, such as with a higher number of players, or with leaders or followers being cooperative among themselves.

Problem 3 (First Mover Advantage) Given a two-person strategic form game, suppose that player 1 acts as a leader. A first question is to compare the outcomes of the players in a Stackelberg Leader/Follower model with those under the classical well-known concept of Nash equilibrium, where they play simultaneously. Let us consider the Cournot duopoly example. There are two firms with identical products, which operate in a market in which the market demand function is known. We denote by qi the production level of firm i, and suppose a linear demand structure p = a − (q1 + q2) and linear production costs cqi for both firms (i = 1, 2), with a, c positive constants and a > c. The situation can be described by a two-person game Γ = <N; X1, X2; f1, f2>, where N = {1, 2} are the two profit-maximizing firms, the strategy sets are X1 = X2 = [0, +∞[, and the profit functions are given by:

$$f_i(q_1, q_2) = q_i\big(a - (q_1 + q_2)\big) - cq_i, \quad i = 1, 2.$$

The two firms act sequentially: firm 2 reacts to the leader firm's decision. We assume that firm 1 acts as the leader and announces q1 ≥ 0; firm 2, the follower, observes q1 and reacts by choosing q2 ∈ B2(q1), where B2(q1) = {(a − q1 − c)/2} for q1 < a − c. Firm 1 knows that for any q1 firm 2 will choose q2 = B2(q1) (which is unique), and solves

$$\max_{q_1 \ge 0} f_1(q_1, B_2(q_1)).$$

The Stackelberg equilibrium strategy, also called the Stackelberg–Cournot equilibrium strategy, is q1* = (a − c)/2, and firm 2 will choose q2* = B2(q1*) = (a − c)/4. If the two firms act simultaneously, the Nash equilibrium solution (also called the Nash–Cournot equilibrium) is (q̂1, q̂2) = ((a − c)/3, (a − c)/3).


Fig. 1 Nash–Cournot vs. Stackelberg–Cournot equilibrium (the best reply curves B1(q2) and B2(q1) in the (q1, q2) plane, with the Nash equilibrium NE at q̂1 and the Stackelberg point S at q1*)

Now, we compare the profits of both firms in the Nash–Cournot case and in the Stackelberg–Cournot case (Figure 1). For the leader firm, we have

$$f_1(q_1^*, q_2^*) = (a - c)^2/8 > f_1(\hat{q}_1, \hat{q}_2) = (a - c)^2/9,$$

while for the follower firm we have

$$f_2(q_1^*, q_2^*) = (a - c)^2/16 < f_2(\hat{q}_1, \hat{q}_2) = (a - c)^2/9.$$

For the leader, this is a general situation when the follower's best reply is a singleton: the leader's profit (resp., cost) at a Stackelberg equilibrium is always higher (resp., lower) than his profit (resp., cost) at a Nash equilibrium [4]. A second and deeply studied question is the first mover advantage: is it better to be leader or follower in a Stackelberg model? In [17], one can find a proof that, when two identical players move sequentially in a game, the player who moves first earns lower (higher) profits than the player who moves second if the reaction functions of the players are upward (downward) sloping, respectively. In [22], there is a first overview of the theoretical literature on mechanisms that confer advantages and disadvantages on first mover firms. There are two crucial assumptions that determine the first mover advantage. The first one is that the moves in the game are sequential, that is, some player (the first mover) commits to an action, while another player reacts to this action. The second one is that the second mover knows perfectly, and at no cost, the action performed by the first mover. This second assumption has been tested several times in the literature. In particular, when the move of the first mover can be observed only with some noise, or there is a cost to observe it, then the value of commitment is eliminated, at least in pure strategy equilibrium (some examples are in [2, 45–47, 51, 56]). Nevertheless, the value of commitment is restored when, as in many real cases, there is some uncertainty about the behavior of the first mover and this behavior has an impact on the optimal choice of the second mover (see [19, 47]), or when the first mover possesses private information (see, e.g., [30, 58]).
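As a quick sanity check of these closed-form values, the following short sketch (our illustration, not part of the original chapter; it assumes Python with the sympy library) derives the follower's best reply, the Stackelberg–Cournot and Nash–Cournot points, and the four profits symbolically.

# Symbolic computation of the Nash-Cournot and Stackelberg-Cournot equilibria
# for the linear duopoly f_i(q1, q2) = q_i*(a - (q1 + q2)) - c*q_i above.
import sympy as sp

q1, q2, a, c = sp.symbols('q1 q2 a c', positive=True)
f1 = q1 * (a - (q1 + q2)) - c * q1  # leader's profit
f2 = q2 * (a - (q1 + q2)) - c * q2  # follower's profit

# Follower's best reply: first-order condition of f2 in q2.
B2 = sp.solve(sp.diff(f2, q2), q2)[0]          # (a - q1 - c)/2

# Stackelberg: leader maximizes f1(q1, B2(q1)).
q1_star = sp.solve(sp.diff(f1.subs(q2, B2), q1), q1)[0]  # (a - c)/2
q2_star = sp.simplify(B2.subs(q1, q1_star))              # (a - c)/4

# Nash: solve both first-order conditions simultaneously.
nash = sp.solve([sp.diff(f1, q1), sp.diff(f2, q2)], [q1, q2])
# nash = {q1: (a - c)/3, q2: (a - c)/3}

profits = {
    'leader_stackelberg':   sp.simplify(f1.subs({q1: q1_star, q2: q2_star})),  # (a-c)**2/8
    'leader_nash':          sp.simplify(f1.subs(nash)),                        # (a-c)**2/9
    'follower_stackelberg': sp.simplify(f2.subs({q1: q1_star, q2: q2_star})),  # (a-c)**2/16
    'follower_nash':        sp.simplify(f2.subs(nash)),                        # (a-c)**2/9
}
print(q1_star, q2_star, nash, profits)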


Given the large range of possible applications of Stackelberg models, there is room in future research to identify the specific cases in which the first mover advantage exists, is null, or is reversed into a second mover advantage. Moreover, all the literature cited considers models in which the payoff functions admit a unique best reply strategy for the second mover. It could be interesting to analyze whether these conclusions still hold in the case of multiple best replies.

Problem 4 (Multiple-Follower Case) A more general case, dealing with one leader and multiple followers, is the so-called Stackelberg–Nash problem. Let us consider an (n+1)-person game Γ = <N; X0, X1, ..., Xn; f0, f1, ..., fn>, where N = {0, 1, ..., n}: player 0 is the leader and players 1, ..., n are followers in a two-level Stackelberg game. It is supposed that the n followers are engaged in a noncooperative competition corresponding to a Nash equilibrium problem. Let X0, X1, ..., Xn be the leader's and the followers' strategy sets, respectively, and let f0, f1, ..., fn be real-valued functions defined on X0 × X1 × ... × Xn representing the leader's and the followers' cost functions. The players are cost minimizing. The leader is assumed to announce his strategy x0 ∈ X0 in advance and commit himself to it. For a given x0 ∈ X0, the followers select (x1, ..., xn) ∈ R(x0), where R(x0) is the set of the Nash equilibria of the n-person game with players 1, ..., n, strategy sets X1, ..., Xn, and cost functions f1, ..., fn. For each leader's decision x0 ∈ X0, the followers solve the following lower-level Nash equilibrium problem N(x0):

$$\text{find } (\bar{x}_1, \dots, \bar{x}_n) \in X_1 \times \dots \times X_n \text{ such that } \forall i = 1, \dots, n \quad f_i(x_0, \bar{x}_1, \dots, \bar{x}_n) = \min_{x_i \in X_i} f_i(x_0, \bar{x}_1, \dots, \bar{x}_{i-1}, x_i, \bar{x}_{i+1}, \dots, \bar{x}_n).$$

The nonempty set R(x0) of the solutions to the problem N(x0) is called the followers' reaction set. The leader takes into account the followers' Nash equilibrium, which we assume to be unique, and solves an optimization problem in a backward induction scheme. Let (x̃1(x0), ..., x̃n(x0)) ∈ X1 × ... × Xn be the unique solution of the problem N(x0); the map x0 ∈ X0 ↦ R(x0) = (x̃1(x0), ..., x̃n(x0)) is called the followers' best reply (or response). The leader has to compute a solution of the following upper-level problem S^SN:

$$\text{find } \bar{x}_0 \in X_0 \text{ such that } f_0(\bar{x}_0, \tilde{x}_1(\bar{x}_0), \dots, \tilde{x}_n(\bar{x}_0)) = \min_{x \in X_0} f_0(x, \tilde{x}_1(x), \dots, \tilde{x}_n(x)) \qquad (S^{SN})$$

Any solution x̄0 ∈ X0 to the problem S^SN is called a Stackelberg–Nash equilibrium strategy.
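To make the backward induction scheme concrete, the following small symbolic sketch (ours, not taken from the chapter; it assumes Python with sympy) solves a linear Stackelberg–Nash–Cournot instance in the spirit of [55], with one leader and n = 2 followers: the lower-level Nash problem N(q0) is solved first, and the leader then optimizes against the followers' best reply map.

# One leader (firm 0) and two followers playing a Nash game; linear demand
# p = a - (q0 + q1 + q2) and unit cost c for every firm.
import sympy as sp

q0, q1, q2, a, c = sp.symbols('q0 q1 q2 a c', positive=True)
Q = q0 + q1 + q2
f = [q * (a - Q) - c * q for q in (q0, q1, q2)]   # profits (to be maximized)

# Lower level N(q0): followers' simultaneous first-order conditions.
reaction = sp.solve([sp.diff(f[1], q1), sp.diff(f[2], q2)], [q1, q2])
# reaction = {q1: (a - c - q0)/3, q2: (a - c - q0)/3}

# Upper level S^SN: leader optimizes against the followers' best reply map.
leader_obj = f[0].subs(reaction)
q0_star = sp.solve(sp.diff(leader_obj, q0), q0)[0]
print(q0_star, reaction[q1].subs(q0, q0_star))    # (a - c)/2 and (a - c)/6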


The given definition for n = 1 is nothing but the classical Stackelberg equilibrium solution. This model, for n > 1, was introduced in the oligopolistic market context in [55] and studied from a computational point of view in [29]. Existence of solutions and approximate solutions under general assumptions is established in [44]. Existence of solutions in mixed strategies has been given in [34, 36] for two followers playing a zero-sum or a nonzero-sum game, respectively. Several applications can be found in the literature (for example, [11, 27, 43]). An example dealing with communication networks is studied in [5]: the problem is formulated as a leader–follower (Stackelberg) game, with a single leader (the service provider, who sets the price) and a large number of Nash followers (the users, who decide on their flow rates), and the asymptotic behavior as the number of followers tends to infinity is discussed. Models of oligopolistic markets with a leader firm have been studied in [48, 52]. In these works, the number of followers is exogenously given: by introducing a cost of entering the market, it could be interesting to endogenize the number of followers. Moreover, it might be useful to consider the implications of analyzing the social surplus, as in [59], in this kind of model.

Problem 5 (Multiple-Player Games with Hierarchy) It is possible to extend the Stackelberg Leader/Follower model also to the case of multiple players: it is necessary to fix the hierarchical level of each player and to specify his behavior both as a leader and as a follower. A first definition is the generalization of the Stackelberg–Nash problem to an (m+n)-person game in which m players act as leaders and the remaining n players behave as followers. A noncooperative behavior is assumed among the leaders and among the followers, so the model can be written by considering a Nash equilibrium problem at the lower level among the follower players and another Nash equilibrium problem at the upper level among the leader players. An existence result for equilibria of Stackelberg games where a collection of leaders compete in a Nash game constrained by the equilibrium conditions of another Nash game among the followers, imposing no single-valuedness assumption on the equilibrium of the follower-level game, has been given in [20] under the assumption that the objectives of the leaders admit a quasi-potential function. In that paper, an application to communication networks is also illustrated. In the context of International Environmental Agreements, players are organized in coalitions: it is supposed that a fixed group of players participate in an agreement (signatories) and choose strategies by maximizing the aggregate welfare of the coalition, while the rest (non-signatories) act in a noncooperative way and choose their strategies as a Nash equilibrium [10, 13, 37]. These situations are also known as partial cooperative equilibrium problems. Several signatory coalitions, acting noncooperatively among themselves, have also been considered [50]. Another point of view is the case of a multiple leader–multiple follower game where the action chosen by any leader is observed by only one exclusive follower [9]. A Stackelberg game where one of the players is a leader and the rest are involved in a cooperative TU-game has been studied in [54].


It is also possible to assume that in an n-player situation there is a hierarchy among the players: the game is played in n stages, and each player acts at his stage as leader of the successor player and follower of the predecessor player [28]. More precisely, when player i ∈ {1, ..., n} acts, we say that players {1, ..., i − 1} are leaders or predecessors of player i and players {i + 1, ..., n} are followers of player i. This game, where only one player operates at each stage, is called a hierarchical game, and has been studied in [38, 61] together with an application to sequential production situations. The case of multiple followers' reactions is still to be investigated. A further possible direction of research could be the study of multiple leader–multiple follower models where the leaders and/or the followers choose a solution concept different from the noncooperative behavior of the Nash equilibrium concept.

Acknowledgements This work has been supported by STAR 2014 (linea 1) "Variational Analysis and Equilibrium Models in Physical and Social Economic Phenomena," University of Naples Federico II, Italy and by GNAMPA 2016 "Analisi Variazionale per Modelli Competitivi con Incertezza e Applicazioni."

References

1. Amir, R., Grilo, I.: Stackelberg versus Cournot equilibrium. Games Econ. Behav. 26, 1–21 (1999)
2. Bagwell, K.: Commitment and observability in games. Games Econ. Behav. 8, 271–280 (1995)
3. Bard, J.F.: Practical Bilevel Optimization: Algorithms and Applications. Kluwer Academic, Dordrecht (1998)
4. Başar, T., Olsder, G.J.: Dynamic Noncooperative Game Theory. Reprint of the second 1995 edition. Classics in Applied Mathematics, vol. 23. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (1999)
5. Başar, T., Srikant, R.: Stackelberg network game with a large number of followers. J. Optim. Theory Appl. 115, 479–490 (2002)
6. Ben Abdelaziz, F., Ben Brahim, M., Zaccour, G.: R&D equilibrium strategies with surfers. J. Optim. Theory Appl. 136, 1–13 (2008)
7. Breton, M., Alj, A., Haurie, A.: Sequential Stackelberg equilibria in two-person games. J. Optim. Theory Appl. 59, 71–97 (1988)
8. Cao, D., Leung, L.C.: A partial cooperation model for non-unique linear two-level decision problems. Eur. J. Oper. Res. 140(1), 134–141 (2002)
9. Ceparano, M.C., Morgan, J.: Equilibria for multi-leader multi-follower games with vertical information: existence results. In: CSEF Working Paper, vol. 417 (2015)
10. Chakrabarti, S., Gilles, R.P., Lazarova, E.A.: Strategic behavior under partial cooperation. Theor. Decis. 71, 175–193 (2011)
11. Chinchuluun, A., Pardalos, P.M., Huang, H.X.: Multilevel (hierarchical) optimization: complexity issues, optimality conditions, algorithms. In: Gao, D., Sherali, H. (eds.) Advances in Applied Mathematics and Global Optimization, pp. 197–221. Springer, Berlin (2009)
12. Colson, B., Marcotte, P., Savard, G.: An overview of bilevel optimization. Ann. Oper. Res. 153, 235–256 (2007)
13. D'Amato, E., Daniele, E., Mallozzi, L., Petrone, G.: Equilibrium strategies via GA to Stackelberg games under multiple follower's best reply. Int. J. Intell. Syst. 27, 74–85 (2012)
14. Dempe, S.: Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints. Optimization 52, 333–359 (2003)


15. Denisova, L., Garnaev, A.: Fish wars: cooperative and non-cooperative approaches. The Czech Econ. Rev. 2, 28–41 (2008)
16. Floudas, C.A., Pardalos, P.M.: Encyclopedia of Optimization. Springer, New York (2008)
17. Gal-Or, E.: First mover and second mover advantages. Int. Econ. Rev. 26(3), 649–653 (1985)
18. Hamilton, J., Slutsky, S.: Endogenous timing in duopoly games: Stackelberg or Cournot equilibria. Games Econ. Behav. 2, 29–46 (1990)
19. Hörtnagl, T., Kerschbamer, R.: How the value of information shapes the value of commitment or: why the value of commitment does not vanish. EconPaper Repec (2014)
20. Kulkarni, A.A., Shanbhag, U.V.: An existence result for hierarchical Stackelberg v/s Stackelberg games. IEEE Trans. Autom. Control 60(12), 3379–3384 (2015)
21. Leitmann, G.: On generalized Stackelberg strategies. J. Optim. Theory Appl. 26, 637–643 (1978)
22. Lieberman, M.B., Montgomery, D.B.: First-mover advantages. Strateg. Manage. J. 9(S1), 41–58 (1988)
23. Lignola, M.B., Morgan, J.: Topological existence and stability for Stackelberg problems. J. Optim. Theory Appl. 84, 145–169 (1995)
24. Lignola, M.B., Morgan, J.: Stability of regularized bilevel programming problems. J. Optim. Theory Appl. 93(3), 575–596 (1997)
25. Loridan, P., Morgan, J.: New results on approximate solution in two-level optimization. Optimization 20(6), 819–836 (1989)
26. Loridan, P., Morgan, J.: Weak via strong Stackelberg problem: new results. J. Global Optim. 8, 263–287 (1996)
27. Lu, J., Shi, C., Zhang, G.: On bilevel multi-follower decision making: general framework and solutions. Inf. Sci. 176, 1607–1627 (2006)
28. Luh, P.B., Chang, T.S., Ning, T.: Three-level Stackelberg decision problems. IEEE Trans. Autom. Control AC-29, 280–282 (1984)
29. Luo, Z.Q., Pang, J.S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
30. Maggi, G.: The value of commitment with imperfect observability and private information. RAND J. Econ. 30(4), 555–574 (1999)
31. Mallozzi, L., Morgan, J.: ε-mixed strategies for static continuous Stackelberg problem. J. Optim. Theory Appl. 78(2), 303–316 (1993)
32. Mallozzi, L., Morgan, J.: Weak Stackelberg problem and mixed solutions under data perturbations. Optimization 32, 269–290 (1995)
33. Mallozzi, L., Morgan, J.: Hierarchical systems with weighted reaction set. In: Di Pillo, G., Giannessi, F. (eds.) Nonlinear Optimization and Applications, pp. 271–282. Plenum Publ. Corp., New York (1996)
34. Mallozzi, L., Morgan, J.: Mixed strategies for hierarchical zero-sum games. In: Altman, E., Pourtallier, O. (eds.) Advances in Dynamic Games and Applications. Annals of the International Society of Dynamic Games, vol. 6, pp. 65–77. Birkhäuser, Boston (2001)
35. Mallozzi, L., Morgan, J.: Oligopolistic markets with leadership and demand functions possibly discontinuous. J. Optim. Theory Appl. 125(2), 393–407 (2005)
36. Mallozzi, L., Morgan, J.: On approximate mixed Nash equilibria and average marginal function for two-stage three players games. In: Dempe, S., Kalashnikov, V. (eds.) Optimization with Multivalued Mappings. Springer Optimization and Its Applications, vol. 2, pp. 97–107. Springer, New York (2006)
37. Mallozzi, L., Tijs, S.: Conflict and cooperation in symmetric potential games. Int. Game Theory Rev. 10(3), 1–12 (2008)
38. Mallozzi, L., Tijs, S., Voorneveld, M.: Infinite hierarchical potential games. J. Optim. Theory Appl. 78(2), 303–316 (2000)
39. Marcotte, P., Blain, M.A.: Stackelberg–Nash model for the design of deregulated transit system. In: Hämäläinen, R.H., Ehtamo, H.K. (eds.) Dynamic Games in Economic Analysis. Lecture Notes in Control and Information Sciences, vol. 157, pp. 21–28. Springer, Berlin (1991)


40. Migdalas, A.: When is a Stackelberg equilibrium Pareto optimum? In: Pardalos, P. et al. (eds.) Advances in Multicriteria Analysis, pp. 175–181. Kluwer Academics, Dordrecht (1995)
41. Migdalas, A., Pardalos, P.M.: Editorial: hierarchical and bilevel programming. J. Global Optim. 8(3), 209–215 (1996)
42. Migdalas, A., Pardalos, P.M., Varbrand, P. (eds.): Multilevel Optimization: Algorithms and Applications. Kluwer Academic Publishers, Dordrecht (1998)
43. Miller, T.C., Friesz, T.L., Tobin, R.L.: Equilibrium Facility Location on Networks. Springer, Berlin (1996)
44. Morgan, J., Raucci, R.: Lower semicontinuity for approximate social Nash equilibria. Int. J. Game Theory 31, 499–509 (2002)
45. Morgan, J., Várdy, F.: An experimental study of commitment and observability in Stackelberg games with observation costs. Games Econ. Behav. 49, 401–423 (2004)
46. Morgan, J., Várdy, F.: The value of commitment in contests and tournaments when observation is costly. Games Econ. Behav. 60, 326–338 (2007)
47. Morgan, J., Várdy, F.: The fragility of commitment. Manag. Sci. 59(6), 1344–1353 (2013)
48. Nakamura, T.: One-leader and multiple-follower Stackelberg games with private information. Econ. Lett. 127, 27–30 (2015)
49. Nan, G., Mao, Z., Yu, M., Li, M., Wang, H., Zhang, Y.: Stackelberg game for bandwidth allocation in cloud-based wireless live-streaming social networks. IEEE Syst. J. 8(1), 256–267 (2014)
50. Ochea, M.I., de Zeeuw, A.: Evolution of reciprocity in asymmetric international environmental negotiations. Environ. Resour. Econ. 62(4), 837–854 (2015)
51. Oechssler, J., Schlag, K.H.: Loss of Commitment? An Evolutionary Analysis of Bagwell's Example. Working Paper (2013)
52. Okuguchi, K., Szidarovszky, F.: The Theory of Oligopoly with Multi-Product Firms. Springer, Berlin (1990)
53. Olsder, G.J.: Phenomena in inverse Stackelberg games, part 1: static problems. J. Optim. Theory Appl. 143(3), 589–600 (2009)
54. Pensavalle, C., Pieri, G.: Stackelberg problems with followers in the grand coalition of a TU-game. Decisions Econ. Finan. 36(1), 89–98 (2013)
55. Sherali, H.D., Soyster, A.L., Murphy, F.H.: Stackelberg–Nash–Cournot equilibria: characterizations and computations. Oper. Res. 31, 253–276 (1983)
56. Várdy, F.: The value of commitment in Stackelberg games with observation costs. Games Econ. Behav. 49, 374–400 (2004)
57. Vicente, L.N., Calamai, P.H.: Bilevel and multilevel programming: a bibliography review. J. Global Optim. 5, 291–306 (1994)
58. Vives, X.: Information and competitive advantage. Int. J. Ind. Organ. 8, 17–35 (1990)
59. Vives, X.: Strategic supply function competition with private information. Econometrica 79(6), 1919–1966 (2011)
60. von Stackelberg, H.: Marktform und Gleichgewicht. Julius Springer, Vienna (1934). In: Peacock, A. (ed.) The Theory of the Market Economy, English Edition. William Hodge, London (1952)
61. Voorneveld, M., Mallozzi, L., Tijs, S.: Sequential production situations and potentials. In: Patrone, F., Garcia-Jurado, I., Tijs, S. (eds.) Game Practice: Contributions from Applied Game Theory. Theory and Decision Library C, vol. 23, pp. 241–258. Kluwer Academic Publishers, Boston (2000)

Open Research Areas in Distance Geometry

Leo Liberti and Carlile Lavor

Abstract Distance geometry is based on the inverse problem that asks to find the positions of points, in a Euclidean space of given dimension, that are compatible with a given set of distances. We briefly introduce the field, and discuss some open and promising research areas.

Keywords Computational geometry · Fundamental distance geometry problem · Problem variants and extensions · Rigidity structure · Protein backbones · Clifford algebra · Computational complexity

1 Introduction Distance geometry (DG) is based on the concept of distance rather than points and lines. Its development as a branch of mathematics is mainly due to the motivating influence of other fields of science and technology, although pure mathematicians have also worked in DG over the years. DG becomes necessary whenever one can collect or estimate measurements for the pairwise distances between points in some set, and is then required to compute the coordinates of those points compatibly with the distance measurements. The fundamental problem in DG is the DISTANCE GEOMETRY PROBLEM (DGP), a decision problem that, given an integer K > 0 and a connected simple edge-weighted graph G = (V , E, d) where d : E → R+ , asks whether there exists a realization x : V → RK such that:


$$\forall \{u, v\} \in E \qquad \|x_u - x_v\| = d_{uv}, \tag{1}$$

where ‖·‖ indicates an arbitrary norm, making this into a problem schema parametrized on the norm (as we shall see in the following, most applications employ the ℓ2 norm). The DGP is an inverse problem: whereas computing some of the pairwise distances given the positions of the points is an easy task,¹ the inverse inference (retrieving the point positions given some of the distances) is not so easy. We remark that a realization can be represented by a |V| × K matrix, the i-th row of which is the position vector x_i ∈ R^K for vertex i ∈ V. The main purpose of this paper is to survey what we think are the most important open problems in the field of DG. The rest of this paper is organized as follows. We discuss some of the applications driving research behind DG in Section 2. We give a very short (and partial) historical overview of the development of DG in Section 3. Section 4 introduces some DGP variants. The main section, Section 5, presents many open questions and promising areas for further research.

¹ Direct problems may not be all that easy: see Erdős' unit distances and distinct distances problems [63].

2 Main Application Areas The DGP arises in many application areas. We list here some of those for which the application is direct. In the following, we shall refer to X as the set of solutions of the DGP variant being discussed.

• In certain network synchronization protocols, the time difference between certain clocks can be estimated and exchanged, but what is actually required is the absolute time [136]. Here the points in the solution set X are sequences of time instants, each of which is a vector in R^1 (i.e. a scalar) indicating the absolute time for a given clock. The time differences are Euclidean distances between points in one dimension: we recall that ‖·‖_2 = |·| in R^1.
• In wireless networks the devices may move, usually on a two-dimensional surface. Some of the pairwise distances, typically those between devices that are sufficiently close, can be estimated by measuring the battery power used in peer-to-peer communication. The information of interest to the network provider is the localization of the devices. In this setting, the solution set X contains sequences of 2D coordinates (one per device), and the measured distances are assumed to be approximately Euclidean, although they may be noisy and imprecise. See [22, 51, 59, 76, 82, 146, 147].
• Proteins are organic molecules consisting of a backbone with some side chains attached. Proteins have chemical properties (e.g. the atoms which compose them)


and geometrical properties (the relative position of each atom in the protein). Nowadays we know the chemical compositions of most proteins of interest, but not necessarily their shape; and yet, proteins bind to other proteins, and/or to specific sites on the surface of living cells, depending on both shape and chemical composition. Some of the pairwise inter-atomic distances can be measured in fairly precise ways (e.g. the lengths of the covalent bonds). Kurt Wüthrich discovered that some other distances (typically smaller than 5 Å) can be estimated using nuclear magnetic resonance (NMR) techniques [144]. In this setting, X contains sequences of 3D coordinates (one per atom), and some of the measured distances can be noisy or just wrong [20]. Also see [32, 40, 128].
• Fleets of unmanned autonomous underwater vehicles (AUV) are deployed in order to install and maintain offshore installations such as oil rigs, wind and wave energy farms; such vehicles can estimate distances between each other, with the ocean bed and with the installation using sonar signals. In this setting, X contains sequences of 3D coordinates (one per AUV). Measurements can be noisy and depend on time, and the positions of the AUVs move continuously in the ocean. See [14].
• Nanostructures, such as graphite or buckminsterfullerene, are used extensively in material sciences. The main issue is determining their shape from a spectrographic analysis. The input data is essentially a probability density function (p.d.f.) over distance values, i.e. a function R+ → [0, 1]: by looking at the peaks of the p.d.f., one can extract a sequence of most likely values (with their multiplicities). From this list of distance values, one has to reconstruct their incidences to atoms, i.e. the graph, and its realization in R^3. In this setting, X contains sequences of 3D coordinates (one per atom occurring in the nanostructure). The distances are unassigned, i.e. they are simply values with multiplicities, but the incidence to the adjacent atoms is also unknown. See [21, 56, 143].
• In the analysis of robotic movements one is given the bar-and-joint structure of the robot (i.e. a geometric model consisting of idealized rigid bars held together by freely rotating joints), the absolute position of its joints, and the coordinates of one or more points in space, and one would like to know if the robot can flex to reach that point. This involves computing the manifold of solutions of the DGP corresponding to the robotic graph weighted by the bar lengths, and asking whether the solution manifold contains the given points [131]. Again, X contains sequences of 3D coordinates (one per joint).

In other cases, DG may be one of the steps towards the solution of other problems. In data analysis, for example, one often wishes to represent high-dimensional vectors visually on a 2D or 3D space. Many dimensional reduction techniques aim at approximately preserving the pairwise distances between the vectors [27]. This is fairly natural in that the "shape" of a set of points could be defined through the preservation of pairwise distances: two sets of points for which there exists a congruence mapping one to the other can definitely be stated to have "the same shape". Thus, an approximate congruence between two sets (such as


the one defined in Equation (2)) might well be taken as a working definition of the two sets “having approximately the same shape”. In dimensional reduction, the dimension K of the target space is unspecified.

3 Some Historical Notes About DG Arguably the first mathematical result that can be ascribed to DG is Heron's theorem for computing the area of a triangle given its side lengths [72]. This was further generalized by Arthur Cayley to simplices of arbitrary dimensions, the volume of which turns out to be equal to a scaled determinant of a matrix which is a function of the side lengths of the simplex [34]. Karl Menger proposed an axiomatization of DG [116, 117] that provided necessary and sufficient conditions for a metric space to be isometrically embeddable in a Euclidean space of finite dimension [24, 29, 70]. Much of the impact of DG on engineering applications will be discussed in the rest of the paper. In this section, we focus on two cases where the history of DG made an impact on modern mathematics. For more information, see [98].

3.1 Impact of DG on Rigidity Motivated by applications to statics and construction engineering, DG played a prominent role in the study of rigid structures, i.e. bar-and-joint frameworks having congruences as their only continuous motions (a bar-and-joint framework is a bar-and-joint structure together with positions for the joints, or, equivalently, a pair (G, x) of a graph G with an associated realization x). Euler conjectured in 1766 [60] that all three-dimensional polyhedra are rigid. Cauchy provided a proof for strictly convex polyhedra [33] (Cauchy's original proof contained two mistakes, subsequently corrected by Steinitz and Lebesgue), and Alexandrov [3] extended the proof to all convex polyhedra. If polyhedra are defined by their face incidence lattice rather than as intersections of half-spaces, then polyhedra can also be nonconvex: this, in fact, appeared to be the setting proposed by Euler in his original conjecture, expressed in terms of (triangulated) surfaces. In this setting, Bob Connelly finally found in 1978 [37] an example of a nonconvex triangulated sphere which can undergo a flexible motion of some of its vertices while keeping all the edge distances equal, thereby disproving Euler's conjecture. J.C. Maxwell studied rigidity [113, 114] in relation to balancing forces acting on structures, more precisely via force diagrams given by reciprocal figures. These were at the basis of graphical algorithms to verify force balancing [41, 42], in use until computers became dominant [129].


3.2 The Role of DG in “Big Data” DG is also at the centre of two results currently used in the analysis and algorithmics of large data sets, also known as “big data”. Both results gave rise to dimensional reduction techniques, i.e. methods for projecting a finite set Y of points in R^m (with m large) to R^K (with K much smaller than m), while approximately preserving the pairwise distances over Y. The first such technique is multidimensional scaling (MDS), originally based on a 1935 result of Isaac Schoenberg [135] (rediscovered in 1938 by Young and Householder [148]). The second technique is based on a lemma of Johnson and Lindenstrauss [79], which ensures that, for a given ε ∈ (0, 1), a large enough |Y|, and K = O((1/ε²) ln |Y|), there exists a function f : R^m → R^K which preserves pairwise distances approximately, up to a multiplicative error:

$$\forall x, y \in Y \qquad (1 - \varepsilon)\|x - y\|_2 \le \|f(x) - f(y)\|_2 \le (1 + \varepsilon)\|x - y\|_2. \tag{2}$$

MDS is now a pervasive data analysis technique, applied to a vast range of problems from science and technology. The Johnson–Lindenstrauss Lemma (JLL) is less well known, but is employed, e.g., for fast clustering of Euclidean data [74]. A possible application of these results to the DGP is to project large-dimensional realizations to a smaller dimension while keeping all pairwise distances approximately equal.
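As an illustration of how the JLL is used in practice, here is a small sketch (ours, not from the paper; it assumes Python with numpy, and the function name jl_project is our own) that projects points with a Gaussian random matrix scaled by 1/√K, a standard construction achieving Equation (2) with high probability.

# Johnson-Lindenstrauss random projection sketch (assumes numpy).
import numpy as np

def jl_project(Y, eps=0.2, rng=None):
    """Project the rows of Y (n points in R^m) into R^K, K = O(ln(n)/eps^2),
    using a Gaussian random matrix; pairwise distances are preserved up to
    a (1 +/- eps) factor with high probability."""
    rng = np.random.default_rng(rng)
    n, m = Y.shape
    K = int(np.ceil(4 * np.log(n) / eps**2))      # one common constant choice
    T = rng.standard_normal((m, K)) / np.sqrt(K)  # scaling gives E||yT||^2 = ||y||^2
    return Y @ T

# Quick empirical check of the distortion on random data.
Y = np.random.default_rng(0).standard_normal((100, 10000))
Z = jl_project(Y, eps=0.2, rng=1)
i, j = 3, 42
print(np.linalg.norm(Z[i] - Z[j]) / np.linalg.norm(Y[i] - Y[j]))  # typically in [0.8, 1.2]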

4 Problems in DG As already mentioned, the DGP is the fundamental problem in DG. In the DGP formulation we gave in Section 1, however, we omitted to specify the norm, which is mainly intended to be Euclidean. In this case, the DGP is also called EUCLIDEAN DGP (EDGP) [52, 104]. In this section we look at several types of DGP variants. In Section 4.1 we look at the case of fixed K given as part of the input. In Section 4.2 we discuss the DGP using other norms than the Euclidean one. In Section 4.3 we discuss the DGP in the presence of measurement errors on the input data. In Section 4.4 we discuss the case where G is complete and K is not given as part of the input, but rather as an asymptotic bound in function of n = |V |. In Section 4.5 we discuss the case where K is part of the output. Finally, in Section 4.6 we present the DGP variant where the weighted graph is replaced by a list of distance values.

4.1 DGP in Given Dimensions Although the dimension K is specified in the DGP as part of the input, most applications require a fixed given constant, see Section 2: for example, the determination


of protein structure from distance data requires K = 3. When the dimension K is fixed to a constant γ , we denote the corresponding problem by DGPγ (equivalently, we denote by EDGPγ the EDGP in fixed dimension γ , and similarly for other DGP variants). Because of the application to molecules, the EDGP3 is also called MOLECULAR DGP (MDGP); similarly, because of the application to wireless networks, the DGP2 is also called the SENSOR NETWORK LOCALIZATION PROBLEM (SNLP).

4.2 DGP with Different Norms The DGP using other norms is not as well studied as the EDGP. Some mixed-integer linear programming (MILP) formulations and some heuristics have been developed for the ℓ1 and ℓ∞ norms [48]. The ℓ∞ norm is used as a proxy for the ℓ2 norm in [43]. Some works in spatial logic reduce to solving the DGP with a discrete-valued semimetric taking values in a set such as {almost_equal, near, far} (each label is mapped to an interval of possible Euclidean distances) [55]. Geodesic spherical distances have been briefly investigated by Gödel [65], who proved that if a weighted complete graph over 4 vertices can be realized in R^3, but not in R^2, then it can also be realized on the surface of a sphere. This was extended to arbitrary dimensions in [106].

4.3 DGP with Intervals In most applications, distances are not given precisely. Instead, some kind of measurement errors are involved. A common way to deal with this issue is to model distances d_{uv} by means of an uncertainty interval [d_{uv}^L, d_{uv}^U], yielding what is known as the interval DGP (iDGP):

$$\forall \{u, v\} \in E \qquad d_{uv}^L \le \|x_u - x_v\| \le d_{uv}^U. \tag{3}$$

Very few combinatorial techniques for solving DGPs extend naturally to the case of intervals (but see Section 5.3.4). Typically, optimization techniques based on mathematical programming (MP) do, however [47, 119]. See [32, 92, 101, 140] for more information.

4.4 Isometric Embeddings Many works exist in the literature about isometric embeddings, i.e. embeddings of metrics in various vector spaces. We look specifically at cases where the metrics are


finite and the target space is the Euclidean space R^K (for some K). We remark that the isometric embedding problem with finite metrics is close to the case of the DGP where the input graph G is a complete graph. In this line of (mostly theoretical) research, K is not usually given as part of the input, but rather proved to be a function of |V| which is asymptotically bounded above. The JLL (Section 3.2) is an example of this type of result. An ingenious construction shows that any valid metric D = (d_{uv}) can be embedded exactly in R^n (where n = |V|) under the ℓ∞ norm. It suffices to define [61, 83]:

$$\forall v \in V \qquad x_v = (d_{uv} \mid u \in V). \tag{4}$$

This construction is known as the Fréchet embedding. For the ℓ1 norm, no such general result is known. It is known that ℓ2 metric spaces of n points can be embedded in a vector space of O(n) dimensions under the ℓ1 norm almost isometrically [112, §2.5]. The "almost" in the previous sentence refers to a multiplicative distortion similar to the JLL's: (1 − ε)‖x‖_2 ≤ ‖f(x)‖_1 ≤ (1 + ε)‖x‖_2, where f preserves norms approximately while reducing the dimensionality of x, for some ε ∈ (0, 1). Moreover, any finite metric on n points can be embedded in an exponentially large dimensional vector space using the ℓ1 norm with O(log n) distortion: this was shown in [28] by means of a probabilistic weighted extension of the Fréchet embedding on all subsets of V. The dimension was reduced to O(n²) by a deterministic construction [108]; moreover, an appropriate randomized choice of subsets of V drives it down to O(log n) [107]. Similar results hold for many other norms, including the ℓ2 norm. The relatively large distortion O(log n) unfortunately limits the usefulness of these results.
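A quick computational check of the Fréchet embedding, Equation (4) (our illustration; assumes numpy): the rows of the distance matrix themselves serve as coordinates, and all pairwise ℓ∞ distances then reproduce the metric exactly.

# Frechet embedding check: row u of the distance matrix D is taken as the
# coordinate vector x_u; for a valid metric, max_k |D[u,k] - D[v,k]| = D[u,v],
# i.e. the l_inf embedding is exact (the max is attained at k = u or k = v).
import numpy as np

D = np.array([[0., 2., 3.],
              [2., 0., 4.],
              [3., 4., 0.]])   # a valid metric on 3 points

X = D.copy()                   # x_v = (d_uv | u in V): the rows are the embedding
for u in range(len(D)):
    for v in range(len(D)):
        linf = np.max(np.abs(X[u] - X[v]))
        assert np.isclose(linf, D[u, v]), (u, v)
print("all pairwise l_inf distances match the metric")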

4.5 Matrix Completion The EDGP can also be formulated as follows: given K > 0 and the squared weighted adjacency matrix D = (d_{uv}²) of G (with d_{uv} being the weight of the edge {u, v} if {u, v} ∈ E), find a squared Euclidean distance matrix (sqEDM) D̄ = (d̄_{uv}²) corresponding to a realization in R^K and such that, for each {u, v} ∈ E, d̄_{uv} = d_{uv}. The EDM COMPLETION PROBLEM (EDMCP) consists in relaxing the requirement that D̄ should be the sqEDM of a realization in R^K for a given K. Instead, K is not part of the input, and D̄ should simply correspond to a realization in a Euclidean space of any dimension. Informally, this means that, in the EDMCP, K becomes (implicitly) part of the output. More details can be found in [86].


4.6 DGP Without Adjacency Information Suppose that, instead of providing a weighted graph (G, d), we provide a list L of distance values, and then ask the same question. We can no longer write Equation (1), since we do not know what d_{uv} is: instead, we only have L = (d_1, ..., d_m), where m = |E|. This DGP variant, called the UNASSIGNED DGP (uDGP), is very important, since NMR experiments actually provide the distance values rather than the actual edges of the graph. Although a fair amount of work has been carried out by physicists [56] and structural biologists [127] on this problem, it is largely unstudied by mathematicians and computer scientists for all K > 1 (the case K = 1, on the other hand, has been studied under different names: the turnpike problem and the partial digest problem [96]). This prompted us to list it as one of the main "open areas" in Section 5 below (see Section 5.4).
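For K = 1, the backtracking strategy used for the turnpike/partial digest problem is easy to sketch (our illustration in Python; not from the paper): repeatedly place the largest unexplained distance either at distance d from 0 or at distance d from the maximum point, and backtrack on failure.

# Brute-force sketch of the K = 1 uDGP ("turnpike"/"partial digest"):
# reconstruct points on a line from the multiset of all pairwise distances.
from collections import Counter

def place(points, dists):
    if not dists:                      # all distances explained: success
        return sorted(points)
    d = max(dists)                     # largest remaining distance
    for cand in (d, max(points) - d):  # it must involve point 0 or the maximum
        used = Counter(abs(cand - p) for p in points)
        if all(dists[k] >= v for k, v in used.items()):
            sol = place(points | {cand}, +(dists - used))
            if sol is not None:
                return sol
    return None

def turnpike(dists):
    dists = Counter(dists)
    width = max(dists)
    dists[width] -= 1                  # the two extreme points explain it
    return place({0, width}, +dists)

# The classic instance realized by {0, 2, 4, 7, 10} (a mirror image may be found).
print(turnpike([2, 2, 3, 3, 4, 5, 6, 7, 8, 10]))   # e.g. [0, 3, 6, 8, 10]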

5 Open Research Areas In the last 10 years, our work in the DG research community allowed us to survey many theoretical, methodological and applicative areas connected to DG [88, 91, 94, 97, 101, 104, 125, 126]. Although our viewpoint is certainly not exhaustive, we list here the research topics which we think are most promising for further research.

1. Combinatorial characterization of rigidity in dimensions K > 2.
2. Computational complexity of Euclidean distance matrix completion in the Turing machine model.
3. A priori estimation of the number of realizations for a given set of distances.
4. The unassigned DGP.

We also remark that the longest-standing problem in DG, that of the rigidity of all closed triangulated surfaces, was opened by L. Euler in 1766 [60] and finally answered in the negative by Bob Connelly in 1978 [37]. In the rest of this paper, we discuss each of these research areas in detail.

5.1 Combinatorial Characterization of Rigidity Rigidity is a property relating to the bar-and-joint framework theory of statics. Architects and construction engineers are concerned with structures that do not bend: or, in other words, that are rigid. From the point of view of many other applications employing the EDGP as a model of the inverse problem of recovering positions from distance information, a desirable property is solution uniqueness. For example, if one is trying to recover the position of wireless devices in a mobile network, one would like the solution to be unique. In the case of protein


conformation, one would like to find all of the possible chiral isomers, which are in finite number. Again, the property that tells apart EDGP instances with a finite number of solutions from those with an uncountable number is rigidity. Formally speaking, there are many different definitions of rigidity. The most basic one is concerned with lack of local movement, i.e. the only possible movements that a structure can undergo without changing the given pairwise distances are rotations and translations. Another definition, most often used in statics, concerns the absence of infinitesimal motions (defined below). Other definitions concern solution uniqueness (global/universal rigidity) [4], minimality of edge set cardinality (isostaticity/minimal rigidity) [142], abstractness (graphical/abstract rigidity matroids) [67] and more [137]. If the framework has certain genericity properties, then rigidity can be ascribed directly to the graph, rather than the framework [64]. The question then becomes: given a graph, is it rigid in K dimensions? Since the input only consists of a graph and an integer, the ideal solution algorithm to settle this question should be “purely combinatorial”, meaning that during its execution on a Turing machine computational model, no real number should be approximated through floating point representations. Purely combinatorial characterizations are known for K ∈ {1, 2}, but not for any other value of K. Currently, this is considered the most important open question in rigidity, and possibly for the whole of DG.

5.1.1 Rigidity of Frameworks

Consider a YES instance of the EDGP, consisting of a weighted graph G = (V, E, d) and an integer K > 0, as well as a realization x ∈ R^{Kn}. The pair (G, x) is called a framework in R^K. We let K(G, x) be the complete graph over V weighted by the edge function d̄ defined as follows: d̄_{uv} = d_{uv} for all {u, v} ∈ E, and d̄_{uv} = ‖x_u − x_v‖_2 otherwise (we shorten K(G, x) to K when no ambiguities arise). We further define the edge weight value function f_G : R^{Kn} → R_+^{|E|} by f_G(x) = (‖x_u − x_v‖_2 | {u, v} ∈ E). A framework (G, x) is rigid if there is at least a neighbourhood χ of x in R^{Kn} such that:

$$f_G^{-1}(f_G(x)) \cap \chi = f_K^{-1}(f_K(x)). \tag{5}$$

The expression f_G^{-1}(f_G(x)) denotes the set of realizations x′ that satisfy the same distance equations as x, i.e. the set of all solutions of the given EDGP instance. The LHS therefore indicates all those realizations of the EDGP instance which are in the neighbourhood of x. Similarly, the RHS indicates the same when G is replaced by its completion K(G, x). Realizations of complete graphs can be moved isometrically (i.e. while keeping the edge lengths invariant) only if the movements are congruences, namely compositions of rotations and translations (we do not consider reflections since they


are non-local). The intuitive sense of the above definition is that, if Equation (5) holds, then G locally “behaves like” its completion. In other words, a framework is rigid if it can only be moved isometrically by congruences. Testing rigidity of a given framework (with a rational realization) when K = 2 is coNP-hard [2] (see Section 5.2.1 for a definition of coNP).

5.1.2 Infinitesimal Rigidity

We now focus on rigidity from the point of view of the movement. Consider the graph G = ({1, 2, 3}, {{1, 2}, {2, 3}}) on three vertices, with two edges: node 2 is adjacent to both 1 and 3, and hence has degree 2, while nodes 1 and 3 both have degree 1. Consider the realization x_1 = (0, 0), x_2 = (1, 0) and x_3 = (1, 1) in R^2. It is obvious that any position for x_3 on the unit circle centred at x_2 is also a valid realization: this generates an uncountable set x(α) of realizations as the angle α between the segments 12 and 23 ranges in the interval [0, 2π]. If one regards the angle α as parametrized by a time parameter τ, the variation of x_3 is an isometric movement, also known as a (nontrivial) flex, which implies that this graph is flexible for K = 2 (by contrast, it is rigid for K = 1). We now generalize this to any graph G = (V, E) with any realization x in R^K. By isometry, any flex has to satisfy Equation (1), which we write in its squared form:

$$\forall \tau \in [0,1], \ \{u,v\} \in E \qquad \|x_u(\tau) - x_v(\tau)\|_2^2 = d_{uv}^2.$$

Note that we can assume that τ ∈ [0, 1] by rescaling if necessary. Since x_u(τ) denotes the position in R^K of vertex u at time τ, we can compute its velocity by simply taking derivatives with respect to τ:

$$\forall \tau \in [0,1], \ \{u,v\} \in E \qquad \frac{d}{d\tau}\|x_u(\tau) - x_v(\tau)\|_2^2 = \frac{d}{d\tau} d_{uv}^2.$$

We now remark that the RHS of this equation is zero, since the d_{uv}² are constants for all {u, v} ∈ E, yielding:

$$\forall \{u,v\} \in E \qquad (x_u - x_v) \cdot (\dot{x}_u - \dot{x}_v) = 0,$$

where we assume that x = x(0) is the realization given in the framework (G, x). In order to find the velocity vector ẋ at τ = 0, we have to compute R_{uv} = x_u − x_v for all {u, v} ∈ E, then solve the homogeneous linear system

$$R\,\dot{x} = 0, \tag{6}$$


where R is a matrix having |E| rows and Kn columns, called the rigidity matrix. R is defined as follows: for a row indexed by {u, v} ∈ E, there are K possibly nonzero columns indexed by (u, 1), ..., (u, K) containing the entries x_{u1} − x_{v1}, ..., x_{uK} − x_{vK}, and K possibly nonzero columns indexed by (v, 1), ..., (v, K) containing the reciprocal entries x_{v1} − x_{u1}, ..., x_{vK} − x_{uK}. Sometimes rigidity matrices are shown in “short-hand format” |E| × n by writing each sequence of entries (x_{u1} − x_{v1}, ..., x_{uK} − x_{vK}) as x_u − x_v in the column indexed by u, and equivalently as x_v − x_u in the column indexed by v.

We now make the following crucial observation: the vector subspace spanned by all ẋ satisfying Equation (6) contains all of the instantaneous velocity vectors that yield isometric movements of the given framework, also called infinitesimal motions of the framework. We remark that this subspace corresponds to the kernel ker R of the rigidity matrix. Now, the framework (G, x) is infinitesimally rigid if ker R only encodes the translations and rotations of (G, x). Otherwise the framework is infinitesimally flexible.

Since infinitesimal rigidity is based on a linear system, it can be decided based on the estimation of the degrees of freedom of the framework. If we start with an empty edge set, each of the n vertices has K degrees of freedom, so the system has Kn degrees of freedom overall. Each linearly independent row of R decreases the degrees of freedom by one unit. In general, the framework (G, x) has Kn − rk R degrees of freedom (we denote the rank of R by rk R). We remark that in R^K there are K basic translations and K(K − 1)/2 basic rotations (arising from pairs of distinct orthogonal axes), for a total of K(K + 1)/2 degrees of freedom of the group of Euclidean motions in R^K. Since any framework can be moved isometrically by this group, at least these basic motions must satisfy Equation (6). Hence, we have dim ker R ≥ K(K + 1)/2. Specifically, (G, x) is infinitesimally rigid if and only if

$$\operatorname{rk} R = Kn - \frac{K(K+1)}{2} \tag{7}$$

and infinitesimally flexible if and only if

$$\operatorname{rk} R < Kn - \frac{K(K+1)}{2}, \tag{8}$$


as long as the affine hull of x spans the whole of RK [10]. Moreover, it was shown in [11] that a framework is infinitesimally rigid if and only if it is rigid and x is regular, meaning that its rigidity matrix has the maximum possible rank. We remark that infinitesimal rigidity is a stronger notion than rigidity: every infinitesimally rigid framework is also rigid, but there are rigid frameworks that are infinitesimally flexible.

5.1.3 Generic Properties

Another consequence of Equations (7)–(8) is that, if one can find a single rigid realization of the graph G, then almost all realizations of G must be rigid. This follows because: (a) rigidity is the same as infinitesimal rigidity with R having maximum rank (say) r among all the rigidity matrices corresponding to rigid frameworks; and (b) if a random matrix is sampled uniformly from a compact set, it ends up having its maximum possible rank with probability 1. The alternative, i.e. that a sampled R is rank deficient, corresponds to the existence of a linear relationship between its rows, which defines a subspace of zero Lebesgue measure in R^r. Because of this, both rigidity and infinitesimal rigidity are generic properties. This allows us to ascribe them directly to the underlying (unweighted) graph rather than the framework. Equivalently, it is sufficient to look at the sparsity structure of the rigidity matrix in order to decide whether a framework is rigid with probability one. Formally stated, we are concerned with the following decision problem.

GRAPH RIGIDITY. Given a graph G = (V, E) and an integer K > 0, is G generically (infinitesimally) rigid in K dimensions?

It should be clear that we can potentially use the rank of the rigidity matrix in order to solve the problem. Consider this algorithm:

1. sample a random realization x of G;
2. compute the rank of R;
3. if it is equal to Kn − K(K + 1)/2, then output YES;
4. otherwise output NO.
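A minimal implementation of this randomized test (ours; it assumes numpy, and the function name generically_rigid is our own) is sketched below; re-running the rank computation on several random realizations reduces the probability of a false NO.

# Randomized generic-rigidity test: sample a random realization, build the
# rigidity matrix, and compare its rank with Kn - K(K+1)/2.
import numpy as np

def generically_rigid(V, E, K, trials=5, rng=0):
    rng = np.random.default_rng(rng)
    n = len(V)
    idx = {v: i for i, v in enumerate(sorted(V))}
    target = K * n - K * (K + 1) // 2
    for _ in range(trials):            # re-run to shrink the error probability
        x = rng.random((n, K))
        R = np.zeros((len(E), K * n))
        for row, (u, v) in enumerate(E):
            i, j = idx[u], idx[v]
            R[row, K*i:K*(i+1)] = x[i] - x[j]   # entries for the columns of u
            R[row, K*j:K*(j+1)] = x[j] - x[i]   # reciprocal entries for v
        if np.linalg.matrix_rank(R) == target:
            return True                # YES instances reach full rank w.h.p.
    return False

print(generically_rigid({1, 2, 3}, [(1, 2), (2, 3), (1, 3)], K=2))  # triangle: True
print(generically_rigid({1, 2, 3}, [(1, 2), (2, 3)], K=2))          # 2-edge path: False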

This algorithm runs in polynomial time, since matrix ranks can be computed in polynomial time [35]. But this is not considered a “combinatorial characterization” since it involves computation with floating point numbers. Moreover, it is a randomized algorithm in the Turing machine (TM) computational model, since only finitely many rationals can be represented in the commonly used IEEE 754 standard floating point implementation: sampling a random realization x will yield a maximum rank rigidity matrix with practically high probability, but not with probability 1. Even with theoretical probability 1 assumptions, probability 1 is not the same as certainty: there is a possibility that this algorithm might output NO on a small fraction of YES instances. Re-running


the algorithm sufficiently many times on the same instance will make the probability of error as small as desired, but this is not a deterministic algorithm. An acceptable “combinatorial characterization” would limit its scope to decision algorithms that only employ the incidence structure of G, and perhaps integers bounded by a polynomial in |V | and |E|. Although the original meaning ascribed to “combinatorial characterization” did not call for algorithmic efficiency (in the sense of worst-case polynomial running time), the rigidity research community appears to feel that this would be a desirable characteristic [97].

5.1.4 Combinatorial Characterization of Rigidity on the Line

For K = 1, a combinatorial characterization of generic rigidity is readily available: G is generically rigid if and only if it is connected. This holds because the only flexes in R1 are translations. If a graph has a flex, say on vertex v, then all of its neighbours must undergo the same flex. If the graph is connected, then an induction argument shows that all of the vertices must undergo the same flex, showing that the flex is a congruence. If the graph is disconnected, then two connected components can undergo different flexes, which means different translations, the combination of which is not a congruence of the whole graph. We remark that graph connectedness can be decided in polynomial time, for example, by graph exploration algorithms [115].

5.1.5 Combinatorial Characterization of Rigidity in the Plane

For K = 2, the situation becomes much more complicated. James Clerk Maxwell was already using a degree of freedom calculus in 1864 [113], to the effect that if G is minimally rigid in the plane, then |E| = 2|V| − 3 must hold. We remark that a graph is minimally rigid (also known as isostatic) when it is no longer rigid after the removal of any of its edges. Gerard Laman finally proved his celebrated theorem in 1970 [84], namely that G = (V, E) with |V| > 1 is generically minimally rigid if and only if:

(a) |E| = 2|V| − 3;
(b) for each subgraph G′ = (V′, E′) of G having |V′| > 1, we have |E′| ≤ 2|V′| − 3.

This was accepted as a purely combinatorial characterization of rigidity in the plane, since it immediately gives rise to the following brute force algorithm. Given any graph G,

1. list all subgraphs of G with at least two vertices;
2. for each of them, test whether Laman's conditions (a) and (b) hold;
3. if they do, output YES, otherwise NO.
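Laman's conditions translate directly into the brute force algorithm above; here is a sketch (ours; plain Python, exponential time as noted below) that tests conditions (a) and (b) by enumerating vertex subsets.

# Brute-force Laman check: a graph on V with edge set E is generically
# minimally rigid in the plane iff |E| = 2|V| - 3 and every vertex subset V'
# with |V'| > 1 spans at most 2|V'| - 3 edges.
from itertools import combinations

def is_laman(V, E):
    if len(E) != 2 * len(V) - 3:
        return False                     # condition (a) fails
    for k in range(2, len(V) + 1):
        for Vsub in combinations(V, k):
            Vset = set(Vsub)
            spanned = [e for e in E if set(e) <= Vset]
            if len(spanned) > 2 * k - 3:
                return False             # condition (b) violated
    return True

# The triangle is minimally rigid in the plane; the 4-cycle is flexible.
print(is_laman({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))             # True
print(is_laman({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (1, 4)]))  # False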


Fig. 1 Counterexamples for Laman’s conditions in 3D. On the left, a rigid graph with fewer than 3|V | − 6 edges. On the right, a flexible graph satisfying the 3D adaptation of Laman’s condition (b): the dotted line shows a hinge around which the two “bananas” can rotate

Since there are exponentially many subgraphs of any graph, this algorithm is exponential time in the worst case. On the other hand, a polynomial time algorithm based on Laman’s theorem was given in [110].

5.1.6 Combinatorial Characterization of Rigidity in Space

Although Maxwell did use the rank formula in 3D, namely |E| = 3|V| − 6, in one of his papers [114], all attempts to extend Laman's theorem along these lines to the case K = 3 have failed so far. In fact, Laman's conditions fail spectacularly when K = 3. The equivalent of condition (a) above would be |E| = 3|V| − 6, but the counterexample of [68, p. 2.14] (see Figure 1, left) shows this to be false. The failure of the equivalent of condition (b) above, i.e. that every subgraph with |V′| > 1 should have |E′| ≤ 3|V′| − 6, is exhibited by the famous "double banana" graph (see Figure 1, right), ascribed to Whiteley in [11]. Two ideas which might shed some light on the long-standing open question of finding a combinatorial characterization of 3D (generic) rigidity have been proposed by Meera Sitharam. In 2004, she and Y. Zhou published a paper [137] based on the concept of module rigidity (to replace Laman-style degree of freedom counts), according to which graphs like the double banana of Figure 1 (right) would not be erroneously catalogued as rigid while being flexible. More recently, her talk (co-authored with J. Cheng and A. Vince) at the Geometric Rigidity 2016 workshop in Edinburgh bore the title "Refutation of a maximality conjecture or a combinatorial characterization of 3D rigidity?". We report the abstract from the workshop proceedings, since there appear to be no publications about this idea yet (although Dr. Sitharam told us that a manuscript exists [138] and is being prepared for submission).

The talk will present an explicit, purely combinatorial algorithm that defines closure in an abstract 3D rigidity matroid (GEM, for graded exchange matroid). Strangely, rank in this matroid is an upper bound on rank in the 3D rigidity matroid, either refuting a well-known maximality conjecture about abstract rigidity matroids (more likely), or giving a purely combinatorial characterization of 3D rigidity (less likely). In addition, we can show that rank of a graph G in the new GEM matroid is upper bounded by the size of any maximal (3, 6)-sparse subset.


Although she does warn that the most likely possibility is that her result is not the sought-after characterization, we believe that explicitly and visibly displaying attempts to solve hard problems, such as this one, has the advantage of drawing attention (and hence further study and effort) towards their solution.

5.1.7 Global Rigidity

A framework (G, x) is globally rigid in R^K if it is rigid in R^K and only has one possible realization, up to congruences (including reflections). This case is very important in view of applications to clock synchronization protocols, wireless network realization, autonomous underwater vehicles and, in general, whenever one tries to estimate a unique localization occurring in the real physical world. By [66], global rigidity is a generic property, i.e. it depends only on the underlying graph G (rather than on the framework) for almost all realizations. For K = 1, G is (generically) globally rigid if and only if it is bi-connected (i.e. there are at least two internally vertex-disjoint simple paths joining each pair of vertices in the graph). For K = 2, G is (generically) globally rigid if and only if it is 3-connected and redundantly rigid (i.e. it remains rigid after the removal of any edge) [75]. No combinatorial characterization of global rigidity is known for K > 2.
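For K = 1 the characterization above is immediately algorithmic; here is a minimal sketch using the networkx library (our choice of tool, purely for illustration):

import networkx as nx

def globally_rigid_on_line(G):
    # Generic global rigidity in R^1 is equivalent to 2-connectivity.
    return nx.is_biconnected(G)

print(globally_rigid_on_line(nx.cycle_graph(5)))  # True: a cycle is 2-connected
print(globally_rigid_on_line(nx.path_graph(5)))   # False: rigid on the line,
                                                  # but a pendant subchain reflects

For K = 2 one would additionally need the redundant rigidity and 3-connectivity checks of [75]; no such simple test is known for K > 2.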

5.1.8 Relevance

The combinatorial characterization of (generic) rigidity is important both because of applications—it helps to know whether a weighted graph has finitely or uncountably many realizations—and because the problem has been open ever since Maxwell's times (mid-1800s). From a practical point of view, however, the computation of the rank of the rigidity matrix, coupled with Asimow and Roth's theorem (Equations (7)–(8)), appears to settle the question with high probability in polynomial (and practically acceptable) time. This "reduces" the problem to one of a mostly theoretical nature. Like many theoretical problems, this one remains important also because, in trying to answer this question, researchers keep discovering interesting and practically relevant ideas (such as those about determining global rigidity, see Section 5.1.7). Specifically, however, this problem is very important because it has the merit of having shifted the study of rigidity from construction engineering to mathematics. Previously, definitions were rare, ideas informal and therefore ambiguously stated and often confused, and decision procedures almost completely empirical [68]. Although construction engineers have been building truss structures (such as bridges) for a long time, lack of understanding of concepts such as minimal and redundant rigidity has caused ruptures and disasters throughout history. See the Wikipedia page en.wikipedia.org/wiki/List_of_bridge_failures: truss-based bridges that collapsed due to "poor design" are likely suspects.


5.2 Computational Complexity of Matrix Completion Problems

A common way to define the EDMCP is: given a partially defined matrix, determine whether it can be completed to an sqEDM or not. This problem, introduced in Section 4.5, is almost like the DGP, except for the seemingly minor detail that K shifts from being part of the input (in the EDGP) to part of the output (in the EDMCP). This difference is only apparently minor: while the EDGP is NP-hard in the TM computational model, the EDMCP is not known to be hard or tractable for the class NP. From a purely computational point of view, the EDGP can only be solved by exponential-time algorithms, while the EDMCP can be solved efficiently, since it can be formulated exactly by the following feasibility-only semidefinite programming (SDP) problem [50]:

∀{i, j} ∈ E   B_ii + B_jj − 2B_ij = d_ij²
              B ⪰ 0.                                                    (9)

The notation B ⪰ 0 indicates that we require that B is symmetric and positive semidefinite, i.e. all its eigenvalues must be non-negative. We recall that SDPs can be solved approximately, to any desired error tolerance, in polynomial time [5]. Unfortunately, no method is known yet to solve SDPs exactly in polynomial time, and membership in the complexity class P in the TM computational model is defined by exhibiting a polynomial-time algorithm which can provably tell all YES instances apart from NO ones. With respect to the SDP formulation (9), we remark that, in general, SDP solvers can find solutions of any rank, although a theoretical result of Barvinok [15] shows that if Equation (9) is feasible, then there must exist a solution of rank K = (√(8|E| + 1) − 1)/2. Another result of the same author [16] also proves that, provided there exists a manifold X of solutions of rank K, the solution of Equation (9) "cannot be too far" from X, in some well-defined but asymptotic sense (the latter result has been exploited computationally with some success [50, 99]). This raises the question that is the subject of this section: what is the complexity status of the EDMCP? Is it NP-hard? Is it in P? Is it somewhere between the two classes, provided P ≠ NP? The rest of the section provides a tutorial towards understanding this question, and can be skipped by readers who already know the basics of computational complexity.
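As an illustration, the feasibility-only SDP (9) can be prototyped in a few lines. The sketch below uses the cvxpy modelling library (our choice of tool; the formulation, not the tool, comes from [50]):

import cvxpy as cp

def edmcp_sdp(n, known_sq_dists):
    # known_sq_dists: dict {(i, j): d_ij^2} over the known entries {i,j} in E.
    B = cp.Variable((n, n), PSD=True)   # B symmetric positive semidefinite
    constraints = [B[i, i] + B[j, j] - 2 * B[i, j] == d2
                   for (i, j), d2 in known_sq_dists.items()]
    prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility
    prob.solve()
    return prob.status, B.value

# 3 points with distances d(1,2) = d(2,3) = 1 known; d(1,3) free.
status, B = edmcp_sdp(3, {(0, 1): 1.0, (1, 2): 1.0})
print(status)   # "optimal" if the partial matrix is completable

Note that, consistently with the discussion above, the solver returns an approximately feasible B of some rank, not an exact rational certificate.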

5.2.1 Complexity in the TM Model

We first give a short summary of complexity classes in the TM model. We limit our attention to classes of decision problems, i.e. problems to which the answer can be encoded in a single bit. For example, “given a graph G, is it complete?” is a valid


decision problem. Note that the question is parametrized over the input (in this case, the graph G): strictly speaking, no solution can be provided until we know what G is. When we replace the symbol G by an actual graph (stored on the TM tape), we obtain an instance of the decision problem. Thus, decision problems are also seen as infinite sets of instances. The instances having answer bit 1 are known as YES instances, while the rest are known as NO instances. An algorithm A solves a problem P when, for each input ι of P, we have A(ι) = 1 if and only if the instance ι is YES. The worst-case running time of an algorithm is expressed as a class of functions of the input size |ι| = ν, i.e. the number of bits that ι needs in order to be stored on the TM tape. Interesting classes of functions are constants, logarithms, polynomials and exponentials of ν. If the function is f(ν), the class is indicated as O(f(ν)), and contains all of the functions that are asymptotically upper bounded by g(ν)f(ν) + h(ν), where g, h are themselves asymptotically upper bounded by f. A function g is asymptotically upper bounded by a function f if there is ν₀ ∈ N such that for all ν > ν₀ we have g(ν) ≤ f(ν). For example, if an algorithm takes 36ν² + 15ν + 3 CPU cycles to run, then it belongs to the class O(ν²). For a given problem P, it makes sense to ask for the asymptotic worst-case running time of the fastest algorithm that solves P (over all algorithms that solve P). Since the work of Cobham [36] and Edmonds [57] in 1965, we list problems that have a polynomial-time solution algorithm in a class called P: this is because any finer granularity would make the algorithm depend on the implementation details of the TM (number of heads, number of tapes and so on), whereas the class of all polynomials is invariant with respect to such implementation details. The class P is known as the class of "tractable" problems. Another interesting class, called NP, includes all problems for which YES instances can be proved to be YES by means of a certificate that can be verified in polynomial time. In the case of the toy problem "given a graph, is it complete?", the certificate is the input itself: the graph can be checked to be complete in polynomial time, by verifying that each vertex is adjacent to all other vertices. This particular feature, i.e. that the input is the certificate, is shared by all problems in P, since the definition of P is exactly that there is a polynomial algorithm that, on YES instances, can provide the answer YES in polynomial time. Hence P ⊆ NP. The question whether P = NP or not is the most important open question of all computer science and one of the most important in mathematics, and will not be discussed further here (see [1, 78] for more information). NP is an interesting class because it contains problems for which no polynomial-time algorithm is currently known, but that are very relevant in practice, such as many packing, covering, partitioning, clustering, scheduling and routing problems, as well as many combinatorial problems with resource constraints: once a solution is given, checking that it solves the problem is generally a question of replacement of variable symbols by values followed by function evaluation, all of which can usually be carried out in polynomial time. On the other hand, finding the solution usually takes exponential time, at least with the algorithms we know so far.


Since it contains so many seemingly hard problems, it makes sense to ask which problems are "hardest" in the class NP. A qualitative definition of the notion of "hardest" is based on the concept of polynomial reductions. A problem Q can be reduced to another problem P if there is a polynomial-time algorithm α mapping YES instances of Q to YES instances of P and NO instances of Q to NO instances of P. Now suppose there is a reduction from any problem Q in NP to a given problem P. Suppose P were in P: then there would exist a polynomial-time algorithm A(ι) to solve each instance ι of P. But then, given an instance η of Q, the algorithm A(α(η)) would provide an answer to η in polynomial time. In other words, if P were tractable, every problem in NP would be tractable: even the hardest problems of NP. This means that P must be as hard as the hardest problems of NP. So we define as hardest for the class NP those problems to which every problem in NP can be reduced. We call NP-hard the class of the hardest problems for NP. Note that a problem need not be in NP itself in order to be NP-hard. The first such problem was SATISFIABILITY (SAT) [38]: Stephen Cook used a reduction from a polynomial-time bounded Turing machine to a set of boolean constraints of SAT. Ever since, it suffices to reduce from an NP-hard problem Q to a new problem P in order to prove the NP-hardness of P. This can be informally seen as follows: suppose P were not NP-hard, but easier. Then we could combine the solution algorithm for P with the polynomial reduction to yield a proof that Q is not hardest for NP, which is a contradiction. The first researcher to spot this reasoning was Richard Karp [80] in 1972. By now, hundreds of problems have been shown to be NP-hard [62]. A problem is NP-complete if it is both NP-hard and belongs to NP. We remark that coNP is the class of decision problems that have a polynomial-time verifiable certificate for all NO instances (or, in other words, a polynomial-time verifiable refutation). We also remark that P is contained in NP ∩ coNP, since the polynomial-time algorithm that decides whether the instance is YES or NO provides, with its own execution trace, a polynomial-time checkable proof (when the instance is YES) as well as a polynomial-time checkable refutation (when the instance is NO).

5.2.2 Complexity of GRAPH RIGIDITY

It is stated in [130] that the problem of determining whether a given graph is (generically) infinitesimally rigid in R^K is in NP, since, given a framework (G, x), where the realization x plays the role of certificate, it suffices to compute the rank of the rigidity matrix to establish rigidity according to Equations (7)–(8). This assertion is not false, but it should be made clear that it does not refer to the TM computational model. So far, to the best of our knowledge, there is no polynomial-time bounded algorithm for computing the exact rank of any possible matrix in the TM computational model. Investigations on the complexity of computing the matrix rank do exist, but they are either based on different models of computation, such as the real RAM or the number of field operations [30, 35], or else they place the problem in altogether different


complexity classes than P or NP. Specifically, computing (as well as verifying) the matrix rank over rationals appears to be related to counting classes. These classes catalog the complexity of the problems of counting the number of solutions of some given decision problems [6, 73, 111]. On the other hand, it should be clear from Sections 5.1.4–5.1.5 that, in the case K ∈ {1, 2}, the problem of determining generic rigidity of a graph is in P.

5.2.3 NP-Hardness of the DGP

It was shown in [134] that the DGP is NP-hard. More specifically, the proof exhibits a reduction from the PARTITION problem (known to be NP-complete) to the EDGP with K = 1 (denoted EDGP₁) restricted to the class of graphs consisting of a single simple cycle. The PARTITION problem is as follows. Given n positive integers a₁, ..., a_n, determine whether there exists a subset I ⊆ {1, ..., n} such that

  Σ_{i∈I} a_i = Σ_{i∉I} a_i.                                            (10)

We reduce an instance a = (a_i | 1 ≤ i ≤ n) of PARTITION to the simple cycle C = (V, E) where V = {1, ..., n} and E = {{i, i+1} | 1 ≤ i < n} ∪ {{1, n}}. We weigh the edges of C with the integers in a, and let d be the edge weight function such that:

  d_{1,n} = a₁   and   ∀ 1 < i ≤ n: d_{i−1,i} = a_i.

Now suppose a is a YES instance of PARTITION: we aim to show that (C, d) is a YES instance of the EDGP₁. Since a is YES, there is an index set I such that Equation (10) holds. We construct a realization of (C, d) on the real line inductively as follows:
1. x₁ = 0;
2. for all 1 < i ≤ n, suppose x_{i−1} is known: then if i ∈ I let x_i = x_{i−1} + d_{i−1,i}, otherwise let x_i = x_{i−1} − d_{i−1,i}.
It is obvious by construction that this realization satisfies all of the distances d_{i−1,i} for 1 < i ≤ n. It remains to be shown that |x₁ − x_n| = d_{1n}. We assume without loss of generality that 1 ∈ I (the argument would be trivially symmetric in assuming 1 ∉ I). We remark that

  d_{1n} + Σ_{i∈I∖{1}} (x_i − x_{i−1}) = d_{1n} + Σ_{i∈I∖{1}} d_{i−1,i} = Σ_{i∈I} a_i = Σ_{i∉I} a_i = Σ_{i∉I} d_{i−1,i} = Σ_{i∉I} (x_{i−1} − x_i),

where the central equality holds by Equation (10). Now from the equality between LHS and RHS we have:

  d_{1n} + Σ_{i∈I∖{1}} (x_i − x_{i−1}) = Σ_{i∉I} (x_{i−1} − x_i)
  ⇒ d_{1n} = − Σ_{1<i≤n} (x_i − x_{i−1}) = x₁ − x_n,

so that |x₁ − x_n| = d_{1n}, and (C, d) is a YES instance of the EDGP₁, as claimed.

Conversely, suppose that (C, d) is a YES instance of the EDGP₁, with realization x, and, aiming at a contradiction, that a is a NO instance of PARTITION. Let

  F = {{u, v} ∈ E | x_u > x_v},   F̄ = E ∖ F,

and let I = {1 < i ≤ n | {i−1, i} ∈ F}, adding 1 to I if {1, n} ∈ F. Since the total displacement along the cycle is zero (summing x_u − x_v over all edges, oriented consistently, yields 0), we have:

  Σ_{{u,v}∈F} (x_u − x_v) = Σ_{{u,v}∈F̄} (x_v − x_u)
  ⇒ Σ_{{u,v}∈F} |x_u − x_v| = Σ_{{u,v}∈F̄} |x_v − x_u|
  ⇒ Σ_{{u,v}∈F} d_{uv} = Σ_{{u,v}∈F̄} d_{uv}
  ⇒ Σ_{i∈I} a_i = Σ_{i∉I} a_i,

against the assumption that a is a NO instance of PARTITION.

The above argument shows that the subclass EDGP₁ of the DGP, restricted to simple cycle graphs, is NP-hard. Since it is contained in the whole DGP class, the DGP itself must be NP-hard, for if it were not, it would be possible to solve even the restricted subclass in an easier way. We remark that [134] also proves (using a reduction from a different problem) stronger complexity results for the DGP, namely that it is strongly NP-hard even for fixed K and for edge weights in {1, 2}.
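The reduction is easy to experiment with. Below is a minimal Python sketch (ours) that builds the realization of the cycle from a candidate index set I with 1 ∈ I, and verifies the YES certificate:

def realize_cycle(a, I):
    # vertices 1..n with a[i-1] = a_i; x_1 = 0; step right iff i in I
    x = [0.0]
    for i in range(2, len(a) + 1):
        x.append(x[-1] + a[i - 1] if i in I else x[-1] - a[i - 1])
    return x

def is_valid_realization(a, x, eps=1e-9):
    n = len(a)
    edges_ok = all(abs(abs(x[i] - x[i - 1]) - a[i]) < eps for i in range(1, n))
    closing_ok = abs(abs(x[0] - x[n - 1]) - a[0]) < eps  # edge {1, n}, d_1n = a_1
    return edges_ok and closing_ok

a = [3, 1, 2, 2, 4]              # sum 12; I = {1, 2, 3} gives 3 + 1 + 2 = 2 + 4
x = realize_cycle(a, {1, 2, 3})
print(x, is_valid_realization(a, x))   # [0.0, 1.0, 3.0, 1.0, -3.0] True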

5.2.4 Is the DGP in NP?

In general, the DGP is not known to be in NP. The reason is that, in the TM model, numerical inputs are always constituted by rational numbers, which means that Equation (1) might have irrational solutions. In other words, the realizations of the given graph could involve irrational (though algebraic) numbers in their components. Although there are a few finite encodings of algebraic numbers, none of them has been shown (so far) to be a good candidate for verifying that a realization satisfies Equation (1) exactly [18]. The EDMCP is not in NP for much the same reasons. Both, however, are in NP with respect to the real RAM computational model [23]. We remark that the EDGP₁ is in NP: any irrational realization can be translated so that some vertex (say vertex 1) is at a rational point. Since the distances are all rational (as they are part of the input), all of the other vertices will then be at rational points too. These rational points can be verified to satisfy the square of Equation (1) exactly in polynomial time. Since the EDGP₁ is both NP-hard and in NP, it is NP-complete. It is shown in [48] that the DGP in the ℓ₁ and ℓ∞ norms belongs to NP independently of the dimension.

5.2.5 EDMs and PSD Matrices

As mentioned in Section 3 above, EDMs and positive semidefinite (PSD) matrices are strongly related. Let D be a sqEDM, and assume without loss of generality that D is yielded by an n × K realization matrix x which is centred, i.e. Σ_{v≤n} x_v = 0_K (the all-zero K-vector). Then from the identity:

  ∀u, v ≤ n   ‖x_u − x_v‖₂² = ‖x_u‖₂² + ‖x_v‖₂² − 2 x_u · x_v,          (11)

we obtain the matrix equation:

  D = r 1ᵀ + 1 rᵀ − 2B,                                                  (12)

where B = x xᵀ is the Gram matrix of x, r is the column n-vector consisting of the diagonal elements of x xᵀ, and 1 is the all-one n-vector. We now consider the centring matrix J = I − (1/n) 1 1ᵀ, where I is the n × n identity matrix: when applied to an n × K matrix y, Jy is translation congruent to y and such that Σ_{i≤n} (Jy)_i = 0_K. We remark that J is symmetric, so Jᵀ = J. From Equation (12) we obtain:

  −(1/2) J D J = −(1/2) (J r 1ᵀ J + J 1 rᵀ J) + J B J
               = −(1/2) (J r 0_nᵀ + 0_n rᵀ J) + B = B,

whence

  B = −(1/2) J D J.                                                      (13)

Note that J 1 = 0_n, since centring the all-one vector trivially yields the all-zero vector, and that J B J = J x xᵀ J = x xᵀ = B, since x was assumed centred at zero.

It is easy to see that a matrix is Gram if and only if it is PSD. If B is a Gram matrix, then there is an n × K realization matrix x such that B = x xᵀ. Let x̄ be the n × n matrix obtained by padding x on the right with zero columns; then B = x̄ x̄ᵀ is a valid (real) factorization of B, which means that all eigenvalues of B are non-negative. Conversely, suppose B is PSD; the eigendecomposition of B is Pᵀ Λ P, where Λ is a diagonal matrix with the eigenvalues along the diagonal. Since B is PSD, Λ ≥ 0, which means that √Λ is a real diagonal matrix. Then by setting x = Pᵀ √Λ we have B = x xᵀ, which proves that B is a Gram matrix. These two results prove that D is a sqEDM if and only if B is a PSD matrix. By this equivalence, it follows that the EDMCP is in P if and only if the PSD COMPLETION PROBLEM (PSDCP) is in P. Given the wealth of applications that can be modelled using SDP, establishing whether the PSDCP is in P makes the same question for the EDMCP even more important. In [85], it is observed that the PSDCP belongs to NP if and only if it belongs to coNP, so the PSDCP cannot be either NP-complete or coNP-complete unless NP = coNP.
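Equation (13) immediately yields the classical procedure for recovering a realization from a complete sqEDM: form B = −(1/2) J D J, then factor it through its eigendecomposition, as in the Gram-to-realization argument above. A minimal numpy sketch (ours):

import numpy as np

def realize_from_sqedm(D, K):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ D @ J                     # Gram matrix, Equation (13)
    lam, P = np.linalg.eigh(B)               # eigenvalues in ascending order
    lam = np.clip(lam[::-1][:K], 0.0, None)  # K largest, clamped at zero
    return P[:, ::-1][:, :K] * np.sqrt(lam)  # x such that B ~ x x^T

pts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])   # unit square
D = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)     # its sqEDM
x = realize_from_sqedm(D, 2)
D2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
print(np.allclose(D, D2))   # True: distances recovered up to congruence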

5.2.6 Relevance

Why is it important to determine the complexity of the EDMCP (or, equivalently, of the PSDCP)? Similarly to what was said in Section 5.1.8, from a practical point of view we can solve EDMCPs and PSDCPs in polynomial time, to any desired accuracy, using SDP solver technology. On the other hand, this is only a partial answer. Theoretically speaking, the issue as to whether P = NP, P ≠ NP, or even whether the whole question might actually be independent of the ZFC axioms [1], dominates the field of theoretical computer science and much of discrete mathematics. Identifying a problem that is in NP ∖ P would obviously settle the question in the negative. Every such candidate is interesting until eliminated. Even though the EDMCP is not the perfect candidate, on account of the uncertain status of its membership in NP, it is still one of the few known² practically relevant problems with this status.

² Another, and possibly better, candidate is the GRAPH ISOMORPHISM problem [12, 13].


Another more practical reason is the bottleneck represented by current SDP solver technology: although the algorithm is polynomial-time, and notwithstanding the fact that existing implementations are of high quality [120], solving SDPs with significantly more than thousands of variables and constraints is still a hurdle. It is hoped that research in complexity will drive a search for better SDP solution algorithms.

5.3 Number of Solutions

The issue of finding or estimating the number of solutions of a DGP instance prior to solving it was brought forward as a "major challenge in DG" by the scientific committee of the DGTA16 workshop [97] when putting together a (successful) NSF proposal to support the workshop. Although counting the solutions of DGP instances is related to establishing rigidity or flexibility of graphs and frameworks (see Section 5.1), the former is a finer-grained alternative to the latter. Rigidity results are qualitative insofar as they focus on three categories: unique solutions, a finite number of solutions, and infinitely many solutions. For some applications this is not enough, and the exact or approximate number of solutions is required, or at least helps in the solution process. In this sense, counting solutions deserves the status of an open research area independently of the studies on rigidity.

5.3.1 Either Finite or Uncountable

The first question that might arise is: can any DGP instance have an infinite, but countable, number of solutions? The answer to this question is negative, and comes from both topology and algebraic geometry. The "topological" proof rests on an observation made by John Milnor in 1964 [118]. If V ⊆ R^m is the variety defined by the system of p polynomial equations

  ∀i ≤ p   f_i(x₁, ..., x_m) = 0,                                        (14)

each of which has degree bounded above by the integer k ≥ 1, then the sum of the Betti numbers of V is bounded above by k(2k − 1)^{m−1}. A straight application of this result to the squared version of Equation (1) encoding the DGP leads to k = 2, m = Kn, V being the set of realizations satisfying the given DGP instance, and the sum of the Betti numbers being bounded above by 2(3^{Kn−1}), which is O(3^{Kn}). We now recall that the Betti numbers count, among other things, the connected components of a variety. This immediately yields that there are finitely many connected components. It is well known from basic topology that connected components can either be uncountable sets or consist of an isolated point.


A second method of proof consists in invoking the cylindrical algebraic decomposition results in real algebraic geometry [17, 19], which stem from Tarski’s quantifier elimination theory [141], to show that V consists of a finite number of connected components with a certain (cylindrical) structure. The result follows. While the algebraic geometry result is quantitative, meaning that it provides an explicit description of the geometrical properties of each connected component, the topological method simply proves the finiteness of the number of connected components, which is really all that is required.

5.3.2 Loop Flexes

An interesting observation about the "shape" of flexes in flexible graphs was made by Bruce Hendrickson in 1992 [71, Thm. 5.8]: if a graph is connected, (generically) flexible in R^K and has more than K + 1 vertices, then for almost all edge weight functions the realization set contains a submanifold that is diffeomorphic to a circle. The situation is sketched graphically in [71, Fig. 7], reported in Figure 2.

Fig. 2 Each flex is diffeomorphic to a circle

A manifold is a topological set which is locally homeomorphic to a Euclidean space; a homeomorphism is a continuous function between topological spaces which also has a continuous inverse function. A diffeomorphism is a smooth invertible function that maps a differentiable manifold to another, such that the inverse is also smooth. A function is smooth if it has continuous derivatives of any order everywhere. And, lastly, a manifold is differentiable if the composition of a local homeomorphism with the inverse of any other local homeomorphism is a differentiable map. All of these differential topology definitions formalize the concept that, although none of the graph vertices might really move in a circle, the flex itself contains a closed loop that is topologically equivalent to a circle. The proof proceeds by adding to the flexible graph G as many edges as are required to leave just one degree of freedom to the flex. It is then relatively easy to show that the flex is compact and that it is a manifold. A well-known result³ in differential topology shows that compact one-dimensional manifolds are diffeomorphic to circles or closed real intervals; since the flex is a loop, it cannot be diffeomorphic to an interval. This result was used by Hendrickson to prove that redundant rigidity is a necessary condition for solution uniqueness: if a graph is rigid but not redundantly so, then there is an edge {i, j} such that its removal yields a flex diffeomorphic to a circle. In almost all cases, there will be two points on this circle, corresponding to two incongruent realizations, that are compatible with the given edge distance d_ij (in Figure 2, the missing edge {a, f} has the same length ‖x_a − x_f‖₂ in both the left and the right picture).

³ See, e.g., en.wikipedia.org/wiki/Classification_of_manifolds.

5.3.3 Solution Sets of Protein Backbones

Proteins are organic molecules at the basis of life. They interact with other proteins and with cells and living tissues by chemical reaction of the atoms in their outside shell. For these reactions to happen, the protein must physically be able to "dock" to the prescribed sites, which requires it to have a certain geometrical shape. This way, proteins activate or inhibit life processes. Proteins consist of a backbone together with some side chains [40], the building blocks of which are a set of around twenty small atomic conglomerates called amino acids [145]. The problem of finding the shape of the protein can be decomposed into finding the shape of the backbone and then placing the side chains correctly [132, 133]. The backbone itself has an interesting graph structure, in that it provides an atomic order with a very convenient geometric property: we can estimate rather precisely the distances between each atom having rank greater than two in the order and its two immediate predecessors. We know the distances d_{i−1,i} for each i > 1 because they are covalent bond lengths; and, since we also know all of the covalent angles, we can also compute all of the missing sides of the consecutive triangles, which yields the distances d_{i−2,i}. This yields a graph consisting of n − 2 consecutive triangles adjacent by sides. Since most protein graphs need realizations in R³, the sides by which the triangles are attached provide some rotating hinges, which in turn means that we have uncountably many solutions. This is where Wüthrich's Nobel-prize-winning NMR-based techniques come in: we can estimate most distances smaller than about 6 Å. Note that, generally, the distances d_{i−3,i} are smaller than this threshold. After adding these distances to our protein graph, we have a structure consisting of a sequence of consecutive tetrahedra adjacent by faces, which is clearly rigid in 3D, though not globally rigid, as shown in Figure 3. From the point of view of the underlying graph, protein backbones are defined by containing a subgraph consisting of a sequence of consecutive 4-cliques adjacent by 3-cliques. When such a sequence consists of two 4-cliques, it is also known as a 5-quasiclique, since it is a graph on five vertices with all edges but one.
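For concreteness, the computation of d_{i−2,i} alluded to above is a direct application of the law of cosines to the triangle on atoms i − 2, i − 1, i (here θ_{i−1} is our notation for the covalent angle at atom i − 1):

  d_{i−2,i}² = d_{i−2,i−1}² + d_{i−1,i}² − 2 d_{i−2,i−1} d_{i−1,i} cos θ_{i−1}.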

Fig. 3 Two realizations of a quasiclique in R³ (the missing edge is dotted on the left, and bold on the centre and on the right)

Fig. 4 A 3D realization of an artificial protein backbone [87]

Recall that NMR estimates distances up to 6 Å: while these contain those between atoms i and i − 3, they may also contain other distances, if the protein backbone folds back in space close to itself (see Figure 4). So the edge set E of a typical protein backbone graph G = (V, E) also contains these other distances. We call these pruning distances (and the underlying edges pruning edges E_P), while the distances forming the clique sequence are called discretization distances (and the underlying edges discretization edges E_D). The subset of the MDGP containing these instances is called the DISCRETIZABLE MDGP (DMDGP) [90]. This problem was shown to be NP-hard in [102]. It can be solved using an algorithm called branch-and-prune (BP) [100, 122], which works inductively as follows: suppose that the atoms up to i − 1 (with i − 1 ≥ 3) have already been realized, the last one as x_{i−1} ∈ R³. Then, by trilateration [59], with probability 1 there will be at most two positions for x_i. This can be seen, e.g., in Figure 3 using the order (1, 3, 4, 5, 2): once vertex 5 is realized, there are two positions for vertex 2, one close to vertex 1 (Figure 3, right) and one further away (Figure 3, centre). The probability-zero cases, ignored by the algorithm, are those where the distances are exactly right for vertex 2 to be realized coplanar with x₃, x₄, x₅, in which case there is only one position for vertex 2. Another intuitive way of seeing this fact is that vertex i is at the intersection of the three spheres S(x_{i−3}, d_{i−3,i}), S(x_{i−2}, d_{i−2,i}), S(x_{i−1}, d_{i−1,i}). Such an intersection contains at most two points [39, 89] (see Figure 5).


Fig. 5 The intersection of three spheres in R3 contains at most two points

Once two positions have been found for vertex i, say x_i⁺ and x_i⁻, the BP algorithm checks whether either, both or neither of these are compatible with any pruning distance which might be adjacent to vertex i: the incompatible positions are pruned (ignored). After this step, there may remain zero, one or two compatible positions. If there are none, this particular branch of the recursion is pruned, and the search backtracks to previous recursion levels. If there is only one position, it is accepted, and the search is called recursively. If there are two positions, the search is branched, and each branch yields a recursive call. When x_n is placed (where n = |V|), a realization x has been found: this recursion path ends and x is stored in a solution set X. The search is then backtracked, until no more recursion calls are possible. The BP algorithm yields a search tree of height n with several interesting properties (a minimal computational sketch of the recursion is given at the end of this subsection).
(a) A necessary branching occurs at x₄ [90], since no pruning edges involving predecessors of 4 can be adjacent to it.
(b) If the pruning edges contain all of the {i − 4, i} pairs (for i > 4), then the only branching that occurs is the "4-th level branching" referred to in (a) above, and the BP runs in worst-case polynomial time.
(c) If the pruning edges contain {1, n}, then the instance has only two realizations, which are in fact reflections of each other through the symmetry induced by the 4-th level branching. Notwithstanding, BP does not necessarily run in polynomial time, since extensive branching might occur during the search, only to be pruned before or at the last (n-th) level.
(d) The worst-case running time of BP, in general, is exponential in n. In practice, however, it is very fast for considerably large instances, and very precise.
(e) The BP can be stopped heuristically, e.g. based on the number of solutions found, in order to make it faster. But it is still NP-hard to find even one solution, so it could behave exponentially nonetheless.


(f) There is nothing special about three-dimensional spaces: the BP also works in arbitrary Euclidean spaces R^K for any K ≥ 1.
(g) When run to completion, in general the BP finds all solutions modulo rotations and translations; if one of the two sides of the 4-th level branching is pruned, then X will contain all incongruent realizations (including w.r.t. reflections) for the given instance.
(h) The pruning edges induce a very elegant pattern of partial reflections over the backbone. Let T = ⋃X be the superposition in space of all of the realizations in X. It is shown in [102, 105] that the set T is invariant with respect to a certain set of partial reflection operators g_i, acting on realizations, that fix all of the vertex positions up to x_{i−1}, and then reflect x_i, ..., x_n with respect to the affine subspace spanned by x_{i−1}, ..., x_{i−K}. Moreover, the partial reflection group can be computed a priori as

  G_P = ⟨ g_i | i > K ∧ ∃{u, v} ∈ E_P (u + K < i ≤ v) ⟩.                 (15)

(i) This symmetry structure makes it possible to compute a single realization x, and then generate X as the orbit G_P x of x [124].
(j) More relevant to this open research area, this symmetry structure also allows the aprioristic computation of the exact number of partial realizations active at each of the n levels of the BP search tree [103]. In particular, this allows the control of the width (i.e. the breadth) of the tree as a function of the distribution of the pruning edges [102].
(k) The estimation of the number of partial solutions at each level of the tree has yielded a crucial empirical observation: if protein backbones fold, and the folds change direction only a logarithmic number of times in n, then the width of the tree is bounded above by a polynomial in n, which implies that the BP is a polynomial-time algorithm on such instances.
(l) It was shown in [47] that, using a very basic random generation model, DMDGP instances have a rapidly falling probability of having a nontrivial partial reflection group G_P.

The BP actually works on larger classes than the DMDGP: the largest class of instances that the BP can solve is called the DISCRETIZABLE DGP (DDGP) [123], and consists of instances having a vertex order such that any vertex i is adjacent to at least three adjacent (not necessarily immediate) predecessors. In this setting, however, the results about partial reflection symmetries listed above are invalid. The BP was also extended to solve instances of the interval DDGP (iDDGP) problem, i.e. where some of the distances in a DDGP instance are represented as intervals [32, 92]. In this setting, the BP ceases to be an exact and exhaustive algorithm, but it can still find a number of incongruent solutions to the instance.
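The following is a minimal, self-contained Python sketch of the BP recursion described above (our own simplified variant, not the authors' implementation: exact distances only, discretization edges hard-coded as {i−3, i}, {i−2, i}, {i−1, i}, and the first 3-clique placed canonically):

import numpy as np

def three_sphere_points(c1, c2, c3, r1, r2, r3):
    # Trilateration: intersection of S(c1,r1), S(c2,r2), S(c3,r3) in R^3.
    ex = (c2 - c1) / np.linalg.norm(c2 - c1)
    i = ex @ (c3 - c1)
    ey = c3 - c1 - i * ex; ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d, j = np.linalg.norm(c2 - c1), ey @ (c3 - c1)
    X = (r1**2 - r2**2 + d**2) / (2 * d)
    Y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * X) / (2 * j)
    Z2 = r1**2 - X**2 - Y**2
    if Z2 < -1e-9:
        return []                       # empty intersection
    Z = np.sqrt(max(Z2, 0.0))
    base = c1 + X * ex + Y * ey
    return [base] if Z < 1e-9 else [base + Z * ez, base - Z * ez]

def bp(xs, n, dist, sols, eps=1e-6):
    i = len(xs)                         # next vertex to realize (0-based)
    if i == n:
        sols.append(np.array(xs)); return
    for p in three_sphere_points(xs[i-3], xs[i-2], xs[i-1],
                                 dist[(i-3, i)], dist[(i-2, i)], dist[(i-1, i)]):
        # pruning: every pruning distance {u, i} with u < i - 3 must hold
        if all(abs(np.linalg.norm(p - xs[u]) - duv) <= eps
               for (u, v), duv in dist.items() if v == i and u < i - 3):
            bp(xs + [p], n, dist, sols)

def initial_clique(dist):
    d01, d02, d12 = dist[(0, 1)], dist[(0, 2)], dist[(1, 2)]
    x2 = (d01**2 + d02**2 - d12**2) / (2 * d01)
    return [np.zeros(3), np.array([d01, 0., 0.]),
            np.array([x2, np.sqrt(max(d02**2 - x2**2, 0.)), 0.])]

# Unit 4-clique (regular tetrahedron): BP finds its two mirror realizations.
dist = {(u, v): 1.0 for u in range(4) for v in range(u + 1, 4)}
sols = []
bp(initial_clique(dist), 4, dist, sols)
print(len(sols))   # 2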

5.3.4 Clifford Algebra

A particular type of geometric algebra, called Clifford algebra, is used to represent compactly and perform computations with various geometrical shapes in Euclidean


spaces. It was recently used to provide a compact representation of the solution set of DMDGP instances [93].

Fig. 6 Intersection of two spheres with a spherical shell: at most two circular arcs

Carlile Lavor's presentation at the DGTA16 workshop [97] focused on an interesting extension of this representation to those iDMDGP₃ instances where, for each v > 3, at most one of the three distances d_uv, where u is an adjacent predecessor of v, is represented as an interval. In such cases, the intersection of three spheres shown in Figure 5 becomes the intersection of two spheres and a spherical shell (see Figure 6), which turns out to consist of two circular arcs with probability 1. Currently, efforts are under way to use these representations in order to do branching on circular arcs [7–9] rather than on discretizations thereof [92], using a new atomic ordering for the protein backbone [95].

5.3.5 Phase Transitions Between Flexibility and Rigidity

Consider a random process by which edges are added to an initially empty graph with some probability p:
1. sample u, v randomly from V;
2. if {u, v} ∉ E, add {u, v} to E with probability p;
3. repeat.
At the outset, the graph is likely to consist of isolated edges (i.e. paths consisting of one edge), or pairs of adjacent edges (i.e. paths consisting of two edges), and is therefore flexible in R². As more and more edges get added, some rigid components appear that can move flexibly with respect to one another, and finally the whole graph becomes rigid. What is the value of η = 2|E|/(|V|(|V| − 1)) which marks the appearance of a single rigid component? Similar graph generation processes have been analysed with respect to the edge generation probability for the appearance of giant connected components in random graphs [25, 58].
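A toy version of this process is easy to simulate; the sketch below (ours) tests generic rigidity in R² numerically, via the rank of the rigidity matrix of a random realization (rank 2|V| − 3 for rigid graphs, by the Asimow–Roth characterization recalled in Section 5.1), rather than with the pebble game discussed next:

import numpy as np
import random

def is_generically_rigid_2d(n, E, x):
    # Rigidity matrix: one row per edge, entries x_u - x_v and x_v - x_u.
    R = np.zeros((len(E), 2 * n))
    for k, (u, v) in enumerate(E):
        R[k, 2*u:2*u+2] = x[u] - x[v]
        R[k, 2*v:2*v+2] = x[v] - x[u]
    return np.linalg.matrix_rank(R) == 2 * n - 3

def eta_at_first_rigidity(n, p=0.5, seed=0):
    rng = random.Random(seed)
    x = np.random.default_rng(seed).random((n, 2))   # generic with probability 1
    E = set()
    while not (E and is_generically_rigid_2d(n, sorted(E), x)):
        u, v = sorted(rng.sample(range(n), 2))
        if (u, v) not in E and rng.random() < p:
            E.add((u, v))
    return 2 * len(E) / (n * (n - 1))

print(eta_at_first_rigidity(12))   # eta when a single rigid component appears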


In the case of rigidity, Duxbury and Thorpe independently (but in two papers published consecutively in Phys. Rev. Lett., in which one cites the other) proposed percolation analysis in 1995 [77, 121] (also see [21, §4]). Essentially, they randomly add edges (with probability p) to an empty graph, and verify the rigidity of clusters after each edge addition using the so-called pebble game (see linkage.cs.umass.edu/pg/pg.html). This is a graph labelling algorithm which identifies isostaticity in the plane (by checking Laman's conditions (a)–(b) in Section 5.1.5) while flagging redundant rigidity, or determines flexibility. These simulations link the emergence of a giant rigid component to the parameters of the random edge addition process, such as the edge creation probability p. In his lectures on rigidity given at the Institut Henri Poincaré in Paris in 2005, Bob Connelly observes that, on the type of triangular plane tessellation graphs used by Duxbury and Thorpe, the percolation simulations agree with the theoretical observation that the whole graph needs p ≥ 2/3 in order to be rigid, which is required to achieve |E| ≥ 2|V| − 3 on average. An interesting open question about phase transitions is motivated by the following observation: while the DMDGP is NP-hard, the DGP_K (with K = 3) on protein backbone graphs where the edge {i − K − 1, i} has been added to the graph for all i > K + 1 can be solved efficiently using K-lateration (moreover, further increasing the number of edges helps, as the DGP can be solved efficiently on complete graphs too [54]). What is the critical value of the parameter η (defined above) that determines the phase transition from NP-hardness to tractability?

5.3.6 Relevance

This open area is mostly motivated by applications. There are applications, such as clock synchronization, wireless networks, autonomous underwater vehicles and others, where one is interested in DGP instances with exactly one solution (modulo congruences). There are other applications, such as those to molecular conformation (be it of proteins or nanostructures), where one is interested in all (finitely many) chiral isomers of the molecule in question. And there are areas, such as robotics, where one is interested in the whole solution manifolds, including flexes. This is not all. Although we consider molecular graphs to be rigid, which helps with computation, it is well known that atoms in molecules vibrate according to many factors, including the temperature. This means that the molecules undergo internal movement. Although these movements are not all necessarily of the flex type (meaning that some of them may not preserve all pairwise distances), the distances given by the strongest chemical bonds may well be (almost) preserved. So there is an analysis of flexibility involved [139]. Moreover, there are very few applications where the distances are known precisely. Mostly, distance errors are modelled as intervals, as in the iDGP (see Section 4.3). This necessarily introduces some flexibility in the frameworks. In most of the situations where a DGP instance of interest has more than one solution, it helps to have an a priori estimation of the number of solutions. In the presence of flexes, it also helps to have an idea of the type of flex involved.


5.4 The Unassigned DGP

The uDGP was briefly introduced in Section 4.6, but we formally state it here for clarity.

UNASSIGNED DGP (uDGP). Given a positive integer K > 0, a graph G = (V, E) and a sequence D of m = |E| positive real values (d_i | i ≤ m), determine whether there exist an assignment function α : {1, ..., m} → E and a realization x : V → R^K such that:

  ∀{u, v} ∈ E ∃i ≤ m   (α(i) = {u, v} ∧ ‖x_u − x_v‖ = d_i).

In general, this is a problem schema valid for any norm; usually, the uDGP is of interest in the Euclidean norm. The uDGP was previously studied only in the context K = 1 (see, e.g., [46, 96]). It was formally introduced in the context of nanostructure determination (see Section 5.4.1) in [56], as the optimization problem:

  min_{α,x} Σ_{{u,v}∈E} f(‖x_u − x_v‖₂ − d_{α⁻¹(u,v)}),                  (16)

for any strictly convex univariate function f achieving its global minimum at 0.

5.4.1 Determining Nanostructures from Spectra

In his talk at the DGTA16 workshop [97], Simon Billinge jokingly explained that, while the crystal structure determination is largely a solved problem, on account of one giving it to one’s grad student so she can push the “start” button on the X-ray machine, the nanostructure determination problem is far from being in the same class. In crystals, the translational symmetry implies that X-ray experiments yield a periodic response signal, which can be decomposed using Fourier analysis. For nanostructures, one can get a response signal only from a large set of similar nanoparticles with unknown orientation. The output of such experiments is a pair distribution function (PDF) g : R+ → [0, 1], which is a function mapping a distance value d to the frequency with which d occurs in the nanostructure (see Figure 7). The specificity of the uDGP input coming from the application to nanostructure determination is that it allows the estimation of all of the inter-atomic distances [21, 56] (so G is complete), and the output realization is required to be in R3 . We recall that, although realizing complete graphs is a tractable problem (either using trilateration [54] or matrix factoring via the connection between EDM and PSD matrices, see Section 5.2.5), the uDGP has the added difficulty of finding the assignment α at the same time as the realization, so it is not clear at all whether the uDGP on complete graphs might be tractable in full generality. An algorithm for solving the uDGP on complete graphs, called tribond, was proposed in [56] in full generality for any K, and shown in Algorithm 1. It is claimed in [56] that if the initial guess for D¯ is good, and the subsequent choices of vertex


Fig. 7 Toy example of a PDF borrowed from a presentation by P. Duxbury. PDFs arising from experimental data are noisy and look like continuous wiggly curves with peaks corresponding to observed distances
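To fix ideas about the uDGP input, here is a toy sketch (ours) computing the idealized, noise-free distance list of a known point set; up to binning and noise, this is exactly the information a PDF encodes:

import numpy as np
from itertools import combinations

def distance_list(points):
    return sorted(float(np.linalg.norm(p - q))
                  for p, q in combinations(points, 2))

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(distance_list(pts))   # three distances 1.0 and three sqrt(2)

The uDGP asks for the inverse operation: recovering the points (and the edge assignment) from such an unlabelled list.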

Algorithm 1 The tribond algorithm
Input: an integer K > 0, a graph G = (V, E), a sequence of m = |E| values D = (d₁, ..., d_m).
for each subsequence D̄ of K + 2 distances in D do
    if D̄ can be realized by a partial realization x̄ then
        for K + 2 < i ≤ n do
            for each subsequence S of K + 1 distances in D ∖ D̄ do
                if ∃ x_i ∈ R^K such that (x̄, x_i) is a realization for (D̄, S) then
                    x̄ ← (x̄, x_i), D̄ ← (D̄, S)
                    break
                end if
            end for
            if no subsequence S yields such an x_i then
                break
            end if
        end for
        if i = n then
            return x̄
        end if
    end if
end for
return infeasible

subsets S (see Algorithm 1) can always be extended to a full solution x, then this algorithm runs in polynomial time, since it does not incur the computational cost of looping over all subsets of cardinality K + 2 and K + 1. However, it is easy to notice that, if K is fixed, the tribond algorithm always takes polynomial running time: this is because the counts of such subsequences, (n choose K + 2) and (n − |D̄| choose K + 1), are polynomially bounded when K is fixed, as is the case for nanostructures (K = 3). The tribond algorithm, however, requires precise distance data, while the distance data coming from experiments are noisy. The LIGA algorithm [21] is a population-based heuristic designed to find realizations consistent with noisy distances or incomplete distance lists: it evaluates the fitness of each individual (partial) realization x by the mean square distance error, i.e.

  min_α (1/m) Σ_{{u,v}∈E} (‖x_u − x_v‖₂ − d_{α⁻¹(u,v)})².

Given its practical importance, we believe this problem requires more work. Specifically, given that we possess theoretically and practically efficient methods for solving assignment problems [31] and some practically efficient methods for solving the iDGP [92], the approach consisting in decomposing these two subproblems has not been sufficiently explored.

5.4.2 Protein Shape from NOESY Data

The NMR experiments that allow the estimation of inter-atomic distances in proteins are of the nuclear Overhauser effect (NOE) type: they are collectively called "NOE spectrometry", or NOESY for short. The actual NOESY output looks like a two-dimensional surface with some peaks at some positions on the plane, which has axes labelled by chemical shift, a relative frequency measured in parts-per-million (ppm). The peak intensity is related to the distance value arising between atoms within the atomic groups that resonate at the given ppm values. We formalize this problem as follows (T. Malliavin and B. Worley, 2016, Institut Pasteur, Paris, personal communication). Let V be a set of atoms, and I = {I_p ⊂ V | p < r} be a given system of r subsets of V (representing the atomic groups resonating at a given chemical shift), so that

The NMR experiments that allow the estimation of inter-atomic distances in proteins are of the nuclear overhauser effect (NOE) type: they are collectively called “NOE spectrometry”, or NOESY for short. The actual NOESY output looks like a twodimensional surface with some peaks at some positions on the plane, which has axes labelled by chemical shift, a relative frequency measured in part-per-million (ppm). The peak intensity is related with the distance value arising in atoms within atomic groups that resonate at the given ppm values. We formalize this problem as follows (T. Malliavin and B. Worley, 2016, Institut Pasteur, Paris, personal communication). Let V be a set of atoms, and I = {Ip ⊂ V | p < r} be a given system of r subsets of V (representing the atomic groups resonating at a given chemical shift), so that 2 Ip ⊆ V . p tan α, so y = cot θ ∈ [tan α, cot β]. Figure 6 shows domain of (x, y).

3.4 Existence of a Global Maximum of r₁² + r₂²

Theorem 1 The domain of (x, y) shown in Figure 6 is convex and compact.

Proof For any two points (x₁, y₁) and (x₂, y₂) of the domain of (x, y) we have

  x₁, x₂ ∈ [0, R]   and   y_k ∈ [(a sin β cos β − x_k)/(a sin²β), cot β]   (k = 1, 2).

For any constant θ ∈ [0, 1] we denote

  (x̄, ȳ) = θ(x₁, y₁) + (1 − θ)(x₂, y₂) = (θx₁ + (1 − θ)x₂, θy₁ + (1 − θ)y₂).

Since [0, R] is convex, x̄ = θx₁ + (1 − θ)x₂ ∈ [0, R]. Since

  y₁ ≥ (a sin β cos β − x₁)/(a sin²β)   and   y₂ ≥ (a sin β cos β − x₂)/(a sin²β),

we obtain

  θy₁ + (1 − θ)y₂ ≥ θ (a sin β cos β − x₁)/(a sin²β) + (1 − θ) (a sin β cos β − x₂)/(a sin²β)
                  = (a sin β cos β − (θx₁ + (1 − θ)x₂))/(a sin²β)
                  = (a sin β cos β − x̄)/(a sin²β).

Also, y₁ ≤ cot β and y₂ ≤ cot β give θy₁ + (1 − θ)y₂ ≤ θ cot β + (1 − θ) cot β = cot β. Hence

  ȳ ∈ [(a sin β cos β − x̄)/(a sin²β), cot β].

The last two facts imply that (x̄, ȳ) belongs to the domain of (x, y), so the domain of (x, y) is convex. The domain of (x, y) is also closed and bounded in R²; according to the Heine–Borel theorem, it is compact. ∎

According to the Weierstrass extreme value theorem and the continuity of the objective function f(x, y) = r₁² + r₂², we have:

Theorem 2 r₁² + r₂² reaches its global maximum on the domain of (x, y) shown in Figure 6.

3.5 Green Candidate and Red Candidate

According to (3), f(x, y) is convex in x, so for fixed y it attains its maximum at an endpoint of the range of x. In Figure 6 we can observe that, for any fixed y, T₁ and T₂ are the endpoints of the range of x, so f(x, y) attains its maximum on the green line or on the red line. In Section 4 we will show that one point on the green line and another point on the red line together form the global maximum candidate set of f(x, y). We call these two points the green candidate (GC) and the red candidate (RC) of the global maximum point set of f(x, y). Before calculating the GC and RC in Section 4, we use the following theorem to show that the GC and RC both lie in the domain of f(x, y); the intermediate results will also be useful in Section 4. Since we are considering the type 1 PT case, we use the notations GC_PT and RC_PT.

Theorem 3 GC_PT = (R (cot β cot α − √(cot β cot α))/(cot β cot α − 1), √(cot β/cot α)) and RC_PT = (R, tan α + sec α) both lie in the domain of (x, y) shown in Figure 6.

Proof First we prove that (R (cot β cot α − √(cot β cot α))/(cot β cot α − 1), √(cot β/cot α)) lies in the domain of (x, y). Substituting

  y = √(cot β/cot α) = √(cot β tan α)

into the boundary equation y = (a sin β cos β − x)/(a sin²β) yields

  x = a sin²β (cot β − √(cot β tan α)) = R (cot β − √(cot β tan α))/(cot β − tan α)
    = R (cot β cot α − √(cot β cot α))/(cot β cot α − 1),

where the second equality uses a sin²β = R/(cot β − tan α), and the third follows on multiplying numerator and denominator by cot α. Moreover, since

  1/(cot β cot α) = R²/((p − b)(p − a)) = pR²/(p(p − b)(p − a)) = (p − c)/p,

we get

  x = R (1 − √((p − c)/p))/(1 − (p − c)/p) = R/(1 + √((p − c)/p)) ∈ [0, R],

so the point (R (cot β cot α − √(cot β cot α))/(cot β cot α − 1), √(cot β/cot α)) lies on the line y = (a sin β cos β − x)/(a sin²β). It suffices to show that y ∈ [tan α, cot β]: since tan α < cot β, we have tan²α < cot β tan α < cot²β; as y² = cot β tan α and y > 0, it follows that y ∈ [tan α, cot β].

Figure 6 shows that x = R implies y ∈ [tan α, cot β], so we just need to prove that tan α + sec α ∈ [tan α, cot β]. First, tan α + sec α > tan α (as sec α > 0). We use the auxiliary function g(y) = y² − 2y tan α − 1:

  g(tan α + sec α) = (tan α + sec α)² − 2(tan α + sec α) tan α − 1
                   = tan²α + 2 tan α sec α + sec²α − 2 tan²α − 2 tan α sec α − 1
                   = sec²α − tan²α − 1 = 0,

  g(cot β) = cot²β − 2 tan α cot β − 1 = (p − b)²/R² − 2(p − b)/(p − a) − 1
           = (p − b)²/((p − a)(p − b)(p − c)/p) − 2(p − b)/(p − a) − 1
           = (p(p − b) − 2(p − b)(p − c) − (p − a)(p − c))/((p − a)(p − c))
           = (p² − bp − 2p² + 2bp + 2cp − 2bc − p² + ap + cp − ac)/((p − a)(p − c))
           = (−2p² + (a + b + c)p + 2cp − 2bc − ac)/((p − a)(p − c))
           = (−2p² + 2p² + c(a + b + c) − 2bc − ac)/((p − a)(p − c))
           = (ac + bc + c² − 2bc − ac)/((p − a)(p − c))
           = c(c − b)/((p − a)(p − c)) ≥ 0   (c ≥ b).

Since g(y) increases on [tan α, +∞), g(tan α + sec α) = 0 and g(cot β) ≥ 0 imply tan α + sec α ≤ cot β. ∎

4 Maximum of r₁² + r₂²: Type 1 PT Case

We denote t = x/R and show that r₁ and r₂ are functions of the variables t and y:

  x ∈ [0, R] ⇒ t ∈ [0, 1],   tan α ≤ y ≤ cot β,
  r₂ = x = tR,
  r₁ = (cy − xy cot β − x)/(y² + y cot α) = (cy − tRy cot β − tR)/(y² + y cot α)
     = R (y cot α + y cot β − ty cot β − t)/(y² + y cot α),

where we used c = R(cot α + cot β). Moreover,

  (a sin β cos β − x)/(a sin²β) = cot β − tR/(a sin²β) = cot β − t(cot β − tan α),

whence

  cot β − t(cot β − tan α) ≤ y ≤ cot β  ⇒  (cot β − y)/(cot β − tan α) ≤ t ≤ 1.

Because r₁² + r₂² = R² Q(t, y), we take as objective function

  Q(t, y) = ((y cot α + y cot β − ty cot β − t)/(y² + y cot α))² + t².

We have

  ∂Q/∂t = 2 ((y cot α + y cot β − ty cot β − t)/(y² + y cot α)²)(−y cot β − 1) + 2t,
  ∂²Q/∂t² = 2 (y cot β + 1)²/(y² + y cot α)² + 2 > 0.

∂²Q/∂t² > 0 means that Q(t, y) is convex in the parameter t, so for every fixed y,

  Max_t Q(t, y) = Max{Q((cot β − y)/(cot β − tan α), y), Q(1, y)}.

As y runs through [tan α, cot β], we thus have

  Max Q(t, y) = Max{Max_y Q((cot β − y)/(cot β − tan α), y), Max_y Q(1, y)}.

Geometrically, Max Q((cot β − y)/(cot β − tan α), y) is searched on the green line of the domain, and Max Q(1, y) on the red line.

4.1 Max Q((cot β − y)/(cot β − tan α), y): Searching on the Green Line of the Domain

We denote T(y) = Q((cot β − y)/(cot β − tan α), y).

β−y 4.1 Max Q( cotcotβ−tan α , y): Searching on the Green Line of the Domain β−y We denote T (y) = Q( cotcotβ−tan α , y)

y ∈ [tan α, cot β] =⇒ Max T (y) = Max{T (tan α), T (cot β), T ( dT dy = 0)} y cot α + y cot β − ty cot β − t y 2 + y cot α =

β−y cot β−y y cot α + y cot β − ( cotcotβ−tan α )y cot β − cot β−tan α

y 2 + y cot α

A Circle Packing Problem and Its Connection to Malfatti’s Problem

=

y cot β cot α − y + y cot2 β − y cot β tan α − y cot2 β + y 2 cot β − cot β + y (y 2 + y cot α)(cot β − tan α)

=

y cot β cot α − y cot β tan α + y 2 cot β − cot β (y 2 + y cot α)(cot β − tan α)

=

y cot β(y + cot α) − cot β tan α(y + cot α) y(y + cot α)(cot β − tan α)

=

y cot β − cot β tan α y(cot β − tan α)

T (y) = [ =( =

y cot β − cot β tan α 2 cot β − y 2 ] +( ) y(cot β − tan α) cot β − tan α

cot β cot α − coty β cot β cot α − 1

)2 + (

cot β cot α − y cot α 2 ) cot β cot α − 1

1 cot β 2 ) + (cot β cot α − y cot α)2 ] [(cot β cot α − y (cot β cot α − 1)2

T (tan α) =

cot β 2 1 ) + (cot β cot α − tan α cot α)2 ] = 1 [(cot β cot α − tan α (cot β cot α − 1)2

T (cot β) = (

cot β 2 1 )2 [(cot β cot α − ) + (cot β cot α − cot β cot α)2 ] = 1 cot β cot α − 1 cot β

dT =0 dy =⇒ 2(cot β cot α −

cot β cot β ) 2 + 2(cot β cot α − y cot α)(− cot α) = 0 y y

1 cot2 β =⇒ cot2 β cot α 2 − − cot β cot2 α + y cot2 α = 0 y y3 1 1 =⇒ cot β cot α 2 (cot β − y 2 cot α) − 3 (cot2 β − y 4 cot2 α) = 0 y y 1 1 =⇒ cot β cot α 2 (cot β − y 2 cot α) − 3 (cot β − y 2 cot α)(cot β + y 2 cot α) = 0 y y 1 =⇒ (cot β − y 2 cot α) 3 (y cot β cot α − cot β − y 2 cot α) = 0 y 1 y3

237

>0

cot β − y 2 cot α = 0 or y cot β cot α − cot β − y 2 cot α = 0

238

D. Munkhdalai and R. Enkhbat

cot β − y 2 cot α = 0



=⇒ y =

cot β cot α

y cot β cot α − cot β − y 2 cot α = 0 =⇒ cot β cot α −

cot β = y cot αand y

y 2 cot α − y cot β cot α = − cot β T (y) =

1 cot β 2 ) [(cot β cot α − y (cot β cot α − 1)2

+ (cot β cot α − y cot α)2 ] =

1 (2y 2 cot2 α − 2y cot2 α cot β (cot β cot α − 1)2

+ cot2 β cot2 α) =

1 [2 cot α(y 2 cot α − y cot α cot β) (cot β cot α − 1)2

+ cot2 β cot2 α] =

cot2 β cot2 α − 2 cot β cot α S(tan α) cot2 β + cot β cot α

dS =0 dy =⇒ (

y cot α − 1 y 2 cot α + y cot2 α − (y cot α − 1)(2y + cot α) )[ ]=0 y 2 + y cot α (y 2 + y cot α)2

=⇒ (

y cot α − 1 −y 2 cot α + 2y + cot α) )[ ]=0 y 2 + y cot α (y 2 + y cot α)2

y cot α − 1 = 0 =⇒ y = tan α is already calculated Let us calculate −y 2 cot α + 2y + cot α = 0 − y 2 cot α + 2y + cot α = 0 =⇒ y 2 − 2y tan α − 1 = 0 =⇒ y = tan α + sec α y cot α − 1 (tan α + sec α) cot α − 1 = y 2 + y cot α (tan α + sec α)2 + (tan α + sec α) cot α sec α cot α = tan2 α + sec2 α + 2 sec α tan α + 1 + sec α cot α sec α cot α = 2 sec2 α + 2 sec α tan α + sec α cot α 1 = 2 sec α tan α + 2 tan2 α + 1 1 = 2 sec α tan α + tan2 α + sec α =

sec2 α − tan2 α (sec α + tan α)2

sec α − tan α) sec α + tan α sec α − tan α 2 ) +1 S(tan α + sec α) = ( sec α + tan α =

T heorem 3 =⇒ tan α + sec α ≤ cot β in [tan α + sec α, cot β],y cot α − 1 > 0, y 2 + y cot α > 0, − y 2 cot α + 2y + cot α ≤ 0

(4)

240

D. Munkhdalai and R. Enkhbat

cot β cot α − 1 −y 2 cot α + 2y + cot α) dS (y) = ( 2 )[ ]≤0 dy y + y cot α (y 2 + y cot α)2 S(y) does not increase in [tan α + sec α, cot β] =⇒ S(cot β) ≤ S(tan α + sec α) Max S(y) = S(tan α + sec α) Max Q(1, tan α + sec α) = S(tan α + sec α)

4.3 Max(r₁² + r₂²) in the Type 1 PT Case

We can combine the above two results as

  Max Q(t, y) = Max{Max_y Q((cot β − y)/(cot β − tan α), y), Max_y Q(1, y)}
              = Max{1, 2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))², Q(1, tan α + sec α)}.

Because Q(1, tan α + sec α) = S(tan α + sec α) = ((sec α − tan α)/(sec α + tan α))² + 1 ≥ 1 = S(tan α), the candidate 1 may be discarded. Therefore

  Max Q(t, y) = Max{2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))², ((sec α − tan α)/(sec α + tan α))² + 1},

and so

  Max(r₁² + r₂²) = Max{2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))², ((sec α − tan α)/(sec α + tan α))² + 1} R².

5 Maximum of r₁² + r₂²: Type 2 QR Case

We just need to exchange b with c and γ with β in all the formulas of Section 4; for example,

  f(x, y) = r₁² + r₂² = ((by − xy cot γ − x)/(y² + y cot α))² + x².

Because the relation c ≥ b is not symmetric, however, the sign of g(cot γ) changes:

  g(cot γ) = cot²γ − 2 tan α cot γ − 1 = (p − c)²/R² − 2(p − c)/(p − a) − 1
           = (p − c)²/((p − a)(p − b)(p − c)/p) − 2(p − c)/(p − a) − 1
           = (p(p − c) − 2(p − c)(p − b) − (p − a)(p − b))/((p − a)(p − b))
           = (p² − cp − 2p² + 2cp + 2bp − 2bc − p² + ap + bp − ab)/((p − a)(p − b))
           = (−2p² + (a + b + c)p + 2bp − 2bc − ab)/((p − a)(p − b))
           = (−2p² + 2p² + b(a + b + c) − 2bc − ab)/((p − a)(p − b))
           = (ab + bc + b² − 2bc − ab)/((p − a)(p − b))
           = b(b − c)/((p − a)(p − b)) ≤ 0   (c ≥ b).

Since g(y) increases on [tan α, +∞), g(tan α + sec α) = 0 and g(cot γ) ≤ 0 imply tan α + sec α ≥ cot γ. Thus tan α + sec α lies beyond [tan α, cot γ], so RC_QR becomes (R, cot γ), and we have the following theorem in the type 2 QR case.

Theorem 4 GC_QR = (R (cot γ cot α − √(cot γ cot α))/(cot γ cot α − 1), √(cot γ/cot α)) and RC_QR = (R, cot γ) both lie in the domain of (x, y).

6 Maximum of r₁² + r₂²

The candidate point set is {GC_PT, RC_PT, GC_QR, RC_QR}. At RC_QR = (R, cot γ), r₁² + r₂² is geometrically the same as at the point (R, cot β) of the type 1 PT case, hence smaller than r₁² + r₂² at RC_PT = (R, tan α + sec α). The candidate point set becomes {GC_PT, RC_PT, GC_QR}, with

  Max(r₁² + r₂²) = Max{2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))², ((sec α − tan α)/(sec α + tan α))² + 1, 2 ((cot γ cot α − √(cot γ cot α))/(cot γ cot α − 1))²} R².

Now

  (cot β cot α − √(cot β cot α))/(cot β cot α − 1)
  = √(cot β cot α)(√(cot β cot α) − 1)/((√(cot β cot α) + 1)(√(cot β cot α) − 1))
  = √(cot β cot α)/(√(cot β cot α) + 1)
  = 1/(1 + √(tan β tan α)) > 1/(1 + √(tan γ tan α))   (β < γ);

also,

  (cot γ cot α − √(cot γ cot α))/(cot γ cot α − 1) = √(cot γ cot α)/(√(cot γ cot α) + 1) = 1/(1 + √(tan γ tan α)),

whence

  2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))² > 2 ((cot γ cot α − √(cot γ cot α))/(cot γ cot α − 1))².      (5)

The candidate point set becomes {GC_PT, RC_PT}:

  Max(r₁² + r₂²) = Max{2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))², ((sec α − tan α)/(sec α + tan α))² + 1} R².

We have the following theorem about Max(r₁² + r₂²).

Theorem 5 When the cutting line goes through the AB and AC sides, r₁² + r₂² reaches its maximum at (R (cot β cot α − √(cot β cot α))/(cot β cot α − 1), √(cot β/cot α)) or at (R, tan α + sec α); the maximum value is 2 ((cot β cot α − √(cot β cot α))/(cot β cot α − 1))² R² or (((sec α − tan α)/(sec α + tan α))² + 1) R², respectively.

7 Proof of Malfatti's Problem for n = 2

7.1 The Cutting Line Goes Through All Three Sides of the Triangle

If the cutting line goes through AB, AC, and BC, then according to Theorem 5 and (5) we have

Max(r₁² + r₂²) = Max{2(1/(1 + √(tan β tan α)))², ((sec α − tan α)/(sec α + tan α))² + 1, 2(1/(1 + √(tan β tan γ)))², ((sec β − tan β)/(sec β + tan β))² + 1, 2(1/(1 + √(tan β tan γ)))², ((sec γ − tan γ)/(sec γ + tan γ))² + 1} · R².

Since BC is the longest side, the term 2(1/(1 + √(tan β tan γ)))² appears twice; dropping the duplicate, we have


Max(r₁² + r₂²) = Max{2(1/(1 + √(tan β tan α)))², 2(1/(1 + √(tan β tan γ)))², ((sec α − tan α)/(sec α + tan α))² + 1, ((sec β − tan β)/(sec β + tan β))² + 1, ((sec γ − tan γ)/(sec γ + tan γ))² + 1} · R²,

and since α ≥ γ, we get

Max(r₁² + r₂²) = Max{2(1/(1 + √(tan β tan γ)))², ((sec α − tan α)/(sec α + tan α))² + 1, ((sec β − tan β)/(sec β + tan β))² + 1, ((sec γ − tan γ)/(sec γ + tan γ))² + 1} · R².

((sec β − tan β)/(sec β + tan β))² + 1 = ((1 − sin β)/(1 + sin β))² + 1 = 2(1 + sin²β)/(1 + sin β)²   (6)

((sec γ − tan γ)/(sec γ + tan γ))² + 1 = ((1 − sin γ)/(1 + sin γ))² + 1 = 2(1 + sin²γ)/(1 + sin γ)²   (7)

If β ≤ γ, then tan β ≤ √(tan β tan γ), so

2(1/(1 + √(tan β tan γ)))² = 2/(1 + √(tan β tan γ))² < 2(1 + sin²β)/(1 + √(tan β tan γ))² ≤ 2(1 + sin²β)/(1 + tan β)² < 2(1 + sin²β)/(1 + sin β)² = (6)

⟹ 2(1/(1 + √(tan β tan γ)))² < ((sec β − tan β)/(sec β + tan β))² + 1.   (8)

And if β > γ, then tan γ < √(tan β tan γ), so

2(1/(1 + √(tan β tan γ)))² = 2/(1 + √(tan β tan γ))² < 2(1 + sin²γ)/(1 + √(tan β tan γ))² < 2(1 + sin²γ)/(1 + tan γ)² < 2(1 + sin²γ)/(1 + sin γ)² = (7)

⟹ 2(1/(1 + √(tan β tan γ)))² < ((sec γ − tan γ)/(sec γ + tan γ))² + 1.   (9)

So

Max(r₁² + r₂²) = Max{((sec α − tan α)/(sec α + tan α))² + 1, ((sec β − tan β)/(sec β + tan β))² + 1, ((sec γ − tan γ)/(sec γ + tan γ))² + 1} · R².

From (8) and (9), we have

Theorem 6 When the cutting line goes through the AB, AC, and BC sides, r₁² + r₂² reaches its maximum at (R, tan α + sec α), (R, tan β + sec β), or (R, tan γ + sec γ); the maximum value is [((sec α − tan α)/(sec α + tan α))² + 1]R², [((sec β − tan β)/(sec β + tan β))² + 1]R², or [((sec γ − tan γ)/(sec γ + tan γ))² + 1]R², respectively.


((sec α − tan α)/(sec α + tan α))² + 1 = ((1 − sin α)/(1 + sin α))² + 1 = [(1 − sin α)²/(1 − sin²α)]² + 1 = ((1 − sin α)/cos α)⁴ + 1 = [tan(π/4 − α/2)]⁴ + 1.

We have assumed α ≥ γ ≥ β, so

[tan(π/4 − α/2)]⁴ + 1 ≤ [tan(π/4 − γ/2)]⁴ + 1 ≤ [tan(π/4 − β/2)]⁴ + 1,

and we have Max(r₁² + r₂²) = [((sec β − tan β)/(sec β + tan β))² + 1]R². Finally, we have

Theorem 7 When the cutting line goes through the AB, AC, and BC sides, if β is the smallest half angle of △ABC, then r₁² + r₂² reaches its maximum at (R, tan β + sec β); the maximum value is [((sec β − tan β)/(sec β + tan β))² + 1]R².

7.2 Malfatti's n = 2 Problem as a Corollary

Theorem 8 At the point (R, tan α + sec α), the circles of radii r₁ and r₂ are in tangent position.

Proof We just need to prove that |O₁O₂| = r₁ + r₂.

r₁ = (cy − xy cot β − x)/(y² + y cot α) = (R(cot α + cot β)y − Ry cot β − R)/(y² + y cot α) = R(y cot α − 1)/(y² + y cot α).

(4) ⟹ (y cot α − 1)/(y² + y cot α) = (sec α − tan α)/(sec α + tan α) = (1 − sin α)/(1 + sin α), so

r₁ + r₂ = R(1 + (1 − sin α)/(1 + sin α)) = 2R/(1 + sin α).

Next,

(r₁/sin θ)² + (r₂/cos θ)²
= (R/cos θ)² + ((cy − xy cot β − x)/((y² + y cot α) sin θ))²
= (R/cos θ)² + ((R(cot α + cot β)y − Ry cot β − R)/((y² + y cot α) sin θ))²
= (R/cos θ)² + (R/sin θ)²((y cot α − 1)/(y² + y cot α))²,

and by (4),

(r₁/sin θ)² + (r₂/cos θ)²
= (R/cos θ)² + (R/sin θ)²((sec α − tan α)/(sec α + tan α))²
= (R/cos θ)² + (R/sin θ)²((sec α − tan α)/cot θ)²   (using cot θ = sec α + tan α)
= (R/cos θ)²(1 + sec²α − 2 sec α tan α + tan²α)
= (R/cos θ)²(2 sec²α − 2 sec α tan α)
= R² sec²θ · 2 sec α(sec α − tan α).

Here

sec²θ = 1 + tan²θ = 1 + (1/(sec α + tan α))² = (sec²α + 2 sec α tan α + tan²α + 1)/(sec α + tan α)² = (2 sec²α + 2 sec α tan α)/(sec α + tan α)² = 2 sec α/(sec α + tan α),

so

(r₁/sin θ)² + (r₂/cos θ)² = 4R² sec α · sec α(sec α − tan α)/(sec α + tan α) = 4R²(sec α/(sec α + tan α))²(sec²α − tan²α) = 4R²(1/(1 + sin α))².

Hence

|O₁O₂| = √((r₁/sin θ)² + (r₂/cos θ)²) = 2R/(1 + sin α) = r₁ + r₂.

The following theorem also holds symmetrically.


Theorem 9 At the point (R, tan β + sec β), the circles of radii r₁ and r₂ are in tangent position.

According to Theorem 9, the following result, which solves Malfatti's two-circle problem, is obtained as a corollary of Theorem 7. It is also the greedy arrangement of the two circles in the triangle.

Corollary 10 Among all pairs of tangent circles O₁ and O₂ inside △ABC, if β is the smallest half angle of △ABC, then r₁² + r₂² reaches its maximum value [((sec β − tan β)/(sec β + tan β))² + 1]R².

Appendix A

[Figure: triangle ABC with a point M on side BC; ω₁ = C(I₁, r₁) and ω₂ = C(I₂, r₂) are the inscribed circles of ABM and ACM, touching BC at X and Y and touching AM at U and V; a, b, c label the sides.]

Find a point M on the side BC for which the sum of the areas of the inscribed circles of ABM and ACM reaches its maximum value. (R. Enkhbat)

The solution is given by Luvsanbyamba [4]. Let us denote the inscribed circles of ABC, ABM, AMC by ω = C(I, r), ω₁ = C(I₁, r₁), ω₂ = C(I₂, r₂), respectively, the area of triangle ABC by S, and the height from the vertex A by h.

Lemma pr² + ar₁r₂ = pr(r₁ + r₂).

Proof Let BC ∩ ω₁ = X, BC ∩ ω₂ = Y, AM ∩ ω₁ = U, AM ∩ ω₂ = V. Without loss of generality b > c. Let us denote BX = m, CY = n. Then UV = b − c + m − n, XY = a − m − n, and

tan(∠B/2) = r₁/m = r/(p − b),  tan(∠C/2) = r₂/n = r/(p − c).


UV² + (r₁ + r₂)² = |I₁I₂|² = XY² + (r₂ − r₁)²
⟹ UV² + 4r₁r₂ = XY²
⟺ [(b − c) + (m − n)]² + 4r₁r₂ = [a − (m + n)]²
⟺ 2a(m + n) + 2(b − c)(m − n) + 4r₁r₂ = a² − (b − c)² + 4mn
⟹ (p − c)m + (p − b)n + r₁r₂ = (p − b)(p − c) + mn
⟹ (p − c)·(p − b)r₁/r + (p − b)·(p − c)r₂/r + r₁r₂ = (p − b)(p − c) + (p − b)r₁·(p − c)r₂/r²
⟹ (p − b)(p − c)(r − r₁)(r − r₂) = r²r₁r₂;  since (p − a)(p − b)(p − c) = pr²,
⟹ p(r − r₁)(r − r₂) = (p − a)r₁r₂ ⟹ pr² + ar₁r₂ = pr(r₁ + r₂). Q.E.D.

Let F(r₁, r₂) = r₁² + r₂². Using the lemma, G(r₁, r₂) = r₁ + r₂ − 2r₁r₂/h = r = const. Let us consider H(r₁, r₂) = F(r₁, r₂) − λG(r₁, r₂):

∂H/∂r₁ = 2r₁ − λ(1 − 2r₂/h) = 0
∂H/∂r₂ = 2r₂ − λ(1 − 2r₁/h) = 0

⟺ r₁/r₂ = (h − 2r₂)/(h − 2r₁) ⟺ h(r₁ − r₂) = 2(r₁² − r₂²) ⟺ r₁ = r₂ or 2(r₁ + r₂) = h.

If r₁ = r and r₂ = 0, then F₀ = F(r₁, r₂) = r₁² + r₂² = r².

If 2(r₁ + r₂) = h, then the lemma implies h²/2 − 2r₁r₂ = rh, so

F₂ = F(r₁, r₂) = (r₁ + r₂)² − 2r₁r₂ = h²/4 − (h²/2 − rh) = rh − h²/4,

and

F₀ > F₂ ⟺ r² > rh − h²/4 ⟺ (r − h/2)² > 0.

If r₁ = r₂, by the lemma we have 2r₁ − 2r₁²/h = r, so r₁ = (h − √(h² − 2rh))/2 and

F₁ = F(r₁, r₂) = 2r₁² = 2r₁h − rh = h² − h√(h² − 2rh) − rh.

F₀ < F₁ ⟺ r² < h² − h√(h² − 2rh) − rh ⟺ h√(h² − 2rh) < h² − r² − rh ⟺ 0 < r²(r² + 2rh − h²) ⟺ 0 < S²/p² + 4S²/(pa) − 4S²/a² ⟺ (b + c)² < (a√2)² ⟺ b + c < a√2.

Thus, in the case b + c < a√2 our sum r₁² + r₂² attains its maximum value iff r₁ = r₂, and otherwise it attains its maximum value iff r₁ = 0 or r₂ = 0.
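A brute-force numeric check of this case analysis is straightforward. The following sketch sweeps M along BC for an illustrative triangle; all names and coordinates are ours, not from the original solution.

```python
import math

def inradius(P, Q, T):
    x, y, z = math.dist(Q, T), math.dist(P, T), math.dist(P, Q)
    s = (x + y + z) / 2.0
    return math.sqrt(max(0.0, (s - x) * (s - y) * (s - z) / s))

B, C, A = (0.0, 0.0), (6.0, 0.0), (1.0, 2.0)     # illustrative triangle
a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)

best_t, best_val = 0.0, -1.0
for k in range(1, 10000):                         # sweep M along BC
    M = (6.0 * k / 10000.0, 0.0)
    r1, r2 = inradius(A, B, M), inradius(A, C, M)
    if r1 * r1 + r2 * r2 > best_val:
        best_t, best_val = M[0], r1 * r1 + r2 * r2

# Prediction: interior optimum with r1 = r2 iff b + c < a*sqrt(2); otherwise the
# maximum is attained at an endpoint of BC (r1 = 0 or r2 = 0).
print(b + c < a * math.sqrt(2.0), round(best_t, 2))
```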

References

1. Andreescu, T., Mushkarov, O., Stoyanov, L.: Geometric Problems on Maxima and Minima. Malfatti's Problems 2.3, 80 pp. Birkhäuser, Boston (2005)
2. Enkhbat, R., Barsbold, B.: Optimal inscribing of two balls into polyhedral set. In: Optimization, Simulation, and Control. Springer Optimization and Its Applications, vol. 76, pp. 35–47. Springer, Berlin (2013)
3. Enkhbat, R., Bayarbaatar, A.: On the maximum and minimum radius problems over a polyhedral set. Mongolian Math. J. 11, 23–33 (2007)
4. Enkhbat, R., Buyankhuu, L.: T2-2 find a point M on the side BC, for which sum of the areas of the inscribed circles of ABD and ACD reaches maximum value. Mongolian Mathematical Olympiad Series – No. 36 (2012)
5. Marco, A., Bezdek, A., Boroński, J.P.: The problem of Malfatti: two centuries of debate. Math. Intell. 33(1), 72–76 (2011)


6. Zalgaller, V.A., Los', G.A.: The solution of Malfatti's problem. J. Math. Sci. 72(4), 3163–3177 (1994)
7. Enkhbat, R., Barkova, M., Strekalovsky, A.S.: Solving Malfatti's high dimensional problem by global optimization. Numer. Algebra Control Optim. 6(2), 153–160 (2016)
8. Enkhbat, R., Barkova, M.: Global search method for solving Malfatti's four-circle problem. J. Irkutsk State Univ. (Ser. Math.) 15, 38–49 (2016). http://isu.ru/izvestia
9. Enkhbat, R.: Global optimization approach to Malfatti's problem. J. Global Optim. 65, 33–39 (2016)

Review of Basic Local Searches for Solving the Minimum Sum-of-Squares Clustering Problem Thiago Pereira, Daniel Aloise, Jack Brimberg, and Nenad Mladenović

Abstract This paper presents a review of the well-known K-means, H-means, and J-means heuristics, and their variants, that are used to solve the minimum sum-of-squares clustering problem. We then develop two new local searches that combine these heuristics in a nested and sequential structure, also referred to as variable neighborhood descent. In order to show how these local searches can be implemented within a metaheuristic framework, we apply the new heuristics in the local improvement step of two variable neighborhood search (VNS) procedures. Computational experiments are carried out which suggest that this new and simple application of VNS is comparable to the state of the art. In addition, a very significant improvement (over 30%) in solution quality is obtained for the largest problem instance investigated containing 85,900 entities. Keywords Clustering · Minimum sum-of-squares · VNS · K-means

T. Pereira
Universidade Federal do Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil

D. Aloise
Department of Computer and Software Engineering, Polytechnique Montréal, Montreal, QC, Canada
e-mail: [email protected]

J. Brimberg
Department of Mathematics and Computer Science, The Royal Military College of Canada, Kingston, ON, Canada

N. Mladenović
Emirates College of Technologies, Abu Dhabi, UAE
Mathematical Institute, SASA, Belgrade, Serbia

© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_13

1 Introduction

In the digital era in which we live, where the internet and online technologies are well developed, a big volume of information is generated daily due to the interaction


between people, and between people and objects. With the development of large and powerful data centers, we face a volume of data and information that simply did not exist a few decades ago. Observing some tools used daily by a great part of the world population, such as social media, cloud storage, websites, and the consumption and production of online resources, it is easy to see how data generation is present in everyday activities. Diverse groups argue about the potential benefits of analyzing information from Twitter, Google, Facebook, and Wikipedia, where a large number of users daily leave digital traces and deposit data [11]. Due to this great volume of data, tools are needed to help extract relevant information and to compact and classify it.

In the literature there is a distinction between classes of learning models. Classification models make use of data analysis techniques in a supervised way, that is, with previous knowledge of the data being used; data clustering methods make use of data analysis techniques that have no previous knowledge of the data, i.e., unsupervised ones. Due to this lack of prior knowledge, data clustering problems are more difficult [21].

The objective of data clustering is to identify natural groups in a set of points or objects. Its purpose can be described as follows: starting from a representation of N objects, find M groups based on a similarity measure such that objects in the same group are very similar, while objects in different groups have low similarity. Although the data clustering problem is simple to formulate, it has different variations depending on the model adopted and the type of data used [1].

Among the many criteria used in data clustering, the most natural, intuitive, and frequently adopted criterion is clustering by the minimum sum of squared Euclidean distances (MSSC) [41]. Formally, the MSSC objective, starting with a set of entities X = {x1, ..., xN} in Euclidean space with d dimensions, is to separate them into M disjoint groups Cj, called clusters, such that the sum of squared distances between each entity and the centroid x̄j of the cluster Cj to which it belongs is minimum. We can express it as:

min over w, x̄ of  Σ_{i=1}^{N} Σ_{j=1}^{M} w_ij ||xi − x̄j||²

subject to  Σ_{j=1}^{M} w_ij = 1,  ∀i = 1, ..., N,    (1)
            w_ij ∈ {0, 1},  ∀i = 1, ..., N; ∀j = 1, ..., M,

where w_ij is 1 if the entity xi belongs to cluster Cj and 0 otherwise. This formulation of the MSSC is a non-convex optimization problem with a large number of local minima [41]. In [2] it was proved that the minimum sum-of-squares clustering problem is NP-hard, i.e., exact solution methods are expected to require, in the worst case, exponential computing time. Due to this complexity, heuristics are usually developed to solve the problem [18].

In the literature many methods have been developed to solve the MSSC problem. Among them we can mention the following heuristics: K-means [12], global K-means [24, 28], J-means [15], H-means [25], genetic algorithms [22, 23], and the state-of-the-art algorithm, hyperbolic smoothing [5]. Metaheuristics have been developed to avoid being trapped in the first obtained local optimum of a problem. Metaheuristics are algorithms designed to approximately solve a wide range of optimization problems without having a deep adaptation to each problem [7]. Recently, heuristics for the MSSC which do not stop at the first local minimum were applied, based on several metaheuristic frameworks, using tabu search [37], simulated annealing [34], genetic search [30], and variable neighborhood search [26].

The main goal of this chapter is to review the standard and often used local search procedures for MSSC known as K-means, H-means, and J-means, and their variants, and to show how they may be incorporated within a metaheuristic framework. Two new local searches are also developed which combine these popular algorithms in nested and sequential structures. These local searches, referred to as variable neighborhood descent (VND), are then applied in the local improvement step of two variable neighborhood search (VNS) procedures. The paper is thus organized as follows. A review of the better-known local searches is given in Section 2, which also describes the two new ones developed by us. Section 3 gives an overview of VNS and GVNS, and describes the algorithms of the new GVNS variants proposed in this paper. Results of the comparison between the two new variants and the existing state-of-the-art algorithm are presented in Section 4. Section 5 concludes this work and presents some directions for future research.
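As an illustration of objective (1), the following sketch evaluates the MSSC cost of a given assignment, with centroids taken as cluster means; function and variable names are ours.

```python
import numpy as np

def mssc_cost(X, labels, M):
    """Objective (1): sum of squared distances of entities to their cluster centroids."""
    cost = 0.0
    for j in range(M):
        Cj = X[labels == j]
        if len(Cj) > 0:
            cost += ((Cj - Cj.mean(axis=0)) ** 2).sum()
    return cost

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
print(mssc_cost(X, np.array([0, 0, 1, 1]), M=2))   # 1.0: two tight clusters
```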

2 Local Search Procedure

2.1 Initial Solution

In order to avoid a large number of local search iterations, a hierarchical clustering method was selected to obtain an initial solution, instead of a totally random approach. The hierarchical clustering method we chose is Ward's method using the Lance–Williams formulas.

Ward's method was first presented in [38]. The objective of this hierarchical method is to construct partitions PN, PN−1, ..., P1 minimizing the loss of information at each cluster merging. Usually this loss of information is quantified by a sum-of-squared-error criterion [1]. However, for the case where the dissimilarity matrix uses squared Euclidean distances, it can be updated through the Lance–Williams formulas during the clustering process [40].


Algorithm 1 Ward's method
1. Start from an initial partition Cj, j = 1, ..., N of the entity set X = {x1, x2, ..., xN}, where x̄j are the centroids of the corresponding clusters.
2. Find the best pair of clusters (Ca, Ce) whose merging presents the least increase in the objective function, i.e., find in the dissimilarity matrix the pair (a, e), a ≠ e, with the least value.
3. Merge clusters Ca and Ce into a single cluster and update the dissimilarity matrix, with the new cluster Ca ∪ Ce replacing Ca and Ce.
4. If the desired number of clusters is reached, stop. Otherwise, go back to Step 2 with one cluster fewer in the new partition.

This method starts from an initial partition where each entity of the problem is assigned to its own cluster: if N is the number of entities in the problem, then N clusters are created and each of the N entities is assigned to one of them. From this initial condition, two clusters Ca and Ce are found whose merging causes the least possible impact on the objective function. The position of the centroid of the new cluster, created by merging Ca and Ce, is then updated. With this, we have a new partition of N − 1 clusters. The process repeats until the desired number of final clusters is reached. Algorithm 1 describes this method step by step.

For the case of the MSSC, which uses squared Euclidean distances, the dissimilarity matrix can be updated with the Lance–Williams formulas, as mentioned previously. In this way, for each cluster Cj of the partition, its distance in the dissimilarity matrix to the new cluster created by the merging Ca ∪ Ce can be calculated by the following formula:

D(Cj, Ca ∪ Ce) = (na + nj)/(na + ne + nj) · D(Ca, Cj) + (ne + nj)/(na + ne + nj) · D(Ce, Cj) − nj/(na + ne + nj) · D(Ca, Ce),    (2)

where D(·, ·) refers to the squared Euclidean distance between the centroids of two clusters, and nt to the number of elements in cluster t. In cases where the working instance is large, i.e., with N > 10,000, we use a random initial solution instead of Ward's method, because Ward's method with the Lance–Williams formulas requires a square matrix of size (2·N) − M, which means that for large instances a large amount of memory is required. For a random initial solution, M entities are selected at random to be the locations of the starting centroids. After this step, each entity xi is assigned to the cluster with the closest centroid. At the end, the centroid positions and the solution cost are updated.
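The following sketch illustrates update (2). It assumes the dissimilarity D is initialized as Ward's merging cost (for singleton clusters, half the squared Euclidean distance), a convention of this sketch under which the recurrence reproduces the exact merging cost.

```python
def lw_update(D, n, a, e, j):
    """Lance-Williams update (2): dissimilarity from cluster j to the merged a+e."""
    tot = n[a] + n[e] + n[j]
    return ((n[a] + n[j]) * D[a, j] + (n[e] + n[j]) * D[e, j] - n[j] * D[a, e]) / tot

# Toy usage: singleton clusters at 0, 1, and 4 on the line; merge clusters 0 and 1.
D = {(0, 1): 0.5, (0, 2): 8.0, (1, 2): 4.5}      # Ward costs: (1/2)*squared distance
D.update({(q, p): v for (p, q), v in list(D.items())})
n = {0: 1, 1: 1, 2: 1}
print(lw_update(D, n, 0, 1, 2))   # 8.1667 = (2*1/3)*(4 - 0.5)^2, the exact Ward cost
```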


2.2 Review of LS Heuristics

2.2.1 K-Means Heuristic

The K-means heuristic was initially proposed by [12]. Despite its age, this heuristic appears constantly in the literature, a fact that may be attributed to its efficiency and easy implementation. Its association with the MSSC problem is so common that in some cases the latter is called the K-means problem.

The operation of K-means requires an initial solution of the problem, either random or constructed. It starts from an initial partition of the entities into clusters, that is, each entity xi is assigned to a cluster Cj with centroid x̄j. Every entity xi is then assigned to the cluster Cj with the nearest centroid x̄j. If there are no changes in the assignments, the heuristic stops. Otherwise, the positions of all centroids are updated, and the process repeats until there is no further change in the attribution of entities to centroids.

Despite its good efficiency and easy implementation, K-means has some disadvantages. One is that the heuristic solution depends heavily on the initial solution. Many papers can be found that try to remedy this disadvantage; as an example, the work of [17] presents a study of 25 initialization methods drawn from the literature. Another feature of K-means is that in some cases the heuristic produces a degenerate final solution, that is, a solution with fewer clusters than the desired number M. The work in [15] proposes a remedy for this behavior: given a degenerate solution with t empty clusters, the t entities with the largest squared distance to their centroids are selected and reallocated to the empty clusters. With this modification, a new lower-cost solution is generated, and, due to the possibility of improving the solution, the K-means process is restarted. This modification is called K-means+. Subsequently, [4] studied strategies for fixing degenerate solutions and found K-means+ to be the most efficient method; K-means+ was therefore chosen for this research. A step-by-step description of the K-means+ heuristic is shown in Algorithm 2. It is also interesting to note that a similar degeneracy problem has been observed in the continuous facility location-allocation problem (see [8]).

Algorithm 2 K-means+ heuristic
1. Start from an initial partition Cj, j = 1, ..., M of the set X, where x̄j are the corresponding centroids of each cluster.
2. Assign each entity xi (i = 1, ..., N) to its closest centroid x̄j.
3. If no change occurs in the assignments, a local optimum is found; otherwise go to Step 5.
4. If the solution is not degenerate, the heuristic stops; otherwise, if it is degenerate with t empty clusters, take the t points that are farthest from their centroids, assign them to the t empty clusters, and return to Step 2.
5. Update the location of each centroid x̄j and return to Step 2.
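A minimal sketch of Algorithm 2 follows; the degeneracy repair reassigns the entities farthest from their centroids to the empty clusters, as described above (implementation details such as the iteration cap are our own).

```python
import numpy as np

def kmeans_plus(X, centroids, max_iter=100):
    M = len(centroids)
    for _ in range(max_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                       # Step 2: closest centroid
        empty = [j for j in range(M) if not (labels == j).any()]
        if empty:                                        # Step 4: repair degeneracy
            far = np.argsort(-d2[np.arange(len(X)), labels])
            for j, i in zip(empty, far):
                labels[i] = j
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(M)])
        if np.allclose(new, centroids):                  # Step 3: local optimum
            break
        centroids = new                                  # Step 5
    return labels, centroids

rng = np.random.default_rng(0)
X = rng.random((200, 2))
labels, C = kmeans_plus(X, X[rng.choice(200, 5, replace=False)].copy())
print(len(np.unique(labels)))                            # 5: no degenerate clusters
```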

2.2.2 H-Means Heuristic

Another well-known heuristic is the H-means heuristic, presented in [25], whose principle is to act on the solution one entity at a time. Starting from an initial solution, for each entity xi the algorithm looks for a cluster Cj such that moving xi from its current cluster Cl to Cj improves the solution. If such an improving change exists, it is applied immediately, followed by an update of the centroid positions of the solution. The process repeats until no improvement can be found, that is, until no change in the assignment of a single entity improves the solution. Unlike the K-means heuristic, H-means changes the assignment of one entity at a time and then updates the positions of the centroids. Formulas have been suggested in [36] which make it easy to obtain the new positions of the centroids and the impact on the cost of the solution when an entity xi is switched from a cluster Cl to a cluster Cj. The change in the positions of the centroids can be calculated with the following formulas:

x̄l ← (nl·x̄l − xi)/(nl − 1),   x̄j ← (nj·x̄j + xi)/(nj + 1),    (3)

where nj and nl are the numbers of entities currently assigned to clusters Cj and Cl, respectively. The impact on the cost of the solution can be calculated by the following formula:

vij = nj/(nj + 1) · ||x̄j − xi||² − nl/(nl − 1) · ||x̄l − xi||²,   xi ∈ Cl.    (4)

These same formulas can also be applied within the K-means heuristic. A description of the H-means heuristic can be found in Algorithm 3. In the improvement step of H-means, two strategies are possible: best improvement and first improvement. In the case of first improvement, during the search described in Step 2 of Algorithm 3, the assignment of each entity is changed to the first cluster that improves the solution.

Algorithm 3 H-means heuristic
1. Start from an initial partition Cj, j = 1, ..., M of the set X, where x̄j are the corresponding centroids of each cluster.
2. For each entity xi (i = 1, ..., N):
   a. Define vij as the impact on the solution cost of moving the entity xi from the cluster Cl to which it belongs to the cluster Cj (see equation (4)).
   b. If vij presents an improvement, make the change and keep the updated centroid positions (see equation (3)). Otherwise, continue to the next entity.
3. If there are no changes in the assignments, a local minimum is found and the algorithm stops; otherwise go back to Step 2.
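The sketch below illustrates how formulas (3) and (4) allow one H-means move to be evaluated and applied in O(d) time, with a toy check of the gain against recomputing the objective from scratch (names are ours).

```python
import numpy as np

def relocation_gain(x, cl, nl, cj, nj):
    """Formula (4): cost change when x moves from cluster l (centroid cl, size nl)
    to cluster j (centroid cj, size nj)."""
    return (nj / (nj + 1)) * ((cj - x) ** 2).sum() - (nl / (nl - 1)) * ((cl - x) ** 2).sum()

def move_centroids(x, cl, nl, cj, nj):
    """Formula (3): O(d) centroid updates for the donor and receiving clusters."""
    return (nl * cl - x) / (nl - 1), (nj * cj + x) / (nj + 1)

# Toy check: clusters {0, 1} and {10} on the line; evaluate moving x1 = 1.
X = np.array([[0.0], [1.0], [10.0]])
cl, nl = X[:2].mean(axis=0), 2
cj, nj = X[2:].mean(axis=0), 1
print(relocation_gain(X[1], cl, nl, cj, nj))   # 40.0 = 40.5 - 0.5, matches brute force
```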


That is, it is not guaranteed that the change made is the one with the best impact on the solution among all moves possible for the visited entity. In the case of best improvement, all the impacts on the solution of the possible moves of an entity are first analyzed, and the one presenting the best improvement is chosen. In a study by [16], first improvement for the H-means algorithm shows better behavior when the initial solution is of poor quality, while the best improvement approach is preferred when the initial solution is of higher quality. For this reason, the best improvement option was chosen here. Another choice made for the H-means algorithm concerns the order in which the entities xi ∈ X are visited in Step 2 of Algorithm 3. In order to avoid improvements concentrating only on the first entities visited, a random visiting order was adopted. In this way, the algorithm can spread improvements across the entities.

2.2.3 J-Means Heuristic

An important heuristic for the MSSC problem is the J-means algorithm, developed by [15] based on the observation that, in cases where the value of M is large, centroids of the solution can be located at entity positions. In that work, when a centroid is located at the same position as an entity, the entity is referred to as occupied. Based on this concept, the J-means heuristic inserts into the solution a new centroid at the position of an unoccupied entity xi, thus producing a solution with M + 1 clusters. It is then determined which centroid is the best one to remove from the solution so that the number of clusters returns to M. Each insertion/removal pair is evaluated to verify the impact of the change on the solution. At the end, the best pair of locations is identified, the change is applied to the solution, and the assignments of entities to clusters are recomputed. The process restarts until no pair improves the solution. For brevity, the objective function of the MSSC problem will be referred to below simply as the objective function. The J-means heuristic is summarized in Algorithm 4. Due to its high computational cost, an efficient implementation of Step 3, called fast interchange, is given in [39] and [14]; it is used in the algorithms below.

Algorithm 4 J-means heuristic
1. Start from an initial solution with fopt as the objective function value.
2. Find all unoccupied entities, i.e., all entities xi that do not coincide with any centroid location x̄j.
3. For each unoccupied entity xi, add a new centroid x̄M+1 at its location and find the index j of the best centroid to be removed from the solution. Denote by vij the impact on the objective function of making this change.
4. Find the best pair (i′, j′) for which vij is minimum.
5. If vi′j′ < 0, relocate x̄j′ to the position of xi′, update the entity assignments in order to get the new solution f′ = fopt + vi′j′, set fopt = f′, and restart from Step 2 with this new solution; otherwise stop.


Returning to [15], a modification of the J-means algorithm is proposed there that turns it into a hybrid heuristic. This modification consists in executing the K-means and H-means algorithms after each improvement of the J-means algorithm. The resulting hybrid is known as J-means+.

2.3 Variable Neighborhood Descent with K-Means, H-Means, and J-Means

As one of the main improvement procedures of VNS, variable neighborhood descent (VND) builds on the assumption that a solution that is a local optimum for multiple neighborhood structures is more likely to be a global optimum than a local optimum of a single neighborhood structure. More precisely, VND exploits multiple neighborhood structures in a sequential or nested composite way to improve a given solution [20].

2.3.1 Sequential VND

The basic sequential VND explores several neighborhood structures in a specified order in which they are to be visited. Starting from an initial solution S, this sequence of neighborhood structures is then explored in the order listed, until the solution is improved. When a better solution is found, the search resumes in the first neighborhood in the sequence from this new solution. Algorithm 5 outlines this procedure.

2.3.2 Nested VND

The nested VND explores a large number of neighborhood points obtained through a multi-neighborhood composition. Basically, each point visited in one neighborhood structure is mapped into another pre-defined neighborhood structure. As an example, consider a VND with three neighborhood structures B1, B2, and B3. When exploring the first neighborhood B1, the nested approach maps the solution into

Algorithm 5 Basic sequential VND
1. Start from an initial solution S with f(S) as the respective value of the objective function.
2. Define b ← 1 to start from the first neighborhood.
3. Find a solution S′ in the neighborhood Bb(S), if it exists, such that f(S′) < f(S).
4. If f(S′) < f(S), set S ← S′ and b ← 1; otherwise, move to the next neighborhood, b ← b + 1.
5. If b ≤ bmax then go back to Step 3.
6. Otherwise stop (no improvements were obtained) with resulting solution S.
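A compact sketch of Algorithm 5 follows, with each neighborhood search abstracted as a function that returns an improved solution when one exists; the toy one-dimensional example is ours.

```python
def sequential_vnd(S, f, searches):
    """Basic sequential VND: try searches in order, restart at the first on improvement."""
    b = 0
    while b < len(searches):
        S_new = searches[b](S)
        if f(S_new) < f(S):
            S, b = S_new, 0        # improvement: back to the first neighborhood
        else:
            b += 1                 # no improvement: next neighborhood
    return S

# Toy usage on integers: f(x) = x^2, two trivial "local searches".
f = lambda x: x * x
step_down = lambda x: x - 1 if f(x - 1) < f(x) else x
step_up = lambda x: x + 1 if f(x + 1) < f(x) else x
print(sequential_vnd(7, f, [step_down, step_up]))   # 0
```

In the MSSC setting of this chapter, the entries of `searches` would be the K-means+, H-means, and J-means procedures.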


Algorithm 6 Nested VND
1. Start from an initial solution S with f(S) as the respective value of the objective function, and B∗ = B1 ∘ B2 ∘ ... ∘ Bmax as the composition of all neighborhood structures.
2. Find a neighbor solution S′ in the neighborhood B∗(S), if it exists, such that f(S′) < f(S).
3. If f(S) ≤ f(S′) then stop with resulting solution S; otherwise set S ← S′ and restart from Step 2.

Algorithm 7 Mixed VND
1. Start from an initial solution S with f(S) as the respective value of the objective function, and B∗ = B1 ∘ B2 ∘ ... ∘ Bz as the composition of the nested neighborhood structures.
2. For each neighbor solution S′ found in neighborhood B∗(S):
   a. Find a solution S″ as the result of applying basic sequential VND to solution S′ with the neighborhood structures B″ = Bz+1, ..., Bmax.
   b. If f(S″) < f(S) then S ← S″.
3. If no improvement was found in Step 2, stop; otherwise, return to Step 2.

another neighborhood structure, for example B2. In this way, the neighborhood structure of a nested VND can be seen as a composition of neighborhoods, B∗(S) = B3(B2(B1(S))) in the case where three neighborhood structures are defined. The nested VND procedure is outlined in Algorithm 6.

Finally, we have the mixed VND, which is formed when there are both nested and sequential neighborhood structures. Let B′ = {B1, ..., Bz} be the set of nested neighborhoods. Each time a solution is visited in this structure, a set of sequential neighborhoods B″ = {Bz+1, ..., Bmax} is also visited. Algorithm 7 outlines the steps of this approach.

For the MSSC problem, the sequential VND in this paper makes use of the three known heuristics with different neighborhood structures: K-means+, H-means, and J-means. Initially, the K-means+ algorithm is executed until a local minimum in its neighborhood structure is found; then H-means is executed in the same way until a local minimum is reached; and finally, J-means is executed, starting from the solution of H-means, until a local minimum is found. The process repeats until there is no improvement after executing the three heuristics. Because of the specific form implemented, this sequential VND is referred to as a pipe VND, following [20]: neighborhoods are exploited sequentially, but, unlike normal sequential VND, when the solution improves the neighborhood sequence continues to be explored without going back to the beginning.

The other improvement procedure addressed in this research is a mixed VND, which applies the neighborhood structures both in a nested way and sequentially. This time the neighborhoods are ordered as J-means, K-means, and H-means. For each point in the J-means neighborhood, the neighborhood of the K-means heuristic is explored,


and then the neighborhood of the H-means heuristic is explored in sequence. K-means is applied before H-means because any local optimum obtained by the H-means heuristic cannot be improved by the K-means heuristic [15]. This approach is similar to that presented in [15], but at the time of that publication the VND terminology and its variants had not yet been established.

3 General Variable Neighborhood Search

Presented in [26], variable neighborhood search (VNS) consists in the systematic exploration of different neighborhood structures in search of an optimal (or close to optimal) solution. For data clustering problems, VNS has already been applied successfully several times [6, 15, 19, 33]. To solve optimization problems, the VNS metaheuristic is based on the following observations:

• A local optimum for one neighborhood structure is not necessarily a local optimum for another neighborhood structure.
• A global optimum is a local optimum with respect to all neighborhood structures.
• Most local optima are relatively close to each other.

The application of the VNS metaheuristic to different types of problems requires us to define some essential procedures, namely the perturbation procedure, the improvement procedure, and the neighborhood change step. The generic VNS that allows us to do this is described in Algorithm 8. To define our VNS metaheuristic for the MSSC problem, the following composition was selected:

• The perturbation procedure uses the neighborhood structure of the H-means heuristic.
• For the local improvement procedure, VND is selected.
• A sequential step is selected for the neighborhood change step.

With the neighborhood structure chosen for the perturbation, we obtain a random solution in neighborhood Bb, 1 ≤ b ≤ bmax, by randomly selecting b entities and assigning each of them to a cluster, chosen at random, different from the one to which it belongs.

Algorithm 8 Generic VNS
1. Start from an initial solution S.
2. Set b ← 1 to start exploration of neighborhoods at the first one.
3. Apply the given perturbation procedure on S to obtain S′ ∈ neighborhood Bb.
4. Apply the given local improvement procedure on S′ to obtain the local optimum S″.
5. Change the neighborhood following the selected neighborhood change step criteria.
6. If b ≤ bmax, return to Step 3.
7. If the stopping condition is reached, stop; otherwise, restart from Step 2.


Algorithm 9 Sequential neighborhood change step
1. Let S′ be the new solution found and S the previous solution.
2. If f(S′) < f(S), then S ← S′ and b = 1. Otherwise, b ← b + 1.

Algorithm 10 Generic form of the GVNS adopted in this paper
1. Obtain an initial solution S.
2. Repeat:
   a. Start from the first neighborhood, b = 1.
   b. Apply the perturbation procedure to S to obtain S′ ∈ Bb.
   c. Apply VND to S′ to obtain S″.
   d. If S″ is better than S, then S ← S″ and b = 1. Otherwise, b ← b + 1.
   e. If b ≤ 10, go back to Step 2b.
Until the stopping condition is satisfied.
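Finally, a sketch of the GVNS loop of Algorithm 10 with the H-means-type shake described above. Here `f` and `vnd` stand for the objective and either VND variant, the toy one-dimensional demo is ours, and the time-based stopping condition is replaced by an iteration cap.

```python
import random

def shake(labels, b, M):
    """Reassign b random entities to random other clusters (H-means-type shake)."""
    pert = list(labels)
    b = min(b, len(pert))                      # guard for tiny instances
    for i in random.sample(range(len(pert)), b):
        pert[i] = random.choice([j for j in range(M) if j != pert[i]])
    return pert

def gvns(labels, M, f, vnd, b_max=10, max_outer=20):
    best = vnd(labels)
    for _ in range(max_outer):                 # stand-in for the time-based stop
        b = 1
        while b <= b_max:
            cand = vnd(shake(best, b, M))
            if f(cand) < f(best):
                best, b = cand, 1              # improvement: restart at b = 1
            else:
                b += 1
    return best

# Toy demo: 1-D entities, f = MSSC cost, vnd = a single reassignment pass.
X = [0.0, 0.2, 0.9, 5.0, 5.1, 9.0]
def f(lab):
    tot = 0.0
    for j in set(lab):
        pts = [x for x, l in zip(X, lab) if l == j]
        m = sum(pts) / len(pts)
        tot += sum((x - m) ** 2 for x in pts)
    return tot
def vnd(lab):
    cents = {j: sum(x for x, l in zip(X, lab) if l == j) / lab.count(j)
             for j in set(lab)}
    return [min(cents, key=lambda j: (x - cents[j]) ** 2) for x in X]
print(f(gvns([0, 0, 0, 1, 1, 2], 3, f, vnd)))
```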

The neighborhood change step adopted in this paper is sequential. In other words, neighborhoods of increasing size are explored one at a time until a better solution is found; once a better solution is found, the neighborhood sequence restarts at the first one. In this research the maximum neighborhood index bmax = 10 was adopted: the neighborhood index is increased until it reaches this maximum value, at which point it is reset to 1. Algorithm 9 formalizes the steps adopted. As mentioned in Section 2, two different types of VND are adopted for the improvement step, a sequential VND and a mixed VND. When a VND is applied in the improvement step, the VNS approach is referred to as general variable neighborhood search (GVNS). In Algorithm 10 we outline the generic procedure adopted in this paper. From this generic algorithm we obtain two GVNS variants: GVNS1, which makes use of the mixed VND presented in the previous section, and GVNS2, which makes use of the sequential VND, also presented in the previous section.

4 Computational Experiments

4.1 Instances

To evaluate the methods developed, a total of 13 instances were chosen to compose our benchmark. Table 1 lists these instances as well as their characteristics. In [3], the first ten instances are solved exactly, with different values of M. These optimal results are used for comparison purposes here. For comparisons with the state of the art of the MSSC problem, instances 11–13 were selected from the work of [5] with their best known solutions.


Table 1 List of data sets

    Name                                                Entities  Attributes
    1.  Ruspini data                                    75        2
    2.  Grötschel and Holland's 202 cities coordinates  202       2
    3.  Grötschel and Holland's 666 cities coordinates  666       2
    4.  Padberg and Rinaldi's hole-drilling data        2392      2
    5.  Fisher's Iris                                   150       4
    6.  Glass identification                            214       9
    7.  Body measurements                               507       5
    8.  Telugu Indian vowel sounds                      871       3
    9.  Concrete compressive strength                   1030      8
    10. Image segmentation                              2310      19
    11. Reinelt's hole-drilling data                    1060      2
    12. TSPLIB3038                                      3038      2
    13. Pla85900                                        85,900    2

These instances were obtained from [13, 29, 31, 32] and from the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/datasets.html. The computational experiments were performed on a 64-bit AMD A8-5500B APU ×2 processor with 8 GB of RAM, running the Linux Mint 17.1 operating system. In the tables of results presented in the following sections, the percentage deviation between a solution found and the best known solution of the problem is measured by the following expression, where fopt is the value of the best known solution and f is the value of the solution found by the algorithm:

% Deviation = ((f − fopt)/fopt) × 100    (5)

To compare the performance of GVNS1 and GVNS2, we highlight in bold the best of the two average % deviation values in each row of these tables.

4.2 Parameters

The stopping criterion chosen was based on the instance size. For smaller instances, we ran each one 10 times, in order to obtain mean behavior, using a stopping criterion of N/2 s, where N is the number of entities in the instance. Larger instances were run 4 times each, with N/10 s as the time limit to stop the algorithm if N/10 < 600, and 600 s otherwise. For all instances the maximum neighborhood range used in the shaking step was bmax = 10.


Table 2 Comparison of the average behavior of the different GVNS strategies for the small data sets considered (tmax = N/2 s)

                  % deviation      CPU times best     Iterations
    Data set      GVNS1  GVNS2     GVNS1   GVNS2      GVNS1      GVNS2
    Ruspini       0.00   0.00      0.01    0.06       20,734.78  15,500.33
    Grotschel202  0.00   0.07      1.76    2.36       8928.62    7216.08
    Grotschel666  0.35   0.42      18.28   15.46      2747.55    2315.82
    Padberg       0.90   0.71      141.21  116.01     129.38     81.75
    Fisher        0.00   0.10      0.02    0.00       11,685.56  10,096.44
    Glass         0.00   0.13      3.23    8.67       3405.50    2787.25
    Body          0.07   0.39      93.34   87.68      1683.33    1501.50
    Vowel         0.19   0.65      187.78  199.17     1216.57    1180.00
    Concrete      0.73   1.24      290.07  278.42     1030.29    876.86
    Image         1.69   1.63      543.52  547.30     82.29      66.71

4.3 Small Instances

The tables in this section present a summary of the tests for each instance; complete tables covering each instance are in Appendix 1. The summary table presents the average behavior of each instance over 10 different runs for each selected number of clusters. In Table 2, the first column gives the name of the instance being investigated. The second and third columns present the average value of the deviation in %, obtained from the tables in Appendix 1. Columns four and five show the average of the mean CPU times that each variant required to find the best solution. The last two columns present the average of the mean number of iterations that each variant completed during the execution time.

From Table 2 it is possible to notice that the GVNS1 heuristic presented the better results of the two variants. We also observe that in the same amount of time the GVNS1 heuristic was able to perform more iterations than the other variant. This means that, in addition to producing the best results, it also has a higher execution speed.

4.4 Medium and Large Instances

In order to compare the performance of the GVNS variants researched in this work with the state-of-the-art heuristic, three instances were selected from [5]: Reinelt's hole-drilling data, TSPLIB3038, and Pla85900. The summary results of the experiments can be seen in Tables 3 and 4. For detailed results on each instance, see the tables in Appendix 2.


Table 3 Comparison of the average behavior of the different GVNS strategies and the smoothing clustering algorithm (medium-size instances, tmax = N/10 s)

                  % deviation           CPU times best        Iterations
    Data set      SCA   GVNS1  GVNS2    SCA   GVNS1  GVNS2    GVNS1   GVNS2
    Reinelt       0.00  0.27   0.49     1.58  30.83  26.28    277.00  212.83
    TSPLIB3038    0.02  0.595  0.62     5.32  57.18  56.61    200.00  133.83

Table 4 Comparison of the average behavior of the different GVNS strategies and the smoothing clustering algorithm (large instance, tmax = 600 s)

                  % deviation            CPU times best           Iterations
    Data set      SCA   GVNS1   GVNS2    SCA      GVNS1   GVNS2   GVNS1  GVNS2
    Pla85900      0.00  −33.79  −33.79   1376.47  300.10  324.61  1.00   1.00

In Table 3, we see that neither GVNS variant was able to present average values better than the SCA algorithm, for both solution value and time. However, for the largest instance the behavior presented a radical change. Observing Table 4, it is possible to notice that both GVNS variants showed values of deviation well below zero. Thus the best solution values found by SCA in the work of [5] are significantly worse than those found by both GVNS variants. Due to the large difference observed in the deviation, it is suspected that the solution value found by the state-of-the-art algorithm, SCA, may not even be a local minimum.

5 Conclusions and Open Problems

This chapter presents a review of some well-known local searches used in the literature to solve the minimum sum-of-squares clustering problem. These algorithms are combined to form two new variable neighborhood descent (VND) heuristics, which are embedded within two GVNS-based procedures. Comparative experiments were performed between the two GVNS variants, and also between the GVNS variants and the state-of-the-art clustering algorithm proposed by [5]. Based on the computational results, the variant with the best average behavior was GVNS1, which uses a variable neighborhood descent with a mixed nested structure in the local improvement step. This variant not only produced the best results but also had a faster execution time. When confronted with the state-of-the-art algorithm, SCA (see [5]), the two GVNS variants presented slightly inferior performance on medium-sized instances. However, on the large instance, the two variants performed much better than the SCA algorithm, raising the hypothesis that the results found by the state of the art might be further improved by adding a local improvement step at the end.


The selection and implementation of neighborhood structures are key issues to consider in solving optimization problems heuristically. This current research raises several challenging questions related to this topic: 1. If we decide to use all three neighborhoods (K-, H-, and J-means) in the VND local search, we should determine the best way of using them. (a) Is there a way to define the best order in sequential VND? Is the order proposed here within GVNS2 the best? (b) What would be the impact of using pure nested (or composite) neighborhoods, and in what order? (c) Is there a better mixed-nested strategy than that proposed in GVNS1? 2. Do we really need all three neighborhoods? In other words, what would be the minimum number of neighborhoods used that provides the best results (see “Less is more” approach in [9, 10, 27, 35])? A related consideration would be the implementation of a phased approach where neighborhoods are progressively added to the local search for further intensification. 3. Are there either theoretical or empirical arguments for using the best or the first improvement strategy for each combination of neighborhoods? A standard VND approach with immediate return to the first neighborhood after each improvement instead of continuing to the next neighborhood can also be explored. 4. The H-means neighborhood was selected for the shaking (or perturbation) step in our proposed GVNS heuristics. Would another choice (e.g., J-means) work better? As other future work, we plan to study the use of the solutions of the SCA algorithm in [5] as initial solutions for the GVNS variants proposed here, since it is suspected that the solutions generated by the SCA algorithm can be improved in a few iterations of GVNS. Further experiments with larger instances should also be done to confirm the findings of this work. Acknowledgements Thiago Pereira is grateful to CAPES-Brazil. Daniel Aloise and Nenad Mladenovi´c were partially supported by CNPq-Brazil grants 308887/2014-0 and 400350/2014-9. This research was partially covered by the framework of the grant number BR05236839 “Development of information technologies and systems for stimulation of personality’s sustainable development as one of the bases of development of digital Kazakhstan”.

Appendix 1: Small Instances See Tables 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.


Table 5 Comparison of the mean behavior of 10 runs on the Ruspini data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1  GVNS2     GVNS1      GVNS2
    2    0.893378e+05  0.00   0.00    0.00   0.00      41,383.00  23,333.00
    3    0.510634e+05  0.00   0.00    0.00   0.00      29,301.00  21,500.00
    4    0.128810e+05  0.00   0.00    0.00   0.00      22,888.00  19,390.00
    5    0.101267e+05  0.00   0.00    0.00   0.00      20,913.00  16,858.00
    6    0.857541e+04  0.00   0.00    0.01   0.28      17,286.00  14,526.00
    7    0.712620e+04  0.00   0.00    0.00   0.00      16,023.00  12,552.00
    8    0.614964e+04  0.00   0.00    0.04   0.23      13,991.00  11,289.00
    9    0.518165e+04  0.00   0.00    0.00   0.00      12,821.00  10,392.00
    10   0.444628e+04  0.00   0.00    0.00   0.00      12,007.00  9663.00
    Avg                0.00   0.00    0.01   0.06      20,734.78  15,500.33

Table 6 Comparison of the mean behavior of 10 runs on Grötschel and Holland's 202 cities coordinates data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1  GVNS2     GVNS1      GVNS2
    2    0.234374e+05  0.00   0.00    0.01   0.00      18,088.00  9723.00
    3    0.153274e+05  0.00   0.00    0.00   0.00      9509.00    8187.00
    4    0.114556e+05  0.00   0.00    1.43   6.16      10,280.00  8798.00
    5    0.889490e+04  0.00   0.00    0.00   0.00      10,280.00  8908.00
    6    0.676488e+04  0.00   0.00    0.00   0.00      9500.00    8268.00
    7    0.581757e+04  0.00   0.00    0.03   0.04      9683.00    8297.00
    8    0.500610e+04  0.00   0.00    0.02   0.01      8860.00    7450.00
    9    0.437619e+04  0.00   0.00    0.10   0.66      8938.00    7685.00
    10   0.379249e+04  0.05   0.05    0.26   0.00      8369.00    7208.00
    15   0.232008e+04  0.00   0.26    1.13   3.43      6633.00    5855.00
    20   0.152351e+04  0.00   0.00    0.55   9.99      5860.00    4988.00
    25   0.108556e+04  0.00   0.00    0.10   0.04      5323.00    4468.00
    30   0.799311e+03  0.00   0.56    19.22  10.35     4749.00    3974.00
    Avg                0.00   0.07    1.76   2.36      8928.62    7216.08


Table 7 Comparison of the mean behavior of 10 runs on Grötschel and Holland's 666 cities coordinates data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1   GVNS2    GVNS1    GVNS2
    2    0.175401e+07  0.00   0.00    0.03    0.01     3051.00  2498.00
    3    0.772707e+06  0.00   0.00    0.08    0.04     3272.00  2594.00
    4    0.613995e+06  0.00   0.00    0.03    0.01     3004.00  2471.00
    5    0.485088e+06  3.83   3.83    0.02    0.02     3380.00  2593.00
    6    0.382676e+06  0.00   0.00    0.01    0.01     3409.00  2577.00
    7    0.323283e+06  0.00   0.00    0.45    41.10    2799.00  2461.00
    8    0.285925e+06  0.00   0.00    1.04    30.61    2701.00  2366.00
    9    0.250989e+06  0.00   0.00    2.39    5.14     2623.00  2304.00
    10   0.224183e+06  0.00   0.00    1.82    11.56    2441.00  2245.00
    20   0.106276e+06  0.03   0.19    108.47  21.99    2120.00  1968.00
    50   0.351795e+05  0.04   0.63    86.71   59.60    1423.00  1397.00
    Avg                0.35   0.42    18.28   15.46    2747.55  2315.82

Table 8 Comparison of the mean behavior of 10 runs on Padberg and Rinaldi's hole-drilling data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1   GVNS2    GVNS1   GVNS2
    2    0.296723e+11  1.00   0.00    193.23  110.91   221.00  109.00
    3    0.212012e+11  4.64   1.90    215.06  115.22   215.00  107.00
    4    0.141184e+11  0.00   0.00    80.33   135.34   218.00  104.00
    5    0.115842e+11  0.05   0.05    86.63   8.90     182.00  113.00
    6    0.948900e+10  0.00   0.00    47.89   93.25    215.00  104.00
    7    0.818180e+10  0.00   0.00    66.16   49.90    178.00  107.00
    8    0.701338e+10  0.27   0.27    34.99   8.59     165.00  107.00
    9    0.614600e+10  0.32   0.34    98.22   30.09    150.00  110.00
    10   0.532491e+10  1.45   1.45    12.13   18.88    145.00  110.00
    100  0.404498e+09  0.43   1.20    151.83  48.87    80.00   87.00
    150  0.245685e+09  0.53   0.90    215.38  189.62   61.00   57.00
    200  0.175431e+09  1.32   1.57    206.78  222.53   48.00   40.00
    250  0.132352e+09  1.53   0.98    225.28  218.91   50.00   40.00
    300  0.101568e+09  1.05   0.90    197.80  183.02   43.00   33.00
    350  0.804783e+08  0.66   0.68    212.47  220.66   49.00   40.00
    400  0.657989e+08  1.09   1.15    215.14  201.50   50.00   40.00
    Avg                0.90   0.71    141.21  116.01   129.38  81.75


Table 9 Comparison of the mean behavior of 10 runs on Fisher's Iris data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1  GVNS2     GVNS1      GVNS2
    2    0.152348e+03  0.00   0.00    0.00   0.00      14,107.00  12,272.00
    3    0.788514e+02  0.00   0.00    0.00   0.00      14,006.00  11,596.00
    4    0.572285e+02  0.00   0.00    0.00   0.00      13,932.00  11,421.00
    5    0.464462e+02  0.00   0.00    0.00   0.00      13,156.00  10,938.00
    6    0.390400e+02  0.00   0.00    0.00   0.00      12,866.00  10,304.00
    7    0.342982e+02  0.00   0.00    0.00   0.00      11,676.00  9189.00
    8    0.299889e+02  0.00   0.00    0.01   0.00      11,277.00  8982.00
    9    0.277861e+02  0.00   0.90    0.16   0.00      9232.00    8424.00
    10   0.258340e+02  0.00   0.00    0.01   0.00      4918.00    7742.00
    Avg                0.00   0.10    0.02   0.00      11,685.56  10,096.44

Table 10 Comparison of the mean behavior of 10 runs on the Glass identification data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1  GVNS2     GVNS1    GVNS2
    15   0.155766e+03  0.00   0.09    0.22   1.07      4997.00  4140.00
    20   0.114646e+03  0.00   0.76    1.08   17.88     4216.00  3560.00
    25   0.842515e+02  0.00   0.14    4.98   0.07      3704.00  3152.00
    30   0.632478e+02  0.00   0.00    0.00   0.00      3309.00  2702.00
    35   0.492386e+02  0.00   0.04    10.01  13.23     3090.00  2481.00
    40   0.394983e+02  0.00   0.00    0.12   0.97      2766.00  2208.00
    45   0.320395e+02  0.00   0.00    0.66   1.29      2666.00  2095.00
    50   0.267675e+02  0.00   0.01    8.73   34.83     2496.00  1960.00
    Avg                0.00   0.13    3.23   8.67      3405.50  2787.25

Table 11 Comparison of the mean behavior of 10 runs on the body measurements data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1   GVNS2    GVNS1    GVNS2
    30   0.195299e+05  0.03   0.48    21.48   45.44    2120.00  1964.00
    40   0.162318e+05  0.01   0.99    48.93   92.00    2071.00  1864.00
    50   0.139547e+05  0.05   0.29    103.48  75.26    1623.00  1477.00
    60   0.121826e+05  0.13   0.30    137.08  74.03    1550.00  1380.00
    70   0.107869e+05  0.09   0.12    111.25  62.23    1409.00  1205.00
    80   0.964873e+04  0.10   0.14    137.85  177.09   1327.00  1119.00
    Avg                0.07   0.39    93.34   87.68    1683.33  1501.50


Table 12 Comparison of the mean behavior of 10 runs on the Telugu Indian vowel sounds data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1   GVNS2    GVNS1    GVNS2
    40   0.636653e+07  0.00   0.99    72.51   135.49   1823.00  1665.00
    50   0.524020e+07  0.11   0.70    282.61  131.06   1597.00  1433.00
    60   0.442262e+07  0.20   0.95    215.32  362.71   1340.00  1177.00
    70   0.375286e+07  0.14   0.76    286.72  229.34   1403.00  1266.00
    80   0.324801e+07  0.29   0.74    125.12  211.88   1100.00  910.00
    90   0.285069e+07  0.35   0.20    140.27  176.76   455.00   852.00
    100  0.251058e+07  0.23   0.19    191.92  146.94   798.00   957.00
    Avg                0.19   0.65    187.78  199.17   1216.57  1180.00

Table 13 Comparison of the mean behavior of 10 runs on the concrete compressive strength data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1   GVNS2    GVNS1    GVNS2
    60   0.288107e+07  0.35   1.98    341.98  245.15   1275.00  1094.00
    70   0.247893e+07  1.25   1.92    362.19  99.10    1151.00  970.00
    80   0.215791e+07  1.00   2.24    270.79  291.52   1095.00  941.00
    90   0.189364e+07  0.67   1.15    256.17  290.23   996.00   859.00
    100  0.168778e+07  0.72   0.67    166.31  326.06   923.00   746.00
    110  0.151334e+07  0.60   0.37    288.08  318.84   864.00   738.00
    120  0.136737e+07  0.51   0.34    344.97  378.04   908.00   790.00
    Avg                0.73   1.24    290.07  278.42   1030.29  876.86

Table 14 Comparison of the mean behavior of 10 runs on the image segmentation data set for different values of M and two GVNS strategies; tmax = N/2 s

         Opt.          % deviation    CPU times best   Iterations
    M    solution      GVNS1  GVNS2   GVNS1   GVNS2    GVNS1  GVNS2
    230  0.463938e+06  0.58   0.44    548.42  530.52   97.00  81.00
    250  0.421018e+06  0.61   0.58    562.91  542.76   94.00  77.00
    300  0.338072e+06  0.79   0.74    505.92  545.01   99.00  82.00
    350  0.276957e+06  0.81   0.74    529.41  540.57   86.00  69.00
    400  0.230310e+06  0.96   0.90    541.42  567.25   67.00  52.00
    450  0.195101e+06  0.87   0.94    566.67  550.73   71.00  58.00
    500  0.157153e+06  7.18   7.04    549.91  554.26   62.00  48.00
    Avg                1.69   1.63    543.52  547.30   82.29  66.71


Appendix 2: Medium and Large Instances

See Tables 15, 16, 17.

Table 15 Comparison of the mean behavior of 4 runs on Reinelt's hole-drilling data set for different values of M, for two GVNS strategies and the smoothing clustering algorithm; tmax = N/10 s

         Best-         % deviation          CPU times best        Iterations
    M    known         SCA   GVNS1  GVNS2   SCA   GVNS1  GVNS2    GVNS1   GVNS2
    2    0.983195e+10  0.00  0.00   0.00    0.05  36.63  82.45    415.00  222.00
    5    0.379100e+10  0.00  0.25   0.25    0.17  41.67  26.78    308.00  221.00
    10   0.175484e+10  0.00  0.05   0.19    0.70  37.07  36.16    256.00  214.00
    15   0.112120e+10  0.00  0.84   1.12    1.25  7.45   3.82     219.00  198.00
    20   0.791790e+09  0.00  0.47   0.93    2.70  18.60  7.45     245.00  218.00
    25   0.606700e+09  0.00  −0.02  0.47    4.63  43.57  1.04     219.00  204.00
    Avg                0.00  0.27   0.49    1.58  30.83  26.28    277.00  212.83

Table 16 Comparison of the mean behavior of 4 runs on the TSPLIB3038 data set for different values of M, for two GVNS strategies and the smoothing clustering algorithm; tmax = N/10 s

         Best-         % deviation          CPU times best          Iterations
    M    known         SCA   GVNS1  GVNS2   SCA    GVNS1   GVNS2    GVNS1   GVNS2
    2    0.316880e+10  0.00  0.00   0.00    0.25   0.23    0.21     290.00  148.00
    5    0.119820e+10  0.00  0.00   0.00    0.98   0.22    0.21     293.00  152.00
    10   0.560250e+09  0.00  0.57   0.57    2.64   10.41   85.69    197.00  133.00
    15   0.356040e+09  0.00  2.70   2.70    5.16   26.20   130.67   154.00  125.00
    20   0.266810e+09  0.11  0.12   0.25    8.86   229.54  118.89   136.00  126.00
    25   0.214500e+09  0.01  0.18   0.19    14.01  76.47   4.01     130.00  119.00
    Avg                0.02  0.60   0.62    5.32   57.18   56.61    200.00  133.83

Table 17 Comparison of the mean behavior of 4 runs on the Pla85900 data set for different values of M, for two GVNS strategies and the smoothing clustering algorithm; tmax = 600 s

         Best-         % deviation             CPU times best            Iterations
    M    known         SCA   GVNS1   GVNS2     SCA      GVNS1   GVNS2    GVNS1  GVNS2
    2    0.374910e+16  0.00  −30.52  −30.52    123.96   252.64  352.30   1.00   1.00
    5    0.133970e+16  0.00  −33.41  −33.41    452.96   270.74  324.02   1.00   1.00
    10   0.682940e+15  0.00  −34.33  −34.33    1011.17  367.52  348.03   1.00   1.00
    15   0.460290e+15  0.00  −34.92  −34.93    1596.67  314.31  255.99   1.00   1.00
    20   0.350870e+15  0.00  −35.31  −35.31    2210.91  248.77  342.36   1.00   1.00
    25   0.283230e+15  0.00  −34.29  −34.23    2863.18  347.21  324.97   1.00   1.00
    Avg                0.00  −33.79  −33.79    1376.47  300.20  324.61   1.00   1.00


References

1. Aggarwal, C.C., Reddy, C.K.: Data Clustering: Algorithms and Applications. Chapman and Hall/CRC, Boca Raton (2013)
2. Aloise, D., Deshpande, A., Hansen, P., Popat, P.: NP-hardness of Euclidean sum-of-squares clustering. Mach. Learn. 75(2), 245–248 (2009)
3. Aloise, D., Hansen, P., Liberti, L.: An improved column generation algorithm for minimum sum-of-squares clustering. Math. Program. 131(1), 195–220 (2012)
4. Aloise, D., Damasceno, N., Mladenović, N., Pinheiro, D.: On strategies to fix degenerate k-means solutions. J. Classif. 34, 165–190 (2017)
5. Bagirov, A.M., Ordin, B., Ozturk, G., Xavier, A.E.: An incremental clustering algorithm based on hyperbolic smoothing. Comput. Optim. Appl. 61(1), 219–241 (2015)
6. Belacel, N., Hansen, P., Mladenović, N.: Fuzzy J-Means: a new heuristic for fuzzy clustering. Pattern Recogn. 35(10), 2193–2200 (2002)
7. Boussaïd, I., Lepagnot, J., Siarry, P.: A survey on optimization metaheuristics. Inf. Sci. 237, 82–117 (2013). Prediction, Control and Diagnosis using Advanced Neural Computations
8. Brimberg, J., Mladenović, N.: Degeneracy in the multi-source Weber problem. Math. Program. 85(1), 213–220 (1999)
9. Brimberg, J., Mladenović, N., Todosijević, R., Urosević, D.: Less is more: solving the max-mean diversity problem with variable neighborhood search. Inf. Sci. 382–383, 179–200 (2017). https://doi.org/10.1016/j.ins.2016.12.021. http://www.sciencedirect.com/science/article/pii/S0020025516320394
10. Costa, L.R., Aloise, D., Mladenović, N.: Less is more: basic variable neighborhood search heuristic for balanced minimum sum-of-squares clustering. Inf. Sci. 415–416, 247–253 (2017). https://doi.org/10.1016/j.ins.2017.06.019. http://www.sciencedirect.com/science/article/pii/S0020025517307934
11. Fahad, A., Alshatri, N., Tari, Z., Alamri, A., Khalil, I., Zomaya, A.Y., Foufou, S., Bouras, A.: A survey of clustering algorithms for big data: taxonomy and empirical analysis. IEEE Trans. Emerg. Top. Comput. 2(3), 267–279 (2014)
12. Forgey, E.: Cluster analysis of multivariate data: efficiency vs. interpretability of classification. Biometrics 21(3), 768–769 (1965)
13. Grötschel, M., Holland, O.: Solution of large-scale symmetric travelling salesman problems. Math. Program. 51(1), 141–202 (1991)
14. Hansen, P., Mladenović, N.: Variable neighborhood search for the p-median. Locat. Sci. 5(4), 207–226 (1997)
15. Hansen, P., Mladenović, N.: J-means: a new local search heuristic for minimum sum of squares clustering. Pattern Recogn. 34(2), 405–413 (2001)
16. Hansen, P., Mladenović, N.: First vs. best improvement: an empirical study. Discret. Appl. Math. 154(5), 802–817 (2006). IV ALIO/EURO Workshop on Applied Combinatorial Optimization
17. Hansen, P., E., N., B., C., N., M.: Survey and comparison of initialization methods for k-means clustering. Paper not published
18. Hansen, P., Jaumard, B., Mladenović, N.: Minimum sum of squares clustering in a low dimensional space. J. Classif. 15(1), 37–55 (1998)
19. Hansen, P., Ruiz, M., Aloise, D.: A VNS heuristic for escaping local extrema entrapment in normalized cut clustering. Pattern Recogn. 45(12), 4337–4345 (2012)
20. Hansen, P., Mladenović, N., Todosijević, R., Hanafi, S.: Variable neighborhood search: basics and variants. EURO J. Comput. Optim. 1–32 (2016). https://doi.org/10.1007/s13675-016-0075-x
21. Jain, A.K.: Data clustering: 50 years beyond k-means. Pattern Recogn. Lett. 31(8), 651–666 (2010)
22. Laszlo, M., Mukherjee, S.: A genetic algorithm using hyper-quadtrees for low-dimensional k-means clustering. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 533–543 (2006)

270

T. Pereira et al.

23. Laszlo, M., Mukherjee, S.: A genetic algorithm that exchanges neighboring centers for k-means clustering. Pattern Recogn. Lett. 28(16), 2359–2366 (2007) 24. Likas, A., Vlassis, N., Verbeek, J.J.: The global k-means clustering algorithm. Pattern Recogn. 36(2), 451–461 (2003). Biometrics 25. MacQueen, J., et al.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, vol. 1, pp. 281–297 (1967) 26. Mladenovi´c, N.: A variable neighborhood algorithm-a new metaheuristic for combinatorial optimization. In: Papers Presented at Optimization Days, vol. 12 (1995) 27. Mladenovi´c, N., Todosijevi´c, R., Urosevi´c, D.: Less is more: basic variable neighborhood search for minimum differential dispersion problem. Inf. Sci. 326, 160– 171 (2016). https://doi.org/10.1016/j.ins.2015.07.044. http://www.sciencedirect.com/science/ article/pii/S0020025515005526 28. Ordin, B., Bagirov, A.M.: A heuristic algorithm for solving the minimum sum-of-squares clustering problems. J. Glob. Optim. 61(2), 341–361 (2015) 29. Padberg, M., Rinaldi, G.: A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Rev. 33(1), 60–100 (1991) 30. Rahman, M.A., Islam, M.Z.: A hybrid clustering technique combining a novel genetic algorithm with k-means. Knowl. Based Syst. 71, 345–365 (2014) 31. Reinelt, G.: TSPLIB—a traveling salesman problem library. ORSA J. Comput. 3(4), 376–384 (1991) 32. Ruspini, E.H.: Numerical methods for fuzzy clustering. Inf. Sci. 2(3), 319–350 (1970) 33. Santi, É., Aloise, D., Blanchard, S.J.: A model for clustering data from heterogeneous dissimilarities. Eur. J. Oper. Res. 253(3), 659–672 (2016) 34. Selim, S.Z., Alsultan, K.: A simulated annealing algorithm for the clustering problem. Pattern Recogn. 24(10), 1003–1008 (1991) 35. Silva, K., Aloise, D., de Souza, S.X., Mladenovi´c, N.: Less is more: simplified Nelder-Mead method for large unconstrained optimization. Yugoslav J. Oper. Res. 28(2), 153–169 (2018). http://yujor.fon.bg.ac.rs/index.php/yujor/article/view/609 36. Spath, H.: Cluster Analysis Algorithms for Data Reduction and Classification of Objects. Computers and Their Applications. E. Horwood, Chichester (1980) 37. Turkensteen, M., Andersen, K.A.: A Tabu Search Approach to Clustering, pp. 475–480. Springer, Berlin (2009) 38. Ward, J.H.J.: Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58(301), 236–244 (1963) 39. Whitaker, R.: A fast algorithm for the greedy interchange for large-scale clustering and median location problems. Inf. Syst. Oper. Res. 21(2), 95–108 (1983) 40. Wishart, D.: 256. note: an algorithm for hierarchical classifications. Biometrics 25(1), 165–170 (1969) 41. Xavier, A.E., Xavier, V.L.: Solving the minimum sum-of-squares clustering problem by hyperbolic smoothing and partition into boundary and gravitational regions. Pattern Recogn. 44(1), 70–77 (2011)

On the Design of Metaheuristics-Based Algorithm Portfolios

Dimitris Souravlias and Konstantinos E. Parsopoulos

Abstract Metaheuristic optimization has long been established as a promising alternative to classical optimization approaches. However, the selection of a specific metaheuristic algorithm for solving a given problem constitutes an impactful decision. This can be attributed to possible performance fluctuations of the metaheuristic during its application either on a single problem or on different instances of a specific problem type. Algorithm portfolios offer an alternative where, instead of using a single solver, a number of different solvers or variants of one solver are concurrently or interchangeably used to tackle the problem at hand by sharing the available computational resources. The design of algorithm portfolios requires a number of decisions from the practitioner's side. The present chapter exposes the essential open problems related to the design of algorithm portfolios, namely the selection of constituent algorithms, resource allocation schemes, interaction among the algorithms, and parallelism issues. Recent research trends relevant to these issues are presented, offering motivation for further elaboration.

Keywords Algorithm portfolios · Metaheuristics · Global optimization · Design of algorithms

1 Introduction

A central issue in applied optimization is the selection of a suitable solver for a given problem. Metaheuristics have proved to be very useful under various conditions when the approximation of (sub-)optimal solutions is desirable.

D. Souravlias
Logistics Management Department, Helmut-Schmidt University, Hamburg, Germany
e-mail: [email protected]

K. E. Parsopoulos
Department of Computer Science & Engineering, University of Ioannina, Ioannina, Greece
e-mail: [email protected]

© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_14


Despite the large number of metaheuristics in the relevant literature, choosing one for the problem at hand is a crucial decision that is often based on the practitioner's experience and knowledge of the specific problem type. Strong theoretical and experimental evidence suggests that there is no universal optimization algorithm capable of tackling all problems equally well [17]. Thus, relevant research has focused on matching efficient algorithms with specific problem types. However, even different parameterizations of an algorithm may exhibit highly varying performance on a given problem. In this context, the idea of concurrently or interchangeably using a number of different algorithms, or variants of a single algorithm, was cultivated in order to reduce the risk of a wrong decision. Inspiration behind this idea was drawn from the financial markets, where selecting a portfolio of stocks instead of investing the whole capital in a single stock reduces the risk of the investment.

Algorithm portfolios (APs) are algorithmic schemes that harness a number of algorithms into a unified framework [5, 7]. They were initially introduced two decades ago and, since then, they have gained increasing popularity in various scientific fields such as inventory routing [11], lot sizing [12], facility location [4], and combinatorics [13]. The term "algorithm portfolio" was first proposed in [7] and dominated over previously used terms such as ensembles of algorithms [6]. The algorithms employed in an algorithm portfolio are called the constituent algorithms, and they can be of different types, ranging from evolutionary algorithms [9] and randomized search methods [5] to SAT solvers [18].

In the present chapter, we focus on algorithm portfolios consisting of metaheuristics, including both population-based and local-based algorithms. Population-based algorithms exploit a population of search points that iteratively probe the search space. On the other hand, local-based (or trajectory-based) methods operate on a single search point that is iteratively improved through various local search strategies. Thus, population-based algorithms possess essential exploration capabilities, whereas local-based algorithms are well known for their exploitation capacities. It is widely perceived that the trade-off between exploration and exploitation determines the quality of a metaheuristic approach. Therefore, a framework that encompasses both exploration-oriented and exploitation-oriented methods is expected to be more beneficial than using a single approach.

The design of algorithm portfolios involves a number of issues that shall be addressed by the practitioner. Specifically, the following decisions shall be made:

1. How many and which algorithms shall be included?
2. How are the computational resources allocated to the constituent algorithms?
3. Shall the algorithms interact among them, and how?
4. How does parallelism affect the operation of the algorithm portfolio?

In the following sections, we discuss these issues and review relevant research developments. The rest of the chapter is structured as follows: Section 2 provides a general description of algorithm portfolios. Various approaches to the selection of constituent algorithms are presented in Section 3.1.


Resource allocation schemes are discussed in Section 3.2. Section 3.3 is devoted to the interaction among algorithms, and Section 3.4 discusses the implications of parallelism. Finally, Section 4 concludes the chapter.

2 Algorithm Portfolios

The need for the use of metaheuristics stems from the inability of classical optimization algorithms to efficiently tackle optimization problems of various types, either due to special characteristics of the objective function or due to excessive running time [3]. Such problems usually lack nice mathematical properties and frequently involve, among others, discontinuous, noisy, and computationally heavy objective functions, as well as multiple local and global minimizers. Without loss of generality, we consider the unconstrained minimization problem

$$\min_{x \in D} f(x),$$

where D stands for the corresponding search space. The type of D (e.g., continuous or discrete) is irrelevant, since it affects only the type of relevant solvers that will be used. Let also S = {S1, S2, ..., Sk} be a set of available solvers suitable for the problem at hand. The set S may consist of different algorithms, of different instances (parameterizations) of a single algorithm, or of both.

The design of an algorithm portfolio primarily requires the determination of its constituent algorithms [2]. Thus, an algorithm selection process takes place to distinguish n out of the k available solvers. The selection is typically based on performance profiles of the algorithms on the specific problem or on similar ones. The outcome is a portfolio AP defined as

AP = {Si1, Si2, ..., Sin} ⊆ S.

For simplicity, we will henceforth use the indices 1, 2, ..., n, instead of i1, i2, ..., in, for the constituent algorithms. Moreover, let Tmax be the available computational budget. This may refer to the total execution time or to the number of function evaluations available to the algorithm portfolio. Then, each solver Si of AP is assigned a fraction Ti of this budget, such that

$$\sum_{i=1}^{n} T_i = T_{\max}. \qquad (1)$$


The set of allocated budgets will henceforth be denoted as T = {T1, T2, ..., Tn}. The allocated budgets can be determined either offline, i.e., prior to the application of the algorithm portfolio, or online, during its run. In both cases, performance data (historical or current, respectively) are used along with predefined performance measures.

During the run of the algorithm portfolio, the constituent algorithms may be either isolated [14] or interactive [12]. In the latter case, the algorithms exchange information, and the user shall determine all the relevant details, including the type and size of the exchanged information, as well as the communication frequency or the conditions for triggering such a communication. Typically, the communication takes place between pairs of algorithms, and the same type of information is exchanged among them. For example, such information can be the best solution detected so far by each algorithm.

The constituent algorithms of an algorithm portfolio may run sequentially and interchangeably on a single processor. In this case, each constituent algorithm Si runs in turn on the processor, consuming a prespecified fraction of its assigned computational budget Ti before the next algorithm occupies the processor. Thus, the total budget assigned to each algorithm is not consumed in one turn but rather in multiple turns (also called episodes or batches). Nevertheless, the use of multiple algorithms in an algorithm portfolio makes parallelism very appealing. In this case, a number of CPUs are available and devoted to AP. Let U = {U1, U2, ..., Um} be the set of available processing units. A common master–slave parallelism model usually employs a master-node, which is responsible for the coordination of the portfolio's run as well as the necessary book-keeping, and a number of slave-nodes that host the constituent algorithms. In the ideal case, the number of slave-nodes is equal to the number of constituent algorithms, i.e., m = n + 1. If this is not possible, CPUs host more than one of the constituent algorithms.

Our descriptions above have brought forward a number of issues and open problems that influence the application of algorithm portfolios. The aptness of the underlying choices determines the efficiency and effectiveness of the portfolio. A number of research works have probed these issues and derived useful conclusions and suggestions for various problem types. The rest of the present chapter offers an outline of the most important issues and suggestions.
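To make the episode-based, single-processor operation above concrete, the following is a minimal sketch, not taken from the chapter: the solver interface, the batch size, and the toy RandomSearch stand-in are our own assumptions. It splits a total evaluation budget equally, cf. Equation (1), and consumes it in batches:

```python
import random

class RandomSearch:
    """Toy stand-in for a constituent metaheuristic (one parameterization)."""
    def __init__(self, step):
        self.step = step

    def run(self, objective, dim, evaluations, warm_start=None):
        # Start from the portfolio's incumbent if one is available.
        x = list(warm_start) if warm_start else [random.uniform(-5, 5) for _ in range(dim)]
        fx = objective(x)
        for _ in range(evaluations):
            y = [xi + random.gauss(0, self.step) for xi in x]
            fy = objective(y)
            if fy < fx:
                x, fx = y, fy
        return x, fx

def run_portfolio(solvers, objective, dim, t_max, batch=100):
    """Sequential AP: equal budget shares, cf. Equation (1), consumed in episodes."""
    budgets = [t_max // len(solvers)] * len(solvers)
    best_x, best_f = None, float("inf")
    while sum(budgets) > 0:
        for i, solver in enumerate(solvers):
            t = min(batch, budgets[i])       # one episode of solver i
            if t == 0:
                continue
            x, fx = solver.run(objective, dim, t, warm_start=best_x)
            budgets[i] -= t
            if fx < best_f:
                best_x, best_f = x, fx       # book-keeping of the incumbent
    return best_x, best_f

if __name__ == "__main__":
    sphere = lambda x: sum(xi * xi for xi in x)
    portfolio = [RandomSearch(0.5), RandomSearch(0.05)]  # two variants of one solver
    print(run_portfolio(portfolio, sphere, dim=5, t_max=10_000))
```

Replacing the equal split by a performance-driven one is exactly the allocation problem discussed in Section 3.2 below.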

3 Open Problems in Designing Algorithm Portfolios

In the following paragraphs, we present essential open problems in the design of algorithm portfolios. Specifically, we consider the following problems:

1. Selection of the constituent algorithms.
2. Allocation of the computational budget.
3. Interaction among the constituent algorithms.
4. Parallel vs sequential application of the algorithm portfolio.


Our main goal is to expose the corresponding problems and discuss some of the state-of-the-art remedies proposed in the relevant literature. This information can be used as motivation and a starting point for further elaboration.

3.1 Selection of Constituent Algorithms

The selection of the constituent algorithms in an algorithm portfolio is highly related to the algorithm selection problem originally described by Rice [10]. This refers to the selection and application of the best-performing algorithm for a given optimization problem. Most commonly, algorithm selection is addressed for specific problem types or sets of problems. Given a predefined set of algorithms S, a set of problem instances I, and a cost function F : S × I → R, the widely known per-instance algorithm selection problem refers to the detection of a mapping G : I → S, such that the cost function F is minimized over all the considered problem instances, i.e.,

$$\min_{G} \; \sum_{i \in I} F(G(i), i). \qquad (2)$$

In the context of algorithm portfolios, this problem is referred to as the constituent algorithms selection problem [2, 15]. Specifically, given a predefined set of algorithms S, a set of problem instances I, and a cost function F, the goal is to select a set of algorithms AP ⊆ S, such that the cost function is minimized over all problem instances. This can be considered a relaxed version of the problem defined in Equation (2) where, instead of mapping problem instances to algorithms, the main challenge is to design the best-performing algorithm portfolio over all instances, consisting of the selected algorithms Si ∈ S.

The constituent algorithms selection problem has been primarily addressed by using offline selection methods. Such methods are applied prior to the execution of the algorithm portfolio. They are based on a priori knowledge of the average performance of each candidate algorithm over all problems at hand. If there is no adequate information on the algorithms, a preprocessing phase is usually needed to identify their relevant properties. However, this can be a time-consuming and laborious task in cases of hard real-life problems.

In [2] a general selection scheme was proposed to tackle the constituent algorithms selection problem. This scheme admits as input the set of all available algorithms and the targeted problems, and generates the set of constituent algorithms based on various selectors. The number of selected algorithms is either specified by the user or determined by the used selector. The evaluation of the selectors in [2] was conducted on a set of instances of the maintenance scheduling problem, using random data generated from the same distribution.


Comparisons among different selectors revealed the superiority of the selector delete distance (SDI) and the selector racing (SRC) policies [2]. The SDI technique applies all the available algorithms on a declining list of problem instances. First, the order of the available instances is randomized, and then each problem instance is individually solved by each algorithm in turn. The algorithm that solves the problem instance (i.e., attains an objective function value below a user-defined threshold) is selected as a constituent algorithm, while the problem instance is removed from the list. The process ends when all instances are solved by an algorithm. On the other hand, the SRC method uses the F-Race method to select algorithms. Specifically, each problem instance is randomly selected and tackled by all algorithms in turn. If the instance is solved once, it is not considered again. Algorithms that do not perform adequately well are discarded, based on statistical comparisons of their performance with the rest. This procedure is terminated when n constituent algorithms have been selected.

A portfolio of population-based algorithms was considered in [15], consisting of evolutionary algorithms for numerical optimization problems. The portfolio is equipped with an online mechanism used to automatically select its constituent algorithms. The selection mechanism is allocated a portion of the available computational budget and relies on the estimated performance matrix method applied to each candidate evolutionary algorithm. Specifically, each algorithm is applied k1 times to each one of the available k2 problem instances. Hence, for each algorithm Si, a k1 × k2 performance matrix PM_i is formed, where each component is the objective function value of the solution detected in a single run of the algorithm on the specific problem instance. The matrices are eventually used to select the constituent algorithms of the portfolio, while the remaining computational budget is utilized to solve the problem at hand.

The role of the synergy of the constituent algorithms in the success of an algorithm portfolio has been recognized through theoretical and experimental evidence [9]. The use of portfolios instead of individual algorithms aims at alleviating possible weaknesses of one algorithm by using another one with complementary performance. For example, let S1, S2 ∈ S be two algorithms, and let I1, I2, I3, I4 be four test problems. Suppose that S1 performs better than S2 on instances I1 and I2, whereas S2 is more successful than S1 on instances I3 and I4. Including both algorithms in a portfolio seems to be a reasonable choice, as they perform complementarily on the considered instances. Thus, complementarity of the constituent algorithms in a portfolio arises as an important issue in the algorithm selection phase. Moreover, complementarity is essential even during the run of the portfolio. For example, the use of both exploration-oriented (global search) algorithms, which are very useful at the early stages of the optimization procedure, and exploitation-oriented (local search) algorithms, which conduct more refined search around the best detected solutions at the final stages of the optimization procedure, can be highly beneficial. A combination of the two can prevent the undesirable premature convergence problem. Recent experimental evidence on portfolios consisting of both population-based and local search algorithms justified the above claims on challenging manufacturing problems [12].
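The complementarity example above (S1 best on I1 and I2, S2 best on I3 and I4) can be turned into a deliberately naive selector. The following greedy cover is our own illustration, not a published method; it assumes a precomputed table of per-instance winners:

```python
def select_complementary(wins, n_max):
    """Greedy cover: wins[a] is the set of instances on which algorithm a is
    the best performer; add algorithms until every instance has a winner in
    the portfolio or the size limit n_max is reached."""
    uncovered = set().union(*wins.values())
    portfolio = []
    while uncovered and len(portfolio) < n_max:
        best = max(wins, key=lambda a: len(wins[a] & uncovered))
        if not wins[best] & uncovered:
            break                        # no remaining algorithm adds coverage
        portfolio.append(best)
        uncovered -= wins[best]
    return portfolio

# The example from the text: S1 wins on I1 and I2, S2 wins on I3 and I4.
wins = {"S1": {"I1", "I2"}, "S2": {"I3", "I4"}}
print(select_complementary(wins, n_max=2))  # -> ['S1', 'S2']
```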


The identification of complementary algorithms is not a trivial task. To this end, a new technique for designing portfolios with algorithms that exhibit complementarity properties was introduced in [8]. This technique models the constituent algorithm selection problem as an election task, where each problem "votes" for some of the available algorithms. An algorithm Si is considered to be better than an algorithm Sj for a particular problem if it achieves better performance in terms of the number of function evaluations necessary for attaining a specific objective value threshold. The method consists of two phases. In the first (initialization) phase, a subset of the available algorithms is generated. This subset contains all solvers that achieve the best performance for at least one of the problems at hand. In the second phase, an iterative procedure is initiated, where each problem defines a partial ordering of the available algorithms based on the proposed performance measure. The algorithms that are preferred by the majority of the problems are then added to the portfolio.

Summarizing, the constituent algorithms selection problem in algorithm portfolios is an open problem directly connected to the well-known algorithm selection problem. Although its relevance and significance for the portfolio's performance are supported by strong experimental evidence, it is probably the least studied problem in relevant works. This can be partially attributed to the existence of abundant data for various optimization problem types, which may narrow the selection to specific algorithm types. This way, the user can make more informed decisions on the considered candidates. Nevertheless, integrated solutions that would offer ad hoc selection options per problem or automate the whole selection procedure, considering also variable numbers of algorithms in the portfolio, would add significant merit to the general concept of algorithm portfolios.

3.2 Allocation of Computational Budget

The allocation of computational budget to the constituent algorithms is perhaps the most studied topic in the algorithm portfolios literature. The allocation problem can be summarized in two essential questions:

1. What fraction of the total computational budget shall be assigned to each constituent algorithm?
2. How is this budget allocated to the algorithms during the run?

Following the notation of Section 2, budget allocation requires the determination of the set T = {T1, T2, ..., Tn}, where Ti stands for the budget (function evaluations or running time) allocated to the constituent algorithm Si, i = 1, 2, ..., n, such that Equation (1) holds. The goal is to distribute the available computational resources among the n constituent algorithms such that the expected objective function value, denoted as Ef, is minimized, i.e.,

$$\min_{T} \; E_f, \quad \text{subject to} \quad \sum_{i=1}^{n} T_i = T_{\max} \quad \text{and} \quad T_i \geq 0, \; \forall i. \qquad (3)$$


The expected objective function value is averaged over a number of independent experiments for each problem instance under consideration. Naturally, the assignment of computational budget to algorithms has a significant performance impact. Assigning computational resources prior to the execution of the algorithm portfolio may be insufficient, since it neglects the online dynamics of each algorithm. For this reason, online techniques for the dynamic allocation of resources during the run have recently been proposed [4, 12, 19].

In [4] the proposed algorithm portfolio is composed of both population-based and local search metaheuristics, and it is applied to the dynamic maximal covering location problem. The portfolio is equipped with a learning mechanism that distributes the available credit (namely, computational budget) among the constituent algorithms according to their performance. The credit assigned to each algorithm is based on a weight function that takes into account the credit of each algorithm in the previous iteration, a reward, and a penalization score. The reward score is added to the current weight if the algorithm improves the best-so-far solution. The penalization score is subtracted from the current weight when the algorithm results in a solution that is inferior to the overall best. Eventually, the weight is used to define a selection probability that is fed to a roulette wheel procedure. The procedure above is incorporated into the main algorithm selection scheme and determines the algorithm that will be executed in the next iteration. In the specific study, the term "iteration" is interpreted differently for each type of constituent algorithm. Thus, in local search methods, an iteration is defined as one application of the employed neighborhood operator and the computation of the corresponding objective function value. In population-based metaheuristics, an iteration refers to the sequential application of the underlying evolutionary operators, such as crossover, mutation, and selection.

A different population-based portfolio was introduced in [19]. The algorithms of that portfolio are individually executed, without any exchange of information among them. Also, they are executed interchangeably, i.e., the portfolio automatically switches from one algorithm to another as a function of the available computational resources. During the search procedure, the performance of each algorithm is predicted by using its past objective function values. Specifically, a future time point is determined and the performance curve of each algorithm is extrapolated to that point. Extrapolation is conducted by applying a linear regression model to the performance curve of each algorithm between iterations i − l and i. As l can admit different values, a number of linear models (straight lines) are the outcome of this procedure. In turn, each linear model results in a corresponding predicted value, and all these values are used to create a bootstrap probability distribution. The final predicted value is sampled from the bootstrap probability distribution. Eventually, the algorithm that achieves the highest predicted function value is executed in the next iteration. Here, an iteration corresponds to the application of the algorithm's operators and the computation of the corresponding objective values, under the assumption that all algorithms have identical population sizes.
An advantage of this allocation scheme is that it involves few parameters, which spares the practitioner the tedious task of tuning them.
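A rough sketch of this prediction-based choice follows. It is a simplification of ours: least-squares lines over a few window lengths are averaged instead of bootstrap-sampled, and minimization is assumed, so the lowest predicted value wins:

```python
def fit_line(ys):
    """Least-squares fit of y = a*t + b to the points (0, ys[0]), (1, ys[1]), ..."""
    n = len(ys)
    t_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(ys))
    den = sum((t - t_mean) ** 2 for t in range(n))
    a = num / den
    return a, y_mean - a * t_mean

def predict(history, horizon, windows=(3, 5, 8)):
    """Extrapolate an algorithm's best-so-far curve 'horizon' iterations ahead,
    averaging predictions obtained from several regression windows."""
    preds = []
    for w in windows:
        ys = history[-w:]
        if len(ys) >= 2:
            a, b = fit_line(ys)
            preds.append(a * (len(ys) - 1 + horizon) + b)
    return sum(preds) / len(preds) if preds else history[-1]

def next_algorithm(histories, horizon=5):
    """Run next the algorithm with the best (lowest) predicted value."""
    return min(histories, key=lambda alg: predict(histories[alg], horizon))

histories = {"A": [9.0, 7.5, 6.1, 5.0, 4.2], "B": [8.0, 7.9, 7.8, 7.8, 7.7]}
print(next_algorithm(histories))  # -> 'A' (steeper predicted descent)
```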


Another allocation scheme was proposed in [12], based on stock trading models similar to the ones that motivated the development of algorithm portfolios. The proposed parallel algorithm portfolio embeds a market trading-based budget allocation mechanism. This is used to dynamically distribute the available computational budget, rewarding the best-performing algorithms of the portfolio with additional execution time. The core idea behind the allocation mechanism assumes constituent algorithm-investors that invest in elite solutions, using execution time as the trading currency. Both population-based and local search metaheuristics are incorporated into the portfolio studied in [12]. Moreover, the constituent algorithms are concurrently executed using the master–slave parallelism model, where slave-nodes host the constituent algorithms and the master-node coordinates the procedure.

According to the proposed scheme in [12], all algorithms are initially allocated equal computational budgets, which in this case is running time. Then, a fraction of the assigned time (investment time) is used by each algorithm to buy solutions from other algorithms, while its remaining time is devoted to the algorithm's own execution. During this procedure, the master-node retains an archive of one elite solution per algorithm. For each stored solution, a price is computed taking into consideration the objective values of all archived solutions in the elite set. Obviously, high-quality solutions are also the most costly ones. In the case that an algorithm cannot improve its own best solution for a specific time period, it decides to bid for a better solution among the archived ones. To do so, the buyer-algorithm contacts the master-node to initiate a profitable bargain. Among the available archived solutions, the buyer-algorithm chooses to buy the one that maximizes a specific quality measure, called the return on investment index [12]. In simple words, among the affordable archived solutions (determined by its available investment time and the solution prices), the buyer-algorithm chooses to buy the solution that maximizes the improvement over its own best solution. Then, the seller-algorithm, i.e., the one that discovered the traded solution, receives from the buyer-algorithm the amount of computational budget (running time) specified by the traded solution's price. Overall, the proposed allocation mechanism enables the best-performing algorithms to sell solutions more frequently, thus gaining additional execution time during their run. Note that the total execution time that was allocated initially to the algorithm portfolio remains unchanged throughout the optimization procedure; it is simply reallocated to the algorithms according to the aforementioned dynamic scheme.
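As a toy illustration of the trading mechanism: the pricing rule and the improvement-per-price criterion below are our simplifications; [12] defines solution prices and the return on investment index differently.

```python
def prices(elites):
    """Toy pricing: the better an archived solution's objective value (lower f),
    the higher its price."""
    worst = max(f for _, f in elites.values())
    return {alg: 1.0 + (worst - f) for alg, (_, f) in elites.items()}

def buy_solution(buyer, elites, wallet):
    """Buyer spends investment time on the affordable archived solution that
    maximizes improvement per unit of price; the seller earns that time."""
    price = prices(elites)
    own_f = elites[buyer][1]
    affordable = [a for a in elites
                  if a != buyer and price[a] <= wallet[buyer] and elites[a][1] < own_f]
    if not affordable:
        return None
    seller = max(affordable, key=lambda a: (own_f - elites[a][1]) / price[a])
    wallet[buyer] -= price[seller]   # running time flows from the buyer...
    wallet[seller] += price[seller]  # ...to the seller as extra execution time
    return elites[seller]

elites = {"VNS": ([0, 1], 3.0), "ILS": ([1, 0], 5.0)}   # (solution, objective)
wallet = {"VNS": 10.0, "ILS": 10.0}
print(buy_solution("ILS", elites, wallet), wallet)
```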
Summarizing, the allocation of computational budget to the constituent algorithms of an algorithm portfolio is the main key to the portfolio's efficiency. The mechanism shall be capable of identifying the most promising algorithms, either offline, based on historical performance data or preprocessing, or online, based on current feedback from the algorithms during their execution, and of awarding them additional fractions of the total budget. Online approaches fit their budget allocation to the specific conditions during the portfolio's execution. Thus, they can be highly reactive to the course of the optimization procedure. On the other hand, their outcome is hardly reusable in similar problems, in contrast to offline methods. Given the variety of optimization problems and the diversity of algorithm characteristics, the possibility of developing universal optimal budget allocation schemes seems questionable. Nevertheless, it is the authors' belief that enhanced ad hoc procedures can be developed by thoroughly analyzing (offline or online) the performance profiles of the constituent algorithms. To this end, machine learning and time series analysis can be valuable.

3.3 Interaction of Constituent Algorithms

In early algorithm portfolio models, the constituent algorithms were considered to be isolated [11, 14, 19], i.e., there was no form of interaction among them. In other works, some form of interaction existed [9, 12, 16]. Interaction comes in various forms. For instance, in [9] the proposed portfolios comprise different population-based algorithms. The portfolio holds separate subpopulations, each one assigned its own computational budget. Also, each subpopulation adopts one of the constituent algorithms. Interaction takes place through the exchange of individuals among the subpopulations. This process is the well-known migration property, and it is applied following a simple scheme. In particular, two parameters are used, namely the migration interval and the migration size, which are defined prior to the portfolio's run. The migration interval determines the number of iterations between two consecutive migrations, while the migration size stands for the number of migrating individuals.

In [16], a multi-method framework that accommodates various evolutionary algorithms was proposed for real-valued optimization problems. According to this approach, the constituent algorithms are concurrently executed and share a joint population of search points. In the beginning, an initial population is randomly and uniformly created. Then, instead of using a single algorithm, all the constituent algorithms contribute offspring to the population of the next generation. The number of offspring per algorithm is determined through a self-adaptive learning strategy. This strategy follows a two-step procedure. In the first step, the contribution of each algorithm to the overall improvement of the objective function value is computed. In the second step, this value is used to determine the fraction of offspring generated by each algorithm in the next iteration. Obviously, the algorithms that achieve higher improvements are offered the chance to be more productive than the rest.

Besides the use of joint populations or subpopulations, other types of information can be exchanged among the constituent algorithms. For example, in [12] the proposed portfolios include both population-based and local search algorithms, which are executed in parallel based on a master–slave model. Then, whenever an algorithm fails to improve its findings for a specific amount of time, it receives the best solution achieved by a different algorithm.

Overall, the interaction of constituent algorithms can be beneficial for an algorithm portfolio, especially when population-based algorithms are used. Migration of individuals between populations can offer the necessary boost to less efficient algorithms, or it may lead to improved solutions if it is used as an initial condition for a local search algorithm.


Moreover, information exchange promotes interaction and cooperation between the algorithms, also promoting performance correlations. Nevertheless, it is a concept that remains to be studied, both individually and in combination with the budget allocation approaches.
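A minimal sketch of such a migration step follows; the ring topology and the worst-replaced-by-immigrants policy are our assumptions, while the migration interval and migration size parameters follow the description above:

```python
import random

def migrate(subpops, fitness, migration_size):
    """Ring migration: each subpopulation sends copies of its best individuals
    to the next one, which replaces its worst individuals by the immigrants."""
    emigrants = [sorted(pop, key=fitness)[:migration_size] for pop in subpops]
    for k, pop in enumerate(subpops):
        incoming = emigrants[(k - 1) % len(subpops)]
        pop.sort(key=fitness, reverse=True)            # worst individuals first
        pop[:migration_size] = [list(ind) for ind in incoming]

def evolve(subpops, fitness, iterations, migration_interval, migration_size):
    """Each subpopulation runs its own (toy) search step; every
    migration_interval iterations, migration_size individuals migrate."""
    for it in range(iterations):
        for pop in subpops:
            for ind in pop:                            # toy mutation step
                j = random.randrange(len(ind))
                ind[j] += random.gauss(0, 0.1)
        if (it + 1) % migration_interval == 0:
            migrate(subpops, fitness, migration_size)

random.seed(1)
f = lambda ind: sum(v * v for v in ind)                # minimization
subpops = [[[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
           for _ in range(4)]
evolve(subpops, f, iterations=50, migration_interval=10, migration_size=2)
print(min(f(ind) for pop in subpops for ind in pop))
```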

3.4 The Role of Parallelism

The constituent algorithms of an algorithm portfolio can be executed either interchangeably on a single CPU or concurrently on many CPUs. Three essential parallelism models are used in parallel portfolios. Probably the most popular one is the master–slave model, which has been widely used [12, 14]. In this model, the slave-nodes usually host the constituent algorithms, while the master-node is devoted to coordination and book-keeping services. Figure 1 depicts a parallel heterogeneous and a parallel homogeneous portfolio based on the master–slave model. The heterogeneous portfolio consists of two variable neighborhood search (VNS) metaheuristics and two iterated local search (ILS) algorithms; the two instances in each pair of metaheuristics assume the same parameter configuration. The homogeneous portfolio includes four copies of the VNS algorithm with the same configuration. Note that each metaheuristic runs on a single slave-node, which sends (e.g., periodically) its best solution to the master-node during the optimization procedure.

[Fig. 1 Heterogeneous (left) and homogeneous (right) parallel algorithm portfolios: a master-node connected to slave-nodes hosting VNS, VNS, ILS, and ILS (left), and to four slave-nodes hosting VNS (right)]

A different approach is the fine-grained parallelism model. This is used mostly for population-based metaheuristics when massively parallel computers are available [1]. According to this model, the algorithm consists of a single population, and each individual that belongs to at least one neighborhood is confined to communicate with the members of its own neighborhood. Overlaps among different neighborhoods may also exist, promoting interactions among all individuals of the population. Obviously, the neighborhood topology and the number of individuals that comprise a neighborhood affect the performance of this model. Typically, a number of individuals of the population are assigned to a single processing unit, while the ideal case for time-consuming problems would allow only one individual per CPU. However, in ordinary problems such distributed computation may impose additional communication delays and should be avoided.

Another model is the so-called coarse-grained parallelism model, which is also very common in population-based algorithms [1]. In this model, a population consists of individual subpopulations, each one running on a single CPU. The following parameters control the performance of such models:

1. Model topology: it specifies the communication links among subpopulations.
2. Migration period: it defines the frequency of migrations.
3. Migration size: it determines the number of migrating individuals.
4. Migration type: it distinguishes between synchronous and asynchronous migration.

Synchronous migration takes place at prespecified time points (e.g., periodically), whereas asynchronous migration dictates that subpopulations communicate when specific events occur.

It becomes obvious that the design of parallel algorithm portfolios raises a number of issues. Special attention is required in the selection of a suitable parallelism model and the tuning of its parameters [1]. Despite that, parallelism shall be promoted over sequential approaches in order to take full advantage of modern computer systems and increase the portfolios' time-efficiency [1]. An equally important goal refers to effectiveness in terms of solution quality. Even a simplistic portfolio that comprises copies of the same algorithm can in some cases outperform its sequential counterparts, under the assumption that both models receive equal computational resources [14]. This is attributed to the fact that more than one thread concurrently explores different parts of the search space; hence, the probability of detecting better solutions is significantly increased.

Overall, parallelism can offer obvious advantages to algorithm portfolios in terms of time-efficiency. Also, the parallelism model can influence the synchronization and information flow in interactive portfolios. Thus, it is an open problem of interest that shall be carefully considered in combination with the previously exposed open problems in order to guarantee the development of more efficient approaches.
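For illustration, here is a toy master–slave portfolio using Python's multiprocessing; the sphere objective, the reporting period, and the four solver variants are our own choices, and a real implementation would add interaction and budget reallocation:

```python
import multiprocessing as mp
import random

def slave(algorithm_id, step, evaluations, queue):
    """Slave-node: runs one constituent algorithm (a toy random search on the
    sphere function) and periodically reports its best value to the master."""
    random.seed(algorithm_id)               # workers must not share RNG streams
    x = [random.uniform(-5, 5) for _ in range(5)]
    fx = sum(xi * xi for xi in x)
    for k in range(evaluations):
        y = [xi + random.gauss(0, step) for xi in x]
        fy = sum(yi * yi for yi in y)
        if fy < fx:
            x, fx = y, fy
        if (k + 1) % 1000 == 0:
            queue.put((algorithm_id, fx))   # periodic report to the master-node

if __name__ == "__main__":
    queue = mp.Queue()
    steps = [0.5, 0.5, 0.05, 0.05]          # four constituent variants
    procs = [mp.Process(target=slave, args=(i, s, 5000, queue))
             for i, s in enumerate(steps)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Master-node book-keeping: best reported value per algorithm
    # (polling like this is fine for a toy; robust code would count messages).
    best = {}
    while not queue.empty():
        i, f = queue.get()
        best[i] = min(f, best.get(i, float("inf")))
    print(best)
```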

4 Conclusions

Metaheuristics-based algorithm portfolios have gained ongoing popularity as promising alternatives for tackling hard optimization problems. So far, their effectiveness and efficiency have been demonstrated on several challenging problems spanning diverse research areas. The present chapter provided information on recent research trends and the most important issues in the design of efficient algorithm portfolios. In the relevant literature, four essential open problems can be distinguished, namely the selection of constituent algorithms, resource allocation schemes, interaction among the algorithms, and parallelism against sequential implementations. Each problem individually but, mostly, their interplay in the design of an algorithm portfolio can draw the borderline between a top-performing scheme and a failure.

Research developments of the past decade offer the necessary motivation for further discussion and elaboration on algorithm portfolios. It is the authors' belief that the accessibility of powerful parallel machines in the forthcoming years will further expand research on algorithm portfolios. Latest developments in machine learning and data analysis cultivate the ground toward this goal, placing algorithm portfolios in a salient position among the available algorithmic artillery for global optimization.

References

1. Akay, R., Basturk, A., Kalinli, A., Yao, X.: Parallel population-based algorithm portfolios. Neurocomputing 247, 115–125 (2017)
2. Almakhlafi, A., Knowles, J.: Systematic construction of algorithm portfolios for a maintenance scheduling problem. In: IEEE Congress on Evolutionary Computation, Cancun, Mexico, pp. 245–252 (2013)
3. Boussaïd, I., Lepagnot, J., Siarry, P.: A survey on optimization metaheuristics. Inf. Sci. 237, 82–117 (2013)
4. Calderín, J.F., Masegosa, A.D., Pelta, D.A.: An algorithm portfolio for the dynamic maximal covering location problem. Memetic Comput. 9(2), 141–151 (2017)
5. Gomes, C.P., Selman, B.: Algorithm portfolios. Artif. Intell. 126(1), 43–62 (2001)
6. Hart, E., Sim, K.: On constructing ensembles for combinatorial optimisation. Evol. Comput. 26(1), 67–87 (2018)
7. Huberman, B.A., Lukose, R.M., Hogg, T.: An economics approach to hard computational problems. Science 275(5296), 51–54 (1997)
8. Muñoz, M.A., Kirley, M.: Icarus: identification of complementary algorithms by uncovered sets. In: IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, pp. 2427–2432 (2016)
9. Peng, F., Tang, K., Chen, G., Yao, X.: Population-based algorithm portfolios for numerical optimization. IEEE Trans. Evol. Comput. 14(5), 782–800 (2010)
10. Rice, J.R.: The algorithm selection problem. Adv. Comput. 15, 65–118 (1976)
11. Shukla, N., Dashora, Y., Tiwari, M., Chan, F., Wong, T.: Introducing algorithm portfolios to a class of vehicle routing and scheduling problem. In: 2nd International Conference on Operations and Supply Chain Management (OSCM), Bangkok, Thailand, pp. 1015–1026 (2007)
12. Souravlias, D., Parsopoulos, K.E., Alba, E.: Parallel algorithm portfolio with market trading-based time allocation. In: Lübbecke, M., Koster, A., Letmathe, P., Madlener, R., Peis, B., Walther, G. (eds.) Operations Research Proceedings 2014, pp. 567–574. Springer, Berlin (2016)
13. Souravlias, D., Parsopoulos, K.E., Kotsireas, I.S.: Circulant weighing matrices: a demanding challenge for parallel optimization metaheuristics. Optim. Lett. 10(6), 1303–1314 (2016)
14. Souravlias, D., Parsopoulos, K.E., Meletiou, G.C.: Designing bijective S-boxes using algorithm portfolios with limited time budgets. Appl. Soft Comput. 59, 475–486 (2017)
15. Tang, K., Peng, F., Chen, G., Yao, X.: Population-based algorithm portfolios with automated constituent algorithms selection. Inf. Sci. 279, 94–104 (2014)
16. Vrugt, J.A., Robinson, B.A., Hyman, J.M.: Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Trans. Evol. Comput. 13(2), 243–259 (2009)
17. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82 (1997)
18. Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla-07: the design and analysis of an algorithm portfolio for SAT. In: International Conference on Principles and Practice of Constraint Programming, Providence, RI, USA, pp. 712–727 (2007)
19. Yuen, S.Y., Chow, C.K., Zhang, X., Lou, Y.: Which algorithm should I choose: an evolutionary algorithm portfolio approach. Appl. Soft Comput. 40, 654–673 (2016)

Integral Simplex Methods for the Set Partitioning Problem: Globalisation and Anti-Cycling

Elina Rönnberg and Torbjörn Larsson

Abstract The set partitioning problem is a generic optimisation model with many applications, especially within scheduling and routing. It is common in the context of column generation, and its importance has grown due to the strong developments in this field. The set partitioning problem has the quasi-integrality property, which means that every edge of the convex hull of the integer feasible solutions is also an edge of the polytope of the linear programming relaxation. This property enables, in principle, the use of solution methods that find improved integer solutions through simplex pivots that preserve integrality; pivoting rules with this effect can be designed in a few different ways. Although seemingly promising, the application of these approaches involves inherent challenges. Firstly, they can get trapped at local optima, with respect to the pivoting options available, so that global optimality can be guaranteed only by resorting to an enumeration principle. Secondly, set partitioning problems are typically massively degenerate, and a big hurdle to overcome is therefore to establish anti-cycling rules for the pivoting options available. The purpose of this chapter is to lay a foundation for research on these topics.

Keywords Quasi-integrality · Set partitioning · Integral simplex method · Anti-cycling rules

1 Introduction

The set partitioning problem, as well as its close relatives, the set covering and packing problems, arises in a wide variety of planning situations, especially within scheduling, routing, delivery, and location applications [e.g., 3].

E. Rönnberg · T. Larsson
Department of Mathematics, Linköping University, Linköping, Sweden
e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_15


One of the most well-known set partitioning formulations originates from solving cockpit crew scheduling problems, where each flight leg of an airline must be assigned one crew. Another example is the political districting problem, where a geographical region (e.g., a state) should be partitioned into districts such that each population unit (e.g., a county) is assigned to one district.

The set partitioning problem can be stated as follows.

$$[SPP] \qquad z^* = \min \; \sum_{j \in N} c_j x_j$$
$$\text{s.t.} \quad \sum_{j \in N} a_{ij} x_j = e_i, \quad i \in M,$$
$$x_j \in \{0, 1\}, \quad j \in N.$$

Here, N = {1, ..., n} and M = {1, ..., m} are the sets of indices for the variables and constraints, respectively. The costs c_j, j ∈ N, are integers, the constraint coefficients a_ij, i ∈ M, j ∈ N, are zeros or ones, and e_i = 1, i ∈ M.

Thanks to the importance of the set partitioning problem, the properties of its set of feasible solutions have been subject to extensive studies in order to obtain strong formulations; for references to such studies, see Hoffman and Padberg [3]. This chapter is devoted to one particular property of the polytope of feasible solutions to the linear programming relaxation of the set partitioning problem, namely the quasi-integrality property, and to strategies for solving the set partitioning problem facilitated by this property. The quasi-integrality property implies that there exists a path between every pair of feasible integer points, consisting only of such edges of the polytope that connect feasible integer points by simplex pivots on one-entries (since any nondegenerate pivot from an integer feasible solution on any other entry would yield an infeasible point). Other optimisation problems that have the quasi-integrality property are the simple plant location problem [13, Section 7.3], the uncapacitated network design problem [2], and the one and two facility, one commodity, network design problem [8].

The existence of paths defined by pivots on one-entries between pairs of feasible integer points enables the use of linear programming techniques for finding improved integer solutions. This possibility was first outlined by Trubin [10] in the 1960s, and it was then more thoroughly described and named the integral simplex method by Yemelichev et al. [13, p. 190]. Thompson [9] proposed a practical realisation of the integral simplex method, in terms of a specific pivoting rule; this realisation acts as a local search method and is complemented with a tailored branching strategy to ensure the finding of an optimal solution. A recent contribution to the field of set partitioning and the integral simplex method can be found in [14], where a specially adapted direction-finding subproblem is employed for mitigating the weaknesses of the straightforward integral simplex method, as implemented by Thompson [9].


The authors of this chapter have in Rönnberg and Larsson [7] proposed an extension of the integral simplex method of Thompson [9], called all-integer pivots. We have further, in Rönnberg and Larsson [6] and Rönnberg and Larsson [7], considered the integration of the integral simplex method of Thompson [9] and all-integer pivots with the column generation principle [e.g., 4, 11].

This chapter contains both known results from the literature, which are presented in a new way with the purpose of making them easily accessible, and some new insights into how the quasi-integrality property can be used when solving the set partitioning problem. The purpose of the chapter is to lay a foundation for research in this field, especially concerning the globalisation of integral simplex type methods and the prevention of cycling in such methods.

2 Quasi-Integrality and the Integral Simplex Method

Throughout the chapter, we assume that the problem SPP is feasible and that n ≥ m holds. Let a_j = (a_1j, ..., a_mj)^T, j ∈ N, and assume without loss of generality that the matrix (a_1, ..., a_n) has no zero column and full rank. The linear programming relaxation of SPP, obtained when x_j ∈ {0, 1} is replaced by x_j ≥ 0, j ∈ N, is denoted SPP_LP. Since the feasible set of SPP_LP is contained in the unit hypercube, all feasible integer points of SPP are extreme points of this set.

Consider an extreme point of the feasible set of SPP_LP and a basis associated with the point. (If an extreme point is degenerate, it is typically associated with more than one basis.) Let I be the index set of the basic columns, denote the basis by B = (a_j)_{j ∈ I}, and let J be the index set of the non-basic columns. Let ē = B^{-1} e, where e = (e_1, ..., e_m)^T, and ā_j = B^{-1} a_j, j ∈ N, be the updated right-hand side and constraint columns, respectively. Finally, let u^T = c_B^T B^{-1}, where c_B = (c_j)_{j ∈ I}, be the complementary dual solution, and let c̄_j = c_j − u^T a_j, j ∈ J, be the reduced costs. Two bases are called adjacent if they differ in exactly one column, and two extreme points are called adjacent if there exists a basis belonging to one of the extreme points that is adjacent to a basis belonging to the other extreme point.

We begin by formally introducing the quasi-integrality property and presenting a proof of this property for the set partitioning problem.

Definition 1 Let X be a polytope and X_I its set of integer points. The polytope X is called quasi-integral if every edge of the convex hull of X_I is also an edge of X.

The first to show that the polytope of feasible solutions to SPP_LP has this property was Trubin [10]. Given below is what in our opinion is the more accessible proof given by Yemelichev et al. [13, Theorem 7.2], though with some minor rephrasings.

Theorem 1 The feasible polytope of SPP_LP is quasi-integral.


The main argument used in the proof is that the quasi-integrality property holds if each pair of integer extreme points belongs to an integral face of the polytope describing the feasible solutions to the linear programming relaxation of the integer problem. The term integral face here refers to a face that has the integrality property, which holds if the face can be described by linear constraints of which the matrix is totally unimodular. For the proof we need the following two lemmas; the first is the above-mentioned sufficient condition for a polytope to be quasi-integral and is adopted from Yemelichev et al. [13, Proposition 7.1].

Lemma 1 If, given any two integral vertices of X, there is an integral face containing them, then X is quasi-integral.

The second lemma gives a sufficient condition for a matrix to be totally unimodular, and is taken from Wolsey [12, p. 39].

Lemma 2 A matrix A with elements a_ij is totally unimodular if

(1) a_ij ∈ {0, 1, −1}, ∀ i, j;
(2) each column contains at most two nonzero coefficients;
(3) there exists a partition (M_1, M_2) of the set of rows such that each column j containing two nonzero coefficients satisfies

$$\sum_{i \in M_1} a_{ij} - \sum_{i \in M_2} a_{ij} = 0.$$

The results needed to prove Theorem 1 are now available, and the proof is as follows.

Proof of Theorem 1 According to Lemma 1, and because all feasible integer points of SPP are extreme points of the feasible polytope of SPP_LP, it suffices to show that any two feasible integer points, x^1 and x^2, of SPP belong to an integral face of the polytope of SPP_LP. Partition the index set N into the three subsets N_0 = {j : x_j^1 = x_j^2 = 0}, N_1 = {j : x_j^1 = x_j^2 = 1}, and N_2 = {j : x_j^1 ≠ x_j^2}. The set of feasible solutions of SPP_LP that satisfy x_j = 0, j ∈ N_0, and x_j = 1, j ∈ N_1, constitutes a face of the polytope of feasible solutions to SPP_LP. To show that this face is integral, it suffices to show that the matrix A_2 = (a_j)_{j ∈ N_2} is totally unimodular. Since x^0 = (x^1 + x^2)/2 is a feasible solution to SPP_LP, it holds, for each i, that

$$2 = \sum_{j \in N} a_{ij} (x_j^1 + x_j^2) \geq \sum_{j \in N_2} a_{ij}.$$


Hence, every row of A_2 contains at most two entries that are one. The columns of A_2 can be partitioned into two disjoint sets (M_1, M_2), which contain the columns such that x_j^1 = 1 and x_j^2 = 1, respectively. That is, the transpose of A_2 fulfils the conditions of Lemma 2 and is thereby totally unimodular. A matrix is totally unimodular if and only if its transpose is totally unimodular [e.g., 12, p. 39], so A_2 is totally unimodular and the face containing x^1 and x^2 is integral. □

The following property of the set partitioning problem, which is closely related to the quasi-integrality property, was shown in Balas and Padberg [1].

Theorem 2 Let x^1 be a feasible and integer solution to SPP_LP, associated with the basis B_1, and suppose that x^1 is not optimal in SPP. Denote by x^2 an optimal solution to SPP, and let B_2 be an associated basis in SPP_LP. Let J_1 and Q_2 be the index sets of the non-basic columns in x^1 and the one-valued basic variables in x^2, respectively. Then there exists a sequence of adjacent bases B_1^0, B_1^1, B_1^2, ..., B_1^p in SPP_LP, such that B_1^0 = B_1, B_1^p = B_2, and

(a) the associated vertices x^{1,0} = x^1, x^{1,1}, x^{1,2}, ..., x^{1,p} = x^2 are all feasible and integer,
(b) $\sum_{j \in N} c_j x_j^{1,0} \geq \sum_{j \in N} c_j x_j^{1,1} \geq \dots \geq \sum_{j \in N} c_j x_j^{1,p}$, and
(c) p = |J_1 ∩ Q_2|.

The sequence of bases of the theorem can be obtained through simplex pivots, if degenerate pivots are allowed to be made on any nonzero entry. The construction of this sequence does however require knowing the optimal solution x^2. A consequence of the theorem is that the Hirsch conjecture, see for example [5, p. 441], is true for integer solutions to SPP_LP. Since any vertex can be made optimal by adjusting the objective function, it follows from the theorem that for any two integer basic feasible solutions to SPP_LP, x^1 and x^2, there exists a sequence of adjacent bases of length p = |J_1 ∩ Q_2| such that the associated vertices are all feasible and integer.

From the theorem it follows that set partitioning problems can in principle be solved by integrality-preserving pivots that yield non-increasing objective values. The practical difficulty lies in degeneracy and, to the best of our knowledge, the lack of anti-cycling rules for integrality-preserving pivoting principles.

As long ago as in his paper from the 1960s, Trubin [10] outlined the idea of a method that utilises the fact that the polytope of SPP_LP has the quasi-integrality property. It is referred to as the integral simplex method, and is obtained by restricting the simplex method to pivots that preserve the integrality of the solution. The same idea was later also described in Yemelichev et al. [13, p. 190], but neither of them specify how to actually design an integral simplex type method. The only realisation of an integral simplex method that we are familiar with is by Thompson [9]. In his realisation of the method, it is started from a purely artificial basis and then simplex pivots on one-entries are performed. These pivots preserve the integrality of the solution and are referred to as pivots-on-one; because of their importance, we make the following definition.

Definition 2 Given a basis associated with a feasible solution to SPP_LP and an s ∈ J such that c̄_s < 0. Then


(1) a non-degenerate pivot-on-one is a pivot operation on an entry ā_rs = 1 such that

$$\min_{i \in M} \left\{ \frac{\bar{e}_i}{\bar{a}_{is}} \,\middle|\, \bar{a}_{is} > 0 \right\} = \frac{\bar{e}_r}{\bar{a}_{rs}} = 1, \quad \text{and}$$

(2) a degenerate pivot-on-one is a pivot operation on an entry ā_rs = 1 such that

$$\min_{i \in M} \left\{ \frac{\bar{e}_i}{\bar{a}_{is}} \,\middle|\, \bar{a}_{is} > 0 \right\} = \frac{\bar{e}_r}{\bar{a}_{rs}} = 0.$$

Both (1) and (2) are referred to as a pivot-on-one. Note that by definition, the entering variable shall have a negative reduced cost.

The huge advantage of the principle of the integral simplex method is clearly that all the solutions encountered are integral, but there are major difficulties associated with the method. Firstly, since set partitioning problems are often highly degenerate, it can be expected to be difficult to find a basis that enables a non-degenerate pivot, and there is a risk of a lot of time being spent on making degenerate pivots which will yield no improvement in the objective function. Secondly, and also because of degeneracy, the prevention of cycling is an important issue. If the integral simplex method with pivots-on-one is to be used for solving problems of practical interest, an effective (and preferably also efficient) rule for the prevention of cycling needs to be constructed, but as far as we know, no such rule has been published.

A drawback of the integral simplex method when implemented through the restriction to pivots-on-one is that one might reach a basis associated with a non-optimal extreme point where no further pivot-on-one is possible. (An example of this is given in Section 3.) For this reason, the integral simplex method based on pivots-on-one can be regarded as a local search method that can get trapped at a local optimum, as defined below.

Definition 3 A basis for an integer extreme point to SPP_LP is called a local optimum with respect to pivots-on-one if it is not possible to make a pivot-on-one.

By complementing the integral simplex method based on pivots-on-one with implicit enumeration, global optimality can eventually be reached, and such a strategy has been suggested and applied by Thompson [9]. If his integral simplex method is trapped at a local optimum which cannot be pruned from further consideration, then the feasible set is partitioned through a branching. This branching creates a subproblem for each variable with a negative reduced cost, and in the subproblem this variable is forced to take the value one. The fixation of a variable enables a reduction of the original problem, and each of the reduced problems is solved by the integral simplex method, with further branchings performed if necessary. Thompson applied his method to randomly generated problem instances and to problems arising in crew scheduling, and he reports that the results are promising.
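In tableau terms, candidate pivots-on-one can be enumerated directly from Definition 2. The following sketch is ours, using a dense explicit basis inverse and crude tolerances for clarity only; it returns all admissible (entering column, pivot row) pairs:

```python
import numpy as np

def pivot_on_one_candidates(A, c, basis):
    """Enumerate pivots-on-one (Definition 2) for the basis given by the
    column-index list 'basis': an entering column s with negative reduced
    cost and a pivot row r with abar[r, s] = 1 attaining a minimum ratio
    equal to 0 (degenerate) or 1 (non-degenerate)."""
    m, n = A.shape
    B_inv = np.linalg.inv(A[:, basis])     # dense inverse, for clarity only
    ebar = B_inv @ np.ones(m)              # updated right-hand side
    u = c[basis] @ B_inv                   # complementary dual solution
    candidates = []
    for s in range(n):
        if s in basis or c[s] - u @ A[:, s] >= 0:   # need cbar_s < 0
            continue
        abar_s = B_inv @ A[:, s]
        pos = abar_s > 1e-9
        if not pos.any():
            continue
        theta = np.min(ebar[pos] / abar_s[pos])
        if not (np.isclose(theta, 0) or np.isclose(theta, 1)):
            continue
        for r in range(m):
            if np.isclose(abar_s[r], 1) and np.isclose(ebar[r], theta * abar_s[r]):
                candidates.append((s, r, int(round(theta))))
    return candidates
```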


3 All-Integer Pivots

The development of the integral simplex method based on pivots-on-one has its point of departure in how to restrict the simplex method to maintain integrality. When studying the results presented above, it is natural to raise these questions: 'What type of pivots is it possible to make if integrality should be guaranteed, and how can we understand the mechanisms of such pivots?' These questions led us [7] to extend the integral simplex method with pivots-on-one to an integral simplex method based on all-integer pivots, to be described below. We will also provide arguments for why one should be cautious to extend the pivoting options beyond the all-integer pivots. To make the description more easily accessible, we consider throughout this section a set partitioning problem with the constraint matrix
$$A = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 \end{pmatrix}.$$
This example problem is called SPP_EX. Because the example will be used only to illustrate the quasi-integrality property, which is a property of the feasible polytope of SPP^LP, and the differences in pivoting options when using pivots-on-one and all-integer pivots, respectively, we do not specify an objective function. (Some of the pivots shown below are actually not possible according to Definition 2 above or Definition 5 below, since the condition of a negative reduced cost of the entering variable would be violated.)

The feasible set of the linear programming relaxation SPP_EX^LP has three extreme points. These are x^A = (1, 0, 0, 0, 0), x^B = (0, 1, 0, 0, 1), and x^C = (0, 1/2, 1/2, 1/2, 0). In Figure 1 the polytope describing the feasible set of SPP_EX^LP is illustrated by a graph, where the extreme points and edges of the polytope are represented by vertices and edges of the graph, respectively.

[Fig. 1 A graph where the vertices and edges represent the extreme points of SPP_EX^LP and the polytope edges that connect them, respectively]

Because of degeneracy, an extreme point can be associated with more than one basis. For x^A, there are five possible sets of basic variables, namely {x1, x2, x3}, {x1, x2, x4}, {x1, x3, x4}, {x1, x3, x5}, and {x1, x4, x5}. For x^B there are two possible sets of basic variables, {x2, x3, x5} and {x2, x4, x5}, and for x^C there is only one, {x2, x3, x4}.
[Fig. 2 A graph where the vertices within a super-vertex represent the possible bases for each extreme point. Each edge shows that there exists an edge of the polytope between the two extreme points]

[Fig. 3 The possible simplex pivots between the bases of SPP_EX^LP]
In Figure 2, all these possible bases for each extreme point are illustrated as vertices within a super-vertex representing the extreme point. As in Figure 1, each edge of the graph represents the fact that there exists an edge of the polytope between the two extreme points.

So far, quasi-integrality has been described as a geometric property of the polytope of feasible solutions of SPP^LP, but this property can of course also be considered from an algebraic point of view. When carrying out a pivot in the simplex method, a variable $x_s$, $s \in J$, such that $\bar{c}_s < 0$, is chosen to enter the basis, and a variable $x_r$, $r \in I$, such that
$$\min_{i \in M} \left\{ \frac{\bar{e}_i}{\bar{a}_{is}} \,\middle|\, \bar{a}_{is} > 0 \right\} = \frac{\bar{e}_r}{\bar{a}_{rs}}$$
is chosen to leave the basis. In Figure 3, the feasible bases of SPP_EX^LP are represented by vertices in the same way as in Figure 2, and all possible simplex pivots between them are represented by edges in the graph. Within the theory of degeneracy graphs, the graph in Figure 3 is called a representation graph; for further reading on degeneracy graphs, see Zörnig [15].


When solving the set partitioning problem, we are only interested in moving between bases representing integer extreme points, and derived below are sufficient conditions for integrality. When working with integer solutions to a linear program, the determinant of the basis is of special interest, and for future reference the following definition is given.

Definition 4 A linear programming basis B is called unimodular if $|\det(B)| = 1$.

The importance of unimodular bases is motivated as follows. Consider an extreme point of the feasible set of SPP^LP and, for $j \in I$, let $x_{j_i}$ denote the $i$th basic variable. The values of $x_{j_i}$, $i \in M$, can be calculated by using Cramer's rule, that is, as
$$x_{j_i} = \frac{\det(B^i)}{\det(B)}, \quad i \in M,$$
where $B^i$ is the matrix obtained when replacing the $i$th column of B by the column vector e. The value of $\det(B^i)$ is integral because the matrix $B^i$ is integral; hence, if $|\det(B)| = 1$, then all $x_{j_i}$, $i \in M$, have integral values. Further, the values must be nonnegative since the extreme point is feasible. If an extreme point of SPP^LP is associated with more than one basis, one of which is unimodular, then the argument above can be applied to this particular basis, which gives the following lemma.

Lemma 3 If an extreme point has some unimodular basis, then it is integral.

The sufficient condition of the lemma is not necessary, as shown by the following counterexample.

Example 1 Consider a set partitioning problem with the constraint matrix
$$A = \begin{pmatrix} 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 \end{pmatrix}.$$
Its linear programming relaxation has the three extreme points (0, 0, 0, 1, 0, 0), (0, 0, 1, 0, 0, 1), and (0, 1, 0, 0, 1, 0), which are associated with four, two, and two bases, respectively. All these bases have $|\det(B)| = 2$.

In the simplex method, the pivot operation that transforms the current basis B into the new basis $B_{new}$ by letting the variable $x_s$ enter the basis and the variable $x_r$ leave the basis can be represented by a pivot matrix P, see Murty [5, p. 141]. The pivot matrix P is an identity matrix of order m with column r replaced by the eta vector
$$\eta = \left( \frac{-\bar{a}_{1s}}{\bar{a}_{rs}}, \ldots, \frac{-\bar{a}_{r-1,s}}{\bar{a}_{rs}}, \frac{1}{\bar{a}_{rs}}, \frac{-\bar{a}_{r+1,s}}{\bar{a}_{rs}}, \ldots, \frac{-\bar{a}_{ms}}{\bar{a}_{rs}} \right)^T.$$
The relationship between the current basis and the new one is that $B_{new}^{-1} = P B^{-1}$.
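The claim of Example 1 can be verified by brute force. The sketch below is our own illustration (NumPy assumed); it enumerates all choices of four columns, keeps the feasible bases, and prints |det(B)| together with the corresponding basic solution. In agreement with the example, every feasible basis has |det(B)| = 2 and yields one of the three integer extreme points.

```python
import numpy as np
from itertools import combinations

A = np.array([[0, 1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 0],
              [1, 1, 1, 1, 0, 0]], dtype=float)
e = np.ones(4)

for cols in combinations(range(6), 4):
    B = A[:, cols]
    det = np.linalg.det(B)
    if abs(det) < 1e-9:
        continue                          # singular: not a basis
    xB = np.linalg.solve(B, e)
    if (xB >= -1e-9).all():               # feasible basic solution
        x = np.zeros(6)
        x[list(cols)] = xB
        print([c + 1 for c in cols], "|det(B)| =", round(abs(det)),
              " x =", np.round(x, 2))
```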


If we start from a unimodular basis, integrality can be maintained by allowing only such pivots that lead to a new unimodular basis. The relationship between the determinants of the current basis and the new one is that
$$\det(B_{new}) = \frac{\det(B)}{\det(P)},$$
where the determinant of P can be calculated by using expansion by minors on the $r$th row, yielding
$$\det(P) = \frac{(-1)^{r+s}}{\bar{a}_{rs}}.$$
Then, if $|\det(B)| = 1$, it holds that
$$|\det(B_{new})| = \frac{|\det(B)|}{|\det(P)|} = |\bar{a}_{rs}|,$$
which leads to the following lemma.

Lemma 4 A pivot from a unimodular basis B yields a new basis $B_{new}$ that is unimodular if and only if the pivot is performed on an entry $\bar{a}_{rs} = \pm 1$.

For a pivot to be between two distinct extreme points, it needs to be non-degenerate, and if the current solution is feasible, a non-degenerate pivot needs to be made on a positive entry $\bar{a}_{rs}$ in order for the new solution to be feasible. This is not required in the degenerate case, however, since the pivot is then made between two adjacent bases for the same solution. This observation leads to our definition of all-integer pivots, and to a theorem providing a sufficient condition for a sequence of pivots to be between integer extreme points only.

Definition 5 Given a unimodular basis associated with a feasible solution to SPP^LP and an $s \in J$ such that $\bar{c}_s < 0$. Then an all-integer pivot is,

(1) in the non-degenerate case, a pivot made on an entry $\bar{a}_{rs} = 1$ such that
$$\min_{i \in M} \left\{ \frac{\bar{e}_i}{\bar{a}_{is}} \,\middle|\, \bar{a}_{is} > 0 \right\} = \frac{\bar{e}_r}{\bar{a}_{rs}} = 1, \quad \text{and}$$
(2) in the degenerate case, a pivot made on an entry $\bar{a}_{rs} = \pm 1$ such that $\bar{e}_r = 0$.
Note that it is again required that the entering variable has a negative reduced cost.

Theorem 3 Starting from a feasible unimodular basis for SPP^LP, a sufficient condition for a sequence of pivots to yield a path of integer extreme points is that the pivots are all-integer.

Proof From Lemma 3 it follows that it suffices to show that the all-integer pivots preserve feasibility in SPP^LP and unimodularity of the bases. That each basis in
the pivoting sequence will be unimodular follows from Lemma 4. Since the choice of pivot entry in the non-degenerate case complies with the regular pivoting rule used in the simplex method, the pivot leads to a feasible solution. In the degenerate case, feasibility is maintained since the pivot is made within the same solution. □

Hence, by starting from a feasible unimodular basis and restricting the pivots to be only all-integer, only unimodular bases are reached and integrality of the solutions is thereby preserved.

All-integer pivots are not necessary for maintaining integrality when making degenerate pivots. However, if one deviates from the all-integer pivoting principle in a degenerate pivot, then integrality can be lost in a later non-degenerate pivot, as illustrated by the following example, where a non-degenerate pivot-on-one from a basis which is not unimodular leads to a non-integer extreme point.

Example 2 This example uses the same constraint matrix as in Example 1, but it is here augmented with an identity matrix such that the variables x1 to x6 are associated with the columns of the original matrix, and the variables x7 to x10 are associated with the columns of an identity matrix. Consider the basis that is constituted by the columns a1, a2, a4, and a6; this basis has det(B) = 2, and is a basis for the integer extreme point x = (0, 0, 0, 1, 0, 0, 0, 0, 0, 0). Making a non-degenerate pivot-on-one where x9 enters the basis and x4 leaves the basis will lead to the basis consisting of the columns a1, a2, a6, and a9, which is associated with the non-integer extreme point x = (1/2, 1/2, 0, 0, 0, 1/2, 0, 0, 1, 0). The example is clearly valid both for the integral simplex method with pivots-on-one and for the one with all-integer pivots; a numerical check is sketched below.

It can be noted that if the all-integer pivots are initiated at a non-unimodular basis for an integer extreme point, then the pivots will of course preserve the absolute value of the basis determinant, so that they can only reach extreme points of SPP^LP that have a basis with the same absolute value of the determinant as the initial one. The extreme points that can be reached by the all-integer pivots at all are thus governed by the initial integer extreme point and the determinant of the initial basis. All-integer pivots from an initial feasible unimodular basis can thus only reach extreme points of SPP^LP that have a unimodular basis. As shown by Example 1, there can however be integer extreme points for which no basis is unimodular, and in particular an optimal extreme point could lack a unimodular basis. Therefore it can be impossible both for the integral simplex method with pivots-on-one (which starts with a purely artificial, and therefore unimodular, basis) and for the one with all-integer pivots (starting from a unimodular basis) to reach certain, possibly optimal, extreme points. Further, even if an optimal extreme point has a unimodular basis, it appears plausible that every path to this extreme point could be blocked by extreme points that lack unimodular bases; we do however not have an example of this situation.
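Example 2 is easy to check numerically. In the sketch below (our own illustration, not part of the chapter; NumPy assumed), the stated basis and the basis after the pivot are formed explicitly; the determinants stay at ±2 while the basic solution moves from an integer to a fractional extreme point.

```python
import numpy as np

# Example 1 matrix augmented with an identity block (variables x7..x10)
A = np.array([[0, 1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 0],
              [1, 1, 1, 1, 0, 0]], dtype=float)
A = np.hstack([A, np.eye(4)])
e = np.ones(4)

def point_of_basis(cols):
    """Determinant and full basic solution for the given basis columns."""
    B = A[:, cols]
    x = np.zeros(A.shape[1])
    x[list(cols)] = np.linalg.solve(B, e)
    return np.linalg.det(B), x

det0, x0 = point_of_basis([0, 1, 3, 5])   # basis a1, a2, a4, a6 (0-indexed)
det1, x1 = point_of_basis([0, 1, 8, 5])   # x9 has replaced x4
print(round(det0), np.round(x0, 2))       # 2, the integer point
print(round(det1), np.round(x1, 2))       # 2, the fractional point
```

The printed points are (0, 0, 0, 1, 0, 0, 0, 0, 0, 0) and (1/2, 1/2, 0, 0, 0, 1/2, 0, 0, 1, 0), as stated in the example.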


Analogously to Definition 3, we make the following definition.

Definition 6 A basis for an integer extreme point to SPP^LP is called a local optimum with respect to all-integer pivots if it is not possible to make an all-integer pivot.

A common way to find an initial basis in the simplex method is to introduce artificial variables, and it is also profitable to do so in order to initiate the all-integer pivots. From the next theorem it follows that if a set partitioning problem contains a full set of artificial variables, then each integer extreme point is associated with at least one unimodular basis.

Theorem 4 If the constraint matrix of SPP contains the columns of an identity matrix of order m, then there exists a unimodular basis for each integer extreme point of SPP^LP.

Proof Let x be any integer extreme point and form the matrix $R = (a_j)_{j \in Q}$ where $Q = \{j \in I : x_j = 1\}$. The following procedure can then be used to create a unimodular basis B at the integer extreme point. Note that the matrix R has exactly one element with the value one in each row, and let $n_j$ be the number of rows covered by column $j \in Q$ in R.

1. Let q = 1.
2. Pick the column $a_p \in R$ that covers row q and sort the rows of R such that $a_p$ will cover rows $q, \ldots, q + n_p - 1$.
3. Let column $q + n_p - 1$ of B be $a_p$, and if $n_p > 1$, let columns $i = q, \ldots, q + n_p - 2$ of B be the $i$th identity matrix column.
4. If $q + n_p - 1 < |M|$, let $q = q + n_p$ and go to 2.

The resulting matrix B will be upper triangular with all diagonal elements equal to one. Hence, det(B) = 1 and the basis is unimodular. □

One may note that a set partitioning problem that is obtained from a set packing problem, through the addition of slack variables, will thus have some unimodular basis associated with each integer extreme point.

Below, the problem SPP_EX will be used to illustrate the all-integer pivots, but we begin by returning to the pivoting options in the integral simplex method with pivots-on-one. In the linear programming relaxation SPP_EX^LP, all bases for the integer extreme points are unimodular. To illustrate how to determine which of the pivots in Figure 2 are pivots-on-one, the simplex tableaus for vertices 3 and 4 of the example are given in Figure 4, and the associated movements in the graph are found in Figure 5.
Fig. 4 The simplex tableaus corresponding to vertices 3 and 4 in the example:

Vertex 3:        x1  x2  x3  x4  x5 |  ē
          x1      1   2   0   0  -1 |  1
          x3      0  -1   1   0   1 |  0
          x4      0  -1   0   1   1 |  0

Vertex 4:        x1  x2  x3  x4  x5 |  ē
          x1      1   1   0   1   0 |  1
          x3      0   0   1  -1   0 |  0
          x5      0  -1   0   1   1 |  0
Fig. 5 The possible simplex pivots for leaving vertices 3 and 4:

vertex | movement       | entering variable | leaving variable
3      | a: to vertex 8 | x2                | x1
3      | b: to vertex 4 | x5                | x4
3      | c: to vertex 5 | x5                | x3
4      | d: to vertex 3 | x4                | x5
4      | e: to vertex 6 | x2                | x1
[Fig. 6 A categorisation of the possible simplex pivots between the bases of SPP_EX^LP; the legend distinguishes non-degenerate pivots-on-one, degenerate pivots-on-one, and pivots that are not pivots-on-one]
There are three possible simplex pivots associated with leaving vertex 3.

• If x2 enters and x1 leaves the basis, the pivot will be non-degenerate, and not on a one entry; hence the new solution will be the fractional extreme point x^C.
• If x5 enters the basis and x4 leaves the basis, the pivot is a degenerate pivot-on-one, and the solution will still be x^A.
• If x5 enters the basis and x3 leaves the basis, it is also a degenerate pivot-on-one, and the solution will still be x^A.

For vertex 4 there are two possible pivots.

• If x4 enters and x5 leaves the basis, it is a degenerate pivot-on-one, and the solution will still be x^A.
• If x2 enters and x1 leaves the basis, it is a non-degenerate pivot-on-one leading to x^B.

In Figure 6, all the pivots between the bases of SPP_EX^LP are categorised in the same manner as for the pivots associated with vertices 3 and 4.

The quasi-integrality property guarantees that there exists a non-degenerate pivot-on-one between every pair of adjacent integer solutions, but it says nothing about how to find it.

[Fig. 7 A possible path of pivots-on-one from vertex 3 to vertex 1]

[Fig. 8 A path of pivots-on-one from vertex 3, not leading to vertex 1]

Assume that we do not have an objective function and, for some reason, would like to pivot from vertex 3 to vertex 1. It is then possible to do so by pivoting as shown by the arrows in Figure 7. At vertex 3, a choice is made between pivoting to vertex 4 or to vertex 5, and this choice is made without any knowledge of the consequences with respect to forthcoming pivots. Assuming that instead of choosing vertex 4 to follow on vertex 3, vertex 5 is chosen, then the path would be as shown by the arrows in Figure 8. As can be seen from the figure, it is not possible to continue with a pivot-on-one from vertex 2, and therefore vertex 2 is a local optimum for the integral simplex method with pivots-on-one. A possibility could of course be to reverse the pivots that led to this local optimum and choose another path, and if restricted to pivots-on-one,


the only way to get to vertex 1 from vertex 2 is by reversing the pivots back to vertex 3 and there choosing to pivot to vertex 4 instead. If this type of approach is used, it remains a true challenge to develop efficient strategies for reversing pivots. (Further, in order to allow reversed pivots, the condition of Definition 2 that c̄_s < 0 shall hold must be relaxed.)

The possible pivots presented in Figure 6 only include the pivots-on-one and do not allow the possibility of performing a degenerate pivot on a minus one entry. Studying the same example from an all-integer perspective, the pivots between the following pairs of bases become possible: (1, 2), (1, 3), (1, 4), (2, 3), (2, 5), and (4, 5). In particular, the extension to all-integer pivots facilitates a degenerate pivot on a minus one entry at vertex 2, by letting x3 become basic instead of x4, leading to vertex 1. This pivot corresponds to the movement illustrated in Figure 9, where vertex 1 is reached from vertex 2 and vertex 2 is no longer a local optimum.

[Fig. 9 Vertex 1 can be reached from vertex 2 by a degenerate pivot on a minus one entry]

To summarise, the extension to all-integer pivots increases the number of possible pivots compared to the integral simplex method with pivots-on-one, and could therefore decrease the risk of getting trapped at a local optimum that is not global.
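To make the extended pivoting options concrete, the following sketch (our own illustration; NumPy assumed) enumerates the pivots admitted by Definition 5 for the vertex 3 tableau of Fig. 4; since SPP_EX has no objective function, the nonbasic columns are simply marked as eligible. Restricting the degenerate branch to entries +1 recovers exactly the pivots-on-one of Definition 2.

```python
import numpy as np

def all_integer_pivots(T, rhs, cbar):
    """List all-integer pivots (r, s, kind) according to Definition 5."""
    found, (m, n) = [], T.shape
    for s in range(n):
        if cbar[s] >= 0:                  # entering column needs c_bar < 0
            continue
        pos = [i for i in range(m) if T[i, s] > 0]
        if pos and min(rhs[i] / T[i, s] for i in pos) == 1:
            found += [(r, s, "non-degenerate")        # ratio 1 at an entry +1
                      for r in pos if T[r, s] == 1 and rhs[r] == 1]
        for r in range(m):                # degenerate: entry +/-1, e_bar_r = 0
            if rhs[r] == 0 and abs(T[r, s]) == 1:     # allowing only +1 here
                found.append((r, s, "degenerate"))    # gives Definition 2
    return found

# Vertex 3 tableau from Fig. 4 (basic variables x1, x3, x4); SPP_EX has no
# objective, so the nonbasic columns x2 and x5 are marked eligible.
T = np.array([[1., 2, 0, 0, -1], [0, -1, 1, 0, 1], [0, -1, 0, 1, 1]])
rhs, cbar = np.array([1., 0, 0]), np.array([0., -1, 0, 0, -1])
basics = ["x1", "x3", "x4"]
for r, s, kind in all_integer_pivots(T, rhs, cbar):
    print(f"x{s+1} enters, {basics[r]} leaves ({kind}, entry {T[r, s]:+.0f})")
```

The four degenerate pivots reported are the movements b and c of Fig. 5 together with the two pivots on minus-one entries that only the all-integer rule admits.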

4 Cycling for All-Integer Pivots

The properties of a solution method using all-integer pivots are essentially the same as those of the integral simplex method with pivots-on-one, with one difference being that the risk of getting trapped at a local optimum is decreased simply because there are more possible pivots.


A solution method based on all-integer pivots can be designed as follows.

1. Start from a purely artificial basis.
2. If possible, perform a non-degenerate all-integer pivot where the pivot entry is chosen according to some pivoting rule, and then go to 2.
3. If possible, perform a degenerate all-integer pivot where the pivot entry is chosen according to some pivoting rule, and then go to 2.
4. No all-integer pivot is possible. Stop.

An important question in the design of a method based on all-integer pivots is how to prevent cycling. As commented on above, we do not know of an anti-cycling rule for integral simplex type methods. In a paper by Zörnig [16], it is claimed that it is possible to construct examples where the integral simplex method cycles, using the following pivoting rules:

• The most negative reduced cost rule for the entering variable and the smallest (minimum) ratio rule for the leaving variable, where ties are broken according to the least index rule. (This is referred to as Dantzig's rule.)
• The most negative reduced cost rule for the entering variable and the smallest ratio rule for the leaving variable, where ties are broken according to the largest coefficient rule with respect to the pivot element.
• The entering variable is the one with the most negative ratio of reduced cost to the length of the vector corresponding to a unit change in the non-basic variable.

It is not clear, however, to which version of the integral simplex method Zörnig [16] refers, and for this reason his results need to be investigated further before any conclusions can be drawn.

We have only briefly addressed the challenge of developing an anti-cycling rule for the all-integer pivots. The contributions we have made so far are examples of pivoting rules that do not prevent cycling, and the rules studied are the following:

• Dantzig's rule, as described above, but with the modification that the ratio comparison in the degenerate case is made with respect to the absolute values.
• Bland's rule, see, for example, [5, Section 10.5], whereby an ordering of the variables is chosen and the eligible variable with the lowest index according to this ordering is selected, both with respect to the entering and the leaving variable.

The cycles we have constructed are presented in the following example.

Example 3 Study the following instance of the set partitioning problem.
$$z^* = \min\ (9\ 9\ 9\ 9\ 5\ 1\ 3\ 2\ 1\ 1\ 1\ 1)\,x$$
$$\text{s.t.}\quad \begin{pmatrix} 1&0&0&0&1&1&0&0&1&1&1&0 \\ 0&1&0&0&0&1&1&0&1&1&0&1 \\ 0&0&1&0&0&0&1&1&1&0&1&1 \\ 0&0&0&1&1&0&0&1&0&1&1&1 \end{pmatrix} x = \begin{pmatrix}1\\1\\1\\1\end{pmatrix}, \qquad x \in \{0,1\}^{12}$$

The following tableau is obtained when the basic variables are x6 , x12 , x8 , and x10 .

       x1   x2   x3   x4   x5   x6   x7   x8   x9  x10  x11  x12 |  ē
       -7  -10   -7   -9   -3    0   -2    0    2    0    3    0 |  3
x6      1    0    1   -1    0    1    1    0    2    0    1    0 |  1
x12    -1    1    0    0   -1    0    1    0    0    0   -1    1 |  0
x8      1   -1    1    0    1    0    0    1    1    0    2    0 |  1
x10     0    0   -1    1    1    0   -1    0   -1    1    0    0 |  0

From this tableau, no non-degenerate all-integer pivot is possible, but degenerate ones are. If the pivot entry is chosen according to Dantzig’s rule, x11 shall enter the basis and x12 shall leave the basis, yielding the following tableau:

       x1   x2   x3   x4   x5   x6   x7   x8   x9  x10  x11  x12 |  ē
      -10   -7   -7   -9   -6    0    1    0    2    0    0    3 |  3
x6      0    1    1   -1   -1    1    2    0    2    0    0    1 |  1
x11     1   -1    0    0    1    0   -1    0    0    0    1   -1 |  0
x8     -1    1    1    0   -1    0    2    1    1    0    0    2 |  1
x10     0    0   -1    1    1    0   -1    0   -1    1    0    0 |  0

Again, no non-degenerate all-integer pivot is possible. If Dantzig's rule for choosing the pivot entry is applied, x12 shall enter the basis and x11 shall leave the basis; the first tableau is then obtained again and a cycle is created. It can be shown that if Bland's rule is used on the given example, a cycle between the bases I1 = {6, 12, 8, 9}, I2 = {6, 12, 8, 10}, and I3 = I1 = {6, 12, 8, 9} is obtained. To conclude this section, we believe that it is a challenge to develop a pivoting rule for the all-integer pivots which both prevents cycling and yields a method that efficiently progresses to better solutions.
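The cycle can be replayed mechanically. In the sketch below (our own check; NumPy assumed), the first tableau above is stored with its top row first, the two degenerate pivots are carried out as plain Gauss-Jordan eliminations, and the tableau is verified to return to its initial state.

```python
import numpy as np

# First tableau of Example 3 with basis (x6, x12, x8, x10); the top row and
# the right-hand side column are copied from the tableau above.
top = np.array([-7, -10, -7, -9, -3, 0, -2, 0, 2, 0, 3, 0, 3], float)
rows = np.array([
    [ 1,  0,  1, -1,  0, 1,  1, 0,  2, 0,  1, 0, 1],   # x6
    [-1,  1,  0,  0, -1, 0,  1, 0,  0, 0, -1, 1, 0],   # x12
    [ 1, -1,  1,  0,  1, 0,  0, 1,  1, 0,  2, 0, 1],   # x8
    [ 0,  0, -1,  1,  1, 0, -1, 0, -1, 1,  0, 0, 0],   # x10
], dtype=float)
T0 = np.vstack([top, rows])

def pivot(T, r, s):
    """Gauss-Jordan pivot on tableau row r (row 0 is the top row), column s."""
    T = T.copy()
    T[r] /= T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]
    return T

T1 = pivot(T0, 2, 10)   # x11 enters (0-based column 10), x12 leaves (row 2)
T2 = pivot(T1, 2, 11)   # x12 re-enters (column 11), x11 leaves
print(np.allclose(T2, T0))   # True: two degenerate pivots close a cycle
```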

5 Some Open Questions

The purpose of this chapter was to convey an understanding of the possibilities and challenges associated with utilising the quasi-integrality property and all-integer pivots for solving the set partitioning problem.

It is known [9] that the integral simplex method that is based on pivots-on-one can get trapped at a local optimum (according to Definition 3) that is not a global optimum. A subject for further research is to investigate whether local optima with respect to all-integer pivots (according to Definition 6) can exist or not. The answer may very well depend on whether the constraint matrix of SPP contains an identity matrix of order m or not. In case the constraint matrix does not contain an identity matrix, the existence of local optima seems likely, since the all-integer pivoting strategy cannot find an optimal extreme point if it does not possess
a unimodular basis, provided that there is some feasible unimodular basis where the method can be initiated. It remains however to construct a numerical example where these conditions actually hold. In case the constraint matrix contains an identity matrix, Theorem 4 establishes that each integer extreme point of SPP^LP is associated with at least one unimodular basis. It is however unknown whether these unimodular bases are connected by all-integer pivots alone. It is therefore an open question whether the existence of a unimodular basis at every integer extreme point is sufficient to prohibit termination at an optimum that is local but not global. We conjecture that this is in general not true, that is, that all-integer pivots can also in this case get trapped at a local optimum that is not global. But we have no example of this, and this question should be further investigated.

If local optimality can actually occur, then it is of interest to augment the all-integer pivoting principle with an auxiliary solution principle, such as a branch-and-bound scheme, for finding a global optimum (cf. the work of Thompson [9] on the integral simplex method with pivots-on-one).

Zörnig [16] gives a systematic principle for constructing examples of cycling in the ordinary simplex method, for arbitrary pivot selection criteria which do not prevent cycling. He further claims that his principle can be directly extended to the construction of examples of cycling in the integral simplex method. A subject for further study is to elaborate on his claim by extending and applying his construction principle for finding examples of cycling for the integral simplex method with pivots-on-one, for commonly used pivot selection criteria. Although we already know from Example 3 that all-integer pivoting can cycle when using Dantzig's and Bland's pivoting rules, it is also of interest to extend the construction principle of Zörnig [16] to the case of all-integer pivots.

Intriguing, and most likely challenging, subjects for further research are to develop anti-cycling rules for the integral simplex method with pivots-on-one and with all-integer pivots, respectively.

We have in this chapter considered pivots-on-one and all-integer pivots as two options for preserving integrality and obtaining non-increasing objective values. As mentioned after Theorem 2, set partitioning problems could however also be approached by other integrality-preserving pivoting principles that yield non-increasing objective values, provided that cycling can be prevented. An interesting topic for further research is therefore to explore other such pivoting principles and corresponding anti-cycling rules.

References

1. Balas, E., Padberg, M.W.: On the set-covering problem. Oper. Res. 20, 1152–1161 (1972)
2. Hellstrand, J., Larsson, T., Migdalas, A.: A characterization of the uncapacitated network design polytope. Oper. Res. Lett. 12, 159–163 (1992)
3. Hoffman, K., Padberg, M.: Set covering, packing and partitioning problems. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, 2nd edn. Springer, Berlin (2008)
4. Lübbecke, M.E., Desrosiers, J.: Selected topics in column generation. Oper. Res. 53, 1007–1023 (2005)
5. Murty, K.G.: Linear Programming. Wiley, New York (1983)
6. Rönnberg, E., Larsson, T.: Column generation in the integral simplex method. Eur. J. Oper. Res. 192, 333–342 (2009)
7. Rönnberg, E., Larsson, T.: All-integer column generation for set partitioning: basic principles and extensions. Eur. J. Oper. Res. 233, 529–538 (2014)
8. Sastry, T.: One and two facility network design revisited. Ann. Oper. Res. 108, 19–31 (2001)
9. Thompson, G.L.: An integral simplex algorithm for solving combinatorial optimization problems. Comput. Optim. Appl. 22, 351–367 (2002)
10. Trubin, V.A.: On a method of solution of integer linear programming problems of a special kind. Translated by V. Hall. Soviet Math. Doklady 10, 1544–1546 (1969)
11. Wilhelm, W.E.: A technical review of column generation in integer programming. Optim. Eng. 2, 159–200 (2001)
12. Wolsey, L.A.: Integer Programming. Wiley, New York (1998)
13. Yemelichev, V.A., Kovalev, M.M., Kravtsov, M.K.: Polytopes, Graphs and Optimisation (Translated by G.H. Lawden). Cambridge University Press, Cambridge (1984)
14. Zaghrouti, A., Soumis, F., El Hallaoui, I.: Integral simplex using decomposition for the set partitioning problem. Oper. Res. 62, 435–449 (2014)
15. Zörnig, P.: Degeneracy Graphs and Simplex Cycling. Springer, Berlin (1991)
16. Zörnig, P.: Systematic construction of examples for cycling in the simplex method. Comput. Oper. Res. 33, 2247–2262 (2006)

Open Problems on Benders Decomposition Algorithm

Georgios K. D. Saharidis and Antonios Fragkogios

Abstract The Benders decomposition method is based on the idea of exploiting the structure of an optimization problem so that its solution can be obtained as the solution of several smaller subproblems. We review here the fundamental method proposed by Jacobus F. Benders and present several open problems related to its application.

Keywords Benders decomposition · Open problems · Mixed integer programming

1 Introduction

"A problem well stated is a problem half-solved." So said Charles Franklin Kettering (August 29, 1876–November 24 or 25, 1958), an American inventor, engineer, businessman, and the holder of 186 patents. In this chapter of the book, we try to state well a series of conjectures and open problems concerning the Benders decomposition algorithm. As stated by Hiriart-Urruty [1], the list of open problems can be divided into three groups, although a problem may belong to more than one class. The three main groups are:

• Problems of pure mathematical interest. Such problems focus only on the theoretical aspect and develop mathematical science as such; the eventual answer to the problem will not revolutionize the field. The first open problem of this chapter belongs to this category.
• Problems motivated by scientific computing and applications. The answer to these problems could lead to the creation of new algorithms and solution methods

G. K. D. Saharidis () · A. Fragkogios Department of Mechanical Engineering, Polytechnic School, University of Thessaly, Volos, Greece © Springer Nature Switzerland AG 2018 P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_16


that are better (in certain or in all aspects) than the ones already developed. Open problems 4, 5, and 6 belong to this category.
• Problems whose solutions are known but for which we would like to have better (i.e., shorter, more natural, or more elegant) proofs. There are problems that have been proved and solved, but whose proofs are convoluted or overly long; synthesizing and cleaning existing results, and providing new proofs, takes a substantial part of mathematical research. The second and third open problems of this chapter belong to this category.

1.1 About Jacques F. Benders

Jacobus Franciscus (Jacques) Benders (June 1, 1924–January 9, 2017, Swalmen) was a Dutch mathematician and Emeritus Professor of Operations Research at the Eindhoven University of Technology [2]. He was the first Professor in the Netherlands in the field of Operations Research and was known for his contributions to mathematical programming. Benders studied mathematics at Utrecht University, where he later also received his PhD in 1960 with the thesis entitled "Partitioning in Mathematical Programming" under the supervision of Hans Freudenthal. In the late 1940s he started his career as a statistician for the Rubber Foundation. In 1955 he moved to the Shell laboratory in Amsterdam, where he researched mathematical programming problems concerning the logistics of oil refining. He developed the technique known as the "Benders decomposition" method, one of the cornerstones of modern optimization, and used the results in his doctoral thesis. In 1963 Benders was appointed Professor of Operations Research at the Eindhoven University of Technology, being the first Professor in the Netherlands in that field. He retired from the Eindhoven University of Technology on May 31, 1989. In 2009, Benders was awarded the "EURO Gold Medal" by the Association of European Operational Research Societies (EURO) for his contribution to Operations Research. His best-known publication is [3].

Since 1962, when he published the aforementioned paper, more than 1600 papers citing his method have been published, according to Scopus [4]. The real importance of the introduced algorithm is more evident today than ever before. It has had a profound influence on the development of a new generation of algorithms that have made possible the solution of larger and more complex optimization problems [5]. His method, often referred to as the "Benders Decomposition Method", has been modified, extended, and accelerated by many authors and has been applied to many fields of Operations Research, as described by Rahmaniani et al. [6]. Such fields are mixed-integer linear programming, stochastic programming, multi-objective programming, mixed-integer quadratic programming, and non-linear programming. The case studies in which the pure, modified, or extended Benders method has been used belong to a large variety of problem categories of Operations Research. Some of these categories are crew scheduling, plant scheduling, supply chain network design, network flow problems, and power systems.


2 Benders Decomposition Method

The Benders decomposition [3] method is based on the idea of exploiting the decomposable structure of a given problem so that its solution can be converted into the solution of several smaller subproblems. Table 1 shows this block-decomposable structure, where A is the matrix of the constraint coefficients [7]. The main feature on which the decomposition is based is that certain variables of an original mathematical model are considered to be complicating. By fixing them, the original problem is decomposed into the relaxed master problem (RMP), which contains only the complicating variables, and the primal subproblem (PSP), which contains the rest of the variables. Thus, RMP is a relaxation of the original problem and is expected to provide the optimal solution after the addition of a number of constraints (i.e., inequalities). Without loss of generality, one could consider the following mixed integer linear problem:

Original problem (OP):

minimize c^T x + d^T y    (1)
s.t. B_1 y ≤ b_1,    (2)
A x + B_2 y ≤ b_2,    (3)
x ∈ R^n_+,    (4)
y ∈ Z^q_+    (5)
where c ∈ R^n, d ∈ R^q, b_1 ∈ R^m, b_2 ∈ R^m, and A, B_1, and B_2 are m×n, m×q, and m×q matrices, respectively.

Table 1 Block-decomposable structure of a problem suitable for Benders decomposition approach


Assuming vector y contains a number of decision variables that are considered to be complicating, the decision variables are partitioned into two sets x and y, and the OP decomposes into the following problems:

Primal slave problem (PSP):

minimize c^T x    (6)
s.t. A x ≤ b_2 − B_2 ȳ,    (7)
x ∈ R^n_+    (8)

where ȳ contains the fixed values of the complicating variables, obtained by solving RMP.

Relaxed master problem (RMP):

minimize z    (9)
s.t. B_1 y ≤ b_1,    (10)
v_i^T (b_2 − B_2 y) ≤ 0,    (11)
u_j^T (b_2 − B_2 y) + d^T y ≤ z,    (12)
y ∈ Z^q_+,    (13)
z ∈ R^1    (14)

where v_i is the vector that corresponds to the extreme ray i of the dual problem of PSP, and u_j is the vector that corresponds to the extreme point j. An extreme ray (point) of the dual feasible space is a ray (point) that cannot be expressed as a linear combination of any other rays (points) of the dual feasible space. The dual problem of PSP has the following form:

Dual slave problem (DSP):

minimize u^T (b_2 − B_2 ȳ)    (15)
s.t. A^T u ≥ c,    (16)
u ∈ R^m    (17)
After the decomposition is made, the Benders algorithm is applied in order to find the solution of the OP. The algorithm is iterative and can be summarized as follows.

[Fig. 1 Flowchart of the classical Benders decomposition algorithm: solve RMP and update the lower bound (LB); solve DSP; if DSP is unbounded, create a Benders feasibility cut, otherwise create a Benders optimality cut and update the upper bound (UB); add the Benders cut to RMP and repeat until UB − LB ≤ ε]

In each iteration, the RMP is solved and its optimal solution, i.e., the values of the complicating variables, is transmitted to the PSP. Next, the newly formed PSP is solved. If the PSP is infeasible (DSP is unbounded), a Benders feasibility cut is created, v_i^T (b_2 − B_2 y) ≤ 0, where v_i is the vector that corresponds to the extreme ray i of DSP. The feasibility cut is then added to the RMP. If the PSP is feasible (DSP is bounded), a Benders optimality cut is created, u_j^T (b_2 − B_2 y) + d^T y ≤ z, where u_j is the vector that corresponds to extreme point j of DSP. The optimality cut is then added to the RMP. Afterwards, the enriched RMP is solved again and the procedure is repeated. In Fig. 1, the classical Benders decomposition algorithm is described using a flowchart. In each iteration, in the case of minimization (maximization), the value of the objective function of RMP gives a valid lower (upper) bound for the original


problem as a relaxation of it. On the other hand, if the solution ȳ yields a feasible subproblem, then the sum of d^T ȳ and the objective value of PSP provides a valid upper (lower) bound for the original problem as a restriction of it. Thus, the convergence criterion for the termination of the Benders algorithm is defined based on the difference between the minimum upper (maximum lower) bound and the current lower (upper) bound. When this difference is smaller than a small tolerance or equal to zero, the algorithm is terminated; otherwise it continues. It should be noted that, in every iteration, a new Benders cut is introduced to RMP, i.e., more information about the solution space of the OP is gradually integrated in RMP. Moreover, one could observe that, during the iterations of the algorithm, only the objective function (15) of DSP is updated, as the constraints are not modified. This means that the solution space of DSP is always the same. Extending this thought, if all possible Benders cuts were produced using all the extreme points and extreme rays of the DSP solution space, then the resulting augmented RMP would be an equivalent version of OP and its solution would yield the same optimal solution as the OP. The total number of Benders cuts, which equals the number of extreme points and extreme rays of DSP, is, generally, enormous. However, it is known that at the optimum the number of active RMP constraints will never exceed the number of RMP decision variables [8]. The main idea of the Benders algorithm is based on the observation that the algorithm will converge, satisfying the optimality condition, before the addition of all Benders cuts. Note that the finite convergence of the algorithm results from the fact that DSP has a finite number of extreme points and extreme rays, since its solution space remains stable.
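As a concrete illustration of the loop in Fig. 1, the following sketch (our own, not from the chapter) runs Benders decomposition on a tiny instance of the OP. The instance data, the brute-force enumeration used to solve the RMP, and the use of SciPy's HiGHS marginals as the dual vector u are all assumptions made for the sake of a self-contained example; the instance has complete recourse (the PSP is feasible for every y), so only optimality cuts (12) arise and the extraction of extreme rays for feasibility cuts (11) is sidestepped.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# a tiny OP: min c'x + d'y  s.t.  Ax + B2 y <= b2,  x >= 0,  y binary
c  = np.array([2.0, 3.0]); d = np.array([1.0, 1.0])
A  = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
B2 = np.array([[0.0, 0.0], [-2.0, 0.0], [0.0, -2.0]])
b2 = np.array([-5.0, 3.0, 3.0])
Y  = [np.array(y, float) for y in itertools.product([0, 1], repeat=2)]

cuts, UB = [], np.inf          # each u gives the cut z >= d'y + u'(b2 - B2 y)
while True:
    def rmp_value(y):          # RMP objective, solved by brute force over Y
        return max((d @ y + u @ (b2 - B2 @ y) for u in cuts), default=-1e9)
    y_bar = min(Y, key=rmp_value)
    LB = rmp_value(y_bar)      # lower bound from the relaxed master problem
    res = linprog(c, A_ub=A, b_ub=b2 - B2 @ y_bar, method="highs")  # PSP
    UB = min(UB, d @ y_bar + res.fun)   # restriction gives an upper bound
    if UB - LB <= 1e-6:
        break
    cuts.append(res.ineqlin.marginals)  # duals of the <= rows serve as u
print("z* =", UB, " y* =", y_bar)
```

The marginals returned by HiGHS for the inequality rows of a minimization are nonpositive and satisfy weak duality for every y, so each appended vector produces a valid optimality cut that is tight at ȳ; the loop terminates with z* = 11 at y* = (1, 0) for this instance.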

3 Open Problems

3.1 Problem 1: What Is the Theoretical Proof of the Effectiveness of Acceleration Methods for the Benders Decomposition Algorithm?

Since the publication of the initial paper of Benders, many researchers have tried to accelerate the algorithm's convergence. Some of them have targeted one of its weaknesses: the iterative solution of an integer master problem that, in the beginning of the iterative procedure, carries very little information about the solution space of the original problem. To tackle this weakness, some researchers have introduced acceleration techniques that include the generation of modified cuts, or of more cuts, in every iteration. These techniques aim to incorporate more information in the master problem earlier, so that it need not be solved as many times, thus reducing the total number of iterations needed to reach convergence. However, for several acceleration techniques, an explicit theoretical proof of faster convergence is lacking.


For example, Saharidis et al. [8] introduce the covering cut bundle (CCB) technique, which is particularly efficient in cases where low-density cuts are generated. The authors claim that, in general, a bundle of low-density cuts is more desirable for accelerating the algorithm than the single high-density cut corresponding to the sum of these low-density cuts. Moreover, Azad et al. [9] display computational results that support this claim. Also, the computational results of Saharidis et al. [8] show that the decrease in CPU time and number of iterations is higher when the density of the cuts is low. Furthermore, the authors claim that the application of CCB in cases where the Benders cuts have a medium or high density may not be as effective as in the low-density case. However, this claim has not been tested on a case study and remains to be proved explicitly.

3.2 Problem 2: To Which Problems Should the Benders Decomposition Algorithm Be Applied?

Since the first introduction of the Benders decomposition algorithm by Benders [3], a large number of papers have been published that apply this procedure to a large variety of problems of mathematical programming. Some of these applications are assignment problems, network design problems, production planning problems, and scheduling problems. Most of these problems are formulated as mixed integer linear programs and then decomposed into a master problem and a subproblem. When the mathematical formulation has a block-decomposable structure, it has been shown that Benders decomposition is a suitable method for solving it [10]. Apart from block-decomposable models in mixed-integer programming, Benders decomposition appears to be effective for other types of models as well, such as stochastic programming [11, 12] and multi-objective programming [13, 14]. However, it has not been explicitly explained and mathematically described which other kinds of mathematical formulations (beyond the block-decomposable ones) make Benders decomposition the suitable, and expectedly the most efficient, method for solving the model.

3.3 Problem 3: Is There an Optimal Way to Decompose a Given Problem?

How does the problem's decomposition affect the performance of the Benders algorithm? Is there a "best" way to decompose a given problem? For those problems where the algorithm does not work well, is there a mechanism for improving its


convergence properties? The answers to these questions would enhance the prospects for applying the Benders decomposition technique and have prompted the study reported in Saharidis and Ierapetriou [15]. These questions have been partially answered by Magnanti and Wong [16] and Geoffrion and Graves [17], who introduce model selection criteria for the better performance of the Benders algorithm. In fact, they theoretically prove that for any mixed integer programming formulation, the convex hull of its feasible region will be a model formulation that is "optimal" in terms of generating Benders cuts, since it has a relaxed primal problem whose feasible region is the smallest. Such a formulation would result in fewer Benders iterations being needed for the convergence of the algorithm. Also, Saharidis and Ierapetriou [15] compare two different partitioning alternatives for the scheduling problem of a multipurpose multiproduct batch plant. They show that when not only the binary variables but also some continuous variables are taken as complicating variables (and thus included in RMP), the decomposition results in a slow convergence rate. The main reason that slows the Benders algorithm down is the form of the produced Benders cuts: introducing more decision variables in the RMP results in more infeasible slave problems, giving rise to the production of a large number of feasibility cuts and, thus, a huge increase in the CPU time and iterations needed for convergence. Moreover, Crainic et al. [18] propose different strategies for the application of Benders decomposition that add to the master explicit information from the scenario subproblems, by retaining or creating scenarios, or both. They apply their proposed decomposition strategies to two-stage stochastic integer models and obtain significant improvements in terms of the number of generated cuts and computational time. However, no study exists in the literature that investigates all the decomposition alternatives that may exist for a problem and explicitly proves the faster convergence of a single one. Even if a "general rule" might not be easy to obtain, there should be more studies in the literature where different decomposition strategies are applied on a case study and compared in order to reach useful conclusions.

3.4 Problem 4: Fluctuation of the Upper Bound (in a Minimization Problem)

When applying the Benders decomposition algorithm to a minimization problem, the lower bound, computed by the objective function of the master problem, is monotonically increasing through the iterations. This does not happen with the upper bound, which is not monotonically decreasing through the iterations. Analogous characteristics hold when the Benders algorithm is applied to a maximization problem.
[Fig. 2 The fluctuation of the upper bound (in a minimization problem) in the classical Benders algorithm: the values of the upper and lower bounds are plotted against the iterations]

The term "fluctuation of the valid upper bound" is defined as the phenomenon of finding a worse (i.e., greater) valid upper bound in a posterior iteration, while a better (i.e., lower) valid upper bound has already been reached in an anterior iteration. In order to better understand this phenomenon, a visualization is provided in Fig. 2. It depicts the progress of the values of the valid upper and lower bounds along the iterations of the classical Benders algorithm when it is applied to the minimization problem of crude oil scheduling [19]. It should be noted that in the diagram of Fig. 2, the values of the valid upper bound that are not shown correspond to iterations where PSP is infeasible. This fluctuation of the upper bound is a parameter that might decelerate the convergence of the algorithm. Generally, one could expect that if the upper bound were monotonically decreasing, the gap between it and the lower bound would diminish faster and the convergence criterion would be met in fewer iterations, and thus perhaps less CPU time. While the reason for the fluctuation of the upper bound might be the cutting-plane character of the method and the inherent instability of such methods, the authors believe that future researchers should study its cause more deeply. Moreover, the fluctuation of the upper bound and its relation to the CPU time needed to reach convergence has not yet been investigated in the literature. Which technique might result in a smoother fluctuation or even a monotonic decrease? How would a smoother fluctuation or monotonic decrease of the upper bound affect the convergence rate of the Benders algorithm? Are the Benders cuts generated during the fluctuation of the upper bound needed for faster convergence of the algorithm? Is there an explicit mathematical proof that the upper bound cannot be monotonically decreasing?


3.5 Problem 5: Does Producing More Optimality than Feasibility Cuts Lead to Faster Convergence of the Benders Algorithm?

Generally, Benders optimality cuts improve the lower bound (in a minimization problem), while feasibility cuts exclude infeasible solutions of the master problem. Although a large number of papers have been published based on Benders' method, none explicitly addresses the question of how optimality and feasibility cuts interact with each other during the convergence process. The number of optimality cuts generated relative to the number of feasibility cuts might affect the convergence rate of the Benders algorithm. Saharidis and Ierapetriou [15] claim that producing more optimality than feasibility cuts leads to faster convergence of Benders. However, this issue remains somewhat obscure, and an explicit theoretical proof is needed to support it. The authors propose the maximum feasible sub-system (MFS) cut generation strategy. In each iteration of the Benders algorithm, whenever the PSP is infeasible, the maximum feasible sub-system of the PSP is determined by relaxing the remaining infeasible constraints with minimum changes, and an additional cut is generated that improves the RMP's objective function. Thus, each time the Benders algorithm produces a feasibility cut, an extra MFS (optimality) cut is produced. The numerical results show a significant reduction of the solution time and of the total number of iterations, confirming the efficiency of the MFS cuts.

Trying to address the generation of optimality and feasibility cuts, Fischetti et al. [20] proposed a new cut selection criterion for Benders' cuts. The authors' study was based on an idea from Fukuda et al. [21] that finding a most-violated optimality cut is equivalent to finding an optimal vertex of a polyhedron with unbounded rays, which is a strongly NP-hard problem. In this context, if one could find an optimal vertex of the Benders dual subproblem with unbounded rays, then probably an optimality cut could be produced which would both improve the lower bound and exclude infeasible solutions of RMP. Fischetti et al. [20] reformulated the dual subproblem as a feasibility problem, where the optimality and feasibility cuts are derived by searching for minimal infeasible sub-systems (MIS):

$$\max \Big\{ \pi^T (r - T x^*) - \pi_0 \eta^* \;:\; \pi^T Q \le \pi_0 d^T,\ \sum_{i \in I(T)} \pi_i + w_0 \pi_0 = 1,\ (\pi, \pi_0) \ge 0 \Big\} \qquad (18)$$

The authors compare their method with a state-of-the-art branch-and-cut solver and two other similar methods that involve normalization. On the whole, the computational results show that their new cut selection criterion outperforms its competitors and obtains its best speedup when both optimality and feasibility cuts are separated, due to the fact that these cuts are treated in a sound unified framework.


However, more research is needed to investigate whether feasibility cuts are necessary for the faster convergence of the Benders algorithm, or whether it would be better to somehow produce only optimality cuts that both exclude infeasibilities and improve the lower bound.

3.6 Problem 6: In Multi-Cut Generation Strategies, What Is the Optimal Number of Cuts to Be Generated in Each Iteration So that the Master Problem Is Not Overloaded?

In the literature, several multi-cut generation strategies have been proposed for the faster convergence of the Benders algorithm. These strategies aim at generating more than one cut in each iteration, so that the solution space of the master problem is more restricted and thus fewer iterations are needed to reach convergence. However, the addition of more cuts overloads the master problem with constraints and makes it more difficult and time-consuming to solve. There is still no explicit theoretical investigation of the critical number of cuts to be added in each iteration for which the fastest convergence is obtained. If the cuts are more than this number, the iterations may be reduced even further, but the total CPU time needed to reach the optimal solution increases because of the large master problem that must be solved in each iteration. Saharidis et al. [8] introduce such a multi-cut strategy, called covering cut bundle (CCB), and show in their numerical results that the choice of the number of cuts produced in each iteration influences the behavior of the algorithm, and that some parameter tuning is necessary in order to obtain the maximum reduction in the overall computing time. Moreover, Azad et al. [9], while applying a combination of the CCB and MDC methods, show that the number of cuts added is strongly related to the number of variables covered by each cut. Their computational results show that increasing the number of variables to be covered from the yet uncovered ones, while simultaneously reducing the maximum number of cuts generated in each iteration, might result in faster convergence. However, this relation between the number of cuts and the number of variables to be covered is yet to be investigated in depth, so that general conclusions can be reached. Furthermore, Saharidis and Ierapetriou [22] state that a good strategy for converging to optimality faster than the classic algorithm is to maintain a balance between the number of iterations and the number of cuts produced in each iteration. This balance should be based on the idea that increasing the number of cuts decreases the number of iterations, but RMP becomes more complicated to solve to optimality and extra time is needed to generate the additional cuts. However, this balance is not explicitly defined in the literature.


4 Conclusion

In this chapter, a series of hypotheses and open problems concerning the Benders decomposition algorithm has been introduced. The purpose is to stimulate future researchers to address these open problems and to try to answer them explicitly. It should be noted that the target of this study is not to present specific aspects of Benders decomposition in a detailed way, since they are analyzed as future work in almost every relevant paper and are very well presented by Rahmaniani et al. [6]. Instead, this chapter addresses the method from a generic point of view and states problems in need of explicit answers that are not problem-specific.

References

1. Hiriart-Urruty, J.-B.: Potpourri of conjectures and open questions in nonlinear analysis and optimization. SIAM Rev. 49(2), 255–273 (2007)
2. Wikipedia. https://en.wikipedia.org/wiki/Jacques_F._Benders (2018)
3. Benders, J.: Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 4, 238–252 (1962)
4. Scopus. http://www.scopus.com/ (2018)
5. Maros, I.: Jacques F. Benders is 80. Comput. Manag. Sci. 2, 1 (2005)
6. Rahmaniani, R., Crainic, T., Gendreau, M., Rei, W.: The Benders decomposition algorithm: a literature review. Eur. J. Oper. Res. 253(3), 801–817 (2017). https://doi.org/10.1016/j.ejor.2016.12.005
7. Fragkogios, A., Saharidis, G.: Latest advances on Benders decomposition. In: Encyclopedia of Information Science and Technology, 4th edn., pp. 5411–5421. IGI Global, Hershey, PA (2018). https://doi.org/10.4018/978-1-5225-2255-3.ch470
8. Saharidis, G.K., Minoux, M., Ierapetriou, M.G.: Accelerating Benders method using covering cut bundle generation. Int. Trans. Oper. Res. 17(2), 221–237 (2010)
9. Azad, N., Saharidis, G.K., Davoudpour, H., Malekly, H., Yektamaram, S.A.: Strategies for protecting supply chain networks against facility and transportation disruptions: an improved Benders decomposition approach. Ann. Oper. Res. 210, 125–163 (2013)
10. Conejo, A.J., Castillo, E., Minguez, R., Garcia-Bertrand, R.: Decomposition Techniques in Mathematical Programming: Engineering and Science Applications. Springer, Berlin (2006)
11. Gabriel, S.A., Fuller, J.D.: A Benders decomposition method for solving stochastic complementarity problems with an application in energy. Comput. Econ. 35(4), 301–329 (2010)
12. Watkins Jr., D.W., McKinney, D.C., Lasdon, L.S., Nielsen, S.S., Martin, Q.W.: A scenario-based stochastic programming model for water supplies from the highland lakes. Int. Trans. Oper. Res. 7, 211–230 (2000)
13. Kagan, N., Adams, R.N.: A Benders' decomposition approach to the multi-objective distribution planning problem. Int. J. Electr. Power Energy Syst. 15(5), 259–271 (1993)
14. Khodr, H.M., Vale, Z.A., Ramos, C.: A Benders decomposition and fuzzy multicriteria approach for distribution networks remuneration considering DG. IEEE Trans. Power Syst. 24(2), 1091–1101 (2009)
15. Saharidis, G., Ierapetriou, M.G.: Improving Benders decomposition using maximum feasible subsystem (MFS) cut generation strategy. Comput. Chem. Eng. 34, 1237–1245 (2010)
16. Magnanti, T.L., Wong, R.T.: Accelerating Benders decomposition: algorithmic enhancement and model selection criteria. Oper. Res. 29(3), 464–484 (1981)
17. Geoffrion, A.M., Graves, G.W.: Multicommodity distribution system design by Benders decomposition. Manag. Sci. 20(5), 822–844 (1974)
18. Crainic, T., Hewitt, M., Maggioni, F., Rei, W.: Partial Benders decomposition strategies for two-stage stochastic integer programs. CIRRELT-2016-37. CIRRELT, Montreal, QC (2016)
19. Saharidis, G.K., Boile, M., Theofanis, S.: Initialization of the Benders master problem using valid inequalities applied to fixed-charge network problems. Expert Syst. Appl. 38, 6627–6636 (2011). https://doi.org/10.1016/j.eswa.2010.11.075
20. Fischetti, M., Salvagnin, D., Zanette, A.: A note on the selection of Benders' cuts. Math. Program. 124, 175–182 (2010). https://doi.org/10.1007/s10107-010-0365-7
21. Fukuda, K., Liebling, T., Margot, F.: Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron. Comput. Geom. 8, 1–12 (1997). https://doi.org/10.1016/0925-7721(95)00049-6
22. Saharidis, G.K., Ierapetriou, M.G.: Speed-up Benders decomposition using maximum density cut (MDC) generation. Ann. Oper. Res. 210, 101–123 (2013)

An Example of Nondecomposition in Data Fitting by Piecewise Monotonic Divided Differences of Order Higher Than Two

I. C. Demetriou

Abstract Let n measurements of values of a real function of one variable be given, but the measurements include random errors. For given integers m and σ, we consider the problem of making the least sum of squares change to the data such that the sequence of the divided differences of order m of the fit changes sign at most σ times. The main difficulty in these calculations is that there are about $O(n^{\sigma})$ combinations of positions of sign changes, so that it is impracticable to test each one separately. Since this minimization calculation has local minima, a general optimization algorithm can stop at a local minimum that need not be a global one. It is an open question whether there is an efficient algorithm that can compute a global solution to this important problem for general m and σ. It has been proved that the calculations when m = 1, which gives a piecewise monotonic fit to the data, and m = 2, which gives a piecewise convex/concave fit to the data, reduce to separating the data into σ + 1 disjoint sets of adjacent data and solving a structured quadratic programming problem for each set. Separation allows the development of some dynamic programming procedures that solve the particular problems in $O(n^2 + \sigma n \log_2 n)$ and about $O(\sigma n^3)$ computer operations, respectively. We present an example which shows that the minimization calculation when m ≥ 3 and σ ≥ 1 may not be decomposed into separate calculations on subranges of adjacent data such that the associated divided differences of order m are either non-negative or non-positive. Therefore, the example rules out the possibility of solving the general problem by a similar dynamic programming calculation.

Keywords Approximation · Combinatorics · Data fitting · Decomposition · Divided differences · Dynamic programming · Least squares · Piecewise convexity/concavity · Piecewise monotonicity · Smoothing

I. C. Demetriou
Division of Mathematics and Informatics, Department of Economics, National and Kapodistrian University of Athens, Athens, Greece
e-mail: [email protected]

© Springer Nature Switzerland AG 2018
P. M. Pardalos, A. Migdalas (eds.), Open Problems in Optimization and Data Analysis, Springer Optimization and Its Applications 141, https://doi.org/10.1007/978-3-319-99142-9_17


1 Introduction

A counterexample is presented to a conjecture that, if it were true, would allow the following combinatorial data smoothing problem to be solved very efficiently. Let there be given n data points (xi, φi), i = 1, 2, ..., n, where the abscissae xi, i = 1, 2, ..., n are distinct and in ascending order, and the ordinate φi is the measurement of a real function f(x) at xi, but the measurements include random errors. An excellent way for determining whether the measurements are smooth is to form a table of divided differences and to seek sign irregularities in higher order differences. The ith divided difference of order m is denoted by φ[xi, xi+1, ..., xi+m] and is defined to be the coefficient of $x^m$ in the polynomial of degree at most m that interpolates the values φj, j = i, i+1, ..., i+m. Thus it has the value (see, for example, Hildebrand [9])

$$
\phi[x_i, x_{i+1}, \ldots, x_{i+m}] = \frac{\phi_i}{(x_i - x_{i+1})(x_i - x_{i+2}) \cdots (x_i - x_{i+m})} + \frac{\phi_{i+1}}{(x_{i+1} - x_i)(x_{i+1} - x_{i+2}) \cdots (x_{i+1} - x_{i+m})} + \cdots + \frac{\phi_{i+m}}{(x_{i+m} - x_i)(x_{i+m} - x_{i+1}) \cdots (x_{i+m} - x_{i+m-1})}.
$$
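The whole table can be generated column by column from the standard recurrence φ[xi, ..., xi+k] = (φ[xi+1, ..., xi+k] − φ[xi, ..., xi+k−1])/(xi+k − xi), which is equivalent to the explicit formula above. The following Python sketch is ours, not part of the chapter, and the perturbed-cubic data are purely illustrative; it tabulates the order-m differences and counts their sign changes, in the spirit of the error-propagation remarks that follow.

```python
def divided_differences(x, phi, m):
    """Return the divided differences phi[x_i, ..., x_{i+m}], i = 0, ..., n-m-1,
    generated column by column with the standard recurrence."""
    d = list(phi)
    for k in range(1, m + 1):
        # Before this step d[i] holds phi[x_i, ..., x_{i+k-1}].
        d = [(d[i + 1] - d[i]) / (x[i + k] - x[i]) for i in range(len(d) - 1)]
    return d

def sign_changes(seq, tol=1e-12):
    """Count the sign changes in a sequence, ignoring (numerically) zero entries."""
    signs = [(v > tol) - (v < -tol) for v in seq]
    signs = [s for s in signs if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Exact values of f(x) = x^3 have constant third divided differences (equal to 1),
# hence no sign changes; a single error of -12 at x = 3 produces m = 3 sign changes.
x = list(range(8))
phi = [float(t) ** 3 for t in x]
print(sign_changes(divided_differences(x, phi, 3)))   # 0
phi[3] -= 12.0
print(divided_differences(x, phi, 3))                 # [-1.0, 7.0, -5.0, 3.0, 1.0]
print(sign_changes(divided_differences(x, phi, 3)))   # 3
```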

An isolated error tends to cause m sign changes in the sequence φ[xi, xi+1, ..., xi+m], i = 1, 2, ..., n − m and, in general, the error propagation in a difference table is actually sufficient to change completely the values in this table, because the higher divided differences follow two opposing trends: the differences of the true function values are rapidly decreasing while the differences of the errors are rapidly increasing (Lanczos [10]). It follows that many sign changes in the sequence are possible when the measurements contain random errors. If, however, the data are exact values of f(x) that has a continuous mth derivative, then $\phi[x_i, x_{i+1}, \ldots, x_{i+m}] = \frac{1}{m!} f^{(m)}(\xi)$, for some ξ ∈ [xi, xi+m]. Therefore, Powell (see Demetriou [4]) takes the view that some smoothing of the data may be helpful if their divided differences of order m have more sign changes than are expected in the mth derivative of f. He recommends to calculate values yi, i = 1, 2, ..., n from the measurements by minimizing the objective function

$$
F(y_1, y_2, \ldots, y_n) = \sum_{i=1}^{n} (\phi_i - y_i)^2, \tag{1}
$$

subject to the constraints that the sequence of the divided differences

$$
y[x_i, x_{i+1}, \ldots, x_{i+m}] = \frac{y_i}{(x_i - x_{i+1})(x_i - x_{i+2}) \cdots (x_i - x_{i+m})} + \frac{y_{i+1}}{(x_{i+1} - x_i)(x_{i+1} - x_{i+2}) \cdots (x_{i+1} - x_{i+m})} + \cdots + \frac{y_{i+m}}{(x_{i+m} - x_i)(x_{i+m} - x_{i+1}) \cdots (x_{i+m} - x_{i+m-1})}, \quad i = 1, 2, \ldots, n - m \tag{2}
$$

changes sign at most σ times, for some prescribed positive integer σ that is smaller than n. We use the vector notation y and φ for the components yi, i = 1, 2, ..., n and φi, i = 1, 2, ..., n, respectively, and we call y 'optimal'. The main difficulty in this calculation is that the optimal positions of the sign changes are also variables of the optimization problem, but so many combinations of sign changes can occur that it is prohibitively expensive to test each one separately even for small values of n and σ. In addition, a general optimization algorithm can stop at a local minimum that need not be a global one.

Two properties that make the suggested smoothing approach attractive to use are as follows [2]. First, the technique avoids the assumption that f(x) has a form that depends on a few parameters, which occurs in many other smoothing techniques, as, for example, we see in de Boor [3], Cheney and Light [1], Dierckx [7] and Wahba [11]. Instead, it adopts the assumption that some useful smoothing should be possible if the measurements fail to possess some property that has been lost due to errors in the data, and it gives an approximation to the original data by an optimization calculation that is constrained by the missing property. Second, the smoothing technique provides a projection operation, because if the data satisfy the sign condition on the divided differences, then the data remain unaltered. An advantage of this approach to data smoothing is that by identifying appropriate values for m and σ, we are making the least change to the data that gives properties that occur in a wide range of underlying functions. Therefore the smoothing calculation may have many applications.

Since many results depend on this smoothing concept, it is well to look at the concept from several sides. For example, the method for general values of m and σ gives a fit that consists of σ + 1 sections of monotonic divided differences of order m − 1, a property that is highly suitable for data modeling. For instance, if we let m = 2 in (2), σ = 1 and the first nonzero difference be positive, then the corresponding sequence of second differences changes sign once. These restrictions imply increasing rates of change, due to the convex section of the fit, and decreasing rates of change, due to the concave section of the fit, on the relevant intervals of the fit that depend on the position of the sign change of the second differences. Further, the 'concavity' of the fit may alternatively be expressed by the fact that the first differences of the fit decrease, which gives the property of 'diminishing rates of change'. Convexity can be described in a similar way.

Demetriou and Powell [5, 6] studied the particular problems with m = 1, which is the case of piecewise monotonic components of the solution, and m = 2, which is the case of piecewise convex/concave components of the solution (that is, the piecewise linear interpolant to the components of y consists of convex and concave sections alternately), and stated some highly useful properties. Specifically, if m = 1 and if y is an optimal n-vector whose first divided differences on [xp, xq], 1 ≤ p ≤ q ≤ n, are non-negative, then the components yi, i = p, p+1, ..., q have the values that minimize the sum of squares $\sum_{i=p}^{q} (\phi_i - y_i)^2$ subject to the monotonically increasing components yp ≤ yp+1 ≤ ··· ≤ yq, which is a strictly convex quadratic programming problem (a standard solver for this subproblem is sketched below). Similar results hold when these first differences are non-positive, which gives monotonically decreasing components. Also, if m = 2 and if y is an optimal n-vector whose second divided differences on [xp, xq] are non-negative, then the components yi, i = p, p+1, ..., q have the values that minimize $\sum_{i=p}^{q} (\phi_i - y_i)^2$ subject to the constraints y[xi−1, xi, xi+1] ≥ 0, i = p+1, p+2, ..., q−1, except that there are no constraints if q ≤ p+1, which is a strictly convex quadratic programming problem. Similar results hold when these second divided differences are non-positive. Therefore, if one knows the monotonic sections, or the convex and concave sections, of an optimal fit y, then the components of y are calculated by solving separate quadratic programming problems for each section. Conversely, in view of these properties, the required fit can be generated by employing dynamic programming. Indeed, some algorithms are proposed by Demetriou and Powell that generate optimal fits in at most $O(n^2 + \sigma n \log_2 n)$ computer operations for the piecewise monotonic case and in about $O(\sigma n^3)$ computer operations for the piecewise convex/concave case. The success of the dynamic programming methods depends on the separation properties of the optimal piecewise monotonic fit and the optimal piecewise convex/concave fit.

Unfortunately, in Section 2 we provide an example which shows that if y is an optimal fit with one sign change in its third divided differences, then the components of y that are associated with the non-negative third divided differences and the components of y that are associated with the non-positive third divided differences need not be derived by separate calculations on the corresponding data. It is an open question whether there is an efficient algorithm for general m and σ to compute the global minimum of (1) subject to the condition that the sequence of the mth divided differences of the fit changes sign at most σ times.
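For the m = 1 case just described, each monotonic section asks for the least-squares fit subject to yp ≤ yp+1 ≤ ··· ≤ yq, the classical L2 isotonic regression problem. The sketch below is ours and is offered only as an illustration of this building block: it solves the subproblem with the standard pool-adjacent-violators method, which is one well-known option and not necessarily the routine used inside the dynamic programming algorithms of [5].

```python
def pava_increasing(phi):
    """Least-squares fit y with y1 <= y2 <= ... <= yn to the data phi,
    computed by the pool-adjacent-violators algorithm (L2 isotonic regression)."""
    blocks = []  # each block stores [sum of data values, number of points]
    for v in phi:
        blocks.append([v, 1])
        # Merge while the last two block means violate monotonicity.
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    y = []
    for total, count in blocks:
        y.extend([total / count] * count)
    return y

print(pava_increasing([1.0, 3.0, 2.0, 4.0, 3.5, 5.0]))
# -> [1.0, 2.5, 2.5, 3.75, 3.75, 5.0]
```

Each merge replaces two adjacent blocks whose means violate monotonicity by their pooled mean; the block means that survive are exactly the optimal fitted values of the strictly convex quadratic program above.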

2 The Example

In order to present the example we need the following definitions. We call an n-vector y 'feasible' if the sequence of third divided differences y[xi, xi+1, xi+2, xi+3], i = 1, 2, ..., n − 3 changes sign once, where we assume without loss of generality that the first nonzero difference is positive. Then the components yi, i = 1, 2, ..., n satisfy the inequalities

$$
\left.\begin{array}{l}
y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] \geq 0, \quad i = 1, 2, \ldots, j - 3 \\
y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] \leq 0, \quad i = j - 2, j - 1, \ldots, n - 3
\end{array}\right\} \tag{3}
$$


for some integer j ∈ [3, n], where we omit any line in (3) if the right-hand limit on i is less than the left-hand limit. Let n = 7, let the abscissae be xi = i, i = 1, 2, ..., 7 and let the data be φ1 = φ7 = 3, φ2 = φ6 = 2, φ3 = φ5 = 1 and φ4 < −6. Throughout the section, we shall refer to the formulae (1), (2) and (3) with the understanding that n = 7 and m = 3. After a straightforward calculation that is simplified by taking into account the relation $\phi[x_i, x_{i+1}, x_{i+2}, x_{i+3}] = \frac{1}{6}(\phi_{i+3} - 3\phi_{i+2} + 3\phi_{i+1} - \phi_i)$, we see that the data give the inequalities

$$
\left.\begin{aligned}
\phi[x_1, x_2, x_3, x_4] &= \frac{\phi_4}{6} < 0, & \phi[x_2, x_3, x_4, x_5] &= \frac{-3\phi_4 + 2}{6} > 0\\
\phi[x_3, x_4, x_5, x_6] &= \frac{3\phi_4 - 2}{6} < 0, & \phi[x_4, x_5, x_6, x_7] &= -\frac{\phi_4}{6} > 0
\end{aligned}\right\} \tag{4}
$$

These inequalities confirm that φi, i = 1, 2, ..., 7 do not satisfy the constraints (3). What is particularly relevant in these data is that there is exactly one optimal fit, say it is y*, that is obtained by minimizing (1) subject to the equations

$$
y_4 - 3y_3 + 3y_2 - y_1 = 0 \quad \text{and} \quad y_7 - 3y_6 + 3y_5 - y_4 = 0, \tag{5}
$$

or, due to the symmetry of the data, by minimizing the function

$$
2\sum_{i=1}^{3} (\phi_i - y_i)^2 + (\phi_4 - y_4)^2 \tag{6}
$$

subject to y4 − 3y3 + 3y2 − y1 = 0. Solving the last equation for y4 and eliminating y4 from (6), an unconstrained minimization calculation gives the components (see Figure 1)

$$
\begin{aligned}
y_1^* = y_7^* &= \tfrac{1}{21}(20\phi_1 + 3\phi_2 - 3\phi_3 + \phi_4), & y_2^* = y_6^* &= \tfrac{1}{21}(3\phi_1 + 12\phi_2 + 9\phi_3 - 3\phi_4),\\
y_3^* = y_5^* &= \tfrac{1}{21}(-3\phi_1 + 9\phi_2 + 12\phi_3 + 3\phi_4), & y_4^* &= \tfrac{1}{21}(2\phi_1 - 6\phi_2 + 6\phi_3 + 19\phi_4).
\end{aligned}
$$

We substitute these values in (2) and we obtain the relations

$$
\begin{aligned}
y^*[x_1, x_2, x_3, x_4] &= 0, & y^*[x_2, x_3, x_4, x_5] &= \frac{1 - \phi_4}{3} > 0,\\
y^*[x_3, x_4, x_5, x_6] &= -\frac{1 - \phi_4}{3} < 0, & y^*[x_4, x_5, x_6, x_7] &= 0,
\end{aligned}
$$

Fig. 1 The optimal fit (circles) to the data (crosses). The dotted line consists of two parabolas, $y_1(x) = \frac{5}{3}x^2 - 8x - \frac{19}{3}$, $x \in [1, 4]$, and $y_2(x) = \frac{5}{3}x^2 + 8x - \frac{19}{3}$, $x \in [4, 7]$, that interpolate the optimal values

which confirm that the constraints (3) are satisfied with j = 5. Also, we find that (1) at y* has the value

$$
\begin{aligned}
F(y^*) &= \frac{2}{441}\left[(\phi_1 - 3\phi_2 + 3\phi_3 - \phi_4)^2 + (-3\phi_1 + 9\phi_2 - 9\phi_3 + 3\phi_4)^2 + (3\phi_1 - 9\phi_2 + 9\phi_3 - 3\phi_4)^2\right]\\
&\quad + \frac{1}{441}(-2\phi_1 + 6\phi_2 - 6\phi_3 + 2\phi_4)^2 = \frac{2}{21}\phi_4^2.
\end{aligned}
$$
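These closed-form claims are easy to check numerically. The sketch below is ours; the choice φ4 = −7 is an arbitrary admissible value, since any φ4 < −6 will do. It evaluates the data differences (4), the components of y*, the sign pattern of the third divided differences of y*, and the value F(y*) = 2φ4²/21, all in exact rational arithmetic.

```python
from fractions import Fraction as Fr

phi4 = Fr(-7)                          # illustrative choice; any phi4 < -6 works
phi = [Fr(3), Fr(2), Fr(1), phi4, Fr(1), Fr(2), Fr(3)]

def third_diffs(v):
    # y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] = (v_{i+3} - 3v_{i+2} + 3v_{i+1} - v_i)/6
    # for the unit-spaced abscissae of the example.
    return [(v[i+3] - 3*v[i+2] + 3*v[i+1] - v[i]) / 6 for i in range(len(v) - 3)]

print(third_diffs(phi))                # -7/6, 23/6, -23/6, 7/6: the signs in (4)

p1, p2, p3 = phi[0], phi[1], phi[2]
top = [20*p1 + 3*p2 - 3*p3 + phi4,     # 21*y1* = 21*y7*
       3*p1 + 12*p2 + 9*p3 - 3*phi4,   # 21*y2* = 21*y6*
       -3*p1 + 9*p2 + 12*p3 + 3*phi4,  # 21*y3* = 21*y5*
       2*p1 - 6*p2 + 6*p3 + 19*phi4]   # 21*y4*
y = [t / 21 for t in top]
ystar = [y[0], y[1], y[2], y[3], y[2], y[1], y[0]]

print(third_diffs(ystar))              # 0, (1 - phi4)/3, -(1 - phi4)/3, 0
F = sum((a - b)**2 for a, b in zip(phi, ystar))
print(F == Fr(2, 21) * phi4**2)        # True
```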

The optimality of y* is a crucial part of our analysis, which depends on the subtle differences that occur in the various approximations that we are going to consider. We begin by proving that the components yi*, i = 1, 2, ..., 7 solve the quadratic programming problem that minimizes (1) subject to the constraints (3) when j = 5, namely

$$
\left.\begin{array}{l}
y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] \geq 0, \quad i = 1, 2 \\
y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] \leq 0, \quad i = 3, 4
\end{array}\right\} \tag{7}
$$


Indeed, we consider the identity

$$
\frac{2}{21}\begin{pmatrix} -\phi_1 + 3\phi_2 - 3\phi_3 + \phi_4\\ 3\phi_1 - 9\phi_2 + 9\phi_3 - 3\phi_4\\ -3\phi_1 + 9\phi_2 - 9\phi_3 + 3\phi_4\\ 2\phi_1 - 6\phi_2 + 6\phi_3 - 2\phi_4\\ -3\phi_1 + 9\phi_2 - 9\phi_3 + 3\phi_4\\ 3\phi_1 - 9\phi_2 + 9\phi_3 - 3\phi_4\\ -\phi_1 + 3\phi_2 - 3\phi_3 + \phi_4 \end{pmatrix} = \frac{2}{21}(\phi_1 - 3\phi_2 + 3\phi_3 - \phi_4)\begin{pmatrix} -1\\ 3\\ -3\\ 1\\ 0\\ 0\\ 0 \end{pmatrix} + \frac{2}{21}(\phi_1 - 3\phi_2 + 3\phi_3 - \phi_4)\begin{pmatrix} 0\\ 0\\ 0\\ 1\\ -3\\ 3\\ -1 \end{pmatrix}, \tag{8}
$$

because the left-hand side is the gradient of (1) when yi = yi*, i = 1, 2, ..., 7 and, in view of (5) and (7), the vectors on the right-hand side are the gradients, scaled by 6, of the linear constraint functions y[x1, x2, x3, x4] and −y[x4, x5, x6, x7]. The multipliers on the right-hand side of equation (8) are equal to −2φ4/21 and, since they are both positive, the Karush–Kuhn–Tucker conditions (see, for example, Fletcher [8]) for the solution of the problem that minimizes (1) subject to the constraints (7) are satisfied.

In order to establish the optimality of y*, it is sufficient to prove that F(y*) provides the least value of (1) for any feasible fit. Because several possibilities of feasibility can occur, we make our analysis easier if we consider first whether the solution of any of the following quadratic programming problems is a feasible vector that gives a lower value than F(y*): for j = 0, 1, ..., n,

$$
\text{minimize} \quad F(y_1, y_2, \ldots, y_n) = \sum_{i=1}^{n} (\phi_i - y_i)^2 \tag{9}
$$

$$
\text{subject to} \quad \begin{aligned} y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] &\geq 0, \quad i = 1, 2, \ldots, j - 3,\\ y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] &\leq 0, \quad i = j + 1, j + 2, \ldots, n - 3, \end{aligned} \tag{10}
$$

where we omit any line in (10) if the right-hand limit on i is less than the left-hand limit. We see immediately that the inequalities (10) are derived from (3) after excluding the (j − 2)th, (j − 1)th and jth constraints. Thus, the first j and the last n − j components of the solution of the minimization problem (9)–(10) are actually derived by two separate quadratic programming problems. We will find that the corresponding fits for all values of j fail to be optimal.

In order to establish this result we state some definitions. Specifically, we denote by ψ(j), j = 1, 2, ..., n − 1 the n-vector whose first j components occur in the definition of αj and whose last n − j components occur in the definition of βj+1, where

$$
\left.\begin{aligned}
&\alpha_0 = 0; \quad \alpha_j = \min \sum_{i=1}^{j} (\phi_i - y_i)^2 = 0, \quad j = 1, 2, 3,\\
&\alpha_j = \min_{y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] \geq 0,\; i = 1, 2, \ldots, j-3} \; \sum_{i=1}^{j} (\phi_i - y_i)^2, \quad j = 4, 5, \ldots, n,\\
&\beta_j = \min_{y[x_i, x_{i+1}, x_{i+2}, x_{i+3}] \leq 0,\; i = j, j+1, \ldots, n-3} \; \sum_{i=j}^{n} (\phi_i - y_i)^2, \quad j = 1, 2, \ldots, n - 3,\\
&\beta_j = \min \sum_{i=j}^{n} (\phi_i - y_i)^2 = 0, \quad j = n - 2, n - 1, n; \quad \beta_{n+1} = 0
\end{aligned}\right\} \tag{11}
$$

and where we denote by ψ(0) and ψ(7) the vectors that occur in the definitions of β1 and α7, respectively. Now we see that the value of the function (9) at the solution of the quadratic programming problem (9)–(10) is αj + βj+1, for j ∈ [0, 7]. Our purpose is to show that none of the vectors {ψ(j): j = 0, 1, ..., 7} is an optimal fit to φ. We continue by considering separately each of these vectors.

It is convenient to start from ψ(4), which is the vector whose first four components occur in α4 and whose last three components are equal to the corresponding data values, for they occur in β5. Since α4 is the value of the objective function obtained by the minimization of $\sum_{i=1}^{4} (\phi_i - y_i)^2$ subject to the constraint y[x1, x2, x3, x4] ≥ 0 and since φ[x1, x2, x3, x4] < 0, we conclude that ψ(4) has to satisfy the equation y4 − 3y3 + 3y2 − y1 = 0. Hence, by a calculation similar to the one that gave y*, we obtain the components

$$
\begin{aligned}
\psi_1^{(4)} &= \tfrac{1}{20}(19\phi_1 + 3\phi_2 - 3\phi_3 + \phi_4), & \psi_2^{(4)} &= \tfrac{1}{20}(3\phi_1 + 11\phi_2 + 9\phi_3 - 3\phi_4),\\
\psi_3^{(4)} &= \tfrac{1}{20}(-3\phi_1 + 9\phi_2 + 11\phi_3 + 3\phi_4), & \psi_4^{(4)} &= \tfrac{1}{20}(\phi_1 - 3\phi_2 + 3\phi_3 + 19\phi_4),\\
\psi_5^{(4)} &= \phi_5, \quad \psi_6^{(4)} = \phi_6, \quad \psi_7^{(4)} = \phi_7.
\end{aligned}
$$

The components ψi(4), i = 1, 2, 3, 4 solve the quadratic programming problem that returns α4, which can be deduced from the identity

$$
\frac{2}{20}\begin{pmatrix} -\phi_1 + 3\phi_2 - 3\phi_3 + \phi_4\\ 3\phi_1 - 9\phi_2 + 9\phi_3 - 3\phi_4\\ -3\phi_1 + 9\phi_2 - 9\phi_3 + 3\phi_4\\ \phi_1 - 3\phi_2 + 3\phi_3 - \phi_4 \end{pmatrix} = \frac{2}{20}(\phi_1 - 3\phi_2 + 3\phi_3 - \phi_4)\begin{pmatrix} -1\\ 3\\ -3\\ 1 \end{pmatrix}, \tag{12}
$$


by analogy with Equation (8). However, ψ(4) is not feasible, because it gives the differences

$$
\begin{aligned}
\psi^{(4)}[x_1, x_2, x_3, x_4] &= 0, & \psi^{(4)}[x_2, x_3, x_4, x_5] &= \frac{8 - 9\phi_4}{24} > 0,\\
\psi^{(4)}[x_3, x_4, x_5, x_6] &= \frac{-20 + 27\phi_4}{60} < 0, & \psi^{(4)}[x_4, x_5, x_6, x_7] &= -\frac{19\phi_4}{120} > 0.
\end{aligned} \tag{13}
$$
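The vector ψ(4) and the differences (13) can be reproduced by elementary means: the constraint y4 − 3y3 + 3y2 − y1 = 0 reads aᵀ(y1, y2, y3, y4) = 0 with a = (−1, 3, −3, 1), and the nearest point of this hyperplane to (φ1, φ2, φ3, φ4) is the orthogonal projection φ − (aᵀφ/aᵀa)a. A short check, which is ours and again uses the illustrative value φ4 = −7:

```python
from fractions import Fraction as Fr

phi4 = Fr(-7)                          # illustrative, with phi4 < -6
phi = [Fr(3), Fr(2), Fr(1), phi4, Fr(1), Fr(2), Fr(3)]

# Constraint y4 - 3y3 + 3y2 - y1 = 0 as a^T (y1,...,y4) = 0, a = (-1, 3, -3, 1);
# the constrained least-squares point is the orthogonal projection of the data.
a = [Fr(-1), Fr(3), Fr(-3), Fr(1)]
lam = sum(ai * pi for ai, pi in zip(a, phi)) / sum(ai * ai for ai in a)
psi4 = [pi - lam * ai for ai, pi in zip(a, phi)] + phi[4:]

def third_diffs(v):
    return [(v[i+3] - 3*v[i+2] + 3*v[i+1] - v[i]) / 6 for i in range(len(v) - 3)]

print(third_diffs(psi4))   # 0, (8 - 9*phi4)/24, (-20 + 27*phi4)/60, -19*phi4/120
alpha4 = sum((p - q)**2 for p, q in zip(phi[:4], psi4[:4]))
print(alpha4 == phi4**2 / 20)          # True: F(psi^(4)) = phi4^2 / 20
```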

Next, we consider ψ(3), which is the vector whose first three components are equal to the corresponding data values, for they occur in α3, and whose last four components occur in β4, so they minimize $\sum_{i=4}^{7} (\phi_i - y_i)^2$ subject to the constraint y[x4, x5, x6, x7] ≤ 0. By symmetry to ψ(4), we obtain the differences ψ(3)[x1, x2, x3, x4] < 0, ψ(3)[x2, x3, x4, x5] > 0, ψ(3)[x3, x4, x5, x6] < 0, ψ(3)[x4, x5, x6, x7] = 0 and therefore ψ(3) is not feasible. At both ψ(4) and ψ(3), function (9) attains the same value, namely F(ψ(4)) = F(ψ(3)) = φ4²/20.

Further, a candidate for ψ(5) is either ψ(4), because, in view of (13), ψ(4) satisfies the constraints required for obtaining α5, or the n-vector that is obtained by minimizing (9) subject to the equation y2 − 3y3 + 3y4 − y5 = 0, say it is z. Specifically, the components of z are

$$
\begin{aligned}
z_1 &= \phi_1, & z_2 &= \tfrac{1}{20}(19\phi_2 + 3\phi_3 - 3\phi_4 + \phi_5),\\
z_3 &= \tfrac{1}{20}(3\phi_2 + 11\phi_3 + 9\phi_4 - 3\phi_5), & z_4 &= \tfrac{1}{20}(-3\phi_2 + 9\phi_3 + 11\phi_4 + 3\phi_5),\\
z_5 &= \tfrac{1}{20}(\phi_2 - 3\phi_3 + 3\phi_4 + 19\phi_5), & z_6 &= \phi_6, \quad z_7 = \phi_7,
\end{aligned}
$$

and satisfy

$$
\begin{aligned}
z[x_1, x_2, x_3, x_4] &= \frac{6 - 5\phi_4}{24} > 0, & z[x_2, x_3, x_4, x_5] &= 0,\\
z[x_3, x_4, x_5, x_6] &= \frac{-2 + 3\phi_4}{24} < 0, & z[x_4, x_5, x_6, x_7] &= \frac{-6 - \phi_4}{60} > 0,
\end{aligned}
$$

where the rightmost inequality holds because of our assumption that φ4 < −6. Thus, z satisfies the constraints required by α5. Since (9) at z attains the value $F(z) = \frac{9}{20}(\phi_4 - \frac{2}{3})^2$, which is strictly larger than F(ψ(4)), it follows that ψ(5) = ψ(4). Also, by symmetry to ψ(4) and ψ(3), we obtain ψ(2) = ψ(3). Moreover, besides that z is not feasible, we can immediately verify the inequality F(z) > F(y*), which is going to be used in the following analysis.
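The vector z is again an orthogonal projection, this time onto the hyperplane y2 − 3y3 + 3y4 − y5 = 0 acting on the middle components, so the same device as before verifies the differences and values just quoted. A brief check, ours, with φ4 = −7 as before:

```python
from fractions import Fraction as Fr

phi4 = Fr(-7)                          # illustrative, with phi4 < -6
phi = [Fr(3), Fr(2), Fr(1), phi4, Fr(1), Fr(2), Fr(3)]

# Constraint y2 - 3y3 + 3y4 - y5 = 0 acts on (y2, y3, y4, y5), a = (1, -3, 3, -1).
a = [Fr(1), Fr(-3), Fr(3), Fr(-1)]
lam = sum(ai * pi for ai, pi in zip(a, phi[1:5])) / 20
z = [phi[0]] + [pi - lam * ai for ai, pi in zip(a, phi[1:5])] + phi[5:]

def third_diffs(v):
    return [(v[i+3] - 3*v[i+2] + 3*v[i+1] - v[i]) / 6 for i in range(len(v) - 3)]

print(third_diffs(z))      # (6 - 5*phi4)/24, 0, (-2 + 3*phi4)/24, (-6 - phi4)/60
Fz = sum((p - q)**2 for p, q in zip(phi, z))
print(Fz == Fr(9, 20) * (phi4 - Fr(2, 3))**2)   # True
print(Fz > Fr(2, 21) * phi4**2)                 # True: F(z) > F(y*)
```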


It is useful to the subsequent discussion to consider the n-vector w that minimizes (9) subject to y3 − 3y4 + 3y5 − y6 = 0, which by symmetry to z satisfies w[x1, x2, x3, x4] < 0, w[x2, x3, x4, x5] > 0, w[x3, x4, x5, x6] = 0, w[x4, x5, x6, x7] < 0, and gives F(w) = F(z).

Now, consideration of ψ(5), z and w rules out the possibility that ψ(6) is better than y*, as we show next. We start by noticing that ψ(6) has to satisfy the constraints

$$
\psi^{(6)}[x_1, x_2, x_3, x_4] \geq 0, \quad \psi^{(6)}[x_2, x_3, x_4, x_5] \geq 0, \quad \psi^{(6)}[x_3, x_4, x_5, x_6] \geq 0. \tag{14}
$$

If ψ(6)[x1, x2, x3, x4] = 0 or ψ(6)[x2, x3, x4, x5] = 0 or ψ(6)[x3, x4, x5, x6] = 0 was achieved, then we know that ψ(6) would be ψ(5) or z or w, respectively, but none of these vectors satisfies the conditions required by ψ(6). Therefore we assume that ψ(6) satisfies as equations at least two of the three inequalities in (14), in which case candidates for ψ(6) are each of the vectors that minimize (9) subject to: either y1 − 3y2 + 3y3 − y4 = 0 and y2 − 3y3 + 3y4 − y5 = 0; or y2 − 3y3 + 3y4 − y5 = 0 and y3 − 3y4 + 3y5 − y6 = 0; or y1 − 3y2 + 3y3 − y4 = 0 and y3 − 3y4 + 3y5 − y6 = 0. It follows that (9) at the solution of each of these calculations attains a value that has to be strictly larger than F(z) = F(w). Hence we obtain F(ψ(6)) > F(z) and, by symmetry, F(ψ(1)) > F(w). From the last two inequalities and since F(ψ(7)) ≥ F(ψ(6)), F(ψ(0)) ≥ F(ψ(1)) and F(z) > F(y*), the vectors ψ(0), ψ(1), ψ(6) and ψ(7) cannot be better than y*. Moreover, we have already seen that none of the vectors ψ(j), j = 2, 3, 4, 5 is feasible. Therefore, we have proved that none of the vectors {ψ(j): j = 0, 1, ..., 7} is an optimal fit to φ.

It remains to prove that y* is the optimal fit to φ. Since y* is obtained when j = 5 occurs in (3), it suffices to rule out the possibility that any of the vectors that minimize (1) subject to the constraints (3) for j = 3, 4, 6, 7 is optimal. The case with j = 3 in (3) requires the minimization of (1) subject to the constraints y[xi, xi+1, xi+2, xi+3] ≤ 0, i = 1, 2, 3, 4 and, analogously, the case j = 7 requires the minimization of (1) subject to the constraints y[xi, xi+1, xi+2, xi+3] ≥ 0, i = 1, 2, 3, 4. The solutions to the associated quadratic programming problems are the vectors ψ(0) and ψ(7), respectively, which, as we already saw, are not optimal. The case with j = 4 in (3) requires the minimization of (1) subject to the constraints y[x1, x2, x3, x4] ≥ 0 and y[xi, xi+1, xi+2, xi+3] ≤ 0, i = 2, 3, 4. If $y \in \mathbb{R}^7$ is the solution to this problem, then the sum $\sum_{i=1}^{7} (\phi_i - y_i)^2$ is bounded below by the optimal value of the objective function of the problem (9)–(10) when j = 1, namely F(ψ(1)) = α1 + β2. Since F(ψ(1)) > F(y*), it follows that the solution to the problem when j = 4 occurs in (3) cannot be optimal. By symmetry, the solution to the problem when j = 6 occurs in (3), which requires the minimization of (1) subject to the constraints y[xi, xi+1, xi+2, xi+3] ≥ 0, i = 1, 2, 3, and y[x4, x5, x6, x7] ≤ 0, is bounded below by the optimal value of the objective function of the problem (9)–(10) when j = 6, namely F(ψ(6)) = α6 + β7. Since


F(ψ(6)) > F(y*), the solution to the problem when j = 6 occurs in (3) cannot be optimal. The proof that y* is the unique optimal fit with one sign change in the sequence of its third divided differences to φ is now complete.

Further, in view of the first line of (7), the sum of squares of residuals for the optimal components yi*, i = 1, 2, 3, 4, 5 is bounded below by the inequality

$$
\sum_{i=1}^{5} (\phi_i - y_i^*)^2 = \frac{32}{441}\phi_4^2 > \alpha_5 = \sum_{i=1}^{5} (\phi_i - \psi_i^{(5)})^2 = \frac{1}{20}\phi_4^2.
$$

Thus, {yi*: i = 1, 2, 3, 4, 5} differ from the components {ψi(5): i = 1, 2, 3, 4, 5} that are optimal in α5. By symmetry, the optimal components {yi*: i = 3, 4, 5, 6, 7} differ from the components {ψi(2): i = 3, 4, 5, 6, 7} that are optimal in β3. Indeed, we see that the fit {yi*: i = 1, 2, ..., 7} is not obtained by separate calculations. Therefore, the example does establish that although y* is optimal, its components associated with the non-negative third divided differences and its components associated with the non-positive third divided differences are not derived by separate calculations on the corresponding data. This result shows that the case when m = 3 and σ = 1 no longer has the property that is fundamental to the cases m = 1 and m = 2 and, hence, it rules out the possibility of solving the problem of Section 1 by a similar dynamic programming calculation.

Since the divided differences provide excellent diagnostic tools for data errors and since higher smoothness in data approximations can be achieved by allowing sign changes in the divided differences of order higher than two, it is desirable to know whether there exists an efficient algorithm that can calculate the global minimum of (1) subject to the condition that the sequence of the divided differences of order m of the fit changes sign at most σ times. On the other hand, the properties of the important special cases m = 1 and m = 2 suggest that research on problems that arise from particular values of m and σ in (2) may be valuable.
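Finally, the optimality of y* can be confirmed by brute force for this small instance: for each j ∈ [3, n], solve the quadratic program that minimizes (1) subject to (3) and compare the optimal objective values; the least value appears at j = 5 and equals F(y*) = 2φ4²/21. Below is a numerical sketch of this check (ours) using the SLSQP solver from scipy, with the illustrative choice φ4 = −7; a dedicated quadratic programming code would serve equally well.

```python
import numpy as np
from scipy.optimize import minimize

phi4 = -7.0                                  # illustrative value with phi4 < -6
phi = np.array([3.0, 2.0, 1.0, phi4, 1.0, 2.0, 3.0])
n = len(phi)

def d3(y, i):
    # Third divided difference y[x_i, ..., x_{i+3}] for unit spacing (0-based i).
    return (y[i + 3] - 3.0 * y[i + 2] + 3.0 * y[i + 1] - y[i]) / 6.0

def qp(signs):
    """Minimize (1) subject to s_k * d3(y, k) >= 0 for each nonzero s_k."""
    cons = [{'type': 'ineq', 'fun': lambda y, k=k, s=s: s * d3(y, k)}
            for k, s in enumerate(signs) if s != 0]
    res = minimize(lambda y: float(np.sum((phi - y) ** 2)), phi.copy(),
                   method='SLSQP', constraints=cons)
    return res.fun

# Constraints (3): >= 0 for i = 1, ..., j-3 and <= 0 for i = j-2, ..., n-3.
for j in range(3, n + 1):
    signs = [1 if k <= j - 3 else -1 for k in range(1, n - 2)]
    print(j, round(qp(signs), 6))

print('F(y*) =', 2.0 / 21.0 * phi4 ** 2)     # the minimum above, attained at j = 5
```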

References

1. Cheney, W., Light, W.: A Course in Approximation Theory. Brooks/Cole Publishing Company, An International Thompson Publishing Company, New York (2000)
2. Cullinan, M.P., Powell, M.J.D.: Data smoothing by divided differences. In: Watson, G.A. (ed.) Numerical Analysis Proceedings, Dundee 1981. Lecture Notes in Mathematics, vol. 912, pp. 26–37. Springer, Berlin (1982)
3. de Boor, C.: A Practical Guide to Splines, Revised Edition. Applied Mathematical Sciences, vol. 27. Springer, New York (2001)
4. Demetriou, I.C.: Data smoothing by piecewise monotonic divided differences. Ph.D. Dissertation, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge (1985)
5. Demetriou, I.C., Powell, M.J.D.: Least squares smoothing of univariate data to achieve piecewise monotonicity. IMA J. Numer. Anal. 11, 411–432 (1991)


6. Demetriou, I.C., Powell, M.J.D.: Least squares fitting to univariate data subject to restrictions on the signs of the second divided differences. In: Buhmann, M.D., Iserles, A. (eds.) Approximation Theory and Optimization. Tributes to M.J.D. Powell, pp. 109–132. Cambridge University Press, Cambridge (1997)
7. Dierckx, P.: Curve and Surface Fitting with Splines. Clarendon Press, Oxford (1995)
8. Fletcher, R.: Practical Methods of Optimization. Wiley, Chichester (2003)
9. Hildebrand, F.B.: Introduction to Numerical Analysis, 2nd edn. Dover Publications, New York (1974)
10. Lanczos, C.: Applied Analysis. Pitman and Sons, London (1957)
11. Wahba, G.: Spline Models for Observational Data. SIAM, Philadelphia (1990)